Striking the balance – EU AI Act and its impact on cyber security

WithSecure’s response to the EU AI Act and its impact on cyber security

Share

Introduction

The European Union’s proposed AI Act has ignited a debate on the potential impact of AI regulation on innovation and, more specifically, its implications for the cyber security industry. Sceptics argue that stringent regulation might stifle innovation, while proponents assert that it is crucial to mitigate the risks associated with AI technologies. Balancing innovation and regulation is a delicate task, especially in the realm of cyber security where the implications of AI are profound.

Would regulation kill AI?

The concern that AI regulation might stifle innovation and allow non-Western countries to gain dominance in AI is a valid one. The AI industry is currently in an industrial arms race, with large technological players vying for supremacy. However, innovation should not come at the cost of security, ethics, and the protection of fundamental human rights.

While it’s true that overly restrictive regulations could slow down the industry, it’s equally essential to recognize that AI technologies have far-reaching consequences. Without proper regulation, we risk creating a Wild West scenario in which AI is developed and deployed without adequate safeguards. This could lead to privacy breaches, job losses, and potential manipulation of public opinion.

Why do we need regulation?

AI regulation is an essential step, but it carries its own risks. If regulation is too stringent, it can stifle innovation and create a false sense of control. Limiting market access may also lead to a skewed view of control, as companies may still evade regulations, leading to a revolving door of industry players, lobbyists, and politicians influencing AI regulation.

The consequences of getting AI wrong are substantial. Unlike previous technologies, AI has the potential to impact society and industry in ways that cannot be easily reversed. AI can significantly affect the economy, societal structures, and personal lives. Thus, the stakes are high, and we must approach AI with a greater degree of caution and responsibility.

The risks that warrant the need for regulation in AI include:

Lack of Transparency: Many AI systems are trained on biased source material, leading to potential bias in their outputs. Ensuring transparency in how AI systems handle these biases is crucial.

Invasion of Privacy: AI can be used to extract data and make inferences that may violate privacy laws, putting individuals at risk.

Job Displacement: AI’s mass automation capabilities could lead to job losses in various industries. Regulations should address the impact on employment.

Market Inequality: Not all companies have the resources to compete in the AI space, leading to market monopolies.

Societal Division: If only certain countries can afford AI technologies, it may exacerbate global inequalities.

Manipulation of Opinion: AI can propagate harmful biases and “truths” through media, undermining public discourse.

Violation of Human Rights: AI systems can perpetuate profiling, racism, and sexism, violating fundamental human rights.

Security Risks: AI can be exploited for cyberattacks and pose threats to cyber security.

Identity Theft: Users should have the right to know when they are interacting with AI systems to prevent identity theft and impersonation.

Patent Trolling: Regulatory measures should prevent the rise of patent troll companies exploiting AI creators.

It’s noteworthy that leading AI corporations are testifying to governments and advocating for regulation. However, given the complexity of AI systems and the many stakeholders involved, some scepticism about the feasibility of comprehensive regulation is warranted.

Certain industries have become so complex and resistant to regulation that the call for regulation may seem like a double bluff. Nonetheless, there is a genuine need for some level of involvement to ensure that AI is developed and used responsibly. The stakes are high, and we cannot afford to get AI wrong, as we have with other technologies like IoT and social media.

The risks that warrant the need for regulation in AI include:

  1. Lack of Transparency: Many AI systems are trained on biased source material, leading to potential bias in their outputs. Ensuring transparency in how AI systems handle these biases is crucial.
  2. Invasion of Privacy: AI can be used to extract data and make inferences that may violate privacy laws, putting individuals at risk.
  3. Job Displacement: AI’s mass automation capabilities could lead to job losses in various industries. Regulations should address the impact on employment.
  4. Market Inequality: Not all companies have the resources to compete in the AI space, leading to market monopolies.
  5. Societal Division: If only certain countries can afford AI technologies, it may exacerbate global inequalities.
  6. Manipulation of Opinion: AI can propagate harmful biases and “truths” through media, undermining public discourse.
  7. Violation of Human Rights: AI systems can perpetuate profiling, racism, and sexism, violating fundamental human rights.
  8. Security Risks: AI can be exploited for cyberattacks and pose threats to cyber security.
  9. Identity Theft: Users should have the right to know when they are interacting with AI systems to prevent identity theft and impersonation.
  10. Patent Trolling: Regulatory measures should prevent the rise of patent troll companies exploiting AI creators.

It’s noteworthy that leading AI corporations are testifying to governments and advocating for regulation. However, given the complexity of AI systems and the many stakeholders involved, some scepticism about the feasibility of comprehensive regulation is warranted.

Certain industries have become so complex and resistant to regulation that the call for regulation may seem like a double bluff. Nonetheless, there is a genuine need for some level of involvement to ensure that AI is developed and used responsibly. The stakes are high, and we cannot afford to get AI wrong, as we have with other technologies like IoT and social media.

Open-Source AI as an antidote

If the main threats posed by an oligarchy of AI players are opaqueness, bias and a monolithic railroading of the tech industry, then open source might be the antidote to these and other threats. The open-source movement has always stood for taking back power from large for-profit corporations and to make sure that open and/or freely available alternatives are possible when it comes to what one wants to do with a computer or device that runs any kind of software.

The open-source movement could play a continued essential role in democratizing AI, preventing an unhealthy concentration of power, and ensuring that the technology is developed and used responsibly. A few of the advantages of having open-source AI alternatives when it comes to the Large Language Models, the source data as well as the weights and structure:

Transparency and Trust: Proprietary AI systems are opaque and make it hard for outsiders to understand their workings, biases or vulnerabilities. Open-source projects are by their very nature open to scrutiny. This transparency can lead to greater trust among the public, developers, and researchers.

Collaborative Development: Open source can be used to teach how AI systems work and how to make them better. It encourages a community-driven approach where a global community of developers, researchers and enthusiasts can contribute to and enhance the technology. This diverse input can lead to more robust, versatile, and efficient AI systems.

Avoiding Monopolies: By keeping at least a few core AI technologies open source, the barriers to entry for startups and individual developers are significantly reduced. This promotes a more competitive landscape, preventing a few companies from holding a disproportionate amount of control and influence over the technology and its applications.

Ethical Standards: The open-source community often promotes ethical standards and best practices. By working together, the community can ensure that AI is developed and used responsibly, considering societal impacts, fairness, and human rights. E.g. certain models can never be used for weapons systems or social credit score systems. This also means that governments and regulatory bodies can better understand, assess, and regulate AI technologies. This can lead to more informed policy decisions that consider public interest and safety.

Global Inclusivity: Not only that, it ensures that every country or organization that does not have the resources to develop AI from scratch can benefit from AI advancements. This means AI technology can be made accessible to a broader range of people, irrespective of their geographical location or financial capabilities.

Security: While it might seem counterintuitive, open-source projects can be more secure. The vast number of eyes on the code means vulnerabilities can be spotted and rectified faster. The “many eyeballs” theory suggests that the more people who can see and test a set of code, the more likely any flaws will be caught and hopefully addressed quickly.

WithSecure’s recommendations for EU private and public organizations

The EU AI Act presents a critical opportunity to shape the future of AI and address its potential risks. While concerns about stifling innovation are valid, we must strike a balance to ensure the responsible development and deployment of AI.

Open-source AI solutions, transparency, and ethical standards can play a pivotal role in democratizing AI while keeping it secure and accountable. As the AI landscape continues to evolve, it is essential to prioritize cyber security and privacy without unduly hindering progress in this transformative field.

To address the cyber security challenges presented by EU AI Act, several recommendations are warranted:

  1. Cyber security Problems of AI: The AI industry must focus on addressing data privacy issues, biases, and model explainability to enhance AI cyber security.
  2. Cyber security Implications of the EU AI Act: The EU AI Act is a positive step, but it should be carefully crafted to avoid limiting the effectiveness of cyber security measures.
  3. Pros and Cons for the Cyber security Industry: The EU AI Act will create transparency and restrict excess data gathering, but it may also create a gap between defenders and malicious actors.
  4. Harnessing AI in Cyber security: The cyber security industry should leverage AI, particularly GAI and LLMs, to improve threat detection and response, and educate non-experts in safe practices.

We strongly recommend that all public and private organizations within the EU fully utilize the two-year implementation period provided for the development and deployment of AI standards for high-risk systems. This period, mandated by the EU AI Act, presents a strategic opportunity to advance beyond mere security compliance and foster a robust AI security culture within organizations.

To effectively leverage this opportunity, we suggest the following steps:

  1. Develop a Comprehensive AI Risk Model: Align with the EU AI Act by defining clear business objectives, mapping AI use cases, identifying systems that constitute high risk, and prioritizing AI assets for protection. This step should serve as a crucial link between your business strategy and the technical development of AI, ensuring a cohesive approach to AI deployment.
  2. Conduct Thorough Threat Modelling for High-Risk AI Initiatives: Adhere to the EU AI Act by meticulously documenting and reviewing system architectures. Identify the requirements for human oversight, as well as the data and model controls necessary to thwart potential attackers. Utilize this process as an opportunity to educate technical teams on security best practices, integrating these practices into the workflows of developers and data scientists.
  3. Simulate AI Cyber Threats: Confirm the efficacy of the detection and response controls of your models and infrastructure. It’s essential to test the robustness and reliability of AI systems proactively before they are targeted by real-world attackers. This proactive approach not only ensures compliance with current regulations and forthcoming AI cybersecurity standards but also prepares your organization for future threats and challenges in the AI security landscape.

By following these steps, organizations can not only comply with current regulations but also position themselves at the forefront of AI security and innovation.

In the ever-changing landscape of AI and cyber security, the EU AI Act stands as a significant milestone. While it poses challenges, it also offers a unique opportunity to create a regulatory framework that balances innovation with accountability.

By addressing the risks associated with AI and fostering responsible innovation, the cyber security industry can harness the potential of AI technologies while ensuring a secure digital future for all. Collaboration, transparency, and ethical standards should be at the core of this transformative journey, paving the way for a harmonious coexistence between innovation and regulation in the realm of AI-driven cyber security.

Related content

April 12, 2024 Webinars

Building secure LLM apps into your business

Gain practical understanding of the vulnerabilities of LLM agents and learn about essential tools and techniques to secure your LLM-based apps. Our host Janne Kauhanen is joined by Donato Capitella, Principal Security Consultant at WithSecure™.

Read more
May 17, 2024 Our thinking

Prompt injections could confuse AI-powered agents

We wanted to explore how attackers could potentially compromise large language model (LLM) powered AI applications.

Read more
June 1, 2024 Our thinking

Insights into the NIS2 Directive

NIS2 is not just an update; it's a significant expansion of scope and ambition to address the evolving cyber threat landscape.

Read more

Check out our latest research on WithSecure Labs

For techies, by techies – we share knowledge and research for public use within the security community. We offer up-to-date research, quick updates, and useful tools.

Go to WithSecure Labs

Our accreditations and certificates

Contact us!

Our team of dedicated experts can help guide you in finding the right solution for your unique issues. Complete the form and we are happy to reach out as soon as possible to discuss more.