Generative AI Security

Secure your GenAI-powered applications and solutions

Embrace GenAI while mitigating security risks

Generative Artificial Intelligence (GenAI) is rapidly transforming industries, and organizations are increasingly integrating Large Language Models (LLMs) into their services and products. Whether using off-the-shelf models, customizing pre-trained solutions, or developing proprietary AI, the transformative power of these technologies is undeniable.

While GenAI should be recognized and embraced as a game-changer for busines innovation, it’s essential to be aware of the potential cyber security risks beyond the hype.

We see the majority of cyber security risks stemming from how AI models are integrated into systems and workflows rather than from the models themselves.

Cyber risks in AI models infograph

Failing to address these risks can expose your organization and customers to various threats, including data breaches, unauthorized access, and compliance violations.

We can help you address the practical risks associated with integrating GenAI into enterprise systems and workflows. As a leading cyber security assurance testing company, we have extensive experience in helping organizations navigate the complexities of adopting new technologies such as GenAI and LLMs.

Common pitfalls in the use of GenAI

Practical risks associated with GenAI don’t exist in isolation but are mostly related to the context in which the organization is using it. When building GenAI and LLM integrations, it’s crucial to consider the potential security risks and implement robust safeguards from the outset.

These are the most common security pitfalls that we have identified associated with the use of AI for businesses.

Warning icon

Jailbreak and prompt injection attacks

Malicious actors attempt to “jailbreak” an LLM by injecting carefully crafted prompts, tricking it into executing unauthorized actions or revealing sensitive information.

Warning icon

Excessive agency and
malicious intent

GenAI systems with excessive agency get manipulated by attackers (via jailbreak and prompt injection attacks) causing the system to execute malicious actions and posing significant security risks.

Warning icon

Insecure tool/
plugin design

 

When tools, plugins, or integrations for LLMs are poorly designed or insecurely implemented, they can introduce significant vulnerabilities leading to unauthorized access and data breaches.

Warning icon

Insufficient monitoring, logging, and rate limiting

Inadequate monitoring, logging, and rate-limiting mechanisms hinder the detection of malicious activity, making it challenging to identify and respond to security incidents promptly.

Warning icon

Lack of
output validation

 

Failure to validate and sanitize the output from GenAI models can lead to the disclosure of confidential information or the introduction of client-side vulnerabilities like Cross-Site Scripting (XSS).

Ensure the security of your LLMs and GenAI solutions

Whether your organization is in the early stages of planning or developing GenAI-powered solutions, or already deploying these integrations or custom solutions, our consultants can help you identify and address potential cyber risks every step of the way.

We can support your organization in adopting and integrating AI securely by assessing the potential security flaws of the GenAI/ LLM integrations and interaction to your systems and workflows and providing recommendations on secure deployment.

Depending on your use case, the different assessment approaches may include any of the below.

Contact us to discuss the best approach for your specific case!

Governance, risk and threat modeling for AI

Our services to support you in the planning phase.

AI Governance icon

AI Governance

Defining the AI adoption objectives and acceptable use cases.

 

Adapting or creating ad-hoc risk management frameworks based on your organization’s needs and regulatory requirements.

AI Risk Modeling icon

AI Risk Modeling

Identifying and prioritizing security risks at an organizational and use case level.

 

Creating a shared risk understanding between development teams, cyber security, and business units.

AI Threat Modeling icon

AI Threat Modeling

Identifying the most relevant attack paths based on risk prioritization technical analysis.

 

Identifying control gaps and prioritizing control implementations through cost/ benefit analysis.

Implementation and integration of AI solutions

Our services to support you in the implementation phase.

LLM application pentesting icon

Pentesting LLM Applications

Identifying and addressing the cyber security weaknesses in your organization’s LLM applications and integrations.

 

Understanding the exploit vulnerabilities of the LLM applications, the specific cyber risks they pose, and the attacker goals that will most likely lead to them being targeted.

AI infrastructure pentesting icon

Pentesting AI-supporting Infrastructure

Identifying high risk attack paths leading to your AI-powered applications and offering recommendations to protect these.

 

Ensuring secure hosting and AI-management, protecting AI data and access points.

LLM applications security canvas

Here you can download our LLM Application Security Canvas.

It condenses our battle-tested approach to help clients harness the transformative power of LLMs and safely deploy their application to production by implementing security controls at all stages of the LLM pipelines.

 

LLM application security canvas
Click to download our LLM application security controls canvas.

We can help

We are the trusted cyber security partner and industry-accredited, global provider of cyber security assurance services, with over 30 years of experience.

We understand the unique challenges that arise during the development and implementation of AI-powered solutions.

That’s why we offer comprehensive cyber security consulting services to support you every step of the way.

Our experienced and specialized team can help your organization leverage the full potential of AI technology while maintaining a resilient and secure infrastructure.

Want to talk in more detail?

Contact us to find out how we can support your organization in the secure deployment of GenAI and LLMs.

Tell us about your case and we’ll help you find the right solution.

Case Studies

Securing an LLM-powered customer support agent for a tech start-up

Client’s challenge

A tech start-up was developing an LLM-powered virtual agent to automate customer support experience for organizations. The agent would have access to customer accounts and the ability to perform operations like updating address details.

The client’s primary concern was ensuring the security and privacy of customer data while maintaining the agent’s functionality and effectiveness.

Our solution

Our team conducted a comprehensive security assessment of the client’s LLM integration, including a thorough evaluation of the agent’s tools and APIs.

We identified a critical vulnerability wherein the API allowed the LLM to specify the userID, opening the door for prompt injection or jailbreaking attacks. Malicious actors could potentially force the agent to invoke the API with a different userID, enabling unauthorized access and modification of confidential information across customer accounts.

To mitigate this risk, we advised the client on redesigning the API’s access controls. We recommended removing the userID parameter from the API and supplementing it as part of a secure session management system.

Additionally, we guided the client in integrating LLM guardrail pipelines that inspect untrusted input and limit the success rate of jailbreak or prompt injection attacks.

Outcome

By providing expert guidance and recommendations, we helped the tech start-up implement robust security measures for their LLM-powered customer support agent. The redesigned API and guardrail pipelines, implemented by the client based on our advice, ensured strong access controls and protection against malicious prompt injection attempts.

Customer data remained secure, and the agent could function effectively without compromising privacy or exposing the client to potential data breaches or unauthorized access.

AI Security Strategy and Risk/Threat Modeling for a Large Enterprise

Client’s challenge

A large multinational corporation aimed to enhance their workforce’s capabilities by adopting GenAI solutions, both off-the-shelf productivity tools and integration of proprietary LLMs via access to external, 3rd party APIs.

They sought guidance on evaluating the security implications of these GenAI implementations and establishing best practices for future GenAI projects. Given the dynamic nature of AI solutions, the client recognized the need for a tailored AI security strategy that complements traditional cybersecurity methods.
 

Our solution

We engaged with the client through a multi-phased approach, combining risk modeling and threat modeling in the context of AI to create a holistic view of the risks associated with leveraging API-based AI solutions for the organization.

Phase 1:Risk Modeling
Our team conducted a comprehensive risk assessment, identifying potential vulnerabilities and threats specific to the organization’s AI implementation. We evaluated factors such as data privacy, model biases, transparency, and integration with existing systems.

Phase 2: Threat Modeling
Building upon the risk assessment, we performed threat modeling exercises to understand the potential attack vectors and scenarios that malicious actors could exploit within the AI ecosystem. This included analyzing risks related to prompt injection, model hijacking, and insecure API integrations.

Phase 3: AI Security Checklist
Based on our findings from the risk and threat modeling phases, we developed a tailored AI security checklist to guide the client in implementing robust security measures for future API-based AI projects. This checklist encompassed best practices for secure data handling, model validation, API access controls, monitoring, and incident response.
 

Outcome

By engaging in this comprehensive AI security strategy, the client gained a deep understanding of the potential risks and threats associated with their AI implementations. Equipped with our risk and threat modeling insights, they could make informed decisions and implement appropriate security controls to mitigate these risks effectively.

Moreover, the AI security checklist provided a robust framework for future AI projects, ensuring a consistent and proactive approach to addressing security concerns from the outset. This empowered the client to leverage the transformative power of AI while maintaining the highest levels of security and protecting their organization’s critical assets and data.

Further resources

October 18, 2024 Our thinking

Generative AI security: Findings from our research

With our ongoing research on GenAI, we are looking to continuously deepen the understanding and raise awareness of these vulnerabilities to help organizations defend against potential exploits and ensure that they can safely leverage AI technologies.

Read more
April 12, 2024 Webinars

Building secure LLM apps into your business

Gain practical understanding of the vulnerabilities of LLM agents and learn about essential tools and techniques to secure your LLM-based apps. Our host Janne Kauhanen is joined by Donato Capitella, Principal Security Consultant at WithSecure™.

Read more
May 17, 2024 Our thinking

Prompt injections could confuse AI-powered agents

We wanted to explore how attackers could potentially compromise large language model (LLM) powered AI applications.

Read more
July 9, 2024 Our thinking

When your AI assistant has an evil twin

How attackers can use prompt injection to coerce Gemini into performing a social engineering attack against its users.

Read more
May 16, 2024 Our thinking

Should you let ChatGPT control your browser?

On the perils of LLM-driven browser agents.

Read more
May 10, 2024 Our thinking

Generative AI – an attacker’s view

This research article sheds light into the world of Generative AI from the perspective of the attacker.

Read more
Highlight

Free 60-minute consultation

What questions do you need answered? Choose a topic and book your private session with one of our consultants. Our experts are ready to talk through your pain points and get you some answers.

Learn more
Highlight

Current State Analysis

Ensure the security and privacy of your web and mobile applications. Application security testing identifies vulnerabilities before attackers do, ensuring continuous availability of your services and protecting your reputation.

Learn more
Highlight

Attack Path Mapping

Explore the potential routes an attacker might use to compromise your systems. Assess your security extensively with a collaborative, time-efficient exercise to pinpoint remediation activities that yield the greatest business impact.

Learn more

Check out our latest research on WithSecure Labs

For techies, by techies – we share knowledge and research for public use within the security community. We offer up-to-date research, quick updates, and useful tools.

Go to WithSecure Labs

Our accreditations and certificates

Contact us!

Our team of dedicated experts can help guide you in finding the right solution for your unique issues. Complete the form and we are happy to reach out as soon as possible to discuss more.