Chris Rutter, Security Principal

Data, Gen AI | Wed 3rd April, 2024

How to secure your first generative AI integration

Generative AI is already providing businesses with exciting functionality and productivity gains. However, it isn’t always easy to understand the security implications of GenAI tools. Engineering teams wanting to use or explore these tools may therefore struggle to gain security approval. 

Most organisations have a regulatory responsibility to risk assess new software. These risk assessments include exercises like architectural reviews, data protection assessments, penetration tests and procurement processes. There are several reasons these activities are difficult when evaluating GenAI tools:

  • GenAI tools are new, and many security analysts are unclear on how they work, what they’re used for, how data flows through them and what new threats are emerging on a seemingly daily basis.
  • Companies hosting GenAI tools are still maturing and, until recently, have lacked clear policies and guarantees around privacy and data protection.
  • There is a lack of industry-standard benchmarks, security policies, reference architectures or hardening guides, which traditionally help security analysts gain confidence when assessing unfamiliar technologies.
  • High-profile news stories covering misuse of GenAI tools create a negative picture and raise concerns of reputational damage.

We see risk assessment teams asking, “How much more risk will using GenAI expose us to?” without understanding how the tools work and what security guarantees tool providers are offering.

Here’s how you can plan a successful security review of GenAI-backed systems and secure them in your organisation.

Fill in the knowledge gaps with a clear proof-of-concept

The key to securing any emerging technology is for your engineering teams to be proactive, helping to fill knowledge gaps for security teams and making risk assessments as easy as possible.  

First, your engineering team must get their hands dirty with a proof-of-concept (POC) and understand the tool’s functionality, security controls and data flows. Often, engineers are still learning how to use this new technology, so hands-on experience and getting to know the ins-and-outs of the tool are the best ways to produce all the information your security reviewers need to do their jobs.

Gaining approval for a POC can be a challenge, but we’ve seen teams succeed by raising a proposal with a clear and documented low-risk scope: no internal or personal data, no output in front of customers and no integrations with production systems. Using test or public data, isolated environments, and even virtual workstations can all help teams to gain approval.
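To make that scope concrete, a low-risk POC can be as small as a script that sends only public or synthetic test data to the provider from an isolated environment. The sketch below is illustrative only: it assumes the official OpenAI Python client, a dedicated POC API key and a placeholder model name, none of which are requirements of the approach.

```python
# Minimal proof-of-concept sketch: public test data only, no production systems.
# Assumes the official OpenAI Python client (pip install openai) and an API key
# issued specifically for the POC, held outside the codebase.
import os
from openai import OpenAI

# A dedicated key for the POC is easy to revoke and to audit.
client = OpenAI(api_key=os.environ["POC_OPENAI_API_KEY"])

# Public or synthetic data only -- nothing internal, personal or customer-facing.
with open("public_sample_document.txt") as f:  # placeholder test file
    sample_text = f.read()

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "Summarise the supplied document in three bullet points."},
        {"role": "user", "content": sample_text},
    ],
)

# Output stays with the engineering team for review; it is not shown to customers.
print(response.choices[0].message.content)
```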

Once a POC is complete, put together an information pack that allows any security reviewer to fully understand the data flow and any security controls available. Show a demo of how the tool works, provide a clear explanation of its features, and draw out a data flow diagram covering every hop the data takes between your users, your systems and the provider; a simple example of what such a diagram should capture is sketched below.
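The structure below is a hypothetical summary of the hops and controls a data flow diagram might record for an internal, staff-facing POC. Every component name and control listed is a placeholder for your own architecture, not a recommendation.

```python
# Hypothetical data flow for an internal, staff-facing GenAI POC. Each entry is
# one hop a reviewer would expect to see on the diagram, with the data that
# crosses it and the controls applied. All names are placeholders.
DATA_FLOW = [
    {
        "hop": "employee browser -> internal web app",
        "data": "user prompt",
        "controls": ["single sign-on", "TLS in transit"],
    },
    {
        "hop": "internal web app -> GenAI provider API",
        "data": "prompt plus system instructions",
        "controls": ["API key held in a secrets manager", "no personal data within POC scope"],
    },
    {
        "hop": "GenAI provider API -> internal web app",
        "data": "model response",
        "controls": ["responses logged for review", "provider retention terms documented"],
    },
    {
        "hop": "internal web app -> employee browser",
        "data": "rendered response",
        "controls": ["visible to staff only, never to customers"],
    },
]

# Print a one-line summary of each hop for the information pack.
for hop in DATA_FLOW:
    print(f"{hop['hop']}: {hop['data']} ({'; '.join(hop['controls'])})")
```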

With a clear demonstration of functionality, security controls and data flow, your security reviewers have enough understanding and information to carry out an architectural risk assessment, just as they would for any other system or tool.

Choose a tool with strong policy guarantees

All organisations we’ve worked with have a policy on procuring SaaS tools that process data. These checks are necessary to comply with data protection regulations and to carry out the due diligence that protects against damage caused by a SaaS provider failure.

Until recently, popular AI tool providers like OpenAI had very basic privacy and data protection policies, some of which lacked explicit answers or guarantees in these (very important) areas. For many organisations, these unclear policies were part of their decision not to use AI tools.

We’ve seen many providers reach a new level of policy maturity, especially with emerging enterprise offerings (for example, OpenAI’s enterprise privacy policy), so we recommend re-engaging with the procurement process to re-evaluate the available policies.

Create a threat model of how the tool could be abused

One of the most compelling aspects of GenAI tools is the almost-unlimited ways they can be used, seemingly restricted only by the prompts that can be imagined. From a risk assessment perspective, somebody needs to show that the potential for abuse of this unlimited functionality has been assessed.

The best way to reason about these potential misuse scenarios is to carry out a threat-modelling exercise, in which you work with your security teams to identify ways an attacker could misuse a system, and then design security controls to prevent this from happening.

Ensure that both engineering and security teams are familiar with new GenAI-specific threats. Then hold a joint session to run through each point of the system’s data flow diagram with the mindset of an attacker, brainstorming different ways the system could be misused and which security controls could prevent it.

An example finding from a threat-modelling session around employees using ChatGPT might be: an employee pastes sensitive customer data into a prompt, exposing it to a third-party provider. Mitigating controls could include an enterprise plan that excludes prompts from model training, clear acceptable-use guidance, and a check that blocks obviously sensitive content before it leaves your systems.
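As a sketch of what that last control could look like, the example below refuses to forward prompts that match a few obviously sensitive patterns. The patterns and function names are illustrative only and are no substitute for a proper data loss prevention tool.

```python
# Illustrative pre-send check: refuse to forward prompts containing obviously
# sensitive content. The patterns below are examples only, not a complete or
# production-grade data loss prevention rule set.
import re

SENSITIVE_PATTERNS = {
    "email address": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "UK National Insurance number": re.compile(r"\b[A-CEGHJ-PR-TW-Z]{2}\d{6}[A-D]\b", re.IGNORECASE),
    "card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}


def check_prompt(prompt: str) -> list[str]:
    """Return the names of any sensitive patterns found in the prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items() if pattern.search(prompt)]


def send_if_safe(prompt: str) -> None:
    findings = check_prompt(prompt)
    if findings:
        # Block the request and tell the user why, rather than silently redacting.
        raise ValueError(f"Prompt blocked: possible {', '.join(findings)} detected.")
    # ...forward the prompt to the GenAI provider here...


send_if_safe("Summarise the themes from our public roadmap page.")  # passes

try:
    send_if_safe("Draft a reply to jane.doe@example.com about her refund.")
except ValueError as blocked:
    print(blocked)  # Prompt blocked: possible email address detected.
```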

This talk by AWS provides an excellent walkthrough of carrying out a full threat model of generative AI workloads. 

We’ve seen that when engineering and security/risk teams come together to produce a robust risk assessment built on shared understanding, architectural reviews, strong vendor policies and threat models, organisations have a clear path to experimenting with, and productionising, GenAI tools.