Chris Rutter

Security Principal
AI

March 18, 2026

Practical AI security controls that mitigate real-world risks

There’s a lot of AI security guidance available right now, and most of it is focussed on high-level policy and principles. Delivery teams building out new workflows and products using AI often struggle with something much more concrete:

What controls should we actually implement to secure our new AI systems and workflows? And which of them genuinely reduce risk?

You don’t need entirely new security disciplines

One misconception I see regularly is that AI systems require a completely new security operating model.

AI does introduce new threat categories like prompt injection, model abuse, and unintended data exposure, but the techniques we use to mitigate them are familiar:

  • Understand the data flow
  • Identify trust boundaries
  • Apply least privilege
  • Validate inputs and outputs
  • Automate assurance

If your organisation already does those things well, securing AI becomes an extension of existing practices rather than a reinvention. Let’s explore some concrete controls to mitigate AI-specific threats.

Validate model outputs

One of the biggest practical risks in AI-enabled systems is over-trusting model output.

Models generate text that looks authoritative, but that doesn’t mean it’s safe or correct. In practice, this means:

  • Never pass model output directly into system commands
  • Always review model output used in an impactful context (e.g. software code, published content), whether manually, automatically or both
  • Apply validation rules to anything consumed by downstream services (a minimal sketch follows this list)
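
As a minimal sketch of what output validation can look like, the example below assumes a hypothetical workflow in which the model is asked to suggest a report filename as JSON. The field name and allowlist pattern are illustrative, not a prescribed format.

```python
import json
import re

# Hypothetical workflow: the model suggests a report filename as JSON.
# Its raw text is never passed to a shell; it is parsed and checked
# against a strict allowlist pattern first.
SAFE_FILENAME = re.compile(r"^[A-Za-z0-9_\-]{1,64}\.(csv|json|txt)$")

def validate_model_output(raw_output: str) -> str:
    """Parse and validate model output before any downstream use."""
    try:
        payload = json.loads(raw_output)
    except json.JSONDecodeError as exc:
        raise ValueError("Model output is not valid JSON") from exc

    filename = payload.get("filename")
    if not isinstance(filename, str) or not SAFE_FILENAME.match(filename):
        raise ValueError(f"Rejected unsafe filename from model: {filename!r}")
    return filename
```

Anything that fails validation is rejected rather than executed, and can then be routed for human review.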

Apply least privilege to agents

Local coding agents and MCP servers run with your user permissions by default.

If you grant broad access to resources such as file systems, code repositories, local tool execution or production APIs, you create unnecessary exposure and operate at avoidable risk.

At a minimum:

  • Use “approve each action” as the default when using local coding agents
  • Restrict agent filesystem access to the minimum possible (a sketch of this restriction follows the list)
  • Ensure agents have unique identities, short-lived credentials and least-privilege permissions
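
As a minimal sketch of the filesystem restriction above, the example below wraps an agent’s file-reading tool so it can only resolve paths inside an approved workspace directory. The workspace path and function name are illustrative and not tied to any particular agent framework; it assumes Python 3.9+ for Path.is_relative_to.

```python
from pathlib import Path

# Hypothetical workspace the agent is allowed to read; everything else is denied.
AGENT_WORKSPACE = Path("/srv/agent-workspace").resolve()

def read_file_tool(requested_path: str) -> str:
    """Read a file on behalf of the agent, but only inside the approved workspace."""
    resolved = Path(requested_path).resolve()

    # Resolving first defeats "../" traversal and symlink escapes: the final
    # path must still sit under the workspace root.
    if not resolved.is_relative_to(AGENT_WORKSPACE):
        raise PermissionError(f"Agent denied access outside workspace: {resolved}")

    return resolved.read_text()
```

The same pattern applies to any tool the agent can call: deny by default, then allow the narrowest set of resources the task actually needs.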

These are fundamental controls that significantly reduce risk and apply traditional security objectives to AI systems.

Protect sensitive data deliberately

With emerging technologies like AI, data privacy risk often comes not from malicious attack but from applying insufficient security controls to different classes of data.

Free or entry-level licences for AI models may permit undesirable behaviour such as storing prompts, retaining outputs and using proprietary data for model training. Sensitive or confidential data should not be used under these licences (a minimal redaction sketch follows the list below).

Enterprise licences typically provide stronger guarantees, but they must be verified. For sensitive environments, it’s important to:

  • Confirm data retention terms
  • Disable training where possible
  • Consider private cloud or local deployments
  • Restrict who can access AI tools
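
As a minimal sketch of deliberate data protection, the example below redacts obviously sensitive values from a prompt before it leaves the organisation. The patterns are illustrative only; a real deployment would follow the organisation’s data classification rules and use a dedicated redaction or DLP service.

```python
import re

# Illustrative patterns only; these do not constitute a complete DLP solution.
REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[REDACTED_EMAIL]"),
    (re.compile(r"\b(?:\d[ -]?){13,16}\b"), "[REDACTED_CARD]"),
]

def redact_prompt(prompt: str) -> str:
    """Strip obviously sensitive values before a prompt is sent to an external model."""
    for pattern, placeholder in REDACTIONS:
        prompt = pattern.sub(placeholder, prompt)
    return prompt

print(redact_prompt("Customer jane@example.com paid with 4111 1111 1111 1111"))
# Customer [REDACTED_EMAIL] paid with [REDACTED_CARD]
```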

Consistently automate SDLC guardrails

AI can generate code and other outputs very quickly, which makes consistent and automated controls even more important than before.

At a minimum, development workflows should include:

  • Enforced branch protection with human review
  • CI pipelines with linting, secret detection and security scanning (SAST); a minimal secret-detection sketch follows this list
  • Automated unit test coverage checks
  • Third-party library / dependency vulnerability scanning (SCA)
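
As a minimal sketch of the secret-detection step, the example below scans a repository for a couple of well-known credential patterns and fails the build if any are found. The patterns and file selection are illustrative; production pipelines should use a dedicated secret-scanning tool rather than a hand-rolled script.

```python
import re
import sys
from pathlib import Path

# Illustrative patterns only; a real pipeline would use a dedicated scanner.
SECRET_PATTERNS = {
    "AWS access key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "Private key header": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_repo(root: str = ".") -> int:
    """Return the number of suspected secrets found under the given root."""
    findings = 0
    for path in Path(root).rglob("*"):
        if not path.is_file():
            continue
        text = path.read_text(errors="ignore")
        for name, pattern in SECRET_PATTERNS.items():
            if pattern.search(text):
                print(f"{path}: possible {name}")
                findings += 1
    return findings

if __name__ == "__main__":
    # A non-zero exit code fails the CI job and blocks the merge.
    sys.exit(1 if scan_repo() else 0)
```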

Threat model AI workflows and systems

AI-enabled workflows behave differently from traditional applications and are inherently less deterministic. Threat modelling is an excellent way to reason about the security of a new workflow or system.

Threat modelling sessions should explicitly consider threats like:

  • Prompt injection vectors
  • AI tool misuse scenarios
  • Model hallucination impacts
  • Data leakage through logging

When engineering and security teams run these exercises together, they can build shared understanding and more effective controls.

The goal is enablement

Security should not be the function that blocks AI adoption; it should be the function that ensures risks are understood, controls are proportionate and experimentation can move into production safely.

The key to AI security is understanding how existing fundamental security principles can be applied to new AI tools, workflows and architectures. Only then can new threats be mitigated and risks brought under control.

Download the AI security playbook

If you’re looking for a practical way to think about securing AI systems, we’ve pulled these ideas together in our AI security playbook. It outlines the real controls that matter in delivery environments and how teams can apply them without slowing development down.


About the author

A specialist in security, platforms and modernisation, Chris has spent over a decade helping teams in finance, retail and government build delivery-focussed, scalable and secure software systems. Having worked in product delivery, security and advisory roles, Chris specialises in building large-scale technical capabilities and introducing DevSecOps practices to help teams deliver securely at scale.

