Chris Rutter

Security Principal, AI

November 19, 2025

Unlocking AI Innovation with Adaptive Guard Rails

At Equal Experts, we help organisations adopt AI-accelerated delivery to increase productivity and drive innovation. I often speak with senior leaders who would love to introduce Generative AI pilot initiatives but are blocked or held back while they wait for org-wide, static security guard rails and policies.

We’ve worked with several organisations to help them build adaptive guard rails that evolve and adapt with a pilot initiative, making sure the right controls are in place when they’re needed and unlocking experimentation and innovation.

The right guard rails are those that provide enough controls so that sensible risks can be accepted to drive business goals, not those that block innovation until all risks are mitigated.

A New Set of Threats to Guard Against

The productivity benefits on offer when using GenAI to generate code and build software come with a new set of threats, as well as new challenges in meeting existing security requirements. Some of the most common concerns our clients raise are:

  • What if AI-generated code contains serious bugs or is insecure? We need a way to reliably check the quality of generated code.
  • Could vulnerable third-party libraries be accidentally included in software? How can I check that libraries selected by an LLM are safe?
  • Could my protected data, IP, or secrets be leaked? How do I know what data new tools have access to and where it ends up?
  • Can I meet my data protection obligations using AI platforms?

These are all concerns that must be taken seriously when using GenAI to help deliver real software, both in a pilot and a wider release, but mitigations can be implemented in more than one way.

Adaptive Guard Rails – sensible controls when they’re needed

Adaptive guard rails are lightweight protections that keep initiatives safe as they evolve, growing stronger and broader as your pilot matures. They protect against the actual risks of your initiative, not the potential risks of what might be built years from now.

By starting small and iterating, you can experiment rapidly and safely with basic controls that address the actual risks of your pilot, adjusting them as your initiative scales. For example, we’ve implemented the following basic controls with several organisations and successfully unlocked AI pilots that went on to full adoption:

Model and Tool Approval List
Maintain a list of approved models and tools with acceptable security, data and privacy policies.  Clearly document the criteria used to define “acceptable.”
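
As a rough illustration, an approval list like this can be kept as simple structured data and checked during onboarding. A minimal sketch in Python; the tool names, models, dates and criteria below are hypothetical placeholders rather than recommendations:

```python
# Minimal sketch of a model and tool approval list, checked at onboarding.
# All tool names, models, dates and criteria here are hypothetical placeholders.

APPROVAL_CRITERIA = [
    "No training on submitted prompts or code by default",
    "Data processed and stored within approved regions",
    "Published security, data and privacy policies reviewed",
]

APPROVED_TOOLS = {
    "example-coding-assistant": {
        "approved_models": ["example-model-v1"],
        "reviewed": "2025-11-01",
        "notes": "Enterprise tier only; telemetry disabled",
    },
}


def is_approved(tool: str, model: str) -> bool:
    """Return True if the tool/model pair is on the approval list."""
    entry = APPROVED_TOOLS.get(tool)
    return entry is not None and model in entry["approved_models"]


if __name__ == "__main__":
    print(is_approved("example-coding-assistant", "example-model-v1"))  # True
    print(is_approved("unknown-tool", "any-model"))                     # False
```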

Data Security & Exfiltration Policy
Create a strong policy on what data can be used with AI models, e.g. no customer data or valuable intellectual property. Ensure pilot users attest to following this policy.

Secure Configuration Checklist
Create a standard configuration checklist that engineers can use to control coding assistant data access and model privacy settings.  Require screenshots from pilot users as evidence during onboarding.
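
The checklist itself can also be captured as data so that onboarding can flag missing evidence automatically. A minimal sketch, assuming hypothetical checklist items and a screenshot path supplied by the engineer:

```python
# Minimal sketch of a secure-configuration checklist for a coding assistant.
# The checklist items and evidence handling are hypothetical; real items
# depend on the settings your chosen assistant actually exposes.

from dataclasses import dataclass


@dataclass
class ChecklistItem:
    description: str
    evidence_screenshot: str | None = None  # path supplied by the engineer

    @property
    def complete(self) -> bool:
        return self.evidence_screenshot is not None


DEFAULT_CHECKLIST = [
    ChecklistItem("Telemetry / usage data sharing disabled"),
    ChecklistItem("Prompt and code retention set to the minimum available"),
    ChecklistItem("Workspace indexing limited to the pilot repository only"),
]


def onboarding_gaps(checklist: list[ChecklistItem]) -> list[str]:
    """Return the items that still need evidence from the engineer."""
    return [item.description for item in checklist if not item.complete]


if __name__ == "__main__":
    DEFAULT_CHECKLIST[0].evidence_screenshot = "screenshots/telemetry-off.png"
    print(onboarding_gaps(DEFAULT_CHECKLIST))
```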

Output Review and Assurance Process
Enforce human review of all code built with AI, and use in-line security scanners to check for vulnerabilities.  Produce a report showing all code has been reviewed and scanned.
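
One simple way to produce that report is to aggregate review and scan status per change. The sketch below assumes hypothetical records from a code-review tool and a security scanner, typically exported by the CI pipeline:

```python
# Minimal sketch of an assurance report for AI-assisted changes. The input
# records and field names are hypothetical; in practice they would come from
# your code-review tool and security scanner via the CI pipeline.

import json

changes = [
    {"id": "change-101", "human_reviewed": True, "scan_passed": True},
    {"id": "change-102", "human_reviewed": True, "scan_passed": False},
]

report = {
    "total_changes": len(changes),
    "all_reviewed": all(c["human_reviewed"] for c in changes),
    "all_scans_passed": all(c["scan_passed"] for c in changes),
    "exceptions": [
        c["id"] for c in changes if not (c["human_reviewed"] and c["scan_passed"])
    ],
}

print(json.dumps(report, indent=2))
```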

All of these controls can be tied together with a lightweight onboarding process: a short training session; a simple form in which engineers attest that they will follow the policy and upload evidence of secure configuration; and a report evidencing these activities that compliance teams can consume.

In 2025 we worked with DEFRA, the UK’s Department for Environment, Food & Rural Affairs, to deliver a highly successful AI-accelerated delivery pilot, rapidly augmenting existing teams with AI expertise and increasing productivity. We unlocked this innovation by using similar techniques, which demonstrated that all users understood the policy and followed a clear set of procedures.

Matching the guard rails to the risk and reward

For long-established and stable systems, org-wide static guard rails make sense. Risks to the business are well understood, and workflows are usually settled. Standardised, cross-cutting controls save money and effort and make it practical to maintain robust protections across a large number of systems.

For AI pilot initiatives, the pace of change is rapid, and the freedom to experiment without waiting for procurement or org-wide controls is essential. The scope of a pilot can be effectively limited, so lightweight controls that are quick to implement can bring risk down to acceptable levels.

Every organisation’s risk profile and tolerance are different, but if engineering teams proactively define lightweight policies and processes for their pilots, conversations with security and compliance teams can be collaborative and productive rather than a simple yes/no.

Conclusion

If you want to roll out a business-enabling AI pilot initiative, you don’t need to wait for advanced, all-encompassing static guard rails that are designed to mitigate all potential future risks.

Focus instead on adaptive guard rails that let you accept reasonable risks and keep moving. That’s how you protect your business and unlock the innovation AI can bring.

If you’re struggling to get an AI-accelerated delivery pilot off the ground because you’re blocked waiting for enterprise-level policies and controls, get in touch with us and we can help you drive ahead with pragmatic and proportional controls.
