Ben Wilkes

Technical Lead
AI

March 30, 2026

Secure AI engineering requires discipline

Coding agents are capable tools, and they can help teams move through many tasks more quickly. But once you move beyond demonstrations and early experimentation, the focus shifts from how quickly something can be produced to whether what is produced can be relied upon in a production environment.

Production systems carry expectations that do not change because AI is involved. They need to be secure, maintainable, compliant with organisational standards, and able to evolve without introducing unnecessary risk. As the pace of output increases, maintaining those standards requires more deliberate control, not less.

Moving from prototype to production

AI agents are particularly effective at generating working prototypes. They can assist with building features, exploring ideas, and accelerating early-stage development.

Production environments operate under different constraints. Systems must be robust, sustainable, observable, subject to audit, and capable of being maintained by teams who may not have been involved in their original creation.

The gap between experimentation and production readiness is where problems can emerge. Teams may be comfortable using AI tools, but less certain about how to apply the same level of rigour that would normally be expected in delivery.

Understanding the risks in practice

In our experience working with teams on AI-assisted engineering, the risks tend to fall into three areas: quality, security, and data privacy.

The first is quality. When outputs are generated quickly, it becomes easier to accept them without the same level of scrutiny. Over time, this can introduce inconsistencies, unnecessary complexity, or fragile implementations that would normally be addressed during review. In production, model output should be treated as untrusted input and validated accordingly, particularly where it flows into code, configuration, or externally visible content.
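One way to treat model output as untrusted input is to run it through a validation gate before it is applied. The sketch below is illustrative only: the size limit and disallowed patterns are placeholder policies, not a complete validation scheme, and real pipelines would combine checks like this with review and testing.

```python
import re

# Placeholder policies for illustration; a real gate would be far richer.
MAX_PATCH_BYTES = 20_000
DISALLOWED_PATTERNS = [
    r"\beval\(",          # dynamic evaluation
    r"\bsubprocess\b",    # shell access from generated code
    r"curl\s+.*\|\s*sh",  # remote content piped into a shell
]

def validate_model_output(patch: str) -> list[str]:
    """Treat a generated patch as untrusted input; return any policy violations."""
    violations = []
    if len(patch.encode()) > MAX_PATCH_BYTES:
        violations.append("patch exceeds size limit")
    for pattern in DISALLOWED_PATTERNS:
        if re.search(pattern, patch):
            violations.append(f"matched disallowed pattern: {pattern}")
    return violations

patch = 'import subprocess\nsubprocess.run(["rm", "-rf", "build"])'
print(validate_model_output(patch))  # flags the subprocess usage
```

The point is not the specific patterns but the posture: generated artefacts pass through the same deterministic checks as any other untrusted input before they reach code, configuration, or externally visible content.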

The second is security. AI-assisted workflows introduce additional ways for systems to be influenced or misused, particularly when models are connected to tools, repositories, or external services. There are already well-documented examples of systems behaving in unintended ways when controls are insufficient, especially where agents are given broad access or operate without clear constraints.

The third is data privacy. Many AI tools operate under licensing models that include data retention or model training. Without a clear understanding of how those tools handle data, there is a risk of exposing sensitive or proprietary information.

Managing agent permissions

Coding agents typically run with user-level permissions, which can result in broader access than intended if not carefully controlled.

Restricting access to the minimum necessary scope, requiring explicit approval for actions, and avoiding persistent or overly permissive configurations all contribute to reducing exposure. These are established access control practices — what changes is the need to apply them consistently to a new category of tooling.
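Those principles can be made concrete in a small policy layer that sits between the agent and its tools. This is a minimal sketch under stated assumptions: the scopes, action names, and approval hook are illustrative and do not correspond to any specific agent framework's API.

```python
from dataclasses import dataclass, field

@dataclass
class ToolPolicy:
    # Minimum necessary scope: the agent may only touch these paths.
    allowed_paths: tuple[str, ...] = ("src/",)
    # Actions that always require explicit human approval.
    require_approval: tuple[str, ...] = ("write", "delete")
    audit_log: list[str] = field(default_factory=list)

    def authorise(self, action: str, path: str, approved: bool = False) -> bool:
        """Deny by default; record every decision for later audit."""
        if not path.startswith(self.allowed_paths):
            self.audit_log.append(f"denied {action} {path}: out of scope")
            return False
        if action in self.require_approval and not approved:
            self.audit_log.append(f"denied {action} {path}: approval required")
            return False
        self.audit_log.append(f"allowed {action} {path}")
        return True

policy = ToolPolicy()
policy.authorise("read", "src/app.py")                   # allowed
policy.authorise("write", "src/app.py")                  # denied: needs approval
policy.authorise("write", "src/app.py", approved=True)   # allowed with approval
policy.authorise("read", "/etc/passwd")                  # denied: out of scope
```

Keeping the policy deny-by-default and logging every decision means the configuration itself documents what the agent can do, rather than relying on whatever permissions the invoking user happens to hold.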

Considering data usage and storage

Depending on the tool and licensing model, prompts and outputs may be stored or used for training. These behaviours are not always obvious and can vary between providers.

For organisations working with sensitive information, it is important to understand these characteristics, verify vendor guarantees, and select deployment approaches that align with the level of control required.

Reinforcing delivery guardrails

As AI increases the speed at which code and other artefacts are generated, automated controls become more important. Branch protection, linting, automated testing, code quality scans, security scanning and secret detection provide a consistent deterministic baseline that supports quality and security regardless of how quickly changes are produced.
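As a flavour of what one of these deterministic controls looks like, here is a heavily simplified secret-detection check. Production scanners such as detect-secrets or gitleaks use far richer rule sets; the two patterns below are illustrative only.

```python
import re

# Two well-known secret shapes, for illustration only.
SECRET_PATTERNS = {
    "aws_access_key": re.compile(r"AKIA[0-9A-Z]{16}"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
}

def scan_for_secrets(text: str) -> list[str]:
    """Return the names of any secret patterns found in the given text."""
    return [name for name, pattern in SECRET_PATTERNS.items()
            if pattern.search(text)]

diff = "aws_key = 'AKIAABCDEFGHIJKLMNOP'"
print(scan_for_secrets(diff))  # ['aws_access_key']
```

Because checks like this run the same way on every change, they hold the line on quality and security even when the volume of generated code rises.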

Using threat modelling to understand change

Threat modelling remains a useful way to understand how AI-enabled systems behave and where risks may emerge. Although the technique itself is familiar, it needs to account for systems that are less deterministic and more influenced by context.

Considering scenarios such as prompt injection, misuse of integrated tools, unintended outputs, and data exposure through logging helps teams build a clearer picture of how their systems behave and where controls are needed.
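A threat model for an AI-enabled system can be captured as simple structured data and checked against the controls a team has actually implemented. The scenario and control names below are illustrative assumptions, not a definitive taxonomy.

```python
# Map each threat scenario to controls that would mitigate it (illustrative).
THREAT_MODEL = {
    "prompt injection": ["output validation", "tool permission gating"],
    "misuse of integrated tools": ["least-privilege scopes", "approval gates"],
    "unintended outputs": ["automated testing", "human review"],
    "data exposure through logging": ["log redaction", "retention limits"],
}

def uncovered_threats(implemented_controls: set[str]) -> list[str]:
    """List scenarios with no implemented mitigating control."""
    return [threat for threat, controls in THREAT_MODEL.items()
            if not implemented_controls & set(controls)]

# A team that has only output validation and automated testing in place:
print(uncovered_threats({"output validation", "automated testing"}))
# ['misuse of integrated tools', 'data exposure through logging']
```

Even a lightweight check like this makes the gaps visible, which is the main value of threat modelling in practice.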

Maintaining ownership and accountability

Introducing AI into delivery workflows changes how work is produced, but not who is responsible for the outcome. Treating AI as a tool that supports engineering, rather than something that replaces responsibility, helps maintain clarity. Human judgement continues to play a central role in reviewing outputs, making decisions, and ensuring that systems behave as intended.

The teams seeing the most benefit from AI are those that apply established engineering practices consistently as they adopt new tools. AI increases the speed and volume of delivery, but it also increases the importance of maintaining control over how systems are designed, built, and operated.

If you’re looking for a more structured way to approach this, we’ve pulled these ideas together in our AI security playbook, which looks at how generative AI changes risk in practice and how teams can apply sensible controls in real delivery environments.

About the author

With over 20 years in solution architecture and software engineering, Ben helps organisations deliver complex digital transformations, design scalable systems and build high-performing engineering teams.

Now focused on Generative AI, Ben helps teams move beyond prototypes to production-ready AI solutions. His approach combines disciplined engineering practices with hands-on delivery experience – grounded in Agile and XP principles – to ensure AI initiatives deliver real outcomes.

