
Edd Grant

Modernisation & Platforms Service Principal

October 20, 2025

GenAI: Rocket Fuel for the Aligned, Chaos for the Disjointed

The relentless pace of AI innovation presents both immense opportunity and significant challenges for engineering leaders. At Equal Experts, we’ve seen examples where the use of Generative AI (GenAI) in the Software Delivery Life Cycle (SDLC) has acted as a positive accelerant, but also examples where it has accelerated organisations into technical anarchy. 

Organisations are increasing their use of GenAI in their SDLC, so it is more important than ever that they do so in ways that actually benefit them and don’t cause problems further down the road.

In this article, we look at some examples of the application of GenAI in the SDLC and investigate what organisations can do to harness it as a positive accelerant for software development, without hurtling towards technical anarchy. A key insight here is that meaningful, appropriate guardrails are essential to achieving this positive acceleration.

GenAI in the SDLC: Successes and Failures

We recently worked with a UK government department using GenAI in the SDLC of a new government digital service. The team was able to create, from scratch, a brand new digital service, fully compliant with Government Digital Service (GDS) standards, in around two weeks. A service such as this would typically take several months to develop, so creating it in this timeframe, and validating its behaviour against the required standards, is an incredible feat!

Conversely, we have seen multiple instances where GenAI has been used in the SDLC of services but has resulted in delivery delays and increases in unplanned work.

Analysis

In all of the instances we looked at, the organisations had talented engineering teams making use of GenAI LLMs. So why were the outcomes so different? What did one organisation (the UK government department) do that allowed it to create a standards-compliant service in just two weeks, when the others fell into cycles of difficulty?

I believe that, to a large extent, the answer lies in each organisation’s level of technical alignment and, crucially, the guardrails (product and technical standards, specifications and other corporate knowledge) that their GenAI models had access to.

Let’s look at the resources that were used by the UK government department, and understand how these were beneficial:

Product Standards

The UK government publishes the GOV.UK Service Manual, an online guide for teams creating and running government services in the UK. It defines the Service Standard, a set of 14 points that UK government digital services must meet. The manual has been refined iteratively over a number of years and is the authoritative specification in this context. For the UK government department it provided an excellent product specification, and because it is publicly available, the LLM used had already been trained on it and was aware of its requirements.

Technical Standards

For the technical specification, the UK government department used internal platform documentation to create distilled coding practices and instructed the LLM to adhere to them. As a result, the LLM understood the organisational requirements: which tech stack to code for, how to structure the code, what approach to take to writing tests, how to handle configuration, and so on.
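As a minimal sketch of this idea, the distilled coding practices could be kept as plain markdown files and concatenated into a system prompt before each LLM call. The file layout and function name here are hypothetical, not the department's actual setup:

```python
from pathlib import Path


def build_system_prompt(guardrail_dir: str) -> str:
    """Concatenate distilled guardrail documents into one system prompt.

    Assumes each markdown file in guardrail_dir holds one distilled
    standard (tech stack, code structure, testing approach, configuration).
    """
    sections = []
    for doc in sorted(Path(guardrail_dir).glob("*.md")):
        # Use the filename as a section heading so the model can cite it.
        sections.append(f"## {doc.stem}\n{doc.read_text().strip()}")
    preamble = (
        "You are generating code for this organisation. "
        "Adhere strictly to the following standards:\n\n"
    )
    return preamble + "\n\n".join(sections)
```

The resulting string would then be supplied as the system message (or equivalent instruction file) for whichever LLM tooling the team uses, so every generation starts from the same organisational context.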

Well-Defined Problem Domain

Finally, the UK government department had a well-defined problem statement. This described the problem at hand, broken down into a series of small prompts, each focussed on a particular area of the problem space.
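One way to picture this decomposition: each small prompt repeats the shared problem statement so the model keeps the overall goal in view, then narrows to a single area. The helper and the example sub-problems below are illustrative, not the department's actual prompts:

```python
def decompose_problem(shared_context: str, sub_problems: list[str]) -> list[str]:
    """Turn one large problem statement into small, focused prompts.

    Each prompt restates the shared context, then constrains the model
    to a single area of the problem space.
    """
    return [
        f"{shared_context}\n\nFocus only on this part of the problem: {sub}"
        for sub in sub_problems
    ]


# Hypothetical service and sub-problems, for illustration only.
prompts = decompose_problem(
    "Build a GDS-compliant service for renewing a fishing licence.",
    [
        "Define the start page and eligibility questions.",
        "Implement the payment step using the standard pattern.",
        "Add the confirmation page and email notification.",
    ],
)
```

Working through the prompts one at a time keeps each LLM response small enough to review against the guardrails before moving on.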

In one particularly interesting example, the GenAI model suggested implementing a behaviour using an Accordion component from the GOV.UK Design System, which, being publicly available, it had already seen in its training data: a highly appropriate choice for a government digital service.

Technical Alignment Assurance

Technical Alignment Assurance (TAA) is the discipline of continuously verifying that work adheres to appropriate guardrails. During the development of the UK government service, the LLM proactively highlighted areas where suggested changes would have reduced alignment with its guardrails; in one instance it pointed out that changing a background colour would make text difficult to read, referring to the appropriate guidance in the GOV.UK Design System.

High Alignment

In the development of the UK government service, providing sufficient guardrails resulted in a “high-alignment” situation: given well-thought-out prompts describing the problem domain, the LLM had enough high-quality information to create appropriate code that aligned with the organisation’s preferences.

Conversely, in the cases where GenAI did not positively accelerate the SDLC, the guardrails provided to the LLM were low quality, ambiguous or inexact, or the context provided was irrelevant. In these “low-alignment” situations, the direction most strongly predicted by the LLM’s training corpus was not aligned with the organisation’s own goals. Combined with vague, low-quality problem domain prompts, this quickly left these organisations in situations of chaos and technical anarchy.

Conclusion

In his article How to overcome technology anarchy with aligned autonomy, Steve Smith talks about how increasing alignment leads to better decisions being made.

It seems clear that high alignment directly benefits the SDLC, both for traditional software development done by engineers and for GenAI-enabled development.

Many organisations have already spent time developing the guardrails needed to create a high-alignment environment; those that haven’t should focus on doing so.

The next step is to use organisational guardrails to train and inform internal GenAI models, ensuring their contribution becomes an accelerant towards, rather than away from, the organisation’s goals.

The key takeaway for engineering and executive leaders is clear: if you want GenAI to empower your organisation, you need to align it with how your organisation works. Decide what meaningful, appropriate guardrails look like for your context, then create, deploy and use them, both standalone and within your internal AI models. This will ensure that when your teams use GenAI, they benefit from its training on this information.

About the author

Edd’s expertise spans software engineering, technical architecture and high-performing organisational culture. Drawing on experience as the architect of HMRC’s digital tax platform, he provides strategic guidance in complex, multi-team environments.

Edd is passionate about building capabilities from the ground up, prioritising openness and learning to ensure delivery success is achieved through alignment between culture and code. He shares this experience with engagement teams and clients across the USA, UK and Europe. Connect with Edd on LinkedIn.
