At Equal Experts, we’re helping enterprise organisations worldwide introduce GenAI into the software delivery lifecycle, across legacy systems, modern cloud-native services, and platform engineering.
We empathise with the challenges facing senior leaders who want to understand how AI can be applied to platform engineering when the pace of innovation is so high.
We’ve been working in platform engineering for over a decade, and we’ve long believed that a platform team should be run like a product-oriented team. This is crucial for creating platform capabilities that are fast, reliable, and consistent. That accelerates outcomes for teams and allows you to unlock true economies of scale.
In a previous article, my colleague Tommy Hinrichs discussed why platform engineering is essential for scaling AI delivery and accelerating enterprise change. This article continues that discussion, providing our current recommendations for when (and when not) to use AI in platform engineering.
AI can supercharge your platform engineers, just as it can any other team. We believe that platform engineers will soon use AI for the majority of their tasks, from querying cloud estates to generating platform components. You need to adopt AI in your platform engineering right now, but be aware of its limitations. AI is a co-pilot in platform engineering, not an autopilot (for now).
When to use AI in platform delivery
Whatever your mix of hyperscaler services and SaaS products, your platform still needs plenty of custom code to create high alignment and self-service paved roads for your teams. AI-powered coding assistants are already an indispensable part of the toolkit here.
For example, I worked at a UK financial services organisation that wanted to move their unautomated infrastructure to a robust Terraform setup to kickstart their platform engineering efforts. Pairing platform engineers with an AI assistant sped up development of a self-serve CloudFront CDN solution for teams, complete with a comprehensive suite of Terraform tests. AI helped us to complete the project ahead of schedule and helped people to see the art of the possible.
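To give a flavour of that kind of test suite, here is a minimal sketch in Go using Terratest, one common way to test Terraform modules (not necessarily the tooling used on that engagement). The example directory, input variable, and output name are hypothetical stand-ins for whatever the real module exposes.

```go
package test

import (
	"testing"

	"github.com/gruntwork-io/terratest/modules/terraform"
	"github.com/stretchr/testify/assert"
)

// TestCdnModule provisions a hypothetical self-serve CDN module and checks
// that it exposes the distribution domain name teams need to wire up DNS.
func TestCdnModule(t *testing.T) {
	t.Parallel()

	opts := &terraform.Options{
		// Hypothetical path to an example configuration consuming the module.
		TerraformDir: "../examples/basic-cdn",
		Vars: map[string]interface{}{
			"team_name": "checkout", // hypothetical input variable
		},
	}

	// Always tear the stack down at the end of the test, even on failure.
	defer terraform.Destroy(t, opts)

	terraform.InitAndApply(t, opts)

	// Hypothetical output name defined by the module.
	domain := terraform.Output(t, opts, "distribution_domain_name")
	assert.Contains(t, domain, ".cloudfront.net")
}
```

The shape is what matters: apply the module, assert on its outputs, and destroy it afterwards, so AI-assisted changes to the module are caught by a deterministic check rather than a hopeful review.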
When not to use AI (yet)
Whilst AI helped us speed up code development, we had a different experience when using it to convert legacy configurations to the new Terraform format. Due to AI’s inherent lack of determinism, the process silently introduced inconsistencies, which slowed progress and eroded team confidence in AI. AI isn’t yet able to create fast, reliable, and consistent capabilities alone.
Further examples:
- Commissioning new services requires execution to an exact specification, so that service #40 is identical to services #1-39. An LLM would silently create subtly different scaffolds for each of your services, introducing errors that are time-consuming to find.
- Data security requires API guardrails to protect organisational data. It’s hard to predict the exact sequence of API calls used by an LLM, which could easily put your data at risk.
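To make the second point concrete, the dependable alternative to hoping the model behaves is an explicit guardrail that sits between the AI and your APIs. The Go sketch below shows a hypothetical allowlist check: only named, read-only operations are ever executed, whatever sequence of calls the model proposes.

```go
package main

import (
	"errors"
	"fmt"
)

// allowedCalls is a hypothetical allowlist: the only operations an AI-driven
// workflow may invoke against organisational data.
var allowedCalls = map[string]bool{
	"GET /v1/reports":      true,
	"GET /v1/reports/{id}": true,
}

var errBlocked = errors.New("call not on the guardrail allowlist")

// guard checks a proposed call against the allowlist before it is executed.
func guard(method, route string) error {
	if !allowedCalls[method+" "+route] {
		return fmt.Errorf("%s %s: %w", method, route, errBlocked)
	}
	return nil
}

func main() {
	fmt.Println(guard("GET", "/v1/reports"))         // <nil>: read is allowed
	fmt.Println(guard("DELETE", "/v1/reports/{id}")) // error: write is blocked
}
```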
This is why we recommend AI as a co-pilot only (for now). AI is another tool in the platform engineer’s toolkit: they set boundaries for it, guide its translations of legacy configuration, review and validate its output, and know when to put the tool down and rely on a repeatable process.
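The same principle applies to commissioning services: a deterministic template, not a prompt, is the repeatable process. The sketch below renders a hypothetical, heavily simplified scaffold from a fixed Go template, so service #40 can only differ from services #1-39 in the fields the specification allows.

```go
package main

import (
	"fmt"
	"os"
	"text/template"
)

// ServiceSpec captures the only inputs allowed to vary between services;
// everything else in the scaffold is fixed by the template.
type ServiceSpec struct {
	Name string
	Team string
	Port int
}

// Hypothetical scaffold template: in practice this would cover CI config,
// Terraform modules, dashboards, and so on.
const scaffoldTmpl = `service: {{ .Name }}
team: {{ .Team }}
port: {{ .Port }}
healthcheck: /healthz
`

func main() {
	tmpl := template.Must(template.New("scaffold").Parse(scaffoldTmpl))

	// Service #40 is rendered from exactly the same template as #1-39,
	// so the output differs only where the spec says it should.
	spec := ServiceSpec{Name: "payments-40", Team: "payments", Port: 8080}
	if err := tmpl.Execute(os.Stdout, spec); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
```

An AI co-pilot can help write and evolve the template; the generation of each new service stays deterministic.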
We do expect these limitations to recede as AI innovation continues at breakneck speed and teams integrate AI features into their own products. Demand for platform-provided foundational AI services, like AWS Bedrock, can only grow.
Conclusion
AI is an invaluable ally for accelerating platform feature development, but you still need a human in the loop. AI is a co-pilot in platform engineering, not an autopilot (for now). That means the platform engineer’s role is more critical than ever, applying human judgment where it matters most and knowing at what level to set the AI co-pilot dial.
So, start now, but start wisely. The key is to foster a culture of critical thinking about where, and where not, to apply AI. This involves debunking common myths, such as the idea that AI’s primary role is just code generation.
While using AI to develop platform capabilities is a great start, it only scratches the surface. The real opportunity comes when teams ask a broader question: “Where else across our entire delivery pipeline can our AI co-pilots help us accelerate delivery?”
About the author
Edd’s expertise spans software engineering, technical architecture and high-performing organisational culture. Drawing on experience as the architect of HMRC’s digital tax platform, he provides strategic guidance in complex, multi-team environments.
Edd is passionate about building capabilities from the ground up, prioritising openness and learning to ensure delivery success is achieved through alignment between culture and code. He shares this experience with engagement teams and clients across the USA, UK, and Europe. Connect with Edd on LinkedIn.