AI is rapidly reshaping how software is built and delivered, and platform engineering is emerging as a critical enabler for scaling its impact across the enterprise. By supporting AI in delivery, platform teams can reduce developer cognitive load, improve governance, and turn early experimentation into reliable, secure innovation.
Software delivery is changing so radically that approaches from even a year ago can feel outdated. With AI accelerating this shift, teams must constantly evolve just to keep up.
Curiosity about tools like ChatGPT, Copilot, Cursor, or Windsurf has exploded into something bigger. Teams are no longer just experimenting with AI; they’re scrambling to respond to AI-driven changes in delivery, developer experience, risk, governance, observability, and security.
At Equal Experts, we’re regularly talking about the key role that platform engineering is playing in an increasingly AI-first world and why enterprise organisations need to enable AI at a platform level to maximise impact.
The shift we’re seeing
Everyone is wrestling with the pace of AI adoption and how to get from the “Wild West” of experiments to sustainable, methodical approaches to delivering software with AI. Some of the AI-accelerated software delivery work we’ve done with clients like Travelopia and Defra is already showing strong results and offering insights into the value of a structured, engineering-centric approach. Our former technical principal, Wes Reisz, shared some of his thoughts in a talk at the recent Equal Experts G[=]K25 event. Watch the talk: AI-assisted software delivery: Leveraging ChOP & LLMs to create more effective learning experiences.
A recent conversation with an engagement manager in the US centred on the patterns we’ve seen regarding AI adoption around platform engineering. The conversation ended up revolving around three areas where platform teams are supporting AI work:
- AI in delivery
- AI inside, or AI as part of the software that teams ship
- AI tooling, a new evolving space where platform teams are being asked to support tools like Model Context Protocol (MCP), Agent2Agent (A2A), and Agent Communication Protocol (ACP)
Why is AI a platform engineering problem now?
AI is no longer just making itself known in the enterprise; it’s already changing how we work. As PlatformEngineering.org contributor Luca Galante explained, “AI isn’t just knocking on the door; it’s already walking into the room and rearranging the furniture of how we build and run systems.” Google Cloud’s research echoes this, reporting that 86% of organisations say platform engineering is essential to realising AI’s full value, and 94% consider AI “critical” or “important” to the future of platform engineering.
In November 2024, Anthropic announced the MCP specification. MCP is a standard that, when implemented, allows an organisation to expose resources, prompts, and tools from databases and other systems directly to an LLM. Within its first few months, hundreds of MCP servers popped up in places like hub.docker.com, enabling access to tools from Jira and Confluence to GitHub and MongoDB, and they’re landing in the enterprise right now.
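To make the idea concrete, here is a minimal sketch of the JSON-RPC shape of an MCP `tools/list` response, in which a server advertises the tools an LLM client may call. This is an illustration of the wire format rather than the official SDK, and the tool name and schema are hypothetical.

```python
import json

# Hypothetical MCP "tools/list" response: the server describes each
# tool's name, purpose, and input schema so the LLM client can decide
# when and how to call it.
tools_list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "query_orders_db",  # hypothetical database tool
                "description": "Run a read-only query against the orders database",
                "inputSchema": {
                    "type": "object",
                    "properties": {"sql": {"type": "string"}},
                    "required": ["sql"],
                },
            }
        ]
    },
}

# A client discovers available tools by parsing this response.
payload = json.loads(json.dumps(tools_list_response))
tool_names = [t["name"] for t in payload["result"]["tools"]]
print(tool_names)
```

The key point for platform teams is that every entry in that list is a capability handed to an LLM, which is exactly why governance over which servers and tools are exposed matters.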
This isn’t about distant trends. It’s about changes that need to be embraced and planned for right now. If the enterprise doesn’t pay attention to these things, we’ll quickly find ourselves in an era of cost overruns, security concerns, and governance and compliance issues, similar to those we saw early on in the move to the cloud.
Platform engineering needs awareness of three AI forces
AI in delivery
AI is now woven directly into the software delivery lifecycle, including requirements, planning, coding, testing, documentation, and operations. Practices like supervised (and sometimes unsupervised) agentic engineering are enabling teams to encode knowledge into prompts and code and automate what used to be painstaking manual work. Smaller teams are able to deliver faster, with fewer handoffs, and the DevEx platform’s role of removing developer friction and reducing cognitive load must extend to support these delivery modes. The new demand? Platforms must support not just cloud or cross-cutting resources, but also structured guardrails, services, and even prompts built around AI. Done well, this continues to be a multiplier for developer experience and value delivery.
AI “inside” the runtime
AI isn’t just supporting delivery. It’s become part of the application runtime. Teams are building Retrieval Augmented Generation (RAG), ML models, and prompt-based agents directly into the products they ship. Machine learning is a first-class concern of many of the apps we build. Platform teams now face the challenge of supporting not just code, containers, CI/CD, and IaC, but increasingly models, pipelines, new types of monitoring, and all the components these AI-infused capabilities require.
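The retrieval step at the heart of RAG can be sketched in a few lines. This toy version uses bag-of-words cosine similarity in place of a real embedding model, and the document snippets and query are made up for illustration; a production platform would swap in a vector store and embedding endpoint.

```python
import re
from collections import Counter
from math import sqrt

# Hypothetical knowledge-base snippets the runtime can retrieve from.
docs = [
    "refund policy: customers may return items within 30 days",
    "shipping times vary by region and carrier",
    "platform teams provide golden paths for deployment",
]

def vectorise(text):
    # Bag-of-words term counts stand in for an embedding vector.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, documents):
    # Return the snippet most similar to the query.
    q = vectorise(query)
    return max(documents, key=lambda d: cosine(q, vectorise(d)))

context = retrieve("what is the refund policy?", docs)
# The retrieved snippet would then be stitched into the LLM prompt.
print(context)
```

Everything around this loop, such as the vector store, the embedding model, the index refresh pipeline, and the monitoring of retrieval quality, becomes platform-level infrastructure once multiple teams ship RAG features.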
AI tooling
AI tools like MCP (and A2A or ACP) are making knowledge work easier by connecting LLMs to data stores, but their unchecked use runs the risk of creating new forms of shadow IT (and all the risks that come with it). Developers and teams are wiring up MCP servers to automate workflows with access to GitHub, Jira, and Confluence, often without guardrails or governance. This is a realm for platform teams to consider. The danger lies in over-permissioned agents, data leakage, and untraceable actions. We’ve seen cases where an MCP agent, given too much scope, closed tickets and merged unreviewed code, leading to production incidents, not through malice but through a lack of oversight of overaggressive AI use.
The opportunity and the risks
Every major change comes with both a concern and an opportunity. AI represents tremendous potential for enterprise software, but it can also expose us to new risks that need to be actively mitigated.
With 43% of data leaders citing privacy and security as blockers, it’s clear the opportunity will be lost if we don’t bring structure to the chaos. The platform team is the natural enabler to address and offer guidance in this area.
A new charter for platform engineering
So what does this mean for platform teams?
The platform is no longer just a set of infrastructure APIs, golden paths, or reusable cross-cutting concerns. It’s the key to operationalising initiatives across teams in the enterprise, including AI.
Platform teams should now be thinking about:
- Empowering teams to experiment, but with the right guardrails in place.
- Providing and enabling verifiably secure, auditable integrations with AI tools like MCP servers – to stop shadow IT before it can start.
- Enabling teams with reusable prompts, templates, model endpoints and blueprints. For more information, see The Blueprint for MCP on Lambda.
- Incorporating observability and drift monitoring as first-class citizens in an AI stack.
- Setting up pipelines and controls for model updates, permissions, and data access.
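Several of these responsibilities, notably auditable integrations and observability, reduce to the same mechanism: recording every agent tool call as a structured event. The sketch below shows a decorator that emits one JSON audit record per invocation; the audit sink, tool name, and wrapped function are illustrative stand-ins, and a real platform would ship these records to its logging or observability backend.

```python
import json
import time
from functools import wraps

# Illustrative in-memory audit sink; a platform would use a log
# pipeline or event bus instead.
AUDIT_LOG = []

def audited(tool_name):
    """Wrap a tool function so every call leaves a traceable record."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            entry = {
                "tool": tool_name,
                "args": args,
                "kwargs": kwargs,
                "ts": time.time(),
            }
            AUDIT_LOG.append(json.dumps(entry, default=str))
            return fn(*args, **kwargs)
        return wrapper
    return decorator

@audited("jira.comment")  # hypothetical tool exposed to an agent
def add_comment(ticket, text):
    return f"commented on {ticket}"

add_comment("PLAT-123", "triaged by agent")
print(len(AUDIT_LOG))  # one auditable record per call
```

Because the platform owns the wrapper, teams get auditability for free on any tool they register, which is how "trust, visibility, auditability, and repeatability" becomes a property of the platform rather than a per-team chore.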
Platform teams are central to reducing friction on teams. The goal is not to block innovation but to guide it while embedding trust, visibility, auditability, and repeatability.
Experiments to confidence: Operationalising AI
We’re still in the early days of operationalising AI, but one thing is clear: platform engineering will be key. With the lessons learned in the adoption of open source, cloud, and agile, we’ve earned the right to be part of the discussion today. By combining some of these lessons with what we’re learning from AI exemplars, we can use platform engineering to help companies operationalise AI.
By enabling AI at the platform level, we continue to reduce developer cognitive load and accelerate change in the enterprise. If you’re interested in finding out more about the ways platform teams can adopt AI, contact us today.