Framing the challenge: AI and the Mikado effect
Adopting AI in the software delivery lifecycle is less like installing a new tool and more like playing a game of Mikado, in which players carefully remove individual sticks from a tangled pile without disturbing the rest. Each move that introduces AI into analysis, design, implementation, testing, or operations appears local, but subtly shifts tension across the whole system. Touch one stick, and others move, sometimes in unexpected ways. Decisions that were once buffered by friction surface earlier; boundaries between roles and phases blur, and latent dependencies become active constraints. In this environment, progress is not determined by how quickly individual moves can be made, but by how clearly the organisation can see which moves are safe, reversible, and worth making. This blog explores how organisations can observe these shifts deliberately, recognise the signals they create, and make decisions that account for the full system, not just the stick they intend to move.
Why organisations are exploring AI in the SDLC
Most organisations exploring AI in the software delivery lifecycle are not reacting to a single crisis, nor starting from a blank slate. Pressure accumulates gradually. Internally, teams experience delivery inefficiencies, rising coordination costs, slow or fragile decision-making, and increasing cognitive load concentrated on a small number of experienced individuals. Burnout is often persistent rather than acute, and ways of working that once felt effective increasingly feel heavy and unresponsive.
Externally, organisations observe peers and competitors experimenting with AI-enabled delivery, shortening feedback loops and signalling momentum. For some, this creates a sense of FOMO even where nothing is visibly “on fire”. Others, particularly first movers, approach AI from a position of confidence rather than urgency, seeing an opportunity to learn faster, improve efficiency and effectiveness, reduce cognitive load, and create better ways of working for their people.
Across these contexts, a consistent pattern has emerged: most AI-in-the-SDLC initiatives fail not because of model quality or tooling choice, but because decision-making, funding cadence, and organisational boundaries cannot keep pace with AI-accelerated execution.
Why traditional transformation playbooks fall short
Earlier transformation efforts often succeeded by focusing change where friction was visible, commonly summarised as “start where it hurts.” This approach assumed a relatively stable understanding of the SDLC, with work moving through recognisable phases and friction localised within them.
In this blog, the software delivery lifecycle is described using IM4: Imagine, Model, Make, Move, and Maintain (see footnote). We developed the IM4 articulation specifically to ground conversations about AI-enabled transformation in a familiar yet AI-relevant abstraction of the SDLC. Although it builds on long-standing software delivery thinking, this framing and its application as a lens for AI in the SDLC are original to this work and not presented elsewhere. While the principle of starting from real friction remains relevant, AI changes the conditions under which it applies. IM4 is therefore used here not as a prescriptive lifecycle, but as an analytical reference for observing how AI alters the cost, timing, and interaction of work across the SDLC.
AI reduces the cost and time of analysis, design, implementation, testing, and operational preparation. As a result, long-standing boundaries between IM4 modes collapse, assumptions surface earlier, and decisions are forced sooner. What once appeared primarily as a delivery problem increasingly reveals itself as a decision-making, coordination, and trust problem. In this environment, prescriptive roadmaps, fixed target states, and one-size-fits-all playbooks become obstacles.
From assumptions to signals
We propose shifting from traditional assumption-driven transformation to signal-driven change. Signals are observable patterns that emerge when organisations examine how their existing ways of working behave under AI-induced acceleration.
Familiar signals such as flow friction, feedback latency, cognitive load concentration, and rework remain relevant, but AI causes them to surface sooner and more clearly. At the same time, new or amplified signals emerge: boundary collapse between SDLC stages, which removes waiting and enables immediate feedback but can expose decision gaps and latent dependencies as execution outpaces decision readiness; trust tension around AI-generated outputs; operational controls lagging behind delivery; and reversibility stress, as cheap execution encourages early commitment.
In this context, “start where it hurts” becomes an emergent signal rather than a fixed starting point. Organisations may begin where pain is already visible, but must remain attentive to new friction that appears as AI changes how work is done.
Before discussing AI adoption, organisations must establish their starting point and scope. Most established organisations are operating within an existing SDLC shaped by people, processes, and controls that cannot be reset wholesale. The practical question is therefore not whether to adopt AI, but where to augment first — whether in imagining, modelling, making, moving, or maintaining.
Once a candidate area is chosen, the organisation must explore where an AI-enabled design collides with governance, risk, compliance, operations, funding, or platform constraints. These points of friction are not obstacles to bypass, but signals that inform scope, sequencing, and trade-offs.
Financial and human signals matter more than ever
AI changes how value and risk accumulate. While the marginal cost of producing artefacts drops, the cost of poorly timed or delayed decisions increases. Meaningful financial signals include imbalances between the cost of delay and cost of change, misalignment between funding cadence and accelerated experimentation, hidden costs of decision fatigue, and reversibility as a measure of financial exposure.
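As a rough sketch of how two of these financial signals might be compared, the fragment below models the cost of delay against the cost of change for a single candidate move. All names and numbers are purely illustrative assumptions, not figures from any real engagement.

```python
from dataclasses import dataclass


@dataclass
class OptionEconomics:
    """Hypothetical financial signals for one candidate AI adoption move."""
    cost_of_delay_per_week: float  # value lost each week the decision is deferred
    cost_of_change: float          # one-off cost of making the change
    reversal_cost: float           # cost of backing the change out if it fails

    def break_even_weeks(self) -> float:
        """Weeks of delay after which deferring costs more than acting."""
        return self.cost_of_change / self.cost_of_delay_per_week

    def financial_exposure(self) -> float:
        """Worst-case exposure if the change is made and then reversed,
        i.e. reversibility expressed as a financial measure."""
        return self.cost_of_change + self.reversal_cost


# Illustrative numbers only: a hypothetical AI-assisted test-generation pilot.
pilot = OptionEconomics(cost_of_delay_per_week=5_000,
                        cost_of_change=20_000,
                        reversal_cost=4_000)
print(pilot.break_even_weeks())    # 4.0 (after ~4 weeks, delay costs more than acting)
print(pilot.financial_exposure())  # 24000
```

Even a back-of-the-envelope comparison like this makes the imbalance visible: when the break-even point is short and the exposure is bounded, the signal favours a reversible experiment over further analysis.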
Human signals are equally important. Curiosity, enthusiasm, defensiveness, dismissal, and trust are strong indicators of where learning can occur safely and where AI adoption is likely to stall. These signals often determine success more reliably than technical readiness alone.
Models as lenses, not prescriptions
Signals do not emerge without deliberate observation. This approach uses models as lenses, not prescriptions. Widely understood models such as SDLC framings, Team Topologies, and systems thinking models provide shared language and alignment. Additional lenses help surface AI-specific dynamics, including decision latency, cognitive load concentration, coordination patterns, and blast radius.
The purpose of these models is not to recommend change, but to make options, trade-offs, and consequences visible — ensuring decisions remain owned by the organisation.
From signals to options – and decisions
Once a meaningful pain point is identified and its blast radius understood, the focus shifts to mapping credible options rather than converging on a single solution. Each option makes explicit the prerequisites that must be addressed to proceed responsibly: what the organisation would need to start doing, stop doing, or invest in; where accountability and cognitive load would shift; how team interactions would change; which AI tooling would be involved; what responsible-use guidelines would apply; and which signals should be tracked as feedback.
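One lightweight way to keep such options comparable is to capture each one as a structured record. The sketch below is a minimal, hypothetical shape for that record; every field name and example value is an assumption made for illustration, not a prescribed format.

```python
from dataclasses import dataclass


@dataclass
class Option:
    """One credible option surfaced from a signal (all fields illustrative)."""
    name: str
    prerequisites: list[str]          # what must be addressed to proceed responsibly
    start: list[str]                  # what the organisation would start doing
    stop: list[str]                   # what it would stop doing or invest in instead
    accountability_shifts: list[str]  # where accountability and cognitive load move
    tooling: list[str]                # which AI tooling would be involved
    guidelines: list[str]             # responsible-use guidelines that would apply
    signals_to_track: list[str]       # feedback signals revisited as work progresses


# A hypothetical example of one option, filled in for illustration only.
option = Option(
    name="AI-assisted code review in the Make mode",
    prerequisites=["responsible-use policy agreed", "review accountability clarified"],
    start=["pairing on AI-suggested changes"],
    stop=["mandatory senior sign-off on low-risk changes"],
    accountability_shifts=["reviewers own final judgement on AI output"],
    tooling=["an in-IDE code assistant"],
    guidelines=["no customer data in prompts"],
    signals_to_track=["review latency", "rework rate", "trust tension"],
)
print(option.signals_to_track)  # ['review latency', 'rework rate', 'trust tension']
```

Keeping every option in the same shape makes trade-offs visible side by side, which supports the comparison of options rather than premature convergence on a single solution.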
In line with Team Topologies principles, these options are not treated as fixed designs but as evolutionary steps, refined through fast feedback and repeated observation. The process is intentionally circular rather than linear: signals are revisited as work progresses, options are adjusted as new evidence emerges, and only the data that meaningfully informs decisions is retained. This discipline prevents overwhelm by distinguishing signal from noise, ensuring just enough understanding to move forward without slowing the organisation with exhaustive analysis.
The outcome is not a roadmap, reorganisation, or tooling strategy, but a set of justified decisions: where AI should be applied next, where it should not yet be applied, what prerequisite work would be required to proceed safely, and where the organisation may consciously choose not to change.
One-line takeaway
AI lowers the cost of action in the SDLC, but raises the cost of poor decisions – making signal-driven, option-based, and reversible change essential.
IM4
This paper uses IM4 as a shared reference model for understanding how work flows through the software delivery lifecycle: from Imagine (shaping intent and making sense of the problem), through Model (testing assumptions and defining structures), Make (building and automating), Move (releasing and observing in real use), and Maintain (learning and evolving over time). IM4 is not treated as a linear process or target state, but as an analytical lens that helps surface where activities actually occur today, where boundaries blur under acceleration, and where AI changes the cost and timing of work across the lifecycle.
IM4 is used here purely as an analytical reference for observing how AI alters work across the SDLC. It is not presented as a new or alternative Equal Experts delivery methodology, but as a lens to support signal-driven decision-making in this context.
About the Authors
João Rosa
João has experience helping organisations move to flow-based operating models. He has worked with scale-ups, mid-caps, and enterprises, helping to bridge strategy and execution. As part of his consultancy practice, he applies principles and practices from Beyond Budgeting and Team Topologies, supporting leaders and their organisations to thrive in a complex world. Find out more about João’s work here.
Dave Sammut
Dave is a hands-on technical architect at Equal Experts, who bridges business and engineering to turn vision into execution. Working across all levels of an organisation, he drives clarity, reduces risk, and promotes technical excellence. He specialises in delivering AI solutions responsibly, using modern engineering practices and rapid iteration to integrate AI into real-world systems and deliver practical, data-driven value from day one. Connect with Dave on LinkedIn.