
Dominic Spinks

Software and Platform Engineer
Data & AI

January 28, 2026

Discipline is the shortcut: Lessons from our 60% speed improvement using AI

If you read my last blog about how we improved development time for a data platform by 60% using AI, you’ll know that our small experiment pairing engineers with an AI assistant turned out to be surprisingly effective. To recap: we didn’t re-invent engineering; we applied existing principles, added structure around AI, and the results spoke for themselves.

But speed is a double-edged sword. Once you realise AI can accelerate you, it’s tempting to go full throttle and see just how much it can do. That’s exactly when things start to fall apart, and time gets wasted on rewrite after rewrite. Whilst the first post was about possibility, this one’s about the other side of the equation: using AI with discipline.

AI as an accelerator, not a replacement for good engineering

I want to reiterate that we used AI as an accelerator. It helped us move faster, but only in the direction we were already facing.

We found this out the hard way. Copilot hallucinated outdated provider versions, ignored our Makefile mechanism for deployments and even started skipping confirmation steps, the very steps that keep a human in the loop. Left unchecked, AI relentlessly pushes you towards “vibe coding”, the opposite of disciplined engineering.

That’s when we doubled down on existing best practices:

  • Code reviews: Every AI-generated change still went through a human. AI can propose, but humans decide.
  • Pairing: We continued using the driver–navigator model to constantly share knowledge between engineers, only now our AI acted as a third silent partner, ready to suggest ideas but never to lead unsupervised.
  • Documentation: We let AI help maintain logs, plans, and architecture diagrams, but always under human review so they stayed readable and comprehensible for the whole team.

The lesson? AI doesn’t replace discipline; it rewards it. The more structure we gave it, the better its output became. When we got lazy, its quality dropped right alongside ours.

The hidden key: Understanding the context window

For those who haven’t bumped into the term yet, an AI’s context window is basically its thinking capacity. “An LLM’s context window can be thought of as the equivalent of its working memory,” says IBM. “It determines how long of a conversation it can carry out without forgetting details from earlier in the exchange.”

In our project, Copilot kept forgetting our rules after a few prompts. It stopped using the Makefile, ignored our guidelines, and started generating new files we didn’t ask for. It wasn’t being mischievous, it was just overloaded.

Our fix was simple but transformative:

  • We kept our files concise, only loading what was essential.
  • We logged each session so we could pick up cleanly next time, with a focus on very short summaries for older tasks and succinct bullet points for the most recent tasks.
  • We started new chats every few prompts. This was the single most effective thing we found we could do.

This reset process became a habit that kept our AI predictable and aligned with our rules, and gave us confidence in its output. That is to say, it made the final output far more consistent and repeatable, which is the closest to deterministic we can ask an LLM to be.

Clearing the context window worked because it re-anchored both the AI and ourselves in good practice. We re-loaded our current context in a minimal form, ensuring we stayed on track with the bigger picture while giving the AI some fresh thinking space. It’s engineering hygiene disguised as prompt management.
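For illustration, here’s roughly the shape one of our session log entries took. The file name and wording here are invented for this post; what mattered was the split between one-line summaries of older work and sharper bullets for the current task:

```markdown
<!-- session-log.md — illustrative sketch, not our exact file -->
## Older tasks (one-line summaries)
- Provisioned the storage module; deployed via the Makefile.
- Added input validation to the ingestion job.

## Most recent task
- Goal: add retry logic to the loader.
- Done: retry wrapper written and unit-tested.
- Next: wire it into the Makefile deploy target.
- Rules reminder: deploy via the Makefile; confirm before destructive changes.
```

Pasting something this small at the start of a fresh chat was usually enough to bring the AI back up to speed without burning through the context window.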

Discipline in practice: AI collaboration with existing practices

Here’s what we’ve learned works consistently, regardless of which AI tool you’re using, assuming you want to build on existing practices:

Start each session with structure

Define the problem statement and rules up front. (Our ai-collaboration-protocol.MD and problem-statement.MD files became the team’s source of truth for structure.)
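As a rough sketch (our actual files are team-specific, so treat every line here as an example rather than a prescription), a protocol file along these lines is enough to re-anchor each session:

```markdown
<!-- ai-collaboration-protocol.MD — illustrative sketch -->
# AI collaboration protocol
- All deployments go through the Makefile; never call cloud CLIs directly.
- Propose changes as a diff and wait for human confirmation before applying.
- Work on one resource, one module, or one test at a time.
- Append a short summary to the session log at the end of each task.
```

The value isn’t the specific rules; it’s that both the engineers and the AI start every session from the same short, written source of truth.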

Work in small, testable increments

Don’t ask AI to build an entire platform; ask it to deploy one resource, one module, one test. Test-driven development becomes significantly easier.
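To make that concrete, a “small, testable increment” can be as modest as one function and one test. This Python sketch uses invented names (`normalise_record` is purely illustrative); the point is that the test is written first and the AI is asked to make only that pass:

```python
def normalise_record(record: dict) -> dict:
    """Lowercase keys and strip whitespace from string values."""
    return {
        key.lower(): value.strip() if isinstance(value, str) else value
        for key, value in record.items()
    }

# The test defines the increment; the AI's job is just to satisfy it.
def test_normalise_record():
    raw = {"Name": "  Ada ", "Age": 36}
    assert normalise_record(raw) == {"name": "Ada", "age": 36}

test_normalise_record()
```

An increment this size is trivial to review, trivial to reject, and keeps the conversation with the AI tightly scoped.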

Keep the human in the loop

Review, reason, and decide. If you can’t explain what the AI just did, don’t ship it. Code development time is replaced by time spent reviewing and critically thinking about AI’s output, which is typically a much quicker loop if you’ve got an idea of what you’re trying to achieve.

Regularly clear the context window

If your AI starts “forgetting”, that’s your cue to reset. Many AI agents now let you query how much of the context window has been used (e.g. by executing /context in Claude Code).

Document as you go

AI is great at writing logs and summaries, so why not make it part of your workflow? Future you, your teammates and your AI agent can trace what happened and get up to speed easily.

The discipline is the shortcut

If there’s one takeaway, it’s this: the results don’t come from “using AI more” and unplugging your brain. They come from using AI with intention.

In our 60% improvement story, the success didn’t solely come from the model’s brilliance, it came from the discipline we built around it. The resets, the structure, the human oversight. These together were the multipliers. We’ve found that AI amplifies what’s already there. If you’re disciplined, it accelerates excellence. If you’re sloppy, it accelerates chaos.

Dom is a Technical Consultant at Equal Experts with over 10 years of experience in platform engineering and cloud architecture. He specialises in designing and building scalable, resilient systems and teams that prioritise and deliver key business value. Dom’s current focus is utilising generative AI in data platforms, with a focus on solving real business problems with measurable outcomes for accelerated delivery whilst keeping high-performing teams empowered.

