From Madness to Method with AI Coding

Marco Vermeulen

Software Engineer
September 4, 2025

From madness to method with AI coding: Part 1 – Meta-prompting

Our team at Equal Experts is always keen to explore the potential of new technologies – provided they create genuine business value for our customers. Organisations in our network know that generative AI can be applied to software development, but we’re increasingly noticing that people are frustrated and disappointed with the results they’re getting when coding with an AI assistant. They tell us how they end up in the infamous AI “doom loop” whilst wrestling to keep the LLM on track, and wonder: is AI-assisted coding really worth it? 

We’ve experienced the same frustrations, but we knew there had to be a way to help AI do a better job. Our industry has spent decades developing standard practices rooted in a rigorous engineering mindset, so we wanted to intentionally apply more method to the use of AI in coding. This blog is the first in a five-part series about what we’ve learned about how to make AI a true accelerator for software development.

So, you’ve experimented with AI-assisted coding and it wasn’t the fix-all solution you thought it might be. At this point, you need to take a step back and develop a unified approach that yields the highest-quality results: code that is virtually indistinguishable from your own best work. Here’s a step-by-step guide:

1. Establish a prompt template

Meta-prompting is the foundation of the approach we use; it’s the building block for everything else we will cover in this blog series. Everything begins with a prompt, so we start by instructing an LLM to generate its own prompt. We provide the LLM with guidelines regarding the general structure of the prompt and offer an example of what a good prompt looks like, using a prompt template. Here is an example of a template we’ve used on a pilot project with one of our customers:

# [Feature/System Name]

*Write a brief paragraph describing the high-level purpose and context. What problem does this solve?
What is the main objective? Keep this concise and focused on the "why" rather than the "how".*

## Requirements

*List the specific, measurable acceptance criteria that define when this feature is complete. These
should be testable and unambiguous. Think of these as your definition of done.*

- Requirement 1
- Requirement 2
- Requirement 3

## Rules

*Specify any rules files that should be included when working on this feature, i.e. any rules
relevant to implementing this slice. The rules files are usually found under the `rules/`
directory of this project.*

- rules/my-rules-1.md
- rules/my-rules-2.md
- rules/my-rules-3.md

## Domain

*If applicable, describe the core domain model using pseudo-code in a modern language like
TypeScript or Kotlin. Focus on the key entities, relationships, and business logic. This
section helps establish the mental model for the feature.*

// Core domain representation in markdown block

## Testing Considerations

*Describe how you want this feature to be tested. What types of tests are needed? What
scenarios should be covered? What are the quality gates?*

*Examples: unit test coverage requirements, integration testing adapters, acceptance
testing a workflow, performance benchmarks, etc.*

## Implementation Notes

*Document your preferences for how this should be built. This might include architectural
patterns, coding standards, technology choices, or specific approaches you want followed.*

*Examples: preferred design patterns, coding style, performance requirements, technology
constraints*

## Specification by Example

*Provide concrete examples of what the feature should do. This could be API request/response
examples, Gherkin scenarios, sample CSV representation, or user interaction flows. Make the
abstract requirements tangible.*

*Examples: Gherkin scenarios, JSON payload, UI mockups, CSV samples*

## Extra Considerations

*List important factors that need special attention during implementation. This often grows
as you discover edge cases or constraints during development. Think about non-functional
requirements, constraints, or gotchas.*

- Consideration 1
- Consideration 2
- Consideration 3

## Verification

*Create a checklist to verify that the feature is complete and working correctly. These
should be actionable items that can be checked off systematically.*

- [ ] Verification item 1
- [ ] Verification item 2
- [ ] Verification item 3

---
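As an illustration of the Domain section, a filled-in entry for a supplier feature might look like the sketch below. The entity names (`Supplier`, `ProductIncident`) are hypothetical, not taken from the real project:

```typescript
// Hypothetical domain sketch for a supplier feature; entity names are illustrative.
interface Supplier {
  ref: string;  // text identifier, e.g. "SUP-1234"
  name: string; // display name shown on the supplier page
}

interface ProductIncident {
  id: string;
  supplier: Supplier; // selected during the incident creation workflow
  description: string;
}
```

A few lines of pseudo-code like this are usually enough to establish the mental model without prescribing the implementation.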

You can save your template in the `prompts/` directory of your project. This is also where all your other prompts should live, perhaps numbered sequentially in the order you introduce them.
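A minimal sketch of that layout (the file names here are illustrative, not from the real project):

```shell
# Create the conventional directories and save the template under prompts/
mkdir -p prompts rules
printf '# [Feature/System Name]\n' > prompts/00-prompt-template.md
ls prompts
```

Keeping prompts versioned alongside the code means they evolve with the project, just like any other engineering artefact.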

2. Start meta-prompting

Now the scene is set for meta-prompting. In our client’s case, we wanted to write code using an AI-assisted approach that isolates each interaction and treats the LLM as a stateless function rather than a conversational partner, to minimise random effects. We view the prompt as a detailed imperative for the LLM to build something in a single operation: a “one-shot” prompt.

We start by opening a new chat window, where we detail everything we want the LLM to achieve in the next piece of work. Here’s an example:

Please generate a prompt that instructs an LLM to perform the following tasks:

* Implement a Supplier API at `/api/suppliers`
* Use this API to populate the appropriate supplier ref and name fields on the supplier page (2) 
  of the product incident creation workflow.

The API should serve an array of suppliers, with each supplier represented by a supplier reference
(text identifier) and a supplier name.

```
[
    { "SUP-1234": "Supplier 1" },
    { "SUP-1235": "Supplier 2" }
]
```

Further, the API should be a mock that returns 10 suppliers with realistic names for product suppliers.

Ensure that we have acceptance tests verifying that the supplier page renders the correct data
retrieved from our mock API. Ensure that all components on the page have appropriate unit tests.
Ensure that you are following our test rules.

Please follow the prompt structure closely as detailed in `prompts/00-prompt-template.md`. Write the
prompt as Markdown and place it under the `prompts` directory as `07-supplier-functionality.md`.

---

In our prompt, we include a reference to our template, then run it. The resulting prompt explains how the LLM should implement the feature in a single operation – faster and with greater accuracy than it would otherwise achieve. 
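For illustration, the one-shot run might produce a mock along these lines. This is a hypothetical sketch of what such a mock could look like, not the code the LLM actually generated on the project; the names (`getSuppliers`, `SUPPLIER_NAMES`) are invented:

```typescript
// Hypothetical mock Supplier API: returns 10 suppliers, each as a
// { reference: name } pair, matching the payload shape in the prompt.
type SupplierEntry = Record<string, string>;

const SUPPLIER_NAMES = [
  "Acme Components", "Globex Parts", "Initech Supplies", "Umbrella Logistics",
  "Stark Industrial", "Wayne Fabrication", "Tyrell Materials", "Cyberdyne Tooling",
  "Soylent Packaging", "Vandelay Imports",
];

export function getSuppliers(): SupplierEntry[] {
  // SUP-1234, SUP-1235, ... mirrors the example references in the prompt.
  return SUPPLIER_NAMES.map((name, i) => ({ [`SUP-${1234 + i}`]: name }));
}
```

Because the generated prompt pins down the payload shape, the supplier count, and the test expectations, the LLM has little room to drift when producing something like this.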

5 basic rules for effective meta-prompting

To get the best results with AI-assisted coding: 

  1. Start fresh – always open a new chat so the AI has no prior context.
  2. Provide a meta-prompt template – a structured guide for what a “good prompt” looks like (we keep ours in a project directory, versioned alongside code).
  3. Write your “rambly” request – in plain words (with plenty of detail and context), explain what you want the AI to build.
  4. Ask the AI to rewrite it – the model produces its own precise prompt based on the template.
  5. Run the generated prompt – the AI then produces the code in one clean pass.
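The five steps above reduce to a two-call loop, sketched below under the assumption of a hypothetical `complete(prompt)` function standing in for a stateless LLM call; no real client API is implied:

```typescript
// Sketch of the meta-prompting loop. `complete` is a hypothetical stand-in
// for a stateless LLM call; every invocation is a fresh context (rule 1).
type Complete = (prompt: string) => Promise<string>;

export async function metaPrompt(
  complete: Complete,
  template: string,       // the prompt template, kept in the repo (rule 2)
  ramblyRequest: string,  // your plain-words description of the work (rule 3)
): Promise<string> {
  // Rule 4: ask the model to rewrite the request as a structured prompt.
  const structuredPrompt = await complete(
    `Please generate a prompt that instructs an LLM to perform the following tasks:\n` +
    `${ramblyRequest}\n\nFollow this prompt structure closely:\n${template}`,
  );
  // Rule 5: run the generated prompt in a fresh, stateless call (one clean pass).
  return complete(structuredPrompt);
}
```

Treating the model as a pure function of its prompt is the whole trick: no accumulated chat state, so no conversational drift.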

Conclusion

The effect is powerful: given the right context, the AI is better at writing prompts than we are. By applying engineering discipline to prompt design, we reduce randomness, eliminate conversational drift, and get to production-quality code more quickly. We also reduce frustration for engineering teams, and remove much of the uncertainty about whether AI is a net benefit or a distraction.

Meta-prompting is just one building block in a broader framework we’re developing at Equal Experts. We’ll be sharing more of this method in upcoming posts. If you’d like to explore how this methodology could transform software delivery in your organisation, get in touch. 

 

Theory is good, but practice is better. Here’s a quick demo showing meta-prompting applied in real coding.
