Marco Vermeulen

Software Engineer
AI

January 12, 2026

From madness to method with AI coding: Part 4 – Corrective Actions

If you’ve been following our series on AI-assisted coding, you’ll be familiar with the idea of progressive precision refinement in AI-enabled software development, a process where you plan your approach and start your trajectory toward a target, then make increasingly fine course corrections as you close in. Like a golfer approaching the green, the better your initial swing, the fewer adjustments you’ll need at the end.

This post explores the next stage of corrective actions – the equivalent of refining your shot on the green – where precision and discipline matter most.

From driving shot to putter: Switching modes for the final stretch

Staying with our golfing analogy, the driving shot represents the first active phase of development – the powerful swing that gets you close to your goal. Now it’s time to land the ball in the hole, so you switch to your putter to make small, precise corrections.

As before, we want to bring methodology to this goal. At this stage, we don’t want to engage in lengthy conversations with an AI agent. Instead, we apply a rigorous, repeatable process that replaces open-ended prompting with a set of clear imperatives that yield predictable outcomes. We do this by compiling a TODO list.

Plotting the course

We ended our last post by committing a complete generation of source code with an acceptable number of imperfections. Our next task is to scrutinise this code, reviewing every single line. Resist the urge to start fixing things straight away; instead, take a disciplined approach, peppering the generated code with TODO comments. For example:

// TODO: Introduce a new DownloadResponse class in the rest package
// that prevents the DownloadInfo domain class from leaking to the presentation layer

Or

// TODO: Rename this to AuditCommand

Or even

// TODO: DO NOT use Thread.sleep! Use an await retry strategy for these tests instead!
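
That last comment deserves a concrete illustration. Here’s a minimal sketch of the pattern it’s asking for, assuming the Awaitility library; the polled status flag is a hypothetical stand-in for whatever asynchronous state the real test observes:

```kotlin
import org.awaitility.Awaitility.await
import java.time.Duration
import java.util.concurrent.atomic.AtomicReference

// Hypothetical stand-in for the asynchronous state a real test would observe.
val status = AtomicReference("PENDING")

fun main() {
    // Simulate background work finishing at an unpredictable moment.
    Thread { Thread.sleep(300); status.set("COMPLETED") }.start()

    // Instead of a fixed Thread.sleep, poll until the assertion passes,
    // failing only if it never holds within the timeout.
    await()
        .atMost(Duration.ofSeconds(5))
        .pollInterval(Duration.ofMillis(50))
        .untilAsserted { check(status.get() == "COMPLETED") }
}
```

The test then completes as soon as the condition holds, rather than always paying the full sleep penalty.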

You should spend a fair amount of time on this stage, because the more thorough you are about clarifying your intent in those TODO comments, the more successful the subsequent steps will be. You’ve probably already guessed it, but we’re leaving a trail of meta-prompts in the form of hints for the LLM about what’s wrong with the code. The more explicit you are, and the more you tell it what you want, the more context it will have to make the right changes. An LLM can do many things, but reading your mind is not one of them!
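
To see why explicitness pays off, compare these two invented comments on the same piece of code; the second hands the model the problem, the desired change, and the constraint in one go:

```kotlin
// Too vague - the model has to guess what "fix" means here:
// TODO: fix this response

// Explicit - names the problem, the desired change, and the constraint:
// TODO: Return a new DownloadResponse from the rest package here instead of the
// DownloadInfo domain object; keep the JSON field names identical so the API
// contract does not change
```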

Once you’ve added TODO comments to everything that bothered you about the initial code generation, commit them to Git – it’s time to apply a template.

Meta-prompting for a precision road map

Our template instructs the LLM to search for every TODO comment we scattered about our codebase, group them logically, and compile a master TODO file. Specifically, we ask it to:

  • Search our entire codebase for TODO comments
  • Group all related comments by topic
  • Compile a comprehensive task list organised by topic
  • Compose each task in that list as a mini-prompt in its own right
  • Reference the affected files for each prompt
  • Assign a neat checkbox to each task, allowing us to keep track of what has been completed

Here is the template I use:

# Corrective actions

I have generated a one-shot feature in the application using a single prompt. I have reviewed the feature and found some issues.
I have marked each issue with a TODO comment.

I would like you to review the source code in this repository and read all the TODO comments I have added.
Group these comments into logical tasks that we can tackle as small units of work. Sometimes these tasks
could span multiple TODO comments across different files. At this point, only do the grouping _without_
writing anything to a file.

Then I want you to generate a new document called `TODO.md` in the root directory of this project. 
This file should contain only the list of logical tasks that you compiled in the previous step. 
Each entry in this file should contain the following details:

* A checkbox to indicate if the task is done or not
* The headline of the task
* A description of what the task entails
* A short prompt that instructs an LLM to address this task
* A list of affected files

Here is an example of what each item in the TODO list should look like:

```markdown
### Task 1: Update Database Schema Column Types

- [ ] Convert VARCHAR columns to TEXT in audit table schema

**Prompt**: Update the Flyway migration script `V1__Initial_audit_table.sql` to change all VARCHAR column definitions to TEXT type.
The current schema uses VARCHAR with specific length constraints (VARCHAR(50), VARCHAR(100), etc.), but these
should be changed to TEXT for better flexibility and consistency with PostgreSQL best practices.

**Files affected**:
- `src/main/resources/db/migration/V1__Initial_audit_table.sql`
```

## IMPORTANT!

* Each entry in `TODO.md` should be structured as a prompt that we can use to generate the code to complete
that task. The prompt must have all the relevant details and context for an AI agent to complete the entire task.
* Each prompt should be followed by the complete list of files that are affected.
* Each task should be scoped correctly so it will be completed in a single shot.
* Do not make up your own TODOs! Only use the ones provided in the source code!

## Execution plan

Please add the following section to the top of the TODO.md, referencing any relevant rules:

```markdown
Consider the following rules during execution of the tasks:
- rules/rule1.md
- rules/rule2.md
- rules/rule3.md
```

Please add the following section to the end of the TODO.md:

```markdown
## Execution plan workflow

The following workflow applies when executing this TODO list:
- Execute only the **SPECIFIED TASK**
- Implement the task in **THE SIMPLEST WAY POSSIBLE**
- Run the tests, format the code, and perform static analysis if relevant.
- **Ask me to review the task once you have completed it, and then WAIT FOR ME**
- Mark the TODO item as complete with [X]
- Commit the changes to Git when I've approved and/or amended the code
- **STOP and await further instructions**
```

I save this file at the top of my prompts directory, naming it 00-todo-template.md, and let the agent loose on my code:

Please execute the following prompt on my entire codebase: prompts/00-todo-template.md

Within a few seconds, I get a comprehensive TODO list, with each meta-prompt representing one small, controlled ‘putt’ towards the final goal. Here is a short extract of what mine looks like:

- [ ] **Task 2: Extract Business Logic from Service to Domain Models**

**Description**

The VersionService contains business logic that should be moved to the domain models to follow 
Domain-Driven Design principles. The platform determination logic for audit entries should be extracted 
to a well-named method on the Version domain model. Additionally, the conversion logic from 
AuditRequest/AuditCommand to Audit should be moved to the command object as a toAudit() method.

**Prompt**: Extract the platform determination logic currently in the VersionService 
(lines around TODO comment about actualDist) into a well-named method on the Version domain model, 
such as `getEffectivePlatform(requestedPlatform: Platform)`. Also move the audit conversion logic from the 
`createAuditEntry` method to a `toAudit()` method on the AuditCommand class. Update the VersionService to use
these new domain methods.

**Files affected**:

- `src/main/kotlin/io/sdkman/broker/application/service/VersionService.kt`
- `src/main/kotlin/io/sdkman/broker/domain/model/Version.kt`

---

- [ ] **Task 3: ...**
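
To make the outcome of a task like Task 2 tangible, here’s a rough sketch of where that logic might end up. The class shapes below are invented for illustration (the real Version and AuditCommand models will differ), but the placement of the two methods is the point:

```kotlin
// Hypothetical shapes - only the placement of the two methods matters here.
enum class Platform { LINUX_X64, DARWIN_ARM64, UNIVERSAL }

data class Audit(val candidate: String, val version: String, val platform: Platform)

data class Version(val candidate: String, val version: String, val platform: Platform) {
    // Task 2: platform determination moved out of VersionService.
    // A universal artifact serves every platform; otherwise honour the request.
    fun getEffectivePlatform(requestedPlatform: Platform): Platform =
        if (platform == Platform.UNIVERSAL) Platform.UNIVERSAL else requestedPlatform
}

data class AuditCommand(val candidate: String, val version: String, val platform: Platform) {
    // Task 2: conversion logic moved here from VersionService.createAuditEntry.
    fun toAudit(): Audit = Audit(candidate, version, platform)
}
```

With the behaviour on the domain objects, the VersionService shrinks to pure orchestration, which is exactly where the TODO comments were steering it.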

Sticking with the methodology

Every successful process needs discipline. We now scrutinise each line of the generated TODO.md, making any changes we deem appropriate to help guide the LLM in executing our individual prompts. If you used my template, you’ll see that it embeds a simple but strict execution plan that ensures consistent, high-quality delivery:

## Execution plan

Please add the following section to the top of the TODO.md, referencing any relevant rules:

```markdown
Consider the following rules during execution of the tasks:
- rules/rule1.md
- rules/rule2.md
- rules/rule3.md
```

Please add the following section to the end of the TODO.md:

```markdown
## Execution plan workflow

The following workflow applies when executing this TODO list:
- Execute only the **SPECIFIED TASK**
- Implement the task in **THE SIMPLEST WAY POSSIBLE**
- Run the tests, format the code, and perform static analysis if relevant.
- **Ask me to review the task once you have completed it, and then WAIT FOR ME**
- Mark the TODO item as complete with [X]
- Commit the changes to Git when I've approved and/or amended the code
- **STOP and await further instructions**
```

This execution plan might sound rigid, but it’s what transforms LLM-assisted development from an art into a repeatable engineering discipline.

‘Putting’ it all together

With our TODO list and rules in place, we can now progress through each task one by one.

Please execute Task 1 in TODO.md

The agent goes off and does its work, runs some checks, then pauses for me to review what it has done. Once I’ve reviewed and approved the changes, they are committed to Git in a single commit, along with the updated TODO list showing the task as complete.

I now issue the same prompt for each subsequent task in the TODO list – each time opening a new agent window with a fresh context – and keep doing this until I’ve worked through my entire list.

Expressing the flow visually can help us to fully understand and memorise the complex procedure we followed above. I’ve marked all human steps in colour, and the agent’s steps in white. I’d suggest using this as a reference while you are internalising the process:

A visual flowchart of the process, from the driving shot through to the completion of the corrective actions, as described throughout this post.

Finished on par

We’ve come a long way using an iterative two-stage approach backed by solid engineering practice. The initial “driving shot” has evolved through a series of disciplined “putts” into a production-ready feature, fit for review by the rest of your team.

In our next post, we’ll explore the uncertainties and gotchas – the tricky situations that can throw your game off just when you think you’ve mastered the green.

Until then, if you’ve tried this process yourself or have refinements to share, drop me a message. We’re all still learning how to make AI a trusted part of our SDLC toolkit, and every iteration helps us play a better round.

About the author

Marco is a Principal Consultant at Equal Experts with over 20 years of experience in backend development on the JVM. A specialist in functional programming and distributed systems, he is the co-author of Functional Programming in Kotlin (Manning Publications) and the creator and maintainer of SDKMAN!, a widely adopted tool for managing parallel versions of Software Development Kits.

At Equal Experts, he focuses on the intersection of disciplined engineering and emerging technology, currently exploring the practical application of Generative AI within the software development lifecycle. His work is defined by a commitment to well-crafted, maintainable code and a pragmatic approach to solving complex technical challenges at scale.

Disclaimer

This blog is a record of our experiments and experiences with AI. It reflects what we tried, learned, and observed, but does not represent Equal Experts’ official practices or methodologies. The approaches described here may not suit every context, and your results may vary depending on your goals, data, and circumstances.
