From madness to method with AI coding: Part 2 – Guardrails
In part 1 of our series on applying methodology to AI, we talked about using meta-prompting as a way to get closer to the mark – quicker – when using AI-assisted coding. Think of every AI-generated solution as a golf shot: you tee off, sometimes landing the ball neatly on the fairway, sometimes in the rough. If you’re lucky, you’re approaching the green, where a few corrective strokes with your putter quickly get you to your desired result. Prompting in the software development life cycle works the same way: a gradual narrowing-in through progressive refinement.
Before we even tee off, we need to know the boundaries of the fairway and the green, so we can aim to land the ball there. This is where guardrails come in. Guardrails – or rules, as they’re known in software development – give the LLM a set of guidelines: a system context for every prompt we issue. So, what do we consider the “fairway” when we’re building a feature? What system context can we provide to bring us closer to our mark?
How to use guardrails in AI-assisted coding
The trick is to carefully craft rules that are relevant to the project you’re working on. For instance, one engineer’s rules in a Node.js frontend app will look completely different to those in a Kotlin Ktor app. So, what do we include in the rules of our projects? And what format should they have? How do we write them? How long should they be? We’ll go through these questions one by one.
Why guardrails matter
AI coding assistants are powerful, but they don’t know your rules by default. LLMs have been trained on code that reflects every opinion and pattern on the internet. If you want them to follow your way of working, you need to be explicit.
For example, when I’m writing code, I care about concepts like Domain-Driven Design, Hexagonal Architecture, outside-in testing and functional programming. So, I capture each of these values as a rule in the project. (As an example, here is a rules directory I use for my Kotlin project.) Your team’s rules will differ, but the principle is the same: define what matters to you and give the LLM that context up front.
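As an illustration, such a rules directory might be laid out with one focused file per value (the file names here are hypothetical, not the author’s actual directory):

```text
rules/
├── domain-driven-design.md
├── hexagonal-architecture.md
├── outside-in-testing.md
└── functional-programming.md
```

Splitting the rules into one file per concern keeps each rule set short, and lets you attach only the relevant ones as context for a given prompt.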
Rule formation
So, what makes a good rule? Firstly, the less verbose the rules are, the better. In my experience, rules should never exceed ~200 lines – the longer they get, the more likely the LLM is to veer off into the rough. Meta-prompting (from part 1 of this series) helps here. For example, I asked the LLM:
Please write me a rule in markdown to be used by an LLM that describes Domain Driven Design by Eric Evans in under 200 lines.
To keep rules consistent, I use a simple template, the same way I do with prompts. Here it is:
# [Rules Topic/Domain Name]
*Brief description of the rules domain and its scope. What aspect of development, architecture, or method do these rules
govern? Keep this concise and focused on the purpose and boundaries of these rules.*
## Context
*Provide the situational context where these rules apply. This helps the LLM understand when and why to apply these rules.*
**Applies to:** [Project types, layers, components, or scenarios where these rules are relevant]
**Level:** [Strategic/Tactical/Operational - helps prioritize rule application]
**Audience:** [Developers/Architects/Product Team - who should follow these rules]
## Core Principles
*List the fundamental principles that underpin all the detailed rules. These are the "why" behind the rules and help
with decision-making when specific rules don't cover a scenario.*
1. **Principle Name:** Brief explanation of the principle and its importance
2. **Principle Name:** Brief explanation of the principle and its importance
3. **Principle Name:** Brief explanation of the principle and its importance
## Rules
### Must Have (Critical)
*Non-negotiable rules that must always be followed. Violation of these rules should block progress.*
- **RULE-001:** Description of the rule with clear, actionable guidance
- **RULE-002:** Description of the rule with clear, actionable guidance
- **RULE-003:** Description of the rule with clear, actionable guidance
### Should Have (Important)
*Strong recommendations that should be followed unless there's a compelling reason not to.*
- **RULE-101:** Description of the rule with clear, actionable guidance
- **RULE-102:** Description of the rule with clear, actionable guidance
- **RULE-103:** Description of the rule with clear, actionable guidance
### Could Have (Preferred)
*Best practices and preferences that improve quality but are not blocking.*
- **RULE-201:** Description of the rule with clear, actionable guidance
- **RULE-202:** Description of the rule with clear, actionable guidance
- **RULE-203:** Description of the rule with clear, actionable guidance
## Patterns & Anti-Patterns
### ✅ Do This
*Concrete examples of what good implementation looks like*
// Example of good practice in code block
// Clear, concise code example
### ❌ Don't Do This
*Concrete examples of what to avoid*
// Example of anti-pattern in code block
// Clear example of what not to do
## Decision Framework
*Provide guidance for making decisions when rules conflict or when faced with novel situations*
**When rules conflict:**
1. Step 1 for resolution
2. Step 2 for resolution
3. Step 3 for resolution
**When facing edge cases:**
- Guideline 1
- Guideline 2
- Guideline 3
## Exceptions & Waivers
*Define when and how these rules can be broken*
**Valid reasons for exceptions:**
- Reason 1 (with approval process if needed)
- Reason 2 (with approval process if needed)
- Reason 3 (with approval process if needed)
**Process for exceptions:**
1. Document the exception and rationale
2. [Additional approval steps if needed]
3. [Time-bound review if applicable]
## Quality Gates
*Define how adherence to these rules should be verified*
- **Automated checks:** What can be validated through tooling
- **Code review focus:** What reviewers should specifically look for
- **Testing requirements:** How rule compliance should be tested
## Related Rules
*Reference other rules files that complement or interact with these rules*
- `rules/related-rules-1.md` - Brief description of relationship
- `rules/related-rules-2.md` - Brief description of relationship
- `rules/related-rules-3.md` - Brief description of relationship
## References
*Links to external resources, standards, or documentation that inform these rules*
- [Resource 1](url) - Brief description
- [Resource 2](url) - Brief description
- [Resource 3](url) - Brief description
---
## TL;DR
*Ultra-concise summary of the most critical rules and principles. This section should be scannable in under
30 seconds and capture the essence of all rules above.*
**Key Principles:**
- Principle 1 in one sentence
- Principle 2 in one sentence
- Principle 3 in one sentence
**Critical Rules:**
- Must do X
- Must not do Y
- Always ensure Z
You can now open a new chat with your LLM (starting fresh, so there’s no previous context) and provide it with the prompt above, together with the template. In my case, it resulted in this Domain Driven Design rules file. Once you have your template, you can generate clean, uniform rules that are easier for an LLM to follow. (Here’s the meta-prompt I used to generate this template.)
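To make the template concrete, here is a short, hypothetical excerpt of what a filled-in rule file could look like (the rule wording below is illustrative, not taken from the author’s actual Domain Driven Design rules file):

```markdown
# Domain-Driven Design

*Rules for modelling the business domain in code. Applies to the domain layer only.*

## Rules

### Must Have (Critical)

- **RULE-001:** Use the ubiquitous language of the domain in all class, function and variable names.
- **RULE-002:** Keep domain entities free of persistence and framework concerns.
```

Notice how each rule is a single, actionable sentence – that’s the level of concision that keeps the whole file well under the 200-line budget.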
Guardrails keep you on the fairway
By applying guardrails, you’ll spend less time untangling the LLM’s ‘creative interpretations’ and more time creating value. Remember, keep your rules:
Concise (under ~200 lines, to keep the AI focused).
Consistent (shared structure across rulesets).
Customisable (different projects will have different guardrails).
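The ~200-line budget is easy to enforce automatically. As a minimal sketch (the `rules/` directory and `.md` extension are assumptions about your layout, not prescribed by the article), a small Python check could flag oversized rule files before they creep past the limit:

```python
from pathlib import Path

MAX_LINES = 200  # beyond this, rules tend to send the LLM into the rough


def oversized_rules(rules_dir: str) -> list[str]:
    """Return the names of rule files that exceed the line budget."""
    return [
        path.name
        for path in sorted(Path(rules_dir).glob("*.md"))
        if len(path.read_text(encoding="utf-8").splitlines()) > MAX_LINES
    ]
```

Run as a pre-commit hook or CI step, a check like this stops the “concise” guardrail from silently eroding as rules accumulate.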
Guardrails don’t just reduce frustration; they set you up for success in the subsequent steps. With clear rules in place, your LLM is more likely to stay on track and get you closer to the outcome you want. In other words: more fairway, less time hacking your way out of the bushes.
If you’d like help defining your own guardrails for AI-assisted coding, get in touch here.
In the next part of this series, we will move ahead with our golf analogy and examine teeing off: the Driving Shot.
Watch Marco’s video to see guardrails in AI-assisted coding in practice
About the author
Marco Vermeulen is a Principal Software Engineer at Equal Experts
The views shared in this blog reflect the personal experiences of the author and they do not represent Equal Experts’ official practices or methodologies.