Paul Brabban
Data Engineer

October 29, 2025

AI-assisted threat modelling

AI-assisted threat modelling offers a practical way to unblock yourself or your teams, scale your infosec expertise, and ultimately build more secure systems, faster.

In the foreword to the recently published 2025 annual review from the UK’s National Cyber Security Centre (NCSC), CEO Richard Horne said:

“The recent cyber attacks must act as a wake-up call. The new normal is that cyber criminals will target organisations of all sizes, operating in any sector… nearly half of all incidents handled by the NCSC over the last 12 months were of national significance. And 4% of these were categorised as ‘highly significant’ – attacks which we define as ‘having a serious impact on central government, UK essential services, a large proportion of the UK population, or the UK economy’. That marks a 50% increase in highly significant incidents for the third consecutive year.”

This is a stark reminder that building resilient systems is not optional. Every successful attack fuels a thriving industry of criminals and hostile powers. For leaders, the question is: how can we embed security practices at scale without slowing down delivery?

A key building block for cyber resilience, recommended by the NCSC, CISA, NIST and others, is threat modelling. It’s a structured process for looking at a system from an attacker’s perspective, allowing teams to find and fix security flaws on the whiteboard, not in production. The benefits are clear: less last-minute rework, smoother interactions with security teams, more robust systems, and fewer easy targets for threat actors.

So why isn’t every team doing it?

The scalability bottleneck

In my experience working with organisations of all sizes, the problem isn’t a lack of motivation. Information security teams are usually the biggest advocates for threat modelling, but they can’t scale to support every development team. Well-intentioned delivery teams are left staring at a blank page, unsure how to start and unable to sustain the practice without an expert in the room to guide them.

What if we could provide that guidance at scale, using tools your teams already have?

Hands-on with a practical demonstration

I’ve been experimenting with a pragmatic approach that does just that. By providing a generic AI chatbot with a clear set of instructions, we can turn it into a “threat modelling coach” that guides a developer or a team through the process. It asks useful questions, injects insights from the content it was trained on, structures the conversation, and documents the output. You don’t need new infrastructure, new procurement, or new additions to your supply chain.
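To make the idea concrete, here’s a minimal sketch of how such a coach could be wired up programmatically. It assumes the official OpenAI Python client and an OpenAI-compatible chat model; the coaching instructions are illustrative placeholders, not the prompts from the repository linked at the end of this post.

```python
# Minimal sketch: turning a generic chat model into a threat modelling coach.
# Assumes the official OpenAI Python client with OPENAI_API_KEY set in the
# environment. The instructions below are illustrative placeholders.
from openai import OpenAI

COACH_INSTRUCTIONS = """\
You are a threat modelling coach. Guide the team through four questions:
1. What are we working on? Ask about components, data flows, trust boundaries.
2. What can go wrong? Suggest candidate threats, using STRIDE as a prompt.
3. What are we going to do about it? Propose possible mitigations.
4. Did we do a good job? Summarise open questions for human review.
Ask one question at a time and document answers as you go. Do not
prioritise risks or assign actions; leave those decisions to the team.
"""

client = OpenAI()
history = [{"role": "system", "content": COACH_INSTRUCTIONS}]

def ask_coach(message: str) -> str:
    """Send one conversational turn and keep the running transcript."""
    history.append({"role": "user", "content": message})
    response = client.chat.completions.create(model="gpt-4o", messages=history)
    answer = response.choices[0].message.content
    history.append({"role": "assistant", "content": answer})
    return answer

print(ask_coach("We're building a payments API behind an API gateway."))
```

In a plain chat interface, you get the same effect by pasting the instructions as your first message; the wrapper above just makes the pattern repeatable.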

In the video below, I walk through exactly how I build and use an AI threat modelling coach, along with tips for staying in control of the process. I also share how I review the output, how I work around common problems, and some unique advantages that come from using LLM technology to support the threat modelling process.

From bottleneck to empowerment

As the video demonstrates, this isn’t about replacing security experts. It’s about empowering your delivery teams, reducing cognitive load and making the best use of your specialists’ time.

  • I’m able to bootstrap a threat model in a couple of hours, a task that would have taken much longer before. We can do it as a team, or I can do it alone, bringing the team something compelling to engage with instead of scheduling workshops and starting with a blank page and a reading list.
  • Once we have a threat model, the team can give it to the coach and look for additional threats and mitigations, as sketched in the example after this list. We can proactively examine how new features or changes we’re considering affect the security posture, and we can create new models for aspects that we pushed out of scope. AI assistance lowers the cognitive load, making it easier to incorporate threat modelling efficiently into your development lifecycle.
  • We don’t need specialised or expensive tooling. It runs in the same AI chat interfaces that many of your teams are already using, lowering the barrier to entry without adding new supply chain risks.
  • Infosec teams can get involved in the process, advising, iterating and shaping the AI coach with their insight and expertise. The coach can scale their experience to every team, directly assisting the development process and referring more challenging or risky aspects back to the human experts.
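As a sketch of that iteration loop, the hypothetical `ask_coach` helper from the earlier example could review an existing model when a new feature is on the table:

```python
# Illustrative follow-up turn, reusing the hypothetical ask_coach helper
# from the earlier sketch. The file name is a placeholder for wherever
# your team keeps the documented output of a previous session.
with open("threat-model.md") as f:
    existing_model = f.read()

print(ask_coach(
    "Here is our current threat model:\n\n" + existing_model + "\n\n"
    "We're adding a webhook endpoint for payment notifications. "
    "What new threats and mitigations should we consider?"
))
```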

A pragmatic and responsible approach

As with any other use of AI, care is needed.

  • Handle sensitive data with care. Team members must comply with your confidentiality policies: no one should paste confidential client information or intellectual property into a public AI model without explicit permission. In the absence of approved tools, the coach can still provide useful output from generalised, safe descriptions of a system’s architecture (for example, “a web front end calling an internal API that stores customer records in a managed database”).
  • Assistance and coaching, not delegation. As with any AI interaction, humans still need to take responsibility for what is produced. These prompts are designed specifically to use AI as a coach, supporting and empowering the humans in the room throughout the process.
  • It’s not a replacement for teamwork. The prompting intentionally stops before prioritising risks or assigning actions. Those decisions require the right stakeholders to be in the room.

This is about making that human collaboration more effective, not eliminating it.

“Threat modelling is the single most valuable security activity that is still missing from most software development lifecycles. It has been proven to help teams build more secure systems, empower and educate engineers, and support rapid, frequent software releases.

The biggest obstacle to adoption has always been a shortage of resources; few organisations have enough skilled security professionals to lead and mentor engineers conducting regular threat modelling exercises.

Thanks to LLMs, I think we’re now at an inflection point. For the first time, engineers can have immediate, on-demand access to expert-level guidance on threats. With standardised prompts and targeted model training, we can deliver consistently valuable threat models at virtually unlimited scale.”

– Chris Rutter, Principal Security Consultant at Equal Experts

Get started today

The challenge laid out by the NCSC is significant, but the tools to meet it are evolving. This AI-assisted approach is valuable for everyone, from lone open-source maintainers to the largest enterprises, and supports pragmatic, sustainable threat modelling from the smallest detail to the most complex system.

The prompts are available in this GitHub repository. They are the culmination of several iterations, but they’re still just a starting point for you to build on. Share what works, so that we can all build more secure systems without sacrificing autonomy or speed.

Disclaimer

This blog is a record of our experiments and experiences with AI. It reflects what we tried, learned, and observed, but does not represent Equal Experts’ official practices or methodologies. The approaches described here may not suit every context, and your results may vary depending on your goals, data, and circumstances.

About the authors

Paul Brabban is a lead consultant with Equal Experts, bringing over 24 years of experience in demanding engineering and leadership roles. He specialises in solving complex data-intensive problems at scale with lean, cost-effective methods and has a relentless focus on value. Paul’s experience covers six-person startups to multinationals in multiple industries including retail and financial services. He provides technical leadership on data strategy and execution, engaging with stakeholders up to director and C-suite level. Alongside Equal Experts, he shares his experience at tempered.works.

Chris Rutter is a principal consultant with Equal Experts, specialising in secure delivery. He brings over 13 years’ experience helping teams to design, build and run secure software systems. Chris has helped startups, scale-ups and enterprises across several industries to transform their security processes and capabilities and achieve rapid, secure and compliant software delivery.
