AI-assisted discovery in legacy systems: architecture diagrams, user journeys, and repo documentation

Aditya Goyal


September 9, 2025

Accelerating discovery with AI: A developer’s shortcut to system understanding

What happens when a delivery team experiments with AI to solve a real discovery challenge? At Equal Experts, we’re always looking for ways to improve how we work — and that includes learning what new tools can (and can’t) do in practice. In this article, we share how one team used generative AI to make sense of 40+ legacy repositories, map user journeys, and visualise system architecture — all in a matter of days. If you’re working in a complex environment, here’s how AI might help you move faster and build confidence sooner.

Written in collaboration with Munish Malik

Why we tried this: Too many repos, too little time

You’ve probably been there: dropped into a complex legacy system, dozens of repositories, sparse documentation, and every question gets the same answer — “we need to investigate.”

In traditional discovery, the path forward is clear but time-consuming. Workshops, interviews, endless code reading. But when deadlines loom, the real question becomes: is there a faster way to understand the system well enough to start delivering?

That’s what our team set out to test — whether AI could accelerate the discovery phase and give us something meaningful to build on. What follows isn’t a theoretical pitch. It’s a practitioner’s play-by-play of using LLMs, diagrams, and structured prompts to understand a tangled system faster, without cutting corners.

The AI-accelerated discovery methodology

Step 1: Repository-level AI analysis

We started with over 40 repositories across two systems. Instead of wading through them manually, we pointed LLMs at the source and asked for:

  • A business overview of each service
  • A technical summary of what it does
  • How it validates or modifies data
  • Key algorithms and workflows
  • Data model documentation
  • Entity relationships
  • API contract breakdowns

We used structured prompting techniques to generate these outputs consistently, then used docsify to turn them into a searchable knowledge base (a sketch of the approach follows below). So if someone wanted to understand where a certain piece of business logic lived, they could search for the term and land on the right repo instantly.
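
To make this concrete, here’s a minimal sketch of the kind of script involved, written against the anthropic Python SDK. The paths, the Java file glob, the model string, and the exact section headings are illustrative assumptions rather than our production setup; large repos would need chunking, or an agentic tool such as Claude Code, instead of naive concatenation.

```python
# Sketch: one structured prompt per repo, saved as markdown for a docsify
# site. Paths, the Java glob and the model string are illustrative only.
from pathlib import Path

import anthropic  # pip install anthropic; reads ANTHROPIC_API_KEY

SECTIONS = [
    "Business overview", "Technical summary",
    "Data validation and modification", "Key algorithms and workflows",
    "Data model", "Entity relationships", "API contracts",
]

PROMPT_TEMPLATE = """You are documenting a legacy service for a new team.
Analyse the source below and answer under EXACTLY these markdown headings,
in this order: {sections}.
Quote file paths as evidence. If a section does not apply, say so
explicitly rather than guessing.

<source>
{source}
</source>"""

client = anthropic.Anthropic()

def summarise_repo(repo: Path, docs_root: Path) -> None:
    # Naive context building: concatenate a handful of files (a Java
    # estate is assumed here). Big repos need chunking or an agentic tool.
    source = "\n\n".join(
        f"// {p}\n{p.read_text(errors='ignore')}"
        for p in sorted(repo.rglob("*.java"))[:20]
    )
    message = client.messages.create(
        model="claude-sonnet-4-20250514",  # any capable model will do
        max_tokens=4096,
        messages=[{"role": "user", "content": PROMPT_TEMPLATE.format(
            sections=", ".join(SECTIONS), source=source)}],
    )
    (docs_root / f"{repo.name}.md").write_text(
        f"# {repo.name}\n\n{message.content[0].text}\n")

for repo in Path("repos").iterdir():
    if repo.is_dir():
        summarise_repo(repo, Path("docs"))
```

The fixed heading list does the heavy lifting here: because every repo is summarised under the same sections, searching the docsify site behaves predictably across all 40+ services.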

Result: A fast, searchable documentation base for the entire codebase.

Step 2: Journey mapping across real scenarios

Once we had service-level visibility, we pushed further: tracing actual business scenarios end-to-end. With AI’s help and a few prompt iterations, we built journey maps that integrated:

  • Customer-facing experiences
  • System-level processes
  • Component interactions
  • Sequence diagrams
  • A working domain glossary

We could now see how customer interactions flowed through the system, how various business rules were applied, and how each event was ultimately processed. Not only did this make system behaviour visible, but it also helped us quickly build a shared understanding of the domain.
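
For a flavour of what those prompt iterations converged on, here’s a hedged sketch. The scenario name, the docs/ layout, and the wording are invented for illustration; the real point is the pattern of layering step 1’s outputs into the next prompt.

```python
# Sketch: reuse the step-1 summaries as context for journey mapping.
# The scenario and the docs/ layout are hypothetical.
from pathlib import Path

summaries = "\n\n".join(p.read_text() for p in Path("docs").glob("*.md"))

journey_prompt = f"""Using the service summaries below, trace the scenario
"customer submits a claim" end to end. Produce:
1. The customer-facing steps.
2. The system-level process behind each step, naming the services involved.
3. A Mermaid sequence diagram of the component interactions.
4. A glossary of the domain terms you used, with one-line definitions.
Where the summaries leave you unsure, flag the gap instead of inventing it.

{summaries}"""
# Send journey_prompt via the same client.messages.create call as before.
```

Point 4 is what seeded the working domain glossary, and the closing instruction (flag gaps rather than invent) turned uncertainty into a list of questions for domain experts instead of silent hallucinations.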

Result: Comprehensive journey maps across complex business cases.

Step 3: Architecture diagrams, powered by prompt and polish

For architecture, we paired AI-generated system insights with existing team artefacts — mainly Miro boards — and generated C4 views showing boundaries, responsibilities, and system interactions in a way that non-developers could navigate and engineers could drill into.
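
We can’t share the client’s diagrams, but a simplified version of the generation step might look like the sketch below: ask for LikeC4 source, render it, and correct it against the Miro boards. The prompt wording, file names, and model string are assumptions.

```python
# Sketch: generate draft C4 diagram source from the layered summaries.
# Every generated diagram is a draft for a human to review and correct.
from pathlib import Path

import anthropic

client = anthropic.Anthropic()
summaries = "\n\n".join(p.read_text() for p in Path("docs").glob("*.md"))

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=4096,
    messages=[{
        "role": "user",
        "content": "From the service summaries below, write LikeC4 source "
                   "for a container-level (C4 level 2) view: one element "
                   "per service, relationships labelled with the data "
                   "exchanged, and a boundary per bounded context. Output "
                   "only the LikeC4 DSL.\n\n" + summaries,
    }],
)
Path("architecture.c4").write_text(message.content[0].text)
# Render with the likec4 tooling, then fix anything the model got wrong
# against the team's Miro boards.
```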

Result: Clean architecture diagrams that made onboarding and reasoning about the system easier.

Step 4: Drilling down to use cases, powered by resynthesis

With a high-level view established, we shifted focus to specific business use cases. Here, the true power of our AI-generated artefacts shone through.

We resynthesised the architecture diagrams, data models, and journey maps to rapidly build context around critical business flows.

This layered approach was a game-changer, allowing us to quickly understand specific interactions, data transformations, and system behaviours relevant to particular scenarios.
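
In practice, “resynthesis” mostly meant prompt assembly: pulling the earlier artefacts back in as context and asking a narrow question. A sketch, with invented file names and an invented question:

```python
# Sketch: resynthesise earlier artefacts into context for one use case.
# The file names and the question are invented for illustration.
from pathlib import Path

artefacts = "\n\n".join(
    Path(name).read_text()
    for name in ("architecture.c4", "docs/data-model.md", "docs/journeys.md")
)

use_case_prompt = f"""Context: architecture, data model and journey maps
for the system.

{artefacts}

Question: when a refund is issued, which services touch the transaction
record, in what order, and what validation runs at each step? Cite the
artefact that supports each claim."""
```

Asking for a citation per claim kept the answers cheap to verify, which is what made this layer trustworthy enough to build on.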

Result: We were able to move beyond general system understanding and pinpoint exactly how individual business concerns were handled.

Tools and Models Used

Here’s what powered our AI-accelerated discovery process:

Tools: Cursor, Claude Code, Claude Desktop

Libraries & Frameworks: docsify, likeC4, PocketFlow

Models: Claude Opus 4.1, Claude Sonnet 4, Claude Sonnet 3.7

These choices weren’t about finding the “best” tool — they were what worked well for this context. Our focus was always on reducing discovery time while building meaningful, usable outputs for the team.

A tool we didn’t expect to love — but did

We tested PocketFlow, which turned out to be surprisingly effective at generating “how-to” style documentation. Think of it as AI turning code into developer guides. Instead of just summarising what a module does, it explained:

  • How authentication is implemented
  • What happens during payment processing
  • Step-by-step flows, complete with code snippets

This proved incredibly useful — faster to absorb than dry reference documentation, and much closer to a real developer onboarding guide.
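
For a rough feel of the shape of such a pipeline, here’s a two-node sketch using PocketFlow’s Node and Flow abstractions (nodes expose prep, exec, and post steps and are chained into a flow). The helpers, prompts, and file layout are our invention, and PocketFlow’s real codebase-tutorial pipeline is considerably more elaborate.

```python
# Sketch: a two-node PocketFlow pipeline that finds the main runtime
# flows in a repo, then writes step-by-step guides for them. Helper
# functions and prompts are invented for illustration.
from pathlib import Path

import anthropic
from pocketflow import Node, Flow  # pip install pocketflow

client = anthropic.Anthropic()

def call_llm(prompt: str) -> str:
    message = client.messages.create(
        model="claude-sonnet-4-20250514", max_tokens=4096,
        messages=[{"role": "user", "content": prompt}])
    return message.content[0].text

class FindFlows(Node):
    def prep(self, shared):
        repo = Path(shared["repo_path"])
        # A Java estate is assumed; chunk properly for anything big.
        return "\n\n".join(p.read_text(errors="ignore")
                           for p in sorted(repo.rglob("*.java"))[:20])

    def exec(self, source):
        return call_llm("List the main runtime flows in this code (e.g. "
                        "authentication, payment processing), one per "
                        "line:\n" + source)

    def post(self, shared, prep_res, exec_res):
        shared["source"], shared["flows"] = prep_res, exec_res
        return "default"

class WriteGuides(Node):
    def prep(self, shared):
        return shared

    def exec(self, ctx):
        return call_llm("For each flow below, write a step-by-step "
                        "developer guide with code snippets from the "
                        f"source.\nFlows:\n{ctx['flows']}\n\n"
                        f"Source:\n{ctx['source']}")

    def post(self, shared, prep_res, exec_res):
        Path("how-to.md").write_text(exec_res)

find, write = FindFlows(), WriteGuides()
find >> write  # run WriteGuides after FindFlows
Flow(start=find).run({"repo_path": "repos/payments-service"})
```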

Reflections: What worked and how our role has changed

What worked well:

  • Reusing and layering AI outputs (e.g. feeding repo summaries into journey mapping prompts)
  • Rapid prompt iteration — fast feedback cycles let us refine outputs in minutes, not days
  • Using journey maps as a scaffold to discuss with domain experts
  • Working in a “mobbing” style – product and engineering together daily, making decisions in real time with minimal handoffs

The human element still matters

LLMs read the code. Humans read between the lines. We had to contextualise what was AI-generated:

  • What patterns were deliberate vs. technical debt
  • Which oddities had business logic behind them
  • Which terms were outdated or misaligned with current thinking

Final thoughts: Try it, refine it, share it

This wasn’t about skipping the hard work of understanding a system. It was about doing the groundwork faster, so we could ask better questions sooner. For us, using AI in discovery meant moving from “we’ll get back to you” to “here’s what we think is happening — can you confirm?” in days, not weeks.

If you’re about to start discovery in a messy system, this approach is worth trying. Start small — pick a repo, write a prompt, see what comes back. Then layer in journey mapping and diagramming as your confidence grows.

If you’ve been experimenting with AI in delivery, we’d love to hear about it. Or if you’re facing the uphill battle of legacy system discovery and want to chat approaches, get in touch. We’re always up for swapping war stories, prompts, or diagramming tricks.

Disclaimer

Equal Experts is not affiliated with or commercially connected to any of the tools mentioned in this post. We’re sharing this approach purely to demonstrate how we experimented with AI tooling to accelerate delivery in a real-world context. Your mileage may vary — and that’s part of the fun.
