Saqib Afghan

Product Principal

November 6, 2025

Using LLMs to bridge the technical divide in product teams

Equal Experts helps clients modernise their software systems and architecture, where a common challenge is the time and effort required to analyse legacy code when few, if any, employees remain who fully understand it.

We have used AI tools to analyse and understand clients’ codebases at a global travel company, a global payment provider, and a prominent UK premium retailer. In every case, we proved that AI can be used to rewrite legacy applications with fewer team members and in shorter timeframes than non-AI-assisted methods.

These tools have greatly benefitted engineers, but they also prompted us to ask: “How well could an LLM explain technical complexity to non-developers? Is this even desirable, and what are the implications for team dynamics between engineers and non-engineers?”

This article describes an experiment exploring these questions, what we learned about its limitations, and the guardrails that should be in place.

What did we learn?

Equal Experts was asked by a global insurance client to apply our AI-assisted methodology to understand and rewrite a legacy application scheduled for decommissioning, which only a handful of engineers fully understood.

Our primary mission was to use LLMs to generate analysis detailed enough for the client’s developers to rewrite the application from. However, we also conducted a secondary experiment whereby a technically-aware but non-engineer team member (our Product Manager) used LLMs to interrogate and understand the application and its code.

Note: While our experiment granted the Product Manager (PM) direct code access, we recognise this isn’t always realistic in typical enterprises. However, these insights remain broadly applicable, since a non-technical team member could also use LLMs to process outputs produced by engineers, such as code summaries, architectural diagrams, or technical documentation.

Verbal information from client discovery meetings was immediately fed into an LLM, generating instant reports on the application’s technology stack, coding languages, user journeys and system logic flow while the meeting was ongoing. Before the meeting concluded, the client confirmed that the LLM’s real-time findings were broadly correct and a solid foundation for the start of the engagement.

Once consultants had access to the codebase and the primary engineering-centric investigation was underway, our PM began the secondary investigation. Using plain English prompts, they generated reasonably sophisticated explanations of the code’s folder structure, component diagrams, C4 architecture models, sequence diagrams, business rules, and user stories. They also created high-level clickable prototypes directly from the code to validate the LLM’s sequence diagram outputs.

Whilst our engineering consultants’ LLM-driven analysis remained the core mission and was far more detailed and ‘developer friendly’, our secondary investigation showed that even a non-engineer (with access to code and knowledge of technical analysis) could use plain English prompts to gain a comparable first-pass technical understanding of the system.
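To make this concrete, here is a minimal, illustrative sketch in Python of how such plain English requests might be assembled before being sent to an LLM. The request wording, the `build_analysis_prompt` helper and the artefact names are our own invention for illustration, not the exact prompts used on the engagement.

```python
# Illustrative only: plain-English analysis requests a non-engineer might
# assemble before sending them (with code or a code summary) to an LLM.
# The wording below is a sketch, not the exact prompts from the engagement.

ANALYSIS_REQUESTS = {
    "folder structure": "Explain the purpose of each top-level folder in plain English.",
    "sequence diagram": "Produce a sequence diagram of the main user journey.",
    "business rules": "List every business rule you can find, citing the file it lives in.",
}


def build_analysis_prompt(artefact: str, code_context: str) -> str:
    """Combine a plain-English request with the code (or summary) under review."""
    request = ANALYSIS_REQUESTS[artefact]
    return (
        "You are helping a non-engineer understand a legacy application.\n"
        f"Task: {request}\n"
        "Flag anything you are unsure about rather than guessing.\n\n"
        f"--- CODE CONTEXT ---\n{code_context}"
    )


prompt = build_analysis_prompt("business rules", "def apply_discount(order): ...")
print(prompt)
```

Keeping the requests in a small catalogue like this also makes it easier to review and refine the prompts with engineers over time.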

Unlocking new potential for product teams

LLM-assisted systems analysis, if used thoughtfully, can address long-standing challenges in product teams’ ability to grasp technical complexity.

Faster, more confident onboarding

Non-engineering roles joining complex products often face steep learning curves. It can take weeks of engineers’ time to explain where logic resides, and documentation is often outdated. Self-served LLM-generated technical analysis — even when based on engineers’ documentation and outputs, not just raw code — can dramatically shorten this curve for new joiners, freeing engineers to focus on shipping features and delivering outcomes.

Legacy system user story and BDD creation

When knowledge of a legacy system is limited, prompts that extract relevant information from the codebase can help PMs and BAs accelerate the creation of user stories and Behaviour-Driven Development (BDD) scenarios for system rewrites.
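As a sketch of what this can look like, the snippet below turns an LLM-extracted business rule into a Gherkin-style BDD scenario skeleton for engineers to validate. The `rule_to_scenario` helper and the rule text are hypothetical examples, not output from the engagement.

```python
# Illustrative sketch: turning an LLM-extracted business rule into a BDD
# scenario skeleton for engineers to validate. The rule text is invented.


def rule_to_scenario(rule: str, given: str, when: str, then: str) -> str:
    """Format one extracted business rule as a Gherkin scenario skeleton."""
    return (
        f"Scenario: {rule}\n"
        f"  Given {given}\n"
        f"  When {when}\n"
        f"  Then {then}"
    )


scenario = rule_to_scenario(
    rule="Orders over £100 qualify for free delivery",  # hypothetical rule
    given="a basket totalling £120",
    when="the customer proceeds to checkout",
    then="the delivery charge is £0",
)
print(scenario)
```

Crucially, these skeletons are drafts for developer review, not finished acceptance criteria; the LLM’s extracted rules must still be validated against the code and with the people who know the system.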

Building a shared understanding with developers

LLMs can transform intricate technical information and code into tangible artifacts that both non-engineers and engineers can engage with. “Does this look right?” becomes the start of a productive team conversation. Instead of waiting for engineers to explain technical complexity or create documents, PMs and BAs can bring draft functional diagrams or system flow maps to engineers for critique, bridging the gap between strategy and technical implementation.

Understanding trade-offs in technical decisions

Perhaps the most compelling shift is enabling PMs, Designers and BAs to contextualise technical trade-offs with greater confidence. The analysis won’t be flawless, but it provides a crucial starting point in designing experiences with technical limitations in mind, or helping a PM to prioritise technical debt, defects and architectural improvements. This fosters a less hierarchical, more collaborative dynamic between product, design and engineering.

Navigating the risks

Every powerful capability brings side effects, and LLMs are no exception. There are real risks if product managers, designers, and business analysts fail to establish clear boundaries.

Overstepping and role blurring

When non-engineers access code summaries, there’s a temptation to make technical judgments without the necessary depth. The objective isn’t to replace developer insight, but to interact with it intelligently. If these roles speak as if they’ve conducted the analysis themselves, it can quickly erode trust within the team.

Losing the “critical friend” stance

The best PMs, designers, and BAs bridge business objectives, user needs and technical possibilities. Spending too much time immersed in technical detail or understanding code risks “going native” – i.e. aligning too closely with engineering and losing sight of customer value or broader business trade-offs. An overly-technical product team member may stop asking “should we?” and focus too heavily on “can we?”

Shallow confidence, deep risk

LLM outputs often appear authoritative – diagrams are clean, and text is fluent. It’s easy to forget that some information might be incorrect. Basing critical decisions on unvalidated outputs risks steering the team towards costly rework, or, even worse, operating under a false sense of certainty.

Eroding unique value

If everyone can use LLMs to extract user flows, generate diagrams, and summarise logic, what differentiates PMs, designers, and BAs? If these roles become primarily about information retrieval, AI could soon perform them faster. The true differentiator must remain product sense and design judgement, not access to information.

Thoughtful use of LLMs for product teams

LLM use in product development is still an emerging, fast-changing field. These principles can help guide your approach:

  1. Use LLMs for context, not conclusions: AI-generated technical insights can enhance understanding but are not definitive answers. Always validate them with developers and be transparent about their origin. Treat every diagram or summary as a hypothesis to test, not a truth to blindly trust.
  2. Stay anchored to the business problem: Don’t mistake technical fluency for genuine product insight. A PM or designer’s core responsibility remains understanding users, customers, and desired outcomes – not optimising code. If they spend more time in the repository than with stakeholders, they’ve likely drifted off course.
  3. Be transparent with your team: Tell engineers when AI has been used to generate artefacts and invite their critique. Collaboration builds trust and shared learning.
  4. Protect your boundaries: Understanding code doesn’t mean owning it. Use AI to connect disciplines, not collapse them.
  5. Treat misuse as a signal: If teams rely on AI outputs because they lack timely access to developers or documentation, that’s an organisational “smell.” The tool isn’t solving your underlying problem; it’s merely masking it.

A thoughtful evolution

Now is an ideal time to explore these questions, given the growing belief that AI will increasingly blur the lines between developers, designers and product managers. Many in the product community believe these non-engineering roles will need to comprehend and even create code using AI to build fully functioning prototypes during Product Discovery.

What is clear to us is that LLMs already offer product managers, designers and business analysts powerful leverage: faster understanding of technical complexity and more meaningful conversations with engineers.

But this also demands discipline. The real transformation isn’t about product teams becoming pseudo-engineers; it’s about discovering a new layer of shared literacy. It’s a way for non-engineering roles to reason about complex systems together in an age of AI, when their roles increasingly overlap and intertwine.

Disclaimer

This blog is a record of our experiments and experiences with AI. It reflects what we tried, learned, and observed, but does not represent Equal Experts’ official practices or methodologies. The approaches described here may not suit every context, and your results may vary depending on your goals, data, and circumstances.

About the author

Saqib Afghan is a product leader and coach with over 20 years’ experience helping organisations of all sizes, across a wide variety of industries and domains. He specialises in defining clear product strategies and building operating models that improve clients’ speed to market and achieve measurable outcomes.

His recent work focuses on helping enterprises make the shift from project to product ways of working, and on exploring how AI can enhance the craft and practice of product management. Known for his pragmatic, systems-minded approach, Saqib helps organisations focus on product practices that work in the real world, not just what’s fashionable.

