QCon 2025 NYC, held at the New York Academy of Medicine, brought together senior engineers, architects, and technology leaders to share hard-won lessons from building and operating modern software systems at scale. While AI featured prominently throughout the conference, the dominant themes were not hype or acceleration but the practical realities and challenges of adoption.
Across keynotes, panels, and practitioner talks, a consistent picture emerged: although enterprise demand for AI is strong, production adoption remains limited; security and risk concerns endure, particularly in regulated environments; and meaningful gains require disciplined engineering approaches rather than isolated tooling experiments.
These key takeaways reflect the most consistent and actionable signals from the event.
Key takeaways
Few attendees have AI workloads in production
When a speaker asked the room whether their organisation had an AI workload in production, only two or three people responded positively, typically representatives of software vendors selling AI solutions.
This mirrors what we’ve seen at other industry events. While experimentation is widespread, production-grade AI adoption remains limited. This gap represents both a challenge and an opportunity; moving from proofs of concept to reliable, governed, value-generating systems remains an unsolved problem for many.
Keynotes focused on the challenges of AI, not the hype
It was notable that QCon chose a cautionary keynote to open each day. This set a clear tone: AI's promise is real, but its challenges are substantial.
On day one, Hilary Mason, founder of Hidden Door, introduced the concept of an AI business stack that begins with business model design and extends all the way down to data model design. Each layer of the enterprise attaches different values and meanings to the term "AI", which makes it hard for engineers to communicate the value AI brings and ultimately creates misalignment. Her point was clear: we can't rely on a single generic AI value proposition; we need to translate impact appropriately throughout the organisation while staying aware of adjacent values and concerns.
On day two, Shuman Ghosemajumder, co-founder and CEO of security firm Reken, delivered some humorous yet slightly unsettling facts, such as:
- AI can solve CAPTCHA challenges far better than humans (99.8% vs. ~33%).
- AI-generated deepfakes are rapidly improving, creating new opportunities for organised crime.
- Detecting AI-generated images is becoming increasingly difficult, as virtually every smartphone image already undergoes AI processing.
His recommendations were to improve security training using simulated attacks, enforce MFA, and consider zero-trust architectures within enterprises. This perspective strongly resonated with attendees from regulated industries such as healthcare. The takeaway for highly regulated environments was that security and risk management must be front and centre in any AI conversation.
CTOs are upbeat (but pragmatic) on AI
A CTO panel featuring leaders from SpotHero, HealthEdge, Gather.dev, and others shared a range of AI adoption strategies, from "let many flowers bloom" experimentation to more risk-stratified approaches.
Despite differing tactics, there was broad consensus:
- Near-term productivity gains of 10–20% are realistic.
- Greenfield work shows higher upside.
- Results vary widely, with some workflows shrinking from two sprints to two days.
Perhaps the most important insight here is that AI adoption requires the same discipline and structured approach as cloud migration. We should treat it as a transformation programme, with centres of excellence and workforce enablement, not as a set of tools handed to teams with a "go figure it out" mandate. Familiar operating models matter: they're easier to fund, support, and scale.
LinkedIn engineering shows what’s possible
Many QCon talks focused on prompt engineering, context creation, or tooling mechanics. LinkedIn’s engineering presentation stood apart by showing what the future of software engineering could actually look like.
LinkedIn operates at extraordinary scale, with:
- Around 7,000 engineers.
- 10,000+ repositories.
- 45 trillion Kafka messages per day.
- Over 1 million pull requests annually.
At that scale, even marginal efficiency gains translate into huge business impact.
Their mantra: “AI is the new execution model for engineering.”
Their core pattern is an Intent → Plan → Execute → Validate → Output loop:
- Intent – Engineers describe what they want to change using clear, structured specifications including scope, constraints, and desired outcomes.
- Plan – Agents convert intent into ordered steps, tool usage, and acceptance criteria.
- Execute – Orchestrators run agents in sandboxes with tightly controlled permissions.
- Validate – Automated tests, static analysis, and safety checks gate changes before humans review and merge.
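The loop above can be sketched in code. This is a minimal, hypothetical illustration of the Intent → Plan → Execute → Validate pattern; all names and types here are assumptions for clarity, not LinkedIn's actual APIs.

```python
from dataclasses import dataclass, field

# Hypothetical sketch of an Intent -> Plan -> Execute -> Validate loop.
# Every name below is illustrative; none reflects LinkedIn's real system.

@dataclass
class Intent:
    goal: str                                  # what the engineer wants changed
    scope: list                                # files/modules in scope
    constraints: list = field(default_factory=list)

@dataclass
class Step:
    description: str
    acceptance: str                            # criterion the validator checks

def plan(intent: Intent) -> list:
    """Convert intent into ordered steps with acceptance criteria."""
    return [Step(f"apply '{intent.goal}' to {path}",
                 acceptance=f"tests pass for {path}")
            for path in intent.scope]

def execute(step: Step) -> dict:
    """Run one step in a (stubbed) sandbox; return a change record."""
    return {"step": step.description, "diff": "<generated change>"}

def validate(change: dict, step: Step) -> bool:
    """Gate the change; a real gate would run tests and static analysis."""
    return bool(change["diff"])

def run(intent: Intent) -> list:
    approved = []
    for step in plan(intent):
        change = execute(step)
        if validate(change, step):
            approved.append(change)            # queued for human review/merge
    return approved

results = run(Intent(goal="bump logging API", scope=["svc/a", "svc/b"]))
print(len(results))                            # one approved change per scoped path
```

The key design point is that validation gates sit between execution and human review, so engineers approve vetted output rather than raw agent guesses.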
Crucially, LinkedIn treats AI as platform infrastructure, not a collection of team-specific experiments:
- Shared orchestration, tooling schemas, and safety guarantees.
- Versioned, entity-based tools with permissions, retry logic, and observability.
- Rich contextual inputs (code graphs, dependencies, ownership, incidents, historical PRs).
- Memory layers (both short-term task context and long-term institutional knowledge) to reduce hallucinations.
They support multiple invocation modes: interactive (chat-like), event-driven (triggered by errors or changes), and batch (scheduled migrations or regressions). Use cases are then matched to the appropriate execution model: rules and scripts first, then commercial LLM APIs, then light fine-tuning, with custom models as a last resort.
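That tiered matching can be expressed as a simple cheapest-capable-first dispatch. The sketch below is an assumption-laden illustration of the idea, not LinkedIn's implementation; the tier names and task flags are invented for the example.

```python
# Illustrative tiered execution-model selection: try the cheapest capable
# tier first, escalating only when a tier can't handle the task.
# Tier names and task flags are hypothetical, not LinkedIn's actual system.

TIERS = [
    ("rules_and_scripts",  lambda task: task.get("deterministic", False)),
    ("commercial_llm_api", lambda task: task.get("needs_reasoning", False)),
    ("light_fine_tune",    lambda task: task.get("domain_specific", False)),
    ("custom_model",       lambda task: True),   # last resort: always capable
]

def select_tier(task: dict) -> str:
    """Return the first (cheapest) tier that claims it can handle the task."""
    for name, can_handle in TIERS:
        if can_handle(task):
            return name
    raise ValueError("no tier can handle this task")

print(select_tier({"deterministic": True}))      # rules_and_scripts
print(select_tier({"needs_reasoning": True}))    # commercial_llm_api
print(select_tier({}))                           # custom_model
```

Ordering tiers by cost means expensive custom models are only reached when nothing simpler qualifies.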
Concrete examples include:
- Augmented GitHub Copilot using MCP servers to inject LinkedIn-specific patterns and APIs.
- Spec-to-PR coding agents that generate auditable pull requests from structured specs.
- Incident agents that auto-triage errors and propose fixes.
- UI QA agents validating server-driven UIs across iOS, Android, and web using natural-language tests.
- Analytics agents exposing complex data via chat, charts, and narratives.
Several principles showed up consistently in LinkedIn’s approach:
- Don’t let agents guess. Use structured specs and schemas.
- Design for human-in-the-loop authority, not micromanagement.
- Centralise abstractions for tools, context, and evaluation.
- Prioritise open standards and existing infrastructure to stay adaptable.
Importantly, human intervention is designed into the architecture, not bolted on as an afterthought.
From insight to action
QCon 2025 reinforced that while interest in AI is widespread, moving safely and effectively into production remains difficult. Organisations need strong engineering fundamentals, clear operating models, and experience turning emerging technology into repeatable outcomes. This is where Equal Experts can help: working alongside leaders and practitioners to apply modern engineering practices, build the right platforms and governance, and turn AI ambition into practical, sustainable impact. If you'd like to talk to us about how we might be able to help you take AI from experimentation to production, get in touch; we'd love to speak to you.