How to measure product delivery success

At Equal Experts, we’re frequently asked about success measures for product delivery. It can be hard to figure out what to measure – and what not to measure!

We often find ourselves working within multiple teams that share responsibility for one product. For example, an ecommerce organisation might have Equal Experts consultants embedded in a product team, a development team, and an operations team, all working on the same payments service.

When we’re asked to improve collaboration between interdependent teams, we look at long-term and short-term options. In the long term, we advocate moving to cross-functional product delivery teams. In the short term, we recommend establishing shared success measures for interdependent teams.

By default, we favour measuring these shared outcomes: 

  • High profitability. A low cost of customer acquisition and a high customer lifetime value.
  • High throughput. A high deployment frequency and a low deployment lead time.
  • High quality. A low rework time percentage.
  • High availability. A high availability rate and a low time to restore availability.

If your organisation is a not-for-profit or in the public sector, we’d measure customer impact instead of profitability. Likewise, if you’re building a desktop application, we’d change the availability measures to user installer errors and user session errors.

These measures have caveats. Quantitative data is inherently shallow, and it’s best used to pinpoint where the right conversations need to happen between and within teams. What “high” and “low” mean is specific to the context of your organisation. And these measures are harder to implement than something like story points or incident count – but they’re still the right thing to do.

Beware per-team incentives

‘Tell me how you measure me and I will tell you how I will behave’ – Eli Goldratt

People behave according to how they’re measured. When interdependent teams have their own measures of success, people are incentivised to work at cross-purposes. Collaboration becomes a painful and time-consuming process, and there’s a negative impact on the flow of product features to customers. 

At our ecommerce organisation, the product team wants an increase in customer page views. The delivery team wants more story points to be completed. The operations team wants a lower incident count.

This incentivises the delivery team to maximise deployments, thereby increasing its story points, and the operations team to minimise deployments, thereby decreasing its incident count. These conflicting behaviours don’t happen because of bad intentions. They happen because there’s no shared definition of success, so the teams have their own definitions.

Measure shared outcomes, not team outputs

All too often, teams are measured on their own outputs. Examples include story points, test coverage, defect count, incident count, and person-hours. Team outputs are poor measurement choices. They’re unrelated to customer value-add, and offer limited information. They’re vulnerable to inaccurate reporting, because they’re localised to one team. Their advantage is their ease of implementation, which contributes to their popularity.

We want to measure shared outcomes of product delivery success. Shared outcomes are tied to customers receiving value-add. They encode rich information about different activities in different teams. They have some protection against bias and inaccuracies, as they’re spread across multiple teams.   

When working within multiple teams responsible for the same product, we recommend removing any per-team measures, and measuring shared outcomes instead. This aligns incentives across teams, and removes collaboration pain points. It starts with a shared definition of product delivery success.

Define what success means

When we’re looking at inter-team collaboration, we start by jointly designing with our client what delivery success looks like for the product. We consider if we’re building the right product as well as building the product right, as both are vital. We immerse ourselves in the organisational context. A for-profit ecommerce business will have a very different measure of success than a not-for-profit charity in the education sector. 

We measure an intangible like “product delivery success” with a clarification chain. In How To Measure Anything, Douglas Hubbard defines a clarification chain as a short series of connected measures representing a single concept. The default we recommend to clients is:

product delivery success includes high profitability, high throughput, high quality, and high availability

In our ecommerce organisation, this means the product team, delivery team, and operations team would all share the same measures tied to one definition of product delivery success.

These are intangibles as well, so we break them down into their constituent measures.

Pick the right success measures

It’s important to track the right success measures for your product. Don’t pick too many, don’t pick too few, and don’t set impossible targets. Incrementally build towards product delivery success, and periodically reflect on your progress.

Profitability can be measured with cost of customer acquisition and customer lifetime value. Cost of customer acquisition is your sales and marketing expenses divided by your number of new customers. Customer lifetime value is the total worth of a customer while they use your products. 
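As a rough illustration, here’s how these two measures might be computed from monthly figures. All numbers and currency assumptions are invented for the example, and the CLV estimate is a deliberately simple one (average revenue per customer multiplied by average lifespan).

```python
# Illustrative calculation of the two profitability measures.
# All figures below are invented for the example.

def cost_of_customer_acquisition(sales_and_marketing_spend: float,
                                 new_customers: int) -> float:
    """Sales and marketing expenses divided by the number of new customers."""
    return sales_and_marketing_spend / new_customers

def customer_lifetime_value(avg_monthly_revenue_per_customer: float,
                            avg_customer_lifespan_months: float) -> float:
    """A simple CLV estimate: average revenue over a customer's lifespan."""
    return avg_monthly_revenue_per_customer * avg_customer_lifespan_months

cac = cost_of_customer_acquisition(50_000.0, 400)  # £125 per new customer
clv = customer_lifetime_value(30.0, 24.0)          # £720 over two years
print(f"CAC £{cac:.2f}, CLV £{clv:.2f}")
```

A healthy product has a CLV comfortably above its CAC; tracking the two together is what makes the profitability conversation possible.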

Throughput can be measured with deployment frequency and deployment lead time. Deployment frequency is the rate of production deployments. Deployment lead time is the days between a code commit and its consequent production deployment. These measures are based on the work of Dr. Nicole Forsgren et al in Accelerate – a multi-year study of Continuous Delivery adoption in thousands of organisations. Both measures can be automated.
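A minimal sketch of that automation, using invented deployment records – in practice you’d pull commit and deployment timestamps from your version control and deployment tooling:

```python
# Deriving the two throughput measures from (commit time, deploy time)
# pairs. The data below is invented for illustration.
from datetime import datetime

deployments = [
    (datetime(2021, 3, 1, 9, 0), datetime(2021, 3, 2, 14, 0)),
    (datetime(2021, 3, 3, 11, 0), datetime(2021, 3, 5, 10, 0)),
    (datetime(2021, 3, 8, 16, 0), datetime(2021, 3, 9, 9, 0)),
]

period_days = 30
deployment_frequency = len(deployments) / period_days  # deployments per day

# Deployment lead time: days between a commit and its production deployment.
lead_times = [(deployed - committed).total_seconds() / 86400
              for committed, deployed in deployments]
mean_lead_time_days = sum(lead_times) / len(lead_times)

print(f"{deployment_frequency:.2f} deploys/day, "
      f"{mean_lead_time_days:.1f} days mean lead time")
```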

Quality can be measured with rework time percentage. It’s the percentage of developer time spent fixing code review feedback, broken builds, test failures, live issues, etc. Quality is hard to define, yet we can link higher quality to lower levels of unplanned fix work. In Accelerate, Dr. Forsgren et al found a statistically significant relationship between Continuous Delivery and lower levels of unplanned fix work. Rework time percentage is not easily automated, and a monthly survey of developer effort estimates is a pragmatic approach.
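That monthly survey might be aggregated like this – the response fields and hours are assumptions for illustration:

```python
# Deriving rework time percentage from a monthly developer survey.
# The survey fields and hours below are invented for illustration.

survey_responses = [
    # hours spent fixing review feedback, broken builds, test failures,
    # live issues, etc., versus total hours worked last month
    {"total_hours": 150, "rework_hours": 30},
    {"total_hours": 160, "rework_hours": 24},
    {"total_hours": 140, "rework_hours": 21},
]

total = sum(r["total_hours"] for r in survey_responses)
rework = sum(r["rework_hours"] for r in survey_responses)
rework_time_percentage = 100 * rework / total

print(f"Rework time: {rework_time_percentage:.1f}%")
```

The absolute number matters less than the trend: a falling rework time percentage month on month suggests quality is improving.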

Availability can be measured using availability rate and time to restore availability. The availability rate is the percentage of requests successfully completed by the service, and linked to an availability target such as 99.0% or 99.9%. The time to restore availability is the minutes between a lost availability target and its subsequent restoration. 
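A sketch of both availability measures, with invented request counts and incident timestamps – in practice these would come from your request logs and incident records:

```python
# The two availability measures, with invented data for illustration.
from datetime import datetime

# Availability rate: successful requests as a percentage of all requests,
# compared against an availability target.
total_requests = 1_000_000
successful_requests = 995_500
availability_rate = 100 * successful_requests / total_requests
target = 99.0
meets_target = availability_rate >= target

# Time to restore availability: minutes between losing the target
# and restoring it.
lost_at = datetime(2021, 3, 10, 14, 5)
restored_at = datetime(2021, 3, 10, 14, 47)
time_to_restore_minutes = (restored_at - lost_at).total_seconds() / 60

print(f"{availability_rate:.2f}% availability (target met: {meets_target}), "
      f"restored in {time_to_restore_minutes:.0f} minutes")
```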

In our experience, these measures give you an accurate picture of product delivery success. They align incentives for interdependent teams, and encourage people to all work in the same direction. 


Measuring shared outcomes rather than team outputs makes collaboration much easier for interdependent teams, and increases the chances of product delivery success. It’s also an effective way of managing delivery assurance. If you’d like some advice on how to accomplish this in your own organisation, get in touch using the form below and we’ll be delighted to help you.


Find out what happened when we brought four experts together to discuss what it takes to be a great digital product owner with Thames Water staff 

As part of our digital transformation and product work with Thames Water, we’ve been helping them find the right narrative for talking about rapid delivery, building confidence in their team’s capabilities and creating the foundations needed to be a ‘digital first’ water company.  

We brought together a team of digital product experts for a panel discussion on how to be an effective product owner. It was an opportunity for Thames Water employees to hear how other organisations have embedded product owners into their way of working.

The group of specialists was composed of:

  • Matt Walker, Head of Product at Moneysupermarket and Equal Experts Associate
  • Darren Irish, Senior Product Owner at Three
  • Neha Datt, Product Consultant at Equal Experts, whose clients include PlayStation, Siemens Healthineers and various infrastructure companies
  • Julia Bellis, Product Consultant at Equal Experts, whose clients include the UK passport application platform, Pret and Domino’s

The event was facilitated by Katy Beale, Comms and Content Consultant at Equal Experts, and Amanda Kirby, Delivery Lead at Thames Water.

We were very lucky to have four very experienced speakers who have worked across multiple sectors and bring valuable insight into embedding product owners and creating effective teams.

The lively discussion started with the question, “What top three skills do you need to have to be a great digital product owner?”. Communication and people skills came through as essentials here. Matt mentioned the need for adaptability in the day to day; Neha said key strengths were needed in facilitation to listen, contextualise and support great team working; Julia pointed out that curiosity is a much needed key skill, as well as diplomacy; Darren said you needed to have trust and respect for team members to support and motivate.

The panel were asked about their experience of saying ‘no’ and why it is vital. Brilliantly, what came across from all the panellists was that it’s not about saying no. It’s learning how to say no, in a professional way, which is nearer to saying “Not now, not yet”. Being a great product owner is about prioritisation and expectation setting, based on user and business needs, and deciding the right thing to be working on at any given time.  

There was an inspiring debate about whether you are born with the personality to make you a great product owner or if you can learn the skills. Matt believes that not everyone can “just slot into a Product Owner role. I believe there are certain traits, for example a level of assertiveness and leadership, that if you don’t have it’s potentially unfair on the individual”. Neha countered with, “I think assertiveness is more a skill than a trait. Some people might be more naturally inclined to do these by default, but anyone can learn these. The valuable point here is that if you aren’t naturally inclined towards being assertive your line managers need to support you with this adequately.”

Ultimately, being a great product owner comes from experience. All the panellists talked about the power in building relationships, in doing and learning on the job, and being open to a ‘learning culture’ of always improving.

As an invite-only event at Thames Water with attendees from across the organisation, we had product owners, potential POs and leadership on the call, who all gave great feedback:


“Fantastic session – thanks ever so much” Mike Potter, CTO

“As a new product owner, this was a really useful session – thank you speakers!” Vicki Smith, Product Owner


“Thanks for setting this up, really informative!” Tanya Jacques, Business Change and Transformation Project Manager


“Thanks, brilliant nuggets of experience” Denise Clifford, Product Owner

The attending audience also had some brilliant questions of their own about user research, remote working and experimentation. 

It was fantastic to share knowledge and bring experts together in this way. You could see perceptions shifting on the call and we’re pleased to say that there’s appetite to do future events on different themes. 

(Image: Zoom screen of the Thames Water panel session)