Steve Smith, Head of Scale Service

Our Thinking | Wed 20th January 2021

How to measure product delivery success

At Equal Experts, we’re frequently asked about success measures for product delivery. It can be hard to figure out what to measure – and what not to measure!

We often find ourselves working within multiple teams that share responsibility for one product. For example, an ecommerce organisation might have Equal Experts consultants embedded in a product team, a delivery team, and an operations team, all working on the same payments service.

When we’re asked to improve collaboration between interdependent teams, we look at long-term and short-term options. In the long term, we advocate moving to cross-functional product delivery teams. In the short term, we recommend establishing shared success measures for interdependent teams.

By default, we favour measuring these shared outcomes: 

  • High profitability. A low cost of customer acquisition and a high customer lifetime value.
  • High throughput. A high deployment frequency and a low deployment lead time.
  • High quality. A low rework time percentage.
  • High availability. A high availability rate and a low time to restore availability.

If your organisation is a not-for-profit or in the public sector, we’d look at customer impact instead of profitability. Likewise, if you’re building a desktop application, we’d change the availability measures to user installer errors and user session errors.

These measures have caveats. Quantitative data is inherently shallow, and it’s best used to pinpoint where the right conversations need to happen between and within teams. What “high” and “low” mean is specific to the context of your organisation. And these measures are harder to implement than something like story points or incident count – but they’re still the right thing to do.

Beware per-team incentives

‘Tell me how you measure me and I will tell you how I will behave’ – Eli Goldratt

People behave according to how they’re measured. When interdependent teams have their own measures of success, people are incentivised to work at cross-purposes. Collaboration becomes a painful and time-consuming process, and there’s a negative impact on the flow of product features to customers. 

At our ecommerce organisation, the product team wants an increase in customer page views. The delivery team wants to complete more story points. The operations team wants a lower incident count.

This encourages the delivery team to maximise deployments, thereby increasing its story points, and the operations team to minimise deployments, thereby decreasing its incident count. These conflicting behaviours don’t happen because of bad intentions. They happen because there’s no shared definition of success, so each team invents its own.

Measure shared outcomes, not team outputs

All too often, teams are measured on their own outputs. Examples include story points, test coverage, defect count, incident count, and person-hours. Team outputs are poor measurement choices. They’re unrelated to customer value-add, and offer limited information. They’re vulnerable to inaccurate reporting, because they’re localised to one team. Their advantage is their ease of implementation, which contributes to their popularity.

We want to measure shared outcomes of product delivery success. Shared outcomes are tied to customers receiving value-add. They encode rich information about different activities in different teams. They have some protection against bias and inaccuracies, as they’re spread across multiple teams.   

When working within multiple teams responsible for the same product, we recommend removing any per-team measures, and measuring shared outcomes instead. This aligns incentives across teams, and removes collaboration pain points. It starts with a shared definition of product delivery success.

Define what success means

When we’re looking at inter-team collaboration, we start by jointly designing with our client what delivery success looks like for the product. We consider whether we’re building the right product as well as building the product right, as both are vital. We immerse ourselves in the organisational context. A for-profit ecommerce business will have a very different measure of success than a not-for-profit charity in the education sector.

We measure an intangible like “product delivery success” with a clarification chain. In How To Measure Anything, Douglas Hubbard defines a clarification chain as a short series of connected measures representing a single concept. The default we recommend to clients is:

product delivery success includes high profitability, high throughput, high quality, and high availability

In our ecommerce organisation, this means the product team, delivery team, and operations team would all share the same measures, tied to one definition of product delivery success.

These are intangibles as well, so we break them down into their constituent measures.
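
As a rough sketch, the decomposition can be captured as a simple mapping from each shared outcome to its constituent measures. The structure below just restates the defaults from this article:

```python
# The default clarification chain from this article, expressed as a mapping
# from each shared outcome to its constituent measures.
product_delivery_success = {
    "high profitability": ["cost of customer acquisition", "customer lifetime value"],
    "high throughput": ["deployment frequency", "deployment lead time"],
    "high quality": ["rework time percentage"],
    "high availability": ["availability rate", "time to restore availability"],
}
```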

Pick the right success measures

It’s important to track the right success measures for your product. Don’t pick too many, don’t pick too few, and don’t set impossible targets. Incrementally build towards product delivery success, and periodically reflect on your progress.

Profitability can be measured with cost of customer acquisition and customer lifetime value. Cost of customer acquisition is your sales and marketing expenses divided by your number of new customers. Customer lifetime value is the total worth of a customer while they use your products. 
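
As an illustration, here’s a minimal sketch of those two calculations in Python. The figures are invented, and the revenue-times-lifetime approximation of customer lifetime value is an assumption – your finance team may model it differently:

```python
def cost_of_customer_acquisition(sales_and_marketing_spend: float,
                                 new_customers: int) -> float:
    """Sales and marketing expenses divided by the number of new customers."""
    return sales_and_marketing_spend / new_customers


def customer_lifetime_value(avg_monthly_revenue: float,
                            avg_lifetime_months: float) -> float:
    """A simple approximation: average revenue per customer per month,
    multiplied by the average customer lifetime in months."""
    return avg_monthly_revenue * avg_lifetime_months


# Illustrative numbers only.
cac = cost_of_customer_acquisition(sales_and_marketing_spend=50_000, new_customers=400)
clv = customer_lifetime_value(avg_monthly_revenue=30, avg_lifetime_months=24)
print(f"CAC: £{cac:.2f}, CLV: £{clv:.2f}")  # CAC: £125.00, CLV: £720.00
```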

Throughput can be measured with deployment frequency and deployment lead time. Deployment frequency is the rate of production deployments. Deployment lead time is the days between a code commit and its consequent production deployment. These measures are based on the work of Dr. Nicole Forsgren et al. in Accelerate, a multi-year study of Continuous Delivery adoption in thousands of organisations. They can be automated.
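
To show what that automation might look like, here’s a minimal sketch, assuming you can export (commit time, production deploy time) pairs from your version control and deployment tooling – the sample timestamps are invented:

```python
from datetime import datetime

# Invented sample data: (commit_time, production_deploy_time) pairs for one window.
deployments = [
    (datetime(2021, 1, 4, 10, 0), datetime(2021, 1, 5, 9, 30)),
    (datetime(2021, 1, 6, 14, 0), datetime(2021, 1, 8, 11, 0)),
    (datetime(2021, 1, 11, 9, 0), datetime(2021, 1, 12, 16, 45)),
]
window_days = 30  # the measurement window

# Deployment frequency: rate of production deployments over the window.
deployment_frequency = len(deployments) / window_days

# Deployment lead time: days between a code commit and its production deployment.
lead_times = [(deploy - commit).total_seconds() / 86_400 for commit, deploy in deployments]
median_lead_time = sorted(lead_times)[len(lead_times) // 2]

print(f"{deployment_frequency:.2f} deploys/day, median lead time {median_lead_time:.1f} days")
```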

Quality can be measured with rework time percentage. It’s the percentage of developer time spent fixing code review feedback, broken builds, test failures, live issues, etc. Quality is hard to define, yet we can link higher quality to lower levels of unplanned fix work. In Accelerate, Dr. Forsgren et al. found a statistically significant relationship between Continuous Delivery and lower levels of unplanned fix work. Rework time percentage is not easily automated, so a monthly survey of developer effort estimates is a pragmatic approach.
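
Here’s a minimal sketch of that survey approach, with invented responses – each developer estimates their hours worked and hours spent on unplanned fix work for the month:

```python
# Invented monthly survey responses: per-developer effort estimates.
survey_responses = [
    {"hours_worked": 140, "rework_hours": 21},  # review feedback, broken builds,
    {"hours_worked": 150, "rework_hours": 30},  # test failures, live issues, etc.
    {"hours_worked": 130, "rework_hours": 13},
]

total_worked = sum(r["hours_worked"] for r in survey_responses)
total_rework = sum(r["rework_hours"] for r in survey_responses)

rework_time_percentage = 100 * total_rework / total_worked
print(f"Rework time: {rework_time_percentage:.1f}%")  # Rework time: 15.2%
```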

Availability can be measured using availability rate and time to restore availability. The availability rate is the percentage of requests successfully completed by the service, and is linked to an availability target such as 99.0% or 99.9%. The time to restore availability is the minutes between a lost availability target and its subsequent restoration.
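
And here’s a minimal sketch of both availability measures, again with invented inputs – in practice the request counts and restoration times would come from your monitoring tooling:

```python
# Availability rate: percentage of requests successfully completed by the service.
total_requests = 1_200_000
successful_requests = 1_198_900
availability_rate = 100 * successful_requests / total_requests
print(f"Availability: {availability_rate:.2f}%")  # 99.91%, against a 99.9% target

# Time to restore availability: minutes between losing the availability target
# and restoring it – here averaged across a month's incidents.
restore_times_minutes = [12, 45, 8]
mean_time_to_restore = sum(restore_times_minutes) / len(restore_times_minutes)
print(f"Mean time to restore: {mean_time_to_restore:.0f} minutes")  # 22 minutes
```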

In our experience, these measures give you an accurate picture of product delivery success. They align incentives for interdependent teams, and encourage everyone to work in the same direction.


Measuring shared outcomes rather than team outputs makes collaboration much easier for interdependent teams, and increases the chances of product delivery success. It’s also an effective way of managing delivery assurance. If you’d like some advice on how to accomplish this in your own organisation, get in touch using the form below and we’ll be delighted to help you.