Why Kubernetes, Kafka, or Istio can derail your platform engineering efforts

Platform engineering means creating user-centric capabilities that enable teams to achieve their business outcomes faster than ever before. At Equal Experts, we’ve been doing platform engineering for a decade, and we know it can be an effective solution to many scaling problems. 

Unfortunately, it’s easy to get platform engineering wrong. There are plenty of pitfalls, which can contaminate your engineering culture and prevent you from sustainably scaling your teams up and down. In this series, I’ll cover some of those pitfalls, starting with the power tools problem.

How to measure a platform capability

A platform capability mixes people, processes, and tools (SaaS, COTS, and/or custom code) to provide one or more enabling functions to your teams. In order to stay user-centered and focussed on your mission, you need to measure a capability in terms of: 

  • Internal customer value. How much it improves speed, reliability, and quality for your teams. The higher this is, the faster your teams will deliver.
  • Internal customer costs. How much unplanned tech work it creates for your teams. The lower this is, the more capacity your teams will have.
  • Platform costs. How much build and run work it creates for your platform team. The lower this is, the fewer platform engineers you’ll need.
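To make those trade-offs visible, it helps to write them down per capability. Here's a minimal sketch in Python of what a capability scorecard could look like; the field names and the 1-to-5 scale are illustrative assumptions, not a prescribed format.

```python
from dataclasses import dataclass

@dataclass
class CapabilityScore:
    """Hypothetical scorecard for one platform capability (illustrative only)."""
    name: str
    internal_customer_value: int   # 1 (low) to 5 (high): speed, reliability, and quality gains for teams
    internal_customer_cost: int    # 1 (low) to 5 (high): unplanned tech work created for teams
    platform_cost: int             # 1 (low) to 5 (high): build and run work for the platform team

# Example: a heavyweight capability can score high on value, but also high on both costs.
container_orchestration_v1 = CapabilityScore(
    name="Container orchestration (Kubernetes)",
    internal_customer_value=4,
    internal_customer_cost=4,
    platform_cost=5,
)
```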

Whether it’s data engineering or a microservices architecture, it’s all too easy for your well-intentioned platform team to make the wrong trade-offs, and succumb to a pitfall. Here’s one of those tough situations. 

The hidden costs of power tools

Implementing core platform capabilities with power tools like Kubernetes, Kafka, and/or Istio is one of the biggest pitfalls we regularly see in enterprise organizations. Power tools are exciting and offer a lot of useful features, but unless your service needs are complex and your platform team knocks it out of the park, those tools will require a lot more effort and engineers than you’d expect. 

Here’s a v1 internal developer platform, which uses Kubernetes for container orchestration, Kafka for messaging, and Istio for service mesh. A high level of internal customer value is possible, but there are also high internal customer costs and a high platform cost. It’s time-consuming to build and maintain services on this platform.

[Figure: v1 of an internal developer platform, shown as a heavy weight containing Kubernetes, Kafka, and Istio capabilities, alongside a bar chart showing high internal customer value, high internal customer costs, and high platform costs.]

This pitfall happens when your platform team prioritizes the tools they want over the capabilities your teams need. Teams will lack capacity for planned product work, because they have to regularly maintain Kubernetes, Kafka, and/or Istio configurations beyond their core competencies. And your platform team will require numerous engineers with specialized knowledge to build and manage those tools. Those costs aren’t usually measured, and they slowly build up until it’s too late.

For example, we worked with a Dutch broadcaster whose teams argued over tools for months. The platform team wanted Kubernetes, but the other teams were mindful of deadlines and wanted something simpler. Kubernetes was eventually implemented, without a clear business justification. 

Similarly, a German retailer used Istio as their service mesh. The platform team was nervous about upgrades, and they waited each time for a French company to go first. There was no business relationship, but the German retailer had a documented dependency on the French company’s technology blog.

Transitioning from heavyweight to lightweight tools

You escape the power tools pitfall by replacing your heavyweight capabilities with lightweight alternatives. Simpler tools can deliver similar levels of internal customer value, with much lower costs. For example, transitioning from Kubernetes to ECS can reduce internal customer costs as teams need to know less and do less, and also lower your platform costs as fewer platform engineers are required. 
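To give a rough sense of how much smaller the surface area can be, here's a hedged sketch of deploying a containerised service onto ECS with the Fargate launch type, using boto3. The cluster name, image URI, role ARN, subnets, and security group are placeholders, and a real setup still needs the surrounding VPC, IAM, and load balancer configuration.

```python
import boto3

ecs = boto3.client("ecs", region_name="eu-west-1")

# Register a minimal Fargate task definition (placeholder image and role ARN).
task_def = ecs.register_task_definition(
    family="orders-service",
    requiresCompatibilities=["FARGATE"],
    networkMode="awsvpc",
    cpu="256",
    memory="512",
    executionRoleArn="arn:aws:iam::123456789012:role/ecsTaskExecutionRole",
    containerDefinitions=[{
        "name": "app",
        "image": "123456789012.dkr.ecr.eu-west-1.amazonaws.com/orders-service:1.0.0",
        "portMappings": [{"containerPort": 8080, "protocol": "tcp"}],
        "essential": True,
    }],
)

# Run it as a service on an existing cluster (placeholder subnet and security group IDs).
ecs.create_service(
    cluster="platform-v2",
    serviceName="orders-service",
    taskDefinition=task_def["taskDefinition"]["taskDefinitionArn"],
    desiredCount=2,
    launchType="FARGATE",
    networkConfiguration={"awsvpcConfiguration": {
        "subnets": ["subnet-aaa111", "subnet-bbb222"],
        "securityGroups": ["sg-ccc333"],
        "assignPublicIp": "DISABLED",
    }},
)
```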

Here’s a simple recipe to replace a power tool with something simpler and lower cost. For each high-cost capability, use the standard lift and shift pattern:

  • Declare it as v1, and restrict it to old services
  • Rebuild v1 with lightweight tools, and declare that as v2
  • Host new services on v2
  • Lift and shift old services to v2
  • Delete v1

As with any migration, resist the temptation to put new services onto v1, and design v2 interfaces so migration costs are minimized. Here’s v2 of the imaginary developer platform, with Fargate, Kinesis, and App Mesh replacing Kubernetes, Kafka, and Istio. Capability value remains high, and costs are much lower.

[Figure: v2 of the internal developer platform, with lightweight App Mesh, Kinesis, and Fargate capabilities, alongside a bar chart comparing the high internal customer and platform costs of the heavyweight v1 with the much lower costs of the lightweight v2.]

Conclusion

Power tools are a regular pitfall in platform engineering. Unless your platform team can build and run them to a high standard, they’ll lead to a spiral of increasing costs and operational headaches. Transitioning to lighter, more manageable solutions means you can achieve a high level of internal customer value as well as low costs.

A good thought experiment here is “how many engineers want to build and run Kubernetes, Kafka, or Istio a second time?” In my experience, not many, and that’s taking managed services like EKS and Confluent into account.

I’ll share more platform engineering insights in my talk “Three ways you’re screwing up platform engineering and how to fix it” at the Enterprise Technology Leadership Summit Las Vegas on 20 August 2024. If you’re attending, I’d love to connect and hear about your platform engineering challenges and solutions.

Lead Time for Change and Deployment Lead Time

DORA and Accelerate (Forsgren et al.) define “Lead Time for Change” as “the amount of time it takes a commit to get into production.”

By being specific about how and when you take the measurements, you can create a Deployment Lead Time metric that can help your platform team identify improvements to reduce Lead Time for Change across multiple teams.
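As a minimal sketch of that metric, assuming you can extract a merge-to-main timestamp and a production deployment timestamp for each change from your own tooling:

```python
from datetime import datetime, timezone

def deployment_lead_time(merged_to_main_at: datetime, deployed_to_production_at: datetime) -> float:
    """Deployment lead time in hours for a single change.

    merged_to_main_at: when the commit landed on the main branch.
    deployed_to_production_at: when that commit reached production.
    """
    return (deployed_to_production_at - merged_to_main_at).total_seconds() / 3600

# Example with hypothetical timestamps: merged at 09:15, live at 11:45 = 2.5 hours.
merged = datetime(2024, 8, 20, 9, 15, tzinfo=timezone.utc)
deployed = datetime(2024, 8, 20, 11, 45, tzinfo=timezone.utc)
print(deployment_lead_time(merged, deployed))  # 2.5
```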

Change = Deployments || Releases, but Deployments != Releases

Software changes reach users through two kinds of events: releases and deployments. Releases are organisational change events; they often require collaboration with enabling teams such as marketing, legal, and customer operations to ensure a successful outcome. Deployments are technical change events that don’t require the same level of collaboration across the organisation. With sufficient preparation, such as feature flagging capabilities, they can happen frequently throughout the day without causing incidents that impact availability or the user experience.

Release lead time, or cycle time, varies significantly depending on how the organisation has optimised its flow of work, and significantly reducing it can be outside the scope of the interactions between a platform team and the product teams it works with. Deployment lead time, however, can be optimised through the interactions between a platform team and the stream-aligned product teams it works with.

Measuring deployment lead time provides information on the common path to production across teams, whilst measuring cycle time reflects a team’s ways of working and the other organisational activities that have to happen to release change to users.

Deployment lead time is good for comparing across teams without getting too involved in specific teams’ ways of working

If your platform team aims to optimise the path to production across many teams, prefer measuring Deployment Lead Time from when the commit hits the main branch until that commit is deployed to production (Commit D in the diagram).

By measuring from the point when the work is ready to go to production, we gain accurate data on the process and pipelines that make up the path to production. That data is easily comparable across teams, and it reduces bias towards a specific team’s ways of working, such as branching strategy, peer-review approach, or testing strategy.

Measuring from the first commit of the branch (Commit A in the diagram) instead produces the team’s cycle time. It includes the time it takes to produce the work, integrate it with others’ work, and peer-review it (if that’s a separate stage).

The timing of the first commit can also easily be gamed by engineers. By not measuring deployment lead time from the first commit, we leave individual team preferences and ways of working alone, and measure from when the team’s work is ready for production.
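One way to capture that “ready for production” moment is to take the committer date of the commit your pipeline is deploying from main. This is a sketch only, and assumes your deployment tooling knows the SHA it is shipping:

```python
import subprocess
from datetime import datetime

def commit_timestamp(sha: str, repo_path: str = ".") -> datetime:
    """Return the committer date of a commit as a timezone-aware datetime.

    For merge or squash commits onto main, this is when the change became
    ready for the path to production (Commit D), not when work started (Commit A).
    """
    iso_date = subprocess.run(
        ["git", "show", "-s", "--format=%cI", sha],
        cwd=repo_path, capture_output=True, text=True, check=True,
    ).stdout.strip()
    return datetime.fromisoformat(iso_date)
```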

Mean averages are good but do pay attention to your 50th, 90th, and 95th percentiles

Watch the median (50th percentile) to understand how you’re doing with most of your changes getting to production, and the long-tail percentiles (90th, 95th) to understand what happens when things are weirder than usual on their journey to production.

When changes to production are happening quickly and safely, and the team has a good understanding of how its software operates, you’ll find your long tail moves towards your median.
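Here’s a minimal sketch of summarising those percentiles over a set of deployment lead times (in hours), using only the Python standard library; the sample values are hypothetical:

```python
from statistics import median, quantiles

def lead_time_summary(lead_times_hours: list[float]) -> dict[str, float]:
    """Median and long-tail percentiles for a collection of deployment lead times."""
    cuts = quantiles(lead_times_hours, n=100)  # 99 cut points: cuts[i] ~ the (i + 1)th percentile
    return {
        "p50": median(lead_times_hours),
        "p90": cuts[89],
        "p95": cuts[94],
    }

# Hypothetical lead times: mostly fast, with a long tail of slow changes.
print(lead_time_summary([0.5, 0.7, 1.0, 1.2, 1.5, 2.0, 2.5, 3.0, 8.0, 26.0]))
```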

How to measure Lead Time for Change

There are many potential points in a typical developer’s workflow that you could use to measure how long it takes for a commit to get into production; be wary of accidentally measuring cycle time, pipeline time, or time to create value instead.

Instead, measure Deployment Lead Time, so your platform team can act on the metric and meaningfully improve it by changing the experience it offers to product teams.