Architecting for speed
Abraham Marin-Perez, Equal Experts Alumnus

Tue 6th June, 2017

Engineering systems for a faster build

Speed is one of the key factors in any successful endeavour. But we tend to focus on the speed and performance of running applications, forgetting that speed matters just as much when building them.

In the era of Continuous Integration and Continuous Deployment, big applications are creating big, often bloated build pipelines. These delay feedback to developers and limit a business’s ability to react to events.

The right idea; the wrong approach

Companies are beginning to realise this threat and are acting on it. Unfortunately, many of the approaches we’re seeing are rather idealistic concepts – ones which sound good on paper but don’t usually deliver the expected value.

One example is that of teams rewriting their entire stack into a microservices architecture, thinking that smaller components will be faster to build. However, they fail to realise that once the system grows big enough, those microservices will have shared components and interdependencies that will slow the build down.

Another is that of teams switching to experimental build tools like Twitter’s Pants or breaking code encapsulation with monorepositories. While these do provide some benefits, as with any tool they come with disadvantages and the trade-off may not always be worth it.

Back to basics

There is a simpler way to keep things fast, and one which is easily spotted if we just understand the root of the problem. The whole idea of a Continuous Integration pipeline is that, whenever a change is made, everything that is impacted by that change is rebuilt so as to ensure that we are always up to date.

It follows that the real problem arises when code becomes so entangled that every single change impacts large portions of the system, meaning there’s a lot to rebuild.

The solution is therefore simple (in principle!): just reshape the architecture of your code so that code changes affect a smaller portion of the overall system. In turn, only a smaller portion needs to be rebuilt, resulting in shorter build times.

For instance, if you have a library that is used by several other components, every time you modify that library you’ll have to rebuild all the dependent components. If, however, you separate that library into its API and its implementation, you can reduce its impact: when a change to the implementation doesn’t alter the published behaviour, the dependents aren’t affected and you won’t need to rebuild them, as sketched below.
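
As a concrete illustration of that split (a minimal sketch using Gradle’s Kotlin DSL; the module names billing-api, billing-impl and checkout-service are invented for the example, not taken from any particular project), consumers compile only against the API module, while the implementation is wired in at runtime:

    // settings.gradle.kts — one library split into API and implementation,
    // plus one consumer
    include("billing-api", "billing-impl", "checkout-service")

    // checkout-service/build.gradle.kts
    plugins {
        java
    }

    dependencies {
        // Compile only against the stable API module.
        implementation(project(":billing-api"))
        // The implementation is only needed at runtime, so a change inside
        // billing-impl that leaves billing-api untouched doesn't force
        // checkout-service to be recompiled.
        runtimeOnly(project(":billing-impl"))
    }

Whether the split pays off depends on how stable the API really is: if the interface churns as often as the implementation, every change will still ripple out to the dependents.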

Classic approaches can trump new fads

Obviously, IT is a very fast-paced industry. New ideas and technologies come up every day and we need to make sure we evaluate them so as to keep up. However, the fact that new things appear doesn’t mean we always need to drop everything that came before them.

In the end, the oldest trick in the book is still one of the most effective: if you want things to run smoothly, tidy up appropriately as you go. Not only will you save yourself a lot of pain, it’s also the only way to keep that all-important performance up.

Abraham Marin-Perez will be exploring this topic in more detail with his talk Architectural Patterns for an Efficient Delivery Pipeline at Better Software West, on Wednesday, June 7, 2017.