Closing the loop: managing production stability versus delivery velocity

People who come from a traditional IT environment will recognize the conflicts that occur when a release goes from development into production. Where the development and operations organisations are separate, there is a huge disconnect between the priorities of each party. Development, with customers breathing down their necks, wants to deliver features to production as fast as possible. Operations, keeping a beady eye on the extremely tight Service Level Agreements (SLAs) signed off by their bosses, would rather that nothing changed at all, because changes break things. Each considers the other needlessly difficult and contrary.

And then we discovered there was a better way

Breaking this pattern is one of the reasons devops came into being, a fact not lost on my employer when we adopted devops as a standard over four years ago. However, there is more to breaking down the barriers between dev and ops than simply putting them in one team. The essential dilemma is that feature delivery frequently does cause production disruption, either through unreliable delivery mechanisms or through technical debt in the code. Spotting these issues, and ensuring they get fixed, is what this blog article is about.

The need for feedback

To decide whether development effort should go into features or fixes, you need feedback on how your application is performing in production. That, in turn, requires agreement on what acceptable performance is. The primary tools for this are the application’s Service Level Objectives (SLOs) [1] and the Service Level Indicators (SLIs) linked to them. For every application, there should be agreement on how available the application should be. This is frequently expressed as a percentage availability, and people often talk about how many nines there are after the decimal point. However, it can also be expressed as the time the application is allowed to be down. For example, an application with an availability SLO of 99.99% can be down for roughly 52 minutes per year.
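The conversion from an availability percentage to an allowed downtime is simple arithmetic. As a sketch (the function name is mine, not from any particular monitoring stack):

```python
def allowed_downtime_minutes(slo_percent, period_minutes=365 * 24 * 60):
    """Return the minutes of downtime a given availability SLO permits
    over a period (default: one non-leap year)."""
    return period_minutes * (1 - slo_percent / 100)

# A 99.99% availability SLO allows roughly 52.6 minutes of downtime per year.
print(round(allowed_downtime_minutes(99.99), 1))
```

The same formula works for any window: plug in a month or a quarter as `period_minutes` if that is how your SLO is evaluated.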

Let’s look at this differently…

We can use this time as an error budget [2]. It might sound a bit awkward, but an SLO of 99.99% gives the devops team a budget of roughly 52 minutes of downtime per year. If incidents are rapidly using up that budget, development effort must shift from feature releases to solving the underlying causes of the downtime, whether in the code itself or in the Continuous Integration and Delivery infrastructure. The focus is then on implementing or improving automation in the pipeline, or on solving performance and reliability issues in the application code itself.
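As a minimal sketch of that decision rule (function names and the hard cut-off are my own, not a prescribed policy):

```python
YEARLY_MINUTES = 365 * 24 * 60

def error_budget_remaining(slo_percent, downtime_minutes):
    """Minutes of downtime still allowed this year under the SLO."""
    budget = YEARLY_MINUTES * (1 - slo_percent / 100)
    return budget - downtime_minutes

def should_shift_to_fixes(slo_percent, downtime_minutes):
    """Policy sketch: once the budget is spent, development effort
    moves from feature releases to fixing the causes of downtime."""
    return error_budget_remaining(slo_percent, downtime_minutes) <= 0
```

For example, a team with a 99.99% SLO that has already suffered an hour of incidents this year has overspent its budget and would stop feature work.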

Too much of a good thing however…

You would think that an application meeting its error budget with uptime to spare is a good thing. Not necessarily. Bad things can happen when a component in a larger whole is perceived to be more reliable than it is required to be: such components have a habit of being reused without accounting for the possibility that they might be unavailable. This is why it is important to artificially generate downtime on components that consistently exceed their SLOs. Doing so very quickly identifies downstream dependencies making unwarranted assumptions about availability, and helps the devops team see where further mitigation is required. Only then is it possible to create a truly resilient production architecture. An example of tooling that supports this is Netflix’s Chaos Monkey.
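Chaos Monkey works at the level of whole cloud instances; purely as an illustration of the principle, a wrapper that injects artificial failures into calls to an over-reliable dependency could look like this (all names here are hypothetical):

```python
import random

def with_injected_failures(service_fn, failure_rate=0.001, rng=random.random):
    """Wrap a dependency call so that it occasionally fails on purpose,
    flushing out callers that assume the dependency is always up."""
    def wrapped(*args, **kwargs):
        if rng() < failure_rate:
            raise ConnectionError("injected failure: dependency made unavailable on purpose")
        return service_fn(*args, **kwargs)
    return wrapped
```

Callers that crash when the wrapped function raises have been making exactly the unwarranted availability assumption described above, and need retry or fallback logic.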

Error budgets in the real world

Together with my colleagues, I implemented the practices described in this article with one of our clients. The traditional managed services contract with penalty clauses for SLA breaches was replaced; instead, the client agreed that responsibility for SLA breaches lies with all parties. Our team worked embedded in the development team to implement an error budget per application, and to work with the development specialists on making sure this budget was not breached. We also moved from a single SLA to SLOs per application, and ensured that the state of the SLOs was visible in monitoring to the development teams responsible for the applications in question. By adjusting these procedures, and using the embedded team to fully automate the CI pipelines, we achieved a feature delivery velocity of 2-3 production releases per day per application, without loss of application reliability.

In conclusion

The use of error budgets and monitoring feedback to balance application reliability against delivery velocity is a devops practice with wide applicability. It is less a technical fix than a best practice for collaboration that leads to faster, more dependable production releases. Adopting these practices does require deep commitment from all parties involved in the environment. Without customer buy-in at the contract level, implementing these changes is extremely difficult. Thankfully, the benefits are obvious enough that getting that buy-in should not be an issue.

How to implement:


  • Set realistic SLOs, SLIs and SLAs for each application in your environment.
  • Have a generic set of SLOs, SLIs and SLAs available for use by new applications.
  • Express the current level of SLO realisation in monitoring as an error budget.
  • Have the current error budget visible on dashboards for the teams responsible for the applications.
  • Make both development and operations responsible for meeting the error budget.
  • Set down the procedure for dealing with error budget breaches in the standard operating procedures for dev and ops. This should involve using dev resources for ops automation wherever possible.


How to fail:

  • Fail to get management buy-in at all levels.
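The dashboard visibility called for above can be reduced to a single number per application: the percentage of error budget consumed. A sketch (the function name and any alert threshold are my own choices):

```python
def budget_consumed_percent(slo_percent, downtime_minutes,
                            period_minutes=365 * 24 * 60):
    """Percentage of the error budget used so far; handy as a single
    per-application dashboard metric."""
    budget = period_minutes * (1 - slo_percent / 100)
    if budget == 0:
        # A 100% SLO has no budget at all; any downtime is a breach.
        return float("inf") if downtime_minutes > 0 else 0.0
    return 100 * downtime_minutes / budget
```

At 100% the budget-breach procedure from the standard operating procedures kicks in; teams may also choose to alert earlier, for example at 75%, to shift effort before the budget is fully spent.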

Further information

The procedures described in this article are discussed in much greater depth in the excellent book “Site Reliability Engineering: How Google Runs Production Systems” [3]. Chapters 1.3, 3, 4 and 6 describe in some depth the concepts touched upon in this article.

Originally published in a slightly modified form on the Mirabeau Blog.



[1, 2, 3] B. Beyer, C. Jones, J. Petoff and N. R. Murphy (eds.), Site Reliability Engineering: How Google Runs Production Systems, O’Reilly Media, 2016.