January 29, 2020


SLO burn

Jamie Wilkinson offers a brief overview of SLOs, shares a practical guide to implementing sustainable SLO-based alerting for systems of any size, and outlines the tooling required to keep a system observable in the absence of cause-based alerting.

Talk Title SLO burn
Speakers Jamie Wilkinson (Google)
Conference O’Reilly Velocity Conference
Conf Tag Building and maintaining complex distributed systems
Location New York, New York
Date October 1-3, 2018
URL Talk Page
Slides Talk Slides
Video

As systems grow, they gain more components and more ways to fail. Alerts designed for the previous iteration of the system can slowly "boil the frog," until the SRE team suddenly finds it has no time left to address scaling problems because it is constantly firefighting. Alert fatigue sets in, and the team burns out. Maintenance work will naturally keep increasing as the system grows. To make alerting sustainable, page on symptoms rather than causes, and even then only when the symptom exceeds a declared acceptable threshold: the SLO and its complement, the error budget. Even at Google scale, many teams have yet to change their monitoring to realize SLO-based alerts, but systems don't need to be the size of a planet to benefit from these patterns.

Jamie Wilkinson offers a brief overview of SLOs and shares a practical guide to implementing sustainable SLO-based alerting for systems of any size. Whether you're on call for 10 machines or 10 data centers, you'll find something of value as Jamie, a well-rested champion of work-life balance, demonstrates how to select service level objectives and construct robust, low-maintenance alerting rules, using Prometheus for a live demonstration. You'll also explore the tooling required to keep such a system observable in the absence of noisy cause-based alerts, now that alerts no longer tell you exactly which components are failing.
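To give a concrete flavor of the kind of symptom-based rule the talk describes (this sketch is not taken from the slides), here is a minimal Prometheus alerting rule for an error-budget burn alert. It assumes a hypothetical service that exposes an http_requests_total counter with a code label and an availability SLO of 99.9%, so the error budget is 0.1%; the 14.4x multiplier and the paired 1-hour/5-minute windows follow the common multiwindow burn-rate pattern.

```yaml
groups:
  - name: slo-burn
    rules:
      - alert: HighErrorBudgetBurn
        # Hypothetical metric and label names; assumes a 99.9% availability SLO
        # (error budget = 0.001). Pages when the error ratio over the last hour
        # is consuming the 30-day budget at 14.4x the sustainable rate, and the
        # 5-minute ratio confirms the burn is still ongoing.
        expr: |
          (
            sum(rate(http_requests_total{code=~"5.."}[1h]))
              / sum(rate(http_requests_total[1h]))
          ) > (14.4 * 0.001)
          and
          (
            sum(rate(http_requests_total{code=~"5.."}[5m]))
              / sum(rate(http_requests_total[5m]))
          ) > (14.4 * 0.001)
        for: 2m
        labels:
          severity: page
        annotations:
          summary: "Error budget burning too fast (symptom-based alert)"
```

The point of a rule like this is that it fires on the user-visible symptom (too many failed requests against the budget) rather than on any particular failing component, which is why the supporting tooling for investigation matters once the noisy cause-based alerts are gone.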
