Virtuous Cycles: Enabling SRE via automated feedback loops
Submitted by Aaditya Talwai (@talwai) on Thursday, 30 May 2019
Section: Full talk of 40 mins duration. Session type: Lecture. Technical level: Intermediate. Status: Confirmed & Scheduled
Automating common operational procedures - like increasing capacity, expiring data, or evening out load on a system - is the bread-and-butter of many SRE teams. Operator nirvana is having apps that can heal themselves, without human intervention - but most SRE teams will accept some toil as an inevitable part of their lives. This is because some procedures are too risky to automate and too costly to get wrong. How do you build confidence that your "self-healing" system will not accidentally shoot itself in the foot while in production?
In pictures, we will show a journey of instrumentation - how you can use app-level telemetry and tracing to build confidence that your auto-remediation strategies are doing the right thing. Case studies include:
- Intelligent query timeouts that allow loaded workers to recover
- A backoff-and-jitter system for mitigating thundering herds on an internal service
- A watermark-based quota system for shaping traffic on a multitenant cluster
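To give a flavor of the backoff-and-jitter case study, here is a minimal sketch of "full jitter" exponential backoff - retries sleep for a random duration drawn from a window that doubles on each attempt, which spreads out retry traffic so clients do not stampede a recovering service in lockstep. The function name and parameters below are illustrative, not the actual system presented in the talk:

```python
import random
import time


def call_with_backoff(fn, max_attempts=5, base_delay=0.1, max_delay=5.0):
    """Retry fn with exponentially growing, fully jittered sleeps.

    "Full jitter" picks a uniform delay in [0, min(max_delay, base * 2**attempt)].
    Randomizing the whole window (rather than adding a small jitter on top of a
    fixed delay) decorrelates retries across clients, avoiding thundering herds.
    """
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the last error
            cap = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(random.uniform(0, cap))
```

The interesting observability question - which the talk explores - is how you verify, from telemetry, that a scheme like this is actually flattening the retry spike rather than just delaying it.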
We will show that, using open-source tooling and good observability practices, you can turn an opaque, operationally taxing part of your system into a well-behaved component that remediates itself. We take a very visual approach to telling these stories - so expect graphs, and lots of them!
Ultimately, we want to give the audience a framework and strategy for answering these questions:
- Is an ops procedure worth automating?
- How do you get good feedback from your application's internal telemetry?
- How do you use this feedback to drive auto-remediation?
- And most importantly, how do you experiment with all of this without breaking production? :)
Some prior knowledge of operating distributed systems is assumed.
Aaditya Talwai is a Site Reliability Engineer at Confluent and former Lead Software Engineer at Datadog. His work has focused on large-scale monitoring systems and the words, pictures, and tools we use to tell stories about our software systems. At Datadog, he helped architect a cloud-scale distributed tracing and APM tool, bringing together the three pillars of observability - metrics, traces, and logs. At Confluent, he works on a unified cloud platform for event streaming, including the observability and automation strategies needed to guarantee a highly available, elastic, multitenant cluster. He is enthusiastic about helping SRE teams understand their systems and deploy apps that heal themselves, through great observability practices and a culture of experimentation.