Principles & Practices for Running Large Scale Kubernetes Clusters
Submitted by Krishnaswamy Subramanian (@jskswamy) on Saturday, 30 November 2019
Section: Full talk (40 mins) Category: Systems engineering Status: Under evaluation
With the world moving towards containerisation, Kubernetes has become one of the de facto standards. Kubernetes lets you deploy applications in ways that are highly available and resilient, and can utilise the underlying resources more efficiently. Though there are many hosted Kubernetes solutions in the market, the nitty-gritty of running the clusters still lies in the hands of the infrastructure team. Some of these cases include:
1. setting up org-wide federated clusters for multiple business units and managing service discovery
2. running stateful applications
3. continuous deployment of services in different environments
Even if we go with hosted Kubernetes, things could still go wrong, hence the need for monitoring and logging.
Last but not least is the maintenance and upgrade of clusters and their components with minimal downtime, in addition to the support from the hosted provider (if we have one).
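As a concrete illustration of catching failures before users do, here is a minimal sketch of liveness and readiness probes on a Deployment. The names, image, paths, and ports are all hypothetical, chosen only to show the shape of the configuration:

```yaml
# Hypothetical Deployment snippet: probes let the cluster detect
# and react to unhealthy pods automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-api            # hypothetical name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-api
  template:
    metadata:
      labels:
        app: example-api
    spec:
      containers:
      - name: api
        image: example/api:1.0   # hypothetical image
        ports:
        - containerPort: 8080
        livenessProbe:           # restart the container if this fails
          httpGet:
            path: /healthz       # assumed health endpoint
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 15
        readinessProbe:          # stop routing traffic until this passes
          httpGet:
            path: /ready         # assumed readiness endpoint
            port: 8080
          periodSeconds: 5
```

A failing liveness probe triggers a container restart, while a failing readiness probe merely removes the pod from Service endpoints, which is why the two are kept separate.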
In this talk we will cover things to take care of when:
- Setting up Kubernetes clusters
- Running stateful applications on Kubernetes
- CI/CD practice for application deployment
- Building containerisable applications
- Monitoring and alerting for every component
- Setting up Federated clusters
- Doing upgrades and maintenance
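On the last point, one building block for upgrades with minimal downtime is a PodDisruptionBudget, which limits how many replicas a node drain may evict at once. A minimal sketch (the name and label are hypothetical; `policy/v1beta1` was the current API group at the time of writing):

```yaml
# Hypothetical PodDisruptionBudget: during a node drain or rolling
# upgrade, voluntary evictions are blocked whenever fewer than two
# matching replicas would remain available.
apiVersion: policy/v1beta1
kind: PodDisruptionBudget
metadata:
  name: example-api-pdb      # hypothetical name
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: example-api       # hypothetical label on the guarded pods
```

With this in place, `kubectl drain` on a node will wait rather than evict pods past the budget, keeping the application available while the node is upgraded.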
Aswin Karthik S
I am a passionate programmer who loves solving interesting challenges with code.
I am a DevOps enthusiast with experience in a wide variety of tools and technologies, starting from setting up bare-metal servers, to using configuration management tools like Chef, and progressing to containerising applications on Kubernetes.
As a geek, I try to solve every problem I encounter with command-line tools, and I have created several open-source CLI tools which evolved as solutions to these problems. I also love doing live tech demos on any given topic.