Building an organization-specific Kubernetes container PaaS
Cloud-native, microservice architectures bring a number of benefits to organizations, and containers, along with orchestration tools like Kubernetes, have been at the forefront of enabling these architectures.
Our organization is one of those that claim to be ‘born in the cloud’, and as such we employ a microservice architecture for most of our applications.
Seeing the multitude of benefits that containers have to offer, we embarked on a quest to set up a production-ready Kubernetes deployment framework to host our workloads.
The rapid adoption of Kubernetes in the container ecosystem has led to multiple deployment models, ranging from self-managed Kubernetes clusters to completely managed PaaS offerings.
Given the variety of options, it can be difficult to select the one that best suits your organisation’s requirements.
Apart from selecting a Kubernetes deployment model, there are multiple components critical to a production cluster that need consideration; for example, an image registry along with robust monitoring and logging tools are components that ensure higher availability of the workloads.
You might also find yourself asking questions like the following:
What CI/CD tools should we use?
How do we enable developers to embrace the new container-based architecture?
How do we keep infrastructure costs in check after migrating workloads to Kubernetes?
These are just a few of the questions that organizations embracing a container-based architecture will have to answer.
As an organization that values infrastructure/deployment flexibility and has a dedicated DevOps team, we found that none of the existing Kubernetes deployment models met our requirements, so we decided to build our own custom solution.
In this session, we will walk through the various design choices we made while building an organization-specific Kubernetes container PaaS
and address issues surrounding the deployment of a Kubernetes environment in production.
It takes a lot of effort and consideration to run workloads on a Kubernetes platform.
I plan to share our learnings and experiences to help guide others through the process of putting Kubernetes to work in production
in their organization.
Shortcomings of our existing VM-based deployment framework:
- Limitations of current CI/CD pipelines
- Inefficient compute resource utilization
- Lack of efficient self-healing mechanisms
- On-boarding new applications was time consuming
- Lack of GitOps, infrastructure-as-code, and immutable architecture practices
- Lack of a centralised monitoring/alerting tool
Goals for the new Kubernetes platform:
- Set up a standard framework for running workloads in Kubernetes
- On-board/migrate workloads into Kubernetes with ease
- Enable rapid deployments via improved CI/CD pipelines
- Help developers easily adopt the new container-based ecosystem by reducing complexity
- Increase DevOps/developer productivity by reducing on-call issues
- Maintain cost efficiency (costs should not increase by more than 10% after migrating workloads)
- Incorporate GitOps, infrastructure-as-code, and immutable architecture practices
Comparison between Kubernetes deployment models:
- Self-Hosted Kubernetes Clusters
- Managed Kubernetes Clusters
- CaaS (Container as a Service)
- PaaS (Platform as a Service)
- Custom, Self-Hosted PaaS
Design patterns for a Custom, Self-Hosted PaaS:
- Organization-specific business requirements
- Image registry (we will discuss why choosing the right image registry is important)
- Monitoring and alerting (we will cover various K8s monitoring/alerting tools for collecting K8s cluster and APM metrics)
- Log analytics platform (we will cover various log analytics tools for collecting K8s cluster and container logs)
- CI/CD (we will discuss application deployment strategies and the various CI/CD tools we considered)
- Platform observability (we will talk about the advantages of having a K8s dashboard and how it helped our developers easily adopt our Kubernetes platform)
- Auto-scaling (we will discuss auto-scaling mechanisms like HPA and Cluster Autoscaler)
- Cost optimization (we will talk about how we run workloads on AWS Spot Instances to reduce infrastructure costs)
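As a sketch of the HPA mechanism mentioned above, a minimal HorizontalPodAutoscaler manifest might look like the following. This is an illustrative example only; the deployment name, replica bounds, and CPU threshold are hypothetical and not drawn from our actual platform.

```yaml
# Hypothetical example: scale a web deployment based on average CPU utilization.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical deployment name
  minReplicas: 2           # keep at least two pods for availability
  maxReplicas: 10          # cap scale-out to bound infrastructure cost
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods when average CPU exceeds 70%
```

Pairing an HPA like this with Cluster Autoscaler lets pod-level scaling drive node-level scaling, which is where spot-instance node groups can further reduce cost.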
Future plans for improving our Kubernetes PaaS:
- Secret management
- Per-pod routing via a service mesh (e.g. Istio)
- Security enhancements
- Automated canary analysis with CD
- Centralised dashboards for high Kubernetes platform visibility
This session is for anyone who needs to set up a ‘Kubernetes container platform’ in production, or who has already done so and is looking for ways to enhance and improve their existing platform.
Basic knowledge of Kubernetes.
Basic understanding of CI/CD and monitoring tools.
I am a Kubernetes-certified (CKA) DevOps Engineer working at Moonfrog Labs.
I have over 7 years of experience and have worked with a variety of technologies in both service-based and product-based organisations.
For the past 2.5 years, I have been exploring technology in gaming at its best at Moonfrog Labs.
I was one of the speakers at Rootconf and AWS Community Day (Bangalore 2019), where I gave a talk on ‘Log Analytics with ELK Stack (Architecture for aggressive cost optimization and infinite data scale)’.
How do I spend my free time?
Well, I like learning new technologies and playing PC games.
- Log Analytics with ELK Stack (Rootconf 2019): https://docs.google.com/presentation/d/1PiLsy-UwkB1IYhorTq6KWYTCmcuVg2OOK-8D92tlFQs/edit?usp=sharing
- Log Analytics with ELK Stack (AWS community day 2019): https://docs.google.com/presentation/d/1KWISUxRmhljon8NY-dhO9kIcS4zFcKnw_FjDcc8LNOg/edit?usp=sharing