## About Rootconf Hyderabad
Rootconf Hyderabad is a platform for:
- DevOps engineers
- Site Reliability Engineers (SRE)
- ML and data engineers
- Security and DevSecOps professionals
- Software engineers
to discuss real-world problems around:
- Site Reliability Engineering (SRE)
- Data and AI engineering
- Distributed systems -- observability, microservices
- Implementing Infrastructure as Code
Speakers from Flipkart, Hotstar, Intuit, GO-JEK, MadStreetDen and Trusting Social will share their experiences with the above challenges.
Rootconf Hyderabad will be held at T-Hub, IIIT-Hyderabad Campus, Gachibowli, Hyderabad, Telangana - 500032
For bulk ticket purchases, sponsorship and other inquiries, contact firstname.lastname@example.org or call 7676332020
For information about the event, tickets (bulk discounts automatically apply on 5+ and 10+ tickets) and speaking, call Rootconf on 7676332020 or write to email@example.com.
## Achieving repeatable, extensible and self-serve infrastructure at Gojek
Since our early days, GO-JEK has grown rapidly into a community of more than one million drivers handling 3 million+ orders every day. To keep supporting this growth, hundreds of microservices run and communicate across multiple data centers to serve the best possible experience to our customers.
In this post, we’ll talk about our approach to Infrastructure as Code, which simplifies the maintenance of an increasingly complex microservices architecture, and about how we have enabled developers to maintain their own infrastructure by giving them the tooling to do so, in order to keep up with our increasing scale.
Building infrastructure is, without a doubt, a complex problem that evolves over time. Maintainability, scalability, observability, fault tolerance, and performance are some of the aspects that demand improvement over and over again.
One of the reasons it is so complex is the need for high availability. Most components are deployed as clusters, with hundreds of microservices and thousands of machines running. As a result, no one knew what the managed infrastructure looked like, how the running machines were configured, what changes had been made, or how networks were connected to each other. In a nutshell, we lacked observability into our infrastructure, and when there was a failure in the system, it was hard to tell what could have brought it down.
Time and again, we were swamped with service requests such as:
- creating an Elasticsearch (ES) cluster
- creating a RabbitMQ cluster
- creating VMs with boilerplate for the type of application they would host
- increasing the disk size for a box
- creating a Postgres master/slave setup
These requests made us realise that we were effectively becoming the bottleneck for developers and their pace of going from ideation to production. Our goals were:
- reduce developer toil
- automate repetitive tasks
- have a central config for managing infrastructure, for improved auditability and repeatability
- move to a self-serve model for infrastructure, removing the bottleneck from the systems team
- have templates for the infrastructure that gets created day to day, automating the systems person out of the creation process
We had been using Terraform for our IaC in bits and pieces for a while, but what we were lacking was structure and consistency. Different teams had different repositories. Modules were all over the place, or buried inside the projects themselves. They were complex, and there were lots and lots of bash scripts.
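One way to impose that structure is to keep reusable Terraform modules in a single versioned repository and have each team's project reference them, rather than copying resources around. A minimal sketch (the repository URL, module name, and variables below are illustrative, not Gojek's actual layout):

```hcl
# Illustrative only: a project consumes a shared, versioned module
# instead of keeping its own copy of the resources.
module "orders_es_cluster" {
  # Pin the module to a tagged release of the shared modules repo,
  # so a module change must be released before projects pick it up.
  source = "git::https://github.com/example-org/terraform-modules.git//elasticsearch?ref=v1.2.0"

  cluster_name = "orders-es"
  node_count   = 3
  disk_size_gb = 500
}
```

Pinning `ref` to a tag means every change to a shared module goes through a release and review before any project adopts it, which gives all teams the same, auditable building blocks.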
Creating infrastructure manually and maintaining it was challenging and error-prone. We needed to switch from updating our infrastructure by hand to embracing Infrastructure as Code and running it inside our CI/CD platform.
Infrastructure As Code allows you to take advantage of all software development best practices. You can safely and predictably create, change, and improve infrastructure. Every network, every server, every database, every open network port can be written in code, committed to version control, peer-reviewed, and then updated as many times as necessary.
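As a concrete Terraform sketch of that idea (the provider and resource names are illustrative, not taken from the talk), an open network port becomes a reviewable piece of code rather than a console click:

```hcl
# Illustrative only: an open network port expressed as code.
# This file lives in version control and goes through peer review
# like any other change.
resource "google_compute_firewall" "allow_https" {
  name    = "allow-https"
  network = "default"

  allow {
    protocol = "tcp"
    ports    = ["443"]
  }

  source_ranges = ["0.0.0.0/0"]
}
```

Running `terraform plan` then shows exactly what would change before anything is applied, which is what makes changes safe and predictable.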
Project Olympus is the GOJEK infrastructure engineering team's initiative to solve these problems and achieve the goals mentioned above.
We achieved a self-serve model for infrastructure with Proctor (https://github.com/gojek/proctor), our automation orchestrator. Using it, developers can get the infrastructure they want for themselves, with the tooling abstracting away the work we previously did by manual intervention: adding boilerplate, security tools and packages, setting up logging and monitoring, creating DNS entries for services, creating components such as RabbitMQ and ES clusters, increasing disk sizes, and so on.
Architecture and discussion around:
- Olympus, which hosts our cloud infrastructure config
- Terraform module structuring, and how we run it on our CI/CD platform
- how Proctor (https://github.com/gojek/proctor) helps us achieve the self-serve model for infrastructure
Tasdik is a Product Engineer at Gojek, where he works with the systems team. He is a contributor to oVirt (under Red Hat), and before Gojek he was part of the early systems team at Razorpay. He has presented talks at national and international conferences including PyCon Taiwan, DevOpsDays India and DevConf India. A full list of his talks and slides/videos can be found at https://tasdikrahman.me/speaking/.