SRE Conf 2023

Availability and reliability 24/7 - the SRE life

Sreeram Venkitesh

@sreeramvenkitesh

Scaling neetoDeploy from zero to production - Building, maintaining and optimizing our cloud deployment platform

Submitted Oct 27, 2023

At BigBinary, we’ve been building neetoDeploy for the past year.

We were running PR review apps for around 25-30 projects on Heroku. Last year, when Heroku announced that they were getting rid of their free plans, we started off by building a platform to deploy PR review apps. We kept the date Heroku planned to remove the free plans as our deadline and quickly put together our platform on top of Kubernetes so that we could migrate all our apps off Heroku. We finished well before the deadline and spent the remaining time fixing bugs and stabilizing the platform. We architected an idle mechanism for the apps based on the network requests each service receives: if an app is not accessed for 5 minutes, it gets scaled down and is only brought back up when it is accessed again. This brought down our costs substantially.
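To make the idea concrete, here is a minimal sketch of such an idle-scaling loop, assuming the official kubernetes Python client and a hypothetical per-app store of last-request timestamps; neetoDeploy's actual controller may be implemented differently.

    import time

    from kubernetes import client, config

    # Hypothetical idle window: scale an app down after 5 minutes without traffic.
    IDLE_TIMEOUT = 5 * 60

    config.load_incluster_config()  # the controller runs inside the cluster
    apps = client.AppsV1Api()

    def scale(name: str, namespace: str, replicas: int) -> None:
        # Patch only the replica count of the app's Deployment.
        apps.patch_namespaced_deployment_scale(
            name=name,
            namespace=namespace,
            body={"spec": {"replicas": replicas}},
        )

    def reconcile(last_request_at: dict[str, float]) -> None:
        # last_request_at maps "namespace/name" to the unix time of the app's
        # most recent request (fed by the platform's proxy layer; assumed here).
        now = time.time()
        for key, seen_at in last_request_at.items():
            namespace, name = key.split("/", 1)
            if now - seen_at > IDLE_TIMEOUT:
                scale(name, namespace, 0)  # idle: scale the app down to zero
        # Scaling back up is triggered when a request arrives for an app
        # whose Deployment is currently at zero replicas.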

Once we had nailed PR review apps, we started experimenting with staging and production app deployments. Since the basic functionality was already in place, we were able to bring it together easily. With this, we moved all of BigBinary’s internal staging deployments to neetoDeploy. One of the major uses of the staging deployments was running Cypress tests against them every day.

After we started using our platform to deploy staging apps, we began facing a lot of stability issues with existing features. We had to rebuild and re-architect several features we had already implemented, a bit like building the Ship of Theseus. We went back to the drawing board and designed a new, more efficient way of streaming logs. We set up the cluster autoscaler to handle load and overprovisioned the cluster ever so slightly based on the existing deployments, so that new deployments never have to wait for the cluster to scale up, resulting in fast and seamless deployments. We also moved from an external Docker registry to our own registry hosted inside the Kubernetes cluster to bring down network costs and latency.
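As an illustration of the slight overprovisioning mentioned above, one common pattern is a low-priority placeholder Deployment of pause pods that reserves headroom real apps can preempt instantly, while the cluster autoscaler adds nodes in the background. The sketch below sets this up with the kubernetes Python client; the priority value, pod sizes, replica count and namespace are illustrative assumptions, not neetoDeploy's actual configuration.

    from kubernetes import client, config

    config.load_kube_config()

    # A PriorityClass with a negative priority, so every real workload can
    # preempt the placeholder pods as soon as it needs the capacity.
    client.SchedulingV1Api().create_priority_class(
        body=client.V1PriorityClass(
            metadata=client.V1ObjectMeta(name="overprovisioning"),
            value=-10,
            global_default=False,
            description="Placeholder pods that hold spare cluster capacity",
        )
    )

    # A Deployment of pause containers sized to the headroom we want to keep.
    # When these pods are preempted, the cluster autoscaler sees the pending
    # placeholders and scales the node pool back up.
    placeholder = client.V1Deployment(
        metadata=client.V1ObjectMeta(name="capacity-reservation"),
        spec=client.V1DeploymentSpec(
            replicas=2,
            selector=client.V1LabelSelector(match_labels={"app": "capacity-reservation"}),
            template=client.V1PodTemplateSpec(
                metadata=client.V1ObjectMeta(labels={"app": "capacity-reservation"}),
                spec=client.V1PodSpec(
                    priority_class_name="overprovisioning",
                    containers=[
                        client.V1Container(
                            name="pause",
                            image="registry.k8s.io/pause:3.9",
                            resources=client.V1ResourceRequirements(
                                requests={"cpu": "500m", "memory": "512Mi"}
                            ),
                        )
                    ],
                ),
            ),
        ),
    )
    client.AppsV1Api().create_namespaced_deployment(
        namespace="kube-system", body=placeholder
    )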

The past year has been a rollercoaster ride of learning and experimenting. Working on and maintaining neetoDeploy has taught me a lot of lessons the hard way, and I’ve come to understand what SRE means in a project of this scale. We wrote a bunch of blog posts about it along the way too.

This is the neetoDeploy story - how we built a cloud deployment platform as a service from scratch and took it to production in a year.
