A journey through Cosmos to understand users.
This talk covers the journey of building a cloud-native user feedback system for Inmobi DSP. The scale of the challenge, and the value of sharing these learnings, can be appreciated by noting that a typical DSP processes anywhere from 250,000 to 1,000,000 queries per second, with average response times under 50 milliseconds. To make intelligent decisions in such a high-throughput, low-latency system, the supporting user store needs to be highly scalable, extremely cost-conscious, and reliable. Building such a system in a cloud-native setting yielded many learnings, both empirical and theoretical, which will be the focus of this talk. The major headlines of the talk will be:
1. Understanding the factors that drive the cost of such a system, how to minimize operational costs, and how to add the intelligence needed for auto-scaling.
2. How to implement multi-version concurrency control.
3. The need for in-flight abbreviated compression and how to achieve it with minimal overhead.
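To make the cost headline above concrete, here is a back-of-envelope model of provisioned throughput for a Cosmos DB-backed store. The RU charges, headroom factor, and traffic figures are illustrative assumptions only, not Inmobi's actual numbers; real RU costs depend on item size, indexing policy, and consistency level. The 10% floor reflects Cosmos DB autoscale, which bills the highest RU/s used each hour with a minimum of 10% of the configured maximum.

```python
# Back-of-envelope Cosmos DB cost model. All numbers are illustrative
# assumptions for the sketch, not measured or official figures.

def provisioned_rus(read_qps, write_qps, read_ru=1.0, write_ru=5.0, headroom=1.3):
    """Peak RU/s to provision, with headroom for traffic bursts."""
    return (read_qps * read_ru + write_qps * write_ru) * headroom

def autoscale_saving(max_rus, avg_utilization):
    """RU/s saved by autoscale versus static provisioning: autoscale
    bills actual usage per hour, with a floor at 10% of the max."""
    billed = max(0.10, avg_utilization) * max_rus
    return max_rus - billed

peak = provisioned_rus(read_qps=500_000, write_qps=50_000)
print(f"peak RU/s to provision: {peak:,.0f}")
print(f"RU/s saved at 30% average utilization: {autoscale_saving(peak, 0.30):,.0f}")
```

The takeaway the talk expands on: writes dominate RU spend at DSP traffic shapes, so cost work starts with the write path, and autoscale only pays off when utilization dips well below peak.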
The topics we will be covering in this talk:
1. Introduction - brief business context to motivate the problem and the challenges involved.
2. The factors driving the decision to choose Cosmos DB as our backend store.
3. Key insights into what drives the cost of the store, and the various gotchas involved when designing such a system.
4. How to optimize cost and add the intelligence needed for auto-scaling.
5. The need for multi-version concurrency control and how to implement it to enable parallel writes with multiple schema versions of the same record.
6. The tradeoff between readability and storage cost, and how to get the best of both worlds by building an Avro library to enable in-flight abbreviated compression.
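Topic 5 can be sketched in a few lines. This is a toy in-memory model, not the actual Inmobi implementation: each record keeps a list of versions, each tagged with the schema version that wrote it, and writers append via a compare-and-swap on the record version so that concurrent writers on different schema versions never silently overwrite each other.

```python
import threading

class VersionedUserStore:
    """Toy multi-version store: user_id -> list of
    (version, schema_version, payload) tuples. Illustrative sketch only;
    all names are hypothetical."""

    def __init__(self):
        self._lock = threading.Lock()
        self._data = {}

    def read_latest(self, user_id):
        versions = self._data.get(user_id, [])
        return versions[-1] if versions else None

    def write(self, user_id, payload, schema_version, expected_version):
        """Append a new version only if the caller saw the latest one
        (compare-and-swap); a loser re-reads, merges, and retries."""
        with self._lock:
            versions = self._data.setdefault(user_id, [])
            current = versions[-1][0] if versions else 0
            if current != expected_version:
                return False
            versions.append((current + 1, schema_version, payload))
            return True

store = VersionedUserStore()
store.write("u1", {"segments": [1, 2]}, "v1", expected_version=0)
# A racing writer on schema v2 that also read version 0 is rejected:
store.write("u1", {"segments": [3]}, "v2", expected_version=0)
```

In a real store the compare-and-swap would be a conditional write (e.g. an ETag-checked replace) rather than an in-process lock, but the version bookkeeping is the same idea.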
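The readability-versus-storage tradeoff in topic 6 comes down to where the field names live. JSON repeats them in every record; Avro's binary encoding keeps them in the schema and writes only values, in schema order. The sketch below hand-rolls the two Avro primitives involved (zigzag varint longs and length-prefixed strings) to show where the size win comes from; it is a minimal illustration, not the library the talk describes.

```python
import json

def zigzag_varint(n: int) -> bytes:
    """Avro's long encoding: zigzag-map the sign, then base-128 varint."""
    z = (n << 1) ^ (n >> 63)
    out = bytearray()
    while True:
        b = z & 0x7F
        z >>= 7
        if z:
            out.append(b | 0x80)  # high bit set: more bytes follow
        else:
            out.append(b)
            return bytes(out)

def encode_record(schema_fields, record) -> bytes:
    """Write fields in schema order with no names or tags: the reader
    needs the same schema to decode, which is the core tradeoff."""
    out = bytearray()
    for name, ftype in schema_fields:
        value = record[name]
        if ftype == "long":
            out += zigzag_varint(value)
        elif ftype == "string":
            data = value.encode("utf-8")
            out += zigzag_varint(len(data)) + data
    return bytes(out)

# Hypothetical user record for illustration:
schema = [("user_id", "string"), ("last_seen", "long"), ("score", "long")]
rec = {"user_id": "u-42", "last_seen": 1700000000, "score": 7}
payload = encode_record(schema, rec)
assert len(payload) < len(json.dumps(rec))  # binary is a fraction of the JSON size
```

Because the record is unreadable without its schema, schema versions have to travel with the data, which is exactly why this topic pairs with the multi-version concurrency control work in topic 5.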
Tech lead at Inmobi; MSc in Computer Systems from the Indian Institute of Science.
I was part of the group that experimented with and conceptualized the design of the user inference systems for Inmobi DSP. Over the past 4 years, my work has involved understanding user data at Inmobi and building large-scale systems that provide inferences for intelligent ad serving. This work spans large-scale stream processing systems, ML pipelines for inference, and various big data applications.