ML Infrastructure for Feed Recommendations at ShareChat
In this talk, we will describe ShareChat’s feed recommendation infrastructure in detail. We will delve into various ML infrastructure aspects, such as model training and serving, the design and development of our feature store, and feature computation pipelines. We will also share insights and lessons learned from building these large-scale, low-latency, fault-tolerant systems, along with the strategies we employ for disaster management and automated recovery.
Scale of our feed system
General outline of the recommendation pipeline
Model Training at ShareChat
- Incremental models
- Real-time models
- Ranker-related models
- Real-time model inference, model versioning, hot reloading, and the tools we use
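To make the hot-reloading topic concrete, here is a minimal sketch of the general pattern: a background thread polls a model registry for a newer version and atomically swaps the in-memory model, so inference requests never wait on a reload. This is a generic illustration, not ShareChat’s actual implementation; the `registry.latest()` interface is hypothetical.

```python
import threading
import time

class HotReloadingModelServer:
    """Generic hot-reload pattern: poll a registry for newer model
    versions and swap the in-memory model atomically, so predict()
    never blocks on a reload."""

    def __init__(self, registry, poll_interval_s=1.0):
        self._registry = registry            # hypothetical: .latest() -> (version, model)
        self._model, self._version = None, -1
        self._lock = threading.Lock()
        self._poll_interval_s = poll_interval_s
        self._reload_if_newer()              # load the initial version eagerly

    def _reload_if_newer(self):
        version, model = self._registry.latest()
        if version > self._version:
            with self._lock:                 # atomic swap of (model, version)
                self._model, self._version = model, version

    def start_polling(self):
        def loop():
            while True:
                time.sleep(self._poll_interval_s)
                self._reload_if_newer()
        threading.Thread(target=loop, daemon=True).start()

    def predict(self, features):
        with self._lock:                     # read a consistent (model, version) pair
            model, version = self._model, self._version
        return model(features), version      # tag each prediction with its model version
```

Tagging every prediction with the serving model version is what makes later debugging and A/B attribution possible when several versions are live during a rollout.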
Feature store:
- Initial version of the feature store at ShareChat: what we tried and what didn’t work
- Current structure
Feature computation pipelines:
- Batch and real-time pipelines in detail
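The batch/real-time split can be sketched with a toy feature: a per-user click-through rate computed two ways. The batch job recomputes it from the full event log (e.g. daily), while the streaming version updates it incrementally per event so the feature store stays fresh. The feature and event schema here are illustrative assumptions, not ShareChat’s actual pipeline.

```python
from collections import defaultdict

def batch_ctr(events):
    """Batch pipeline: recompute per-user CTR from the full event log."""
    clicks, views = defaultdict(int), defaultdict(int)
    for user, kind in events:
        if kind == "view":
            views[user] += 1
        elif kind == "click":
            clicks[user] += 1
    return {u: clicks[u] / views[u] for u in views}

class RealTimeCTR:
    """Real-time pipeline: maintain the same feature incrementally,
    the way a stream processor would, one event at a time."""
    def __init__(self):
        self.clicks, self.views = defaultdict(int), defaultdict(int)

    def on_event(self, user, kind):
        if kind == "view":
            self.views[user] += 1
        elif kind == "click":
            self.clicks[user] += 1

    def ctr(self, user):
        return self.clicks[user] / self.views[user] if self.views[user] else 0.0
```

Keeping both paths computing the same definition of the feature is the key discipline: the batch output can then periodically correct any drift in the streaming counters.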
Feature logging and model monitoring
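A minimal sketch of what feature logging typically looks like: the exact feature values seen at inference time are written out alongside the prediction and a request id, so training examples can later be joined on that id instead of recomputing features (a common guard against training/serving skew). The function and field names are hypothetical, not ShareChat’s schema.

```python
import json
import time

def predict_and_log(model, request_id, features, log_sink):
    """Score a request and log the exact features used, keyed by
    request_id, so labels can be joined later without recomputing
    features (avoiding training/serving skew)."""
    score = model(features)
    log_sink.append(json.dumps({
        "request_id": request_id,
        "ts": time.time(),
        "features": features,
        "score": score,
    }))
    return score
```

The same log stream also feeds monitoring: comparing the logged feature distributions against training-time distributions is a standard way to detect drift.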
Learnings and Conclusion: