MLOps Conference

On DataOps, productionizing ML models, and running experiments at scale.

Praveen Dhinwa


ML Infrastructure for Feed Recommendations at ShareChat

Submitted Jul 14, 2021


In this talk, we will describe ShareChat’s feed recommendation infrastructure in detail. We will delve into various aspects of ML infrastructure, such as model training, model serving, and the design and development of our feature store and feature-computation pipelines. We will also share insights and lessons learned from building these large-scale, low-latency, fault-tolerant systems, and describe the strategies we employ for disaster management and automated recovery.

Outline of the talk:


Scale of our feed system

General outline of the recommendation pipeline

Model Training at ShareChat:
- Incremental models
- Real-time models
- Ranker-related models

Model serving:
- Real-time model inference, model versioning, hot reloading, and the tools we use
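To make the serving topics above concrete, here is a minimal sketch of versioned serving with hot reloading: the new model version is loaded off the request path and then swapped in atomically. The `load_model` callable and polling-free `reload` API are hypothetical illustrations, not ShareChat’s actual tooling.

```python
import threading


class ModelServer:
    """Sketch: serve a model while allowing hot reloads of new versions.

    `load_model` is a hypothetical callable mapping a version string to a
    ready-to-use model object (any callable over features).
    """

    def __init__(self, load_model):
        self._load_model = load_model
        self._lock = threading.Lock()
        self._model = None
        self._version = None

    def reload(self, version):
        # Expensive load happens outside the lock, so requests keep serving
        # the old version until the swap below.
        new_model = self._load_model(version)
        with self._lock:
            self._model, self._version = new_model, version

    def predict(self, features):
        # Grab a consistent (version, model) pair under the lock,
        # then run inference outside it.
        with self._lock:
            model, version = self._model, self._version
        return version, model(features)
```

A request handler would call `predict`, while a background watcher on the model registry would call `reload` whenever a new version lands.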

Feature Store:
- Requirements
- Initial version of the feature store at ShareChat: what we tried and what didn’t work
- Current structure
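As a toy illustration of the feature-store requirements listed above, the sketch below stores per-entity feature values with a write timestamp and rejects stale reads, a common need in low-latency serving. The class name, API, and staleness policy are hypothetical, not a description of ShareChat’s system.

```python
import time


class FeatureStore:
    """Toy in-memory feature store: keyed by (entity_id, feature_name),
    with a freshness check at read time."""

    def __init__(self, max_staleness_s=3600):
        self._store = {}  # (entity_id, feature_name) -> (value, write_ts)
        self._max_staleness = max_staleness_s

    def put(self, entity_id, feature_name, value, ts=None):
        self._store[(entity_id, feature_name)] = (value, ts if ts is not None else time.time())

    def get(self, entity_id, feature_name, now=None):
        item = self._store.get((entity_id, feature_name))
        if item is None:
            return None
        value, write_ts = item
        now = now if now is not None else time.time()
        if now - write_ts > self._max_staleness:
            return None  # too stale to serve; caller falls back to a default
        return value
```

A production store would back this with a low-latency database and serve batches of features per ranking request, but the read/write contract is the same shape.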

Feature computation pipelines:
- Batch and real-time pipelines in detail
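One common real-time feature computation is counting events per entity over a sliding window (e.g. clicks in the last five minutes). The snippet below is a self-contained sketch of that idea; an actual pipeline would run on a streaming engine rather than in-process, and the names here are illustrative only.

```python
from collections import deque


class SlidingWindowCounter:
    """Sketch: per-entity event count over a sliding time window."""

    def __init__(self, window_s=300):
        self._window = window_s
        self._events = {}  # entity_id -> deque of event timestamps

    def record(self, entity_id, ts):
        self._events.setdefault(entity_id, deque()).append(ts)

    def count(self, entity_id, now):
        q = self._events.get(entity_id)
        if not q:
            return 0
        # Evict events that have fallen out of the window.
        while q and now - q[0] > self._window:
            q.popleft()
        return len(q)
```

Batch pipelines compute the same kind of aggregates over historical data; keeping the two definitions consistent is one of the details the talk covers.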

Feature logging and model monitoring

Disaster management

Learnings and Conclusion

Future Directions

