Submissions for MLOps November edition
On ML workflows, tools, automation and running ML in production
Praveen Dhinwa
In this talk, we will describe ShareChat’s feed recommendation infrastructure in detail. We will delve into ML-infrastructure aspects such as model training, model serving, and the design and development of our feature store and feature-computation pipelines. We will also share the insights and lessons we have gained from building these large-scale, low-latency, fault-tolerant systems, and describe the strategies we employ for disaster management and automated recovery.
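To give a flavour of the fault-tolerance concerns the talk covers, here is a minimal, hypothetical sketch (not ShareChat’s actual code) of a low-latency serving step: fetching user features from a feature store under a hard timeout and falling back to default values so ranking can still proceed. The `store` and `model` objects and their `get`/`score` methods are assumptions for illustration only.

```python
import asyncio
from typing import Dict, List

# Hypothetical default feature values used when the feature store is unavailable.
FEATURE_DEFAULTS: Dict[str, float] = {"ctr_7d": 0.0, "follows_count": 0.0}

async def fetch_features(store, user_id: str, timeout_s: float = 0.01) -> Dict[str, float]:
    """Fetch user features with a hard timeout; degrade to defaults on failure."""
    try:
        return await asyncio.wait_for(store.get(user_id), timeout=timeout_s)
    except (asyncio.TimeoutError, ConnectionError):
        # Serving continues with default features rather than failing the request.
        return dict(FEATURE_DEFAULTS)

async def rank_posts(store, model, user_id: str, candidate_posts: List[str]) -> List[str]:
    """Score candidate posts with the model and return them best-first."""
    features = await fetch_features(store, user_id)
    scores = {post: model.score(features, post) for post in candidate_posts}
    return sorted(candidate_posts, key=scores.get, reverse=True)
```

The design choice illustrated here, preferring a degraded response over a failed one, is one simple form of the automated recovery the talk discusses.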
Introduction
Scale of our feed system
General outline of the recommendation pipeline
Model training at ShareChat
Model serving
Feature store
Feature-computation pipelines
Feature logging and model monitoring
Disaster management
Learnings and conclusion
Future directions