This talk is about productionizing an ML model and optimizing an online feature store at the scale of India's biggest fashion e-commerce platform.
The Myntra home page consists of widgets (targeted cards). Currently, their ranking is the same for all users, based on business metrics (CTR, revenue). To truly personalize the home page, we've deployed a ranking ML model that talks to feature stores to fetch user and widget features in real time.
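In essence, the model scores each widget using user and widget features, and the page is rendered in score order. A minimal sketch of that flow (all names and the dot-product scorer are hypothetical, not the production model):

```python
# Illustrative sketch: personalize widget order by scoring each widget
# with a model fed user + widget features. Names are hypothetical.

def rank_widgets(user_features, widgets, score_fn):
    """Return widgets sorted by predicted relevance, highest first."""
    scored = [(score_fn(user_features, w["features"]), w) for w in widgets]
    return [w for _, w in sorted(scored, key=lambda p: p[0], reverse=True)]

def dot_score(user_vec, widget_vec):
    # Toy stand-in for the real model: dot product of embeddings.
    return sum(u * v for u, v in zip(user_vec, widget_vec))

widgets = [
    {"id": "sale_banner", "features": [0.1, 0.9]},
    {"id": "new_arrivals", "features": [0.8, 0.2]},
]
order = rank_widgets([1.0, 0.0], widgets, dot_score)
# A user whose embedding aligns with "new_arrivals" sees it ranked first.
```

In production, `score_fn` would be a call to the deployed model and the feature vectors would come from the feature store, but the rank-by-score contract is the same.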
The architecture design covers all the components in detail: App Service, ML Platform, Feature Store Lookup, Aggregation, Encoder, Model Predict, and Ranking. Four broad categories of features are used: user embeddings, user activity counts, widget embeddings, and widget business metrics. One of the major technical challenges is fetching all of these features in real time with very low latency; in many cases, the feature lookup is more expensive than the model prediction itself. To reduce feature lookup time, we applied five types of optimization, which together give us a ~10x improvement. I'll explain why we chose Aerospike as the feature store and share a detailed comparison with an alternative database.
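One intuition behind lookup optimization is that round trips, not payload size, dominate latency, so batching many keys into one request pays off. A toy sketch of that effect (the in-memory `STORE` and the fixed `ROUND_TRIP_S` delay are simulated assumptions, standing in for an Aerospike namespace; this is not the production client code):

```python
import time

# Hypothetical in-memory stand-in for the feature store. We simulate a
# fixed per-request network round trip to show why batching keys into a
# single request cuts total lookup time.
STORE = {f"user:{i}": {"clicks_7d": i} for i in range(100)}
ROUND_TRIP_S = 0.001  # simulated network latency per request

def get_one(key):
    time.sleep(ROUND_TRIP_S)          # one round trip per key
    return STORE.get(key)

def get_batch(keys):
    time.sleep(ROUND_TRIP_S)          # one round trip for the whole batch
    return {k: STORE.get(k) for k in keys}

keys = [f"user:{i}" for i in range(50)]

t0 = time.perf_counter()
serial = {k: get_one(k) for k in keys}
serial_ms = (time.perf_counter() - t0) * 1000

t0 = time.perf_counter()
batched = get_batch(keys)
batched_ms = (time.perf_counter() - t0) * 1000
# Same results, but the batched path pays the round-trip cost once
# instead of 50 times.
```

Batch reads are one of several levers; the talk also covers the other optimization types that add up to the ~10x improvement.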
I'll also walk through the end-to-end system design, comprising layers such as Sources, Ingestion, Processing, Storage, ML Platform (MLP), Serving, and Client. Finally, we'll demonstrate the benchmark and load test of the personalization service at very large scale.
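A benchmark like the one demonstrated in the talk boils down to repeatedly timing the ranking call and reporting tail latency. A minimal harness sketch (the `rank_request` stub with jittered sleep is a hypothetical stand-in for the real service call):

```python
import random
import statistics
import time

def rank_request():
    # Hypothetical stand-in for a call to the personalization service.
    time.sleep(random.uniform(0.0005, 0.002))

def benchmark(fn, n=200):
    """Call fn n times and report mean and p99 latency in milliseconds."""
    samples = []
    for _ in range(n):
        t0 = time.perf_counter()
        fn()
        samples.append((time.perf_counter() - t0) * 1000)
    samples.sort()
    return {
        "mean_ms": statistics.fmean(samples),
        "p99_ms": samples[int(0.99 * len(samples)) - 1],
    }

stats = benchmark(rank_request)
```

A real load test would additionally drive concurrent traffic (e.g. with a tool like Locust or wrk) rather than a single serial loop, but the mean/p99 reporting is the same.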
- Tech Challenges
- System Design
- Feature Store Lookup
- Architecture Design
- Why Aerospike?
- Ideal feature store choice for real-time feature lookup across millions of users' feature data.
- Different optimizations to reduce feature lookup latency in Aerospike.
- Why Aerospike?
- How to productionize ML models at scale.
- How the ML Platform talks to different components and layers in real time for each API call.
- Learning about the end-to-end system design, spanning Sources, Ingestion, Processing, Storage, MLP, and Serving.
- Benchmarking & load testing of the entire ML service.
Anyone looking to productionize ML models, feature pipelines, and feature stores at scale.
Ideal for the entire ML community.
Audience Level: Intermediate