Session on "Use Cases and Risks of ML in Capital Markets" | 23rd Dec at 4pm
Hi everyone! The AI and Risk Mitigation project is well underway, and for the third session we will be joined by Rachna Maheshwari, Associate Director at CRI…
The 2023 Monsoon edition is curated by:
- Nischal HP, Vice President of Data Engineering and Data Science at Scoutbee. Nischal curated the MLOps conference which was held online between 23 and 27 July 2021.
- Sumod Mohan, Founder and CEO at AutoInfer. Sumod curated Anthill Inside 2019 edition, held in Bangalore on 23 November.
- AI and Research - covers research, findings, and solutions for challenges on building models in various areas such as fraud detection, forecasting, and analytics. This track delves into the latest methodologies for handling challenges such as large-scale data processing, distributed computing, and optimizing model performance.
- Industrial applications of ML - covers the implementation of AI in industry, with a focus on AI models, the issues in training, gathering data, and so forth. ML is being used at scale in industries such as automotive, mechanical, manufacturing, and agriculture. This track focuses on the challenges in this space, as we see innovation coming out of these industries in the pursuit of using ML on a second-to-second basis.
- AI and Product - covers strategies for building AI products at scale and mitigating the challenges involved. This track provides insights on incorporating AI tools and forecasting techniques to improve model training, developing a working model architecture, and using data in the business context.
There are three phases in the lifecycle of an application - research, application, and the aftermath of the application:
- Assess capabilities, determining the new frontiers for AI.
- Find a use for the application.
- Learn how to run it, monitor it and update it with time.
The three tracks at the 2023 Monsoon edition of The Fifth Elephant will cover this lifecycle.
The Fifth Elephant 2023 Monsoon edition will be held in-person. Attendance is open to The Fifth Elephant members only. Purchase a membership to attend the conference in-person. If you have questions about participation, post a comment here.
- Data/MLOps engineers who want to learn about state-of-the-art tools and techniques, especially from domains such as automobile, agri-tech and mechanical industries.
- Data scientists who want a deeper understanding of model deployment/governance.
- Architects who are building ML workflows that scale.
- Tech founders who are building products that require AI or ML.
- Product managers who want to learn about the process of building AI/ML products.
- Directors, VPs and senior tech leadership who are building AI/ML teams.
Sponsorship slots are open for:
- Infrastructure (GPU, CPU and cloud providers) and developer productivity tool makers who want to evangelise their offering to developers and decision-makers.
- Companies seeking tech branding among AI and ML developers.
- Venture Capital (VC) firms and investors who want to scan the landscape of innovations and innovators in AI and who want to source leads for investment in the AI and ML space.
Edge-Based Recommendation Systems: Empowering Personalized Experiences at Scale
Server-driven recommendation systems (RecSys) face significant challenges when it comes to scaling to handle large data volumes and providing real-time recommendations. At Glance, we serve personalized recommendations to millions of users, prioritizing response time and data privacy. To tackle these challenges, we have turned to edge machine learning (ML). By deploying ML models closer to the data source, edge ML reduces latency, conserves network bandwidth, and enhances data privacy for our customers.
In this talk, we will explore the potential of edge ML in scaling recommendation systems. We will delve into how Glance is utilizing edge computing to deliver user-personalized content recommendations at scale, resulting in reduced latencies, enhanced user experiences, and lower server costs. Moreover, we will discuss the capabilities enabled by this approach, such as real-time personalization for third-party integrations without their data ever leaving the edge device. We will also talk about how we enable a large number of A/B experiments in our release cycle, and how we implement a server-side RecSys as an augmentation to our edge ML flow.
However, achieving these results was not without its challenges. We will address the obstacles we encountered during the development and deployment of this architecture, focusing on our implementation along with a roadmap. While we have made strides in solving some of these challenges, we are still early in our journey toward fully autonomous, federated edge ML systems. So, in addition to the solved obstacles, we will also cover some of the ongoing challenges.
By the end of this talk, attendees will gain a deeper insight into the practical application of edge ML Recommendation Systems. We will emphasize the advantages it offers, such as reduced latency, improved user experiences, enhanced privacy, and cost-effectiveness. Moreover, we will touch upon the future potential and broader implications of edge ML in the field.
- Why is edge ML needed in a server-driven space?
- How does edge ML benefit Glance?
- Our journey from POC to full-scale production
- How we do fast experimentation within a slower app release cycle
- How we handle the challenges of running and architecting the system at scale
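The core idea behind edge ML recommendations - scoring locally cached candidate items on the device itself, so the user's profile never leaves it - can be sketched minimally as follows. This is an illustrative assumption, not Glance's actual implementation: the dot-product scoring, the `recommend` function, and the tiny catalog are all hypothetical.

```python
import heapq
from typing import Dict, List

def score(user_vec: List[float], item_vec: List[float]) -> float:
    """Relevance score as a dot product of user and item embeddings."""
    return sum(u * i for u, i in zip(user_vec, item_vec))

def recommend(user_vec: List[float],
              catalog: Dict[str, List[float]],
              k: int = 2) -> List[str]:
    """Rank the locally cached catalog entirely on-device.

    The user embedding stays on the device (the privacy property edge ML
    offers); only item embeddings, shipped with content, are needed here.
    """
    return heapq.nlargest(k, catalog,
                          key=lambda item: score(user_vec, catalog[item]))

# A tiny catalog of item embeddings cached on the device (hypothetical).
catalog = {
    "news": [0.9, 0.1],
    "sports": [0.2, 0.8],
    "games": [0.7, 0.6],
}
user = [1.0, 0.5]  # on-device user embedding, never sent to a server
print(recommend(user, catalog))  # ['games', 'news']
```

A production system would replace the dot product with a compact neural model (e.g. a quantized on-device runtime) and refresh item embeddings alongside content updates, but the data-flow boundary - inference on the device, profile data never uploaded - is the same.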