The 2023 Monsoon edition is curated by:
- Nischal HP, Vice President of Data Engineering and Data Science at Scoutbee. Nischal curated the MLOps conference which was held online between 23 and 27 July 2021.
- Sumod Mohan, Founder and CEO at AutoInfer. Sumod curated Anthill Inside 2019 edition, held in Bangalore on 23 November.
- AI and Research - covers research, findings, and solutions for the challenges of building models in areas such as fraud detection, forecasting, and analytics. This track delves into the latest methodologies for large-scale data processing, distributed computing, and optimising model performance.
- Industrial applications of ML - covers implementation of AI in industry, with a focus on AI models, the issues in training them, gathering data, and so forth. ML is being used at scale in industries such as automotive, mechanical, manufacturing, and agriculture. This track focuses on the challenges in this space, as we see innovation coming out of these industries in the pursuit of applying ML on a second-to-second basis.
- AI and Product - covers strategies for building AI products to scale and mitigating challenges. This track provides insights on incorporating AI tools and forecasting techniques to improve model training, developing a working model architecture, and using data in the business context.
There are three phases in the lifecycle of an application: research, application, and the aftermath of the application.
- Assess capabilities, determining the new frontiers for AI.
- Find a use for the application.
- Learn how to run it, monitor it and update it with time.
The three tracks at the 2023 Monsoon edition of The Fifth Elephant will cover this lifecycle.
The Fifth Elephant 2023 Monsoon edition will be held in-person. Attendance is open to The Fifth Elephant members only. Purchase a membership to attend the conference in-person. If you have questions about participation, post a comment here.
- Data/MLOps engineers who want to learn about state-of-the-art tools and techniques, especially from domains such as automobile, agri-tech and mechanical industries.
- Data scientists who want a deeper understanding of model deployment/governance.
- Architects who are building ML workflows that scale.
- Tech founders who are building products that require AI or ML.
- Product managers who want to learn about the process of building AI/ML products.
- Directors, VPs and senior tech leadership who are building AI/ML teams.
Sponsorship slots are open for:
- Infrastructure (GPU, CPU and cloud providers) and developer productivity tool makers who want to evangelise their offering to developers and decision-makers.
- Companies seeking tech branding among AI and ML developers.
- Venture Capital (VC) firms and investors who want to scan the landscape of innovations and innovators in AI and who want to source leads for investment in the AI and ML space.
Harmonising Art and AI: Crafting Jazzy and Juicy Video Snippets through AI
Live streaming platforms, where creators show live content to users, have been gaining popularity in recent times. The videos creators produce typically range from 15 minutes to an hour. After intensive research, we found that a sizable chunk of users drops off within the first 30 seconds of a video. Other research shows that, on average, a user's attention span is only about 30 seconds, and this number is even lower for Gen Z, our main target audience. To solve this problem, we want to identify the juiciest segments of a video and add external features that prompt a user to land on the base video, increasing overall user engagement and the jazziness of the video. We also want to build a customizable framework that caters not only to snippets but also to trailers, mashups, etc.
To solve this problem, we did extensive research on the tools that already exist on the market. Globally, there is no single tool or solution that addresses it end to end; several solutions tackle it in bits and pieces, but none fully. We then read research papers on how to do this end-to-end, and from there we got a couple of ideas to try.
We broke our solution into two parts: how to get the base snippet (the juiciest part of the video) and what post-processing techniques we can apply to it. To summarise our solution:
- A transcription-based approach using a state-of-the-art (SOTA) speech-to-text model
- We optimised this model with CTranslate2 for faster inference.
- Used Flan T5 XXL to generate a summary of the sentences.
- Used simple transformer-based models to calculate similarity between each sentence and the summary.
- Used a moving average on the cosine scores to generate the best timestamp for the summary.
- Key moments in the video (We used CLIP-based models to identify them based on a prompt and user interactions)
- Used frame-level analysis (perceptual hashing) for shot detection (where sudden visual changes happen)
- Used stickers and gifs (based on context from the Flan model)
- Created an in-house solution for memes (using Stable Diffusion)
- A Stable Diffusion-based model for artistic video generation
- Used ESRGANs to upsample videos to increase quality.
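To make the timestamp-selection steps concrete, here is a minimal, self-contained sketch of scoring transcript sentences against a summary and smoothing the cosine scores with a moving average. The bag-of-words `embed` is a hypothetical stand-in for the transformer sentence embeddings described above, and the function names are illustrative, not the talk's actual code:

```python
import math

def embed(text, vocab):
    # Stand-in for a transformer sentence encoder: a bag-of-words count
    # vector over a fixed vocabulary. The real pipeline would use a
    # transformer-based sentence embedding instead.
    v = [0.0] * len(vocab)
    for w in text.lower().split():
        if w in vocab:
            v[vocab[w]] += 1.0
    return v

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def best_snippet_start(sentences, timestamps, summary, win=3):
    """Score each timed transcript sentence against the summary, smooth
    the cosine scores with a moving average of `win` sentences, and
    return the start timestamp of the best-scoring window."""
    words = {w for s in sentences + [summary] for w in s.lower().split()}
    vocab = {w: i for i, w in enumerate(sorted(words))}
    s_emb = embed(summary, vocab)
    scores = [cosine(embed(s, vocab), s_emb) for s in sentences]
    smoothed = [sum(scores[i:i + win]) / win
                for i in range(len(scores) - win + 1)]
    best = max(range(len(smoothed)), key=smoothed.__getitem__)
    return timestamps[best]
```

Given sentences timed at `[0, 12, 25, ...]` seconds, the returned value is the timestamp where the window most similar to the summary begins, which can then serve as the snippet's start point.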
Impact and Future Work
We deployed our solution at scale (500 video snippets per day) in India. We saw a staggering increase of close to 80% in overall time spent and user engagement. As next steps, we plan to scale this solution to Indonesia and then to the US. We also aim to create a new feed just for these videos, and will keep improving both the base snippets and the post-processing.