Jul 12, 2024 (Fri), 10:30 AM – 12:00 PM IST
Learn how to fine-tune an LLM to build a script-writing assistant for a popular daytime soap.
This session delves into the fundamentals of LLMs. Participants will cover the following topics:
The instructor will briefly discuss the transformer architecture and introduce Low-Rank Adaptation (LoRA) as the preferred technique for fine-tuning an existing open-source model such as Llama, Mistral, or Phi-3.
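To make the LoRA idea concrete, here is a minimal numerical sketch (illustrative only, not the workshop's actual code): instead of updating a large frozen weight matrix, LoRA trains two small low-rank matrices whose product is added to it.

```python
import numpy as np

# Minimal sketch of the LoRA idea: keep the pretrained weight W frozen and
# train two small matrices A (r x d_in) and B (d_out x r), so the effective
# weight becomes W + (alpha / r) * B @ A. Only A and B are updated.

rng = np.random.default_rng(0)

d_in, d_out, r, alpha = 64, 64, 8, 16   # rank r << d; alpha is a scaling factor
W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable, small random init
B = np.zeros((d_out, r))                # trainable, zero init: no change at start

def lora_forward(x):
    """Forward pass through the LoRA-adapted layer."""
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# With B initialised to zero, the adapted layer matches the base layer exactly,
# so training starts from the pretrained model's behaviour.
assert np.allclose(lora_forward(x), W @ x)

# Far fewer trainable parameters: r*(d_in + d_out) instead of d_in*d_out.
print(A.size + B.size, "trainable params vs", W.size, "frozen")
```

This parameter saving is why LoRA is attractive when GPU memory and cost are constraints, as discussed later in the session.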
Dataset preparation and customization for fine-tuning the model:
Participants will delve into strategies and techniques for dataset preparation, such as structuring the prompts and annotating the data.
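As a sketch of what "structuring the prompts" can look like in practice, the snippet below wraps a raw scene in an instruction-style prompt/completion record and serialises it as JSONL. The field names (`instruction`, `input`, `output`) follow a common convention, and the scene itself is a made-up example; the workshop's actual schema may differ.

```python
import json

# Illustrative dataset-preparation sketch: convert raw script material into
# instruction/response pairs, one JSON object per line (JSONL), which is the
# layout most fine-tuning tools expect. The scene below is hypothetical.
raw_scenes = [
    {
        "situation": "Maya confronts her sister about the hidden letter.",
        "dialogue": "MAYA: You kept this from me for ten years?",
    },
]

def to_training_record(scene):
    """Wrap a raw scene in an instruction-style prompt/completion pair."""
    return {
        "instruction": "Write the next line of dialogue for a daytime soap scene.",
        "input": scene["situation"],
        "output": scene["dialogue"],
    }

records = [to_training_record(s) for s in raw_scenes]
jsonl = "\n".join(json.dumps(r) for r in records)
print(jsonl)
```

Annotating the data then amounts to curating these pairs: fixing dialogue attribution, filtering low-quality scenes, and keeping the instruction phrasing consistent across records.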
Fine-tuning strategies: In this section, participants will use the axolotl library to perform the fine-tuning.
The goal is to provide a simple starting point and strategy for fine-tuning, keeping GPU costs and experimentation errors in mind.
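For orientation, here is a hedged sketch of the kind of configuration axolotl consumes. axolotl normally reads a YAML file; the keys below are shown as a Python dict for illustration, and the exact key names and values should be checked against the axolotl documentation for your version.

```python
# Illustrative axolotl-style configuration (check key names against the
# axolotl docs for your version; model name and dataset path are examples).
config = {
    "base_model": "mistralai/Mistral-7B-v0.1",  # an open-source base model
    "adapter": "lora",        # LoRA instead of full fine-tuning
    "lora_r": 8,              # low rank keeps GPU memory and cost down
    "lora_alpha": 16,
    "micro_batch_size": 2,    # small batches fit on a single consumer GPU
    "num_epochs": 1,          # start with one epoch to limit experiment cost
    "datasets": [{"path": "scenes.jsonl", "type": "alpaca"}],
}

for key in ("base_model", "adapter", "lora_r"):
    print(key, "=", config[key])
```

Starting with a low rank, a small batch size, and a single epoch is one way to keep early experiments cheap while you validate the dataset and prompt format.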
Participants will also discuss practical aspects of the fine-tuning workflow.
- Basic familiarity with Python programming is a must.
This will be a highly interactive workshop. Participants will work hands-on during the session, experimenting with the fine-tuning process and experiencing first-hand the impact of different strategies.
By the end of this session, participants will develop an understanding of the open-source LLM landscape and have a solid starting point for fine-tuning LLMs for specific downstream tasks.
Sidharth Ramachandran has worked across several industries in Software Engineering, Data Science and AI. He is passionate about technology, sharing knowledge and building solutions that solve real problems.
This workshop is free to attend for The Fifth Elephant members or The Fifth Elephant Conference ticket buyers.
This workshop is open to 50 participants only. Seats are available on a first-come, first-served basis. RSVP to secure a seat.
For inquiries about the workshop, contact +91-7676332020 or write to info@hasgeek.com