Script-write your soap opera: a workshop on fine-tuning LLMs

About the workshop 📚

Learn how to fine-tune an LLM to build a script-writing assistant for a popular daytime soap.
This session delves into the fundamentals of LLMs, including:

  • Pre-training
  • Fine-tuning with LoRA (Low-Rank Adaptation)
  • Hands-on Direct Preference Optimization (DPO)
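DPO trains on preference pairs: for each prompt, a completion the model should prefer and one it should not. A minimal illustrative record is sketched below; the field names follow the common "chosen"/"rejected" convention used by libraries such as trl, and the soap-opera dialogue is invented for this example.

```python
# Illustrative only: one preference record in the "chosen"/"rejected"
# format commonly used by DPO trainers. The scene text is made up.
preference_record = {
    "prompt": "Write the opening line for a scene where Maria confronts "
              "her long-lost twin at the hospital.",
    "chosen": "Maria froze in the doorway: the face staring back at her "
              "was her own.",
    "rejected": "Maria walked into the hospital and said hello.",
}

# DPO optimizes the model so that, relative to a frozen reference model,
# the "chosen" completion becomes more likely than the "rejected" one.
for key in ("prompt", "chosen", "rejected"):
    assert key in preference_record
```

Collecting a few hundred such pairs is often enough to noticeably steer a fine-tuned model's style.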

Participants will:

  • Get an introduction to LLMs.
  • Learn various strategies for dataset preparation and fine-tuning.
  • Pick up practical hacks to enhance model performance.

The workshop is 90 minutes long and will be held online. Only 50 seats are open for participation, and a recording will be made available to The Fifth Elephant members.

Workshop outline 🗂️

  1. Understanding LLMs: a high-level overview of LLMs and an introduction to the concepts of:
  • Pre-training,
  • Supervised fine-tuning, and
  • Reinforcement Learning from Human Feedback (RLHF).

The instructor will briefly discuss the transformer architecture and introduce Low-Rank Adaptation (LoRA) as the preferred technique for fine-tuning an existing open-source model such as Llama, Mistral, or Phi-3.
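The core idea behind LoRA can be shown in a few lines of NumPy: instead of updating a full weight matrix, you learn a low-rank update added on top of the frozen weights. This is an illustrative sketch, not the workshop's code; the dimensions and rank are arbitrary.

```python
import numpy as np

# Minimal sketch of the LoRA idea: instead of updating a full
# d_out x d_in weight matrix W, learn a low-rank update
# delta_W = (alpha / r) * B @ A with a small rank r.
rng = np.random.default_rng(0)
d_out, d_in, r, alpha = 512, 512, 8, 16

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection (init to 0)

W_eff = W + (alpha / r) * (B @ A)       # effective weight used at inference

full_params = W.size                    # 262,144 parameters
lora_params = A.size + B.size           # 8,192 parameters (~3% of full)
print(full_params, lora_params)
```

Because B starts at zero, the model's behavior is unchanged at the beginning of training; only the small A and B matrices receive gradient updates, which is why LoRA fits on modest GPUs.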

  2. Dataset preparation and customization to fine-tune the model:
    Participants will delve into strategies and techniques for dataset preparation, such as structuring prompts and annotating data.
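As a taste of what "structuring the prompts" can look like, here is a hypothetical training record in the Alpaca-style instruction format, serialized as JSONL (one JSON object per line). The file name, field contents, and format choice are assumptions for illustration; the workshop may use a different schema.

```python
import json

# Hypothetical example of structuring fine-tuning data in the Alpaca-style
# instruction format (instruction / input / output). The scene text is
# invented for the soap-opera theme.
examples = [
    {
        "instruction": "Write a dramatic cliffhanger line to end the scene.",
        "input": "Scene: Carlos reads the letter revealing the inheritance.",
        "output": "The letter slipped from Carlos's hand: the signature was "
                  "his own.",
    },
]

# Datasets are commonly stored as JSONL: one JSON object per line.
jsonl = "\n".join(json.dumps(row) for row in examples)

# Round-trip to confirm the records survive serialization intact.
parsed = [json.loads(line) for line in jsonl.splitlines()]
assert parsed == examples
```

Keeping every record in one consistent schema matters more than the specific schema chosen, since the fine-tuning library will render each record into the model's prompt template.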

  3. Fine-tuning strategies: In this section, participants will use the axolotl library to perform the fine-tuning.
    The goal is to provide a simple starting point and strategy for fine-tuning while keeping GPU costs and experimentation errors in mind.
    Participants will discuss practical aspects such as:

  • Parameter tuning,
  • Loss function optimization, and
  • Ways to avoid common pitfalls in model training.
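axolotl is driven by a YAML configuration file rather than hand-written training loops. The sketch below expresses a minimal LoRA fine-tuning config as a Python dict; the key names follow axolotl's commonly used config fields, but the dataset file name is hypothetical and the exact schema should be checked against the library's example configs.

```python
# Hedged sketch: a minimal LoRA fine-tuning config for axolotl, expressed
# as a Python dict. Key names follow axolotl's commonly used config fields;
# verify against the library's example configs before running.
config = {
    "base_model": "mistralai/Mistral-7B-v0.1",  # any open model works
    "adapter": "lora",
    "lora_r": 8,
    "lora_alpha": 16,
    "lora_dropout": 0.05,
    # "soap_scenes.jsonl" is a hypothetical dataset file in Alpaca format.
    "datasets": [{"path": "soap_scenes.jsonl", "type": "alpaca"}],
    "sequence_len": 1024,
    "micro_batch_size": 2,   # small batches keep GPU memory (and cost) low
    "num_epochs": 1,
    "learning_rate": 2e-4,
    "output_dir": "./outputs",
}

# Saved to config.yml (e.g. with PyYAML), training is typically launched as:
#   accelerate launch -m axolotl.cli.train config.yml
print(config["adapter"], config["lora_r"])
```

Starting with a small rank, short sequence length, and a single epoch is a cheap way to validate the pipeline end to end before committing to a longer (and more expensive) run.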

Important prerequisites to attend the workshop 📝

  • Basic familiarity with Python programming is a must.
  • Ability to work in a notebook environment (e.g. Jupyter, Google Colab).
  • Access to the GitHub repository (shared in advance).
  • Accounts with Hugging Face and Modal Labs (setup instructions shared in advance).

Who should attend this workshop 👨‍💻

Data scientists, engineers, and enthusiasts who want to understand the intricacies of fine-tuning open-source LLMs in a practical manner.

How will participants benefit from the workshop 🎓

This will be a highly interactive workshop.

Participants will be able to work hands-on during the session and experiment with fine-tuning processes, experiencing first-hand the impact of different strategies.

By the end of this session, participants will develop an understanding of the open-source LLM landscape. They will then have a good starting point for fine-tuning LLMs for any specific downstream tasks.

About the instructor 👨‍🏫

Sidharth Ramachandran has worked across several industries in Software Engineering, Data Science and AI. He is passionate about technology, sharing knowledge and building solutions that solve real problems.

How to register

This workshop is free to attend for The Fifth Elephant members or The Fifth Elephant Conference ticket buyers.

This workshop is open to 50 participants only. Seats will be allocated on a first-come, first-served basis. RSVP to secure a seat. 🎟️

Contact information ☎️

For inquiries about the workshop, contact +91-7676332020 or write to



Hosted by The Fifth Elephant

All about data science and machine learning