The Fifth Elephant 2024 Annual Conference

India’s most prestigious Big Data, Machine Learning & Data Science conference



Sidharth Ramachandran


Fine-Tuning LLMs for Script-Writing: A Journey into the World of Open-Source LLMs

Submitted May 25, 2024

Explore the emerging technology of open-source Large Language Models (LLMs) in a hands-on tutorial where we fine-tune an LLM to build a script-writing assistant for a popular daytime soap. This session covers the fundamentals of LLMs, including pre-training and fine-tuning with LoRA (Low-Rank Adaptation) or Direct Preference Optimization (DPO), offering a working understanding of these new paradigms. Participants will learn various strategies for fine-tuning, along with practical hacks to enhance model performance. The tutorial will also focus on dataset preparation as a way of ensuring that important external knowledge, such as character arcs and the story plotline, is well understood by the fine-tuned model.


The tutorial will cover the following key aspects:

Understanding LLMs:
We will provide a high-level overview of LLMs and introduce the concepts of pre-training, supervised fine-tuning, and Reinforcement Learning from Human Feedback (RLHF). We will briefly discuss the transformer architecture and then introduce Low-Rank Adaptation (LoRA) as the preferred technique for fine-tuning an existing open-source model such as Llama, Mistral, or Phi-3.
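The core idea behind LoRA can be sketched in plain NumPy: freeze the original weight matrix W and learn only a low-rank update B·A. The snippet below uses a hypothetical 4096×4096 projection and rank 8 to show the parameter savings; it illustrates the technique itself, not any particular library's implementation.

```python
import numpy as np

# LoRA sketch: instead of updating the full weight matrix W (d x k),
# learn a low-rank update delta_W = B @ A with rank r << min(d, k).
d, k, r = 4096, 4096, 8            # hypothetical projection size and LoRA rank

full_update_params = d * k          # parameters in a full fine-tuning update of W
lora_params = d * r + r * k         # parameters in the factors B (d x r) and A (r x k)

print(f"full: {full_update_params:,}  lora: {lora_params:,}  "
      f"ratio: {lora_params / full_update_params:.4%}")

# The adapted forward pass adds a scaled low-rank term to the frozen W:
rng = np.random.default_rng(0)
W = rng.standard_normal((d, k)).astype(np.float32)
B = np.zeros((d, r), dtype=np.float32)   # B starts at zero, so delta_W = 0 initially
A = rng.standard_normal((r, k)).astype(np.float32)
x = rng.standard_normal(k).astype(np.float32)

alpha = 16.0                             # LoRA scaling hyperparameter
y = W @ x + (alpha / r) * (B @ (A @ x))  # identical to W @ x before any training
assert np.allclose(y, W @ x)
```

Because B is initialized to zero, the adapted model starts out exactly equal to the base model; training then moves only the B and A factors, a tiny fraction of the full parameter count.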

Dataset Preparation and Customization:
The next step is preparing the dataset used to fine-tune the model. An important reason we chose fine-tuning is to ensure that our script-writing assistant keeps the tone and style of the characters and also absorbs knowledge about the show, such as its setting and plotline. We will delve into strategies and techniques for dataset preparation, like structuring the prompts and annotating the data.
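As a minimal sketch of this step, the code below turns raw scene annotations into instruction-style training records. The character, dialogue, and field names ("instruction", "input", "output", following the common Alpaca-style format) are illustrative assumptions; adapt them to whatever schema your training library expects.

```python
import json

# Hypothetical raw scene data: plot context plus a character's actual line.
scenes = [
    {
        "plot_context": "Meera has just discovered the forged will.",
        "character": "Meera",
        "line": "You thought I would never find out, didn't you?",
    },
]

# Convert each scene into an instruction-tuning record so the model learns
# to produce in-character dialogue conditioned on the plot context.
records = []
for scene in scenes:
    records.append({
        "instruction": (
            f"Write the next line for {scene['character']}, "
            "keeping their established tone and style."
        ),
        "input": scene["plot_context"],
        "output": scene["line"],
    })

# Write one JSON object per line (JSONL), the format most trainers ingest.
with open("train.jsonl", "w", encoding="utf-8") as f:
    for rec in records:
        f.write(json.dumps(rec, ensure_ascii=False) + "\n")
```

Keeping the plot context in the `input` field, rather than baking it into every instruction, makes it easy to later swap in new story arcs without regenerating the whole dataset.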

Fine-Tuning Strategies:
In this section, we will use the axolotl library to perform the fine-tuning. We will start with a model such as Mistral 7B, Llama, or Phi-3, explore the hyperparameters, and understand how each of them affects the training run. The goal is to provide a simple starting point and a strategy for fine-tuning that keeps GPU costs and failed experiments in check. We’ll discuss practical aspects like parameter tuning, loss function optimization, and ways to avoid common pitfalls in model training.
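Axolotl runs are driven by a single YAML config. The fragment below is a sketch of what such a config might look like for a QLoRA run on Mistral 7B; the key names follow axolotl's documented format, but versions differ, so check them against the axolotl docs for the release you install. Paths, the dataset file, and the hyperparameter values are placeholder assumptions.

```yaml
# Sketch of an axolotl config for a QLoRA fine-tune (verify keys against
# the axolotl version you install; paths and values are placeholders).
base_model: mistralai/Mistral-7B-v0.1

load_in_4bit: true            # QLoRA: 4-bit quantized base model to fit one GPU
adapter: qlora
lora_r: 8                     # rank of the LoRA update matrices
lora_alpha: 16                # scaling factor (alpha / r multiplies the update)
lora_dropout: 0.05
lora_target_modules:          # which projections get LoRA adapters
  - q_proj
  - v_proj

datasets:
  - path: train.jsonl         # the instruction-style records prepared earlier
    type: alpaca

sequence_len: 2048
micro_batch_size: 2
gradient_accumulation_steps: 8
num_epochs: 3
learning_rate: 0.0002
optimizer: adamw_bnb_8bit
lr_scheduler: cosine
output_dir: ./out/script-assistant-lora
```

With a config like this saved as `config.yml`, a run is typically launched with `accelerate launch -m axolotl.cli.train config.yml`; the tutorial's shared repo will contain the exact, tested configuration.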


This tutorial is aimed at data scientists, engineers, and enthusiasts, and offers a deep dive into the intricacies of fine-tuning open-source LLMs to build a script-writing assistant tailored for a popular daytime soap. We plan for the workshop to be highly interactive, with the code and data shared beforehand through a GitHub repo. Participants will be able to work hands-on during the session and experiment with the fine-tuning process, experiencing firsthand the impact of different strategies. By the end of the session, attendees will have an understanding of the open-source LLM landscape and a good starting point for fine-tuning LLMs for specific downstream tasks.


