Advancing multimodal and agentic AI: systems, storage & scalability
Open Source AI Meet-up - Bangalore edition
Fri, 4 Apr 2025, 01:45 PM – 06:10 PM IST
Submitted Mar 24, 2025
In this talk, we will explore the growing need to fine-tune large pre-trained models for specialized tasks, and the limitations of conventional fine-tuning methods—especially their high computational and storage costs. We begin with Parameter-Efficient Fine-Tuning (PEFT) techniques, focusing on LoRA (Low-Rank Adaptation), an adapter-based approach that enables efficient model adaptation by introducing a small number of trainable parameters.
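For reference, the core LoRA reparameterization (the standard formulation from Hu et al., 2021) that the talk builds on:

```latex
% Standard LoRA reparameterization (Hu et al., 2021).
% The pre-trained weight W_0 stays frozen; only B and A are trained.
h = W_0 x + \Delta W\, x = W_0 x + B A x,
\qquad B \in \mathbb{R}^{d \times r},\;
       A \in \mathbb{R}^{r \times k},\;
       r \ll \min(d, k)
% Trainable parameters per layer drop from dk to r(d + k).
```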
Through a hands-on implementation of LoRA in a multilayer perceptron (MLP) for a binary classification task, we’ll cover adapter insertion, parameter configuration, and the evaluation of parameter efficiency. We’ll also discuss real-world workflows, like sharing models via the Hugging Face Hub, and explore practical extensions such as Quantized LoRA (QLoRA) for reducing memory usage.
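To make the hands-on portion concrete, here is a minimal sketch of a LoRA adapter wrapped around a linear layer, assuming PyTorch; the `LoRALinear` class, layer sizes, rank, and alpha are illustrative choices, not the talk’s actual implementation:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer augmented with a trainable low-rank update BA."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False  # freeze the pre-trained weights
        # A starts small and random, B starts at zero, so the adapter
        # is a no-op (BA = 0) before training begins.
        self.A = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scaling = alpha / rank

    def forward(self, x):
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

# Hypothetical MLP for binary classification; sizes are illustrative.
mlp = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
mlp[0] = LoRALinear(mlp[0], rank=4)
mlp[2] = LoRALinear(mlp[2], rank=4)

trainable = sum(p.numel() for p in mlp.parameters() if p.requires_grad)
total = sum(p.numel() for p in mlp.parameters())
print(f"trainable: {trainable} / {total} ({100 * trainable / total:.1f}%)")
```

Only the `A` and `B` matrices receive gradients, which is what makes the adapter cheap to train and to share.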
Building on this foundation, the talk transitions into the theory of Intrinsic Dimension (ID)—the hypothesis that neural networks, despite their large size, may require only a few effective directions to learn. Using random subspace training, we measure ID and analyze how models behave when learning is constrained to low-dimensional subspaces. This leads to a key insight: LoRA’s efficiency aligns closely with the principles of intrinsic dimension, offering a deeper theoretical understanding of why PEFT methods work.
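As a companion sketch, here is one way to implement random subspace training in the spirit of Li et al.’s intrinsic-dimension experiments. It assumes PyTorch 2.x for `torch.func.functional_call`; the `SubspaceTrainer` class, layer sizes, and choice of k are illustrative, not the talk’s actual code:

```python
import torch
import torch.nn as nn
from torch.func import functional_call  # PyTorch 2.x

class SubspaceTrainer(nn.Module):
    """Reparametrize all weights as theta = theta_0 + P @ z, where P is a
    fixed random projection and z is the only trainable parameter vector."""
    def __init__(self, model: nn.Module, k: int):
        super().__init__()
        self.model = model
        for p in model.parameters():
            p.requires_grad = False  # the original weights stay frozen
        self.names = [n for n, _ in model.named_parameters()]
        self.shapes = [p.shape for p in model.parameters()]
        theta0 = torch.cat([p.detach().flatten() for p in model.parameters()])
        self.register_buffer("theta0", theta0)
        # Fixed random d x k projection, columns normalized to unit length.
        P = torch.randn(theta0.numel(), k)
        self.register_buffer("P", P / P.norm(dim=0, keepdim=True))
        self.z = nn.Parameter(torch.zeros(k))  # k trainable numbers in total

    def forward(self, x):
        theta = self.theta0 + self.P @ self.z
        params, offset = {}, 0
        for name, shape in zip(self.names, self.shapes):
            n = shape.numel()
            params[name] = theta[offset:offset + n].view(shape)
            offset += n
        # Run the model with the projected weights; gradients flow back to z.
        return functional_call(self.model, params, (x,))

# Sweep k and find the smallest subspace that recovers roughly 90% of the
# fully trained model's accuracy; that k is the measured intrinsic dimension.
mlp = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 1))
wrapped = SubspaceTrainer(mlp, k=50)
optimizer = torch.optim.Adam([wrapped.z], lr=1e-2)
```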
This talk bridges the ideas explored in my four-part blog series, which aims to demystify PEFT, LoRA, and intrinsic dimension for a broader audience. The series gained good visibility, receiving positive traction on r/MachineLearning and ranking 7th on Hacker News, where it stayed on the front page for a day. The first two posts, on LoRA, became the basis of a talk I gave at PyCon India 2024.