August meet-up

Meet-up for Generative AI practitioners in Pune

This is a meetup for enthusiasts and technologists who are using or want to use Generative AI.

Meet-up agenda

1. What is Generative AI - Aniket Kulkarni

Topics that will be covered:

  • What is Generative AI? Why is there suddenly so much talk about it?
  • What do terms like LLM, GPT, Transformer, Stable Diffusion, embeddings, and prompt engineering mean? How do LLMs work?
  • What are some of the open-source LLMs?
  • What are the use cases of Generative AI?

Speaker Bio

Aniket Kulkarni

2. Demystifying Quantization in Large Language Models (LLMs) - Harshad Saykhedkar

What will be covered?

  • The basic maths of sizing up the memory and compute requirements for training and inference of a large language model, with some popular open-source models used as examples.
  • A quick brush-up of data types.
  • In plain English, how do popular quantization methods work?
  • A worked example of a typical computation in a neural network, showing what quantization brings to the table.
  • What impact does this have on compute and memory requirements? What is the fine print?
  • Why is this important? How can you apply it in your work?
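To give a rough sense of the sizing maths mentioned above, the snippet below estimates how much memory a model's weights alone occupy at different numeric widths. The 7-billion-parameter figure and the byte widths are illustrative assumptions for this sketch, not details from the talk.

```python
# Back-of-envelope memory sizing for LLM inference.
# Activations, KV cache, and optimizer state (for training) add more on top;
# this counts only the weights themselves.

def inference_memory_gb(n_params: float, bytes_per_param: float) -> float:
    """Memory needed just to hold the weights, in GiB."""
    return n_params * bytes_per_param / (1024 ** 3)

# A hypothetical 7-billion-parameter model (roughly LLaMA-7B scale):
params = 7e9
for dtype, width in [("fp32", 4), ("fp16", 2), ("int8", 1), ("int4", 0.5)]:
    print(f"{dtype}: ~{inference_memory_gb(params, width):.1f} GiB")
```

The pattern is the point: halving the bytes per parameter halves the weight memory, which is why dropping from fp16 to int4 can move a model from a data-centre GPU to a consumer card.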

Why is this topic important?

Quantization has emerged as a significant enabler for large language models (LLMs), making them accessible to companies without extravagant budgets (read: the ability to throw money at the problem) and paving the way for edge deployments. This talk delves beyond the basic concept of converting floats to integers: it explains the underlying maths that governs memory and computation requirements, demonstrating how quantized computation facilitates not only inference but, potentially, training as well. It also illuminates the cost, computational, and business impacts of quantization.
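To make the floats-to-integers step concrete, here is a minimal sketch of symmetric quantization with a single scale factor. This is a toy illustration, not any particular library's method; production schemes add per-channel scales, zero-points, and calibration data, which is part of the fine print the talk covers.

```python
# Symmetric quantization: map floats into signed n-bit integers using one
# scale factor, then map back. The round-trip error is bounded by scale / 2.

def quantize(xs, n_bits=8):
    """Return (integer codes, scale) for a list of floats."""
    qmax = 2 ** (n_bits - 1) - 1          # 127 for int8
    scale = max(abs(x) for x in xs) / qmax
    return [round(x / scale) for x in xs], scale

def dequantize(q, scale):
    """Recover approximate floats from integer codes."""
    return [v * scale for v in q]

weights = [0.42, -1.30, 0.07, 0.95]       # toy weight values
q, s = quantize(weights)
approx = dequantize(q, s)
max_err = max(abs(a - b) for a, b in zip(weights, approx))
print("codes:", q, "max error:", round(max_err, 4))
```

Storing the int8 codes takes a quarter of the space of fp32, at the cost of a small, bounded rounding error per value; the trade-off the talk examines is how that per-value error propagates through billions of parameters.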

Key takeaways for audience

  • An intuitive yet in-depth comprehension of why quantization is crucial for training or fine-tuning LLMs.
  • What is, roughly, happening in the maths? Where are the trade-offs?
  • How does it impact accuracy? What evidence supports the accuracy claims?
  • How to make informed quantization trade-offs, equipping attendees to apply LLMs effectively across various use cases.

Speaker Bio

Harshad Saykhedkar

3. Lightning Talks - TBD

10-15 minute presentations on

  • Use of Generative AI in participants’ work.
  • Successes and failures of techniques used.

4. Networking session

Attendees can exchange information and ideas with fellow practitioners.

Hosted by

A platform to discuss Generative AI for enthusiasts in and around Pune

Venue

OneCard HQ

OneHQ, Survey No. 127, Seasons Rd, opposite Sarjaa, Aundh, Pune, Maharashtra 411007

Pune - 411007

Maharashtra, IN
