Jaideep Khandelwal

@jaideepk

ABC of LLMOps - What does it take to run self-hosted LLMs?

Submitted Oct 11, 2024

LLMs and generative AI have made their way into our day-to-day operations. While wrappers over GPT are a good starting point, I was intrigued by what it takes for an SRE to understand the domain, identify its operational aspects, and build runbooks for running self-hosted LLMs.

Many models are being built today, but very few make it to production. Although several companies are trying to streamline the toolchain, it is still nascent. The body of work I will discuss is an experiment in building an understanding of the LLMOps ecosystem.

We built an internal server setup and explored deploying models on our own GPUs instead of relying on OpenAI's APIs.

Goals:

  1. Learn the domain from first principles.
  2. Build practices around running models on Kubernetes with GPUs.
  3. Know what it takes to run and manage Vector databases for storing embeddings.
  4. Use the above knowledge to build and produce RAG applications.
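To make goal 2 concrete, the scheduling side of "models on Kubernetes with GPUs" mostly comes down to requesting the GPU extended resource in the pod spec. Below is a minimal sketch, expressed as a Python dict for readability; the pod and image names are hypothetical placeholders, while `nvidia.com/gpu` is the standard resource name exposed by the NVIDIA device plugin:

```python
import json

# Minimal sketch of a Kubernetes pod spec requesting one GPU.
# The names and image are hypothetical placeholders; "nvidia.com/gpu"
# is the extended-resource name published by the NVIDIA device plugin.
pod_spec = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "llm-server"},
    "spec": {
        "containers": [
            {
                "name": "model",
                "image": "example.com/llm-server:latest",  # placeholder image
                "resources": {
                    # Asking for a GPU in limits makes the scheduler place
                    # the pod only on nodes that advertise this resource.
                    "limits": {"nvidia.com/gpu": 1}
                },
            }
        ]
    },
}

if __name__ == "__main__":
    print(json.dumps(pod_spec, indent=2))
```

The same structure would normally live in a YAML manifest; the dict form here just makes the nesting explicit.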

What our learning curve looked like:

  1. Take the basic concepts of the domain and build a mental model of the toolchain and ecosystem.
  2. Explore platforms like Ray/KubeRay and model repositories like Hugging Face. Understand their usage from an operational perspective.
  3. Start with smaller models like Phi-3 and graduate to more advanced models like Llama 3.1.
  4. Understand the pipeline from a developer perspective - using frameworks like LangChain, finding their limitations, and shifting our codebase to LlamaIndex.
  5. Start with toy applications to explore each tool individually. After gaining a basic understanding, we moved to building a RAG application for internal usage - a resume-filter application.
  6. Dogfooded it internally and learned more about prompt engineering, vector embeddings, and vector databases like Qdrant.
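The retrieval step at the heart of a RAG application like the resume filter above can be reduced to nearest-neighbour search over embedding vectors. The sketch below is a pure-Python stand-in for illustration only: the hand-made three-dimensional vectors are hypothetical, whereas in practice the embeddings would come from an embedding model and be stored and queried in a vector database such as Qdrant:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Toy "embeddings" for three hypothetical resumes. Real embeddings
# are high-dimensional vectors produced by an embedding model.
documents = {
    "resume_a": [0.9, 0.1, 0.0],
    "resume_b": [0.1, 0.8, 0.1],
    "resume_c": [0.0, 0.2, 0.9],
}

def retrieve(query_vec, k=2):
    """Return the top-k document ids ranked by cosine similarity,
    standing in for a vector-database similarity query."""
    ranked = sorted(documents,
                    key=lambda d: cosine(query_vec, documents[d]),
                    reverse=True)
    return ranked[:k]

print(retrieve([0.85, 0.15, 0.05]))  # -> ['resume_a', 'resume_b']
```

The retrieved documents would then be stuffed into the prompt as context for the LLM, which is the "augmented generation" half of RAG.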

What will you gain from this talk:

  1. How to approach this domain if you are managing backend systems - our learnings came less from a pure development view and more from a “How will I run this in production?” lens.
  2. What it takes to build your own home lab, and where you can save costs - is the public cloud cheaper, or does buying your own hardware make sense for an org planning to invest in the domain?
  3. The domain of LLMOps is still developing, and we are learning it through experimentation. You will gain a perspective on, and an approach to, experimenting your way through this landscape.
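On the cloud-versus-own-hardware question in point 2, a simple break-even calculation is a useful starting point. All figures below are hypothetical placeholders for illustration, not numbers from the talk:

```python
# Toy break-even estimate: rented cloud GPU vs. owned hardware.
# Every number here is an assumed placeholder, not real pricing.
cloud_rate = 2.00          # $/hour for a rented GPU instance (assumed)
hardware_cost = 15_000.00  # upfront cost of a GPU workstation (assumed)
power_rate = 0.10          # $/hour for electricity and cooling (assumed)

# Owning wins once cumulative rental spend exceeds the upfront cost
# plus cumulative power cost: cloud_rate * h > hardware_cost + power_rate * h
breakeven_hours = hardware_cost / (cloud_rate - power_rate)

print(f"Break-even after ~{breakeven_hours:.0f} GPU-hours "
      f"(~{breakeven_hours / 24:.0f} days of continuous use)")
```

The real decision also hinges on utilization: a home lab that sits idle never reaches break-even, while sustained training or inference workloads get there quickly.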

This talk benefits software engineers at all levels but is especially relevant for SRE and DevOps practitioners.

