The Fifth Elephant 2023 Winter

On the engineering and business implications of AI & ML

Sugarcane AI

@sugarcaneai

Building LLM Apps up to 20x Faster using Microservices Architecture

Submitted Nov 15, 2023

Overview

Microservices architecture has transformed API and frontend development, powering products that serve billions of users. With the advent of generative AI and Large Language Models (LLMs), there is now an unprecedented opportunity to disrupt industries. However, building reliable and safe LLM applications poses significant challenges for developers. This talk explores how microservices architecture can address these challenges and accelerate LLM app development.

Problem

Developers face multiple challenges when building scalable and safe LLM applications:

  1. LLM Selection
  2. LLM Interoperability
  3. Prompt Reusability
  4. Prompt Collaboration
  5. Prompt Testing and Evaluation
  6. Prompt Refining and Fine-Tuning Cost
  7. Reproducibility
  8. Guardrailing
  9. Outdated Knowledge of LLMs
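Several of these challenges, notably LLM selection and interoperability, arise when a provider-specific API is hard-wired into application code. A minimal sketch (all class and function names are hypothetical, not from any real provider SDK) of how a common backend interface keeps providers swappable:

```python
from abc import ABC, abstractmethod


class LLMBackend(ABC):
    """Common interface the application depends on, not any one provider."""

    @abstractmethod
    def complete(self, prompt: str) -> str:
        ...


class EchoBackend(LLMBackend):
    """Stand-in for a real provider client (a hosted API, a local model, etc.)."""

    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"


def answer(backend: LLMBackend, question: str) -> str:
    # Application code calls only the interface, so choosing or swapping
    # an LLM does not ripple through the codebase.
    return backend.complete(question)


print(answer(EchoBackend(), "hello"))  # echo: hello
```

Any backend implementing `complete` can be dropped in, which also makes side-by-side evaluation of candidate models straightforward.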

Solution

This talk proposes breaking the monolithic approach down into a layered architecture inspired by microservices, with distinct concerns for app developers, prompt developers, and data scientists. It will cover the key components of this architecture and open-source frameworks for prompt management, including versioning, refining, accuracy measurement, backtesting, and training, coupled with a MicroLLM service.

This talk is based on the speaker's work at Sugarcane AI, an npm-like ecosystem for prompts.
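To make the npm analogy concrete, here is a minimal sketch (hypothetical names only, not Sugarcane AI's actual API) of prompts managed as versioned packages: prompt developers publish versions, and app developers pin one, which keeps runs reproducible:

```python
from dataclasses import dataclass, field


@dataclass
class PromptPackage:
    """A named prompt with versioned templates, analogous to an npm package."""

    name: str
    versions: dict[str, str] = field(default_factory=dict)  # version -> template

    def publish(self, version: str, template: str) -> None:
        self.versions[version] = template

    def render(self, version: str, **variables: str) -> str:
        # Pinning an exact version makes a run reproducible, since the
        # template cannot change underneath the application.
        return self.versions[version].format(**variables)


# A tiny in-memory registry standing in for a shared package index.
registry: dict[str, PromptPackage] = {}

pkg = PromptPackage("summarize")
pkg.publish("1.0.0", "Summarize the following text: {text}")
registry[pkg.name] = pkg

print(registry["summarize"].render("1.0.0", text="LLM apps"))
# Summarize the following text: LLM apps
```

Layering version resolution, testing, and guardrails behind a registry like this is the kind of separation of concerns the talk attributes to the microservices approach.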

Agenda

  • Introduction
  • Challenges with Monolithic LLM Development
  • Microservices Architecture in LLM
    • Components of Microservices Architecture
  • How Microservices Architecture accelerates development by up to 20x
    • LLM Selection
    • LLM Interoperability
    • Reusability
    • Cost Challenges
    • Reproducibility
    • Guardrailing
    • Addressing Outdated Knowledge of LLMs
    • Migrating to Microservices Architecture in LLM Apps
    • Time and Cost of Development and Maintenance
  • Cherry on the Top
    • AI Safety
    • Knowledge Upgradation
    • Real-world Examples and Challenges (depends on time)

Conclusion

Attendees will gain a comprehensive understanding of leveraging Microservices Architecture to build LLM applications up to 20x faster. Real-world examples and insights from Sugarcane AI and Toast will provide practical knowledge for navigating the challenges of LLM-based app development.

Hybrid access (members only)
