Rootconf 2025 Annual Conference CfP



Bharath Nallapeta

RRR: Rapid, Resilient, Reliable - Cluster Provisioning with k0rdent

Submitted Mar 24, 2025

Abstract

MLOps isn’t just about deploying AI models - it’s about building a scalable, repeatable, and automated AI platform. Setting up GPU-powered clusters, model-serving, and monitoring for AI workloads can be a nightmare of manual configurations, slow iteration cycles, and fragmented tooling.

This talk isn’t about scaling AI models - it’s about scaling AI infrastructure. We’ll showcase how k0rdent automates the entire MLOps lifecycle, from GPU cluster provisioning to model deployment, scaling, and monitoring.

Watch it live: We’ll spin up a GPU-enabled Kubernetes cluster across clouds and regions, deploy AI infrastructure, and show how k0rdent streamlines the entire MLOps workflow - all in minutes.
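To give a feel for the declarative workflow the demo walks through, a k0rdent cluster request might look something like the sketch below. This is illustrative only: the apiVersion, resource kind, and field names are assumptions rather than the verified k0rdent schema, so consult the k0rdent documentation for the exact API.

```yaml
# Illustrative sketch only: apiVersion, kind, and field names are
# assumptions, not the verified k0rdent API.
apiVersion: k0rdent.mirantis.com/v1alpha1
kind: ClusterDeployment
metadata:
  name: gpu-demo-cluster
  namespace: kcm-system
spec:
  template: aws-standalone-cp      # cluster template to provision from
  credential: aws-credential       # reference to cloud credentials
  config:
    region: us-west-2
    controlPlaneNumber: 1
    workersNumber: 2
    worker:
      instanceType: g4dn.xlarge    # GPU-backed worker nodes
```

The point of the demo is that a single declarative object like this, applied to the management cluster, drives the full provisioning flow instead of a sequence of manual cloud-console steps.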

MLOps is complicated, but it doesn’t have to be: k0rdent makes it effortless.

Take-aways

Automated MLOps infrastructure: deploy GPU-ready clusters across clouds effortlessly.

Zero manual GPU setup: the NVIDIA GPU Operator automates and optimizes GPU usage without extra work.

Full-stack MLOps observability: KOF (k0rdent Observability and FinOps) provides real-time AI infrastructure monitoring at scale.
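For context on the GPU take-away above: on a cluster managed outside this workflow, the NVIDIA GPU Operator is typically installed with its public Helm chart, roughly as follows. The release name and namespace here are arbitrary choices, not requirements.

```shell
# Add NVIDIA's Helm repository and install the GPU Operator, which
# deploys the driver, device plugin, and monitoring components as pods.
helm repo add nvidia https://helm.ngc.nvidia.com/nvidia
helm repo update

helm install gpu-operator nvidia/gpu-operator \
  --namespace gpu-operator \
  --create-namespace \
  --wait
```

The contrast the talk draws is that k0rdent folds this step into cluster provisioning itself, so GPU nodes come up ready to schedule AI workloads without a separate manual install.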

Target Audience

Platform Engineers, DevOps practitioners, AI/ML Engineers looking to simplify and automate AI model deployment on Kubernetes.

Speaker Bio

Bharath Nallapeta is a seasoned cloud-native engineer specializing in Go, Kubernetes, and platform engineering. With extensive experience in designing and operating scalable Kubernetes infrastructure, Bharath has contributed to open-source projects and enterprise-grade cloud solutions. His expertise spans Kubernetes automation, multi-cloud deployments, and cluster management, with a deep understanding of how to optimize performance, security, and efficiency in cloud environments.

Beyond Kubernetes, Bharath is passionate about bridging AI and cloud-native technologies, ensuring AI workloads can scale efficiently and cost-effectively on Kubernetes. He actively works on building resilient, automated, and developer-friendly platforms that make running AI/ML workloads seamless in production environments.


