Rootconf Mini 2024 (on 22nd & 23rd Nov)



Achal Shah

@achalshah

Rebuilding Tecton's Realtime Compute stack (twice)

Submitted Oct 5, 2024

Overview:

This tech talk proposes to dive into the evolution of Tecton’s real-time compute stack, a journey that started with sidecar processes, moved through serverless architecture, and ultimately matured into a native service deployed on virtual machines (VMs). The session will (hopefully) outline the challenges, lessons learned, and engineering decisions made at each stage.

I’d like to have the following rough sections:

Background and Context:

  • Tecton’s Realtime Data Stack: Introduce Tecton and its real-time data processing requirements for machine learning (ML) and feature serving, including how Tecton executes user-defined post-processing code in real-time feature retrieval APIs.
  • Initial Architecture - Sidecar Process: Explain how the compute stack initially relied on a sidecar model, what this architecture entailed, and its advantages in simplicity and quick iteration during early-stage development.
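To make "user-defined post-processing code" concrete, here is a minimal sketch of the kind of function a user might supply; the feature names and the function signature are illustrative assumptions, not Tecton's actual API:

```python
# Hypothetical user-defined post-processing function of the kind executed
# at feature-retrieval time: it transforms raw feature values before they
# are returned to the caller. Feature names here are illustrative.
def postprocess(features: dict) -> dict:
    clicks = features.get("clicks_7d", 0)
    impressions = features.get("impressions_7d", 0)
    # Derive a click-through-rate feature from two raw counters.
    features["ctr_7d"] = clicks / impressions if impressions else 0.0
    return features
```

The key property is that this is arbitrary user Python, so the platform must decide where and how to execute it on every retrieval request.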

Challenges with Sidecar Processes:

  • Resource Contention: Describe the limitations of running sidecars alongside primary services, especially concerning resource isolation, security posture, network latencies, and scaling issues.
  • Operational Complexity: How managing large-scale, sidecar-based microservices introduced operational overhead and complexity.
  • Poor customer experience: Tecton dictated the environment/available libraries. Users couldn’t customize this because the environment was baked into our service code.
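The costs listed above come largely from the process boundary itself. A toy sketch (an assumed design, not Tecton's implementation) that runs user code in a child Python process shows the serialize → IPC → deserialize round trip every request pays in a sidecar model:

```python
import json
import subprocess
import sys

def run_in_sidecar(features: dict) -> dict:
    # The "sidecar" here is a short-lived child Python process; real sidecars
    # are long-lived, but the process boundary is the same, and it is where
    # the serialization overhead and resource contention come from.
    user_code = (
        "import json, sys\n"
        "f = json.load(sys.stdin)\n"
        "f['doubled'] = f.get('x', 0) * 2\n"
        "json.dump(f, sys.stdout)\n"
    )
    out = subprocess.run(
        [sys.executable, "-c", user_code],
        input=json.dumps(features),
        capture_output=True,
        text=True,
        check=True,
    )
    return json.loads(out.stdout)
```

Because the sidecar shares the host with the primary service, its CPU and memory use directly contend with feature serving, and its Python environment is fixed at deploy time.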

Moving to Serverless Functions:

  • The Appeal of Serverless: Discuss the decision to explore serverless functions (AWS Lambda) to handle real-time compute, reducing the overhead of managing servers and improving cost efficiency.
  • We utilized AWS Lambda to execute the aforementioned user-defined post-processing code.
  • We also extended our usage of AWS Lambda to build an API-driven ingestion service.
  • Benefits: Elastic scaling, more flexible and user-managed python environments for their postprocessing code, and better security posture.
  • Limitations: Cold starts, poor performance even with warm starts, concurrency limits, debugging complexity, and vendor lock-in. These challenges affected our real-time SLAs and generally led to a poor user experience.
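In this model, each retrieval request dispatches the user's code via a synchronous Lambda invocation. A hedged sketch of what that dispatch might look like (the function name and payload shape are illustrative assumptions, not Tecton's actual design):

```python
import json

def build_payload(features: dict, request_id: str) -> bytes:
    # Every invocation pays this marshalling cost plus the Lambda round trip.
    return json.dumps({"request_id": request_id, "features": features}).encode()

def invoke_postprocess(client, features: dict, request_id: str) -> dict:
    # `client` is a boto3 Lambda client, e.g. boto3.client("lambda").
    resp = client.invoke(
        FunctionName="user-postprocess",        # hypothetical function name
        InvocationType="RequestResponse",       # synchronous: retrieval blocks on it
        Payload=build_payload(features, request_id),
    )
    return json.loads(resp["Payload"].read())
```

The synchronous `RequestResponse` call is what puts cold starts and Lambda's latency variance directly on the feature-serving critical path.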

Evolution to Native Service on VMs:

  • What did we get wrong about serverless?
    • For customers, performance is paramount.
    • In our case, with a strict SLA but diverse workloads, serverless implementations are a non-starter. The variance in latencies, even in the happy path, is too large.
    • The cost for lambda functions is actually higher than a lean service serving the same workload. Execution times were higher on Lambda.
    • Embedding state in Lambda (using Lambda layers) is high friction. Users’ changes take multiple minutes to reflect in their production workloads. This is too long.
  • The Shift to Native Services: After outgrowing serverless solutions, explain the move towards a native service on VMs. This section will cover why VMs were chosen over containers or Kubernetes in this specific case.
  • Performance Gains: Greatly reduced latencies, better control over resource allocation, and improved predictability of performance for real-time ML feature serving. All while providing the same (in fact, better) user product experience.
  • Operational Improvements: Describe how the switch to VMs simplified monitoring, debugging, and scaling at the infrastructure level.
  • Cloud Portability: With the right abstractions in place, we were able to deploy these services to GCP with very little lift.
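The native-service model can be sketched as loading the user's code once into the service process and calling it in-process per request. This is a minimal illustration under assumed design choices, not Tecton's implementation; real executors would add sandboxing and per-user environment isolation:

```python
class InProcessExecutor:
    """Loads user post-processing code once, then runs it per request
    with no network hop or per-call process spawn."""

    def __init__(self, user_source: str, entrypoint: str = "postprocess"):
        # Compile and load the user's code once, at deploy/startup time.
        namespace: dict = {}
        exec(compile(user_source, "<user_code>", "exec"), namespace)
        self._fn = namespace[entrypoint]

    def run(self, features: dict) -> dict:
        # Plain in-process function call: no serialization, no cold start.
        return self._fn(features)

# Usage: load at deploy time, call on every retrieval request.
executor = InProcessExecutor(
    "def postprocess(f):\n"
    "    f['total'] = f.get('a', 0) + f.get('b', 0)\n"
    "    return f\n"
)
```

Eliminating the per-request IPC and Lambda round trip is what makes latencies both lower and far more predictable than in the serverless design.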

Technical Challenges and Lessons Learned:

  • Architectural Trade-offs: Key trade-offs between these architectures (sidecars vs. serverless vs. VMs), specifically as interpreted by us.
  • Product lessons learned

Future Directions:

  • Long-term vision

