Rootconf 2019

On infrastructure security, DevOps and distributed systems.

Kafka Streams at Scale

Submitted by DEEPAK GOYAL (@zonker) on Tuesday, 5 March 2019


Section: Full talk of 40 mins duration
Technical level: Advanced

Abstract

Walmart.com generates millions of events per second. At WalmartLabs, I work on a team called the Customer Backbone (CBB), where we wanted to build a platform capable of processing this event volume in real time and storing the state/knowledge of possibly all Walmart customers that this processing generates. Kafka Streams’ event-driven architecture seemed like the obvious choice. However, Walmart’s scale brings a few challenges: the clusters need to be large, and with that come several problems:
infinite retention of changelog topics, wasting valuable disk space
slow standby-task recovery after a node failure (changelog topics hold GBs of data)
no support for repartitioning in Kafka Streams
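
For context, each stateful Kafka Streams task backs its local RocksDB store with an internal changelog topic on the brokers, and those topics are what drive the disk cost and recovery time above. A minimal sketch of the relevant Streams configuration (the property names and the `topic.` override prefix are from the public Kafka Streams API; the values are illustrative only, not the settings used at Walmart):

```properties
# Keep one warm standby replica per stateful task, so a failover
# does not have to replay the whole changelog from offset 0.
num.standby.replicas=1

# Properties with the "topic." prefix are applied as broker-side
# overrides to the internal (changelog/repartition) topics that
# Kafka Streams creates on your behalf.
topic.cleanup.policy=compact
topic.segment.bytes=104857600
```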

As part of this event-driven development, and to address the challenges above, I’m going to talk about some bold ideas that we developed as features/patches to Kafka Streams to handle the scale Walmart requires.
Cold Bootstrap: on a Kafka Streams node failure, instead of recovering from the changelog topic, we bootstrap the standby directly from the active’s RocksDB using JSch, with careful offset management ensuring zero event loss.
Dynamic Repartitioning: we added support for repartitioning in Kafka Streams, with existing state redistributed among the new partitions. We can now elastically scale to any number of partitions and any number of nodes.
Cloud/Rack/AZ-aware task assignment: no active and standby tasks of the same partition are ever assigned to the same rack.
Decreased Partition Assignment Size: with clusters as large as ours (>400 nodes and 3 stream threads per node), the partition assignment of the KS cluster grows to a few hundred MBs, and rebalances take a long time to settle.
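
The rack-aware placement described above boils down to one constraint during assignment: an active task and its standby must never share a rack. The sketch below is a hypothetical, stripped-down model of that constraint (the `RackAwareAssigner` class and its round-robin strategy are invented for illustration; the real logic lives inside the Streams partition assignor):

```java
import java.util.*;

// Hypothetical sketch of rack-aware active/standby placement:
// never put an active task and its standby on the same rack.
public class RackAwareAssigner {

    public record Node(String id, String rack) {}

    /** Returns partition -> [activeNode, standbyNode], racks guaranteed distinct. */
    public static Map<Integer, List<Node>> assign(List<Node> nodes, int partitions) {
        Map<Integer, List<Node>> assignment = new HashMap<>();
        for (int p = 0; p < partitions; p++) {
            // Place actives round-robin across all nodes.
            Node active = nodes.get(p % nodes.size());
            // Pick the next node (round-robin) that sits on a different rack.
            Node standby = null;
            for (int i = 1; i < nodes.size(); i++) {
                Node candidate = nodes.get((p + i) % nodes.size());
                if (!candidate.rack().equals(active.rack())) {
                    standby = candidate;
                    break;
                }
            }
            if (standby == null) {
                throw new IllegalStateException("need nodes on at least two racks");
            }
            assignment.put(p, List.of(active, standby));
        }
        return assignment;
    }
}
```

The same idea generalizes from racks to cloud availability zones by treating the AZ name as the rack identifier.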

Key Takeaways:
Basic understanding of Kafka Streams.
Productionizing Kafka Streams at large scale.
Using Kafka Streams as a distributed NoSQL DB.

Outline

Problem Statement: stateful real-time processing of millions of events per second.

  1. Intro Kafka Streams and event flow (2 slides)
  2. Challenges in Kafka Streams
    a. Fault Recovery
    b. Horizontal Scalability
    c. Cloud Readiness
    d. Restricted RocksDB
    e. Large Clusters
  3. Background on why these are challenging.
  4. How we forked the code over the past year to solve each of them.
  5. Conclusion
  6. Future Works

Speaker bio

Deepak works at Walmart Labs as a software engineer on the Customer Backbone team, where millions of events need to be processed in real time. He promotes event-driven architecture, enabled by a “Distributed NoSQL DB and Streaming Platform” based on Kafka Streams that the team is building. He is now working on a distributed graph algorithm using this event-driven architecture.

Slides

https://www.slideshare.net/DeepakGoyal25/kafka-streams-at-scale-preview

Preview video

https://www.youtube.com/watch?v=U-fPBr7sVY8

Comments

  • Anwesha Das (@anweshasrkr) 7 months ago

    Thanks for submitting this proposal. We require slides and preview video by 12th March, latest, to evaluate your proposal and make a decision.
