The Fifth Elephant 2015

A conference on data, machine learning, and distributed and parallel computing

Aniruddha Gangopadhyay

@aniruddha9591

Joining data streams at scale for fun and profit

Submitted Jun 15, 2015

Understand how to derive more value out of real-time data streams by joining them in a stream processing system to surface deeper insights. We’ll walk through our experience of building a platform for such use cases at Flipkart, and describe the design patterns we have evolved within it; we have scaled this platform to process billions of events a day across hundreds of streaming data applications.

Outline

Real-time data streams are everywhere, which is not really surprising considering how easy it has gotten to generate them. For example, applications can write their hearts out to a Kafka cluster, logs can be streamed out via Logstash, a Change Data Capture (CDC) system can be deployed to turn a database’s write-ahead log into a stream, etc.
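As an illustration of how little it takes to produce such a stream, here is a minimal sketch of an application publishing click events to Kafka with the Java producer API; the broker address, topic name, and payload here are hypothetical:

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.Producer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class ClickEventPublisher {
        public static void main(String[] args) {
            Properties props = new Properties();
            props.put("bootstrap.servers", "kafka-broker:9092"); // hypothetical broker
            props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

            try (Producer<String, String> producer = new KafkaProducer<>(props)) {
                // Keying by session id keeps all of a session's events in one
                // partition, preserving their relative order.
                producer.send(new ProducerRecord<>("product-clicks", "session-42",
                        "{\"query\":\"phone\",\"productId\":\"P123\",\"ts\":1434355200000}"));
            }
        }
    }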

Stream processing systems such as Apache Storm have become quite popular for analysing such data streams; they are used to power real-time analytical dashboards as well as other data-driven products such as recommendations, trending topics, etc.
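For a sense of what such a pipeline looks like, here is a minimal sketch of a Storm topology that maintains per-product click counts; ClickSpout and ClickCountBolt are hypothetical components, and the parallelism hints are arbitrary:

    import org.apache.storm.Config;
    import org.apache.storm.StormSubmitter;
    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.tuple.Fields;

    public class ClickCountTopology {
        public static void main(String[] args) throws Exception {
            TopologyBuilder builder = new TopologyBuilder();
            // ClickSpout (hypothetical) reads click events off Kafka;
            // ClickCountBolt (hypothetical) keeps a running count per product.
            builder.setSpout("clicks", new ClickSpout(), 2);
            builder.setBolt("counts", new ClickCountBolt(), 4)
                   .fieldsGrouping("clicks", new Fields("productId"));

            StormSubmitter.submitTopology("click-counts", new Config(),
                    builder.createTopology());
        }
    }

The fieldsGrouping on productId routes all events for a given product to the same bolt task, which is what makes stateful per-key aggregation possible.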

But a lot more value can be derived by joining multiple data streams in real time — when it comes to data, the value of the whole is much greater than that of the parts. For example, while data streams of search queries and product result clicks are useful by themselves, joining them allows us to derive metrics such as click-through rate (a sketch of such a join follows the list below). But doing this is fraught with several challenges:

  • Data streams could be coming in — or be processed with — different lags, i.e. one stream could have data as recent as a few seconds ago while the latest data in another stream is from several minutes ago. This can occur due to buffering, cross-datacenter replication, etc.
  • Data streams could have data arriving out of order, i.e. while most of the recent events in a stream are from a few seconds ago, a small percentage could be from several minutes ago, again due to buffering, cross-datacenter replication, etc.
  • Data streams could be very large, to the tune of tens to hundreds of thousands of events per second.
  • Data streams could represent updates to some mutable state, such as a User Profile, an Order, or Product Information. While this is not a big challenge by itself, it becomes much more complex to deal with in combination with the challenges above.
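To make the click-through-rate example concrete, here is a minimal, hand-rolled sketch of what such a join involves in Storm: a bolt that buffers events from both streams, keyed by the join key, so that an event whose partner is lagging or arriving out of order can still be matched. All names are hypothetical, and a real implementation needs time-based eviction, bounded state, and persistence, which this sketch omits:

    import java.util.HashMap;
    import java.util.Map;
    import org.apache.storm.task.OutputCollector;
    import org.apache.storm.task.TopologyContext;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseRichBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;

    // Joins a "searches" stream with a "clicks" stream on searchId. Whichever
    // side arrives first is buffered until its partner shows up, which tolerates
    // the two streams running with different lags and events arriving out of
    // order, for as long as the buffered entry lives.
    public class SearchClickJoinBolt extends BaseRichBolt {
        private transient Map<String, Tuple> pendingSearches;
        private transient Map<String, Tuple> pendingClicks;
        private transient OutputCollector collector;

        @Override
        public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
            this.collector = collector;
            this.pendingSearches = new HashMap<>();
            this.pendingClicks = new HashMap<>();
        }

        @Override
        public void execute(Tuple tuple) {
            String searchId = tuple.getStringByField("searchId");
            boolean isSearch = "searches".equals(tuple.getSourceComponent());
            Map<String, Tuple> sameSide = isSearch ? pendingSearches : pendingClicks;
            Map<String, Tuple> otherSide = isSearch ? pendingClicks : pendingSearches;

            Tuple partner = otherSide.remove(searchId);
            if (partner == null) {
                // The other side hasn't arrived yet (lag or out-of-order
                // delivery): buffer this event and wait.
                sameSide.put(searchId, tuple);
            } else {
                Tuple search = isSearch ? tuple : partner;
                Tuple click = isSearch ? partner : tuple;
                // Both sides are here: emit the joined record. Downstream bolts
                // can aggregate these into metrics such as click-through rate.
                collector.emit(new Values(searchId,
                        search.getStringByField("query"),
                        click.getStringByField("productId")));
            }
            collector.ack(tuple); // at-most-once semantics, for simplicity
            // Omitted: evicting entries whose partner never arrives, which
            // bounds memory use and defines the effective join window.
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("searchId", "query", "productId"));
        }
    }

Both input streams would be routed to this bolt with fieldsGrouping on searchId so that matching events land on the same task; at very large scale the buffered state would also have to move out of the heap into a persistent store, which is part of what a production framework has to take care of.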

Stream processing systems such as Apache Storm do not provide out-of-the-box ways of expressing such complex joins, let alone handling the complications listed above. At Flipkart, we built a framework on top of Storm to address this; it has been used to build hundreds of stream processing pipelines that join Kafka data streams.

In this talk, we’ll describe the design patterns that the framework implements, how we arrived at them and the lessons we learnt along the way.

Speaker bio

Aniruddha works as a Software Engineer at Flipkart. He is the lead developer for the stream processing framework of the company’s central Data Platform. In this role, Aniruddha is responsible for building and operationalising the design patterns described in this talk.

Siddhartha Reddy is an Architect at Flipkart. He works with Aniruddha on the stream processing framework described here.
