The Fifth Elephant 2015
A conference on data, machine learning, and distributed and parallel computing
Jul 2015
16 Thu 08:30 AM – 06:35 PM IST
17 Fri 08:30 AM – 06:30 PM IST
18 Sat 09:00 AM – 06:30 PM IST
Machine Learning, Distributed and Parallel Computing, and High-performance Computing are the themes for this year’s edition of The Fifth Elephant.
The deadline for submitting a proposal is 15th June 2015.
We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.
This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from it. This could encompass applications of ML paradigms across various data modalities, including multivariate, text, speech, time series, images, video, and transactions.
This track is about tools and processes for collecting, indexing, and processing vast amounts of data.
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.
Hosted by
Aniruddha Gangopadhyay
@aniruddha9591
Submitted Jun 15, 2015
Understand how to derive more value from real-time data streams by joining them in a stream processing system to surface deeper insights. We’ll walk through our experience of building a platform for such use cases at Flipkart and describe the design patterns we have evolved within it; we have scaled this platform to process billions of events a day across hundreds of streaming data applications.
Real-time data streams are everywhere, which is not really surprising considering how easy it has gotten to generate them. For example, applications can write their hearts out to a Kafka cluster, logs can be streamed out via Logstash, a Change Data Capture (CDC) system can be deployed to turn a database’s write-ahead log into a stream, etc.
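To give a rough idea of how low the barrier is, producing such a stream can be a few lines against the standard Kafka producer API. The sketch below is purely illustrative; the broker address, topic name, key, and payload are placeholders, not details from our platform.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class EventEmitter {
        public static void main(String[] args) {
            Properties props = new Properties();
            // Hypothetical broker address; point this at your own cluster.
            props.put("bootstrap.servers", "kafka-broker:9092");
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            KafkaProducer<String, String> producer = new KafkaProducer<>(props);
            // Each application event becomes one message on the "app-events" topic,
            // keyed by a session id so related events land on the same partition.
            producer.send(new ProducerRecord<>("app-events", "session-42",
                    "{\"type\":\"search\",\"query\":\"mobile phones\"}"));
            producer.close();
        }
    }

Keying by session (or request) id is what later makes it possible to correlate related events from different streams.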
Stream processing systems such as Apache Storm have become quite popular for analysing such data streams; they are used to power real-time analytical dashboards as well as other data-driven products such as recommendations and trending topics.
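For orientation, wiring a Kafka topic into a Storm topology takes only a little setup. The ZooKeeper address, topic, component names, and the trivial bolt below are illustrative placeholders rather than details from our platform.

    import backtype.storm.Config;
    import backtype.storm.LocalCluster;
    import backtype.storm.spout.SchemeAsMultiScheme;
    import backtype.storm.topology.BasicOutputCollector;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.TopologyBuilder;
    import backtype.storm.topology.base.BaseBasicBolt;
    import backtype.storm.tuple.Tuple;
    import storm.kafka.KafkaSpout;
    import storm.kafka.SpoutConfig;
    import storm.kafka.StringScheme;
    import storm.kafka.ZkHosts;

    public class DashboardTopology {
        public static void main(String[] args) {
            // Hypothetical ZooKeeper address and topic name.
            ZkHosts zkHosts = new ZkHosts("zookeeper:2181");
            SpoutConfig spoutConfig =
                    new SpoutConfig(zkHosts, "app-events", "/kafka-spout", "dashboard");
            // Decode Kafka messages as plain strings (emitted in a field named "str").
            spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

            TopologyBuilder builder = new TopologyBuilder();
            // Read the raw event stream from Kafka...
            builder.setSpout("events", new KafkaSpout(spoutConfig), 2);
            // ...and hand it to a bolt that would aggregate metrics for a dashboard.
            builder.setBolt("metrics", new MetricsBolt(), 4)
                   .shuffleGrouping("events");

            new LocalCluster().submitTopology("dashboard", new Config(),
                    builder.createTopology());
        }

        public static class MetricsBolt extends BaseBasicBolt {
            @Override
            public void execute(Tuple tuple, BasicOutputCollector collector) {
                // A real bolt would update counters backing a dashboard;
                // here we just print the raw event string emitted by the spout.
                System.out.println(tuple.getStringByField("str"));
            }

            @Override
            public void declareOutputFields(OutputFieldsDeclarer declarer) {
                // Terminal bolt: nothing to emit downstream.
            }
        }
    }

A single stream flowing through a chain of bolts like this is the easy case; the interesting problems start when streams have to be combined.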
But a lot more value can be derived by joining multiple data streams in real time; when it comes to data, the value of the whole is much greater than that of the parts. For example, while data streams of search queries and product result clicks are useful by themselves, joining them allows us to derive metrics such as click-through rate. Doing this, however, is fraught with several challenges.
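To make the idea concrete, a hand-rolled version of such a join in a single Storm bolt might look roughly like the sketch below. The component names, tuple fields, and unbounded in-memory state are illustrative simplifications, not how our framework works, and they gloss over exactly the challenges just mentioned.

    import java.util.HashMap;
    import java.util.Map;
    import backtype.storm.topology.BasicOutputCollector;
    import backtype.storm.topology.OutputFieldsDeclarer;
    import backtype.storm.topology.base.BaseBasicBolt;
    import backtype.storm.tuple.Fields;
    import backtype.storm.tuple.Tuple;
    import backtype.storm.tuple.Values;

    // Joins a stream of searches with a stream of clicks on a shared requestId and
    // emits per-query impression/click counts, from which click-through rate follows.
    // Assumes two upstream components with ids "search-events" (fields: requestId,
    // query) and "click-events" (field: requestId), both fieldsGrouped on requestId
    // so that matching events reach the same task.
    public class SearchClickJoinBolt extends BaseBasicBolt {
        // requestId -> query, for searches waiting to be matched with clicks.
        private final Map<String, String> pendingSearches = new HashMap<String, String>();
        // query -> {impressions, clicks}; unbounded here, windowed in practice.
        private final Map<String, long[]> counts = new HashMap<String, long[]>();

        @Override
        public void execute(Tuple tuple, BasicOutputCollector collector) {
            String requestId = tuple.getStringByField("requestId");
            String query;
            if ("search-events".equals(tuple.getSourceComponent())) {
                query = tuple.getStringByField("query");
                pendingSearches.put(requestId, query);   // remember the impression
                countsFor(query)[0]++;                   // impressions++
            } else {                                     // a click event
                query = pendingSearches.get(requestId);  // match against a prior search
                if (query == null) {
                    // Click arrived before (or without) its search; dropping it is one
                    // of the simplifications a real framework cannot afford.
                    return;
                }
                countsFor(query)[1]++;                   // clicks++
            }
            long[] c = countsFor(query);
            // Running counts per query; click-through rate = clicks / impressions.
            collector.emit(new Values(query, c[0], c[1]));
        }

        private long[] countsFor(String query) {
            long[] c = counts.get(query);
            if (c == null) {
                c = new long[2];
                counts.put(query, c);
            }
            return c;
        }

        @Override
        public void declareOutputFields(OutputFieldsDeclarer declarer) {
            declarer.declare(new Fields("query", "impressions", "clicks"));
        }
    }

Even in this toy form, the in-memory state would need to be bounded by a time window and evicted, and out-of-order or missing events handled deliberately; getting that right at scale is where most of the difficulty lies.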
Stream processing systems such as Apache Storm do not provide out-of-the-box ways of expressing such complex joins, let alone taking care of the complications involved. At Flipkart, we built a framework on top of Storm to do exactly this; it has been used to build hundreds of stream processing pipelines that join Kafka data streams.
In this talk, we’ll describe the design patterns that the framework implements, how we arrived at them and the lessons we learnt along the way.
Aniruddha works as a Software Engineer at Flipkart. He is the lead developer for the stream processing framework of the company’s central Data Platform. In this role, he has been responsible for building and operationalising the design patterns described in this talk.
Siddhartha Reddy is an Architect at Flipkart. He works with Aniruddha on the stream processing framework described here.