The Fifth Elephant 2015

A conference on data, machine learning, and distributed and parallel computing

Machine Learning, Distributed and Parallel Computing, and High-performance Computing are the themes for this year’s edition of The Fifth Elephant.

The deadline for submitting a proposal is 15th June 2015.

We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.

Track 1: Discovering Insights and Driving Decisions

This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from data. This could encompass applications of the following ML paradigms:

  • Statistical Visualizations
  • Unsupervised Learning
  • Supervised Learning
  • Semi-Supervised Learning
  • Active Learning
  • Reinforcement Learning
  • Monte Carlo techniques and probabilistic programming
  • Deep Learning

These techniques may be applied across various data modalities, including multivariate data, text, speech, time series, images, video, and transactions.

Track 2: Speed at Scale

This track is about tools and processes for collecting, indexing, and processing vast amounts of data. The theme includes:

  • Distributed and Parallel Computing
  • Real Time Analytics and Stream Processing
  • MapReduce and Graph Computing frameworks
  • Kafka, Spark, Hadoop, MPI
  • Stories of parallelizing sequential programs
  • Cost/Security/Disaster Management of Data

Commitment to Open Source

HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.


If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.

Hosted by

All about data science and machine learning

Siddhartha Reddy


Stream Processing in production: Metrics that matter

Submitted Jun 15, 2015

Learn which metrics are useful for monitoring the health of stream processing jobs (such as Apache Storm topologies) deployed in production. You will also get ideas on how to capture these metrics (including suggestions for libraries and tools) and how to proactively keep problems from escalating.


Stream processing platforms such as Apache Storm have become commonplace. They power a variety of applications, from real-time analytics dashboards to data-driven features such as recommendations. We have also seen Storm employed simply as a distributed, fault-tolerant runtime for applications that need to consume data from a queue and perform some operations on it.

But because these jobs typically aren’t in the user-facing request path, they are often monitored poorly or not at all. Put another way, the only monitoring some of them have is the business folks alerting us by shouting “hey, my analytics dashboards are stale!”

Flipkart’s Data Platform hosts hundreds of stream processing applications, several of which are critical for our business. As such, we can’t afford not to monitor their health. So we evolved a whole set of metrics that we monitor for each of these jobs. These metrics are displayed as part of our platform health dashboards, shown on large TV screens in our team area; we have connected them to our alerting system to warn us about mishaps; we have even set up automated corrective actions that are triggered by some of them.
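The talk itself lists the specific metrics, but a common example of the kind of health signal described above is consumer lag: how far a stream job has fallen behind the queue it consumes from. A minimal, hypothetical sketch (the function and threshold names are illustrative, not Flipkart’s actual tooling):

```python
def consumer_lag(log_end_offsets, committed_offsets):
    """Total lag across partitions: messages produced but not yet processed.

    log_end_offsets: {partition: latest offset written to the queue}
    committed_offsets: {partition: last offset the job has processed}
    """
    return sum(
        log_end_offsets[p] - committed_offsets.get(p, 0)
        for p in log_end_offsets
    )


def should_alert(lag, threshold):
    """Fire an alert when the job has fallen too far behind."""
    return lag > threshold


# Example: two partitions, the job is 100 + 50 = 150 messages behind.
ends = {0: 1500, 1: 2300}
committed = {0: 1400, 1: 2250}
lag = consumer_lag(ends, committed)
print(lag, should_alert(lag, threshold=100))
```

A metric like this could feed both a dashboard panel and an alerting rule; an automated corrective action might, for instance, restart a stalled worker once the threshold is breached for long enough.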

In this talk we’ll describe the metrics we monitor for each stream processing job, how we capture them, the libraries and tools we use, how we track them, and how we act on them.

Speaker bio

Siddhartha is an Architect at Flipkart, working on the company’s central Data Platform. He is responsible, among other things, for developing and operating a multi-tenant stream processing platform.

Aniruddha is a Software Engineer in the Data Platform team at Flipkart. He has worked on building and operating Storm topologies for various stream processing requirements.

