The Fifth Elephant 2015

A conference on data, machine learning, and distributed and parallel computing

Machine Learning, Distributed and Parallel Computing, and High-Performance Computing are the themes for this year’s edition of The Fifth Elephant.

The deadline for submitting a proposal is 15th June 2015.

We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.

Track 1: Discovering Insights and Driving Decisions

This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from data. This could encompass applications of the following ML paradigms:

  • Statistical Visualizations
  • Unsupervised Learning
  • Supervised Learning
  • Semi-Supervised Learning
  • Active Learning
  • Reinforcement Learning
  • Monte Carlo techniques and probabilistic programming
  • Deep Learning

These techniques apply across various data modalities, including multivariate data, text, speech, time series, images, video, and transactions.

Track 2: Speed at Scale

This track is about tools and processes for collecting, indexing, and processing vast amounts of data. The theme includes:

  • Distributed and Parallel Computing
  • Real Time Analytics and Stream Processing
  • MapReduce and Graph Computing frameworks
  • Kafka, Spark, Hadoop, MPI
  • Stories of parallelizing sequential programs
  • Cost/Security/Disaster Management of Data

Commitment to Open Source

HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.

Workshops

If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.

Hosted by

The Fifth Elephant - known as one of the best data science and Machine Learning conferences in Asia - has transitioned into a year-round forum for conversations about data and ML engineering; data science in production; and data security and privacy practices.

Aditya Prasad Narisetty

@adityaprasadn

Data Infrastructure for Real Time Analysis of User Click Stream Data

Submitted Jun 15, 2015

India is churning out a large number of service-oriented startups by the day. They need to build customized views for users based on those users’ previous sessions and interactions with the product. Most startups can’t afford to design, build, and maintain a custom data analytics pipeline, let alone do real-time data analysis and refine user interactions with the product. Most startups have a few developers skilled in RoR, JS, Python, or Java, and no experience in setting up HDFS, Hadoop ETL, or MapReduce jobs. This talk presents a way to build tailored data products for users backed by real-time data, with minimal resources. We’ll show how to build a pipeline with three production servers and expertise in a single programming language.

Outline

Google Analytics and Mixpanel are great tools for pushing user click-stream data to, fetching data from periodically, and running analysis on to build data-driven products. For real-time click-stream analysis and bidirectional communication with the user, scalable communication channels need to be set up. We’ve used Engine.IO on Node.js for the communication channel. Complex server-side logic then needs to be written to process the streaming data, denormalize it, and store it in a schemaless fashion, so that subsequent dynamic product changes don’t affect your data pipeline. We use RabbitMQ as the queuing system for durable, scalable messaging over AMQP. Message processing is done by supervised Python processes, which update profiles on a custom sharded Redis setup that persists to disk and to MongoDB.
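To make the denormalization step concrete, here is a minimal sketch of the per-message processing logic, ignoring the broker and storage layers. The function names (`denormalize`, `update_profile`) and the counter-key convention are illustrative assumptions, not the actual Housing.com code: the point is that flattening nested events into key/value pairs lets a schemaless store absorb new fields without migrations.

```python
def denormalize(event):
    """Flatten a nested click-stream event into dotted key/value
    pairs, suitable for a schemaless store such as a Redis hash."""
    flat = {}

    def walk(prefix, value):
        if isinstance(value, dict):
            for k, v in value.items():
                walk(prefix + k + ".", v)
        else:
            flat[prefix.rstrip(".")] = value

    walk("", event)
    return flat

def update_profile(profile, event):
    """Merge a denormalized event into a user-profile dict.
    Keys under 'count.' are incremented; everything else is
    last-write-wins, so new product fields need no schema change."""
    for key, value in denormalize(event).items():
        if key.startswith("count."):
            profile[key] = profile.get(key, 0) + value
        else:
            profile[key] = value
    return profile

profile = {}
update_profile(profile, {"user": {"id": 42, "city": "Mumbai"},
                         "count": {"page_views": 1}})
update_profile(profile, {"count": {"page_views": 1},
                         "last_seen": 1434326400})
```

After the two events, the profile holds `user.id`, `user.city`, an aggregated `count.page_views`, and `last_seen`; the same merge would run inside each supervised worker before writing to Redis.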

As demand and supply on the site grow, this analytics setup must be scaled until a customized data pipeline can be built to store, process, retrieve, and query your data. At Housing’s Data Science Labs, we’ve implemented a pipeline that scaled from 100 events per second to 20k events per second. This real-time processed data must be queryable, with priority and ease, by production APIs, business analysts, product managers, and developers.
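One reason a custom sharded Redis stays queryable by both production APIs and analysts is that shard routing can be deterministic. As a hedged illustration (the actual sharding scheme is not described in the proposal), hashing the user id maps every write and read for a user to the same instance:

```python
import hashlib

def shard_for(user_id, n_shards):
    """Map a user id to a Redis shard deterministically, so every
    event and every query for that user lands on the same instance."""
    digest = hashlib.md5(str(user_id).encode("utf-8")).hexdigest()
    return int(digest, 16) % n_shards

# Writers and readers agree on the shard without coordination:
shard = shard_for(42, 4)
```

A fixed hash like this keeps routing stateless, at the cost of resharding work when `n_shards` changes; consistent hashing is the usual refinement.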

Speaker bio

Aditya Prasad Narisetty is a Data Engineer at Housing.com, responsible for building and maintaining the data pipeline and architecture for analytics. He previously worked at Bwin.Party as a Software Engineer in Risk Management and Wallet Services. He holds a B.Tech from the Indian Institute of Technology Bombay.

