The Fifth Elephant 2015

A conference on data, machine learning, and distributed and parallel computing

Data Infrastructure for Real Time Analysis of User Click Stream Data

Submitted by Aditya Prasad Narisetty (@adityaprasadn) on Monday, 15 June 2015


Technical level

Beginner

Section

Full Talk

Status

Submitted


Total votes: +9

Objective

India is producing service-oriented startups by the day. These startups need to build customized views for users based on each user’s previous sessions and interactions with the product. Most of them can’t afford to design, build, and maintain a custom data analytics pipeline, let alone do real-time analysis to refine user interactions with the product. A typical startup has a few developers skilled in Ruby on Rails, JavaScript, Python, or Java, and no experience setting up HDFS, Hadoop ETL, or MapReduce jobs. This talk presents a way to build tailored data products for users, backed by real-time data, with minimal resources: we’ll show how to build the pipeline with just three production servers and expertise in a single programming language.

Description

Google Analytics and MixPanel are great tools to push user click-stream data to, fetch data from periodically, and run analysis on when building data-driven products. For real-time click-stream analysis and bidirectional communication with the user, however, scalable communication channels need to be set up; we use Engine.IO on Node.js for this channel. Server-side logic then processes the streaming data, de-normalises it, and stores it in a schemaless fashion, so that subsequent product changes don’t break the data pipeline. We use RabbitMQ as a durable, scalable AMQP queuing system. Message processing is done by supervised Python processes, which update profiles in a custom-sharded Redis setup that persists to disk and to MongoDB.
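As a rough illustration of the de-normalisation step described above, the sketch below flattens a nested click event into schemaless key-value pairs that could be written to a profile store. The event shape and field names are hypothetical, not Housing’s actual format:

```python
import json
import time

def denormalise_event(raw_event):
    """Flatten a nested click-stream event into flat key-value pairs,
    so later product changes don't require schema migrations downstream."""
    event = json.loads(raw_event)
    flat = {}

    def walk(prefix, value):
        # Recurse into nested dicts, joining keys with dots.
        if isinstance(value, dict):
            for key, child in value.items():
                walk(f"{prefix}.{key}" if prefix else key, child)
        else:
            flat[prefix] = value

    walk("", event)
    flat.setdefault("processed_at", int(time.time()))
    return flat

# A hypothetical click event arriving over the real-time channel.
raw = json.dumps({
    "user_id": "u42",
    "event": "click",
    "payload": {"page": "/listings/123", "position": {"x": 10, "y": 250}},
})
print(denormalise_event(raw))
```

Because the output is a flat map of strings to values, new fields added by the product simply appear as new keys, with no pipeline changes.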

As demand and supply on the site grow, this analytics setup must be scaled until a fully customized data pipeline can be built to store, process, retrieve, and query the data. At Housing’s Data Science Labs, we’ve implemented a pipeline that scaled from 100 events per second to 20,000 events per second. This real-time processed data must be queryable, with appropriate priority and ease, by production APIs, business analysts, product managers, and developers.
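One common way to implement the custom sharding of Redis mentioned above is a consistent-hash ring, which keeps a user’s profile on the same shard as the cluster scales. The sketch below is a minimal pure-Python version; the shard names and replica count are illustrative, not Housing’s actual configuration:

```python
import bisect
import hashlib

class ShardRing:
    """Minimal consistent-hash ring mapping a user key to a Redis shard."""

    def __init__(self, shards, replicas=64):
        # Each shard gets several points on the ring for smoother balance.
        self._ring = []  # sorted list of (hash, shard_name)
        for shard in shards:
            for i in range(replicas):
                bisect.insort(self._ring, (self._hash(f"{shard}:{i}"), shard))

    @staticmethod
    def _hash(key):
        return int(hashlib.md5(key.encode()).hexdigest(), 16)

    def shard_for(self, key):
        # Walk clockwise to the first ring point at or after the key's hash.
        idx = bisect.bisect(self._ring, (self._hash(key), "")) % len(self._ring)
        return self._ring[idx][1]

ring = ShardRing(["redis-a", "redis-b", "redis-c"])
print(ring.shard_for("user:42"))
```

The design choice here is that adding a shard remaps only the keys falling on its new ring segments, rather than reshuffling every profile, which matters when throughput grows from hundreds to tens of thousands of events per second.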

Speaker bio

Aditya Prasad Narisetty is a Data Engineer at Housing.com, responsible for building and maintaining the data pipeline and architecture for analytics. He previously worked at Bwin.Party as a Software Engineer in risk management and wallet services. He holds a B.Tech from the Indian Institute of Technology Bombay.

Links

Preview video

https://www.youtube.com/watch?v=jQyGhOs0k_E
