The Fifth Elephant 2014

A conference on big data and analytics

ANALYTICS ON BIG FAST DATA USING REAL TIME STREAM DATA PROCESSING ARCHITECTURE

Submitted by Arvind Gopinath (@arvindo) on Sunday, 2 March 2014

Technical level

Intermediate

Section

Full talk

Status

Submitted

Total votes:  +24

Objective

To understand the challenges of real-time processing of big data, the maturity of current technologies for real-time/near-real-time analytics, and modern big data architectures built around Hadoop

Description

We all know that batch processing systems like Hadoop have evolved and matured over the past few years into an excellent offline data processing platform for Big Data. Hadoop is a high-throughput system that can crunch huge volumes of data using a distributed parallel processing paradigm called MapReduce.

But there are many use cases across various domains that require real-time or near-real-time responses on Big Data for faster decision making, and Hadoop is not suitable for them. Credit card fraud analytics, network fault prediction from sensor data, security threat prediction, and so forth need to process a real-time data stream on the fly to predict whether a given transaction is fraudulent, whether the system is developing a fault, or whether there is a security threat in the network.

If decisions such as these are not taken in real time, the opportunity to mitigate the damage is lost.

In this talk I will showcase how a Call Data Record (CDR) analytics use case can be solved using popular open source technologies to process real-time data in a fault-tolerant and distributed manner. The major challenges are threefold: the system must always be available to process real-time feeds from various telecom switches; it must scale to hundreds of thousands of call records per second; and it must support a scale-out distributed architecture to process the stream in parallel.

To solve this complex real-time processing challenge, we evaluated two popular open source technologies: Apache Kafka (http://kafka.apache.org/design.html), a distributed messaging system, and Storm (http://storm-project.net/), a distributed stream processing engine.

http://hortonworks.com/hadoop/storm/
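To make the division of labour concrete: Kafka buffers the incoming feed, while Storm workers consume and aggregate it in parallel. Below is a minimal, self-contained Python sketch of that pattern, using a stdlib queue as a stand-in for a Kafka topic and worker threads as stand-ins for Storm bolts. The CDR records and field names are invented for illustration; a real deployment would use the Kafka and Storm APIs and run across many machines.

```python
import queue
import threading
from collections import defaultdict

# Hypothetical CDR records: (caller_id, call_duration_seconds). In the real
# pipeline these would arrive on a Kafka topic from telecom switches.
cdr_stream = [("alice", 60), ("bob", 30), ("alice", 120), ("carol", 45)]

topic = queue.Queue()        # stands in for the Kafka topic (buffered feed)
totals = defaultdict(int)    # per-caller talk time: the bolt's running state
lock = threading.Lock()

def bolt():
    # Stands in for a Storm bolt: consume records until a sentinel arrives,
    # updating the shared aggregate under a lock.
    while True:
        record = topic.get()
        if record is None:
            break
        caller, duration = record
        with lock:
            totals[caller] += duration

workers = [threading.Thread(target=bolt) for _ in range(2)]
for w in workers:
    w.start()

for record in cdr_stream:    # the "spout": replay the feed into the topic
    topic.put(record)
for _ in workers:            # one sentinel per worker for a clean shutdown
    topic.put(None)
for w in workers:
    w.join()

print(sorted(totals.items()))
# [('alice', 180), ('bob', 30), ('carol', 45)]
```

The key property this illustrates is decoupling: the producer never waits for a specific consumer, and adding workers (bolts) scales the consuming side out, which is what Kafka partitions and Storm parallelism hints provide at cluster scale.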

Requirements

Participants should have basic knowledge of Big Data concepts, Hadoop, and at least one programming language.

Speaker bio

Big Data Solution Architect. Overall 12 years of experience building distributed and enterprise search/web-based applications, primarily working with Big Data technologies and tools.

Specialties:

Domain - Predictive Analytics, Recommendation Engines, Performance Re-engineering
Big Data Technology - Apache Hadoop, Kafka, Storm, Hive, HBase, OOZIE, Flume, SQOOP, Zookeeper, Cloudera Enterprise (CDH3, CDH4), MongoDB, Neo4j, MarkLogic, Solr
Programming Languages - Core Java, J2EE, Spring, ColdFusion MX, Jasper Reports, Quartz, Oracle, PL-SQL, JavaScript, HTML, AJAX

in.linkedin.com/in/arvindgopinath/

Comments

  • 1
    Vidyasagar Venkatachalam (@catchsagar) 4 years ago

    Good topic. All the best!
    -Sagar

  • 1
    Vinayak Hegde (@vin) 4 years ago

    Kafka and Storm solve two different problems. Can you go into more detail on how you used them?

  • 1
    Dibyendu Bhattacharya (@dibbhatt) 2 years ago

    Hey, this is copied from my 2nd prize winner paper in EMC Proven Professional Knowledge Sharing Competition 2013. This is just verbatim copy.
