ANALYTICS ON BIG FAST DATA USING REAL TIME STREAM DATA PROCESSING ARCHITECTURE
Submitted by Arvind Gopinath (@arvindo) on Sunday, 2 March 2014
Section: Full talk Technical level: Intermediate
To understand the challenges of real-time processing on Big Data, the maturity of real-time/near-real-time analytics technology, and modern Big Data architectures built around Hadoop
We all know that batch processing systems like Hadoop have evolved and matured over the past few years into an excellent offline data processing platform for Big Data. Hadoop is a high-throughput system that can crunch huge volumes of data using a distributed parallel processing paradigm called MapReduce.
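To make the paradigm concrete, here is a minimal single-process sketch of the map, shuffle, and reduce phases (a hypothetical word-count example; Hadoop actually runs these phases as parallel tasks across a cluster, this only mirrors the data flow):

```python
from collections import defaultdict

def map_phase(record):
    # Emit (key, value) pairs: here, (word, 1) for each word in a record.
    for word in record.split():
        yield (word.lower(), 1)

def shuffle(pairs):
    # Group values by key, as Hadoop does between the map and reduce phases.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Aggregate all values seen for a key: here, a simple count.
    return key, sum(values)

records = ["big data needs big tools", "fast data needs fast tools"]
pairs = [p for r in records for p in map_phase(r)]
counts = dict(reduce_phase(k, v) for k, v in shuffle(pairs).items())
print(counts["big"], counts["data"], counts["fast"])  # 2 2 2
```

The same three-phase shape scales out because map and reduce tasks for different keys are independent and can run on different machines.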
But there are many use cases across various domains that require a real-time / near-real-time response on Big Data for faster decision making, and Hadoop is not suitable for them. Credit card fraud analytics, network fault prediction from sensor data, security threat prediction, and so forth need to process a real-time data stream on the fly to predict whether a given transaction is fraudulent, whether the system is developing a fault, or whether there is a security threat in the network.
If decisions such as these are not taken in real time, the opportunity to mitigate the damage is lost.
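As a taste of what "on the fly" means, the following is a hedged sketch of one common stream-analytics primitive: a per-key sliding window. The rule (flag a card that makes more than three transactions in sixty seconds), the field names, and the thresholds are illustrative assumptions, not a production fraud model:

```python
from collections import deque

WINDOW_SECS = 60   # sliding window length (assumed)
MAX_TXNS = 3       # transactions allowed per window (assumed)

class FraudWindow:
    def __init__(self):
        self.events = {}  # card_id -> deque of transaction timestamps

    def is_suspicious(self, card_id, ts):
        window = self.events.setdefault(card_id, deque())
        # Evict timestamps that have fallen out of the sliding window.
        while window and ts - window[0] > WINDOW_SECS:
            window.popleft()
        window.append(ts)
        return len(window) > MAX_TXNS

detector = FraudWindow()
# Five transactions on the same card within five seconds.
flags = [detector.is_suspicious("card-42", 1000.0 + i) for i in range(5)]
print(flags)  # [False, False, False, True, True]
```

The key property is that each event is handled as it arrives, so the decision is available immediately rather than after a batch job completes hours later.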
In this talk I will showcase how a Call Data Record (CDR) analytics use case can be solved using popular open source technologies to process real-time data in a fault-tolerant and distributed manner. There are three major challenges here: the system needs to be always available to process real-time feeds from various telecom switches; it needs to be scalable enough to process hundreds of thousands of call records per second; and it needs to support a scale-out distributed architecture to process the stream in parallel.
To solve this complex real-time processing challenge, we evaluated two popular open source technologies: Apache Kafka (http://kafka.apache.org/design.html), a distributed messaging system, and Storm (http://storm-project.net/), a distributed stream processing engine.
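The division of labor between the two can be sketched in plain Python: a thread-safe queue stands in for a Kafka topic, and worker threads stand in for parallel Storm bolts. Real deployments use the Kafka and Storm APIs over a cluster; this toy pipeline (with made-up CDR fields) only illustrates the producer/consumer scale-out shape:

```python
import queue
import threading

topic = queue.Queue()    # stand-in for a Kafka topic partition
results = queue.Queue()  # stand-in for a downstream sink
SENTINEL = None          # end-of-stream marker for this sketch

def bolt():
    # Each "bolt" pulls CDR-like records and computes the call duration.
    while True:
        record = topic.get()
        if record is SENTINEL:
            topic.put(SENTINEL)  # propagate shutdown to the other workers
            break
        caller, start, end = record
        results.put((caller, end - start))

# Four parallel workers, mimicking a bolt with parallelism hint 4.
workers = [threading.Thread(target=bolt) for _ in range(4)]
for w in workers:
    w.start()

# A telecom switch "produces" call records: (caller, start_sec, end_sec).
for rec in [("A", 0, 30), ("B", 5, 65), ("C", 10, 20)]:
    topic.put(rec)
topic.put(SENTINEL)
for w in workers:
    w.join()

durations = dict(results.get() for _ in range(3))
print(sorted(durations.items()))  # [('A', 30), ('B', 60), ('C', 10)]
```

Kafka's job in the real architecture is the durable, replayable "topic" in the middle; Storm's job is running many such bolts across machines with fault tolerance, which is exactly what a single-process queue cannot give you.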
Participants should have basic knowledge of Big Data concepts, Hadoop, and at least one programming language.
Big Data Solution Architect. Overall 12 years of experience building distributed and enterprise search/web-based applications, primarily working with Big Data technologies and tools.
Domain - Predictive Analytics, Recommendation Engines, Performance Re-engineering
Big Data Technology - Apache Hadoop, Kafka, Storm, Hive, HBase, Oozie, Flume, Sqoop, ZooKeeper, Cloudera Enterprise (CDH3, CDH4), MongoDB, Neo4j, MarkLogic, Solr