Jul 2014 schedule (IST): Wed 23, 09:30 AM–05:00 PM; Thu 24, 09:45 AM–05:00 PM; Fri 25, 08:30 AM–07:15 PM; Sat 26, 08:30 AM–07:15 PM
Arvind Gopinath
To understand the challenges of real-time Big Data processing, the maturity of technologies for real-time/near-real-time analytics, and modern Big Data architectures built with Hadoop.
We all know that batch processing systems like Hadoop have evolved and matured over the past few years into an excellent offline data processing platform for Big Data. Hadoop is a high-throughput system that can crunch huge volumes of data using a distributed parallel processing paradigm called MapReduce.
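To make the paradigm concrete, here is a minimal sketch of the classic word-count MapReduce job (illustrative only, not part of the CDR use case; it assumes Hadoop 2.x's org.apache.hadoop.mapreduce API, with input and output paths passed on the command line):

```java
import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map phase: runs in parallel across input splits, emitting (word, 1).
  public static class TokenizerMapper
      extends Mapper<Object, Text, Text, IntWritable> {
    private static final IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    @Override
    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce phase: all counts for one word arrive together and are summed.
  public static class IntSumReducer
      extends Reducer<Text, IntWritable, Text, IntWritable> {
    @Override
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable val : values) {
        sum += val.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setCombinerClass(IntSumReducer.class); // local pre-aggregation
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}
```

The map tasks run in parallel across HDFS input splits; the framework then shuffles all counts for the same word to a reducer. That shuffle is what makes the model scale horizontally, but it is also what makes Hadoop batch-oriented rather than real-time.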
But there are many use cases across various domains which require real-time or near real-time responses on Big Data for faster decision making, and Hadoop is not suitable for those. Credit card fraud analytics, network fault prediction from sensor data, security threat prediction, and so forth need to process real-time data streams on the fly to predict whether a given transaction is fraudulent, whether the system is developing a fault, or whether there is a security threat in the network. If decisions such as these are not taken in real time, the opportunity to mitigate the damage is lost.
In this talk I will showcase how a Call Data Record (CDR) analytics use case can be solved using popular open source technologies to process real-time data in a fault-tolerant and distributed manner. The major challenges here are threefold: the system needs to be always available to process real-time feeds from various telecom switches; it needs to be scalable enough to process hundreds of thousands of call records per second; and it needs to support a scale-out distributed architecture to process the stream in parallel.
To solve this complex real-time processing challenge, we have evaluated two popular open source technologies: Apache Kafka (http://kafka.apache.org/design.html), a distributed messaging system, and Storm (http://storm-project.net/), a distributed stream processing engine (see also http://hortonworks.com/hadoop/storm/). The sketches below show roughly how the two fit together.
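On the ingestion side, each telecom switch adapter can publish raw CDR lines to a Kafka topic. A minimal sketch follows, assuming the Kafka 0.8 producer API that was current at the time; the broker list, the "cdr-events" topic name, and the CDR line format are all hypothetical:

```java
import java.util.Properties;

import kafka.javaapi.producer.Producer;
import kafka.producer.KeyedMessage;
import kafka.producer.ProducerConfig;

public class CdrFeedProducer {
  public static void main(String[] args) {
    Properties props = new Properties();
    props.put("metadata.broker.list", "broker1:9092,broker2:9092");
    props.put("serializer.class", "kafka.serializer.StringEncoder");
    props.put("request.required.acks", "1"); // leader ack balances durability and latency

    Producer<String, String> producer =
        new Producer<String, String>(new ProducerConfig(props));

    // In reality each line would come from a switch feed; this is a dummy record:
    // timestamp, calling number, called number, duration (s), switch id.
    String cdr = "2014-07-25 10:32:11,+91981234XXXX,+91997654XXXX,184,MSC-07";
    // Keying by calling number keeps a subscriber's records in one partition,
    // preserving per-subscriber ordering for downstream processing.
    producer.send(new KeyedMessage<String, String>("cdr-events", cdr.split(",")[1], cdr));

    producer.close();
  }
}
```

On the processing side, a Storm topology can consume the same topic through the storm-kafka spout and fan the stream out across many bolt instances. This sketch assumes Storm 0.9.x (backtype.storm packages); the ZooKeeper address and parallelism hints are illustrative, and the trivial "long call" rule is a stand-in for a real fraud model:

```java
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.spout.SchemeAsMultiScheme;
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Tuple;
import storm.kafka.KafkaSpout;
import storm.kafka.SpoutConfig;
import storm.kafka.StringScheme;
import storm.kafka.ZkHosts;

public class CdrTopology {

  // Placeholder bolt: parse a CDR line and flag suspiciously long calls.
  public static class CdrAnalyticsBolt extends BaseBasicBolt {
    @Override
    public void execute(Tuple tuple, BasicOutputCollector collector) {
      String[] fields = tuple.getString(0).split(",");
      int durationSeconds = Integer.parseInt(fields[3]);
      if (durationSeconds > 3600) { // trivial stand-in for a real fraud model
        System.out.println("Suspicious call: " + tuple.getString(0));
      }
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
      // Terminal bolt: no downstream stream declared.
    }
  }

  public static void main(String[] args) throws Exception {
    // Kafka spout reads "cdr-events", tracking its consumed offsets in ZooKeeper.
    SpoutConfig spoutConfig =
        new SpoutConfig(new ZkHosts("zk1:2181"), "cdr-events", "/cdr", "cdr-reader");
    spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("cdr-spout", new KafkaSpout(spoutConfig), 4);
    // shuffleGrouping spreads tuples evenly across 8 bolt instances (scale-out).
    builder.setBolt("cdr-analytics", new CdrAnalyticsBolt(), 8)
           .shuffleGrouping("cdr-spout");

    Config conf = new Config();
    conf.setNumWorkers(2);
    new LocalCluster().submitTopology("cdr-topology", conf, builder.createTopology());
  }
}
```

Raising the spout and bolt parallelism, and submitting to a real cluster via StormSubmitter instead of LocalCluster, is how such a topology scales out toward hundreds of thousands of records per second.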
Participants should have basic knowledge of Big Data concepts, Hadoop, and at least one programming language.
Big Data Solution Architect with 12 years of overall experience in building distributed and enterprise search/web-based applications, working primarily with Big Data technologies and tools.
Specialties:
Domain - Predictive Analytics, Recommendation Engines, Performance Re-engineering
Big Data Technology - Apache Hadoop, Kafka, Storm, Hive, HBase, Oozie, Flume, Sqoop, ZooKeeper, Cloudera Enterprise (CDH3, CDH4), MongoDB, Neo4j, MarkLogic, Solr
Programming Languages - Core Java, J2EE, Spring, ColdFusion MX, Jasper Reports, Quartz, Oracle, PL/SQL, JavaScript, HTML, AJAX
in.linkedin.com/in/arvindgopinath/