The Fifth Elephant 2014

A conference on big data and analytics

In 2014, infrastructure components such as Hadoop, the Berkeley Data Stack and other commercial tools have stabilized and are thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. The focus for this year’s conference is on analytics: the infrastructure that powers analytics and how analytics is done.

Talks will cover various forms of analytics including real-time and opportunity analytics, and technologies and models used for analyzing data.

Proposals will be reviewed using 5 criteria:
Domain diversity – proposals will be selected from different domains – medical, insurance, banking, online transactions, retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights does the proposal share with the audience that they did not already know.
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience the philosophy underlying your use of an application, not merely how the application was used.
Presentation skills – the proposer’s presentation skills will be reviewed carefully, and assistance will be provided to ensure that the material is communicated to the audience in the most precise and effective manner.



For queries about proposals / submissions, write to


  1. Data Collection and Transport – e.g., Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.

  2. Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSD or memory), databases (PostgreSQL, MySQL, Infobright), or caching/storage (Memcached, Cassandra, Redis, etc.).

  3. Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.

  4. Real-time analytics

  5. Opportunity analytics

  6. Big data and security

  7. Big data and internet of things

  8. Data Usage and BI (Business Intelligence) in different sectors.

Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be on the technologies per se, but on how these have been used and implemented in various sectors, enterprises and contexts.

Hosted by

The Fifth Elephant - known as one of the best data science and Machine Learning conferences in Asia - has transitioned into a year-round forum for conversations about data and ML engineering; data science in production; and data security and privacy practices.

Arvind Gopinath



Submitted Mar 2, 2014

To understand the challenges of real-time big data processing, the maturity of real-time/near-real-time analytics technology, and modern big data architectures built with Hadoop.


We all know that batch processing systems like Hadoop have evolved and matured over the past few years into an excellent offline data processing platform for Big Data. Hadoop is a high-throughput system that can crunch a huge volume of data using a distributed parallel processing paradigm called MapReduce.
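The map/shuffle/reduce flow that the paradigm is named for can be sketched in a few lines of plain Python. This is a conceptual illustration of the model (a word count, the canonical example), not Hadoop's actual Java API; the function names are mine:

```python
from collections import defaultdict

def map_phase(records):
    """Map: emit a (word, 1) pair for every word in every record."""
    for record in records:
        for word in record.split():
            yield word, 1

def shuffle(pairs):
    """Shuffle: group values by key, as the framework does between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: aggregate the grouped values for each key."""
    return {key: sum(values) for key, values in groups.items()}

records = ["big data", "big analytics"]
counts = reduce_phase(shuffle(map_phase(records)))
# counts == {"big": 2, "data": 1, "analytics": 1}
```

Hadoop's throughput comes from running many such map and reduce tasks in parallel across a cluster, with the shuffle moving data between them, which is also why a full batch pass takes minutes to hours rather than milliseconds.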

But there are many use cases across various domains that require a real-time or near-real-time response on Big Data for faster decision making. Hadoop is not suitable for those use cases. Credit card fraud analytics, network fault prediction from sensor data, security threat prediction, and so forth need to process a real-time data stream on the fly to predict whether a given transaction is fraudulent, whether a system is developing a fault, or whether there is a security threat in the network.

If decisions such as these are not taken in real time, the opportunity to mitigate the damage is lost.
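To make the contrast with batch processing concrete, a streaming check must score each event as it arrives, using only state accumulated so far. Below is a minimal sketch of that pattern with an invented fraud rule (flag a transaction that exceeds a multiple of the rolling average of recent amounts); real fraud models are far more sophisticated:

```python
from collections import deque

def make_fraud_checker(window=5, factor=3.0):
    """Return a per-event checker that flags a transaction amount
    exceeding `factor` times the rolling average of the last
    `window` amounts. The rule itself is hypothetical."""
    recent = deque(maxlen=window)

    def check(amount):
        # Score the event immediately, before updating state.
        avg = sum(recent) / len(recent) if recent else amount
        suspicious = amount > factor * avg
        recent.append(amount)
        return suspicious

    return check

check = make_fraud_checker()
stream = [20, 25, 22, 400, 21]
flags = [check(amount) for amount in stream]
# flags == [False, False, False, True, False]
```

The point is the shape of the computation: decisions are emitted per event with bounded state, rather than after a full pass over stored data.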

In this talk I will showcase how a Call Data Record (CDR) analytics use case can be solved using popular open source technologies to process real-time data in a fault-tolerant and distributed manner. The major challenges are that the system needs to be always available to process real-time feeds from various telecom switches; it needs to be scalable enough to process hundreds of thousands of call records per second; and it needs to support a scale-out distributed architecture to process the stream in parallel.

To solve this complex real-time processing challenge, we evaluated two popular open source technologies: Apache Kafka, a distributed messaging system, and Apache Storm, a distributed stream processing engine.


Participants should have basic knowledge of Big Data concepts, Hadoop, and at least one programming language.

Speaker bio

Big Data Solution Architect @
Overall 12 years of experience building distributed and enterprise search/web-based applications, working primarily with Big Data technologies and tools.


Domain - Predictive Analytics, Recommendation Engines, Performance Re-engineering
Big Data Technology - Apache Hadoop, Kafka, Storm, Hive, HBase, Oozie, Flume, Sqoop, ZooKeeper, Cloudera Enterprise (CDH3, CDH4), MongoDB, Neo4j, MarkLogic, Solr
Programming Languages - Core Java, J2EE, Spring, ColdFusion MX, Jasper Reports, Quartz, Oracle, PL-SQL, JavaScript, HTML, AJAX


