By 2014, infrastructure components such as Hadoop, the Berkeley Data Stack and other commercial tools had stabilized and are thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. The focus for this year’s conference is analytics – the infrastructure that powers analytics and how analytics is done.
Talks will cover various forms of analytics including real-time and opportunity analytics, and technologies and models used for analyzing data.
Proposals will be reviewed using 5 criteria:
Domain diversity – proposals will be selected from different domains – medical, insurance, banking, online transactions, retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights does the proposal share with the audience that they did not already know.
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience the philosophy underlying your use of an application, not how the application was used.
Presentation skills – proposer’s presentation skills will be reviewed carefully and assistance provided to ensure that the material is communicated in the most precise and effective manner to the audience.
For queries about proposals / submissions, write to email@example.com
Data Collection and Transport – e.g., Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.
Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSD or memory), databases (PostgreSQL, MySQL, Infobright), or caching/storage (Memcache, Cassandra, Redis, etc.).
Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.
Big data and security
Big data and internet of things
Data Usage and BI (Business Intelligence) in different sectors.
Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be on the technologies per se, but on how these have been used and implemented in various sectors, enterprises and contexts.
Interactive analytics on event streams with complexly nested schemas
In this talk, I will share the lessons we learnt while building an application for interactively analyzing data from event streams – such as the Twitter firehose, click streams, and application logs – with complexly nested schemas.
I will discuss the challenges faced while implementing the full analytics stack, which uses Kafka for data collection, Elasticsearch for real-time search, and Apache Drill for advanced interactive queries.
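The flow through that stack – collect events without blocking the source, index them, then answer ad-hoc queries – can be sketched with in-memory stand-ins (a deque for the Kafka topic, a dict for the Elasticsearch index). The function names here are illustrative, not from the talk:

```python
from collections import deque

# Stand-in for a Kafka topic: an in-memory queue of raw events.
event_queue = deque()

# Stand-in for an Elasticsearch index: document id -> document.
search_index = {}

def collect_event(event):
    """Collector stage (Kafka in the real stack): enqueue without blocking the source."""
    event_queue.append(event)

def index_events():
    """Indexer stage (Elasticsearch in the real stack): drain the queue into the index."""
    while event_queue:
        doc = event_queue.popleft()
        search_index[doc["id"]] = doc

def search(field, value):
    """Query stage (Drill/Elasticsearch in the real stack): ad-hoc filter over documents."""
    return [d for d in search_index.values() if d.get(field) == value]

collect_event({"id": 1, "user": "alice", "text": "new phone launch"})
collect_event({"id": 2, "user": "bob", "text": "great camera"})
index_events()
print(search("user", "alice"))  # the single document collected for alice
```

In the real system each stage is a separate, horizontally scalable service; the sketch only shows the division of responsibilities between collection, indexing, and querying.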
Of late, data-driven organisations need a system that can answer ad-hoc, interactive queries over a mountain of data. An example is a data analyst at an e-commerce firm looking for insights on a product by running ad-hoc queries on Twitter stream data.
There are several challenges in building a solution for this use case. The Twitter firehose alone emits thousands of events per second, so we need a system that can scale horizontally on demand, is highly available, and provides high throughput. Additionally, the collector must be secure and asynchronous so that the sources are not blocked. The variety of the data being processed is another challenge: data can be structured, nested, schema-less or unstructured. The Twitter stream generates 200+ fields nested at multiple levels, and the schema can change over time. Finally, we need a distributed query processing engine that can crunch this data at interactive speeds.
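To make the nested-schema challenge concrete, here is a minimal sketch (with hypothetical, heavily trimmed field names, not the actual Twitter payload) of flattening a nested event into dotted column paths – the kind of normalisation a query engine must perform before any field can be addressed in an ad-hoc query:

```python
def flatten(event, prefix=""):
    """Flatten a nested dict into dotted paths, so queries can address any field."""
    flat = {}
    for key, value in event.items():
        path = f"{prefix}{key}"
        if isinstance(value, dict):
            flat.update(flatten(value, prefix=path + "."))
        else:
            flat[path] = value
    return flat

# Hypothetical tweet-like event with multiple levels of nesting.
tweet = {
    "id": 99,
    "user": {"name": "analyst", "location": {"city": "Bangalore"}},
    "entities": {"hashtags": ["bigdata"]},
}

print(flatten(tweet))
# {'id': 99, 'user.name': 'analyst', 'user.location.city': 'Bangalore',
#  'entities.hashtags': ['bigdata']}
```

Because the schema can change with time, such flattening has to be done per event rather than against a fixed schema – new fields simply appear as new paths.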
I will cover the various components one has to consider in building such a tool, our evaluation of the available technologies, and the benchmark results we achieved.
General knowledge of Big Data tools and technologies.
Abishek is co-founder of a startup developing an interactive big data analytics application. He has more than seven years of experience in Databases and Big Data systems and has worked for Oracle and Goldman Sachs.