By 2014, infrastructure components such as Hadoop, the Berkeley Data Stack and other commercial tools have stabilized and are thriving. The challenges have moved higher up the stack, from data collection and storage to data analysis and its presentation to users. The focus for this year’s conference is on analytics – the infrastructure that powers analytics and how analytics is done.
Talks will cover various forms of analytics including real-time and opportunity analytics, and technologies and models used for analyzing data.
Proposals will be reviewed using the following criteria:

Domain diversity – proposals will be selected from different domains: medical, insurance, banking, online transactions and retail. If there is more than one proposal from a domain, the one that best meets the editorial criteria will be chosen.
Novelty – what has been done beyond the obvious.
Insights – what insights does the proposal share with the audience that they did not already know?
Practical versus theoretical – we are looking for applied knowledge. If the proposal covers material that can be looked up online, it will not be considered.
Conceptual versus tools-centric – tell us why, not how. Tell the audience the philosophy underlying your use of an application, not merely how the application was used.
Presentation skills – the proposer’s presentation skills will be reviewed carefully, and assistance will be provided to ensure that the material is communicated to the audience as precisely and effectively as possible.
For queries about proposals / submissions, write to email@example.com
Data Collection and Transport – e.g. Opendatatoolkit, Scribe, Kafka, RabbitMQ, etc.
Data Storage, Caching and Management – distributed storage (such as Gluster, HDFS), hardware-specific storage (such as SSD or memory), databases (PostgreSQL, MySQL, Infobright) or caching/storage (Memcached, Cassandra, Redis, etc.).
Data Processing, Querying and Analysis – Oozie, Azkaban, scikit-learn, Mahout, Impala, Hive, Tez, etc.
Big data and security
Big data and internet of things
Data Usage and BI (Business Intelligence) in different sectors.
Please note: the technology stacks mentioned above indicate the latest technologies that will be of interest to the community. Talks should not be about these technologies per se, but about how they have been used and implemented in various sectors, enterprises and contexts.
Hive and Presto for Big Data Analytics in the Cloud
The objective of this talk is to conceptualize the use of Hive and Presto for big data analytics. We will contrast their architecture and use cases, and describe how to take advantage of both these technologies in the cloud.
A big data project typically entails processing terabytes to petabytes of data to produce actionable reports and generate business insights. With the advent of public clouds, it is extremely easy to provision machines for analytic workloads on demand. Open source projects such as Hadoop, Hive and Presto provide inexpensive big data software for such projects and have become valuable tools for data integration and analysis. These technologies are production-ready and are running at scale in organizations like Yahoo and Facebook.
Hive provides a massive, fault-tolerant data warehouse for ad-hoc querying and analysis of very large distributed datasets. Presto, on the other hand, is emerging as an alternative to Hive for running interactive analytic queries. It was open sourced by Facebook in late 2013 and is targeted at analysts who expect response times ranging from sub-second to minutes. Since both are SQL implementations for Big Data, this raises the question: do we need both?
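Because both engines accept SQL, the difference lies in the execution model rather than the query language. A minimal, hypothetical sketch (the `page_views` table and its columns are illustrative assumptions, not examples from the talk):

```sql
-- Hypothetical table and columns, for illustration only.
-- The same aggregate query can be submitted to Hive (e.g. via the hive CLI)
-- as a fault-tolerant batch job over very large partitioned data...
SELECT country, COUNT(*) AS views
FROM page_views
WHERE dt = '2014-06-01'
GROUP BY country
ORDER BY views DESC
LIMIT 10;
-- ...and, essentially unchanged, to Presto (e.g. via presto-cli) when an
-- analyst wants an interactive answer in seconds rather than a batch run.
```

The query is the same; the choice of engine turns on workload shape – long-running, recoverable ETL versus low-latency exploration.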
At Qubole, we spend a lot of time working on the internals of both Presto and Hive. In this talk, we will use our experiences and observations to explain why both technologies are required in a big data project. We will then contrast the two technologies in terms of architecture and performance. Finally, we will touch upon best practices for running Presto and Hive side by side in a cloud environment, providing intuitive and powerful ways to interact with data.
Participants need a basic understanding of big data analytics.
Vikram Agrawal is a hacker at Qubole. He is currently focusing on Presto internals, with an emphasis on making it behave well in the cloud. Before Qubole, he co-founded uniRow, an online video conferencing platform, where he led all of the company’s R&D efforts. He has a Bachelor’s and a Master’s degree in Computer Science from IIT Delhi.
Shubham Tagra works on Presto at Qubole, with an emphasis on feature improvements and performance evaluation against Hive. Before this, he worked at NetApp on Storage Area Networks. Shubham has a bachelor’s degree in Computer Science from NIT Surathkal.