Machine Learning, Distributed and Parallel Computing, and High-performance Computing are the themes for this year’s edition of Fifth Elephant.
The deadline for submitting a proposal is 15th June 2015.
We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.
Track 1: Discovering Insights and Driving Decisions
This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from it. This could encompass applications of the following ML paradigms:
- Statistical Visualizations
- Unsupervised Learning
- Supervised Learning
- Semi-Supervised Learning
- Active Learning
- Reinforcement Learning
- Monte Carlo techniques and probabilistic programming
- Deep Learning
These techniques apply across various data modalities, including multivariate data, text, speech, time series, images, video, and transactions.
Track 2: Speed at Scale
This track is about tools and processes for collecting, indexing, and processing vast amounts of data. The theme includes:
- Distributed and Parallel Computing
- Real Time Analytics and Stream Processing
- MapReduce and Graph Computing frameworks
- Kafka, Spark, Hadoop, MPI
- Stories of parallelizing sequential programs
- Cost/Security/Disaster Management of Data
Commitment to Open Source
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.
Exploratory data analysis using Apache Lens and Apache Zeppelin
Apache Lens is an analytics platform that aims to break down data analytics silos by providing a single view of data across multiple tiered data stores and an optimal execution environment for analytical queries. It seamlessly integrates Hadoop with traditional data warehouses so that they appear as one. Apache Zeppelin is a web-based notebook that enables interactive data analytics.
This talk covers how these two projects support exploratory data analysis at Flipkart.
Flipkart has multiple data stores catering to diverse requirements around query flexibility and response times. At Flipkart, we use Apache Lens to run multi-dimensional queries in a unified way over datasets stored in multiple warehouses. Apache Lens eases analytical querying by providing a unified SQL-like interface over real-time and historical datasets that may span multiple stores. Conceiving of data as a cube with hierarchical dimensions leads to conceptually straightforward operations that facilitate analysis. Integrating Apache Hive with traditional warehouses and other real-time stores makes it possible to optimize query execution cost while maintaining latency SLAs.
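To make the cube model concrete, here is a minimal pure-Python sketch of rolling up a measure along hierarchical dimensions. The records, field names, and `rollup` helper are hypothetical illustrations of the cube idea, not Lens's actual API:

```python
from collections import defaultdict

# Hypothetical order records: two dimension hierarchies
# (geo: country > city, time: month) and one measure (amount).
rows = [
    {"country": "IN", "city": "Bangalore", "month": "2015-05", "amount": 120},
    {"country": "IN", "city": "Bangalore", "month": "2015-06", "amount": 80},
    {"country": "IN", "city": "Delhi",     "month": "2015-05", "amount": 200},
    {"country": "US", "city": "Austin",    "month": "2015-05", "amount": 150},
]

def rollup(rows, dims, measure="amount"):
    """Sum the measure over the requested dimension levels,
    the way an OLAP cube query aggregates along a hierarchy."""
    totals = defaultdict(int)
    for r in rows:
        key = tuple(r[d] for d in dims)
        totals[key] += r[measure]
    return dict(totals)

# The same "cube" queried at different levels of the hierarchies.
by_country = rollup(rows, ["country"])          # {('IN',): 400, ('US',): 150}
by_city    = rollup(rows, ["country", "city"])  # drill down to city level
by_month   = rollup(rows, ["month"])            # slice along the time dimension
```

A cube engine like Lens answers this class of questions declaratively through its SQL-like interface, picking an execution environment that can satisfy the requested dimensions at the lowest cost.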
Exploratory data analysis is the key ingredient in deriving insights from data. The process usually requires multiple people with complementary expertise to work together. While most query systems provide an interactive shell for data exploration, they do not solve for collaboration between multiple people. Apache Zeppelin fills this gap.
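For illustration, a Zeppelin note is composed of paragraphs, each prefixed with an interpreter directive, and collaborators can edit and re-run the same note in the browser. A hypothetical note mixing narrative and a query might look like this (the table and column names are made up):

```
%md
## Orders per day — exploratory look

%sql
SELECT order_date, COUNT(*) AS orders
FROM fact_orders
GROUP BY order_date
ORDER BY order_date
```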
In this talk, we showcase how Zeppelin and Lens can be used together for collaborative, exploratory data analysis.
Bala Nathan is an Architect on Flipkart’s central data platform. He is responsible for Flipkart’s next-generation query and analytics platform, which will help analysts and systems make decisions in real time. He has more than 15 years of experience in the industry, which he spent in a couple of startups early on before moving to Oracle and subsequently to Yahoo!, where he worked on large-scale structured and unstructured data ingestion and enrichment platforms. He also does consulting and tech bootstrapping for startups in his spare time.
Pranav Agarwal is an SDE3 on Flipkart’s data platform, where he works on the next-generation machine learning platform that uses Apache Zeppelin and Lens for exploratory data analysis. He has more than 15 years of industry experience. Prior to Flipkart, he spent more than a decade at Oracle working on Oracle Database search technology, which powers unstructured text retrieval in the RDBMS, semantic search, and content similarity. He holds a few patents on term selection, document similarity, and score boosting.