Machine Learning, Distributed and Parallel Computing, and High-Performance Computing are the themes for this year’s edition of Fifth Elephant.
The deadline for submitting a proposal is 15th June 2015.
We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.
This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from data. This could encompass applications of the following ML paradigms:
- Statistical Visualizations
- Unsupervised Learning
- Supervised Learning
- Semi-Supervised Learning
- Active Learning
- Reinforcement Learning
- Monte Carlo techniques and probabilistic programming
- Deep Learning
These techniques apply across various data modalities, including multivariate, text, speech, time series, image, video, and transaction data.
This track is about tools and processes for collecting, indexing, and processing vast amounts of data. The theme includes:
- Distributed and Parallel Computing
- Real Time Analytics and Stream Processing
- MapReduce and Graph Computing frameworks
- Kafka, Spark, Hadoop, MPI
- Stories of parallelizing sequential programs
- Cost/Security/Disaster Management of Data
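To give a flavour of the parallelization topics this track covers, here is a minimal sketch of the MapReduce pattern using Python's standard `multiprocessing` module. It is purely illustrative (the function names and toy data are this sketch's own, not from any framework listed above): the map step runs in parallel across worker processes, and the reduce step combines the partial results.

```python
from multiprocessing import Pool

def tokenize(line):
    # Map step: split one line into (word, 1) pairs.
    return [(w.lower(), 1) for w in line.split()]

def word_count(lines, workers=2):
    # Distribute the map step across worker processes,
    # then reduce the partial results sequentially.
    with Pool(workers) as pool:
        mapped = pool.map(tokenize, lines)
    counts = {}
    for pairs in mapped:
        for word, n in pairs:
            counts[word] = counts.get(word, 0) + n
    return counts

if __name__ == "__main__":
    lines = ["big data big insights", "small data big questions"]
    print(word_count(lines))
```

Frameworks like Hadoop and Spark apply the same map-then-reduce structure across machines rather than processes, adding fault tolerance and distributed storage on top.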
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.
How to stop admiring and start using Deep Learning
Deep Learning results look fascinating, but getting started seems to require huge infrastructure. In this talk, we present how to approach Deep Learning incrementally and put it to real use.
Deep Learning has produced state-of-the-art results on a variety of problems involving text, images, and voice. However, the groups producing these results run their algorithms on massive infrastructure; the same is not needed to start benefiting from Deep Learning.
In this talk, we present the reasons why Deep Learning is effective and show how to start using it with existing, readily available models to solve real problems. We then discuss building unsupervised pretraining models from scratch and introduce tricks and tools for doing so. We will cover various DL frameworks, including Theano, Caffe, DL4J, and Pylearn2.
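To make "unsupervised pretraining from scratch" concrete, here is a minimal denoising-autoencoder sketch in plain NumPy. It is an illustration only, not the speakers' method: the layer sizes, noise level, and learning rate are arbitrary assumptions, and real work would use one of the frameworks named above.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Hypothetical layer sizes -- chosen for illustration, not from the talk.
n_visible, n_hidden = 20, 5
W = rng.normal(0.0, 0.1, size=(n_visible, n_hidden))  # tied encode/decode weights
b_h = np.zeros(n_hidden)   # hidden-layer bias
b_v = np.zeros(n_visible)  # visible-layer bias

X = rng.random((100, n_visible))  # toy unlabeled data
lr = 0.1

def recon_loss(data):
    # Mean squared reconstruction error on clean inputs.
    h = sigmoid(data @ W + b_h)
    recon = sigmoid(h @ W.T + b_v)
    return float(((recon - data) ** 2).mean())

initial_loss = recon_loss(X)

for _ in range(100):
    # Corrupt ~30% of the inputs; train to reconstruct the clean signal.
    noisy = X * (rng.random(X.shape) > 0.3)
    h = sigmoid(noisy @ W + b_h)      # encode
    recon = sigmoid(h @ W.T + b_v)    # decode with tied weights
    err = recon - X
    # Backpropagate through the tied-weight autoencoder.
    d_recon = err * recon * (1 - recon)
    d_h = (d_recon @ W) * h * (1 - h)
    W -= lr * (noisy.T @ d_h + d_recon.T @ h) / len(X)
    b_h -= lr * d_h.mean(axis=0)
    b_v -= lr * d_recon.mean(axis=0)

final_loss = recon_loss(X)
```

The learned weights `W` would then initialize the lower layers of a supervised network, which is the "incremental" path the abstract describes: reuse what unlabeled data teaches before investing in large labeled datasets or infrastructure.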
Vivek Mehta is a Data Scientist at Flipkart. He is currently interested in product discovery, personalization, and deep learning. He has worked on various problems and domains, including fraud detection, inventory planning, online ad optimization, handwriting recognition, and machine translation.
Devashish is a software developer at Flipkart. His primary work is in deriving insights from unstructured data. His previous work includes review summarization at Flipkart and the User Insights Platform, a system that enables digging into hundreds of terabytes of data to find user insights. He is currently working on sentiment analysis and aspect extraction from social media.
- https://www.youtube.com/watch?v=zb98EWAUBxc&feature=youtu.be&t=225 (Vivek talking about outlier detection. Audio is poor; works better with headphones)
- https://youtu.be/sLDXm_53-mM?t=281 (Devashish talking about user insights engine)