Machine Learning, Distributed and Parallel Computing, and High-Performance Computing are the themes for this year’s edition of The Fifth Elephant.
The deadline for submitting a proposal is 15th June 2015.
We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.
This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from data. This could encompass applications of the following ML paradigms:
- Statistical Visualizations
- Unsupervised Learning
- Supervised Learning
- Semi-Supervised Learning
- Active Learning
- Reinforcement Learning
- Monte Carlo techniques and probabilistic programming
- Deep Learning
These apply across various data modalities, including multivariate data, text, speech, time series, images, video, and transactions.
This track is about tools and processes for collecting, indexing, and processing vast amounts of data. The theme includes:
- Distributed and Parallel Computing
- Real-Time Analytics and Stream Processing
- MapReduce and Graph Computing frameworks
- Kafka, Spark, Hadoop, MPI
- Stories of parallelizing sequential programs
- Cost/Security/Disaster Management of Data
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.
From Search to Discovery at Housing
The objective of this session is to introduce a framework and models for search recommendations through real-time analysis of user click streams. We will discuss architectural challenges, the challenges of modeling the expert system, and how the approach can be applied in other domains.
The problem of search discovery at Housing can be broken down into two verticals: personalizing and improving the relevance of the result set, and guiding users towards search criteria with a higher chance of conversion. The search recommendation service has two components: user click-stream processing and an expert system that generates search recommendations. Stream processing builds session profiles for users and generates signals for searches and sessions with a low chance of conversion (“broken” searches). The expert system handles such broken searches and suggests alternative but relevant search criteria. Both the expert system and the broken-search models are updated using user activity and feedback. The result set is personalized based on user profiles and on supply and demand biases in the search criteria.
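To make the pipeline above concrete, here is a minimal sketch of the flow it describes: folding click-stream events into per-session profiles, flagging sessions whose searches look unlikely to convert, and asking a stand-in “expert system” for relaxed criteria. All names, event fields, and thresholds are illustrative assumptions, not Housing’s actual implementation.

```python
from collections import defaultdict

# Assumed thresholds for flagging a "broken" search (illustrative only).
BROKEN_RESULT_THRESHOLD = 3   # too few results to convert
BROKEN_CLICK_THRESHOLD = 0    # no clicks observed in the session

def update_session_profile(profiles, event):
    """Fold one click-stream event into the per-session profile."""
    profile = profiles[event["session_id"]]
    profile["searches"] += 1
    profile["results"] += event["result_count"]
    profile["clicks"] += event["clicks"]
    return profile

def is_broken_search(profile):
    """Flag sessions whose searches have a low chance of conversion."""
    return (profile["results"] <= BROKEN_RESULT_THRESHOLD
            or profile["clicks"] <= BROKEN_CLICK_THRESHOLD)

def suggest_alternative(criteria):
    """Relax the most restrictive filter (a toy stand-in for the expert system)."""
    relaxed = dict(criteria)
    if "max_price" in relaxed:
        relaxed["max_price"] = int(relaxed["max_price"] * 1.2)
    return relaxed

# Example: one low-result, zero-click search event in a session.
profiles = defaultdict(lambda: {"searches": 0, "results": 0, "clicks": 0})
event = {"session_id": "s1", "result_count": 1, "clicks": 0}
profile = update_session_profile(profiles, event)
if is_broken_search(profile):
    suggestion = suggest_alternative({"city": "Mumbai", "max_price": 20000})
```

In a production setting the profile updates would run inside a stream processor (e.g. over Kafka topics), and the suggestion logic would be a learned model rather than a hand-written rule; the sketch only shows how the pieces connect.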
Ravikiran Gunale is a software developer at Housing. His interests include supervised learning systems, NLP, and new technologies. He has worked on big data projects related to recommendation systems, predictive analytics, and fraud detection.
Mudit is a developer at Housing.com and leads search and real-time recommendations at Housing. He is a FOSS enthusiast and has contributed actively to various projects, including the collaborative filtering module in mlpack.