Machine Learning, Distributed and Parallel Computing, and High-performance Computing are the themes for this year’s edition of Fifth Elephant.
The deadline for submitting a proposal is 15th June 2015.
We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.
This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from data. This could encompass applications of the following ML paradigms:
- Statistical Visualizations
- Unsupervised Learning
- Supervised Learning
- Semi-Supervised Learning
- Active Learning
- Reinforcement Learning
- Monte Carlo techniques and probabilistic programming
- Deep Learning
These techniques apply across various data modalities, including multivariate data, text, speech, time series, images, video, and transactions.
This track is about tools and processes for collecting, indexing, and processing vast amounts of data. The theme includes:
- Distributed and Parallel Computing
- Real-Time Analytics and Stream Processing
- MapReduce and Graph Computing frameworks
- Kafka, Spark, Hadoop, MPI
- Stories of parallelizing sequential programs
- Cost/Security/Disaster Management of Data
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.
Solr compute cloud - An elastic Solr infrastructure
This talk goes over the challenges of scaling a Solr search platform to serve hundreds of millions of documents with low latency and high throughput in a multi-tenant architecture.
Scaling search platforms is an extremely hard problem: it involves serving hundreds of millions of documents with low latency and high throughput, at an optimized cost.
At BloomReach, we have implemented SC2, an elastic Solr infrastructure for big data applications. SC2 supports cloud-based heterogeneous workloads, scales search servers dynamically, provides application- and pipeline-level isolation, offers latency guarantees and application-specific performance tuning, and provides high-availability features such as cluster replacement, cross-data-center support, and disaster recovery.
This talk covers the architecture of the SC2 infrastructure at BloomReach and the various challenges involved in scaling a large distributed search platform.
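To make the isolation idea concrete, here is a minimal, hypothetical sketch (not BloomReach's actual SC2 code) of one common way to give tenants application-level isolation in Solr: deterministically routing each tenant's queries to a dedicated collection out of a fixed pool, so one heavy tenant cannot degrade the latency of others. The collection naming scheme and pool size below are illustrative assumptions.

```python
import hashlib

def collection_for_tenant(tenant_id: str, num_collections: int = 8) -> str:
    """Deterministically map a tenant to one of a fixed pool of Solr collections.

    Hashing keeps the mapping stable across routers without shared state;
    the pool size and the "search_shard_" prefix are illustrative only.
    """
    bucket = int(hashlib.md5(tenant_id.encode("utf-8")).hexdigest(), 16) % num_collections
    return f"search_shard_{bucket}"

def solr_query_url(base_url: str, tenant_id: str, query: str) -> str:
    """Build a query URL against the tenant's isolated collection."""
    collection = collection_for_tenant(tenant_id)
    return f"{base_url}/solr/{collection}/select?q={query}"

# Example: the same tenant always lands on the same collection.
url = solr_query_url("http://localhost:8983", "tenant-42", "shoes")
```

In a real deployment, the routing table would typically live in a coordination service (e.g. ZooKeeper, which SolrCloud already uses) rather than a pure hash, so that collections can be rebalanced or replaced without changing tenant identifiers.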
Some background on search applications is needed for this talk. We will go in depth into advanced scaling aspects, with working sample use cases around Solr.
Suchi has been working at BloomReach on search, user data, and personalization platforms for the last three years.
Her areas of expertise include mobile frameworks, scaling search applications, performance tuning of large distributed search platforms, and user data/personalization applications. Prior to BloomReach, Suchi was a founding member of the Android framework team and made extensive contributions to the Android framework.
Suchi has an MS from University of Cincinnati and BE from BITS, Pilani. Her prior work experience includes working in companies like Google, Motorola and IBM Almaden.