The Fifth Elephant 2015
A conference on data, machine learning, and distributed and parallel computing
Jul 2015
16 Thu 08:30 AM – 06:35 PM IST
17 Fri 08:30 AM – 06:30 PM IST
18 Sat 09:00 AM – 06:30 PM IST
Machine Learning, Distributed and Parallel Computing, and High-performance Computing are the themes for this year’s edition of The Fifth Elephant.
The deadline for submitting a proposal is 15th June 2015.
We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.
This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from data. This could encompass applications of the following ML paradigms:
Across various data modalities including multi-variate, text, speech, time series, images, video, transactions, etc.
This track is about tools and processes for collecting, indexing, and processing vast amounts of data. The theme includes:
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.
Rahul Kavale
@rahulkavale
Submitted May 7, 2015
A live coding demonstration of how Apache Spark can solve non-trivial problems with concise code, deprecating some of the established MapReduce patterns while giving a significant performance gain and a developer-friendly programming model, and keeping the other sweetness of the MapReduce world intact.
The Hadoop MapReduce programming model has evolved its own patterns for solving problems. This talk shows how those patterns can be replaced with simpler, equivalent solutions using Apache Spark, while keeping the sweetness of the existing MapReduce world and adding a performance boost. I will walk through some non-trivial examples with live coding.
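By way of illustration (a minimal sketch in Spark's Scala API, not material from the talk itself), the classic word count, which in MapReduce needs a mapper class, a reducer class, and a driver, collapses into a handful of lines; the input path "input.txt" and the local master URL below are placeholder assumptions:

// Minimal Spark word count (Spark 1.x Scala API). Paths and master URL are
// illustrative placeholders, not taken from the talk.
import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val conf = new SparkConf().setAppName("WordCount").setMaster("local[*]")
    val sc = new SparkContext(conf)

    sc.textFile("input.txt")                  // read lines as an RDD
      .flatMap(_.split("\\s+"))               // split each line into words
      .map(word => (word, 1))                 // pair each word with a count of 1
      .reduceByKey(_ + _)                     // sum the counts per word
      .saveAsTextFile("output")               // write (word, count) pairs out

    sc.stop()
  }
}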
Rahul is an application developer with Thoughtworks. He has worked with technologies including Scala, Java, and Ruby, with experience ranging from building web applications to solving big data problems. He has a special interest in Scala and Apache Spark, and loves functional programming.