The Fifth Elephant 2015
A conference on data, machine learning, and distributed and parallel computing
July 16 – 18, 2015
Thu 16: 08:30 AM – 06:35 PM IST
Fri 17: 08:30 AM – 06:30 PM IST
Sat 18: 09:00 AM – 06:30 PM IST
Machine Learning, Distributed and Parallel Computing, and High-Performance Computing are the themes for this year’s edition of The Fifth Elephant.
The deadline for submitting a proposal is 15th June 2015.
We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.
This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from data. This could encompass applications of the following ML paradigms:
Across various data modalities, including multivariate data, text, speech, time series, images, video, transactions, etc.
This track is about tools and processes for collecting, indexing, and processing vast amounts of data. The theme includes:
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.
Russell Nash (@russnash) · Submitted Jun 18, 2015
To learn about the key Big Data and Analytics services on AWS and how they can be used for both batch and streaming workloads.
One of the biggest challenges organizations face when designing Big Data platforms is analyzing historical, batch and streaming data using the same architecture.
This session will illustrate how to use AWS Big Data services such as Amazon Elastic MapReduce, Amazon Kinesis, Amazon Redshift and others to build a scalable, fault-tolerant and multi-layered processing system which includes the ability to analyze streaming data by comparing it against historical data in near real-time.
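As a rough illustration of this pattern (a sketch, not the speaker's implementation), the following Python snippet polls an Amazon Kinesis stream with boto3 and compares each incoming record against historical baselines that, in a real deployment, would be pre-computed with Amazon EMR or Amazon Redshift. The stream name ("clickstream"), the JSON record layout, and the historical_averages values are assumptions made for illustration only.

"""
Illustrative sketch: compare streaming records against historical aggregates.
The stream name, record layout, and baseline values are hypothetical.
"""
import json
import boto3

kinesis = boto3.client("kinesis", region_name="us-east-1")

# Hypothetical per-metric historical averages, e.g. pre-computed on an
# Amazon EMR cluster or in Amazon Redshift and loaded at startup.
historical_averages = {"page_views": 1200.0, "orders": 85.0}

def stream_records(stream_name="clickstream"):
    """Yield JSON records from the first shard of a Kinesis stream."""
    shard_id = kinesis.describe_stream(StreamName=stream_name)[
        "StreamDescription"]["Shards"][0]["ShardId"]
    iterator = kinesis.get_shard_iterator(
        StreamName=stream_name,
        ShardId=shard_id,
        ShardIteratorType="LATEST")["ShardIterator"]
    while iterator:
        response = kinesis.get_records(ShardIterator=iterator, Limit=100)
        for record in response["Records"]:
            yield json.loads(record["Data"])
        iterator = response.get("NextShardIterator")

def flag_anomalies(threshold=2.0):
    """Flag metrics whose streamed value deviates from the historical average."""
    for event in stream_records():
        baseline = historical_averages.get(event["metric"])
        if baseline and event["value"] > threshold * baseline:
            print(f"Anomaly: {event['metric']}={event['value']} "
                  f"(historical avg {baseline})")

if __name__ == "__main__":
    flag_anomalies()

In practice the comparison step would query the historical store directly rather than hold baselines in memory; the in-memory dictionary here simply stands in for that lookup.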
Russell Nash is a Solutions Architect with Amazon focused on data analytics.
He works with customers to derive maximum value and performance from the Amazon data analytics services.
Russell has over 20 years' experience in the IT industry, the majority of it spent working with database and parallel technologies designed for large-scale analysis.
He is passionate about applying these technologies to business problems in order to return value and insight to organizations.