The Fifth Elephant 2015
A conference on data, machine learning, and distributed and parallel computing
Jul 16–18, 2015
Thu, Jul 16: 08:30 AM – 06:35 PM IST
Fri, Jul 17: 08:30 AM – 06:30 PM IST
Sat, Jul 18: 09:00 AM – 06:30 PM IST
Machine Learning, Distributed and Parallel Computing, and High-performance Computing are the themes for this year’s edition of The Fifth Elephant.
The deadline for submitting a proposal is 15th June 2015.
We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.
This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from it. This could encompass applications of ML paradigms across various data modalities, including multi-variate, text, speech, time series, images, video, transactions, etc.
This track is about tools and processes for collecting, indexing, and processing vast amounts of data.
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.
Proposed by Kiran Veigas (@kiranveigas) · Submitted Jun 1, 2015
A walk-through of how we used Spark’s scalable K-means algorithm to detect anomalies for our cyber analytics platform.
Apache Spark has proved itself to be the next-generation big data processing tool and has become a favourite of data scientists and data engineers. Its machine learning component, MLlib, provides well-tested, scalable algorithms.
It runs 10–100x faster than traditional MapReduce and provides high-level APIs that make development easy. Since Spark exposes APIs in Java, Scala, Python, and R (coming soon), data scientists can use their favourite language to build data products.
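As a rough illustration of how concise those high-level APIs are, here is the canonical word count written against Spark’s RDD API in Scala; the application name and file paths are hypothetical, not from the talk:

import org.apache.spark.{SparkConf, SparkContext}

object WordCount {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("wordcount"))
    // The whole distributed job is a handful of RDD transformations.
    sc.textFile("hdfs:///logs/input.txt")      // hypothetical input path
      .flatMap(_.split("\\s+"))                // split lines into words
      .map(word => (word, 1))                  // pair each word with a count of 1
      .reduceByKey(_ + _)                      // sum counts per word across the cluster
      .saveAsTextFile("hdfs:///logs/counts")   // hypothetical output path
    sc.stop()
  }
}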
In this session we will walk through how we used Spark’s scalable K-means algorithm to detect anomalies for our cyber analytics platform. It will give a taste of Scala (Spark’s native language), RDDs, and K-means clustering, show how to improve the clustering within a session with Spark, and finally demonstrate how to use the K-means model in real time to detect anomalies.
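A minimal sketch of that approach, using MLlib’s K-means on the Spark 1.x API: train on historical feature vectors, then flag points far from their nearest cluster centre as anomalies. The input path, k, iteration count, and the cutoff are illustrative assumptions, not values from the talk:

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.mllib.clustering.KMeans
import org.apache.spark.mllib.linalg.{Vector, Vectors}

object KMeansAnomalySketch {
  def main(args: Array[String]): Unit = {
    val sc = new SparkContext(new SparkConf().setAppName("kmeans-anomaly"))

    // Parse CSV lines of numeric features into MLlib vectors (hypothetical path).
    val data = sc.textFile("hdfs:///cyber/events.csv")
      .map(line => Vectors.dense(line.split(',').map(_.toDouble)))
      .cache()

    // Train K-means. In practice k is tuned by re-training and comparing the
    // within-set sum of squared errors reported by computeCost.
    val model = KMeans.train(data, 10, 50) // k = 10, maxIterations = 50
    println(s"WSSSE = ${model.computeCost(data)}")

    // Distance from a point to its nearest cluster centre.
    def distToCentroid(v: Vector): Double =
      math.sqrt(Vectors.sqdist(v, model.clusterCenters(model.predict(v))))

    // Flag the most distant points as anomalies; the 100 farthest is an
    // arbitrary illustrative cutoff.
    val threshold = data.map(distToCentroid).top(100).last
    data.filter(v => distToCentroid(v) >= threshold)
      .take(10)
      .foreach(v => println(s"anomaly: $v"))

    sc.stop()
  }
}

For the real-time part mentioned in the abstract, one plausible shape is to broadcast the trained model and apply the same distToCentroid-and-threshold test as a filter over a DStream of incoming events in Spark Streaming; the session presumably demonstrates its own variant of this.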
Vishnu Subramanian works as a solution architect at Happiest Minds, with years of experience building distributed systems using Hadoop, Spark, Elasticsearch, Cassandra, and machine learning. He is a Databricks-certified Spark developer with experience building data products. His interests are in IoT, data science, and big data security.