The Fifth Elephant 2015
A conference on data, machine learning, and distributed and parallel computing
Jul 2015
16 Thu 08:30 AM – 06:35 PM IST
17 Fri 08:30 AM – 06:30 PM IST
18 Sat 09:00 AM – 06:30 PM IST
Machine Learning, Distributed and Parallel Computing, and High-performance Computing are the themes for this year’s edition of The Fifth Elephant.
The deadline for submitting a proposal is 15th June 2015.
We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.
This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from it. Submissions could cover applications of ML paradigms across various data modalities, including multi-variate, text, speech, time series, images, video, transactions, etc.
This track is about tools and processes for collecting, indexing, and processing vast amounts of data.
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.
Gagan Agrawal
@gagana24
Submitted Jun 13, 2015
Understand how to build complex data workflow pipelines with Cascading on Hadoop, taking inputs from different sources and pushing crunched data to different sinks.
Big data processing often requires reading data from multiple sources such as log files, databases, NoSQL stores, and external services, applying transformations, and defining complex workflow pipelines to extract useful insight from it. Writing these steps directly in Hadoop’s MapReduce is non-trivial and demands a lot of effort and expertise. High-level languages such as Pig Latin or Hive make writing MapReduce jobs easier, but expressing complex logic in them requires custom functions written in Java, which makes testing and debugging difficult.

This is where Cascading makes a developer’s life easy. Everything is written in Java, yet with the ease of a high-level, SQL-like interface over MapReduce. Because the logic is plain Java, it can easily be tested by running in stand-alone mode or through JUnit test cases. Moreover, Cascading provides framework-agnostic APIs (Hadoop or otherwise), which means workflows written in Cascading can be executed on multiple frameworks without any code change, as long as a Cascading connector is available. In this session, I will introduce the Cascading framework and its features and discuss some real-world use cases where complex workflows can be developed easily in Cascading. Below is the agenda of the talk.
--What is Cascading
--Building complex data flows in Cascading (see the first sketch after this list)
--Testing with Cascading (see the second sketch after this list)
--Multiple examples to demonstrate ease of writing complex workflows
--Real-world use case
--Advantages / Disadvantages
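To make the programming model concrete, here is a minimal sketch of a Cascading flow that reads text lines from one HDFS path (the source tap), counts words, and writes tab-separated results to another path (the sink tap). It assumes Cascading 2.x with the Hadoop planner; the class name, paths, and field names are illustrative and not taken from the talk.

import java.util.Properties;

import cascading.flow.Flow;
import cascading.flow.FlowDef;
import cascading.flow.hadoop.HadoopFlowConnector;
import cascading.operation.aggregator.Count;
import cascading.operation.regex.RegexSplitGenerator;
import cascading.pipe.Each;
import cascading.pipe.Every;
import cascading.pipe.GroupBy;
import cascading.pipe.Pipe;
import cascading.property.AppProps;
import cascading.scheme.hadoop.TextDelimited;
import cascading.scheme.hadoop.TextLine;
import cascading.tap.SinkMode;
import cascading.tap.Tap;
import cascading.tap.hadoop.Hfs;
import cascading.tuple.Fields;

public class WordCountFlow {

  public static void main(String[] args) {
    String inputPath = args[0];   // e.g. an HDFS directory of raw log files
    String outputPath = args[1];  // where the (word, count) results land

    // Source tap: read raw text lines; sink tap: write tab-separated word counts.
    Tap docTap = new Hfs(new TextLine(new Fields("line")), inputPath);
    Tap wcTap = new Hfs(new TextDelimited(new Fields("word", "count"), "\t"),
        outputPath, SinkMode.REPLACE);

    // Pipe assembly: split each line into words, group by word, count occurrences.
    Pipe docPipe = new Each("tokenize", new Fields("line"),
        new RegexSplitGenerator(new Fields("word"), "\\s+"));
    Pipe wcPipe = new GroupBy(docPipe, new Fields("word"));
    wcPipe = new Every(wcPipe, Fields.ALL, new Count(new Fields("count")), Fields.ALL);

    // Bind taps to the assembly and run it as a MapReduce job via the Hadoop planner.
    Properties properties = new Properties();
    AppProps.setApplicationJarClass(properties, WordCountFlow.class);
    FlowDef flowDef = FlowDef.flowDef()
        .setName("word-count")
        .addSource(docPipe, docTap)
        .addTailSink(wcPipe, wcTap);

    Flow flow = new HadoopFlowConnector(properties).connect(flowDef);
    flow.complete();
  }
}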
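Because a pipe assembly is plain Java and independent of the execution engine, the same assembly can be wired to local file taps and run under JUnit with Cascading’s local connector (the cascading-local module), with no Hadoop cluster involved; this is what makes stand-alone testing practical. The test class, fixture text, and assertion below are a hypothetical sketch along those lines.

import static org.junit.Assert.assertTrue;

import java.io.File;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;

import org.junit.Test;

import cascading.flow.Flow;
import cascading.flow.FlowDef;
import cascading.flow.local.LocalFlowConnector;
import cascading.operation.aggregator.Count;
import cascading.operation.regex.RegexSplitGenerator;
import cascading.pipe.Each;
import cascading.pipe.Every;
import cascading.pipe.GroupBy;
import cascading.pipe.Pipe;
import cascading.scheme.local.TextDelimited;
import cascading.scheme.local.TextLine;
import cascading.tap.SinkMode;
import cascading.tap.Tap;
import cascading.tap.local.FileTap;
import cascading.tuple.Fields;

public class WordCountLocalTest {

  @Test
  public void countsWordsWithoutAHadoopCluster() throws Exception {
    // Small temp files stand in for the HDFS source and sink used in production.
    File input = File.createTempFile("words", ".txt");
    Files.write(input.toPath(), "to be or not to be".getBytes(StandardCharsets.UTF_8));
    File output = File.createTempFile("counts", ".tsv");

    Tap source = new FileTap(new TextLine(new Fields("line")), input.getAbsolutePath());
    Tap sink = new FileTap(new TextDelimited(new Fields("word", "count"), "\t"),
        output.getAbsolutePath(), SinkMode.REPLACE);

    // The pipe assembly is the same as in the Hadoop flow; only the taps and connector differ.
    Pipe words = new Each("tokenize", new Fields("line"),
        new RegexSplitGenerator(new Fields("word"), "\\s+"));
    Pipe counts = new Every(new GroupBy(words, new Fields("word")),
        Fields.ALL, new Count(new Fields("count")), Fields.ALL);

    Flow flow = new LocalFlowConnector().connect(FlowDef.flowDef()
        .setName("word-count-test")
        .addSource(words, source)
        .addTailSink(counts, sink));
    flow.complete();

    // "to" appears twice in the fixture line, so the output should contain "to<TAB>2".
    String result = new String(Files.readAllBytes(output.toPath()), StandardCharsets.UTF_8);
    assertTrue(result.contains("to\t2"));
  }
}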
Gagan Agarwal is a Sr. Principal Engineer at Snapdeal, where he currently heads the Personalization and Recommendation team. He has close to 10 years of experience in the software industry and has worked in domains such as e-commerce, digital advertising, e-governance, document and content management, customer communication management, and media buy management. Gagan has built challenging software ranging from multi-tiered web applications with millions of users to batch processing of multi-terabyte data. Apart from his expertise in Java/JEE technologies, Gagan has for the past several years worked with Big Data technologies such as Hadoop, Spark, Cascading, Pig, Hive, Sqoop, Oozie, and Kafka, and NoSQL stores such as HBase, Cassandra, Aerospike, Mongo, and Neo4j. Gagan is a seasoned speaker and has spoken at several technology conferences on topics ranging from big data processing and NoSQL stores (key-value, graph-based, and column-oriented) to functional programming languages.