The Fifth Elephant 2015

A conference on data, machine learning, and distributed and parallel computing

Scrap Your MapReduce - Introduction to Apache Spark

Submitted by Rahul Kavale (@rahulkavale) on Thursday, 7 May 2015


Technical level

Beginner

Section

Full Talk

Status

Submitted


Total votes:  +7

Objective

An introduction to Apache Spark: we will compare and contrast it with the MapReduce programming model, see what Spark has to offer, where it shines, and how to use it through real-life examples.

Description

Remember the last time you tried to write a MapReduce job (something less trivial than a word count, obviously)? Did you feel it could be done in a better way?

Did you wonder how much simpler life would be if you could write such jobs as plain collection operations, staying oblivious to their distributed nature? Did you hope for faster, lower-latency jobs?
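To make that contrast concrete, here is word count written as ordinary collection operations, the same shape (flatMap, map, reduce-by-key) that Spark's RDD API exposes. This is a local sketch on plain Python lists so it runs without a cluster; with PySpark you would start from something like `sc.textFile(path)` and the equivalent steps would run distributed.

```python
from collections import Counter
from itertools import chain

lines = ["to be or not to be", "to see or not to see"]

# Collection-style pipeline; each step mirrors an RDD transformation.
words = chain.from_iterable(line.split() for line in lines)  # ~ flatMap
pairs = ((word, 1) for word in words)                        # ~ map to (key, 1)

counts = Counter()                                           # ~ reduceByKey(+)
for word, n in pairs:
    counts[word] += n

print(dict(counts))  # {'to': 4, 'be': 2, 'or': 2, 'not': 2, 'see': 2}
```

The point is that nothing in the pipeline mentions splits, mappers, or reducers; the distribution concern is pushed into the framework rather than the job code.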

Well, it seems you are in luck. This talk will get you started with Apache Spark and show you where it shines, why to use it, and how to use it.

We’ll cover aspects like testability, maintainability, and conciseness of the code, along with features such as iterative processing and optional in-memory caching.

We will see how Spark, being just a cluster computing engine, abstracts away the underlying distributed storage and cluster management, giving us a uniform interface to consume, process, and query data.

We will explore Spark’s basic abstraction, the RDD (Resilient Distributed Dataset), whose features make Spark a very good choice for your big data applications.
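One defining RDD feature worth previewing: transformations are lazy and only build up a lineage of work, which a single action then triggers. The sketch below imitates that behavior locally with Python's lazy `map`/`filter` iterators standing in for RDD transformations and `list()` standing in for an action like `collect()`; it is an analogy, not Spark itself.

```python
evaluated = []

def double(x):
    evaluated.append(x)  # record that work actually happened
    return x * 2

data = range(1, 11)

# Like RDD transformations, map/filter here only describe a pipeline:
pipeline = filter(lambda y: y > 10, map(double, data))
assert evaluated == []  # nothing computed yet -- analogous to an RDD lineage

result = list(pipeline)  # the "action" (think collect()) forces evaluation
print(result)            # [12, 14, 16, 18, 20]
```

In real Spark the lineage also enables fault tolerance (lost partitions are recomputed from it) and `cache()` lets you keep an intermediate result in memory across multiple actions, which is what makes iterative workloads fast.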

We will see all of this through non-trivial code examples.

Speaker bio

Rahul is an application developer with Thoughtworks. He has worked with technologies ranging from Scala and Java to Ruby, with experience spanning web applications and big data problems. He has a special interest in Scala and Apache Spark, and loves functional programming.

