The Fifth Elephant 2015

A conference on data, machine learning, and distributed and parallel computing

Rahul Kavale

@rahulkavale

Scrap Your MapReduce - Introduction to Apache Spark

Submitted May 7, 2015

An introduction to Apache Spark: we compare and contrast it with the MapReduce programming model, see what Apache Spark has to offer, where it shines, and how to use it, via real-life examples.

Outline

Remember the last time you tried to write a MapReduce job (obviously something less trivial than a word count)? Did you feel it could be done in a better way?

Did you wonder how much simpler life would be if you could write your job as ordinary collection operations, staying transparent to its distributed nature? Did you hope for more performant, lower-latency jobs?

Well, it seems you are in luck. This talk will get you started with Apache Spark and show where it shines, why to use it, and how to use it.

We'll cover aspects like testability, maintainability, and conciseness of the code, along with features like iterative processing and optional in-memory caching.
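As a taste of the in-memory caching mentioned above, here is a minimal sketch using Spark's Scala API. The app name, master setting, and data are placeholders, not part of the talk material:

```scala
import org.apache.spark.{SparkConf, SparkContext}

object CachingSketch {
  def main(args: Array[String]): Unit = {
    // Placeholder app name and local master; adjust for a real cluster.
    val conf = new SparkConf().setAppName("caching-sketch").setMaster("local[*]")
    val sc = new SparkContext(conf)

    val numbers = sc.parallelize(1 to 1000000)

    // cache() keeps the filtered RDD in memory, so the two actions
    // below reuse it instead of recomputing the whole lineage.
    val evens = numbers.filter(_ % 2 == 0).cache()

    val count = evens.count()
    val sum   = evens.map(_.toLong).reduce(_ + _)

    println(s"count=$count, sum=$sum")
    sc.stop()
  }
}
```

Because the filtered RDD is cached, iterative or repeated computations over it avoid re-reading and re-filtering the source data each time.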

We will see how Spark, being just a cluster computing engine, abstracts the underlying distributed storage and cluster management, giving us a uniform interface to consume, process, and query the data.
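A quick sketch of that uniform interface, assuming an existing SparkContext named `sc` and placeholder paths: the same textFile call reads from the local filesystem, HDFS, or S3, and the downstream code does not change.

```scala
// All paths below are placeholders; the storage layer is selected
// purely by the URI scheme passed to textFile.
val local = sc.textFile("file:///tmp/events.log")
val hdfs  = sc.textFile("hdfs://namenode:8020/logs/events.log")
val s3    = sc.textFile("s3n://some-bucket/logs/events.log")

// Downstream processing is identical regardless of where the data lives.
val errorCount = hdfs.filter(_.contains("ERROR")).count()
```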

We will explore the basic abstraction of the RDD (Resilient Distributed Dataset), which underpins many of the features that make Spark a very good choice for big data applications.
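For a flavour of the RDD API, here is a small sketch; the log format and input path are assumptions made for illustration. Transformations read like ordinary Scala collection operations, while actions trigger the distributed computation.

```scala
import org.apache.spark.SparkContext._   // pair-RDD operations such as reduceByKey

// Assuming an existing SparkContext `sc` and an access log with lines like
// "GET /home 200 123" (method, path, status, response time in ms).
val log = sc.textFile("hdfs://namenode:8020/access.log")

// Transformations are evaluated lazily across the cluster,
// but the code reads like operations on a local collection.
val slowRequestsPerPath = log
  .map(_.split(" "))
  .filter(fields => fields.length == 4 && fields(3).toInt > 500)
  .map(fields => (fields(1), 1))
  .reduceByKey(_ + _)

// Actions such as take() trigger the actual distributed computation.
slowRequestsPerPath.take(10).foreach(println)
```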

We will see all of this through some non-trivial code examples.

Speaker bio

Rahul is an application developer with Thoughtworks. He has worked with technologies ranging from Scala and Java to Ruby, and his experience spans building web applications to solving big data problems. Rahul has a special interest in Scala and Apache Spark, and he loves functional programming.

