Anatomy of RDD: A Deep Dive into the Spark RDD Data Structure
Submitted by Madhukara Phatak (@phatak-dev) on Wednesday, 6 May 2015
RDD is the core abstraction of Apache Spark, so understanding RDD in depth is
crucial to using Spark effectively. This talk aims to take the audience on a deep
dive into RDD to show why it's so powerful.
This is an advanced talk aimed at people who already know Spark. It tries to
deconstruct the RDD abstraction to peek inside. We will be discussing:
- Immutability and Distribution
- Partition APIs like mapPartitions, lookup, etc.
- Implementation of Laziness
- RDD dependency hierarchy
- Transformation and Action implementation
- Caching implementation
All the above topics are discussed with real code.
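As a taste of the laziness topic above, here is a minimal sketch in plain Scala (not Spark itself, and not code from the talk): like an RDD, a Scala `Iterator` pipeline only *describes* a `map` transformation, and nothing executes until a terminal "action" forces it. The object and method names are illustrative only.

```scala
// Toy illustration of RDD-style laziness using a plain Scala Iterator:
// "transformations" build a pipeline; an "action" triggers execution.
object RddLazinessSketch {
  // Returns (wasEvaluatedBeforeAction, finalResult).
  def run(): (Boolean, List[Int]) = {
    var evaluated = false
    // Like rdd.map(...): builds a lazy pipeline, runs nothing yet.
    val pipeline = Iterator(1, 2, 3).map { x => evaluated = true; x * 2 }
    val before = evaluated       // still false: the map has not executed
    val result = pipeline.toList // like an action (e.g. collect): forces the work
    (before, result)
  }

  def main(args: Array[String]): Unit =
    println(run()) // (false,List(2, 4, 6))
}
```

In real Spark the same shape holds: `map` and `filter` return a new RDD recording the dependency, and only an action such as `collect` or `count` schedules the computation across partitions.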
- Prerequisite: prior experience working with Spark
Madhukara Phatak is a big data consultant at Datamantra. He has been actively working on Hadoop, Spark and their ecosystem projects for the last 5 years.
He was the lead developer of Nectar, an ML library for Hadoop. He also contributed to the Hadoop source code to improve cyclic checks in the JobControl API. With the rise of Apache Spark, he and his team have open sourced the Coursera machine learning course examples on Spark here. He blogs about Spark here. He also runs a Spark meetup group in Bangalore.