Manoj Kumar

@mkumar1984

TuneIn: How to get your jobs tuned while sleeping

Submitted Sep 19, 2018

Have you ever tuned a Spark, Hive, or Pig job? If so, you know it is a never-ending cycle: execute the job, observe it running, make sense of hundreds of Spark/Hadoop metrics, and re-run it with better parameters. Now imagine doing this for tens of thousands of jobs. Manual performance optimization at this scale is tedious, requires significant expertise, and wastes a lot of resources repeating the same task. As Spark/Hadoop has become the natural choice for data processing, attracting many novice users, it is important to develop a tool that tunes Spark/Hadoop jobs automatically.
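
To make that cycle concrete, here is a minimal Python sketch of the manual loop. Every function and parameter name below is invented for illustration; none of it is a real TuneIn or Dr. Elephant API:

```python
# Illustrative only: submit_job, read_metrics, and pick_better_params stand in
# for the steps a developer performs by hand; they are not TuneIn/Dr. Elephant APIs.

def submit_job(params):
    """Pretend to run a Spark job and return its observed metrics."""
    return {"runtime_min": 60 / params["executor_cores"],
            "gc_time_pct": 12}

def read_metrics(run):
    # In reality: sift through hundreds of Spark/Hadoop counters by hand.
    return run

def pick_better_params(params, metrics):
    # The step that needs human expertise: map symptoms to new settings.
    new = dict(params)
    if metrics["gc_time_pct"] > 10:
        new["executor_memory_gb"] += 1
    return new

params = {"executor_memory_gb": 2, "executor_cores": 2}
for _ in range(3):          # in practice, repeated manually, job after job
    metrics = read_metrics(submit_job(params))
    params = pick_better_params(params, metrics)
print(params)               # {'executor_memory_gb': 5, 'executor_cores': 2}
```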

At LinkedIn we first tried to solve the problem with Dr. Elephant, an open-source, self-serve performance monitoring and tuning tool for Spark and Hadoop jobs. While it has proved very successful, it relies on developers taking the initiative to check its recommendations and apply them manually. It also expects some expertise from developers to arrive at an optimal configuration from those recommendations.

In this talk we will discuss TuneIn, an auto-tuning tool developed on top of Dr. Elephant that overcomes the above limitations. We will describe how we combined the best of the approaches taken so far by industry and academia into a framework that requires no extra resources for tuning. We will cover two approaches to auto-tuning jobs: heuristics-based tuning and optimization-based tuning. We will also present the tuning framework itself, which is easily extended to integrate other approaches (such as machine-learning-based tuning) and execution frameworks (such as TensorFlow), and close with lessons learned and the future roadmap.
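
To make the contrast between the two approaches concrete, here is a minimal Python sketch under a mock cost model. The parameter name, cost function, and step logic are all invented for illustration and say nothing about TuneIn's internals; real optimization-based tuning would use a principled search algorithm over many parameters:

```python
import random

def run_job(params):
    """Mock cost model: runtime bottoms out near 4 GB of executor memory."""
    mem = params["executor_memory_gb"]
    return {"runtime_min": abs(mem - 4) * 10 + 30,
            "gc_time_pct": max(0.0, 20 - 3 * mem)}

def heuristic_step(params, metrics):
    """Heuristics-based: a fixed rule maps an observed symptom to a change."""
    new = dict(params)
    if metrics["gc_time_pct"] > 10:     # rule: heavy GC => add executor memory
        new["executor_memory_gb"] += 1
    return new

def optimization_step(params, metrics):
    """Optimization-based: explore the parameter space; the loop keeps the best."""
    new = dict(params)
    new["executor_memory_gb"] = max(1, new["executor_memory_gb"]
                                    + random.choice([-1, 1]))
    return new

def tune(step, params, iterations=8):
    best, best_cost = dict(params), float("inf")
    for _ in range(iterations):
        metrics = run_job(params)
        if metrics["runtime_min"] < best_cost:
            best, best_cost = dict(params), metrics["runtime_min"]
        params = step(params, metrics)
    return best, best_cost

print(tune(heuristic_step, {"executor_memory_gb": 1}))
print(tune(optimization_step, {"executor_memory_gb": 1}))
```

Because both strategies share the same step signature, the surrounding loop is the natural extension point: a machine-learning-based strategy would simply be another step implementation, which is the sense in which such a framework can be extendable.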

Requirements

Basic understanding of Hadoop and Spark

Speaker bio

Manoj Kumar is a Senior Software Engineer in the data team at LinkedIn, where he is currently working on auto-tuning Spark/Hadoop jobs. He has presented at Spark + AI Summit 2018 and Strata Data Conference 2018. He has more than four years of experience in big data technologies such as Hadoop, MapReduce, Spark, HBase, Pig, Hive, Kafka, and Gobblin. Previously, he worked at PubMatic on a data framework for slicing and dicing advertising data (30 dimensions, 50 metrics), and before that at Amazon. He completed his M.Tech at IIT Bombay in 2008.

Pralabh Kumar is a Senior Software Engineer in the data team at LinkedIn, where he is working on auto-tuning Spark jobs. His TuneIn talk was selected for the Spark + AI Summit and Strata conferences in 2018. He has more than seven years of experience in big data technologies such as Spark, Hadoop, MapReduce, Cassandra, Hive, Kafka, and the ELK stack. He contributes to Spark and Livy and has filed a couple of patents. Previously, he worked on a real-time system for unique customer identification at Walmart. He holds a degree from the University of Texas at Dallas.

Slides

https://www.slideshare.net/secret/CKSJQSikovJ7H8

