25 Jul 2019 (Thu), 09:15 AM – 05:45 PM IST; 26 Jul 2019 (Fri), 09:20 AM – 05:30 PM IST
Accepting submissions till 15 Jun 2019, 01:00 PM
The eighth edition of The Fifth Elephant will be held on 25 and 26 July at the NIMHANS Convention Centre in Bangalore, where a thousand data scientists, ML engineers, data engineers and analysts will gather to discuss the following:
##Highlights:
1. Meet Peter Wang, co-founder of Anaconda Inc, and learn about why data privacy is the first step towards robust data management; the journey of building Anaconda; and Anaconda in enterprise.
2. Talk to the Fulfillment and Supply Group (FSG) team from Flipkart, and learn about their work on platform engineering, where ground truths are the source of data.
3. Attend tutorials on Deep Learning with RedisAI and on TransmogrifAI, Salesforce’s open-source AutoML library.
4. Discuss interesting problems to solve: data science in agriculture, a SaaS perspective on multi-tenancy in machine learning (with the Freshworks team), and bias in intent classification and recommendations.
5. Meet data science, data engineering and product teams from sponsoring companies to understand how they are handling data and leveraging intelligence from data to solve interesting problems.
##Why should you attend?
##Full schedule published here: https://hasgeek.com/fifthelephant/2019/schedule
##Contact details:
For more information about The Fifth Elephant or sponsorships, call +91-7676332020 or email info@hasgeek.com
#Sponsors:
Sponsorship Deck.
Email sales@hasgeek.com for bulk ticket purchases and for sponsoring the 2019 edition of The Fifth Elephant.
Manoj Kumar (@mkumar1984)
Submitted Sep 19, 2018
Have you ever tuned a Spark, Hive or Pig job? If so, you know it is a never-ending cycle: execute the job, observe it running, make sense of hundreds of Spark/Hadoop metrics, and re-run it with better parameters. Now imagine doing this for tens of thousands of jobs. Manual performance optimization at this scale is tedious, requires significant expertise, and wastes a lot of resources repeating the same task. Since Spark/Hadoop is the natural choice for data processing, and many of its users are not tuning experts, it becomes important to build a tool that tunes Spark/Hadoop jobs automatically.
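To make that loop concrete, here is a minimal sketch, assuming a hypothetical PySpark job and made-up values, of the handful of configuration knobs a developer typically nudges between runs after reading executor, GC and shuffle metrics:

```python
# A minimal sketch with hypothetical values: the knobs a developer keeps
# adjusting by hand between runs of the same Spark job.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("daily-aggregation")                   # hypothetical job name
    .config("spark.executor.instances", "50")       # too few -> slow; too many -> wasted capacity
    .config("spark.executor.cores", "4")
    .config("spark.executor.memory", "8g")          # raised after OOMs or heavy GC, lowered to save memory
    .config("spark.sql.shuffle.partitions", "400")  # tuned against shuffle spill and task skew
    .config("spark.memory.fraction", "0.6")
    .getOrCreate()
)
```

Each re-run means waiting for the job to finish, reading the metrics again, and repeating, which is exactly the cycle that becomes untenable at tens of thousands of jobs.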
At LinkedIn we tried to solve the problem with Dr. Elephant, an open-source, self-serve performance monitoring and tuning tool for Spark and Hadoop jobs. While it has proved very successful, it relies on the developer’s initiative to check and apply the recommendations manually, and it expects some expertise from developers to arrive at the optimal configuration from those recommendations.
In this talk we will discuss TuneIn, an auto-tuning tool developed on top of Dr. Elephant that overcomes the limitations above. We will describe how we combined the best of the approaches taken so far by industry and academia to arrive at a framework that requires no extra resources for tuning. We will discuss two approaches to auto-tuning jobs: heuristics-based tuning and optimization-based tuning. We will also describe the tuning framework, which is easily extendable to other approaches (such as machine-learning-based tuning) and execution frameworks (such as TensorFlow), and close with lessons learned and the future roadmap.
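As a rough illustration of the two approaches (a sketch with assumed metric and config names, not TuneIn’s actual code), heuristics-based tuning applies rule-of-thumb adjustments driven by the previous run’s metrics, while optimization-based tuning treats configuration as a search problem over a cost objective, presumably evaluated on the job’s regularly scheduled runs, which would match the claim that no extra resources are needed:

```python
# Illustrative sketch only: hypothetical metric and config names, not TuneIn's actual logic.

def heuristic_tuning(metrics, conf):
    """Rule-of-thumb adjustments driven by metrics from the previous run."""
    new_conf = dict(conf)
    if metrics["gc_time_fraction"] > 0.1:    # executors spending too long in GC
        new_conf["executor_memory_gb"] += 2  # give each executor more heap
    if metrics["shuffle_spill_gb"] > 10:     # heavy spill to disk during shuffles
        new_conf["shuffle_partitions"] *= 2  # spread shuffle data over more tasks
    return new_conf

def optimization_tuning(run_job, candidate_confs):
    """Treat tuning as a search: pick the configuration with the lowest
    measured cost, where run_job(conf) executes the job (e.g. on one of its
    regularly scheduled runs) and returns a cost such as memory-seconds."""
    return min(candidate_confs, key=run_job)
```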
Basic understanding of Hadoop and Spark.
Manoj Kumar is a Senior Software Engineer on the data team at LinkedIn, where he is currently working on auto-tuning Spark/Hadoop jobs. He has presented at Spark + AI Summit 2018 and Strata Data Conference 2018. He has more than four years of experience with big data technologies such as Hadoop, MapReduce, Spark, HBase, Pig, Hive, Kafka, and Gobblin. Previously, he worked on the data framework for slicing and dicing (30 dimensions, 50 metrics) advertising data at PubMatic, and before that at Amazon. He completed his M.Tech at IIT Bombay in 2008.
Pralabh Kumar is a Senior Software Engineer on the data team at LinkedIn, where he is working on auto-tuning Spark jobs. His TuneIn paper was selected at the Spark and Strata conferences in 2018. He has more than seven years of experience with big data technologies such as Spark, Hadoop, MapReduce, Cassandra, Hive, Kafka, and ELK. He contributes to Spark and Livy and has filed a couple of patents. Previously, he worked on a real-time system for unique customer identification at Walmart. He holds a degree from the University of Texas at Dallas.