Making data scientists' lives easy with Docker
Submitted by Abhishek Kumar (@meabhishekkumar) on Friday, 28 April 2017
Full talk for the data engineering track
The life of a data scientist is hard: they must worry not only about algorithms and analysis, but also about the environment and dependencies they have to build just to get there in the first place. Collaboration, deployment, and scaling bring further headaches. Introducing Docker into the data science workflow can eliminate these issues to a large extent. While Docker has become a boon for DevOps, data scientists can also leverage it to streamline the entire data science pipeline. In this talk, I will give step-by-step guidelines, along with live demos, to showcase how Docker can deliver maximum value to a data scientist.
Audience: All data science professionals and enthusiasts
- A brief outline of Docker and its ecosystem
- Why Docker is so powerful
- How Docker can be useful for data scientists
- Building an end-to-end data science workflow (with a deep learning use case) using Docker
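To give a flavour of the workflow the last item describes, here is a minimal Dockerfile sketch for a reproducible data science environment. The base image, package list, and port are illustrative assumptions, not the talk's actual demo:

```dockerfile
# Hypothetical sketch: a reproducible Python data science environment.
# Base image, packages, and port are assumptions for illustration.
FROM python:3.6-slim

# Pin the toolchain in the image so every collaborator gets the same environment
RUN pip install --no-cache-dir numpy pandas scikit-learn jupyter

WORKDIR /work

# Expose Jupyter's default port and start the notebook server
EXPOSE 8888
CMD ["jupyter", "notebook", "--ip=0.0.0.0", "--no-browser", "--allow-root"]
```

Building the image (`docker build -t ds-env .`) and running it (`docker run -p 8888:8888 -v "$PWD":/work ds-env`) gives any collaborator an identical environment, which is the core appeal described above.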
Abhishek Kumar is an experienced data science professional and technical team lead specializing in building and managing data products from conceptualization through deployment, with an interest in solving challenging machine learning problems.
He holds a Master’s degree from the University of California, Berkeley. He is also a Pluralsight author and has authored several data science courses followed by data science aspirants across the globe. He has worked on various machine learning projects involving predictive modeling, forecasting, optimization, and anomaly detection. He also received the Hal R. Varian Award at the University of California, Berkeley for his work on a deep learning based context recognition system.
He is currently working at SapientRazorfish as Manager, Data Science, focusing on applying machine learning methods to opportunities in retail, ecommerce, marketing, and operational optimization.