Jul 2019: Thu 25, 09:15 AM – 05:45 PM IST; Fri 26, 09:20 AM – 05:30 PM IST
Jaidev Deshpande
As technology gets cheaper and more accessible, we start taking it for granted. It is easier than ever to perform fairly exciting AI tasks with just a few dozen lines of code. As data grows, our approach to ML problems often, and understandably, becomes haphazard. As GPUs become more widely available, we subconsciously assume that throwing enough artificial neurons at a problem will eventually solve it. Whether or not that actually works, it is not uncommon for data scientists to realize - unfortunately only in hindsight - that most of the iterations required to build a successful model were unnecessary. Ironically, these 'missteps' are essential to finding the right solution. Solving an ML problem is like traversing a minefield, where the safest path can only be determined by blowing up a lot of mines: we find the solution only by making mistakes. This 'see what sticks' approach cannot be completely avoided, but it can be curbed significantly with a structured approach to running machine learning experiments. That structured approach - its theory and practice, its design principles and its software implementation - is what this talk is about.
This talk is inspired, in part, by Peter Bull & Isaac Slavitt's SciPy 2016 tutorial on Developer Lifehacks for the Jupyter Data Scientist. That tutorial grew out of their effort to bring best practices from software engineering into data science. In the same spirit, this talk expands on that idea.
Most ML problems are, by design, highly iterative, and can therefore, at least in theory, be automated. However, the lack of a structured workflow prevents us from exploiting the redundancies in ML practice. The ideal way to manage machine learning experiments is with a lab journal. Each experiment can be reasonably characterized by a hypothesis, a procedure, and the inferences drawn from its results. A well-kept journal helps practitioners avoid repeating mistakes and narrow down the right approach.
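For illustration, a journal entry with this structure could be modelled as a simple record. This is a minimal sketch; the class and field names are hypothetical and not taken from any particular tool:

    from dataclasses import dataclass, field

    @dataclass
    class ExperimentEntry:
        # The expectation, stated before anything is run.
        hypothesis: str
        # How the experiment is carried out: model, data, parameters.
        procedure: dict
        # Metrics observed after the run.
        results: dict = field(default_factory=dict)
        # What the results imply, and what to try (or avoid) next.
        inference: str = ""

    entry = ExperimentEntry(
        hypothesis="Adding dropout will reduce overfitting on the validation set",
        procedure={"model": "keras_cnn", "dropout": 0.5, "epochs": 20},
    )

Keeping entries in a uniform shape like this is what makes a journal searchable and comparable across experiments.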
This talk will introduce Kepler, a fully open-source framework for managing ML experiments. Kepler is written in Python and optimized for deep learning experiments. It runs sanity checks on models and data, enforcing the idea that training should not begin until the model is sane and the data is properly prepared. It upholds the DRY principle by keeping track of performance metrics across multiple experiments, and lets users log experiments carried out on sklearn estimators and keras models. It also acts as a hyperparameter grid manager, alerting users who accidentally re-run the same experiment on the same data with the same parameters. Its meta-learning features allow for an end-to-end approach to machine learning problems. Ultimately, it provides a searchable interface for all projects, models and experiments under its umbrella, facilitating the design and automation of efficient ML workflows.
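As a taste of what this could look like in practice, the snippet below sketches logging a scikit-learn experiment through such a framework. All Kepler-specific names here (the kepler module, the Experiment class and its run method) are assumptions made for illustration, not Kepler's documented API:

    from sklearn.datasets import load_iris
    from sklearn.linear_model import LogisticRegression

    from kepler import Experiment  # hypothetical import

    X, y = load_iris(return_X_y=True)
    model = LogisticRegression(max_iter=200)

    # Wrapping the estimator runs sanity checks on the model and the data
    # before any training is allowed to start.
    exp = Experiment(model=model, X=X, y=y)

    # Running the experiment fits the model and logs hyperparameters
    # and performance metrics to the project's journal.
    exp.run()

    # Re-running with identical parameters on the same data would trigger
    # a warning instead of silently repeating the experiment.
    exp.run()

The point of the description above is that this bookkeeping happens automatically, rather than being left to the practitioner's discipline.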
Jaidev is a senior data scientist at Gramener, where he works on building products for other data scientists. His interests lie at the intersection of data science, software engineering and continuous integration. He is an active contributor to the scientific Python stack, and loves applying machine learning and analytics to personal productivity. You're likely to run into him at FOSS community events.