The Fifth Elephant 2019

The eighth edition of India's best data conference


Automating Workflows for AI Projects

Submitted by Jaidev Deshpande (@jaidevd) on Sunday, 21 April 2019


Section: Full talk · Technical level: Intermediate · Session type: Lecture

Abstract

As technology gets cheaper and more accessible, we start taking it for granted. It is easier than ever to perform fairly sophisticated AI tasks with just a few dozen lines of code. As data grows, our approach to ML problems often, and understandably, becomes haphazard. As GPUs become more widely available, we subconsciously assume that throwing enough artificial neurons at a problem will eventually solve it. Whether or not this actually works, it is not uncommon for data scientists to realize - unfortunately only in hindsight - that most of the iterations required to build a successful model were unnecessary. Ironically, these ‘missteps’ are essential to finding the right solution. Solving an ML problem is like traversing a minefield, where the safest path can only be determined by blowing up a lot of mines: we only find the solution by making mistakes. This ‘see what sticks’ approach cannot be avoided entirely, but it can be curbed significantly with a structured approach to running machine learning experiments. This structured approach - its theory and practice, its design principles and its software implementation - is what this talk is about.

Outline

This talk is inspired, in part, by Peter Bull & Isaac Slavitt’s SciPy 2016 tutorial on Developer Lifehacks for the Jupyter Data Scientist, which grew out of their effort to bring best practices from software engineering to data science. In the same spirit, this talk expands on the idea by:

  1. Putting together best practices to design workflows - not only for data scientists who spend a lot of time in Jupyter notebooks, but also for engineers and software development teams who work on end-to-end AI projects.
  2. Examining how and which of these practices can be automated.
  3. Showcasing open source solutions that enable data scientists to implement these workflows - mostly without leaving the comfort of their existing development environments.

Most ML problems are, by design, highly iterative, and can therefore, at least in theory, be automated. However, the lack of a structured workflow prevents us from exploiting the redundancies in ML practice. The ideal way of managing machine learning experiments is with a lab journal. Each experiment can be reasonably characterized by a hypothesis, a procedure, and the inferences drawn from its results. A well-kept journal helps practitioners avoid repeating mistakes and narrow down to the right approach.
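As a rough illustration of this journaling idea (the schema, field names and file name below are hypothetical, not part of any particular tool), a single experiment could be captured as one structured record:

```python
import json
from datetime import datetime, timezone

# Illustrative only: a minimal "lab journal" entry for one ML experiment,
# recording the hypothesis, the procedure, and the inference drawn from it.
# All values are placeholders for the sake of the example.
entry = {
    "timestamp": datetime.now(timezone.utc).isoformat(),
    "hypothesis": "Adding dropout (p=0.5) reduces overfitting on the validation set",
    "procedure": {
        "model": "3-layer MLP",
        "params": {"dropout": 0.5, "lr": 1e-3, "epochs": 20},
        "data": "train_v2.csv, 80/20 split, seed=42",
    },
    "results": {"train_acc": 0.97, "val_acc": 0.91},
    "inference": "Validation accuracy improved; keep dropout, try p=0.3 next",
}

# Append the entry to a line-delimited JSON journal so it stays searchable.
with open("lab_journal.jsonl", "a") as fh:
    fh.write(json.dumps(entry) + "\n")
```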

This talk will introduce Kepler, a fully open-source framework for managing ML experiments. Kepler is written in Python and optimized for deep learning experiments. It runs sanity checks on models and data, enforcing the idea that training should not begin until the model is sane and the data is properly prepared. It enforces the DRY principle by keeping track of performance metrics across multiple experiments. It lets users log experiments carried out on sklearn estimators and keras models. It also behaves like a hyperparameter grid manager, alerting users if they accidentally re-run the same experiment on the same data with the same parameters. It has some meta-learning features which allow for an end-to-end approach to machine learning problems. Ultimately, it provides a searchable interface for all projects, models and experiments under its umbrella - facilitating the design and automation of efficient ML workflows.
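To make the duplicate-run idea concrete, here is a minimal sketch of how such a guard could work in principle. This is illustrative Python built on scikit-learn, not Kepler’s actual API; the function and variable names are invented for the example.

```python
import hashlib
import json

from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Illustrative only, not Kepler's API: flag accidental re-runs of the same
# experiment (same estimator class, same hyperparameters, same data) by
# fingerprinting them and looking the fingerprint up before training.
_journal = {}

def run_experiment(estimator, X, y):
    key = hashlib.sha256(
        json.dumps(
            {
                "estimator": type(estimator).__name__,
                "params": estimator.get_params(),
                "data": hashlib.sha256(X.tobytes() + y.tobytes()).hexdigest(),
            },
            sort_keys=True,
            default=str,  # make non-JSON-serializable parameter values stringifiable
        ).encode()
    ).hexdigest()

    if key in _journal:
        print("Identical experiment already run; previous score:", _journal[key])
        return _journal[key]

    score = cross_val_score(estimator, X, y, cv=5).mean()
    _journal[key] = score
    return score

X, y = load_iris(return_X_y=True)
run_experiment(RandomForestClassifier(n_estimators=50, random_state=0), X, y)
run_experiment(RandomForestClassifier(n_estimators=50, random_state=0), X, y)  # flagged as duplicate
```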

Speaker bio

Jaidev is a senior data scientist at Gramener, where he works on building products for other data scientists. His interests lie at the intersection of data science, software engineering and continuous integration. He’s an active contributor to the scientific Python stack, and loves to apply machine learning and analytics to personal productivity. You’re likely to run into him at FOSS community events.

Links

Slides

https://www.slideshare.net/secret/v4avVPIvzK64G5

Preview video

https://youtu.be/SfOVYicORWo

Comments

  • Abhishek Balaji (@booleanbalaji) Reviewer a month ago

    Hi Jaidev,

    Thank you for submitting a proposal. The slides and preview video you submitted are fairly detailed and good enough for evaluation. If you’d like to refine your content before we take it forward, please make sure you are incorporating all the points below:

    • Problem statement/context, which the audience can relate to and understand. The problem statement has to be a problem (based on this context) that can be generalized for all.
    • What were the tools/options available in the market to solve this problem? How did you evaluate these, and what metrics did you use for the evaluation? Why did you decide to build your own ML model?
    • Why did you pick the option that you did?
    • Explain the situation before the solution you picked/built, and how it changed after implementing that solution. Show before-after scenario comparisons & metrics.
    • What compromises/trade-offs did you have to make in this process?
    • What are the privacy, regulatory and ethical considerations when building this solution?
    • What is the one takeaway that you want participants to go back with at the end of this talk? What is it that participants should learn/be cautious about when solving similar problems?

    As next steps, we’d need to see the detailed and/or updated slides by 21 May, in order to close the decision on your proposal. If we don’t receive an update by 21 May, we’d have to move the proposal to consideration for a future conference.

    • Jaidev Deshpande (@jaidevd) Proposer 29 days ago (edited 29 days ago)

      Hi Abhishek,

      I’ve updated my slides and I’ll continue to tweak them for a few more days - for now they should be detailed enough for review.

      Thanks,

  • Jaidev Deshpande (@jaidevd) Proposer a month ago

    Hi Abhishek,

    Thanks for the note. I have already addressed some of these points through the abstract and the video - there are some others that I would like to incorporate into the slides, which I’ll do by tomorrow.

    Will that suffice?

    Thanks,

    • Abhishek Balaji (@booleanbalaji) Reviewer 2 days ago

      Hi Jaidev,

      Do update your slides. When someone is viewing your talk content outside the context of a talk, it helps to make the slides as comprehensive as possible. Since I missed this comment, you can send in your content by Jun 27, 2019.

  • Jaidev Deshpande (@jaidevd) Proposer 2 days ago

    Hi Abhishek,

    Sorry, I guess I was not clear enough in my comments. The slides were updated a couple of days after my last comment - there are some technical details I’ll probably add later, like better examples.

    Also, it seems that this proposal has been moved from Anthill to Fifth Elephant. When did this happen?

    Thanks,
