Anthill Inside 2019

A conference on AI and Deep Learning


Interpretability as a bridge from Insights to Intuition in Machine and Deep Learning

Submitted by Sai Sundarakrishna (@psgsai) on Monday, 29 April 2019

Section: Full talk
Technical level: Intermediate
Session type: Lecture

Abstract

The characteristics that make machine learning models fully deployable in a business setting are many and often demanding. Predictive power and validation accuracy alone are not always sufficient: models also need to be interpretable, bias-free, transparent, explainable and consistent, and should have both a global and a local basis for their output predictions. In this talk we will address contemporary, state-of-the-art approaches to explaining and interpreting complex models, ranging from linear and logistic models to Deep Learning models. We will focus on fundamental model interpretability principles for building simple to complex models, covering a mix of model-specific and model-agnostic interpretation strategies. Techniques covered include Decision Tree surrogate models, ICE plots, K-LIME, LOCO, partial dependence plots and Random Forest feature importance. Two powerful model interpretation strategies, LIME and SHAP, will be introduced and covered in depth. You will learn how to apply and interpret these techniques on real-world datasets and case studies.
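As a concrete taste of the two techniques the abstract covers in depth, here is a minimal sketch (not from the talk materials) applying LIME and SHAP to a random forest; the dataset and model choices are illustrative assumptions, while the calls are the public lime and shap APIs.

```python
# Illustrative LIME + SHAP sketch on scikit-learn's breast-cancer dataset.
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# LIME: fit a local linear surrogate around a single prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=list(data.feature_names),
    class_names=list(data.target_names), mode="classification")
lime_exp = lime_explainer.explain_instance(X_test[0], model.predict_proba)
print(lime_exp.as_list())  # per-feature weights of the local surrogate

# SHAP: Shapley-value attributions via the tree-specific explainer.
shap_explainer = shap.TreeExplainer(model)
shap_values = shap_explainer.shap_values(X_test)
# Ranks features by mean |SHAP value| across the test set.
shap.summary_plot(shap_values, X_test, feature_names=data.feature_names)
```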

Outline

Machine Learning Interpretability - Foundational Principles
Model-agnostic and model-specific tools for interpretability, and why we need both flavors
Interpreting simple to complex models (linear/logistic to Deep Learning Models like Convolutional Neural Nets)
Notations, insights and key ideas behind techniques such as K-LIME, SHAP, ICE plots, PD plots & LOCO (see the PD/ICE sketch after this outline)
Insights from interpretation, deciding when model updates are appropriate, and handling bias and transparency
Examples/Case Studies, insights for production implementation
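To make the PD/ICE outline item concrete, here is a minimal sketch using scikit-learn's inspection module; it assumes a recent scikit-learn, and the dataset, model and feature choices are illustrative rather than drawn from the talk.

```python
# Minimal PD + ICE sketch (illustrative assumptions: California housing data,
# a gradient-boosted regressor, and the MedInc/AveOccup features).
import matplotlib.pyplot as plt
from sklearn.datasets import fetch_california_housing
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = fetch_california_housing(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" overlays the averaged partial dependence (PD) curve on the
# per-instance ICE curves, exposing heterogeneity the average alone hides.
PartialDependenceDisplay.from_estimator(
    model, X, features=["MedInc", "AveOccup"], kind="both", subsample=100)
plt.show()
```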

Requirements

Basic statistics and a basic knowledge of ML and DL techniques

Speaker bio

Comments

  • Abhishek Balaji (@booleanbalaji) Reviewer 5 months ago

    Hi Sai,

    Thank you for submitting a proposal. For us to evaluate your proposal, we need to see detailed slides and a preview video. Your slides must take the following points into consideration:

    • Problem statement/context, which the audience can relate to and understand. The problem statement has to be a problem (based on this context) that can be generalized for all.
    • What were the tools/options available in the market to solve this problem? How did you evaluate these, and what metrics did you use for the evaluation? Why did you decide to build your own ML model?
    • Why did you pick the option that you did?
    • Explain what the situation was before the solution you picked/built, and how it (e.g., fraud/ghosting rates) changed after implementing it. Show before-after scenario comparisons & metrics.
    • What compromises/trade-offs did you have to make in this process?
    • What are the privacy, regulatory and ethical considerations when building this solution?
    • What is the one takeaway that you want participants to go back with at the end of this talk? What is it that participants should learn/be cautious about when solving similar problems?

    As next steps, we’d need to see the detailed and/or updated slides by 21 May, in order to close the decision on your proposal. If we don’t receive an update by 21 May, we’d have to move the proposal to consideration for a future conference.
