Anthill Inside 2019

A conference on AI and Deep Learning

Farhat Habib

@distantfedora

Opening the Black Box: How to Interpret Machine Learning Models; Techniques, Tools, and Takeaways

Submitted Apr 30, 2019

Interpretability is the degree to which a human can consistently predict a model’s result. The higher the interpretability of a machine learning model, the easier it is to comprehend why certain decisions or predictions were made. Interpretability matters little in low-risk domains, where black-box models abound, but in domains such as medicine and finance, and in high-risk settings such as self-driving cars or weapons systems, it is a strong requirement. As privacy-preserving legislation such as the GDPR becomes the norm across the globe, interpretability also matters for explaining how particular recommendations or decisions were made: the more an ML model’s decision affects a person’s life, the more important it is that the model be interpretable. Finally, the training data fed to a model may carry biases, inconsistencies, and other artifacts, and interpretability is a useful debugging tool for detecting such bias in machine learning models.

This talk explains various interpretation methods in depth: how they work under the hood, what their strengths and weaknesses are, and how their outputs should be read. We will start with easily interpretable models such as linear regression and decision trees, then move on to model-agnostic interpretation methods such as feature importance, local surrogates (LIME), and Shapley values (SHAP). In the traditionally inscrutable domain of deep learning, we will look at gradient-based and attention-based methods for interpreting deep neural nets.
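
Since the session assumes a working Jupyter setup, here is a minimal Python sketch of the kind of SHAP workflow described above (illustrative only, not the talk’s actual material; the dataset and model are placeholders):

```python
# A minimal SHAP sketch (assumes scikit-learn and the shap package are
# installed; the dataset and model are illustrative placeholders).
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes Shapley values efficiently for tree ensembles.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data)

# Each row of shap_values attributes one prediction to the input features;
# the summary plot aggregates them into a global view of feature impact.
shap.summary_plot(shap_values, data.data, feature_names=data.feature_names)
```

Shapley values have the useful property that, for each prediction, the per-feature attributions sum to the difference between that prediction and the model’s average prediction.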

Outline

  1. Importance of interpretability
  2. Stories of uninterpretable model failures
  3. Evaluation of interpretability
  4. Human-friendly explanations
  5. Interpretable models
    1. Linear and Logistic regression
    2. GLM and GAM
    3. Decision trees
  6. Model-Agnostic Methods
    1. Partial Dependence Plots
    2. Feature Interaction
    3. Feature Importance (see the permutation importance sketch after this outline)
    4. Global and local surrogate (LIME)
    5. Shapley Values
  7. Interpretability of Deep Learning Models
    1. Gradient-based methods (see the saliency sketch after this outline)
    2. Attention-based methods
  8. Counterfactual explanations
  9. Adversarial examples
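
For item 6.3 above, a minimal sketch of permutation feature importance with scikit-learn (the model and dataset are illustrative placeholders, not the talk’s materials):

```python
# Permutation feature importance: shuffle one feature at a time on held-out
# data and measure how much the model's score drops.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    *load_breast_cancer(return_X_y=True), random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# n_repeats averages the score drop over several shuffles per feature.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")
```

Because the score drop is measured on held-out data it reflects what the model actually relies on, though strongly correlated features can mask one another’s importance, one of the weaknesses of the method.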
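
For item 7.1, a sketch of a vanilla gradient saliency map, assuming a PyTorch image classifier (the function name and tensor shapes are illustrative):

```python
# Vanilla gradient saliency (assumes PyTorch; `model` is any image
# classifier taking a (1, C, H, W) tensor and returning class logits).
import torch

def saliency_map(model, image, target_class):
    """Return an (H, W) map of |d score / d pixel| for the target class."""
    model.eval()
    image = image.clone().detach().requires_grad_(True)
    score = model(image)[0, target_class]  # scalar logit for the target class
    score.backward()                       # populate image.grad
    # Max absolute gradient across colour channels gives per-pixel saliency.
    return image.grad.abs().max(dim=1).values[0]
```

Raw input gradients tend to be noisy; variants such as SmoothGrad average saliency maps over noisy copies of the input to produce cleaner attributions.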

Requirements

  1. Basic knowledge of Machine Learning and Deep Learning
  2. Basic familiarity with Python and Jupyter notebooks
  3. A working Jupyter setup on your laptop with internet access

Speaker bio

Farhat Habib is a Director of Data Sciences at InMobi, currently working on anti-fraud and on improving creatives. Farhat has a PhD and an MS in Physics from The Ohio State University and has been doing data science since before it was cool. Prior to InMobi, he worked on solving logistics challenges at Locus.sh. Before that, he was at InMobi working on improving ad targeting on mobile devices, and earlier he led research on computational biology and genomic sequence analysis at the Indian Institute of Science Education and Research, Pune. Farhat enjoys working across a wide range of domains where solutions can be found through the application of machine learning and data science.

Aditya Patel is Director, Data Science at InMobi Glance. Previously he was head of data science at Stasis and has 7+ years of experience spanning the fields of machine learning and signal processing. He graduated with a dual Master’s degree in Biomedical and Electrical Engineering from the University of Southern California. He has presented his machine learning work at multiple peer-reviewed conferences, and he contributed to the first-generation “Artificial Pancreas” project at Medtronic, Los Angeles.

