Keep Calm and Trust your Model - On Explainability of Machine Learning Models
Submitted by Praveen Sridhar (@psbots) on Monday, 10 July 2017
Section: Full talk
Technical level: Intermediate
The accuracy of Machine Learning models is going up by the day with advances in Deep Learning, but this comes at the cost of explainability. These black boxes need to be uncovered for business users, which is essential especially in heavily regulated industries like finance, medicine and defence.
A lot of research is going on to make ML models interpretable and explainable. In this talk we will go through the various approaches taken to unravel machine learning models and explain the reasons behind their predictions.
We’ll see the different approaches by discussing the latest research literature, with a ‘behind the scenes’ view of what happens inside each approach, presented with enough mathematical depth and intuition.
Finally, the aim is to leave the audience with practical know-how on using these approaches to understand deep learning and classical machine learning models with open source tools in Python, through a live demo. Link to IPython notebooks
Motivation + Intro on Explainability (5 mins)
- The need for explainability
- Why are certain models not explainable?
- Linear, monotonic vs Non-linear, non-monotonic functions
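To make the linear vs non-linear distinction concrete, a minimal sketch (the feature names, coefficients and inputs below are purely illustrative): a linear model's prediction decomposes exactly into per-feature contributions, which is what makes it directly explainable.

```python
# A linear model's prediction decomposes exactly into per-feature
# contributions: f(x) = bias + sum_i w_i * x_i.
weights = {"income": 2.0, "debt": -3.0}   # illustrative coefficients
bias = 1.0
x = {"income": 4.0, "debt": 1.0}          # illustrative input

contributions = {f: weights[f] * x[f] for f in weights}
prediction = bias + sum(contributions.values())

print(contributions)  # {'income': 8.0, 'debt': -3.0}
print(prediction)     # 6.0

# A non-linear model (say, one with an income*debt interaction) admits no
# such exact additive split, which is why extra machinery is needed to
# explain its predictions.
```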
Model Specific approaches to Explainability (15 mins)
Model-specific methods, with a deep dive into a few of them:
- Tree Interpreter for explaining tree-based models like Random Forests and Gradient Boosted Trees
- Model Specific Visualisations
- Attention mechanism used to explain predictions
- Generating explanations as a part of the model itself (cutting edge deep learning models from MIT and Berkeley that give an explanation as additional output along with the predicted class/value)
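The Tree Interpreter idea can be sketched with a toy hand-built tree (the structure and numbers below are illustrative; the open-source `treeinterpreter` package computes an analogous decomposition for scikit-learn forests): a prediction equals the root node's mean value (the bias) plus, for each split on the path to a leaf, the change in mean value credited to the splitting feature.

```python
# Toy decision tree: each internal node holds a splitting feature, a
# threshold, the mean target value of the samples reaching it, and children;
# leaves hold only a value.
tree = {
    "feature": "age", "threshold": 30, "value": 10.0,
    "left":  {"value": 6.0, "feature": "income", "threshold": 50,
              "left": {"value": 4.0}, "right": {"value": 8.0}},
    "right": {"value": 14.0},
}

def explain(node, x):
    """Decompose a tree prediction into bias + per-feature contributions."""
    bias = node["value"]
    contributions = {}
    while "feature" in node:
        child = node["left"] if x[node["feature"]] <= node["threshold"] else node["right"]
        # The change in mean value at this split is credited to the feature.
        contributions[node["feature"]] = (
            contributions.get(node["feature"], 0.0) + child["value"] - node["value"]
        )
        node = child
    return bias, contributions, node["value"]

bias, contribs, pred = explain(tree, {"age": 25, "income": 60})
# bias + sum of contributions reconstructs the prediction exactly
assert abs(bias + sum(contribs.values()) - pred) < 1e-9
```

For the instance above, the path gives age a contribution of -4.0 and income +2.0 on top of the bias of 10.0, recovering the leaf prediction of 8.0.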
Model Agnostic approaches to Explainability (10 mins)
- Globally scoped surrogate models, and statistical interpretation tools like variable importance and residual plots
- Local Interpretable Model-agnostic Explanations (LIME): recent research that works on any black-box model
- Layerwise Relevance Propagation for understanding Deep Learning
- for CNNs
- for RNNs (the source code for this was released just 15 days ago! We’ll be doing a live demo of this method)
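The core LIME recipe can be sketched in one dimension (the black-box function, kernel width and sample count below are illustrative assumptions, not the `lime` package's API): perturb the input around the instance being explained, weight the perturbations by proximity, and fit a weighted linear surrogate whose slope serves as the local explanation.

```python
import math
import random

random.seed(0)

def black_box(x):
    # Stand-in for any opaque model; near x = 2 its local slope is ~cos(2).
    return math.sin(x)

def lime_1d(f, x0, n=500, sigma=0.5):
    """Fit a proximity-weighted linear surrogate to f around x0."""
    xs = [x0 + random.gauss(0, 1) for _ in range(n)]   # perturbed samples
    ys = [f(x) for x in xs]                            # black-box outputs
    # Gaussian proximity kernel: nearby samples count more.
    ws = [math.exp(-((x - x0) ** 2) / (2 * sigma ** 2)) for x in xs]
    wsum = sum(ws)
    xb = sum(w * x for w, x in zip(ws, xs)) / wsum
    yb = sum(w * y for w, y in zip(ws, ys)) / wsum
    # Weighted least-squares slope = the local feature effect.
    slope = (sum(w * (x - xb) * (y - yb) for w, x, y in zip(ws, xs, ys))
             / sum(w * (x - xb) ** 2 for w, x in zip(ws, xs)))
    return slope

slope = lime_1d(black_box, x0=2.0)
# slope should land near the true local derivative cos(2) ≈ -0.416
```

The real LIME additionally samples interpretable perturbations (e.g. toggling super-pixels or words) and fits a sparse linear model, but the weight-by-proximity-then-fit structure is the same.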
Live Demo of the above approaches and Conclusion (10 mins)
- Use open source tools in Python and learn how to make use of them to explain machine learning model predictions
- Conclusion with practical demonstrations and call to action to try out the tools
Basic understanding of Deep Learning and classical Machine Learning algorithms.
I currently work as a Machine Learning Engineer at datalog.ai, working remotely from Kochi. I’m entirely self-taught in the field, and originally did a Bachelor’s in Mechanical Engineering from CUSAT.
I have completed consulting projects in ML and AI with multiple startups and companies.
Previously I was a Technology Innovation Fellow with Kerala Startup Mission where I started a non-profit student community TinkerHub, that has a focus on creating community spaces across colleges for learning the latest technologies.
My work on CNNs was the winning solution for IBM’s Cognitive Cup challenge in 2016, and I gave a talk on it at the Supercomputing conference SC16 in Salt Lake City, Utah: Slides
Explainability and interpretability of ML is one of my focus areas, after interacting with many business owners asking for the reasons behind the predictions of the models built for them.