23 Nov 2019, Sat, 08:30 AM – 05:30 PM IST
Farhat Habib
Interpretability of a model is the degree to which a human can consistently predict the model’s result. The higher the interpretability of a machine learning model, the easier it is to comprehend why particular decisions or predictions were made. While black-box models abound in low-risk domains where interpretability matters little, in domains such as medicine and finance, or high-risk domains such as self-driving cars and weapons systems, model interpretability is a strong requirement. As privacy-preserving legislation such as the GDPR becomes the norm across the globe, interpretability is also important for explaining how particular recommendations or decisions were made. The more a model’s decisions affect a person’s life, the more important it is that the model be interpretable. Finally, the training data fed to a model may contain biases, inconsistencies, and other artifacts, and interpretability is a useful debugging tool for detecting such bias in machine learning models.
Various interpretation methods are explained in depth: how they work under the hood, what their strengths and weaknesses are, and how their outputs can be interpreted. We will start with models that are easily interpretable, such as linear regression and decision trees, and then move on to model-agnostic interpretation methods such as feature importance, local surrogate models (LIME), and Shapley values (SHAP). In the traditionally inscrutable domain of deep learning, we will look at gradient-based and attention-based methods for interpreting deep neural nets.
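To make two of the model-agnostic methods concrete, here is a minimal sketch, not taken from the workshop materials: it computes permutation feature importance with scikit-learn and Shapley-value attributions with the shap package. The dataset and random-forest model are illustrative assumptions.

# Illustrative sketch (assumes scikit-learn and the shap package are installed).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
import shap

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Fit an opaque model that the interpretation methods will then explain.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Permutation importance: shuffle one feature at a time and measure how much
# the held-out score drops; a large drop means the model relies on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
print(result.importances_mean)

# SHAP: additive per-prediction feature attributions (fast path for tree ensembles).
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)  # one attribution per sample and feature

Permutation importance yields a global feature ranking, while SHAP attributes each individual prediction to the input features; the workshop covers both flavors of explanation.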
Farhat Habib is a Director in Data Sciences at InMobi, currently working on anti-fraud and improving creatives. Farhat has a PhD and an MS in Physics from The Ohio State University and has been doing data science since before it was cool. Prior to InMobi, he worked on solving logistics challenges at Locus.sh. Before that, he was at InMobi working on improving ad targeting on mobile devices, and prior to that he was at the Indian Institute of Science Education and Research, Pune, leading research on computational biology and genomic sequence analysis. Farhat enjoys working in a wide range of domains where solutions can be found through the application of machine learning and data science.
Aditya Patel is Director, Data Science at InMobi Glance. Previously, he was head of data science at Stasis and has 7+ years of experience spanning the fields of machine learning and signal processing. He graduated with a dual Master’s degree in Biomedical and Electrical Engineering from the University of Southern California. He has presented his work in machine learning at multiple peer-reviewed conferences. He also contributed to the first-generation “Artificial Pancreas” project at Medtronic, Los Angeles.