Saturday, 23 November 2019, 08:30 AM – 05:30 PM IST
Sai Sundarakrishna
The characteristics a Machine Learning model must have to be fully deployable in a business setting are manifold, and mere predictive power and validation accuracy are often not sufficient. Models also need to be interpretable, bias-free, transparent, explainable and consistent, and should have a global and local basis for their output predictions. In this talk we will address contemporary, state-of-the-art approaches to explaining and interpreting complex models, ranging from linear and logistic models to Deep Learning models. We will focus on fundamental model interpretability principles for building simple to complex models, and cover a mix of model-specific and model-agnostic interpretation strategies. Techniques covered include decision tree surrogate models, ICE plots, K-LIME, LOCO, partial dependence plots and Random Forest feature importance. Two powerful model interpretation strategies, LIME and SHAP, will be introduced and covered in depth. You will learn to apply and interpret these techniques on real-world datasets and case studies.
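As a taste of the hands-on sessions, the sketch below shows LIME and SHAP applied to a tabular regressor. It is a minimal illustration assuming the `lime`, `shap` and `scikit-learn` packages; the dataset, model and parameters are illustrative choices, not material from the workshop itself.

```python
# Illustrative sketch: LIME and SHAP on a Random Forest regressor.
# Dataset, model and parameters are assumptions for demonstration only.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from lime.lime_tabular import LimeTabularExplainer
import shap

data = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X_train, y_train)

# LIME: perturb one instance and fit a local linear surrogate to the
# black-box predictions, yielding per-feature local contributions.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=data.feature_names, mode="regression")
lime_exp = lime_explainer.explain_instance(
    X_test[0], model.predict, num_features=5)
print(lime_exp.as_list())

# SHAP: Shapley-value attributions; TreeExplainer is exact and fast for
# tree ensembles. Each row of shap_values sums, together with the base
# value, to that row's prediction.
shap_values = shap.TreeExplainer(model).shap_values(X_test)
shap.summary_plot(shap_values, X_test, feature_names=data.feature_names)
```

LIME answers "why this one prediction" via a local surrogate, while SHAP's Shapley values give additive attributions that can be aggregated consistently across the whole dataset; the workshop covers when each view is appropriate.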
Machine Learning Interpretability - Foundational Principles
Model-agnostic and model-specific tools for interpretability, and why we need both flavors
Interpreting simple to complex models, from linear/logistic models to Deep Learning models such as Convolutional Neural Nets
Notation, insights and key ideas behind techniques such as K-LIME, SHAP, ICE plots, PD plots and LOCO (see the sketch after this list)
Acting on interpretation insights: judging the appropriateness of model updates, and handling bias and transparency
Examples and case studies, with insights for production implementation
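For the ICE and PD plot items above, here is a minimal sketch using scikit-learn's `PartialDependenceDisplay`; the dataset and feature choices are illustrative assumptions, not the workshop's own material.

```python
# Illustrative sketch: PD and ICE plots via scikit-learn.
# Dataset and feature choices are assumptions for demonstration only.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" overlays per-sample ICE curves on the averaged PD curve,
# exposing heterogeneity that the average alone would hide.
PartialDependenceDisplay.from_estimator(
    model, X, features=["bmi", "s5"], kind="both")
plt.show()
```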
Prerequisites: basic statistics and a basic knowledge of ML and DL techniques