BoF on Interpretability of ML Models
Submitted by Namrata Hanspal (@namrata4) via Abhishek Balaji (@booleanbalaji) on Monday, 15 July 2019
Session type: Birds of a Feather (BoF) session of 1 hour
Complex machine learning models perform very well at prediction and classification tasks but are hard to interpret. Simpler models, on the other hand, are easier to interpret but less accurate, so we are often forced to trade off interpretability against accuracy.
Understanding why an ML algorithm makes a particular decision can help inform better business decisions.
- Why is model interpretability important?
- Trade-off between accuracy and interpretability.
- Developments in explainable AI.
- Interpreting black-box models: global and local interpretation.
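As a flavour of what "interpreting a black-box model" can mean in practice, here is a minimal sketch of permutation importance, one common model-agnostic technique: shuffle one feature at a time and measure how much the model's accuracy drops. The model and data below are purely illustrative assumptions; the technique itself only requires black-box access to a `predict` function.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical black-box model: we assume only that we can call predict(X).
# Internally it depends far more on feature 0 than on feature 1.
def predict(X):
    return (2.0 * X[:, 0] + 0.1 * X[:, 1] > 0).astype(int)

X = rng.normal(size=(500, 2))
y = predict(X)  # use the model's own outputs as labels for this demo

def permutation_importance(predict_fn, X, y, n_repeats=10):
    """Mean drop in accuracy when each feature column is shuffled."""
    baseline = (predict_fn(X) == y).mean()
    importances = []
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])
            drops.append(baseline - (predict_fn(Xp) == y).mean())
        importances.append(float(np.mean(drops)))
    return importances

imp = permutation_importance(predict, X, y)
print(imp)  # feature 0 should show a much larger accuracy drop than feature 1
```

This is a global interpretation: it ranks features over the whole dataset. Local techniques such as LIME or SHAP instead explain a single prediction, and both will come up in the discussion.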
Who should attend?
Anyone working with ML who wishes to understand black-box model decisions.
What should you know?
Basics of machine learning and statistics.
- Namrata Hanspal
- Fathat Habib
- Aditya Patel