BoF on Interpretability of ML Models
Complex machine learning models perform very well on prediction and classification tasks but are hard to interpret. Simpler models, on the other hand, are easier to interpret but less accurate, so we are often forced to trade off interpretability against accuracy.
Understanding why an ML algorithm makes a particular decision can help us make better business decisions. Topics we will discuss:
- Why is model interpretability important?
- The trade-off between accuracy and interpretability.
- Developments in explainable AI.
- Interpreting black-box models: global and local interpretation.
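As a taste of the global-interpretation topic above, here is a minimal, hedged sketch using permutation importance, a model-agnostic technique: shuffle one feature at a time and measure how much the model's accuracy drops. The dataset and model below are purely illustrative, and the session is not limited to this technique.

```python
# Illustrative sketch: global interpretation of a black-box model
# via permutation importance. Data and model choices are assumptions,
# not part of the session material.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

# Synthetic data: 5 features, of which only 2 are informative.
X, y = make_classification(n_samples=300, n_features=5,
                           n_informative=2, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Shuffling an important feature destroys its relationship with the
# target, so the accuracy drop measures how much the model relies on it.
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: importance {result.importances_mean[i]:.3f}")
```

Local interpretation methods such as LIME and SHAP instead explain individual predictions; both will be discussed in the session.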
Who should attend?
Anyone working with ML who wishes to understand the decisions of black-box models.
What should you know?
Basics of machine learning and statistics.
- Namrata Hanspal
- Fathat Habib
- Aditya Patel