Namrata Hanspal


BoF on Interpretability of ML Models

Submitted Jul 15, 2019

Complex machine learning models perform very well at prediction and classification tasks but are hard to interpret. Simpler models, on the other hand, are easier to interpret but less accurate, so we often have to choose between interpretability and accuracy.

Key takeaway

Understand why an ML algorithm makes a particular decision, which can help make better business decisions.


  • Why is model interpretability important?
  • The trade-off between accuracy and interpretability.
  • Developments in explainable AI.
  • Interpreting black-box models: global and local interpretation.

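As a minimal illustration of the "local interpretation" topic above (not part of the session materials), one common approach is a LIME-style local surrogate: perturb the input around a single instance, query the black-box model, and fit a simple linear model to the responses so its coefficients approximate each feature's local influence. The dataset, model, and perturbation scale below are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LinearRegression

# An accurate but hard-to-interpret black-box model (illustrative data).
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

# Local interpretation: explain one prediction by sampling perturbations
# around the instance and fitting a simple linear surrogate to the
# black-box model's predicted probabilities.
rng = np.random.default_rng(0)
instance = X[0]
perturbed = instance + rng.normal(scale=0.5, size=(200, X.shape[1]))
probs = black_box.predict_proba(perturbed)[:, 1]

surrogate = LinearRegression().fit(perturbed, probs)

# The surrogate's coefficients rank features by their local influence
# on this particular prediction.
local_importance = surrogate.coef_
```

A global interpretation, by contrast, would summarise the model's behaviour over the whole input space (e.g. permutation feature importance), rather than around one instance.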

Who should attend?

Anyone working with ML who wishes to understand the decisions of black-box ML models.

What should you know?

Basics of machine learning and statistics.

Speaker bio


  • Namrata Hanspal
  • Fathat Habib
  • Aditya Patel

