Anthill Inside 2019

A conference on AI and Deep Learning



Jacob Joseph


Birds of a Feather on Interpretability

Submitted Aug 13, 2019

Complex machine learning models perform very well on prediction and classification tasks but are hard to interpret. Simpler models, on the other hand, are easier to interpret but less accurate, so we are often forced to choose between interpretability and accuracy.

## Key takeaway

Understand why an ML algorithm makes a particular decision, which can help drive better business decisions.


  • Why is model interpretability important?
  • The trade-off between accuracy and interpretability.
  • Developments in explainable AI.
  • Interpreting black-box models: global and local interpretation.
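To make the last point concrete, here is a minimal sketch of one common *local* interpretation technique: perturb each feature of a single input and observe how the black-box prediction moves. The `black_box` function below is a hypothetical stand-in for any opaque model; real sessions would apply the same idea to a trained classifier.

```python
def black_box(x):
    # Hypothetical stand-in for an opaque model: any callable that
    # maps a feature vector to a score would work here.
    return 3.0 * x[0] - 2.0 * x[1] + 0.5 * x[0] * x[1]

def local_importance(model, x, eps=1e-4):
    """Estimate each feature's local influence by finite differences:
    how much does the prediction change when feature i is nudged?"""
    base = model(x)
    scores = []
    for i in range(len(x)):
        perturbed = list(x)
        perturbed[i] += eps  # nudge one feature, hold the rest fixed
        scores.append((model(perturbed) - base) / eps)
    return scores

if __name__ == "__main__":
    # At x = [1.0, 2.0] the local sensitivities are roughly 4.0 and -1.5.
    print(local_importance(black_box, [1.0, 2.0]))
```

Global interpretation asks the same question averaged over the whole dataset (e.g. permutation importance), while local methods like this one explain a single prediction.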


## Who should attend?

Anyone working in ML who wishes to understand the decisions of black-box ML models.

## What should you know?

Basics of machine learning and statistics.

## Speaker bio

Led by Jacob Joseph, Namrata Hanspal, Nishant Sinha




Hosted by

Anthill Inside is a forum for conversations about risk mitigation and governance in Artificial Intelligence and Deep Learning. AI developers, researchers, startup founders, ethicists, and AI enthusiasts are encouraged to participate.