The Fifth Elephant 2019

Gathering of 1000+ practitioners from the data ecosystem

Interpretable NLP Models

Submitted by Logesh kumar (@infinitylogesh) on May 31, 2019

Session type: Tutorial
Status: Rejected

Abstract

Deep learning models are often regarded as black boxes and lack the interpretability of traditional machine learning models. As a result, there is always hesitation in adopting deep learning models for user-facing applications (especially medical applications). Recent progress in NLP, with the advent of attention-based models, LIME and other techniques, has helped address this. I will walk through each of these techniques and share my experience deploying explainable models in production.
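To make the attention idea concrete, here is a minimal, self-contained sketch of how softmax attention weights over tokens can serve as an interpretability signal. The tokens and raw scores below are hypothetical stand-ins for what a trained model would produce.

```python
import math

def softmax(scores):
    # Numerically stable softmax: subtract the max before exponentiating.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw attention scores a sentiment model might assign
# to each token of an input sentence (illustrative values only).
tokens = ["the", "movie", "was", "surprisingly", "good"]
scores = [0.1, 0.5, 0.1, 1.2, 2.0]

# The normalised weights sum to 1 and can be rendered as a heatmap
# over the input text to show which tokens drove the prediction.
weights = softmax(scores)
for tok, w in sorted(zip(tokens, weights), key=lambda p: -p[1]):
    print(f"{tok:12s} {w:.3f}")
```

In a real model these scores come from the attention layer itself; the point of the sketch is only that the normalised weights give a per-token relevance distribution.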

Outline

1. Brief introduction to the importance of interpretability
2. Introduction to different interpretability techniques
   2.1 Attention-based models
   2.2 LIME
   2.3 Extraction-based models
   2.4 Other techniques
3. Demo of the techniques
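As a flavour of item 2.2, the following is a simplified sketch of LIME's perturbation idea: randomly mask words, query the black-box model on each perturbed input, and attribute the score change to the masked words. The keyword-based "black box" here is a hypothetical stand-in for a real classifier; the actual LIME algorithm additionally fits a locally weighted linear model.

```python
import random

# Toy "black box": a keyword scorer standing in for a real model
# (hypothetical; any callable returning a probability would do).
POSITIVE = {"good", "great", "love"}

def black_box(text):
    words = text.lower().split()
    hits = sum(w in POSITIVE for w in words)
    return hits / max(len(words), 1)

def lime_style_importance(text, n_samples=500, seed=0):
    """Estimate per-word importance by randomly masking words and
    averaging the drop in the black box's score when each word is
    absent: a simplified take on LIME's local perturbation idea."""
    rng = random.Random(seed)
    words = text.split()
    base = black_box(text)
    importance = [0.0] * len(words)
    counts = [0] * len(words)
    for _ in range(n_samples):
        # Keep each word independently with probability 0.5.
        mask = [rng.random() < 0.5 for _ in words]
        perturbed = " ".join(w for w, keep in zip(words, mask) if keep)
        score = black_box(perturbed)
        for i, keep in enumerate(mask):
            if not keep:
                # Credit the score drop to every word that was masked.
                importance[i] += base - score
                counts[i] += 1
    return {w: importance[i] / max(counts[i], 1)
            for i, w in enumerate(words)}

imp = lime_style_importance("the movie was really good")
print(sorted(imp.items(), key=lambda kv: -kv[1]))
```

Running this, the word "good" receives the highest importance, since removing it is what reliably lowers the toy classifier's score.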

Requirements

No specific requirements.

Speaker bio

I am a data scientist with a focus on NLP. I have first-hand experience of the problems caused by the non-interpretability of deep learning models, and I have also taken deep learning based NLP models from prototype to production.
