
Logesh kumar

@infinitylogesh

Interpretable NLP Models

Submitted May 31, 2019

Deep learning models are often considered black boxes and lack the interpretability of traditional machine learning models. As a result, there is hesitation in adopting deep learning models in user-facing applications (especially medical applications). Recent progress in NLP, with the advent of attention-based models, LIME, and other techniques, has helped address this. I would like to walk through each of these techniques and share my experience deploying explainable models in production.
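To make the attention idea concrete, here is a minimal sketch of dot-product attention, the mechanism whose weights are often inspected for interpretability. The word list, embeddings, and "sentiment" query vector below are made up purely for illustration; they are not from any real model.

```python
import math

def softmax(xs):
    """Numerically stable softmax over a list of scores."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention_weights(query, keys):
    """One attention head: a normalized weight over each key for the query."""
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    return softmax(scores)

words = ["film", "was", "wonderful"]
# Hypothetical 2-d embeddings; "wonderful" aligns most with the query.
keys = [[0.1, 0.2], [0.0, 0.1], [0.9, 0.8]]
query = [1.0, 1.0]  # illustrative "sentiment" query vector

for word, weight in zip(words, attention_weights(query, keys)):
    print(f"{word:>10}: {weight:.2f}")
```

Reading off the weights, "wonderful" receives the largest share, which is the kind of signal attention-based interpretability methods surface to users.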

Outline

1. Brief introduction to the importance of interpretability
2. Introduction to different interpretability techniques
   2.1 Attention-based models
   2.2 LIME
   2.3 Extraction-based models
   2.4 Other techniques
3. Demo of the techniques
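As a taste of item 2.2, here is a toy illustration of the perturbation idea behind LIME: probe a black-box classifier by removing one word at a time and measuring how the prediction changes. This is a simplified word-ablation variant, not the real LIME algorithm (which fits a weighted local linear model over many random perturbations), and the keyword-based classifier is a made-up stand-in.

```python
def toy_sentiment(text):
    """Stand-in black-box classifier: probability the text is positive."""
    positive = {"good", "great", "helpful", "clear"}
    negative = {"bad", "confusing", "slow"}
    words = text.lower().split()
    score = sum(w in positive for w in words) - sum(w in negative for w in words)
    return 1 / (1 + 2.718281828 ** (-score))  # squash score to (0, 1)

def word_importances(text, predict):
    """Importance of each word = drop in probability when it is removed."""
    words = text.split()
    base = predict(text)
    importances = []
    for i in range(len(words)):
        perturbed = " ".join(words[:i] + words[i + 1:])
        importances.append((words[i], base - predict(perturbed)))
    return importances

for word, weight in word_importances("the demo was great but slow", toy_sentiment):
    print(f"{word:>6}: {weight:+.3f}")
```

Positive weights mark words that push the prediction up ("great"), negative weights mark words that push it down ("slow"), which is the same style of per-word explanation LIME produces.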

Requirements

No specific requirements.

Speaker bio

I am a data scientist with a focus on NLP. I have first-hand experience of the problems caused by the non-interpretability of deep learning models, and I have experience taking deep learning based NLP models from prototype to production.


Hosted by

Jump starting better data engineering and AI futures