Jul 2019: Thu 25 (09:15 AM – 05:45 PM IST) and Fri 26 (09:20 AM – 05:30 PM IST)
Logesh kumar
Deep learning models have long been regarded as black boxes that lack the interpretability of traditional machine learning models. As a result, there is always hesitation in adopting deep learning models in user-facing applications, especially medical ones. Recent progress in NLP, with the advent of attention-based models, LIME, and other techniques, has helped address this. I would like to walk through each of these techniques and share my experience deploying explainable models in production.
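To make the attention point concrete, here is a minimal sketch (not the talk's demo) of why attention helps interpretability: scaled dot-product attention produces an explicit weight per input token, which can be read as a crude importance score. All names and data here are illustrative.

```python
import numpy as np

def scaled_dot_product_attention(query, keys, values):
    """Return the attended output and the per-token attention weights."""
    scores = query @ keys.T / np.sqrt(keys.shape[-1])  # similarity of query to each token
    weights = np.exp(scores - scores.max())            # numerically stable softmax
    weights /= weights.sum()
    return weights @ values, weights

# Toy example: 4 tokens with random 8-dimensional representations.
rng = np.random.default_rng(0)
tokens = ["the", "drug", "reduced", "symptoms"]
keys = values = rng.normal(size=(4, 8))
query = rng.normal(size=(8,))

_, weights = scaled_dot_product_attention(query, keys, values)
for token, w in zip(tokens, weights):
    print(f"{token:>10s}: {w:.2f}")  # higher weight ~ more influence on the output
```

Real attention models learn these representations end to end; the point of the sketch is only that the weights are inspectable by construction.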
1. Brief introduction to the importance of interpretability
2. Introduction to different interpretability techniques
2.1 Attention-based models (sketched above)
2.2 LIME (see the sketch after this outline)
2.3 Extraction-based models
2.4 Other techniques
3. Demo of the techniques
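As a taste of item 2.2, here is a minimal, self-contained LIME sketch (assuming the `lime` and `scikit-learn` packages; this is illustrative, not the speaker's production setup). LIME perturbs one input, queries the black-box model, and fits a simple local surrogate that reports which words pushed the prediction where.

```python
from lime.lime_text import LimeTextExplainer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny toy corpus standing in for a real dataset.
texts = ["great effective treatment", "terrible side effects",
         "effective and safe", "terrible and unsafe"]
labels = [1, 0, 1, 0]  # 1 = positive report, 0 = negative report

# Black-box classifier: TF-IDF features + logistic regression.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(texts, labels)

# Explain a single prediction: LIME fits a weighted linear model
# over word presence/absence in perturbed copies of the text.
explainer = LimeTextExplainer(class_names=["negative", "positive"])
explanation = explainer.explain_instance(
    "effective treatment but terrible side effects",
    model.predict_proba,  # any callable mapping texts -> class probabilities
    num_features=4,
)
print(explanation.as_list())  # [(word, weight), ...] sorted by |weight|
```

Because LIME only needs a predict-probabilities callable, the same pattern applies unchanged when the pipeline above is swapped for a deep model.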
No specific requirements.
I am a data scientist with a focus on NLP. I have first-hand experience of the problems that arise from the non-interpretability of deep learning models, and I have taken deep learning based NLP models from prototype to production.