Anthill Inside 2019

A conference on AI and Deep Learning

Accepting submissions till 01 Nov 2019, 04:20 PM

Taj M G Road, Bangalore


## About the 2019 edition:

The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule

The conference has three tracks:

  1. Talks, in the main conference hall
  2. Poster sessions, featuring novel ideas and projects
  3. Birds of a Feather (BoF) sessions, for practitioners who want to use the Anthill Inside forum to discuss:
  • Myths and realities of labelling datasets for Deep Learning.
  • Practical experience with using Knowledge Graphs for different use cases.
  • Interpretability and its application in different contexts; challenges with GDPR and interpreting datasets.
  • Pros and cons of using custom and open source tooling for AI/DL/ML.

# Who should attend Anthill Inside:

Anthill Inside is a platform for:

  1. Data scientists
  2. AI, DL and ML engineers
  3. Cloud providers
  4. Companies that build tooling for AI, ML and Deep Learning
  5. Companies working with NLP and Computer Vision that want to share their work and learnings with the community

For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to sales@hasgeek.com


# Sponsors:

Sponsorship slots for Anthill Inside 2019 are open. Click here to view the sponsorship deck.


Anthill Inside 2019 sponsors:


# Bronze Sponsors

  • iMerit
  • Impetus

# Community Sponsors

  • GO-JEK
  • iPropal
  • LightSpeed
  • Semantics3
  • Google
  • Tact.AI
  • Amex

Hosted by

Anthill Inside is a forum for conversations about risk mitigation and governance in Artificial Intelligence and Deep Learning. AI developers, researchers, startup founders, ethicists, and AI enthusiasts are encouraged to participate.

Farhat Habib

@distantfedora

Opening the Black Box: How to Interpret Machine Learning models; techniques, tools, and takeaways

Submitted Apr 30, 2019

Interpretability of a model is the degree to which a human can consistently predict the model’s result. The higher the interpretability of a machine learning model, the easier it is to comprehend why certain decisions or predictions were made. Interpretability matters less in low-risk domains, where black-box models abound, but in domains such as medicine and finance, or in high-risk settings such as self-driving cars and weapons systems, model interpretability is a strong requirement. As privacy-preserving legislation such as GDPR becomes the norm across the globe, interpretability is also important for explaining how particular recommendations or decisions were made. The more an ML model’s decision affects a person’s life, the more important it is for the model to be interpretable. Finally, the training data fed to a model may contain biases, inconsistencies, and other artifacts; interpretability is a useful debugging tool for detecting such bias in machine learning models.

This talk explains various interpretation methods in depth: how they work under the hood, what their strengths and weaknesses are, and how to read their outputs. We will start with models that are easily interpretable, such as linear regression and decision trees, and then move on to model-agnostic interpretation methods such as feature importance, local surrogates (LIME), and Shapley values (SHAP). In the traditionally inscrutable domain of deep learning, we will look at gradient-based and attention-based methods for interpreting deep neural nets; a short gradient-based sketch follows the outline below.
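
To make the model-agnostic part concrete, here is a minimal sketch, assuming scikit-learn and the shap package are installed; it illustrates the general techniques and is not code from the talk. It trains a random forest on a bundled dataset, computes permutation feature importance as a global explanation, and SHAP values for a single prediction as a local one.

```python
# Hedged sketch: model-agnostic interpretation via permutation importance and SHAP.
# Assumes scikit-learn and shap are installed; not taken from the talk materials.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
import shap

# Train a black-box model on a bundled tabular dataset.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_train, y_train)

# Global explanation: how much does shuffling each feature hurt held-out accuracy?
perm = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top5 = sorted(zip(X.columns, perm.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top5:
    print(f"{name}: {score:.4f}")

# Local explanation: Shapley values attribute one prediction to its features.
explainer = shap.TreeExplainer(model)
print(explainer.shap_values(X_test.iloc[:1]))
```

LIME plays a similar local role: instead of Shapley values, it fits a simple interpretable surrogate model in the neighbourhood of the individual prediction being explained.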

Outline

  1. Importance of interpretability
  2. Stories of uninterpretable model failures
  3. Evaluation of interpretability
  4. Human-friendly explanations
  5. Interpretable models
    1. Linear and Logistic regression
    2. GLM and GAM
    3. Decision trees
  6. Model Agnostic Methods
    1. Partial Dependence Plots
    2. Feature Interaction
    3. Feature Importance
    4. Global and local surrogate (LIME)
    5. Shapley Values
  7. Interpretability of Deep Learning Models
    1. Gradient based methods
    2. Attention based methods
  8. Counterfactual explanations
  9. Adversarial examples
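
As a pointer to item 7.1 above, the following is a minimal sketch of the gradient-based idea, assuming PyTorch and torchvision are available; the untrained network and random input are stand-ins, and the code is illustrative rather than taken from the talk. The gradient of the top class score with respect to the input pixels yields a saliency map showing which pixels most influence the prediction.

```python
# Hedged sketch: a vanilla gradient saliency map for a CNN.
# Assumes PyTorch and torchvision; the untrained ResNet and random "image"
# are placeholders used only to show the mechanics.
import torch
import torchvision.models as models

model = models.resnet18().eval()                        # randomly initialised stand-in network
image = torch.rand(1, 3, 224, 224, requires_grad=True)  # stand-in for a preprocessed image

score = model(image).max()   # score of the top predicted class
score.backward()             # back-propagate that score to the input pixels
saliency = image.grad.abs().max(dim=1).values  # per-pixel importance, shape (1, 224, 224)
print(saliency.shape)
```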

Requirements

  1. Basic knowledge of Machine Learning and Deep Learning
  2. Basic familiarity with Python and Jupyter notebooks
  3. A working Jupyter setup on your laptop, with internet access

Speaker bio

Farhat Habib is a Director of Data Sciences at InMobi, currently working on anti-fraud and improving creatives. Farhat has a PhD and an MS in Physics from The Ohio State University and has been doing data science since before it was cool. Prior to InMobi, he worked on solving logistics challenges at Locus.sh. Before that, he was at InMobi working on improving ad targeting on mobile devices, and earlier he was at the Indian Institute of Science Education and Research, Pune, leading research on computational biology and genomic sequence analysis. Farhat enjoys working across a wide range of domains where solutions can be found through the application of machine learning and data science.

Aditya Patel is Director, Data Science at InMobi Glance. Previously, he was head of data science at Stasis and has 7+ years of experience spanning the fields of Machine Learning and Signal Processing. He graduated with a dual Master’s degree in Biomedical and Electrical Engineering from the University of Southern California. He has presented his work in machine learning at multiple peer-reviewed conferences, and contributed to the first-generation “Artificial Pancreas” project at Medtronic, Los Angeles.

