
Anthill Inside 2019

A conference on AI and Deep Learning

Make a submission

Accepting submissions till 01 Nov 2019, 04:20 PM

Taj M G Road, Bangalore


## About the 2019 edition:

The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule

The conference has three tracks:

  1. Talks in the main conference hall
  2. Poster sessions featuring novel ideas and projects
  3. Birds of a Feather (BoF) sessions for practitioners who want to use the Anthill Inside forum to discuss:
  • Myths and realities of labelling datasets for Deep Learning.
  • Practical experience with using Knowledge Graphs for different use cases.
  • Interpretability and its application in different contexts; challenges with GDPR and interpreting datasets.
  • Pros and cons of using custom and open source tooling for AI/DL/ML.

# Who should attend Anthill Inside:

Anthill Inside is a platform for:

  1. Data scientists
  2. AI, DL and ML engineers
  3. Cloud providers
  4. Companies that make tooling for AI, ML and Deep Learning
  5. Companies working with NLP and Computer Vision that want to share their work and learnings with the community

For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to sales@hasgeek.com.


# Sponsors:

Sponsorship slots for Anthill Inside 2019 are open. Click here to view the sponsorship deck.


Anthill Inside 2019 sponsors:


## Bronze Sponsor

  • iMerit
  • Impetus

## Community Sponsor

  • GO-JEK
  • iPropal
  • LightSpeed
  • Semantics3
  • Google
  • Tact.AI
  • Amex

Hosted by

Anthill Inside is a forum for conversations about risk mitigation and governance in Artificial Intelligence and Deep Learning, for AI developers, researchers, startup founders, ethicists, and AI enthusiasts.

Sai Sundarakrishna

@psgsai

Interpret-ability as a bridge from Insights to Intuition in Machine and Deep Learning

Submitted Apr 30, 2019

The characteristics a Machine Learning model must have to be fully deployable in a business setting are varied and often demanding. Predictive power and validation accuracy alone are not always sufficient: models also need to be interpretable, bias-free, transparent, explainable and consistent, and they should have a global and local basis for their output predictions. In this talk we will cover contemporary state-of-the-art approaches to explaining and interpreting complex models, ranging from linear and logistic models to Deep Learning models. We will focus on fundamental model interpretability principles for building simple to complex models, covering a mix of model-specific and model-agnostic interpretation strategies. Techniques covered include decision tree surrogate models, ICE plots, K-LIME, LOCO, partial dependence plots and Random Forest feature importance. Two widely used model interpretation strategies, LIME and SHAP, will be introduced and covered in depth. You will learn how to apply and interpret these techniques on real-world datasets and case studies.
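
The abstract singles out LIME and SHAP for in-depth coverage. As a rough illustration of how the two libraries are typically applied, here is a minimal sketch against a tree-based regressor; the dataset, model and parameter choices are assumptions made for this example, not material from the talk.

```python
# Minimal sketch: explaining a tree-based model with SHAP (global) and LIME (local).
# Assumes the shap and lime packages are installed; dataset and settings are
# arbitrary choices for illustration.
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import train_test_split
import shap
from lime.lime_tabular import LimeTabularExplainer

data = load_diabetes()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)
model = GradientBoostingRegressor(random_state=0).fit(X_train, y_train)

# SHAP: global view of feature impact across the test set.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)   # shape: (n_samples, n_features)
shap.summary_plot(shap_values, X_test, feature_names=data.feature_names)

# LIME: local explanation for a single prediction.
lime_explainer = LimeTabularExplainer(
    X_train, feature_names=data.feature_names, mode="regression")
exp = lime_explainer.explain_instance(X_test[0], model.predict, num_features=5)
print(exp.as_list())   # top local feature contributions for this instance
```

The SHAP summary plot ranks features by their overall impact, while the LIME output explains one prediction locally; together they correspond to the global and local basis for predictions that the abstract refers to.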

Outline

  • Machine Learning interpretability: foundational principles
  • Model-agnostic and model-specific tools for interpretability, and why we need both flavors
  • Interpreting simple to complex models (linear/logistic models to Deep Learning models such as Convolutional Neural Nets)
  • Notation, insights and key ideas behind techniques such as K-LIME, SHAP, ICE plots, PD plots and LOCO (see the sketch after this outline)
  • Insights from interpretation, appropriateness of model updates, handling bias and transparency, etc.
  • Examples/case studies and insights for production implementation
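
To make the ICE and PD plot items above concrete, the sketch below uses scikit-learn's PartialDependenceDisplay (available in recent scikit-learn versions) and also fits a shallow decision tree as a global surrogate, as mentioned in the abstract. The dataset, model and feature choices are illustrative assumptions.

```python
# Minimal sketch: partial dependence (PD) and ICE plots, plus a decision tree
# surrogate fitted to the model's predictions. Dataset and model are arbitrary.
import matplotlib.pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import PartialDependenceDisplay
from sklearn.tree import DecisionTreeRegressor, export_text

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="both" overlays individual conditional expectation (ICE) curves on the
# average partial dependence curve, showing local and global behaviour together.
PartialDependenceDisplay.from_estimator(
    model, X, features=["bmi", "bp"], kind="both", subsample=50)
plt.show()

# Global surrogate: a shallow, interpretable tree approximating the black-box model.
surrogate = DecisionTreeRegressor(max_depth=3, random_state=0).fit(X, model.predict(X))
print(export_text(surrogate, feature_names=list(X.columns)))
```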

Requirements

Basic statistics and a basic knowledge of ML and DL techniques

Speaker bio

https://www.linkedin.com/in/sai-sundarakrishna-7b87625/

