Anthill Inside 2019

A conference on AI and Deep Learning

Submissions are closed for this project

Taj M G Road, Bangalore

About the 2019 edition:

The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule

The conference has three tracks:

  1. Talks in the main conference hall
  2. Poster sessions featuring novel ideas and projects
  3. Birds of a Feather (BOF) sessions for practitioners who want to use the Anthill Inside forum to discuss:
    - Myths and realities of labelling datasets for Deep Learning.
    - Practical experience with using Knowledge Graphs for different use cases.
    - Interpretability and its application in different contexts; challenges with GDPR and interpreting datasets.
    - Pros and cons of using custom and open source tooling for AI/DL/ML.

Who should attend Anthill Inside:

Anthill Inside is a platform for:

  1. Data scientists
  2. AI, DL and ML engineers
  3. Cloud providers
  4. Companies which make tooling for AI, ML and Deep Learning
  5. Companies working with NLP and Computer Vision who want to share their work and learnings with the community

For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to sales@hasgeek.com


Sponsors:

Sponsorship slots for Anthill Inside 2019 are open.


Anthill Inside 2019 sponsors:


Bronze Sponsors

  - iMerit
  - Impetus

Community Sponsors

  - GO-JEK
  - iPropal
  - LightSpeed
  - Semantics3
  - Google
  - Tact.AI
  - Amex

Hosted by

Anthill Inside is a forum for conversations about Artificial Intelligence and Deep Learning, including tools, techniques, and approaches for integrating AI and Deep Learning in products and businesses, and engineering for AI.

Jacob Joseph

@jacjose

Birds of a Feather on Interpretability

Submitted Aug 13, 2019

Complex machine learning models perform very well at prediction and classification tasks but are hard to interpret. Simpler models, on the other hand, are easier to interpret but less accurate, so we are often forced to choose between interpretability and accuracy.

Key takeaway

Understand why an ML algorithm makes a particular decision, which can help make better business decisions.

Outline

  • Why is model interpretability important?
  • Trade-off between accuracy and interpretability.
  • Developments in explainable AI.
  • Interpreting black box models: global and local interpretation.
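To make the "global interpretation" idea in the outline concrete, here is a minimal sketch using permutation importance from scikit-learn. The dataset and model choices below are illustrative assumptions, not from the session itself; permutation importance is one of several techniques the BoF's "explainable AI" discussion could cover.

```python
# Sketch: global interpretation of a black-box model via permutation
# importance. Model and dataset are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A "black box": accurate, but its individual decisions are opaque.
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)

# Shuffle one feature at a time and measure how much the held-out score
# drops -- a global view of which features the model relies on.
result = permutation_importance(
    model, X_test, y_test, n_repeats=10, random_state=0
)

# Print the five most influential features.
ranked = sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)
for name, importance in ranked[:5]:
    print(f"{name}: {importance:.4f}")
```

Local interpretation (explaining a single prediction rather than the model overall) would instead use a technique such as LIME or SHAP, which the outline's last point alludes to.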

Who should attend?

Anyone working with ML who wishes to understand black-box model decisions.

What should you know?

Basics of machine learning and statistics.

Speaker bio

Led by Jacob Joseph, Namrata Hanspal, Nishant Sinha


Vijay Gabale

Myths and Realities of Data Labeling for Deep Learning

In this BoF, we will explore data labeling tasks for NLP and CV problems. Specifically, we will discuss nuances around defining, crowdsourcing and executing data labeling tasks, along with quality assurance processes. We shall also discuss machine-aided data tagging to save cost, time and effort on different data labeling tasks. Finally, we shall also touch upon feedback loops when some of th…

15 Aug 2019