Anthill Inside 2019

A conference on AI and Deep Learning


Submissions are closed for this project

Taj M G Road, Bangalore

About the 2019 edition:

The schedule for the 2019 edition is published here:

The conference has three tracks:

  1. Talks in the main conference hall track
  2. Poster sessions featuring novel ideas and projects in the poster session track
  3. Birds of a Feather (BOF) sessions for practitioners who want to use the Anthill Inside forum to discuss:
    - Myths and realities of labelling datasets for Deep Learning.
    - Practical experience with using Knowledge Graphs for different use cases.
    - Interpretability and its application in different contexts; challenges with GDPR and interpreting datasets.
    - Pros and cons of using custom and open source tooling for AI/DL/ML.

Who should attend Anthill Inside:

Anthill Inside is a platform for:

  1. Data scientists
  2. AI, DL and ML engineers
  3. Cloud providers
  4. Companies which make tooling for AI, ML and Deep Learning
  5. Companies working with NLP and Computer Vision who want to share their work and learnings with the community

For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to


Sponsorship slots for Anthill Inside 2019 are open. Click here to view the sponsorship deck.

Anthill Inside 2019 sponsors:

Bronze Sponsor

iMerit
Impetus

Community Sponsor

GO-JEK
iPropal
LightSpeed
Semantics3
Google
Tact.AI

Hosted by

Anthill Inside is a forum for conversations about Artificial Intelligence and Deep Learning, including: tools, techniques and approaches for integrating AI and Deep Learning in products and businesses, and engineering for AI.



Yes! Attention is all you need for NLP

Submitted Apr 14, 2019

Natural language processing is a very tough problem to crack. We humans each have our own style of speaking, and though we often mean the same thing, we say it differently. This makes it very difficult for a machine to understand and process language at a human level.

At VMware we believe in delivering the best to our customers: the best products, the best services, the best of everything. To deliver the best support, we process huge volumes of support tickets to structure free text and provide intelligent solutions. As part of one project, we were required to process a set of support tickets, identify key topics/categories, and map them to a very different document set. Even though I won't be able to go into the details of the algorithm we have built, I would like to help build an intuition for how best to approach such problems.

For instance, consider the problem of identifying topics/categories in your document set. The first and most obvious approach is topic modelling. Yes, we can do topic modelling, and we can tune it in many ways, such as using seed keywords for bootstrapping. This works well when the document groups are very different and the keywords are clear distinguishers, but what happens when you have a group of similar documents with keywords used in multiple different contexts? Clearly the topics are contextual, and there is a need to go beyond keyword-based modelling. In this talk we will understand how we can make machines understand context, take a sample problem, and break down the approach.
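To make the limitation concrete, here is a minimal sketch of seed-keyword topic assignment, the kind of bootstrapped keyword matching described above. The topic names and seed words are hypothetical, chosen only to show how a word used in a different context ("volume" as traffic volume) gets filed under the wrong topic:

```python
from collections import Counter

# Hypothetical seed keywords per topic (illustrative only, not a real taxonomy)
TOPIC_SEEDS = {
    "networking": {"network", "packet", "latency"},
    "storage": {"disk", "volume", "snapshot"},
}

def assign_topic(doc: str) -> str:
    """Assign the topic whose seed keywords occur most often in the document."""
    words = Counter(doc.lower().split())
    scores = {t: sum(words[k] for k in seeds) for t, seeds in TOPIC_SEEDS.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"

print(assign_topic("High latency on the network after packet drops"))  # networking
# Context is lost: "volume" here means traffic volume, not a storage volume,
# yet pure keyword matching files the ticket under "storage".
print(assign_topic("Traffic volume spiked on the uplink"))  # storage
```

This is exactly the failure mode that motivates context-aware models: the keyword match is correct at the word level but wrong at the meaning level.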

P.S. The title of the talk is inspired by the Google paper "Attention Is All You Need", which introduced Transformers; in the talk we will learn more about them and how they learn context efficiently.


  • Brief evolution of NLP
  • Challenges in working with free text
  • Why do we need to understand context
  • How can we understand context
    – Overview of Transformers and Self-attention
  • Demonstration of context-based sequence-to-sequence modelling with the following use cases
    – Document summarization
    – Anomaly detection
  • Adaptation of the attention network: the hierarchical attention network
  • Key takeaways
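The self-attention mechanism in the outline above can be sketched in a few lines of NumPy. This is a generic scaled dot-product self-attention (the building block of Transformers), not the speakers' system; the shapes and random weights are placeholders for illustration:

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)      # pairwise token affinities
    weights = softmax(scores, axis=-1)   # each row is a distribution over tokens
    return weights @ V, weights          # context-mixed representations

rng = np.random.default_rng(0)
seq_len, d_model = 4, 8                  # toy sizes for illustration
X = rng.normal(size=(seq_len, d_model))  # stand-in for token embeddings
Wq, Wk, Wv = (rng.normal(size=(d_model, d_model)) for _ in range(3))
out, w = self_attention(X, Wq, Wk, Wv)
print(out.shape)        # (4, 8): one context-aware vector per token
```

Each output vector is a weighted mixture of every token's value vector, which is how the model lets context, rather than the raw keyword alone, shape a token's representation.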

Speaker bio

Data scientist with 8 years of overall experience in software development, applied research and machine learning. Currently working at VMware as a Lead Data Scientist. Tech enthusiast and stationery hoarder :)





Jacob Joseph

Birds of a Feather on Interpretability

Complex machine learning models perform very well at prediction and classification tasks but are really hard to interpret. Simpler models, on the other hand, are easier to interpret but less accurate, so we are often forced to choose between interpretability and accuracy.

13 Aug 2019