Anthill Inside 2019

A conference on AI and Deep Learning

Make a submission

Accepting submissions till 01 Nov 2019, 04:20 PM

Taj M G Road, Bangalore

About the 2019 edition:

The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule

The conference has three tracks:

  1. Talks in the main conference hall
  2. Poster sessions featuring novel ideas and projects
  3. Birds of a Feather (BOF) sessions for practitioners who want to use the Anthill Inside forum to discuss:
    - Myths and realities of labelling datasets for Deep Learning.
    - Practical experience with using Knowledge Graphs for different use cases.
    - Interpretability and its application in different contexts; challenges with GDPR and interpreting datasets.
    - Pros and cons of using custom and open source tooling for AI/DL/ML.

Who should attend Anthill Inside:

Anthill Inside is a platform for:

  1. Data scientists
  2. AI, DL and ML engineers
  3. Cloud providers
  4. Companies which make tooling for AI, ML and Deep Learning
  5. Companies working with NLP and Computer Vision who want to share their work and learnings with the community

For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to sales@hasgeek.com


Sponsors:

Sponsorship slots for Anthill Inside 2019 are open. Click here to view the sponsorship deck.


Anthill Inside 2019 sponsors:


Bronze Sponsors

iMerit
Impetus

Community Sponsors

GO-JEK
iPropal
LightSpeed
Semantics3
Google
Tact.AI
Amex

Hosted by

Anthill Inside is a forum for conversations about Artificial Intelligence and Deep Learning, including tools, techniques, and approaches for integrating AI and Deep Learning in products and businesses, and engineering for AI.

Venkata Dikshit Pappu

@vdpappu

Hacking Self-attention architectures to address Unsupervised text tasks

Submitted Apr 11, 2019

Self-attention architectures like BERT, OpenAI GPT, and MT-DNN are the current state-of-the-art feature extractors for several supervised downstream text tasks. However, their performance on unsupervised tasks such as document/sentence similarity remains inconclusive. In this talk, I will give a brief overview of self-attention architectures for language modelling, and cover fine-tuning and feature-selection approaches that can be applied to a variety of unsupervised tasks. This talk is for NLP practitioners interested in using self-attention architectures in their applications.
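As background for the architectures the abstract names, the operation they all share is scaled dot-product self-attention, where each token builds its output as an attention-weighted mix of every token's value vector. This is a minimal, self-contained sketch with random weights for illustration, not code from the talk:

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """One self-attention head over a sequence of token embeddings X."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv          # queries, keys, values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)            # scaled dot-product scores
    # Row-wise softmax: each token's attention over all tokens sums to 1.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V                         # attention-weighted values

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 16))                   # 5 tokens, 16-dim embeddings
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)            # shape (5, 16)
```

Architectures like BERT stack many such heads and layers, with learned weights rather than random ones.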

Outline

  1. Overview of Transformer/Self-attention architectures - BERT
  2. Document representations using BERT
  3. Formulating a sentence relevance score with BERT features
  4. Searching and ranking feature sub-spaces for specific tasks
  5. Other reproducible hacks
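Items 2 and 3 of the outline can be approximated by pooling token features into a document vector and scoring pairs by cosine similarity. The sketch below uses a hypothetical `encode` helper that returns random matrices as stand-ins for a BERT encoder's token features; it illustrates the shape of the approach, not the speaker's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(num_tokens, hidden_size=768):
    # Stand-in for BERT: in practice this would be the encoder's
    # last-layer hidden states, shape (num_tokens, hidden_size).
    return rng.normal(size=(num_tokens, hidden_size))

def doc_representation(token_features):
    # Mean-pool token vectors into one fixed-size document vector.
    return token_features.mean(axis=0)

def relevance(a, b):
    # Cosine similarity between pooled vectors as a relevance score.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

doc_a = doc_representation(encode(12))         # 12-token "document"
doc_b = doc_representation(encode(8))          # 8-token "document"
score = relevance(doc_a, doc_b)                # value in [-1, 1]
```

Step 4 would then restrict the similarity to a task-specific subset of the hidden dimensions rather than using all of them.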

Speaker bio

Venkat is an ML Architect at Ether Labs, based out of Bangalore, with 6+ years of experience in ML and related fields. He has worked on machine vision and NLP solutions for the retail, consumer electronics, and embedded verticals. Venkat leads the ML team at Ether Labs, which is responsible for building scalable AI components (vision, NLU, and graph learning) for the Ether video collaboration platform.

https://www.linkedin.com/in/vdpappu/

Links

Slides

https://drive.google.com/file/d/1yAHwtyNnaK308X1m4Mkh_Ig16E_TO8wV/view?usp=sharing

