
Anthill Inside 2019

A conference on AI and Deep Learning


Accepting submissions till 01 Nov 2019, 04:20 PM

Taj M G Road, Bangalore


## About the 2019 edition:

The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule

The conference has three tracks:

  1. Talks, in the main conference hall
  2. Poster sessions, featuring novel ideas and projects
  3. Birds of a Feather (BoF) sessions, for practitioners who want to use the Anthill Inside forum to discuss:
  • Myths and realities of labelling datasets for Deep Learning.
  • Practical experience with using Knowledge Graphs for different use cases.
  • Interpretability and its application in different contexts; challenges with GDPR and interpreting datasets.
  • Pros and cons of using custom and open source tooling for AI/DL/ML.

# Who should attend Anthill Inside:

Anthill Inside is a platform for:

  1. Data scientists
  2. AI, DL and ML engineers
  3. Cloud providers
  4. Companies which make tooling for AI, ML and Deep Learning
  5. Companies working with NLP and Computer Vision who want to share their work and learnings with the community

For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to sales@hasgeek.com


# Sponsors:

Sponsorship slots for Anthill Inside 2019 are open. Click here to view the sponsorship deck.


Anthill Inside 2019 sponsors:


# Bronze Sponsor

iMerit, Impetus

# Community Sponsor

GO-JEK, iPropal, LightSpeed, Semantics3, Google, Tact.AI, Amex

Hosted by

Anthill Inside is a forum for conversations about risk mitigation and governance in Artificial Intelligence and Deep Learning. AI developers, researchers, startup founders, ethicists, and AI enthusiasts are encouraged to: …

Nickil Maveli

@nickil21

Efficient Machine Translation for low resource languages using Transformers

Submitted Nov 5, 2019

The Transformer is the first transduction model that relies entirely on self-attention to compute representations of its input and output, without using sequence-aligned RNNs or convolutions. Transformers were recently used by OpenAI in their language models, and by DeepMind in AlphaStar, their program that defeated a top professional StarCraft player.
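
For context, the mechanism the abstract refers to is scaled dot-product attention. A minimal NumPy sketch (dimensions and variable names are illustrative, not taken from the talk; the learned Q/K/V projections are omitted):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """softmax(Q K^T / sqrt(d_k)) V -- the core operation of the Transformer."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # similarity of each query to each key
    scores -= scores.max(axis=-1, keepdims=True)    # numerical stability for the softmax
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # each row sums to 1
    return weights @ V                              # weighted average of the values

# Toy self-attention: queries, keys, and values all come from the same 4-token sequence.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                         # 4 tokens, model dimension 8
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```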

Key Takeaways

Build a translation mechanism for language pairs where parallel sentence-pair corpora are scarce, and obtain relatively high BLEU scores.
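
BLEU, the metric this takeaway targets, scores n-gram overlap between a candidate translation and one or more references. A quick sketch using NLTK (the sentences are made up for illustration):

```python
# pip install nltk
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "cat", "sat", "on", "the", "mat"]]  # list of reference translations
hypothesis = ["the", "cat", "is", "on", "the", "mat"]    # tokenised model output

# Smoothing avoids a zero score when some higher-order n-gram has no match,
# which is common for the short sentences typical of low-resource corpora.
score = sentence_bleu(reference, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(f"BLEU: {score:.3f}")
```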

Outline

Section 1.

  1. Transformer Model Architecture
    a. Encoder [Theory + Code]
    b. Decoder [Theory + Code]
  2. Self-Attention [Theory + Code]
  3. Multi-head Attention [Theory + Code]
  4. Positional Encoding [Theory + Code] (see the sketch after this list)
  5. Note on BLEU Score
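
As a preview of item 4, a minimal sketch of the sinusoidal positional encoding from the original Transformer paper (assumes an even model dimension; names are illustrative):

```python
import numpy as np

def sinusoidal_positional_encoding(max_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)); PE[pos, 2i+1] = cos(same angle)."""
    pos = np.arange(max_len)[:, None]     # (max_len, 1) token positions
    i = np.arange(0, d_model, 2)[None, :] # even embedding dimensions
    angle = pos / np.power(10000.0, i / d_model)
    pe = np.zeros((max_len, d_model))     # d_model assumed even
    pe[:, 0::2] = np.sin(angle)           # even dims get sine
    pe[:, 1::2] = np.cos(angle)           # odd dims get cosine
    return pe

# Added to the token embeddings so the model can distinguish positions,
# since self-attention by itself is order-invariant.
print(sinusoidal_positional_encoding(max_len=50, d_model=16).shape)  # (50, 16)
```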

Section 2.

  1. Solving a Real World Translation Problem with low resource data
  2. Attention Visualization (see the sketch after this list)
  3. Translation Results
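
Attention visualization (item 2) usually means rendering the decoder's attention weights over source tokens as a heatmap. A minimal matplotlib sketch with made-up tokens and weights:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical sentence pair, purely for illustration.
src_tokens = ["ich", "habe", "hunger", "<eos>"]  # source sentence
tgt_tokens = ["i", "am", "hungry", "<eos>"]      # model's translation

rng = np.random.default_rng(1)
attn = rng.random((len(tgt_tokens), len(src_tokens)))
attn /= attn.sum(axis=1, keepdims=True)          # each row sums to 1, like softmax output

fig, ax = plt.subplots()
ax.imshow(attn, cmap="viridis")                  # one row per target token
ax.set_xticks(range(len(src_tokens)))
ax.set_xticklabels(src_tokens)
ax.set_yticks(range(len(tgt_tokens)))
ax.set_yticklabels(tgt_tokens)
ax.set_xlabel("source tokens")
ax.set_ylabel("target tokens")
plt.show()
```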

Requirements

Basic familiarity with neural networks and linear algebra.

Speaker bio

I have over three years of industry experience in data science. I currently work as a data scientist (NLP) at niki.ai, where I have built models for parse classification, unsupervised synonym detection, identifying code-mixing in text, and more. I have also participated in numerous data science competitions across Kaggle, AnalyticsVidhya, Topcoder, Crowdanalytix, etc., and finished in the top 10 in at least a dozen of them.

Specialties: data science, machine learning, predictive modelling, natural language processing, deep learning, big data, artificial intelligence.

Stack Overflow
LinkedIn

Slides

https://www.beautiful.ai/player/-LswkeyEBAgBPzM7Kn_R

