Event date: 23 Nov 2019, 08:30 AM – 05:30 PM IST
## About the 2019 edition:
The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule
The conference has three tracks; see the schedule above for details.
# Who should attend Anthill Inside:
Anthill Inside is a platform for:
For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to sales@hasgeek.com
# Sponsors:
Sponsorship slots for Anthill Inside 2019 are open.
Nickil Maveli (@nickil21), submitted Nov 5, 2019
The Transformer is the first transduction model that relies entirely on self-attention to compute representations of its input and output, without using sequence-aligned RNNs or convolutions. Transformers were recently used by OpenAI in their language models, and by DeepMind for AlphaStar, their program that defeated a top professional StarCraft player.
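To make the core operation concrete, here is a minimal numpy sketch of single-head scaled dot-product self-attention, the building block of the Transformer. The dimensions and randomly initialized weight matrices are toy values chosen for illustration, not taken from the talk.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    # Scaled dot-product self-attention over a sequence X of shape (seq_len, d_model).
    Q, K, V = X @ Wq, X @ Wk, X @ Wv       # project inputs to queries, keys, values
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)        # pairwise similarities, scaled by sqrt(d_k)
    weights = softmax(scores, axis=-1)     # each position attends over all positions
    return weights @ V                     # weighted sum of value vectors

# Toy dimensions (hypothetical, for illustration only).
rng = np.random.default_rng(0)
seq_len, d_model, d_k = 5, 16, 8
X = rng.normal(size=(seq_len, d_model))
Wq, Wk, Wv = (rng.normal(size=(d_model, d_k)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (5, 8)
```

Because every position attends directly to every other position, no recurrence or convolution is needed to mix information across the sequence.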
Key Takeaways
Build a translation system for language pairs where parallel sentence-pair corpora are scarce, while still obtaining relatively high BLEU scores.
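BLEU is the evaluation metric this takeaway refers to. As a hedged illustration of how such scores are computed, the snippet below scores a toy hypothesis against a reference translation using NLTK; the sentences and smoothing choice are assumptions for demonstration, not the speaker's setup.

```python
from nltk.translate.bleu_score import corpus_bleu, SmoothingFunction

# Hypothetical toy data: each hypothesis is paired with a list of reference translations.
references = [[["the", "cat", "sat", "on", "the", "mat"]]]
hypotheses = [["the", "cat", "is", "on", "the", "mat"]]

# Smoothing avoids zero scores when higher-order n-grams have no matches,
# which is common for short sentences and low-resource models.
smooth = SmoothingFunction().method1
score = corpus_bleu(references, hypotheses, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")
```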
Prerequisites: basic familiarity with neural networks and linear algebra.
I have over three years of industrial experience in data science. I am currently working as a data scientist (NLP) at niki.ai, where I have built models for parse classification, unsupervised synonym detection, identifying code-mixing in text, etc. I have also participated in numerous data science competitions across Kaggle, AnalyticsVidhya, Topcoder, Crowdanalytix, etc., and finished in the top 10 in at least a dozen of those.
Specialties: data science, machine learning, predictive modelling, natural language processing, deep learning, big data, artificial intelligence.