Saturday, 23 Nov 2019, 08:30 AM – 05:30 PM IST
Nickil Maveli
The Transformer is the first transduction model that relies entirely on self-attention to compute representations of its input and output, without using sequence-aligned RNNs or convolutions. Transformers were recently used by OpenAI in their language models, and by DeepMind in AlphaStar, their program that defeated a top professional StarCraft player.
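To make "relying entirely on self-attention" concrete, here is a minimal NumPy sketch of scaled dot-product self-attention, the core operation of the Transformer. The dimensions and the random projection matrices Wq, Wk, Wv are illustrative stand-ins; in a real model these weights are learned.

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a sequence X.

    X          : (seq_len, d_model) input token representations
    Wq, Wk, Wv : (d_model, d_k) projection matrices (hypothetical
                 random weights here; a trained Transformer learns them)
    """
    Q, K, V = X @ Wq, X @ Wk, X @ Wv           # queries, keys, values
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # (seq_len, seq_len) similarities
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ V                         # every position attends to all others

# Toy usage: 4 tokens, model and head dimension 8.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 8))
Wq, Wk, Wv = (rng.normal(size=(8, 8)) for _ in range(3))
print(self_attention(X, Wq, Wk, Wv).shape)  # (4, 8)
```

Because each output position is a weighted sum over all input positions, no recurrence or convolution is needed to mix information across the sequence.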
Key Takeaways
Build a translation system for language pairs where parallel sentence corpora are scarce, while still obtaining relatively high BLEU scores; a quick BLEU sketch follows.
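For reference, BLEU measures n-gram overlap between a candidate translation and one or more reference translations. A minimal sketch with NLTK (the sentences are made up for illustration):

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

# Hypothetical model output vs. a single reference translation.
reference = ["the", "cat", "sat", "on", "the", "mat"]
hypothesis = ["the", "cat", "is", "on", "the", "mat"]

# Smoothing avoids zero scores when a higher-order n-gram never matches,
# which is common for short sentences and low-resource models.
smooth = SmoothingFunction().method1
score = sentence_bleu([reference], hypothesis, smoothing_function=smooth)
print(f"BLEU: {score:.3f}")
```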
Prerequisites
Basic familiarity with neural networks and linear algebra.
About the speaker
I have over three years of industrial experience in Data Science. I currently work as a Data Scientist (NLP) at niki.ai, where I have built models for parse classification, unsupervised synonym detection, identifying code-mixing in text, and more. I have also participated in numerous data science competitions across Kaggle, AnalyticsVidhya, Topcoder, Crowdanalytix, etc., and finished in the top 10 in at least a dozen of them.
Specialties: data science, machine learning, predictive modelling, natural language processing, deep learning, big data, artificial intelligence.