Anthill Inside 2019

A conference on AI and Deep Learning

Venkata Dikshit Pappu

@vdpappu

Hacking Self-attention architectures to address Unsupervised text tasks

Submitted Apr 11, 2019

Self-attention architectures like BERT, OpenAI GPT, and MT-DNN are the current state-of-the-art feature extractors for several supervised downstream text tasks. However, their performance on unsupervised tasks such as document/sentence similarity remains inconclusive. In this talk, I intend to give a brief overview of self-attention architectures for language modelling, along with fine-tuning and feature-selection approaches that can be applied to a variety of unsupervised tasks. This talk is for NLP practitioners interested in using self-attention architectures in their applications.
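
As context for the document-representation part of the outline below, here is a minimal sketch (not from the talk) of one common way to derive a fixed-size text embedding from BERT: mean pooling over the token vectors of the last layer. It assumes the Hugging Face transformers library and the bert-base-uncased checkpoint purely for illustration.

```python
# Sketch: mean-pooled BERT embeddings for a piece of text.
# Model name and pooling strategy are illustrative assumptions,
# not the speaker's specific approach.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
model.eval()

def embed(text: str) -> torch.Tensor:
    """Return a mean-pooled BERT embedding of shape (1, hidden_size)."""
    inputs = tokenizer(text, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        outputs = model(**inputs)
    # last_hidden_state: (1, seq_len, hidden_size); mask out padding before pooling
    mask = inputs["attention_mask"].unsqueeze(-1)
    summed = (outputs.last_hidden_state * mask).sum(dim=1)
    return summed / mask.sum(dim=1)
```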

Outline

  1. Overview of Transformer/Self-attention architectures - BERT
  2. Document representations using BERT
  3. Formulating a sentence relevance score with BERT features (see the sketch after this list)
  4. Searching and ranking feature sub-spaces for specific tasks
  5. Other reproducible hacks
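
The abstract does not spell out the speaker's exact relevance formulation, so the following is only a hedged sketch of one straightforward baseline: scoring sentence relevance as cosine similarity between the pooled BERT vectors produced by the embed helper in the earlier sketch.

```python
# Sketch: a baseline sentence relevance score (an assumption, not the
# talk's method) using cosine similarity of pooled BERT embeddings.
import torch.nn.functional as F

def relevance(query: str, sentence: str) -> float:
    """Cosine similarity between BERT embeddings of query and sentence."""
    return F.cosine_similarity(embed(query), embed(sentence)).item()

# Example usage: rank candidate sentences against a query.
# ranked = sorted(candidates, key=lambda s: relevance(query, s), reverse=True)
```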

Speaker bio

Venkat is an ML Architect at Ether Labs, based out of Bangalore, with 6+ years of experience in ML and related fields. He has worked on machine vision and NLP solutions for the retail, consumer electronics, and embedded verticals. Venkat leads the ML team at Ether Labs, which is responsible for building scalable AI components (vision, NLU, and graph learning) for the Ether video collaboration platform.

https://www.linkedin.com/in/vdpappu/

Slides

https://drive.google.com/file/d/1yAHwtyNnaK308X1m4Mkh_Ig16E_TO8wV/view?usp=sharing
