Anthill Inside 2019

A conference on AI and Deep Learning

Make a submission

Accepting submissions till 01 Nov 2019, 04:20 PM

Taj M G Road, Bangalore

About the 2019 edition:

The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule

The conference has three tracks:

  1. Talks in the main conference hall
  2. Poster sessions featuring novel ideas and projects
  3. Birds of a Feather (BoF) sessions for practitioners who want to use the Anthill Inside forum to discuss:
    - Myths and realities of labelling datasets for Deep Learning.
    - Practical experience with using Knowledge Graphs for different use cases.
    - Interpretability and its application in different contexts; challenges with GDPR and interpreting datasets.
    - Pros and cons of using custom and open source tooling for AI/DL/ML.

Who should attend Anthill Inside:

Anthill Inside is a platform for:

  1. Data scientists
  2. AI, DL and ML engineers
  3. Cloud providers
  4. Companies which make tooling for AI, ML and Deep Learning
  5. Companies working with NLP and Computer Vision who want to share their work and learnings with the community

For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to sales@hasgeek.com


Sponsors:

Sponsorship slots for Anthill Inside 2019 are open. Click here to view the sponsorship deck.


Anthill Inside 2019 sponsors:


Bronze Sponsors

iMerit
Impetus

Community Sponsors

GO-JEK
iPropal
LightSpeed
Semantics3
Google
Tact.AI
Amex

Hosted by

Anthill Inside is a forum for conversations about Artificial Intelligence and Deep Learning, including tools, techniques and approaches for integrating AI and Deep Learning into products and businesses, and engineering for AI.

Vikram Vij

@vikramvij

Exploring the un-conventional: End-to-End learning architectures for automatic speech recognition

Submitted Mar 17, 2019

Speech recognition is a challenging area where accuracies have risen dramatically with the use of deep learning over the last decade, but there is still much room for improvement. We start with the basics of speech recognition and the design of a conventional speech recognition system, comprising acoustic modelling, language modelling, a lexicon (pronunciation model) and a decoder.

To improve accuracy and to reduce model size (especially for edge-computing-based, on-device speech recognition), new architectures are emerging. In a conventional speech recognition system, the acoustic and language models are trained separately, on different datasets. With end-to-end (E2E) ASR, we can build a single neural network that jointly learns the acoustic model, lexicon and language model components together. E2E ASR can potentially reduce model size by up to 18 times and improve accuracy (word error rate) by up to 15%. This is based on the Listen-Attend-Spell end-to-end architecture, augmented with CTC loss, label smoothing and scheduled sampling, and it produces no out-of-vocabulary words. It also yields multi-lingual and multi-dialect models which are simpler and smaller in size.

A few shortcomings are the lack of streaming (online) speech recognition and the handling of rare words and proper nouns. These can be addressed by techniques such as contextual Listen-Attend-Spell, language model fusion, online attention to support real-time streaming output, personalization/biasing through a context encoder, and adaptation based on an auxiliary network / multi-task learning. We go into the motivation and approach behind each of these techniques, which may also be applicable to other deep-learning-based systems.
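The abstract quantifies accuracy gains in word error rate (WER), the standard metric for ASR systems. As background, here is a minimal pure-Python sketch of WER computed via word-level Levenshtein (edit) distance; the function and variable names are illustrative, not taken from the talk or from any specific toolkit:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: the minimum number of word substitutions,
    insertions and deletions needed to turn the hypothesis into the
    reference, divided by the number of reference words."""
    ref = reference.split()
    hyp = hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i          # delete all i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j          # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            substitute = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            delete = d[i - 1][j] + 1
            insert = d[i][j - 1] + 1
            d[i][j] = min(substitute, delete, insert)
    return d[len(ref)][len(hyp)] / max(len(ref), 1)
```

For example, recognizing "the cat sat on the mat" as "the cat sat on mat" is one deletion out of six reference words, giving a WER of about 16.7%; a claimed relative improvement of up to 15% is measured against this kind of baseline score.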

Outline

We challenge the status quo in automatic speech recognition technology to achieve breakthrough results using end-to-end speech recognition. The talk summarizes the latest research in this area.

Speaker bio

Dr. Vikram Vij received a Ph.D. and Master’s degree in Computer Science from the University of California, Berkeley, an M.B.A. degree from Santa Clara University and a B.Tech. degree in Electronics from IIT Kanpur. Vikram has over 26 years of industry experience across multiple technical domains, including Databases, Storage & File Systems, Embedded Systems, Intelligent Services and IoT. He has worked at Samsung since 2004 and is currently Sr. Vice President and Voice Intelligence R&D Team Head at the Samsung R&D Institute in Bangalore. Dr. Vij’s current focus is on building the world’s best voice intelligence experience for mobiles and other Samsung appliances. He is also driving the growth of the AI Centre of Excellence at Samsung Bangalore.

Slides

https://drive.google.com/file/d/11WNqde4BDkT9z0AMhdwJzW1FYaP9KtRi/view?usp=drivesdk
