Exploring the unconventional: End-to-End learning architectures for automatic speech recognition
Submitted by Vikram Vij (@vikramvij) on Sunday, 17 March 2019
Session type: Lecture (full talk, 40 mins)
Speech recognition is a challenging area where accuracies have risen dramatically with the use of deep learning over the last decade, but there is still much room for improvement. We start with the basics of speech recognition and the design of a conventional speech recognition system, comprising acoustic modeling, language modeling, a lexicon (pronunciation model) and a decoder.

To improve recognition accuracy and to reduce model size (especially for edge-computing-based, on-device speech recognition), new architectures are emerging. In a conventional speech recognition system, the acoustic and language models are trained separately, on different datasets. With end-to-end (E2E) ASR, we can develop a single neural network that jointly learns the acoustic model, lexicon and language model components together. E2E ASR can potentially reduce the model size by up to 18 times and improve the accuracy (word error rate) by up to 15%. This is based on the Listen-Attend-Spell (LAS) end-to-end architecture, augmented with CTC loss, label smoothing and scheduled sampling, and it produces no out-of-vocabulary words. We can also obtain multi-lingual and multi-dialect models which are simpler and smaller in size.

A few shortcomings are the lack of streaming (online) speech recognition and the handling of rare words and proper nouns. These can be addressed by techniques such as contextual Listen-Attend-Spell, language model fusion, online attention to support real-time streaming output, personalization/biasing through a context encoder, and adaptation based on an auxiliary network / multi-task learning. We go into the motivation and approach behind each of these techniques, which may also be applicable to other deep-learning-based systems.
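To give a flavour of one of the training techniques mentioned above, label smoothing, here is a minimal sketch in plain Python (the function name and epsilon value are illustrative, not from the talk). The idea is to mix the one-hot training target with a uniform distribution over the output vocabulary, so the model is not pushed toward over-confident predictions:

```python
def smooth_labels(one_hot, eps=0.1):
    """Label smoothing: blend a one-hot target with a uniform
    distribution over the vocabulary. The true class keeps
    (1 - eps) of the probability mass plus its uniform share;
    every other class receives eps / vocab_size."""
    k = len(one_hot)  # vocabulary size
    return [(1.0 - eps) * p + eps / k for p in one_hot]

# A one-hot target over a 4-symbol vocabulary:
target = [0.0, 1.0, 0.0, 0.0]
smoothed = smooth_labels(target, eps=0.1)

# The smoothed target is still a valid probability distribution.
print([round(p, 3) for p in smoothed])  # [0.025, 0.925, 0.025, 0.025]
```

In an E2E ASR decoder, the cross-entropy loss is then computed against this smoothed distribution instead of the hard one-hot labels, which typically improves generalization of the attention-based decoder.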
We challenge the status quo in automatic speech recognition technology to achieve breakthrough results using end-to-end speech recognition. The talk summarizes the latest research in this area.
Dr. Vikram Vij received a Ph.D. and Master’s degree in Computer Science from the University of California, Berkeley, an M.B.A. degree from Santa Clara University and a B.Tech. degree in Electronics from IIT Kanpur. He has over 26 years of industrial experience across multiple technical domains, spanning databases, storage and file systems, embedded systems, intelligent services and IoT. Vikram has worked at Samsung since 2004 and is currently Sr. Vice President and Voice Intelligence R&D Team Head at Samsung R&D Institute in Bangalore. His current focus is on building the world’s best voice intelligence experience for mobiles and other Samsung appliances. Dr. Vij is also driving the growth of the AI Centre of Excellence at Samsung Bangalore.