15–16 Sep 2017
Fri 15: 08:45 AM – 05:10 PM IST
Sat 16: 09:45 AM – 05:30 PM IST
Vamsidhar Bethanabatla
Voice/speech as a user-interface medium is growing in popularity thanks to applications like Alexa, Cortana, Siri and Google Assistant. For a long time we have used a level of indirection to deal with machines: using a mouse to click a button on screen, for example, or using arrow keys to move through content. Touch interfaces removed this level of indirection and allowed us to interact with content directly. Voice has the potential to bring us even closer to machines. As AI and intelligent agents grow, the interfaces of systems will no longer be limited enough to fit on a screen. These agents keep learning and exposing new features, and voice/speech is a fluid mechanism that can adapt to this ever-growing interface. How can we build voice-driven apps on the web? What standards are in place to guide these apps? Doesn't this require massive infrastructure and ML algorithms to make it work? These are some of the questions we'll explore, along with the simplest ways to get started and the alternatives available. A lot can already be achieved using JavaScript and web standards, as the sketch below illustrates.
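As a taste of what the talk covers, here is a minimal sketch of the two halves of the W3C Web Speech API, speech recognition and speech synthesis, as exposed in browsers that implement it (recognition is prefixed as webkitSpeechRecognition in Chrome). The #talk-button element is a hypothetical trigger added for illustration; everything else is the standard browser API, with no server-side infrastructure involved.

    // Minimal Web Speech API sketch: listen for a spoken phrase, then reply out loud.
    const SpeechRecognition =
      window.SpeechRecognition || window.webkitSpeechRecognition;

    const recognition = new SpeechRecognition();
    recognition.lang = 'en-US';
    recognition.interimResults = false; // only deliver final transcripts

    recognition.onresult = (event) => {
      const transcript = event.results[0][0].transcript;
      console.log('Heard:', transcript);

      // Speak a reply using speech synthesis, the other half of the API.
      const utterance = new SpeechSynthesisUtterance(`You said: ${transcript}`);
      utterance.lang = 'en-US';
      window.speechSynthesis.speak(utterance);
    };

    recognition.onerror = (event) => console.error('Recognition error:', event.error);

    // Start listening on a user gesture (#talk-button is a hypothetical element);
    // browsers require a gesture and microphone permission before recognition starts.
    document.querySelector('#talk-button').addEventListener('click', () => {
      recognition.start();
    });

Recognition in Chrome sends audio to a hosted service behind the scenes, which is one reason the talk also looks at alternatives such as CMU Sphinx for fully client-side or offline recognition.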
Vamsi has been a UI/frontend developer for over 12 years and currently works as a Lead Consultant, UI at ThoughtWorks. It was while working at Yahoo! that he was inspired to build the best UIs, ones that follow strict standards and are accessible. Still carrying that inspiration, he is passionate about the quality of user interfaces and newer possibilities. One such possibility, conversational UIs, was introduced to him at ThoughtWorks. For the past several months he has been experimenting with the Web Speech API and CMU Sphinx, and with how these technologies can be brought together to develop meaningful web apps. One such attempt is building an extensible voice assistant using open technologies.