The Fifth Elephant 2018

The seventh edition of India's best data conference

##About the conference and topics for submitting talks:
The Fifth Elephant is rated as India’s best data conference. It is a conference for practitioners, by practitioners. In 2018, The Fifth Elephant will complete its seventh edition.

The Fifth Elephant is an evolving community of stakeholders invested in data in India. Our goal is to strengthen and grow this community by presenting talks, panels and Off The Record (OTR) sessions that present real insights about:

  1. Data engineering and architecture: tools, frameworks, infrastructure, architecture, case studies and scaling.
  2. Data science and machine learning: fundamentals, algorithms, streaming, tools, domain specific and data specific examples, case studies.
  3. The journey and challenges in building data driven products: design, data insights, visualisation, culture, security, governance and case studies.
  4. Talks around an emerging domain: such as IoT, finance, e-commerce, payments or data in government.

##Target audience:
You should attend and speak at The Fifth Elephant if your work involves:

  1. Engineering and architecting data pipelines.
  2. Building ML models, pipelines and architectures.
  3. ML engineering.
  4. Analyzing data to build features for existing products.
  5. Using data to predict outcomes.
  6. Using data to create / model visualizations.
  7. Building products with data – either as product managers or as decision scientists.
  8. Researching concepts and deciding on algorithms for analyzing datasets.
  9. Mining data with greater speed and efficiency.
  10. Developer evangelism for organizations that want developers to use their APIs and technologies for machine learning, full stack engineering, and data science.

##Perks for submitting proposals:
Submitting a proposal, especially with our process, is hard work. We appreciate your effort.
We offer each proposer one conference ticket at a discounted price and a t-shirt.
We only accept one speaker per talk. This is non-negotiable. Workshops may have more than one instructor.
For proposals that list more than one collaborator, we offer the discounted ticket and t-shirt only to the person with whom the editorial team corresponded directly during the evaluation process.

##Format:
The Fifth Elephant is a two-day conference with two tracks on each day. Track details will be announced with a draft schedule in February 2018.

We are accepting sessions with the following formats:

  1. Full talks of 40 minutes.
  2. Crisp talks of 20 minutes.
  3. Off the Record (OTR) sessions on focussed topics / questions. An OTR is 60-90 minutes long and typically has up to four facilitators and one moderator.
  4. Workshops and tutorials of 3-6 hours duration on Machine Learning concepts and tools, full stack data engineering, and data science concepts and tools.
  5. Pre-events: Birds of a Feather (BOF) sessions, talks, and workshops for open houses and pre-events in Bangalore and other cities between October 2017 and June 2018. Reach out to info@hasgeek.com if you are interested in speaking at and/or hosting a community event between now and the conference in July 2018.

##Selection criteria:
The first filter for a proposal is whether the technology or solution you are referring to is open source or not. The following criteria apply for closed source talks:

  1. If the technology or solution is proprietary, and you want to speak about it to make a pitch to the audience, you should opt for a sponsored session. This involves paying for the speaking slot. Write to fifthelephant.editorial@hasgeek.com.
  2. If the technology or solution is in the process of being open sourced, we will consider the talk only if the solution is open sourced at least three months before the conference.
  3. If your solution is closed source, you should consider proposing a talk explaining why you built it in the first place; which options (business-wise and technology-wise) you considered before deciding to develop the solution; or what specific use case left you without existing options and necessitated creating the in-house solution.

The criteria for selecting proposals, in the order of importance, are:

  1. Key insight or takeaway: what can you share with participants that will help them in their work and in thinking about the ML, big data and data science problem space?
  2. Structure of the talk and flow of content: a detailed outline – either as mindmap or draft slides or textual description – will help us understand the focus of the talk, and the clarity of your thought process.
  3. Ability to communicate succinctly and engage with the audience. You must submit a link to a two-minute preview video explaining what your talk is about and what the key takeaway is for the audience.

No one submits the perfect proposal in the first instance. We therefore encourage you to:

  1. Submit your proposal early so that we have more time to iterate if the proposal has potential.
  2. Talk to us on our community Slack channel: https://friends.hasgeek.com if you want to discuss an idea for your proposal, and need help / advice on how to structure it. Head over to the link to request an invite and join #fifthel.

Our editorial team helps potential speakers hone their speaking skills, fine-tune and rehearse content at least twice before the main conference, and sharpen the focus of their talks.

##How to submit a proposal (and increase your chances of getting selected):
The following guidelines will help you in submitting a proposal:

  1. Focus on why, not how. Explain to participants why you made a business or engineering decision, or why you chose a particular approach to solving your problem.
  2. The journey is more important than the solution you may want to explain. We are interested in the journey, not the outcome alone. Share as much detail as possible about how you solved the problem. Glossing over details does not help participants grasp real insights.
  3. Focus on what participants from other domains can learn/abstract from your journey / solution. Refer to these talks from The Fifth Elephant 2017, which participants liked most: http://hsgk.in/2uvYKI9 and http://hsgk.in/2ufhbWb
  4. We do not accept how-to talks unless they demonstrate the latest technology. If you are demonstrating new tech, show enough to motivate participants to explore the technology later. Refer to talks such as http://hsgk.in/2vDpag4 and http://hsgk.in/2varOqt to structure your proposal.
  5. Similarly, we don’t accept talks on topics that have already been covered in the previous editions. If you are unsure about whether your proposal falls in this category, drop an email to: fifthelephant.editorial@hasgeek.com
  6. Content that can be read off the internet does not interest us. Our participants are keen to listen to use cases and experience stories that will help them in their practice.

To summarize, we do not accept talks that gloss over details or try to deliver high-level knowledge without covering depth. Talks have to be backed with real insights and experiences for the content to be useful to participants.

##Passes and honorarium for speakers:
We pay an honorarium of Rs. 3,000 to each speaker and workshop instructor at the end of their talk/workshop. Confirmed speakers and instructors also get a pass to the conference and networking dinner. We do not provide free passes for speakers’ colleagues and spouses.

##Travel grants for outstation speakers:
Travel grants are available for international and domestic speakers. We evaluate each case on its merits, giving preference to women, people of non-binary gender, and Africans. If you require a grant, request it when you submit your proposal in the field where you add your location. The Fifth Elephant is funded through ticket purchases and sponsorships; travel grant budgets vary.

##Last date for submitting proposals: 31 March 2018.
You must submit the following details along with your proposal, or within 10 days of submission:

  1. Draft slides, mind map or a textual description detailing the structure and content of your talk.
  2. Link to a self-recorded, two-minute preview video, where you explain what your talk is about, and the key takeaways for participants. This preview video helps conference editors understand the lucidity of your thoughts and how invested you are in presenting insights beyond the solution you have built, or your use case. Please note that the preview video should be submitted irrespective of whether you have spoken at past editions of The Fifth Elephant.
  3. If you submit a workshop proposal, you must specify the target audience for your workshop; duration; number of participants you can accommodate; pre-requisites for the workshop; link to GitHub repositories and a document showing the full workshop plan.

##Contact details:
For more information about the conference, sponsorships, or anything else, contact support@hasgeek.com or call 7676332020.

Hosted by

The Fifth Elephant, known as one of the best data science and Machine Learning conferences in Asia, has transitioned into a year-round forum for conversations about data and ML engineering, data science in production, and data security and privacy practices.

Vikram Vij

@vikramvij

Building a next generation speech and NLU engine: in pursuit of multi-modal experience for Bixby

Submitted Mar 28, 2018

Bixby is an intelligent, personalized voice interface for your phone. It lets you seamlessly switch between voice and type/touch, and supports more than 75 domains (e.g. Camera, Gallery, Messages, WhatsApp, YouTube, Uber). It was launched in July 2017 for English and is now available in more than 200 countries, with about 8 million registered users. The talk focuses on challenges in deep learning for Bixby Automatic Speech Recognition and Natural Language Understanding, ranging from CNNs vs. RNNs, word-based vs. character-based models, and domain classification challenges given the massive contextual input space, to grammar complexity and multi-modal and multi-accent handling.

Outline

Bixby is an intelligent, personalized voice interface for your phone. It lets you seamlessly switch between voice and type/touch, and supports more than 75 domains (e.g. Camera, Gallery, Messages, WhatsApp, YouTube, Uber). It was launched in July 2017 for English and is now available in more than 200 countries, with about 8 million registered users.

My talk focuses on challenges in deep learning for Bixby Automatic Speech Recognition and Natural Language Understanding, ranging from CNNs vs. RNNs, word-based vs. character-based models, and domain classification challenges given the massive contextual input space, to grammar complexity and multi-modal and multi-accent handling. We go into the details of hierarchical classification, session-based classification and intent rejection logic, as well as the trade-offs between RNNs and CNNs, optimal filter sizes for CNNs, and handling variations of and conflicts between data. We also cover the use of transfer learning and bilingual models for Bixby for Hindi.

The processing steps of a voice engine typically look like this: the user speaks an utterance, for example “text to mom”. The NLU engine then tries to understand which domain the user is talking about and which command the user wants to execute, and extracts the required parameters for execution in the slot tagger.

In a minimal view, Bixby accepts voice signals through its Automatic Speech Recognition (ASR) engine and passes the transcribed text to its Natural Language Understanding engine. The NLU engine then extracts the information required for execution and sends it to devices or CP services.

Bixby Automatic Speech Recognition (ASR) was initially optimized for the US English accent only. In our testing, we found that it did not perform as well as expected. The root cause was that many people of Indian, Korean, Chinese and Spanish origin reside in the US, and the ASR did not work as well for them. So we trained ASR models optimized for Indian English, Korean English, Chinese English and Spanish English, using transfer learning to save training time as well as computing resources. We then had to find a way to load the model that would work best for an individual's voice. We incorporated an accent-determination step during Bixby onboarding, where the user is asked to speak five sentences. Word recognition accuracy is measured across all of these models, and we select a model based on ASR performance as well as other cues such as keyboard and contact information. Once determined, the accent selection is used as the default.

One big difference with Bixby is that we tried to build a multi-modal system which supports both touch and voice interfaces, so that a user can execute the same function with touch or with voice. We call this first version Bixby 1.0.

Voice assistants usually classify user utterances into commands without caring much about the screen status. In Bixby 1.0, we try to understand user utterances based on their screen context too: “find James” in the Contacts application should give you James's contact information, and “find James” in the Gallery application should give you the images tagged as James.

To support that kind of multi-modality, we modeled application screens as contexts in the dialog management system, which meant adding context awareness to traditional NLU to build a multi-modal NLU engine. The problem was that there were thousands of different screens that had to be modeled as different contexts. Moreover, we faced another context-awareness challenge that comes from supporting many device types: the set of commands varies from device to device, because devices have delta functions depending on model and locale. So we needed to handle changes in the command set as well.

Now let's look at the first challenge: the massive contextual input space. The input to the NLU engine is now not only the utterances for 6,000 commands, but also the context in which the user started talking. As in the previous slide, “find James” in the Gallery application should work differently from “find James” in the Contacts application. If we model this in the simplest way, we can maintain one command classifier per context. This would give the best accuracy, but the development cost is prohibitive: it means training and maintaining 2,000 classifiers. Instead, we have a hierarchical classifier in place: meta domain (for some domains), then domain, then intent. Because we have a session-based architecture, once we are inside a session we go to intent classification directly, bypassing domain classification. If the intent classifier rejects the utterance, we take the output of the domain classifier.

RNN domain classification was designed as a word-based model. The model converges fast, but it had issues with unknown words and performed poorly for variations of the client state from which the utterance was generated. For this reason, domain classification was moved to a character-based CNN model, which requires more data and also increases build time.

A word-based model has the well-known problem of unknown words, whereas a character-based model has no unknowns. However, a character-based model is not good at distinguishing between different words with similar spellings. For example, “search for s8 plus” goes to the Calculator domain because of the similar character sequence “8 plus” in Calculator-domain data.

For such a huge input space, there were extreme variations in the data, including lots of unknown words during the training phase. The unknowns hurt accuracy in many domains. That led us to experiment with the possibilities of CNNs with respect to RNNs.

There were also misclassifications for word inflections (where the word boundary goes beyond the representation). The CNN was a research candidate to counter the inflection problem we faced with the RNN. In the RNN, the state was not being learnt: the sentence is represented in vector space, but that space was too huge for a word-based RNN to handle, and unknown words were not handled well either. So we moved to CNNs, for both domain and intent classification. For the slot tagger, we continued with the RNN.

Once the migration to CNNs was done, the question was the optimal filter size for the CNN design. We conducted various experiments on different combinations of values of N in the N-gram CNN filters. Typically, smaller values of N are used for sub-word-level features, while larger values of N are used for understanding language structure. Various experiments were conducted to determine the best filter sizes to achieve commercial-quality accuracy. We have multiple filters of various sizes (2x2, 4x4, 6x6, etc.), and another CNN layer that gives the final output with a probabilistic score.

For such a huge input space, there were extreme variations in the data, while at the same time there were similarities between data items. So we needed tools to help resolve such data conflicts. We used techniques such as tf-idf, cosine similarity and policy conflict concept words to deal with this problem.

As discussed earlier, we built the DNN classifier to take the context as input as well as the utterance. We are now in good shape, since we have just one classifier across all contexts, but we still need to train this neural network with utterances under different contexts. For example, utterance A should be mapped to command 1 under context alpha or beta, while utterance B should be mapped to command 1 under context alpha and to command 2 under context beta. Maintaining the training set like this will serve the purpose, but training time and maintenance cost would still be prohibitive. So we needed a good sampling algorithm to pick the necessary training data; how well the sampling works ultimately determines the fluency of context understanding (a toy sketch of such training pairs follows below).

Samsung is known for releasing many device models throughout the year. With multi-modality, different device models have their own differences in UX, which challenges Bixby to handle a wide variety of output spaces. The architecture shown here handles this variable output space.

Speaker bio

Dr. Vij has over 26 years of industry experience in multiple technical domains, from databases, storage and file systems, and embedded systems to intelligent services and IoT. He has worked at Samsung since 2004 and is currently Senior Vice President and Voice Intelligence R&D Team Head at the Samsung R&D Institute in Bangalore. His current focus is on building the world's best voice intelligence experience for mobiles and other Samsung appliances. Dr. Vikram Vij received a Ph.D. and Master's degree in Computer Science from the University of California, Berkeley, an M.B.A. from Santa Clara University, and a B.Tech. in Electronics from IIT Kanpur.

Slides

https://www.slideshare.net/vinutharani1995/samsung-voice-intelligencev55-107403316
