Jul 2016
Conference: Friday, 1 July, 08:45 AM – 06:15 PM IST
Workshops: Saturday, 2 July, 08:15 AM – 02:15 PM IST
Deep Learning is a new area of research that is getting us closer to achieving one of the primary objectives of Machine Learning: Artificial Intelligence. It is widely used in the fields of Image Recognition, Natural Language Processing (NLP) and Video Classification.
Deep Learning Conf is a single day conference followed by workshops on the second day. The conference will have full, crisp and lightning talks from morning to evening. The workshops on the next day will introduce participants to neural networks followed by two tracks of three-hour workshops on NLP and Computer Vision / AI. Participants can join either one of the two workshop tracks.
## Tracks
We are looking for talks and workshops from academics and practitioners of Deep Learning on the following topics:
We are inviting proposals for:
Proposals will be filtered and shortlisted by an Editorial Panel. Along with your proposal, you must share the following details:
If your proposal involves speaking about a library / tool / software that you intend to open source in future, the proposal will be considered only when the library / tool / software in question is made open source.
We will notify you about the status of your proposal within two to three weeks of submission.
Selected speakers have to participate in one or two rounds of rehearsals before the conference. This is mandatory and helps you prepare for speaking at the conference.
There is only one speaker per session. Entry is free for selected speakers. As our budget is limited, we will prefer speakers from locations closer to home, but will do our best to cover costs for anyone exceptional. HasGeek will provide a grant to cover part of your travel and accommodation in Bangalore. Grants are limited and made available to speakers delivering full sessions (40 minutes or longer).
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source licence. If your software is commercially licensed or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), please consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
## Venue
CMR Institute of Technology, Bangalore
## Contact
For more information about speaking proposals, tickets and sponsorships, contact info@hasgeek.com or call +91-7676332020.
Nishant Sinha (@ekshaks), submitted May 31, 2016
Building conversational assistants that help users get jobs done, e.g., order food, book tickets or buy phones, is a complex task. Your bot needs to understand ambiguous natural language inputs, guess the user's intent and context, extract relevant entities, look up catalogs, generate responses to elicit more information, build the user's profile, and finally create and fulfill orders! While deep learning cannot create an end-to-end assistant for you automatically, it can certainly help with several of the tasks above.
In this talk, I’ll discuss how deep learning can be used for natural language understanding, in particular, to solve the problem of slot-filling. For instance, from the sentence ‘recharge 9900990099 for Rs 100’, we can fill up two slots needed by our recharge bot: phone_number = 9900990099, recharge_amount = 100.
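To make concrete what "filling slots" means for this example, here is a minimal, hypothetical rule-based extractor; the function name and patterns are assumptions for illustration only, and a real assistant would use a learned sequence labeler rather than regular expressions:

```python
import re

# Hypothetical rule-based slot filler for the recharge example from the
# abstract. Only illustrates the input/output of slot-filling; it is not
# the learned approach discussed in the talk.
def fill_slots(sentence):
    slots = {}
    phone = re.search(r"\b(\d{10})\b", sentence)     # 10-digit phone number
    amount = re.search(r"\bRs\s*(\d+)\b", sentence)  # amount like "Rs 100"
    if phone:
        slots["phone_number"] = phone.group(1)
    if amount:
        slots["recharge_amount"] = int(amount.group(1))
    return slots

print(fill_slots("recharge 9900990099 for Rs 100"))
# {'phone_number': '9900990099', 'recharge_amount': 100}
```

Hand-written rules like these break down quickly on paraphrases ("put a hundred rupees on my phone"), which is exactly the gap learned sequence labelers address.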
Slot-filling is an instance of the more complex semantic parsing problem. While the latter requires building sophisticated parse trees, slot-filling is, in essence, a sentence labeling problem. Historically, methods based on conditional random fields (CRFs) have been used to solve the slot-filling problem. Not surprisingly, deep learning methods now outperform CRFs for sequence labeling tasks as well. I will present multiple recurrent neural network (RNN) variations for the sequence labeling problem and discuss their relative advantages. I'll also present encodings which trade off local word-level loss functions against sequence-level loss functions over RNNs, in order to gain the full power of CRFs.
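To make the sequence-labeling framing concrete, here is a toy sketch of an Elman RNN forward pass (one of the variants named in the outline) that emits one tag per token. The tag set, dimensions and random weights are placeholder assumptions for an untrained model, not the speaker's implementation:

```python
import numpy as np

# Assumed BIO-style tag set for the recharge example (illustrative only).
TAGS = ["O", "B-phone_number", "B-recharge_amount"]

def elman_tagger(embeddings, W, U, V, b, c):
    """Elman RNN: h_t = tanh(W h_{t-1} + U x_t + b), tag_t = argmax(V h_t + c).

    embeddings: (seq_len, d_in) array of token vectors.
    Returns one tag index per token.
    """
    h = np.zeros(W.shape[0])
    tags = []
    for x in embeddings:
        h = np.tanh(W @ h + U @ x + b)  # recurrent hidden state
        logits = V @ h + c              # per-token classification
        tags.append(int(np.argmax(logits)))
    return tags

rng = np.random.default_rng(0)
d_in, d_h, n_tags = 8, 16, len(TAGS)
seq = rng.normal(size=(6, d_in))  # 6 tokens with placeholder embeddings
tags = elman_tagger(seq,
                    rng.normal(size=(d_h, d_h)), rng.normal(size=(d_h, d_in)),
                    rng.normal(size=(n_tags, d_h)),
                    np.zeros(d_h), np.zeros(n_tags))
print([TAGS[t] for t in tags])  # one (meaningless, untrained) tag per token
```

Because each tag is predicted independently from the local hidden state, this setup optimizes a word-level loss; the sequence-level encodings mentioned above are what recover CRF-style joint consistency across tags.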
- Building conversational assistants
- The slot-filling problem
- CRFs and RNNs
- RNN variations: Elman, Jordan, BiRNNs
- Optimizing RNNs for slot-filling
Nishant Sinha is an experienced computer scientist and researcher with expertise in deductive and inductive inference, conversational interfaces and distributed systems. He has worked at reputed industrial research labs including NEC Labs USA and IBM Research India and mentored several graduate students. He has published in top-tier international academic conferences and has several patents to his credit. At MagicX, Nishant helps build smart personal assistants for getting tasks done.