Friday, 1 July 2016, 08:45 AM – 06:15 PM IST (conference)
Saturday, 2 July 2016, 08:15 AM – 02:15 PM IST (workshops)
Deep Learning is a new area of research that is getting us closer to achieving one of the primary objectives of Machine Learning: Artificial Intelligence.
It is used widely in the fields of Image Recognition, Natural Language Processing (NLP) and Video Classification.
Deep Learning Conf is a single-day conference followed by workshops on the second day. The conference will have full, crisp and lightning talks from morning to evening. The workshops on the next day will introduce participants to neural networks, followed by two tracks of three-hour workshops on NLP and Computer Vision / AI. Participants can join either of the two workshop tracks.
##Tracks
We are looking for talks and workshops from academics and practitioners of Deep Learning on the following topics:
We are inviting proposals for:
Proposals will be filtered and shortlisted by an Editorial Panel. Along with your proposal, you must share the following details:
If your proposal involves speaking about a library / tool / software that you intend to open source in future, the proposal will be considered only when the library / tool / software in question is made open source.
We will notify you about the status of your proposal within two to three weeks of submission.
Selected speakers have to participate in one or two rounds of rehearsals before the conference. This is mandatory and helps you prepare for speaking at the conference.
There is only one speaker per session. Entry is free for selected speakers. As our budget is limited, we will prefer speakers from locations closer to home, but will do our best to cover costs for anyone exceptional. HasGeek will provide a grant to cover part of your travel and accommodation in Bangalore. Grants are limited and made available to speakers delivering full sessions (40 minutes or longer).
HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source licence. If your software is commercially licensed or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), please consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.
##Venue
CMR Institute of Technology, Bangalore
##Contact
For more information about speaking proposals, tickets and sponsorships, contact info@hasgeek.com or call +91-7676332020.
Rajarshee Mitra (@rajarsheem)
Submitted May 29, 2016
The proposed talk aims to provide a thorough explanation of language modelling (word and sentence embeddings) and the application of RNNs and LSTMs to text: predicting text, mapping sentences to sentences, and building chatbots.
Deep Learning has heavily impacted natural language processing. Recent advances include automatically writing poetry and essays, and converting words and sentences into semantic representations called embeddings, which can be used for several tasks such as classification (sentiment, category) and semantic similarity. This is called language modelling.
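For a concrete feel of what these embeddings buy you, here is a minimal sketch (not from the talk; it assumes gensim and its pretrained GloVe download) of the analogy arithmetic and similarity queries mentioned above:

```python
# Minimal sketch: word-vector analogies with gensim's pretrained GloVe vectors.
# Assumes `pip install gensim` and network access for the first download.
import gensim.downloader as api

# Load 50-dimensional GloVe vectors trained on Wikipedia + Gigaword.
wv = api.load("glove-wiki-gigaword-50")

# The classic analogy: king - man + woman ~= queen.
print(wv.most_similar(positive=["king", "woman"], negative=["man"], topn=3))

# Semantic similarity between words falls out of the same vectors.
print(wv.similarity("paris", "france"))
```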
I propose to start my talk with neural language models: the methods, their improvements, and how these vectors can effectively change the way we look at words, with some very interesting analogies to support it. I will give an overview of how these vectors can be used in traditional problems like NER (detecting entities in text).

Then we will introduce the RNN family and its application to sequence learning. RNNs completely change the way we deal with text (or any sequence), and a whole new research area has opened up. The RNN family can outperform shallow MLPs in most basic tasks such as classification and analysing sentiments and ironies. RNNs can also predict and generate text effectively; this is used in interesting applications like writing Shakespeare-style drama or Linux kernel source code. We will also discuss how an RNN differs from an MLP, how a GRU (or LSTM) differs from a vanilla RNN, how RNNs are trained, and how to overcome some of the problems in vanilla RNNs.

RNNs can further be used to embed sentences more effectively (i.e. converting sentences to vectors), and in sequence-to-sequence learning, which is essentially mapping one sentence to another. I will demonstrate how the encoder and decoder work in sequence-to-sequence learning; this concept is used both in translating languages and in building conversational models. Finally, I will go in depth into the state-of-the-art attention models that have arrived very recently and demystify them.
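As a rough illustration of the kind of LSTM sentence-classification model described above, here is a minimal Keras sketch (an assumed typical setup, not the speaker's code), using the IMDB sentiment dataset that ships with Keras:

```python
# Minimal sketch: LSTM sentence classification (sentiment) on IMDB reviews.
# Assumes TensorFlow/Keras is installed; hyperparameters are illustrative.
from tensorflow.keras.datasets import imdb
from tensorflow.keras.preprocessing.sequence import pad_sequences
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, LSTM, Dense

vocab_size, maxlen = 20000, 200
(x_train, y_train), (x_test, y_test) = imdb.load_data(num_words=vocab_size)
x_train = pad_sequences(x_train, maxlen=maxlen)  # pad/truncate to fixed length
x_test = pad_sequences(x_test, maxlen=maxlen)

model = Sequential([
    Embedding(vocab_size, 128),       # word indices -> dense embeddings
    LSTM(64),                         # read the sequence, keep the final state
    Dense(1, activation="sigmoid"),   # binary sentiment prediction
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(x_train, y_train, batch_size=64, epochs=2,
          validation_data=(x_test, y_test))
```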
Proposed outline of the talk:
- Language modelling with feed-forward nets (word embeddings): CBOW and skip-gram.
- Application to Named Entity Recognition.
- Paragraph vectors and sentence similarity.
- What is an RNN? RNN vs MLP; RNN vs LSTM and GRU. Designing and training; backpropagation through time.
- Difficulties: exploding and vanishing gradients, and how to overcome them.
- Basic applications of RNNs, GRUs and LSTMs to text: sentence classification and sentiment analysis.
- Predicting and generating text.
- Sentence embedding using LSTMs or GRUs.
- Attention models and memory.
- Sequence-to-sequence learning: encoding and decoding (see the sketch after this outline).
- Examples: conversational models (chatbots), machine translation.
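As referenced in the outline, here is a minimal sketch of the encoder-decoder wiring for sequence-to-sequence learning, following the standard Keras functional-API pattern. The dimensions and teacher-forcing setup are illustrative assumptions; real systems add tokenisation, attention and beam search.

```python
# Minimal sketch: seq2seq encoder-decoder (toy dimensions, assumed setup).
from tensorflow.keras.models import Model
from tensorflow.keras.layers import Input, LSTM, Dense

num_tokens, latent_dim = 1000, 256   # toy vocabulary size and hidden size

# Encoder: read the source sentence, keep only the final LSTM states.
encoder_inputs = Input(shape=(None, num_tokens))
_, state_h, state_c = LSTM(latent_dim, return_state=True)(encoder_inputs)

# Decoder: generate the target sentence, initialised with the encoder states.
decoder_inputs = Input(shape=(None, num_tokens))
decoder_outputs, _, _ = LSTM(latent_dim, return_sequences=True,
                             return_state=True)(decoder_inputs,
                                                initial_state=[state_h, state_c])
decoder_outputs = Dense(num_tokens, activation="softmax")(decoder_outputs)

# Trained with teacher forcing: decoder inputs are the targets shifted one step.
model = Model([encoder_inputs, decoder_inputs], decoder_outputs)
model.compile(optimizer="adam", loss="categorical_crossentropy")
model.summary()
```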
Prerequisites: knowledge of feed-forward neural nets and one-hot encodings.
I am a dedicated NLP practitioner focusing mainly on the intersection of deep learning and NLP, and a research engineer at Snapshopr, Bangalore. I currently work on problems that arise in language: embedding methods, interpreting and generating text, and seq2seq.