Deep Learning Conf 2016

A conference on deep learning.

Deep Learning is a new area of research that is getting us closer to achieving one of the primary objectives of Machine Learning: Artificial Intelligence.
It is used widely in the fields of Image Recognition, Natural Language Processing (NLP) and Video Classification.

Format

Deep Learning Conf is a single day conference followed by workshops on the second day. The conference will have full, crisp and lightning talks from morning to evening. The workshops on the next day will introduce participants to neural networks followed by two tracks of three-hour workshops on NLP and Computer Vision / AI. Participants can join either one of the two workshop tracks.

Tracks

We are looking for talks and workshops from academics and practitioners of Deep Learning on the following topics:

  • Applications of Deep Learning in software.
  • Applications of Deep Learning in hardware.
  • Conceptual talks and cutting edge research on Deep Learning.
  • Building businesses with Deep Learning at the core.

We are inviting proposals for:

  • Full-length talks: 40 minutes.
  • Crisp talks: 15 minutes.
  • Lightning talks: 5 minutes.

Selection process

Proposals will be filtered and shortlisted by an Editorial Panel. Along with your proposal, you must share the following details:

  • Links to videos / slide decks when submitting proposals. This will help us understand your past speaking experience.
  • Blog posts you may have written related to your proposal.
  • Outline of your proposed talk – either in the form of a mind map or a text document or draft slides.

If your proposal involves speaking about a library / tool / software that you intend to open source in future, the proposal will be considered only when the library / tool / software in question is made open source.

We will notify you about the status of your proposal within two to three weeks of submission.

Selected speakers must participate in one or two rounds of rehearsals before the conference. Rehearsals are mandatory and help you prepare for speaking at the conference.

There is only one speaker per session. Entry is free for selected speakers. As our budget is limited, we will prefer speakers from locations closer to home, but will do our best to cover costs for anyone exceptional. HasGeek will provide a grant to cover part of your travel and accommodation in Bangalore. Grants are limited and made available to speakers delivering full sessions (40 minutes or longer).

Commitment to open source

HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source licence. If your software is commercially licensed or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), please consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.

Key dates and deadlines

  • Proposal submission deadline: 31 May 2016
  • Schedule announcement: 15 June 2016
  • Conference dates: 1 July 2016

Venue

CMR Institute of Technology, Bangalore

Contact

For more information about speaking proposals, tickets and sponsorships, contact info@hasgeek.com or call +91-7676332020.

Hosted by

The Fifth Elephant, known as one of the best data science and Machine Learning conferences in Asia, has transitioned into a year-round forum for conversations about data and ML engineering, data science in production, and data security and privacy practices.

Rajarshee Mitra

@rajarsheem

Sequence learning

Submitted May 29, 2016

The proposed talk aims to provide a thorough explanation of language modelling (word and sentence embeddings) and the application of RNNs and LSTMs to text: predicting text, mapping sentence to sentence, and building chatbots.

Deep Learning has heavily impacted natural language processing. Recent advancements include automatically writing poetry and essays, and converting words and sentences into semantic representations called embeddings, which can be used to carry out several tasks such as classification (sentiment, category) and semantic similarity. This is called language modelling.
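
As a concrete illustration of how word-embedding models of the skip-gram family are trained, here is a toy generator of (center, context) training pairs; the sentence and window size are made-up examples, and a real pipeline would feed these pairs into a neural network to learn the embedding vectors:

```python
def skipgram_pairs(tokens, window=2):
    """Generate (center, context) training pairs for a skip-gram model.

    Each word is paired with every other word within `window`
    positions of it, in both directions.
    """
    pairs = []
    for i, center in enumerate(tokens):
        lo = max(0, i - window)
        hi = min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                pairs.append((center, tokens[j]))
    return pairs

pairs = skipgram_pairs("the cat sat on the mat".split(), window=1)
```

With `window=1` each interior word contributes two pairs (its left and right neighbours), so the six-word toy sentence yields ten pairs in total.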

I propose to start my talk with neural language models: their methods, their improvements, and how the resulting vectors can effectively change the way we look at words, supported by some very interesting analogies. I will give an overview of how these vectors can be used in traditional problems like NER (detecting entities in text).

Then we will introduce the RNN family and its application to sequence learning. RNNs completely change the way we deal with text (or sequences), and a whole new research area has opened up. The RNN family can outperform shallow MLPs in most basic tasks such as classification and analysing sentiment and irony. RNNs can predict and generate text effectively; this is used in interesting applications like writing Shakespeare-like drama or the source code of the Linux kernel. We will also discuss how an RNN differs from an MLP, how a GRU (or LSTM) differs from a vanilla RNN, how RNNs are trained, and how we can overcome some problems in vanilla RNNs.

RNNs can also be used to embed sentences more effectively (i.e. converting sentences to vectors) and in sequence-to-sequence learning, which is essentially mapping one sentence to another. I will demonstrate how the encoder and decoder work in sequence-to-sequence learning; this is used both in translating languages and in building conversational models. Finally, I will go in-depth on the state-of-the-art attention models that have arrived very recently, and demystify them.
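
The recurrence at the heart of a vanilla RNN can be sketched in a few lines of numpy; all sizes, weights and the token sequence below are toy values (a real model learns the weights by backpropagation through time):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, hidden = 8, 4  # toy vocabulary and hidden-state sizes

# parameters of a vanilla RNN cell (randomly initialised, untrained)
Wxh = rng.normal(scale=0.1, size=(hidden, vocab))
Whh = rng.normal(scale=0.1, size=(hidden, hidden))
bh = np.zeros(hidden)

def rnn_step(x, h):
    """One recurrence: the new hidden state mixes the current input
    with the previous hidden state through a tanh nonlinearity."""
    return np.tanh(Wxh @ x + Whh @ h + bh)

h = np.zeros(hidden)
for t in [0, 3, 1]:          # a toy sequence of token ids
    x = np.eye(vocab)[t]     # one-hot input for token t
    h = rnn_step(x, h)       # h now summarizes the whole prefix
```

The same skeleton underlies GRUs and LSTMs; they replace the single tanh update with gated updates so gradients survive over longer sequences.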

Outline

Proposed outline of the talk:

  1. Language modelling with feed-forward nets (word embeddings): CBOW and skip-gram.
    Application to Named Entity Recognition.

  2. Paragraph vectors and sentence similarity.

  3. What is an RNN? RNN vs MLP; RNN vs LSTM and GRU. Designing and training; backpropagation through time.
    Difficulties: exploding and vanishing gradients, and how to overcome them.

  4. Basic applications of RNNs, GRUs and LSTMs to text: sentence classification and sentiment analysis.

  5. Predicting and generating text.

  6. Sentence embedding using LSTMs or GRUs.

  7. Attention models and Memory.

  8. Sequence-to-sequence learning: encoding and decoding.
    Examples: conversational models (chatbots) and machine translation.
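
The encoder-decoder idea in item 8 can be sketched with the same vanilla-RNN recurrence: the encoder compresses the source sentence into its final hidden state (the "thought vector"), and the decoder unrolls from that state, feeding back its own predictions. All sizes, weights and token ids below are toy values, and real models use embeddings, GRU/LSTM cells and learned parameters:

```python
import numpy as np

rng = np.random.default_rng(1)
V, H = 6, 5  # toy vocabulary and hidden-state sizes

def make_cell():
    """Random, untrained parameters for one vanilla RNN cell."""
    return (rng.normal(scale=0.1, size=(H, V)),
            rng.normal(scale=0.1, size=(H, H)))

Wxe, Whe = make_cell()                     # encoder cell
Wxd, Whd = make_cell()                     # decoder cell
Who = rng.normal(scale=0.1, size=(V, H))   # output projection to vocab

def step(Wx, Wh, x, h):
    return np.tanh(Wx @ x + Wh @ h)

# Encode: the final hidden state summarizes the source sequence.
h = np.zeros(H)
for t in [2, 4, 1]:                # toy source token ids
    h = step(Wxe, Whe, np.eye(V)[t], h)

# Decode: unroll from the thought vector, feeding back the argmax token.
out, x = [], np.eye(V)[0]          # 0 stands in for a start-of-sequence token
for _ in range(3):                 # generate three target tokens
    h = step(Wxd, Whd, x, h)
    tok = int(np.argmax(Who @ h))
    out.append(tok)
    x = np.eye(V)[tok]
```

Attention models refine this by letting the decoder look back at every encoder state instead of relying on the single thought vector.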

Requirements

Knowledge of feed-forward neural nets and one-hot encodings.
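
For reference, a one-hot encoding maps a token index to a vector that is zero everywhere except at that index; a minimal illustration:

```python
def one_hot(index, size):
    """Return a length-`size` vector with a 1.0 at position `index`."""
    vec = [0.0] * size
    vec[index] = 1.0
    return vec
```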

Speaker bio

I am a dedicated NLP practitioner focusing mainly on the intersection of DL and NLP, and a research engineer at Snapshopr, Bangalore. I am currently working on problems that appear in language: embedding methods, interpreting and generating text, and seq2seq.

