Deep Learning Conf 2016

A conference on deep learning.

Deep Learning is a new area of research that is bringing us closer to achieving one of the primary objectives of Machine Learning: Artificial Intelligence.
It is widely used in the fields of Image Recognition, Natural Language Processing (NLP) and Video Classification.

Format

Deep Learning Conf is a single-day conference followed by workshops on the second day. The conference will feature full-length, crisp and lightning talks from morning to evening. The workshops on the next day will introduce participants to neural networks, followed by two parallel three-hour workshop tracks on NLP and Computer Vision / AI. Participants can join either of the two workshop tracks.

Tracks

We are looking for talks and workshops from academics and practitioners of Deep Learning on the following topics:

  • Applications of Deep Learning in software.
  • Applications of Deep Learning in hardware.
  • Conceptual talks and cutting edge research on Deep Learning.
  • Building businesses with Deep Learning at the core.

We are inviting proposals for:

  • Full-length talks of 40 minutes.
  • Crisp talks of 15 minutes.
  • Lightning talks of 5 minutes.

Selection process

Proposals will be filtered and shortlisted by an Editorial Panel. Along with your proposal, you must share the following details:

  • Links to videos / slide decks when submitting proposals. This will help us understand your past speaking experience.
  • Blog posts you may have written related to your proposal.
  • Outline of your proposed talk – either in the form of a mind map or a text document or draft slides.

If your proposal involves speaking about a library / tool / software that you intend to open source in the future, the proposal will be considered only after the library / tool / software in question has been open sourced.

We will notify you about the status of your proposal within two to three weeks of submission.

Selected speakers must participate in one to two rounds of rehearsals before the conference. This is mandatory and helps you prepare for speaking at the conference.

There is only one speaker per session. Entry is free for selected speakers. As our budget is limited, we will prefer speakers from locations closer to home, but will do our best to accommodate anyone exceptional. HasGeek will provide a grant to cover part of your travel and accommodation in Bangalore. Grants are limited and are available only to speakers delivering full sessions (40 minutes or longer).

Commitment to open source

HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source licence. If your software is commercially licensed or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), please consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.

Key dates and deadlines

  • Proposal submission deadline: 31 May 2016
  • Schedule announcement: 15 June 2016
  • Conference dates: 1 July 2016

Venue

CMR Institute of Technology, Bangalore

Contact

For more information about speaking proposals, tickets and sponsorships, contact info@hasgeek.com or call +91-7676332020.

Hosted by

The Fifth Elephant, known as one of the best data science and Machine Learning conferences in Asia, has transitioned into a year-round forum for conversations about data and ML engineering, data science in production, and data security and privacy practices.

Pradyumna Reddy

@pradyu1993

Residual Learning and Stochastic Depth in Deep Neural Networks

Submitted May 6, 2016

The talk will introduce Deep Residual Learning and provide an in-depth understanding of how Residual Networks work. It will also cover the stochastic depth method, which makes it possible to grow residual networks beyond 1,200 layers.

Outline

Residual networks won first place in the latest ILSVRC image classification challenge, achieving state-of-the-art performance and surpassing the earlier VGG networks. Stochastic Depth is a remarkably simple idea that helps train these networks even when the maximum depth is on the order of thousands of layers.
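
For context, here is a minimal sketch of an identity-shortcut residual block, assuming TensorFlow's Keras API; the layer sizes and stacking below are illustrative and not taken from the talk:

```python
import tensorflow as tf
from tensorflow.keras import layers

def residual_block(x, filters=64):
    """y = x + F(x): the block learns the residual F rather than a full mapping."""
    shortcut = x
    y = layers.Conv2D(filters, 3, padding="same", use_bias=False)(x)
    y = layers.BatchNormalization()(y)
    y = layers.ReLU()(y)
    y = layers.Conv2D(filters, 3, padding="same", use_bias=False)(y)
    y = layers.BatchNormalization()(y)
    y = layers.Add()([shortcut, y])   # identity shortcut
    return layers.ReLU()(y)

# Illustrative usage: stack two blocks and inspect the shapes.
inputs = tf.keras.Input(shape=(32, 32, 64))
outputs = residual_block(residual_block(inputs))
model = tf.keras.Model(inputs, outputs)
model.summary()
```

Because each block only has to learn a residual on top of the identity shortcut, gradients flow through the skip connections and very deep stacks remain trainable.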

The talk would cover the following:

  1. An introduction to convolution layers, batch normalization and ReLU, depending on the audience's comfort level with these concepts.
  2. A brief introduction to the Deep Neural Network architectures that previously won ILSVRC.
  3. Deep Residual Learning and how to implement residual networks in TensorFlow.
  4. Deep Neural Networks with stochastic depth (see the sketch after this list).
  5. If time permits, a discussion of other similar architectures such as Recombinator Networks and summation-based networks.
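
As a rough illustration of the stochastic depth idea in point 4: during training, the residual branch of a block is dropped entirely with some probability, and at test time it is kept but scaled by its survival probability. The sketch below assumes TensorFlow's Keras API, and the `survival_prob` value is a hypothetical placeholder, not a figure from the talk:

```python
import tensorflow as tf
from tensorflow.keras import layers

class StochasticDepthAdd(layers.Layer):
    """Adds the residual branch with probability `survival_prob` during training;
    at inference the branch is always kept but scaled by `survival_prob`."""
    def __init__(self, survival_prob=0.8, **kwargs):
        super().__init__(**kwargs)
        self.survival_prob = survival_prob

    def call(self, inputs, training=None):
        shortcut, residual = inputs
        if training:
            # Drop the whole residual branch for this mini-batch with prob 1 - p.
            keep = tf.cast(tf.random.uniform([]) < self.survival_prob, residual.dtype)
            return shortcut + keep * residual
        return shortcut + self.survival_prob * residual
```

Dropping whole residual branches at random shortens the effective depth seen during training while preserving the full depth at test time.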

Speaker bio

Pradyumna is a Statistical Analyst at @WalmartLabs. He completed his undergraduate degree in Computer Science from BITS Pilani Goa Campus. He did his undergraduate thesis under Prof. Yogesh Rathi, Director of Pediatric Image Computing at the Psychiatry Neuroimaging Lab, Harvard Medical School. He was also a member of the Board of Reviewers at the 23rd WSCG International Conference on Computer Graphics, Visualization and Computer Vision 2015.

