Deep Learning Conf 2016

A conference on deep learning.

Deep Learning is an area of research that is bringing us closer to achieving one of the primary objectives of Machine Learning: Artificial Intelligence.
It is used widely in the fields of Image Recognition, Natural Language Processing (NLP) and Video Classification.

Format

Deep Learning Conf is a single day conference followed by workshops on the second day. The conference will have full, crisp and lightning talks from morning to evening. The workshops on the next day will introduce participants to neural networks followed by two tracks of three-hour workshops on NLP and Computer Vision / AI. Participants can join either one of the two workshop tracks.

Tracks

We are looking for talks and workshops from academics and practitioners of Deep Learning on the following topics:

  • Applications of Deep Learning in software.
  • Applications of Deep Learning in hardware.
  • Conceptual talks and cutting-edge research on Deep Learning.
  • Building businesses with Deep Learning at the core.

We are inviting proposals for:

  • Full-length 40 minute talks.
  • Crisp 15-minute talks.
  • Lightning talks of 5 minutes.

Selection process

Proposals will be filtered and shortlisted by an Editorial Panel. Along with your proposal, you must share the following details:

  • Links to videos / slide decks when submitting proposals. This will help us understand your past speaking experience.
  • Blog posts you may have written related to your proposal.
  • Outline of your proposed talk – either in the form of a mind map or a text document or draft slides.

If your proposal involves speaking about a library / tool / software that you intend to open source in future, the proposal will be considered only when the library / tool / software in question is made open source.

We will notify you about the status of your proposal within two to three weeks of submission.

Selected speakers have to participate in one or two rounds of rehearsals before the conference. This is mandatory and helps you prepare for speaking at the conference.

There is only one speaker per session. Entry is free for selected speakers. As our budget is limited, we will prefer speakers from locations closer to home, but will do our best to accommodate exceptional speakers from elsewhere. HasGeek will provide a grant to cover part of your travel and accommodation in Bangalore. Grants are limited and made available to speakers delivering full sessions (40 minutes or longer).

Commitment to open source

HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source licence. If your software is commercially licensed or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), please consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.

Key dates and deadlines

  • Proposal submission deadline: 31 May 2016
  • Schedule announcement: 15 June 2016
  • Conference dates: 1 July 2016

Venue

CMR Institute of Technology, Bangalore

Contact

For more information about speaking proposals, tickets and sponsorships, contact info@hasgeek.com or call +91-7676332020.

Hosted by

The Fifth Elephant - known as one of the best data science and Machine Learning conferences in Asia - has transitioned into a year-round forum for conversations about data and ML engineering; data science in production; and data security and privacy practices.

Anand Chandrasekaran

@anandchandrasekaran

Deep learning: A convoluted overview with recurrent themes and beliefs.

Submitted May 21, 2016

The data needed to represent the world around us is daunting. But somehow, we need to capture a lot of that information explicitly or implicitly to create ‘intelligent’ machines. It was formerly believed that explicitly capturing all this information would give rise to Artificial Intelligence through clever programming. But with every passing decade, despite the rise in computational power, this approach only resulted in hype, and in at least two AI winters.

Instead, the field moved on to trying to implicitly represent the world. If a machine can mathematically infer the variance in the data, and then also infer how different parts interact, then it could be described as ‘understanding’ those factors. But we simply don’t have a large enough database to capture all that variance. A plausible alternative is to have a hierarchical model of the world, where each level of the hierarchy transforms the ‘raw’ data into more abstract representations. The higher the level, the more abstract the representation, resulting in more human-recognisable concepts, such as scenes or language.
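The hierarchy described above can be sketched in a few lines of NumPy. This is purely an illustration, not the speaker's method: each "level" is a random linear transform followed by a nonlinearity (the basic building block of a deep network), and each successive level re-describes the data in fewer, more abstract terms. The layer sizes and the ReLU nonlinearity are assumptions chosen for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)

def level(x, in_dim, out_dim):
    """One level of the hierarchy: a linear transform followed by a
    nonlinearity, mapping the input to a more abstract representation.
    (Weights are random here; in practice they would be learnt.)"""
    w = rng.normal(scale=0.1, size=(in_dim, out_dim))
    return np.maximum(0.0, x @ w)  # ReLU nonlinearity

# 'Raw' data: a batch of 4 inputs with 64 features (e.g. pixel values).
x = rng.normal(size=(4, 64))

# Each level transforms the previous representation into a more abstract one.
h1 = level(x, 64, 32)   # low-level features (edges, in the image analogy)
h2 = level(h1, 32, 16)  # mid-level features (parts)
h3 = level(h2, 16, 8)   # high-level features (objects or scenes)

print(h3.shape)
```

The point of the talk's argument is that with random weights this hierarchy is meaningless; it is the learning of those weights from large datasets, discussed next, that makes the abstractions useful.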

It would be preferable if these abstractions were learnt by the machines themselves, and we have kind of known how to get started with that for several decades now. Pretty much every algorithm you hear about in these talks was invented decades ago. But historically, we did not have sufficient computational power, nor did we have large enough datasets to seed these machines and use those algorithms. Instead, Machine Learning was dominated by hand-crafted feature descriptors and support vector machines. These models were simpler, and used other implicit knowledge, namely human intuition, to great effect.

With the arrival of graphics processors, and programming tools to exploit them, in the mid 2000s, deep architectures that could automatically learn these abstract representations suddenly became plausible and could be trained quickly. At the same time, the vast amount of data on the web, both images and text, was available and ready to be used. Now, hopefully, we can start building machines that better ‘understand’ the world around us, and we could maybe, just maybe, ask them some intelligent questions.

Hopefully the answer is not 42.

Outline

The talk is meant to serve as an overview for the conference. It will briefly describe the history and origin of deep learning, contrast it with ‘traditional’ machine learning, and touch upon the dominant concepts that shape its practice today.

Speaker bio

Dr. Anand Chandrasekaran is a founder and the CTO of Mad Street Den, an AI company specializing in computer vision. In addition to an academic background in the fields of neuroscience and neuromorphic engineering, he has been a member of teams working on DARPA projects in cognition and vision.

