Anthill Inside 2017

On theory and concepts in Machine Learning, Deep Learning and Artificial Intelligence. Formerly Deep Learning Conf.

##About Anthill Inside:
In 2016, The Fifth Elephant branched into a separate conference on Deep Learning. Anthill Inside is the new avatar of the Deep Learning conference.
Anthill Inside attempts to bridge the gap between theoretical advances and functioning reality.
Proposals are invited for full-length talks, crisp talks and poster/demo sessions in the area of ML+DL. The talks need to focus on the techniques used, and may be presented independently of the domain in which they are applied.
We also invite talks on novel applications of ML+DL, and methods of realising the same in hardware/software.
Case studies of how DL and ML have been applied in different domains will continue to be discussed at The Fifth Elephant.

https://anthillinside.in/2017/

Topics: we are looking for talks covering the following:

  • Machine Learning with end-to-end application
  • Deep Learning
  • Artificial Intelligence
  • Hardware / software implementations of advanced Machine Learning and Deep Learning
  • IoT and Deep Learning
  • Operations research and Machine Learning

##Format:
Anthill Inside is a two-track conference:

  • Talks in the main auditorium and hall 2.
  • Birds of a Feather (BOF) sessions in the expo area.

We are inviting proposals for:

  • Full-length 40-minute talks.
  • Crisp 15-minute how-to talks or introduction to a new technology.
  • Sponsored sessions, of 15 minutes and 40 minutes duration (limited slots available; subject to editorial scrutiny and approval).
  • Hands-on workshop sessions of 3- and 6-hour duration, where participants follow instructors on their laptops.
  • Birds of a Feather (BOF) sessions.

You must submit the following details along with your proposal, or within 10 days of submission:

  1. Draft slides, mind map or a textual description detailing the structure and content of your talk.
  2. Link to a self-recorded, two-minute preview video, where you explain what your talk is about and the key takeaways for participants. This preview video helps conference editors understand the lucidity of your thoughts and how invested you are in presenting insights beyond your use case. Please note that the preview video must be submitted irrespective of whether you have spoken at past editions of The Fifth Elephant or at last year's Deep Learning conference.
  3. If you submit a workshop proposal, you must specify the target audience for your workshop; duration; number of participants you can accommodate; pre-requisites for the workshop; link to GitHub repositories and documents showing the full workshop plan.

##Selection Process:

  1. Proposals will be filtered and shortlisted by an Editorial Panel.
  2. Proposers, editors and community members must respond to comments as openly as possible so that the selection process is transparent.
  3. Proposers are also encouraged to vote and comment on other proposals submitted here.

We expect you to submit an outline of your proposed talk (a mind map, a text document or draft slides) within two weeks of submitting your proposal, so that we can start evaluating it.

Selection Process Flowchart

You can check back on this page for the status of your proposal. We will notify you when we either move your proposal to the next round or reject it. Selected speakers must participate in one or two rounds of rehearsals before the conference. This is mandatory and helps you prepare well for the conference.

A speaker is NOT confirmed for a slot unless we explicitly say so in an email or over any other medium of communication.

There is only one speaker per session. Entry is free for selected speakers.

We might contact you to ask if you’d like to repost your content on the official conference blog.

##Travel Grants:

Partial or full grants, covering travel and accommodation, are made available to speakers delivering full sessions (40 minutes) and workshops. Grants are limited, and are given in order of preference to students, women, persons of non-binary genders, and speakers from Asia and Africa.

##Commitment to Open Source:

We believe in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source licence. If your software is commercially licensed, or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), you should consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but ask that you support the conference in return for giving you an audience. Your session will be marked on the schedule as a “sponsored session”.

##Important Dates:

  • Deadline for submitting proposals: July 10
  • First draft of the conference schedule: July 15
  • Tutorial and workshop announcements: June 30
  • Final conference schedule: July 20
  • Conference date: July 30

##Contact:

For more information about speaking proposals, tickets and sponsorships, contact info@hasgeek.com or call +91-7676332020.

Please note that we will not evaluate proposals that do not include a slide deck and a preview video.


Laksh Arora

@techedlaksh

Demystifying Visual Question Answering

Submitted Jun 14, 2017

We are witnessing a renewed excitement in multi-discipline Artificial Intelligence (AI) research problems. In particular, research in image and video captioning that combines Computer Vision (CV), Natural Language Processing (NLP), and Knowledge Representation & Reasoning (KR) has dramatically increased in the past year. Since Alan Turing developed the Turing Test, it has been an important concept in the philosophy of Artificial Intelligence: a test of a machine’s ability to exhibit intelligent behaviour equivalent to, or indistinguishable from, that of a human. In the last couple of years, a number of papers have suggested that the task of Visual Question Answering (VQA) can be used as an alternative Turing Test. The task of Visual Question Answering involves answering an open-ended question (or a series of questions) about an image.

A VQA system takes as input an image and a free-form, open-ended, natural language question about the image, and produces a natural language answer as the output. This goal-driven task is applicable to scenarios encountered when visually impaired users or intelligence analysts actively elicit visual information. Open-ended questions require a potentially vast set of AI capabilities to answer: fine-grained recognition (e.g. “What kind of cheese is on the pizza?”), object detection (e.g. “How many bikes are there?”) and others such as activity recognition and knowledge-base reasoning.

This talk will benefit those who are interested in advanced applications of Deep Neural Networks and are looking forward to seeing implementations of the latest state-of-the-art models. The talk will also include a demo of a live Visual Question Answering model, and the code will be open-sourced and shared on GitHub. The open source implementation will be in the Keras framework, a high-level neural network API written in Python that runs on top of TensorFlow, CNTK or Theano.

Outline

In this talk, we will look at the Visual QA challenge and the dataset that came along with it. We will see different ways to model this problem using Recurrent Neural Networks (LSTMs, to be specific). Most of the code will be inspired by ICCV and NIPS papers. An important aspect of solving this problem is to have a system that can generate new answers. The problem is treated here as a classification task, where the top 1000 answers are chosen as classes. Images are transformed by passing them through the VGG-19 model, which produces a 4096-dimensional feature vector from its second fully connected layer; question tokens are embedded as GloVe vectors and passed through an LSTM to encode the question.
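A minimal Keras sketch of this classification model, assuming precomputed image features (the layer sizes, question length and embedding dimension below are illustrative assumptions, not necessarily the settings in the code that will be shared):

```python
from keras.models import Model
from keras.layers import Input, LSTM, Dense, concatenate

MAX_QUESTION_LEN = 30   # assumed padded question length (in tokens)
GLOVE_DIM = 300         # assumed GloVe embedding size
NUM_ANSWERS = 1000      # top-1000 answers treated as classes

# Question branch: GloVe-embedded tokens encoded by an LSTM
question_input = Input(shape=(MAX_QUESTION_LEN, GLOVE_DIM))
question_encoding = LSTM(512)(question_input)

# Image branch: precomputed 4096-d VGG-19 fc2 features
image_input = Input(shape=(4096,))

# Fuse both modalities and classify over the top answers
merged = concatenate([question_encoding, image_input])
hidden = Dense(1024, activation='tanh')(merged)
answer = Dense(NUM_ANSWERS, activation='softmax')(hidden)

model = Model(inputs=[question_input, image_input], outputs=answer)
model.compile(optimizer='adam', loss='categorical_crossentropy',
              metrics=['accuracy'])
```

At inference time, the index of the highest-scoring class is mapped back to the corresponding answer string from the top-1000 answer vocabulary.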

We would cover the following:

  1. What Deep Neural Networks, vanilla CNNs and RNNs are.
  2. Motivation: advances in convolutional and recurrent models.
  3. How these models are helping in current real-world applications.
  4. Description of the VQA dataset.
  5. Deep dive into VGG models, GloVe vectors, LSTMs and cost functions (see the feature-extraction sketch after this list).
  6. Explanation of the code.
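As a companion to item 5, here is a rough sketch of how the 4096-dimensional VGG-19 image features mentioned in the outline could be extracted with Keras; it assumes Keras’s bundled ImageNet weights and its ‘fc2’ layer name, and the actual open-sourced code may differ:

```python
import numpy as np
from keras.applications.vgg19 import VGG19, preprocess_input
from keras.preprocessing import image
from keras.models import Model

# Pretrained VGG-19; expose the second fully connected layer ('fc2'),
# which yields a 4096-dimensional feature vector per image.
base = VGG19(weights='imagenet')
feature_extractor = Model(inputs=base.input,
                          outputs=base.get_layer('fc2').output)

def image_features(path):
    """Return a 4096-d VGG-19 feature vector for the image at `path`."""
    img = image.load_img(path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return feature_extractor.predict(x)[0]
```

These feature vectors would be precomputed for the dataset and fed to the image branch of the model sketched above.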

Requirements

Basic knowledge of Deep Learning, MLPs, CNNs, RNNs and pre-trained models, and an interest in the latest applications of neural networks.

Speaker bio

Laksh Arora is a Pythonista at heart with interests in applications of Machine Learning and Computer Vision. He completed his BCA in Computer Science from IPU and is a co-organiser of the PyDataDelhi meetups. He has previously given talks at the PyDataDelhi community, CSI and various other smaller meetups, has spoken at the university level about the latest work in Machine Learning, and has won multiple hackathons. He is currently a TA at Coding Blocks, where he teaches Machine Learning and collaborates on independent projects with other enthusiasts across the globe.

Slides

http://slides.com/techedlaksh/visual-question-answering-vqa/fullscreen

