Anthill Inside 2017

On theory and concepts in Machine Learning, Deep Learning and Artificial Intelligence. Formerly Deep Learning Conf.

##About Anthill Inside:
In 2016, The Fifth Elephant spun off a separate conference on Deep Learning. Anthill Inside is the new avatar of that Deep Learning conference.
Anthill Inside attempts to bridge the gap between theoretical advances and functioning reality.
Proposals are invited for full length talks, crisp talks and poster/demo sessions in the area of ML+DL. The talks need to focus on the techniques used, and may be presented independent of the domain wherein they are applied.
We also invite talks on novel applications of ML+DL, and methods of realising the same in hardware/software.
Case studies of how DL and ML have been applied in different domains will continue to be discussed at The Fifth Elephant.

https://anthillinside.in/2017/

Topics: we are looking for talks covering the following:

  • Machine Learning with end-to-end application
  • Deep Learning
  • Artificial Intelligence
  • Hardware / software implementations of advanced Machine Learning and Deep Learning
  • IoT and Deep Learning
  • Operations research and Machine Learning

##Format:
Anthill Inside is a two-track conference:

  • Talks in the main auditorium and Hall 2.
  • Birds of a Feather (BOF) sessions in the expo area.

We are inviting proposals for:

  • Full-length 40-minute talks.
  • Crisp 15-minute how-to talks or introduction to a new technology.
  • Sponsored sessions of 15-minute and 40-minute duration (limited slots available; subject to editorial scrutiny and approval).
  • Hands-on workshop sessions of 3-hour and 6-hour duration, where participants follow instructors on their laptops.
  • Birds of a Feather (BOF) sessions.

You must submit the following details along with your proposal, or within 10 days of submission:

  1. Draft slides, mind map or a textual description detailing the structure and content of your talk.
  2. Link to a self-recorded, two-minute preview video, where you explain what your talk is about and the key takeaways for participants. This preview video helps conference editors understand the lucidity of your thoughts and how invested you are in presenting insights beyond your use case. Please note that the preview video must be submitted irrespective of whether you have spoken at past editions of The Fifth Elephant or last year at the Deep Learning conference.
  3. If you submit a workshop proposal, you must specify the target audience for your workshop, its duration, the number of participants you can accommodate, the prerequisites, and links to GitHub repositories and documents showing the full workshop plan.

##Selection Process:

  1. Proposals will be filtered and shortlisted by an Editorial Panel.
  2. Proposers, editors and community members must respond to comments as openly as possible so that the selection process is transparent.
  3. Proposers are also encouraged to vote and comment on other proposals submitted here.

We expect you to submit an outline of your proposed talk, in the form of a mind map, a text document or draft slides, within two weeks of submitting your proposal so that we can start evaluating it.

Selection Process Flowchart

You can check back on this page for the status of your proposal. We will notify you if we move your proposal to the next round or reject it. Selected speakers must participate in one or two rounds of rehearsals before the conference. This is mandatory and helps you prepare well for the conference.

A speaker is NOT confirmed for a slot unless we explicitly say so in an email or over any other medium of communication.

There is only one speaker per session. Entry is free for selected speakers.

We might contact you to ask if you’d like to repost your content on the official conference blog.

##Travel Grants:

Partial or full grants, covering travel and accommodation, are available to speakers delivering full sessions (40 minutes) and workshops. Grants are limited, and are given, in order of preference, to students, women, persons of non-binary genders, and speakers from Asia and Africa.

##Commitment to Open Source:

We believe in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like for it to be available under a permissive open source licence. If your software is commercially licensed or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), you should consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but ask that you support the conference in return for giving you an audience. Your session will be marked on the schedule as a “sponsored session”.

##Important Dates:

  • Deadline for submitting proposals: July 10
  • First draft of the conference schedule: July 15
  • Tutorial and workshop announcements: June 30
  • Final conference schedule: July 20
  • Conference date: July 30

##Contact:

For more information about speaking proposals, tickets and sponsorships, contact info@hasgeek.com or call +91-7676332020.

Please note that we will not evaluate proposals that do not include a slide deck and a preview video.

Hosted by

Anthill Inside is a forum for conversations about risk mitigation and governance in Artificial Intelligence and Deep Learning, open to AI developers, researchers, startup founders, ethicists, and AI enthusiasts.

Kumar Shubham

@kumar_shubham

Augmenting Solr’s NLP Capabilities with Deep-Learning Features to Match Images

Submitted Jun 28, 2017

Matching images with human-like accuracy is typically very expensive: a deep-learning model needs substantial GPU resources and training data to perform image matching. While GPUs are something most companies can afford, training data is hard to obtain.

At DataWeave, we crawl millions of products listed across e-commerce websites, and match them to deliver competitive insights to our clients. In the fashion vertical, however, text matching alone is insufficient to accurately match products, as product descriptions are usually not detailed enough.

We asked ourselves: is there a way to complement image matching with the information in product descriptions and titles, and thereby improve matching accuracy?

Solr is a popular text search engine known for its NLP capabilities. This talk presents an innovative way of storing deep-learning features in Solr and augmenting Solr’s NLP capabilities to achieve higher accuracy in our product-matching efforts.
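
As a rough illustration of the indexing idea (and not the speaker’s actual pipeline), the sketch below stores an image’s binarized deep-learning features as ordinary text tokens in Solr, so the same inverted index that matches words can also match images. The `products` core, the field names, the fixed thresholds and the use of the pysolr client are all assumptions made for the example.

```python
# A minimal sketch, not the talk's exact implementation: binarize a CNN
# feature vector, turn the active bits into tokens (e.g. "b17"), and index
# them in a Solr text field alongside the product title. The "products"
# core and field names here are hypothetical.
import numpy as np
import pysolr

solr = pysolr.Solr("http://localhost:8983/solr/products", timeout=10)

def hash_to_tokens(features, thresholds):
    """Binarize a dense feature vector and emit one token per active bit."""
    bits = (features > thresholds).astype(int)
    return " ".join(f"b{i}" for i, bit in enumerate(bits) if bit)

# Index a product: plain text fields plus the hashed image representation.
features = np.random.rand(256)      # stand-in for real CNN features
thresholds = np.full(256, 0.5)      # stand-in for learned per-bit thresholds
solr.add([{
    "id": "sku-123",
    "title": "blue floral maxi dress",
    "image_hash": hash_to_tokens(features, thresholds),
}], commit=True)

# Query: combine text matching on the title with image-hash token overlap,
# so documents sharing more hash bits with the query image score higher.
query_tokens = hash_to_tokens(np.random.rand(256), thresholds)
results = solr.search(f'title:"maxi dress" OR image_hash:({query_tokens})', rows=10)
for doc in results:
    print(doc["id"], doc.get("title"))
```

Storing the hash as plain tokens is what lets Solr’s existing scoring and text-matching machinery operate on image features without custom plugins.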

Outline

  1. Searching for similar and exact images using deep learning (importance and associated problems)
  2. Solr – a popular text search engine
  3. Augmenting Solr with deep-learning features
  4. Self-taught hashing (see the sketch after this outline)
  5. Performance metrics
  6. Demo
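
For item 4 above, here is a simplified sketch of the general self-taught hashing idea, not necessarily the formulation used in the talk: derive binary codes for a training set of feature vectors (a naive random projection plus median threshold stands in for the spectral embedding of the original method), then train one classifier per bit so that unseen images can be hashed consistently at query time.

```python
# Simplified self-taught hashing sketch (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

def fit_self_taught_hash(train_features, n_bits=32, seed=0):
    rng = np.random.default_rng(seed)
    # Step 1: project to n_bits dimensions and binarize at the median,
    # a crude stand-in for learning codes via a spectral embedding.
    projection = rng.normal(size=(train_features.shape[1], n_bits))
    projected = train_features @ projection
    codes = (projected > np.median(projected, axis=0)).astype(int)
    # Step 2: the "self-taught" part -- learn one binary classifier per bit
    # so new points can be hashed without recomputing the embedding.
    return [
        LogisticRegression(max_iter=1000).fit(train_features, codes[:, b])
        for b in range(n_bits)
    ]

def hash_features(classifiers, features):
    """Hash unseen feature vectors into binary codes using the learned bits."""
    return np.column_stack([clf.predict(features) for clf in classifiers])

# Toy usage with random stand-ins for CNN features.
train = np.random.rand(500, 128)
clfs = fit_self_taught_hash(train, n_bits=16)
query_codes = hash_features(clfs, np.random.rand(3, 128))
print(query_codes.shape)  # (3, 16)
```

The per-bit classifiers make the scheme “self-taught”: once trained, new images can be hashed independently, and their codes can be tokenized and indexed as in the earlier sketch.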

Speaker bio

I work as a data engineer at DataWeave, a company that provides Competitive Intelligence as a Service for retailers and consumer brands. Here, I helped develop deep-learning and machine-learning infrastructure for large-scale product matching.

I am a keen enthusiast of open source projects, and have been closely associated with a project that integrated TensorFlow with DeepDetect.

I was among the top five finalists in the Xerox Research Innovation Challenge 2016, and the winner of the Jaipur Hackathon 2015. One of my projects, a sign language converter (SLC), was among the semi-final entries at the TI Innovation Challenge India Design Contest 2015.

I have also co-authored publications that have been accepted in Applied Intelligence, Knowledge-Based Systems, and the International Conference on Machine Learning and Cybernetics.

Slides

https://drive.google.com/file/d/0ByAaSdfBUHSVWWwzWXVsZEZnWlU/view?usp=sharing

