Anthill Inside 2017
On theory and concepts in Machine Learning, Deep Learning and Artificial Intelligence. Formerly Deep Learning Conf.
Jul 29, 2017, 09:00 AM – 05:40 PM IST
##About Anthill Inside:
In 2016, The Fifth Elephant branched into a separate conference on Deep Learning. Anthill Inside is the new avatar of the Deep Learning conference.
Anthill Inside attempts to bridge the gap between theoretical advances and functioning reality.
Proposals are invited for full-length talks, crisp talks and poster/demo sessions in the area of ML+DL. The talks need to focus on the techniques used, and may be presented independently of the domain in which they are applied.
We also invite talks on novel applications of ML+DL, and methods of realising the same in hardware/software.
Case studies of how DL and ML have been applied in different domains will continue to be discussed at The Fifth Elephant.
https://anthillinside.in/2017/
##Format:
Anthill Inside is a two-track conference:
We are inviting proposals for:
You must submit the following details along with your proposal, or within 10 days of submission:
##Selection Process:
We expect you to submit an outline of your proposed talk, in the form of a mind map, a text document or draft slides, within two weeks of submitting your proposal, so that we can start evaluating it.
You can check back on this page for the status of your proposal. We will notify you if we either move your proposal to the next round or if we reject it. Selected speakers must participate in one or two rounds of rehearsals before the conference. This is mandatory and helps you to prepare well for the conference.
A speaker is NOT confirmed a slot unless we explicitly say so in an email or over any other medium of communication.
There is only one speaker per session. Entry is free for selected speakers.
We might contact you to ask if you’d like to repost your content on the official conference blog.
##Travel Grants:
Partial or full grants, covering travel and accommodation, are made available to speakers delivering full sessions (40 minutes) and workshops. Grants are limited, and are given in order of preference to students, women, persons of non-binary genders, and speakers from Asia and Africa.
##Commitment to Open Source:
We believe in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source licence. If your software is commercially licensed or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), you should consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but we ask that you support the conference in return for the audience it gives you. Your session will be marked on the schedule as a “sponsored session”.
##Important Dates:
##Contact:
For more information about speaking proposals, tickets and sponsorships, contact info@hasgeek.com or call +91-7676332020.
Please note, we will not evaluate proposals that do not include a slide deck and a video.
Hosted by
irfan basha sheik
@sheikirfanbasha
Submitted Apr 30, 2017
At Imaginea, we run a social network for typoholics called Fontli, born of our designers’ passion for typography. Folks share typography they catch in the wild or work they’ve created themselves. Members ask others for font identification and tips, and tag what they’re able to identify themselves.
Given that we’re into typography, we would love to have a system where we can take a picture of some type and apply it to text of our own choice! We know that Deep Convolutional Neural Networks (DCNNs) have recently been achieving great results in image transformation tasks, most notably in artistic style transfer. As these DCNNs are capable of capturing and transferring the style of one image onto another, we want to use them to build a new system that transfers the style of typography. We call this system Deep Type.
In this talk, we’ll start with how DCNNs work and how they are used in style transfer algorithms. We’ll dive into the details of how these algorithms work in practice, focusing especially on Johnson’s neural style implementation. We’ll discuss how untrained and pre-trained DCNNs perform in the context of style transfer, along with their advantages and disadvantages. We’ll showcase the results of our experiments on typography style transfer.
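The style representation at the heart of these algorithms is the Gram matrix of a conv layer’s feature maps: correlations between channels that capture texture while discarding spatial layout. A minimal NumPy sketch of that idea (shapes and normalisation are illustrative; real implementations take the activations from a pre-trained network such as VGG):

```python
import numpy as np

def gram_matrix(features):
    """Channel-correlation (Gram) matrix of one conv layer's activations.

    features: array of shape (channels, height, width), standing in for
    the feature maps a DCNN produces for an image.
    """
    c, h, w = features.shape
    f = features.reshape(c, h * w)   # flatten the spatial dimensions
    return f @ f.T / (c * h * w)     # normalised channel correlations

# Toy example: random "activations" in place of real DCNN features
feats = np.random.rand(8, 4, 4)
g = gram_matrix(feats)
print(g.shape)  # (8, 8)
```

A style loss then compares the Gram matrices of the generated image and the style image at several layers; Johnson’s feed-forward variant trains a transformation network against that same loss instead of optimising each image from scratch.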
Basics of ML and Deep Learning
Researcher at Imaginea Labs. Passionate about learning and building applications related to artificial intelligence (computer vision, machine learning), simulators and games.