Anthill Inside 2018
On the current state of academic research, practice and development regarding Deep Learning and Artificial Intelligence.
Wednesday, 25 July 2018, 08:45 AM – 05:25 PM IST
## About the conference and topics for submitting talks:
In 2016, The Fifth Elephant branched into a separate conference on Deep Learning. The Deep Learning Conference has since grown into a large community under the brand Anthill Inside.
Anthill Inside features talks, panels and Off The Record (OTR) sessions on current research, technologies and developments around Artificial Intelligence (AI) and Deep Learning. Submit proposals for talks and workshops on the following topics:
## Perks for submitting proposals:
Submitting a proposal, especially with our process, is hard work. We appreciate your effort.
We offer each proposer one conference ticket at a discounted price, and a t-shirt.
We only accept one speaker per talk. This is non-negotiable. Workshops may have more than one instructor.
For proposals that list more than one collaborator, we offer the discounted ticket and t-shirt only to the person with whom the editorial team corresponded directly during the evaluation process.
## Target audience:
We invite beginner and advanced participants from:
to participate in Anthill Inside. At the 2018 edition, tracks will be curated separately for beginner and advanced audiences.
Developer evangelists from organizations which want developers to use their APIs and technologies for deep learning and AI should participate, speak and/or sponsor Anthill Inside.
## Format:
Anthill Inside is a two-day conference with two tracks on each day. Track details will be announced with a draft schedule in February 2018.
We are accepting sessions with the following formats:
## Selection criteria:
The first filter for a proposal is whether the technology or solution you are referring to is open source or not. The following criteria apply for closed source talks:
The criteria for selecting proposals, in the order of importance, are:
No one submits the perfect proposal in the first instance. We therefore encourage you to:
Our editorial team helps potential speakers hone their speaking skills, sharpen the focus of their talks, and fine-tune and rehearse content at least twice before the main conference.
## How to submit a proposal (and increase your chances of getting selected):
The following guidelines will help you in submitting a proposal:
To summarize, we do not accept talks that gloss over details or try to deliver high-level knowledge without covering depth. Talks have to be backed by real insights and experience for the content to be useful to participants.
## Passes and honorarium for speakers:
We pay an honorarium of Rs. 3,000 to each speaker and workshop instructor at the end of their talk/workshop. Confirmed speakers and instructors also get a pass to the conference and networking dinner. We do not provide free passes for speakers’ colleagues and spouses.
## Travel grants for outstation speakers:
Travel grants are available for international and domestic speakers. We evaluate each case on its merits, giving preference to women, people of non-binary gender, and Africans. If you require a grant, request it when you submit your proposal in the field where you add your location. Anthill Inside is funded through ticket purchases and sponsorships; travel grant budgets vary.
## Last date for submitting proposals: 15 April 2018.
You must submit the following details along with your proposal, or within 10 days of submission:
## Contact details:
For information about the conference, sponsorships and tickets, contact support@hasgeek.com or call 7676332020. For queries about talk submissions, write to anthillinside.editorial@hasgeek.com.
Karanbir Chahal
@karanchahal
Submitted Mar 31, 2018
The ability to detect objects in images has captured the world’s imagination. One can make a decent case that this application is the poster child of deep learning, the one that really put it on the map.
But few people really understand how computers have begun to detect objects in images with high accuracy, which is surprising, since object detection is the backbone of the technology powering self-driving cars, drones and many other vision-related tasks. This talk goes through the history of object detection from 2012 to the present day. It focuses on two families of algorithms, two-step approaches (Faster R-CNN) and one-step approaches (YOLO, SSD), and culminates in the current state of the art.
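Both families lean on the same overlap metric when matching predicted boxes to ground truth. As a rough illustration, and not material from the talk itself, here is a minimal intersection-over-union (IoU) sketch, assuming boxes are given as (x1, y1, x2, y2) corner coordinates:

```python
# Minimal IoU sketch; the (x1, y1, x2, y2) box format is an assumption here.
def iou(box_a, box_b):
    # Corners of the intersection rectangle.
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

print(iou((0, 0, 10, 10), (5, 5, 15, 15)))  # 25 / 175 ≈ 0.143
```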
This talk goes into the internals of object detection, focusing on exactly what transformations an image goes through. This information is not easily found on the open web, as most explanations skim over small but very important details. The aim is that, after the talk, listeners will be able to go home and implement these complex algorithms on their own in a day or two, and will understand the internals of these systems inside out.
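One example of such a transformation, sketched here only as a hedged illustration of the kind of detail the talk covers: Faster R-CNN and SSD tile the input image with anchor boxes derived from a feature map. The scales and ratios below are illustrative placeholders, not the papers' exact values.

```python
import itertools

def make_anchors(fm_h, fm_w, stride, scales=(64, 128), ratios=(0.5, 1.0, 2.0)):
    """Generate (x1, y1, x2, y2) anchors for an fm_h x fm_w feature map."""
    anchors = []
    for i, j in itertools.product(range(fm_h), range(fm_w)):
        # Centre of this feature-map cell, in input-image pixels.
        cx, cy = (j + 0.5) * stride, (i + 0.5) * stride
        for s, r in itertools.product(scales, ratios):
            w, h = s * r ** 0.5, s / r ** 0.5  # area s*s, aspect ratio r
            anchors.append((cx - w / 2, cy - h / 2, cx + w / 2, cy + h / 2))
    return anchors

# A 38x38 feature map at stride 16 gives 38 * 38 * 6 = 8664 anchors.
print(len(make_anchors(38, 38, 16)))
```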
Participants must be aware of CNNs and understand backpropagation, gradients and so on.
They must have experience with a deep learning Python framework such as TensorFlow, Keras or PyTorch.
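For readers unsure whether they meet these prerequisites, this minimal PyTorch sketch (shapes and hyperparameters are arbitrary assumptions) shows roughly the level expected: a small CNN, one forward pass, one backpropagation step.

```python
import torch
import torch.nn as nn

# A toy CNN: conv -> ReLU -> global average pool -> linear classifier.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, 10),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

images = torch.randn(8, 3, 32, 32)   # fake mini-batch of images
labels = torch.randint(0, 10, (8,))  # fake class labels
loss = nn.CrossEntropyLoss()(model(images), labels)
loss.backward()                      # backpropagation fills .grad
optimizer.step()                     # one gradient-descent update
print(loss.item())
```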
I am a 21-year-old software engineer and a Computer Engineering graduate of Netaji Subhash Institute of Technology (NSIT), a premier engineering institution in India. I am currently working as a software engineer at HSBC in Pune, where I work on interesting deep learning projects from time to time.
I have been keenly interested in deep learning since my second year in college, and have always tried to apply it in my projects, both in college and at HSBC.
I recently won the NIPS Paper Implementation Challenge 2017, and nurture.ai did a feature on me, which you can read here: https://medium.com/nurture-ai/karanbir-singh-chahal-implementing-ai-research-paper-for-the-first-time-72670b1763bc
I would be the best person to deliver this talk, as I am currently writing a survey paper on modern object recognition and have been following the field for a while now. I have dived through the code of the best repositories on object detection and can confidently say that I know how each part of the various models works. I also aim to provide code samples of how each part works, so that participants can actually apply the knowledge they gain in this talk; I will make all the code open source so that people can play with it. The biggest shortcoming of the material on object recognition found online is that it is very scattered, and unless you dive into the code, a lot of questions go unanswered. As I have some experience with the dirty little tricks and internals of object detectors, I am well placed to explain all of it rather than provide just a high-level description.
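As a taste of the kind of internals the proposal promises to unpack, here is a hedged sketch of non-maximum suppression (NMS), the post-processing step both one-step and two-step detectors end with. It reuses the iou() helper sketched earlier; the 0.5 threshold is a common but arbitrary choice.

```python
def nms(boxes, scores, iou_threshold=0.5):
    """Keep the highest-scoring box, drop near-duplicates, repeat."""
    order = sorted(range(len(boxes)), key=lambda k: scores[k], reverse=True)
    keep = []
    while order:
        best = order.pop(0)
        keep.append(best)
        # Discard remaining boxes that overlap the kept box too much.
        order = [k for k in order if iou(boxes[best], boxes[k]) < iou_threshold]
    return keep
```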