Anthill Inside 2018

On the current state of academic research, practice and development regarding Deep Learning and Artificial Intelligence.

##About the conference and topics for submitting talks:
In 2016, The Fifth Elephant branched into a separate conference on Deep Learning. The Deep Learning Conference has grown into a large community under the brand Anthill Inside.

Anthill Inside features talks, panels and Off The Record (OTR) sessions on current research, technologies and developments around Artificial Intelligence (AI) and Deep Learning. Submit proposals for talks and workshops on the following topics:

  1. Theoretical concepts in Deep Learning, AI and Machine Learning – and how these have been applied in real-life situations / specific domains. In 2017, we covered GANs, Reinforcement Learning and Transfer Learning. We seek speakers from academia who can communicate these concepts to an audience of practitioners.
  2. Latest tools, frameworks, libraries – either as short talks demonstrating these, or as full talks explaining why you chose the technology, including comparisons made and metrics used in evaluating the choice.
  3. Application of Computer Vision, NLP, speech recognition, video analytics and voice-to-speech in a specific domain or for building products. We are also interested in talks on the application of Deep Learning to hardware and software problems / domains such as GPUs, self-driving cars, etc.
  4. Case studies of AI / Deep Learning and product: the journey of arriving at the product, not an elaboration of the product itself. We’d also like to understand why you chose AI, Deep Learning or Machine Learning for your use case.

##Perks for submitting proposals:
Submitting a proposal, especially with our process, is hard work. We appreciate your effort.
We offer each proposer one conference ticket at a discounted price, and a t-shirt.
We only accept one speaker per talk. This is non-negotiable. Workshops may have more than one instructor.
For proposals that list more than one collaborator, we offer the discounted ticket and t-shirt only to the person with whom the editorial team corresponded directly during the evaluation process.

##Target audience:
We invite beginner and advanced participants from:

  1. Academia,
  2. Industry and
  3. Startups,

to participate in Anthill Inside. At the 2018 edition, tracks will be curated separately for beginner and advanced audiences.

Developer evangelists from organizations which want developers to use their APIs and technologies for deep learning and AI should participate, speak and/or sponsor Anthill Inside.

##Format:
Anthill Inside is a two-day conference with two tracks on each day. Track details will be announced with a draft schedule in February 2018.

We are accepting sessions with the following formats:

  1. Crisp (20 min) and full (40 min) talks.
  2. OTR sessions on focussed topics / questions. An OTR is 1 to 1.5 hours long and typically has four facilitators, who may or may not include a moderator.
  3. Workshops and tutorials of 3-6 hours duration on Machine Learning concepts and tools, full stack data engineering, and data science concepts and tools.
  4. Birds of a Feather (BOF) sessions, talks and workshops for open houses and pre-events in Bangalore and other cities between October 2017 and June 2018. We run events throughout the year. Reach out to us at info@hasgeek.com if you are interested in speaking at and/or hosting a community event between now and the conference in July 2018.

##Selection criteria:
The first filter for a proposal is whether the technology or solution you are referring to is open source or not. The following criteria apply for closed source talks:

  1. If the technology or solution is proprietary, and you want to speak about your proprietary solution to pitch it to the audience, you should opt for a sponsored session. This involves paying for the speaking slot. Write to anthillinside.editorial@hasgeek.com.
  2. If the technology or solution is in the process of being open sourced, we will consider the talk only if the solution is open sourced at least three months before the conference.
  3. If your solution is closed source, you should consider proposing a talk explaining why you built it in the first place; what options (business-wise and technology-wise) you considered before deciding to develop the solution; or what specific use case left you without existing options and necessitated building an in-house solution.

The criteria for selecting proposals, in the order of importance, are:

  1. Key insight or takeaway: what can you share with participants that will help them in their work and in thinking about the ML, big data and data science problem space?
  2. Structure of the talk and flow of content: a detailed outline – either as a mindmap, draft slides or a textual description – will help us understand the focus of the talk and the clarity of your thought process.
  3. Ability to communicate succinctly, and how you engage with the audience. You must submit a link to a two-minute preview video explaining what your talk is about and what the key takeaway is for the audience.

No one submits the perfect proposal in the first instance. We therefore encourage you to:

  1. Submit your proposal early so that we have more time to iterate if the proposal has potential.
  2. Talk to us on our community Slack channel: https://friends.hasgeek.com if you want to discuss an idea for your proposal, and need help / advice on how to structure it.

Our editorial team helps potential speakers hone their speaking skills, fine-tune and rehearse their content at least twice before the main conference, and sharpen the focus of their talks.

##How to submit a proposal (and increase your chances of getting selected):
The following guidelines will help you in submitting a proposal:

  1. Focus on why, not how. Explain to participants why you made a business or engineering decision, or why you chose a particular approach to solving your problem.
  2. The journey is more important than the solution itself. We are interested in the journey, not the outcome alone. Share as much detail as possible about how you solved the problem. Glossing over details does not help participants grasp real insights.
  3. Focus on what participants from other domains can learn/abstract from your journey / solution. Refer to these talks, from some of HasGeek’s other conferences, which participants liked most: http://hsgk.in/2uvYKI9 http://hsgk.in/2ufhbWb http://hsgk.in/2vFVVJv http://hsgk.in/2vEF60T
  4. We do not accept how-to talks unless they demonstrate the latest technology. If you are demonstrating new tech, show enough to motivate participants to explore the technology later. Refer to talks such as these to structure your proposal: http://hsgk.in/2vDpag4 http://hsgk.in/2varOqt http://hsgk.in/2wyseXd
  5. Similarly, we don’t accept talks on topics that have already been covered in the previous editions. If you are unsure about whether your proposal falls in this category, drop an email to: anthillinside.editorial@hasgeek.com
  6. Content that can be read off the internet does not interest us. Our participants are keen to listen to use cases and experience stories that will help them in their practice.

To summarize, we do not accept talks that gloss over details or try to deliver high-level knowledge without covering depth. Talks have to be backed with real insights and experiences for the content to be useful to participants.

##Passes and honorarium for speakers:
We pay an honorarium of Rs. 3,000 to each speaker and workshop instructor at the end of their talk/workshop. Confirmed speakers and instructors also get a pass to the conference and networking dinner. We do not provide free passes for speakers’ colleagues or spouses.

##Travel grants for outstation speakers:
Travel grants are available for international and domestic speakers. We evaluate each case on its merits, giving preference to women, people of non-binary gender, and Africans. If you require a grant, request it when you submit your proposal in the field where you add your location. Anthill Inside is funded through ticket purchases and sponsorships; travel grant budgets vary.

##Last date for submitting proposals is: 15 April 2018.
You must submit the following details along with your proposal, or within 10 days of submission:

  1. Draft slides, mind map or a textual description detailing the structure and content of your talk.
  2. Link to a self-recorded, two-minute preview video, where you explain what your talk is about, and the key takeaways for participants. This preview video helps conference editors understand the lucidity of your thoughts and how invested you are in presenting insights beyond the solution you have built, or your use case. Please note that the preview video should be submitted irrespective of whether you have spoken at previous editions of Anthill Inside.
  3. If you submit a workshop proposal, you must specify the target audience for your workshop; duration; number of participants you can accommodate; pre-requisites for the workshop; link to GitHub repositories and a document showing the full workshop plan.

##Contact details:
For information about the conference, sponsorships and tickets contact support@hasgeek.com or call 7676332020. For queries on talk submissions, write to anthillinside.editorial@hasgeek.com


Karanbir Chahal

@karanchahal

A Hitchhiker's Guide to Modern Object Detection: A deep learning journey since 2012

Submitted Mar 31, 2018

The ability to detect objects in images has captured the world’s imagination. One can make a decent case that this application is the poster child of deep learning, the one that really put it on the map.
Yet few people really understand how computers have begun to detect objects in images with high accuracy, which is surprising, since object detection is the backbone of the technology powering self-driving cars, drones and many other vision-related tasks. This talk traces the history of object detection from 2012 to the present day. It focuses on two families of algorithms, two-step approaches (Faster R-CNN) and one-step approaches (YOLO, SSD), and culminates in the current state of the art.
This talk goes into the internals of object detection, focusing on exactly what transformations an image goes through. This information is not easily found on the open web, as most explanations skim over small but important details. The aim is that, after the talk, listeners will be able to implement these complex algorithms on their own in a day or two, and will understand the internals of these systems inside out.
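To ground the discussion, here is a minimal sketch of running a pre-trained two-step detector end to end. It is an illustration rather than the talk's actual code, and it assumes a torchvision release that ships the detection models as well as a placeholder image path:

```python
import torch
import torchvision
from torchvision.transforms import functional as TF
from PIL import Image

# Load a pre-trained two-step detector: Faster R-CNN with a ResNet-50 FPN backbone.
# (Assumes a torchvision version that includes torchvision.models.detection.)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
model.eval()

# "image.jpg" is a placeholder path; the model expects a list of CHW float tensors in [0, 1].
img = TF.to_tensor(Image.open("image.jpg").convert("RGB"))

with torch.no_grad():
    predictions = model([img])

# Each prediction is a dict of bounding boxes, class labels and confidence scores.
print(predictions[0]["boxes"], predictions[0]["labels"], predictions[0]["scores"])
```

The talk opens the black box behind exactly this kind of call: how the image is transformed, how anchors are generated, and how the boxes and scores above are produced.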

Outline

  1. A short recap of convolutional and pooling layers and the concept of a feature map (brief, assuming listeners already know this).
  2. The two directions in which object detectors are built: the two-step and the one-step approach.
  3. Starting with the two-step approach (Faster R-CNN).
  4. The problems with this approach.
  5. On to the one-step approaches: YOLO and SSD (including the latest version of YOLO released in February).
  6. The problems with the one-step approaches.
  7. The multi-scale problem: dealing with images of various scales and resolutions (not explained well in current blog posts). I will explain the FPN model, which solves these resolution/scale problems.
  8. The newest and most promising approach, RetinaNet, which tries to tackle the problems of both earlier approaches with a novel loss function.
  9. The intuition behind this loss and its implications.
  10. Code samples in PyTorch, to build intuition through simple, easy-to-understand code (see the sketch after this outline).
  11. Resources where people can find out more about this topic.
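As a taste of the kind of PyTorch snippet the talk will walk through, here is a minimal sketch of the focal loss used by RetinaNet (an illustrative example, not the talk's actual code; the default hyper-parameters alpha = 0.25 and gamma = 2 follow the RetinaNet paper):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    """Binary focal loss in the spirit of the RetinaNet paper (Lin et al., 2017).

    logits:  raw, unnormalised scores from the classification head,
             shape (num_anchors, num_classes)
    targets: one-hot labels of the same shape; background anchors are all zeros
    """
    # Per-entry binary cross-entropy, kept unreduced so it can be re-weighted.
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")

    # p_t is the predicted probability assigned to the true class of each entry.
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)

    # alpha_t balances positive against the far more numerous negative anchors.
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)

    # (1 - p_t) ** gamma down-weights easy, well-classified entries.
    loss = alpha_t * (1 - p_t) ** gamma * bce
    return loss.sum()

# Toy usage: 8 anchors, 80 classes, one positive anchor.
logits = torch.randn(8, 80)
targets = torch.zeros(8, 80)
targets[0, 3] = 1.0
print(focal_loss(logits, targets))
```

The (1 - p_t) ** gamma factor is the key design choice: it lets a one-step detector train on every anchor without being overwhelmed by easy background examples.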

Requirements

Participants must be familiar with CNNs and understand back-propagation, gradients, etc.
They must have experience with a deep learning Python framework such as TensorFlow, Keras or PyTorch.

Speaker bio

I am a 21-year-old software engineer and a Computer Engineering graduate of Netaji Subhash Institute of Technology (NSIT), a premier engineering institution in India. I am currently working as a software engineer at HSBC in Pune, where I work on interesting deep learning projects from time to time.

I have been keenly interested in deep learning since my second year of college, and have tried to apply it in my projects both in college and at HSBC.

I recently won the NIPS Paper Implementation Challenge 2017, and nurture.ai published a feature on me, which you can read here: https://medium.com/nurture-ai/karanbir-singh-chahal-implementing-ai-research-paper-for-the-first-time-72670b1763bc

I am well placed to deliver this talk: I am currently writing a survey paper on modern object detection and have been following the field for a while. I have dived through the code of the best object detection repositories and can confidently say that I know how each part of the various models works. I also aim to provide code samples showing how each part works, so that listeners can actually apply the knowledge gained in this talk. I will make all the code open source so that people can play with it. The biggest shortcoming of the material on object detection found online is that it is very scattered, and unless you dive into the code, a lot of questions remain unanswered. As I have some experience with the dirty little tricks and internals of object detectors, I am a good person to explain all of it rather than give just a high-level description.

The talk aims to cover the developments from the following highly cited papers in the field:
  • Mask R-CNN. Kaiming He, Georgia Gkioxari, Piotr Dollár, and Ross Girshick. IEEE International Conference on Computer Vision (ICCV), 2017.
  • Focal Loss for Dense Object Detection. Tsung-Yi Lin, Priya Goyal, Ross Girshick, Kaiming He, and Piotr Dollár. IEEE International Conference on Computer Vision (ICCV), 2017.
  • Feature Pyramid Networks for Object Detection. Tsung-Yi Lin, Piotr Dollár, Ross Girshick, Kaiming He, Bharath Hariharan, and Serge Belongie. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • Aggregated Residual Transformations for Deep Neural Networks. Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, and Kaiming He. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017.
  • R-FCN: Object Detection via Region-based Fully Convolutional Networks. Jifeng Dai, Yi Li, Kaiming He, and Jian Sun. Conference on Neural Information Processing Systems (NIPS), 2016.
  • Deep Residual Learning for Image Recognition. Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  • YOLO9000: Better, Faster, Stronger. Joseph Redmon and Ali Farhadi. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017. https://arxiv.org/abs/1612.08242
  • Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. Shaoqing Ren, Kaiming He, Ross Girshick, and Jian Sun. Conference on Neural Information Processing Systems (NIPS), 2015.
  • Fast R-CNN. Ross Girshick. IEEE International Conference on Computer Vision (ICCV), 2015.

Slides

https://docs.google.com/presentation/d/1VEviEgkERo2Q_Uj5sk-zxN1xinP_kf-lTDFobDNJ-jc/edit?usp=drivesdk

