In 2016, The Fifth Elephant branched into a separate conference on Deep Learning. Anthill Inside is the new avatar of the Deep Learning conference.
Anthill Inside attempts to bridge the gap between theoretical advances and functioning reality.
Proposals are invited for full-length talks, crisp talks, and poster/demo sessions in the area of ML+DL. The talks should focus on the techniques used, and may be presented independently of the domain in which they are applied.
We also invite talks on novel applications of ML+DL, and on methods of realising them in hardware/software.
Case studies of how DL and ML have been applied in different domains will continue to be discussed at The Fifth Elephant.
- Machine Learning with end-to-end application
- Deep Learning
- Artificial Intelligence
- Hardware / software implementations of advanced Machine Learning and Deep Learning
- IoT and Deep Learning
- Operations research and Machine Learning
Anthill Inside is a two-track conference:
- Talks in the main auditorium and Hall 2.
- Birds of a Feather (BOF) sessions in the expo area.
We are inviting proposals for:
- Full-length 40-minute talks.
- Crisp 15-minute how-to talks or introduction to a new technology.
- Sponsored sessions of 15- and 40-minute duration (limited slots available; subject to editorial scrutiny and approval).
- Hands-on workshop sessions of 3- and 6-hour duration, where participants follow the instructors on their laptops.
- Birds of a Feather (BOF) sessions.
You must submit the following details along with your proposal, or within 10 days of submission:
- Draft slides, mind map or a textual description detailing the structure and content of your talk.
- Link to a self-recorded, two-minute preview video where you explain what your talk is about and the key takeaways for participants. This preview video helps conference editors understand the lucidity of your thoughts and how invested you are in presenting insights beyond your use case. Please note that the preview video must be submitted irrespective of whether you have spoken at past editions of The Fifth Elephant or at last year's Deep Learning conference.
- If you submit a workshop proposal, you must specify the target audience for your workshop; its duration; the number of participants you can accommodate; prerequisites for the workshop; and links to GitHub repositories and documents showing the full workshop plan.
- Proposals will be filtered and shortlisted by an Editorial Panel.
- Proposers, editors and community members must respond to comments as openly as possible so that the selection process is transparent.
- Proposers are also encouraged to vote and comment on other proposals submitted here.
To start evaluating your proposal, we expect you to submit an outline of your proposed talk, either as a mind map, a text document or draft slides, within two weeks of submitting your proposal.
You can check back on this page for the status of your proposal. We will notify you when we either move your proposal to the next round or reject it. Selected speakers must participate in one or two rounds of rehearsals before the conference. This is mandatory and helps you prepare well for the conference.
A speaker is NOT confirmed for a slot unless we explicitly say so in an email or over any other medium of communication.
There is only one speaker per session. Entry is free for selected speakers.
We might contact you to ask if you’d like to repost your content on the official conference blog.
Partial or full grants, covering travel and accommodation, are made available to speakers delivering full sessions (40 minutes) and workshops. Grants are limited, and are given in order of preference to students, women, persons of non-binary genders, and speakers from Asia and Africa.
## Commitment to Open Source
We believe in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like for it to be available under a permissive open source licence. If your software is commercially licensed or available under a combination of commercial and restrictive open source licences (such as the various forms of the GPL), you should consider picking up a sponsorship. We recognise that there are valid reasons for commercial licensing, but ask that you support the conference in return for giving you an audience. Your session will be marked on the schedule as a “sponsored session”.
- Tutorial and workshop announcements: June 30
- Deadline for submitting proposals: July 10
- First draft of the conference schedule: July 15
- Final conference schedule: July 20
- Conference date: July 30
For more information about speaking proposals, tickets and sponsorships, contact firstname.lastname@example.org or call +91-7676332020.
Please note that we will not evaluate proposals that do not include a slide deck and a preview video.
Keep Calm and Trust your Model - On Explainability of Machine Learning Models
The accuracy of Machine Learning models is going up by the day with advances in Deep Learning, but this comes at the cost of explainability. There is a need to open up these black boxes for Business users. This is essential, especially in heavily regulated industries like Finance, Medicine, Defence and the like.
A lot of research is going on to make ML models interpretable and explainable. In this talk we will go through the various approaches taken to unravel machine learning models and explain the reasons behind their predictions.
We'll survey the different approaches by discussing the latest research literature, with a 'behind the scenes' view of what happens inside each approach, with enough mathematical depth and intuition.
Finally, the aim is to leave the audience with practical know-how on using these approaches to understand deep learning and classical machine learning models with open source tools in Python, through a live demo (link to IPython notebooks below).
- The need for explainability
- Why are certain models not explainable?
- Linear, monotonic vs Non-linear, non-monotonic functions
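The contrast above can be made concrete with a minimal sketch (the data and coefficients here are illustrative, not from the talk): for a linear, monotonic model, the fitted coefficients themselves are the explanation, since each one states exactly how a unit change in a feature moves the prediction.

```python
import numpy as np

# Fit by ordinary least squares. For a linear model, each coefficient
# IS the explanation: a unit increase in feature i changes the
# prediction by coef[i], independently of the other features.
X = np.array([[1.0, 2.0], [2.0, 1.0], [3.0, 3.0], [4.0, 5.0]])
y = 2.0 * X[:, 0] + 0.5 * X[:, 1]  # known ground-truth relationship

coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print(coef)  # recovers [2.0, 0.5]
```

No comparable global statement exists for a non-linear, non-monotonic model such as a deep network, which is what the methods below address.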
Model-specific methods, with a deep dive into a few of them:
- Tree Interpreter for explaining tree-based models like Random Forests and Gradient Boosted Trees
- Model Specific Visualisations
- Attention mechanism used to explain predictions
- Generating explanations as a part of the model itself (cutting edge deep learning models from MIT and Berkeley that give an explanation as additional output along with the predicted class/value)
- Globally scoped surrogate models, and statistical interpretation tools like variable importance, residual plots, etc.
- Local Interpretable Model-agnostic Explanations (LIME): recent research that works on any black-box model
- Layerwise Relevance Propagation for understanding Deep Learning
- for CNNs
- for RNNs (the source code for this was released just 15 days ago! We'll be doing a live demo of this method)
- Use open source tools in Python and learn how to make use of them to explain machine learning model predictions
- Conclusion with practical demonstrations and call to action to try out the tools
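To illustrate the kind of model-agnostic explanation covered in the outline, here is a hand-rolled sketch of the core LIME idea (not the `lime` library itself; the black-box function and kernel width are illustrative assumptions): sample perturbations around the instance being explained, weight them by proximity, and fit a weighted linear surrogate whose coefficients serve as the local explanation.

```python
import numpy as np

rng = np.random.default_rng(0)

def black_box(X):
    # Stand-in for any opaque model: non-linear in x0, linear in x1.
    return np.sin(X[:, 0]) + 2.0 * X[:, 1]

def lime_explain(instance, predict, n_samples=5000, width=0.1):
    # Perturb the instance with Gaussian noise.
    Z = instance + rng.normal(scale=width, size=(n_samples, instance.size))
    y = predict(Z)
    # Proximity weights: an RBF kernel on distance to the instance.
    w = np.exp(-np.sum((Z - instance) ** 2, axis=1) / (2 * width ** 2))
    # Weighted least squares with an intercept column.
    A = np.hstack([Z, np.ones((n_samples, 1))]) * np.sqrt(w)[:, None]
    b = y * np.sqrt(w)
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef[:-1]  # local feature weights (intercept dropped)

x = np.array([0.0, 1.0])
weights = lime_explain(x, black_box)
print(weights)  # close to [1.0, 2.0]: the local slopes at x
```

Even though the black box is non-linear globally, the surrogate recovers the local slopes (cos(0) = 1.0 for the first feature, 2.0 for the second), which is exactly the kind of per-prediction explanation LIME produces.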
Basic understanding of Deep Learning and classical Machine Learning algorithms.
I currently work as a Machine Learning Engineer at datalog.ai, remotely from Kochi. I'm entirely self-taught in the field, and originally did a Bachelor's in Mechanical Engineering at CUSAT.
I have completed consulting projects in ML and AI with multiple startups and companies.
Previously, I was a Technology Innovation Fellow with Kerala Startup Mission, where I started TinkerHub, a non-profit student community focused on creating community spaces across colleges for learning the latest technologies.
My work on CNNs was the winning solution to IBM's Cognitive Cup challenge in 2016, and I gave a talk on it at the Supercomputing conference SC16 in Salt Lake City, Utah: Slides
Explainability and interpretability of ML is one of my focus areas, after having interacted with many Business owners asking for the reasons behind the predictions of the models built for them.
- IPython notebooks for the live demo session: https://github.com/psbots/explainableML