## About the event
When it comes to Machine Learning (ML), Deep Learning (DL) and Artificial Intelligence (AI), three aspects are crucial:
- Clarity of fundamental concepts.
- Insights and nuances when applying concepts to solve real-world problems.
- Knowledge of tools for automating ML and DL.
Anthill Inside Miniconf will provide understanding on each of these fronts.
This miniconf is a full day event consisting of:
- 3-4 talks each, on concepts, applications and tools.
- Birds of a Feather (BOF) sessions on focused topics.
We are accepting proposals for:
- 10 to 40-minute talks explaining fundamental concepts in math, statistics and data science.
- 20 to 40-minute talks on case studies and lessons learned when applying ML, DL and AI concepts in different domains, or to solve diverse data-related problems.
- 10 to 20-minute talks on tools for ML and DL.
- Birds of a Feather (BOF) sessions on failure stories in ML, on which problems and use cases warrant ML and DL, and on chatbots.
- 3-6 hour hands-on workshops on concepts and tools.
Hands-on workshops for 30-40 participants on 25 November will help participants internalize concepts and the practical aspects of working with tools.
Workshops will be announced shortly. Workshop tickets have to be purchased separately.
## Target audience, and why you should attend this event
- ML engineers who want to learn about concepts in math and statistics, and strengthen their foundations.
- ML engineers who want to learn from the experiences and insights of others.
- Senior architects and decision-makers who want a quick run-through of concepts, implementation case studies, and an overview of tools.
- Masters and doctoral candidates who want to bridge the gap between academia and practice.
Proposals will be shortlisted and reviewed by an editorial team consisting of practitioners from the community. Make sure your abstract contains the following information:
- Key insights you will present, or takeaways for the audience.
- Overall flow of the content.
You must submit links to videos of talks you have delivered in the past, or record and upload a two-minute self-recorded video explaining what your talk is about and why it is relevant for this event.
Also consider submitting links to the following along with your proposal:
- A detailed outline, or
- Mindmap, explaining the structure of the talk, or
- Draft slides.
## Honorarium for selected speakers; travel grants
Selected speakers and workshop instructors will receive an honorarium of Rs. 3,000 each, at the end of their talk. We do not provide free passes for speakers’ colleagues and spouses.
Travel grants are available for domestic speakers. We evaluate each case on its merits, giving preference to women, people of non-binary gender, and Africans.
If you require a grant, mention this in the field where you add your location. Anthill Inside Miniconf is funded through ticket purchases and sponsorships; travel grant budgets vary.
Anthill Inside Miniconf – 24 November, 2017.
Hands-on workshops – 25 November, 2017.
For more information about speaking, Anthill Inside, sponsorships, tickets, or any other information contact email@example.com or call 7676332020.
## Inference in Deep Neural Networks
Much of the current focus is on training neural networks and on better architectures, but we pay less attention to inference because, well, we are busy getting our models to work. Yet a model's inference path typically runs millions of times more often than training, and inference often has to run on embedded devices. This talk will go into the details of how advances in hardware have made Deep Learning possible. We will also cover optimizations that can speed up computation when deploying a model on a CPU, and demystify the terms GeMM, SIMD, BLAS and SIMT along the way.
- Introduction to DL networks.
- What typical Deep Learning architectures look like.
- A short section, using one CNN and one LSTM as examples, on the mathematical operations they perform.
- Advancements in hardware:
  - Intel Knights-series CPUs.
  - NVIDIA Volta GPUs.
- How the operations are actually executed on garden-variety hardware:
  - Different types of architectures: CPUs and GPUs.
  - How these work, and their bottlenecks.
- The role memory access plays in speed:
  - How memory, rather than compute, is often the bottleneck.
- Changes made to algorithms to exploit these hardware features:
  - Example: Google's Inception V3 model.
  - Two different types of RNNs.
- How to make your model more efficient at inference:
  - Some practical examples.
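To give a flavour of the GeMM part of the outline: most frameworks lower convolution to a matrix multiply via an "im2col" unrolling, so that a single highly tuned BLAS call does the work. The sketch below is not from the talk; it is a minimal NumPy illustration of the idea, with a naive loop version for comparison (function names are our own).

```python
import numpy as np

def im2col(x, kh, kw):
    """Unroll every kh x kw patch of a 2-D input into a column,
    so that convolution becomes one matrix multiply (GeMM)."""
    H, W = x.shape
    out_h, out_w = H - kh + 1, W - kw + 1
    cols = np.empty((kh * kw, out_h * out_w))
    for i in range(out_h):
        for j in range(out_w):
            cols[:, i * out_w + j] = x[i:i + kh, j:j + kw].ravel()
    return cols

def conv2d_gemm(x, k):
    """'Valid' 2-D cross-correlation computed as im2col + GeMM."""
    kh, kw = k.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    # One BLAS-backed matmul replaces the nested convolution loops.
    return (k.ravel() @ im2col(x, kh, kw)).reshape(out_h, out_w)

def conv2d_direct(x, k):
    """Naive nested-loop cross-correlation, for checking the result."""
    kh, kw = k.shape
    out_h, out_w = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out
```

The trade-off im2col makes is extra memory (each input pixel is copied once per patch it appears in) in exchange for routing all the arithmetic through a GeMM that BLAS libraries optimize heavily with SIMD and cache blocking.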
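On the memory-versus-compute point in the outline, a rough back-of-the-envelope quantity is arithmetic intensity: FLOPs performed per byte moved to and from memory. The sketch below is our own illustration (not the speaker's material) of why a GeMM can keep the ALUs busy while an elementwise operation like SAXPY cannot; it counts only one read of each input and one write of the output, ignoring caches.

```python
def matmul_intensity(n, dtype_bytes=4):
    """FLOPs per byte for an n x n matrix multiply C = A @ B:
    2*n^3 FLOPs (n^3 multiply-adds) against reading A and B and
    writing C, i.e. 3*n^2 elements of traffic."""
    flops = 2 * n ** 3
    bytes_moved = 3 * n * n * dtype_bytes
    return flops / bytes_moved

def saxpy_intensity(n, dtype_bytes=4):
    """FLOPs per byte for y = a*x + y: 2 FLOPs per element against
    two reads and one write per element."""
    flops = 2 * n
    bytes_moved = 3 * n * dtype_bytes
    return flops / bytes_moved
```

For float32, a 1024x1024 GeMM does roughly 170 FLOPs per byte, while SAXPY does about 0.17 regardless of size. On hardware whose compute-to-bandwidth ratio exceeds an operation's intensity, that operation is memory-bound, which is why layers that are not matrix multiplies are often the ones limited by memory rather than compute.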
## Speaker bio
Saurabh has been working at MAD Street Den, Chennai, as a Machine Learning Engineer for the past year and a half, specifically on Deep Learning based products. He loves training Convolutional Neural Networks of all types and sizes for different applications. Apart from CNNs, he has a special interest in recurrent architectures and discovering their powers. When he is not working on DL, he loves to play around with micro-controllers.