About the event
When it comes to Machine Learning (ML), Deep Learning (DL) and Artificial Intelligence (AI), three aspects are crucial:
- Clarity of fundamental concepts.
- Insights and nuances when applying concepts to solve real-world problems.
- Knowledge of tools for automating ML and DL.
Anthill Inside Miniconf will provide understanding on each of these fronts.
This miniconf is a full day event consisting of:
- 3-4 talks each, on concepts, applications and tools.
- Birds of a Feather (BOF) sessions on focused topics.
We are accepting proposals for:
- 10 to 40-minute talks explaining fundamental concepts in math, statistics and data science.
- 20 to 40-minute talks on case studies and lessons learned when applying ML, DL and AI concepts in different domains / to solve diverse data-related problems.
- 10 to 20-minute talks on tools for ML and DL.
- Birds of a Feather (BOF) sessions on focused topics such as failure stories in ML, which problems / use cases ML and DL are suited to, and chatbots.
- 3-6 hour hands-on workshops on concepts and tools.
Hands-on workshops for 30-40 participants on 25 November will help participants internalize concepts and the practical aspects of working with tools.
Workshops will be announced shortly. Workshop tickets have to be purchased separately.
Target audience, and why you should attend this event
- ML engineers who want to learn concepts in maths and stats, and strengthen their foundations.
- ML engineers wanting to learn from experiences and insights of others.
- Senior architects and decision-makers who want a quick run-through of concepts, implementation case studies, and an overview of tools.
- Masters and doctoral candidates who want to bridge the gap between academia and practice.
Proposals will be shortlisted and reviewed by an editorial team consisting of practitioners from the community. Make sure your abstract contains the following information:
- Key insights you will present, or takeaways for the audience.
- Overall flow of the content.
You must submit links to videos of talks you have delivered in the past, or record and upload a two-minute self-recorded video explaining what your talk is about and why it is relevant for this event.
Also consider submitting links to the following along with your proposal:
- A detailed outline, or
- A mindmap explaining the structure of the talk, or
- Draft slides.
Honorarium for selected speakers; travel grants
Selected speakers and workshop instructors will receive an honorarium of Rs. 3,000 each, at the end of their talk. We do not provide free passes for speakers’ colleagues and spouses.
Travel grants are available for domestic speakers. We evaluate each case on its merits, giving preference to women, people of non-binary gender, and Africans.
If you require a grant, mention this in the field where you add your location. Anthill Inside Miniconf is funded through ticket purchases and sponsorships; travel grant budgets vary.
Anthill Inside Miniconf – 24 November, 2017.
Hands-on workshops – 25 November, 2017.
For more information about speaking, Anthill Inside, sponsorships, tickets, or any other information contact firstname.lastname@example.org or call 7676332020.
Deep Reinforcement Learning: A hands-on approach
Deep Reinforcement Learning has surged in popularity since the arrival of DeepMind's DQN and AlphaGo. Algorithms that learn to play games as well as (and sometimes better than) humans seem very complex from a distance; this workshop unravels the mathematical workings of such models through simple steps, providing an intuitive introduction to Reinforcement Learning and a path towards Deep RL.
This session gives a gentle introduction to Reinforcement Learning for beginners, moving from simple Dynamic Programming-based approaches to Deep RL methods that leverage a variety of Deep Learning techniques, including but not limited to convolutions, recurrent architectures and attention mechanisms. All demonstrations use Keras (with a TensorFlow backend) running in a Jupyter Notebook. Some advanced experiments involve OpenAI Gym (a library for simulating game environments) and/or PyGame. The workshop is divided into the following sections:
- Introduction (30 mins)
- Introduction to Reinforcement Learning and problem description.
- Intuition about observation-reward based learning and policy evaluation.
- Markov Decision Process (MDP)
- Q-Learning (45 mins)
- Discussion about Value learning and Q-learning.
- Understanding Q-learning with a grid world (toy problem).
- Learning about playing games from visual input.
- Deep Q-learning (45 mins)
- Improvement over simple Q-learning (Approximate Q-learning)
- SARSA with Deep Learning: a simple example on a toy game (Tic-Tac-Toe or Pong).
- DQN for Atari games, a brief discussion
- Final Project (~40 mins)
- Implement the above algorithms on a few games.
- Games include Flappy Bird, 2048, etc.
- Discussion about what to do next and how the field is progressing.
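As a taste of the Q-learning section above, here is a minimal, self-contained sketch of tabular Q-learning on a tiny grid world in pure Python. The grid layout, reward values and hyperparameters are illustrative choices, not the workshop's actual material (which uses Keras, OpenAI Gym and larger games).

```python
import random

# Toy 4x4 grid world: start at (0, 0), goal at (3, 3).
# Actions: 0=up, 1=down, 2=left, 3=right. Reward: +1 on reaching the goal.
SIZE = 4
GOAL = (SIZE - 1, SIZE - 1)
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]

def step(state, action):
    """Apply an action; moving off the grid leaves the state unchanged."""
    dr, dc = ACTIONS[action]
    r = min(max(state[0] + dr, 0), SIZE - 1)
    c = min(max(state[1] + dc, 0), SIZE - 1)
    new_state = (r, c)
    reward = 1.0 if new_state == GOAL else 0.0
    return new_state, reward, new_state == GOAL

def train(episodes=500, alpha=0.5, gamma=0.9, epsilon=0.1):
    # Q-table: maps (state, action) -> estimated discounted return.
    Q = {((r, c), a): 0.0
         for r in range(SIZE) for c in range(SIZE) for a in range(4)}
    for _ in range(episodes):
        state, done = (0, 0), False
        while not done:
            # Epsilon-greedy action selection.
            if random.random() < epsilon:
                action = random.randrange(4)
            else:
                action = max(range(4), key=lambda a: Q[(state, a)])
            new_state, reward, done = step(state, action)
            # Q-learning update:
            # Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            best_next = max(Q[(new_state, a)] for a in range(4))
            Q[(state, action)] += alpha * (reward + gamma * best_next
                                           - Q[(state, action)])
            state = new_state
    return Q

random.seed(0)
Q = train()
greedy = max(range(4), key=lambda a: Q[((0, 0), a)])
print("greedy action at start:", greedy)
```

After training, the greedy policy at the start state heads down or right, towards the goal; the same epsilon-greedy loop and update rule carry over to Deep Q-learning, where a neural network replaces the table.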
After this workshop, you should be able to build your own reinforcement learning models for solving semi-supervised tasks and catch up with recent research in Deep Reinforcement Learning. The workshop closes with a discussion on how to progress further and time for questions.
- Basic knowledge of calculus and linear algebra (matrix operations, vectors, etc.)
- Familiarity with at least one programming language (Python preferred).
- Basic programming concepts such as functions, loops, conditionals and classes (OOP concepts).
- A laptop with a Python IDE.
The following libraries are required for the final project and running the demo:
- OpenAI gym
- OpenCV (optional)
Code will be provided in the form of a repository at the beginning of the session so that participants may interact with the demos while adding their own code snippets.
Shubham Dokania is currently a Research Intern at Ambient.ai (https://ambient.ai/). Recently he was at CVIT, IIIT-H, working under the supervision of Dr. C.V. Jawahar. Previously, he was a Machine Learning Instructor at Coding Blocks while working in parallel as a research intern at IIIT Delhi, supervised by Dr. Ganesh Bagler. He completed his B.Tech in Mathematics and Computing from Delhi Technological University (formerly DCE) in 2017. At Coding Blocks, he taught undergraduate/graduate students and industry professionals the techniques of Machine Learning, with an inclination towards the research background of the methods discussed. Shubham has also spoken at PyData Delhi 2017 and at local meetups including the PyData Delhi Meetup, DTU workshop sessions and bootcamps across New Delhi.
- github: https://github.com/shubham1810
- Previous Deep RL workshop: https://github.com/shubham1810/PyDataDelhi_RL_workshop