Anthill Inside Miniconf – Pune

Machine Learning, Deep Learning and Artificial Intelligence: concepts, applications and tools.

##About the event

When it comes to Machine Learning (ML), Deep Learning (DL) and Artificial Intelligence (AI), three aspects are crucial:

  • Clarity of fundamental concepts.
  • Insights and nuances when applying concepts to solve real-world problems.
  • Knowledge of tools for automating ML and DL.

Anthill Inside Miniconf will provide an understanding of each of these fronts.

##Format

This miniconf is a full day event consisting of:

  1. 3-4 talks each on concepts, applications and tools.
  2. Birds of a Feather (BOF) sessions on focused topics.

We are accepting proposals for:

  • 10 to 40-minute talks explaining fundamental concepts in math, statistics and data science.
  • 20 to 40-minute talks on case studies and lessons learned when applying ML, DL and AI concepts in different domains / to solve diverse data-related problems.
  • 10 to 20-minute talks on tools for ML and DL.
  • Birds of a Feather (BOF) sessions on failure stories in ML, which problems / use cases warrant ML and DL, and chatbots.
  • 3-6 hour hands-on workshops on concepts and tools.

##Hands-on workshops

Hands-on workshops for 30-40 participants on 25 November will help participants internalize concepts and the practical aspects of working with tools.
Workshops will be announced shortly. Workshop tickets have to be purchased separately.

##Target audience, and why you should attend this event

  1. ML engineers who want to learn concepts in math and statistics, and strengthen their foundations.
  2. ML engineers who want to learn from the experiences and insights of others.
  3. Senior architects and decision-makers who want a quick run-through of concepts, implementation case studies, and an overview of tools.
  4. Masters and doctoral candidates who want to bridge the gap between academia and practice.

##Selection process

Proposals will be shortlisted and reviewed by an editorial team consisting of practitioners from the community. Make sure your abstract contains the following information:

  1. Key insights you will present, or takeaways for the audience.
  2. Overall flow of the content.

You must submit links to videos of talks you have delivered in the past, or record and upload a two-minute self-recorded video explaining what your talk is about and why it is relevant for this event.

Also consider submitting links to the following along with your proposal:

  1. A detailed outline, or
  2. A mindmap explaining the structure of the talk, or
  3. Draft slides.

##Honorarium for selected speakers; travel grants

Selected speakers and workshop instructors will receive an honorarium of Rs. 3,000 each, at the end of their talk. We do not provide free passes for speakers’ colleagues and spouses.

Travel grants are available for domestic speakers. We evaluate each case on its merits, giving preference to women, people of non-binary gender, and Africans.
If you require a grant, mention this in the field where you add your location. Anthill Inside Miniconf is funded through ticket purchases and sponsorships; travel grant budgets vary.

##Important dates

Anthill Inside Miniconf – 24 November, 2017.
Hands-on workshops – 25 November, 2017.

##Contact details:
For more information about speaking, sponsorships, tickets, or anything else related to Anthill Inside, contact support@hasgeek.com or call 7676332020.

Hosted by

Anthill Inside is a forum for conversations about risk mitigation and governance in Artificial Intelligence and Deep Learning. AI developers, researchers, startup founders, ethicists, and AI enthusiasts are encouraged to participate.

saurabh agarwal

@saurabh_agl

Inference in Deep Neural Networks

Submitted Nov 1, 2017

A lot of focus currently goes into training neural networks and designing better architectures, but we don’t pay as much attention to inference, largely because we are busy making our models work. Inference runs millions of times more often than training, and a lot of the time it has to run on embedded devices. This talk will go into the details of how advancements in hardware have made Deep Learning possible. We will also discuss optimizations that can speed up computation when deploying a model on a CPU. We will demystify the terms GeMM, SIMD, BLAS and SIMT along the way.
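To make the GeMM and BLAS references above concrete, here is a minimal, hypothetical sketch (not taken from the talk or slides) of how a 2-D convolution is commonly lowered to a single matrix multiplication via im2col, which a BLAS library then executes with an optimized GeMM kernel. All function names, shapes and numbers are illustrative assumptions.

```python
# Sketch: lowering a 2-D convolution to im2col + GeMM (stride 1, no padding).
# np.dot / @ dispatches the matrix multiply to the underlying BLAS library.
import numpy as np

def im2col(x, kh, kw):
    """Unroll a (C, H, W) input into a (C*kh*kw, out_h*out_w) matrix."""
    c, h, w = x.shape
    out_h, out_w = h - kh + 1, w - kw + 1
    cols = np.empty((c * kh * kw, out_h * out_w), dtype=x.dtype)
    idx = 0
    for ci in range(c):
        for i in range(kh):
            for j in range(kw):
                # Each row holds one (channel, offset) slice across all output positions.
                cols[idx] = x[ci, i:i + out_h, j:j + out_w].reshape(-1)
                idx += 1
    return cols, out_h, out_w

def conv2d_as_gemm(x, weights):
    """weights: (num_filters, C, kh, kw). Returns (num_filters, out_h, out_w)."""
    n_f, c, kh, kw = weights.shape
    cols, out_h, out_w = im2col(x, kh, kw)   # (C*kh*kw, out_h*out_w)
    w_mat = weights.reshape(n_f, -1)         # (num_filters, C*kh*kw)
    out = w_mat @ cols                       # the GeMM
    return out.reshape(n_f, out_h, out_w)

# Tiny usage example with made-up sizes.
x = np.random.rand(3, 8, 8).astype(np.float32)    # C=3, H=W=8
w = np.random.rand(4, 3, 3, 3).astype(np.float32) # 4 filters of 3x3x3
print(conv2d_as_gemm(x, w).shape)                 # (4, 6, 6)
```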

Outline

  • Intro to DL networks.
    • How typical Deep Learning architectures look.
    • A small section, using the example of one CNN and one LSTM, on what mathematical operations they perform.
  • Advancements in hardware
    • Intel Knights CPUs
    • Nervana
    • Volta GPUs
  • How exactly the operations are done on garden-variety hardware
    • SIMD
    • SIMT
    • GeMM
  • Different types of architectures
    • CPUs and GPUs
      • How these work, and their bottlenecks
  • The role memory access plays in speed
    • How memory, rather than compute, is often the bottleneck (a back-of-envelope sketch follows this outline)
  • Changes made in algorithms to utilise these functionalities
    • Example of Google’s Inception V3 model
    • Two different types of RNNs
  • Advice
    • How to make your model more efficient at inference.
    • Some practical examples
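As a rough illustration of the memory-versus-compute point in the outline above, here is a hypothetical back-of-envelope arithmetic-intensity calculation. It is not taken from the talk; the layer sizes and hardware ratios are assumptions chosen only to show why a batch-1 GeMM or an elementwise op can be memory-bound at inference time.

```python
# Back-of-envelope arithmetic intensity (FLOPs per byte moved), with made-up sizes.
BYTES_PER_FP32 = 4

def gemm_intensity(m, n, k):
    """FLOPs per byte for C(m,n) = A(m,k) @ B(k,n), counting each matrix moved once."""
    flops = 2 * m * n * k                                  # one multiply + one add per term
    bytes_moved = (m * k + k * n + m * n) * BYTES_PER_FP32
    return flops / bytes_moved

def elementwise_intensity(n):
    """FLOPs per byte for y = relu(x): ~1 FLOP per element, read x and write y."""
    return n / (2 * n * BYTES_PER_FP32)

# Batch size 1 at inference makes the GeMM tall and skinny, so intensity drops sharply.
print("GeMM, batch 32:", round(gemm_intensity(32, 4096, 4096), 1), "FLOPs/byte")
print("GeMM, batch 1 :", round(gemm_intensity(1, 4096, 4096), 2), "FLOPs/byte")
print("ReLU          :", round(elementwise_intensity(4096), 3), "FLOPs/byte")
# A machine offering roughly 100 GFLOP/s per 10 GB/s of bandwidth needs about
# 10 FLOPs/byte to stay compute-bound; the batch-1 GeMM and ReLU sit well below that.
```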

Speaker bio

Saurabh has been working at MAD Street Den, Chennai as a Machine Learning Engineer for the past year and a half, specifically on Deep Learning based products. He loves to train Convolutional Neural Networks of all types and sizes for different applications. Apart from CNNs, he has a special interest in recurrent architectures and discovering their powers. When he is not working on DL, he loves to play around with microcontrollers.

Slides

https://docs.google.com/presentation/d/e/2PACX-1vS3npIWr-HCcKgpodNvp1-RI3gBUaXAXFS94FhYA6AtNuSbDkvPrSOYSRUni9vyYNzeIM5wBEk_kFqT/pub?start=false&loop=false&delayms=3000
