Anthill Inside 2019

A conference on AI and Deep Learning

Make a submission

Accepting submissions till 01 Nov 2019, 04:20 PM

Taj M G Road, Bangalore

About the 2019 edition:

The schedule for the 2019 edition is published here: https://hasgeek.com/anthillinside/2019/schedule

The conference has three tracks:

  1. Talks in the main conference hall
  2. Poster sessions featuring novel ideas and projects
  3. Birds of a Feather (BOF) sessions for practitioners who want to use the Anthill Inside forum to discuss:
    - Myths and realities of labelling datasets for Deep Learning.
    - Practical experience with using Knowledge Graphs for different use cases.
    - Interpretability and its application in different contexts; challenges with GDPR and interpreting datasets.
    - Pros and cons of using custom and open source tooling for AI/DL/ML.

Who should attend Anthill Inside:

Anthill Inside is a platform for:

  1. Data scientists
  2. AI, DL and ML engineers
  3. Cloud providers
  4. Companies which make tooling for AI, ML and Deep Learning
  5. Companies working with NLP and Computer Vision who want to share their work and learnings with the community

For inquiries about tickets and sponsorships, call Anthill Inside on 7676332020 or write to sales@hasgeek.com


Sponsors:

Sponsorship slots for Anthill Inside 2019 are open.


Anthill Inside 2019 sponsors:


Bronze Sponsors

  - iMerit
  - Impetus

Community Sponsors

  - GO-JEK
  - iPropal
  - LightSpeed
  - Semantics3
  - Google
  - Tact.AI
  - Amex

Hosted by

Anthill Inside is a forum for conversations about Artificial Intelligence and Deep Learning, including tools, techniques, and approaches for integrating AI and Deep Learning into products and businesses, and engineering for AI.

Sherin Thomas

@hhsecond

Productionizing deep learning workflow with Hangar, <frameworkOfYourChoice> & RedisAI

Submitted Apr 26, 2019

Managing a DL workflow is always a nightmare. Problems include handling scale, efficient resource utilization, and version controlling data. With Hangar, we can now keep data in check: stored not as blobs but as tensors in the data store, and versioned. The super flexible PyTorch gives us the advantage of faster prototyping and smoother iteration. The model prototype can then be pushed as TorchScript to RedisAI, a highly optimized production runtime, and serving can scale to a multi-node Redis Cluster or Redis Sentinel setup, with high availability of course.
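As a sketch of the export step described above: an eager PyTorch module can be traced into TorchScript and serialized to a blob that RedisAI (which embeds LibTorch) can execute without a Python runtime. The module and names here are illustrative, not from the talk.

```python
import io
import torch

# A toy model standing in for your prototype network.
class TinyNet(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = torch.nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
example = torch.rand(1, 4)

# Trace the eager model into TorchScript.
scripted = torch.jit.trace(model, example)

# Serialize to an in-memory blob; this is what gets loaded into RedisAI.
buf = io.BytesIO()
torch.jit.save(scripted, buf)
blob = buf.getvalue()

# The traced module agrees with the eager one on the example input.
assert torch.allclose(scripted(example), model(example))
```

`torch.jit.script` can be used instead of `trace` when the model has data-dependent control flow.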

Outline

Even with the advances the community has made in the past couple of years, a DL engineer's problems start right at the beginning, when they think about version controlling their data and models. No toolset available right now gives data the kind of platform that "git" gave programmers years ago. With the arrival of Hangar and its Python APIs, we are moving ahead of the game.

Even with a version control system like Hangar in place, DL folks still struggle with production deployment in their framework of choice. PyTorch is the flexible, easy framework that deep learning developers love, but without a plug-and-play deployment toolkit it has always struggled to attract people who want to put their models into production. Meanwhile, RedisAI is trying to solve the problems every deep learning engineer faces when they try to scale. Keeping the production environment highly available is probably the most daunting task DevOps experts worry about, especially with a deep learning service in production. RedisAI, together with its integration with the other Redis tools in the ecosystem, aims to give users a super easy deployment platform.

In this talk, I'll present the combination of PyTorch and RedisAI and explain how we can build highly optimized DL models with LibTorch without losing the flexibility PyTorch provides, and how to ship them to production without having to write a wrapper Flask/Django service for the model. I'll also show how to run this deployment on a multi-node cluster for high availability. In a nutshell, the talk covers a deep learning workflow by introducing three toolkits that make the whole pipeline seamless.
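The serving side of the workflow above might look like the following sketch. It assumes a RedisAI-enabled Redis server on localhost:6379 and the redisai-py client; the method names follow the 2019-era client and may differ in newer releases, and `model.pt` is a hypothetical file holding a serialized TorchScript model.

```python
import numpy as np
import redisai as rai

con = rai.Client(host='localhost', port=6379)

# Load the TorchScript blob produced during prototyping.
with open('model.pt', 'rb') as f:
    blob = f.read()

# Register the model with the TORCH backend on CPU.
con.modelset('mymodel', 'torch', 'cpu', blob)

# Set an input tensor, run the model inside Redis, read the output back.
con.tensorset('in', np.random.rand(1, 4).astype(np.float32))
con.modelrun('mymodel', inputs=['in'], outputs=['out'])
result = con.tensorget('out')
```

Because the model lives inside Redis, the same commands work against a single node, a Redis Cluster, or a Sentinel-managed deployment; no wrapper web service is needed around the model.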

Requirements

The audience should have basic python knowledge and a brief understanding of deep learning.

Speaker bio

I work on the development team at tensorwerk, an infrastructure development company focusing on deep learning deployment problems. My team and I build open source tools for setting up a seamless deep learning workflow. I have been programming since 2012, started using Python in 2014, and moved to deep learning in 2015. I am an open source enthusiast, and I spend most of my research time on improving the interpretability of AI models using TuringNetwork. I am part of the core development teams of Hangar and RedisAI and a regular contributor to the PyTorch source. I have also authored a deep learning book. I go by hhsecond on the internet.

Links

Slides

https://docs.google.com/presentation/d/1G7lIHEluM8SLdWiGp8As9Efcq-4GHAU5xXGHCZYvvFg/edit?usp=sharing

