Anthill Inside 2019

A conference on AI and Deep Learning


Sherin Thomas

@hhsecond

Productionizing deep learning workflow with Hangar, <frameworkOfYourChoice> & RedisAI

Submitted Apr 26, 2019

Managing a DL workflow is always a nightmare. The problems include handling scale, efficient resource utilization, version controlling the data, and so on. With Hangar we can now keep the data in check: not as an opaque blob but as tensors in the data store, under proper version control. The super flexible PyTorch gives us the advantage of faster prototyping and smoother iteration. The prototype model can then be pushed to RedisAI, a highly optimized production runtime, as TorchScript, and serving can be scaled out to a multi-node Redis Cluster or Redis Sentinel setup, with high availability of course.
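As a rough illustration of the prototype-to-export step described above, here is a minimal sketch, assuming a toy PyTorch model; the TinyNet module and the file name tinynet.pt are purely illustrative and not part of the talk material. It traces the model into TorchScript so the same artifact can later be loaded by RedisAI.

import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """A toy model standing in for whatever was prototyped in PyTorch."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(4, 2)

    def forward(self, x):
        return torch.relu(self.fc(x))

model = TinyNet().eval()
example = torch.rand(1, 4)

# torch.jit.trace records the ops executed on the example input and produces a
# serialized TorchScript module that can run outside the Python runtime.
traced = torch.jit.trace(model, example)
traced.save("tinynet.pt")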

Outline

Even with the advances the community has made in the past couple of years, a DL engineer's problems start right at the beginning, when they think about version controlling their data and models. None of the toolsets available right now provides the platform that "git" gave programmers years ago. With the arrival of Hangar and its Python APIs, we are now moving ahead of the game.

Even with a version control system like Hangar in place, DL folks still struggle with production deployment from their framework of choice. PyTorch is the flexible, easy framework that deep learning developers love, but without a plug-and-play deployment toolkit it has always struggled to attract people who want to ship their models to production. Meanwhile, RedisAI is trying to solve the problem every deep learning engineer faces when they try to scale. Keeping the production environment highly available is probably the most daunting task for DevOps experts, especially when there is a deep learning service in production. RedisAI, together with its integration with the other Redis tools in the ecosystem, aims to bring a super easy deployment platform to the user.

In the talk, I'll present the killer combination of PyTorch and RedisAI and explain how we can build a highly optimized DL model with LibTorch without losing the flexibility PyTorch provides, and how to ship it to production without even having to write a wrapper Flask/Django service around the model. I'll also show how to put this deployment on a multi-node cluster and make sure you always have 100% availability. In a nutshell, the talk covers a deep learning workflow by introducing three toolkits that make the whole pipeline seamless.
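A hedged sketch of the serving step follows, assuming a Redis server with the RedisAI module loaded on localhost:6379 and the redisai Python client. The key names are illustrative, and the method names (modelset, tensorset, modelrun, tensorget) follow the client API from around the time of this talk; exact signatures may differ between client versions.

import numpy as np
import redisai as rai

# Connect to a RedisAI-enabled Redis instance (assumed to be running locally).
con = rai.Client(host="localhost", port=6379)

# Load the serialized TorchScript module into RedisAI under the key "tinynet".
with open("tinynet.pt", "rb") as f:
    con.modelset("tinynet", "torch", "cpu", data=f.read())

# Push an input tensor, run the model inside Redis, and read the output back.
con.tensorset("input", np.random.rand(1, 4).astype(np.float32))
con.modelrun("tinynet", inputs=["input"], outputs=["output"])
print(con.tensorget("output"))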

Requirements

The audience should have basic Python knowledge and a brief understanding of deep learning.

Speaker bio

I work as part of the development team at tensorwerk, an infrastructure development company focusing on deep learning deployment problems. My team and I build open source tools for setting up a seamless deep learning workflow. I have been programming since 2012, started using Python in 2014, and moved to deep learning in 2015. I am an open source enthusiast, and I spend most of my research time on improving the interpretability of AI models using TuringNetwork. I am part of the core development team of Hangar and RedisAI and a regular contributor to the PyTorch source. I have also authored a deep learning book. I go by hhsecond on the internet.

Slides

https://docs.google.com/presentation/d/1G7lIHEluM8SLdWiGp8As9Efcq-4GHAU5xXGHCZYvvFg/edit?usp=sharing


Hosted by

Anthill Inside is a forum for conversations about risk mitigation and governance in Artificial Intelligence and Deep Learning. AI developers, researchers, startup founders, ethicists, and AI enthusiasts are encouraged to: …