Anthill Inside 2017

On theory and concepts in Machine Learning, Deep Learning and Artificial Intelligence. Formerly Deep Learning Conf.


Apache MXNet, a highly memory efficient deep learning framework

Submitted by Girish Patil (@bookworm) on Saturday, 22 July 2017

Section: Crisp talk Technical level: Intermediate


Abstract

GPU memory is one of the most expensive deep learning resources. MXNet was designed to enable complex deep learning with minimal GPU memory requirements, which makes it possible to train complex models on affordable hardware. This session will discuss how MXNet achieves its low memory footprint, along with other useful features of this rapidly emerging framework.
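The core idea behind MXNet's low memory footprint is recomputing intermediate activations during the backward pass instead of storing them all. The sketch below is a hedged, pure-Python illustration of that general technique (activation recomputation); the function names are illustrative and are not MXNet APIs.

```python
# Illustrative sketch of trading compute for memory: instead of storing
# every layer's activation for backprop (O(n) memory for n layers), keep
# only the input and replay the forward pass inside the backward pass.
# This is the general idea behind MXNet's memory optimization, not its
# actual implementation.

def backward_with_recompute(x, layers, grad_fns, grad_out):
    """Backward pass that recomputes activations rather than storing them.

    layers[i]   : forward function of layer i
    grad_fns[i] : maps (input of layer i, upstream grad) -> grad w.r.t. input
    """
    g = grad_out
    for i in reversed(range(len(layers))):
        # Recompute the input to layer i by replaying layers 0..i-1
        # from the single stored checkpoint x (extra compute, less memory).
        a = x
        for f in layers[:i]:
            a = f(a)
        g = grad_fns[i](a, g)  # chain rule step for layer i
    return g

# Toy network: y = (2 * x) + 3, so dy/dx = 2 everywhere.
layers = [lambda v: 2.0 * v, lambda v: v + 3.0]
grad_fns = [lambda a, g: 2.0 * g, lambda a, g: g]
print(backward_with_recompute(5.0, layers, grad_fns, 1.0))
```

The tradeoff is explicit here: each backward step replays part of the forward pass, so total compute grows while peak memory stays near a single activation.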

Outline

What MXNet is, where it came from, and who uses it
How it supports both imperative and symbolic (declarative) paradigms
How it minimizes memory footprint by trading compute for memory, and the impact of this tradeoff
Other useful features of MXNet, such as ease of programming, ease of distributed training, and the ability to use Caffe and Torch layers
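The two paradigms in the outline can be contrasted with a toy example. In the imperative style each operation runs immediately; in the symbolic style you first declare a computation graph and then bind inputs to execute it. The `Symbol` class below is a hedged, minimal stand-in for illustration only; MXNet's real APIs for these styles are `mxnet.ndarray` and `mxnet.symbol`.

```python
# Imperative style: each line executes immediately, like NumPy.
a = 2.0
b = a * 3.0 + 1.0  # computed right away

# Symbolic (declarative) style: build a graph first, execute later.
# Toy Symbol class for illustration -- not MXNet's actual API.
class Symbol:
    def __init__(self, fn):
        self.fn = fn  # deferred computation: env -> value

    def __mul__(self, k):
        return Symbol(lambda env: self.fn(env) * k)

    def __add__(self, k):
        return Symbol(lambda env: self.fn(env) + k)

    def eval(self, **env):
        return self.fn(env)  # bind inputs and run the whole graph

x = Symbol(lambda env: env["x"])  # placeholder: no value yet
y = x * 3.0 + 1.0                 # builds the graph; nothing runs here
print(y.eval(x=2.0))              # executes the graph with x bound to 2.0
```

Deferring execution is what lets a symbolic framework see the whole graph before running it, which is exactly where whole-graph memory optimizations become possible.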

Requirements

None, other than being able to control my slides

Speaker bio

Girish Patil works as a Senior Solutions Architect for AWS. He focuses on deep learning, traditional machine learning, and big data projects, and helps many of India's most successful start-ups adopt these technologies. Girish is also a subject matter expert within Amazon on these technologies and regularly participates in global knowledge exchange programs.

Girish's other interests include building internet-scale applications. He also leads Developer Evangelism initiatives for AWS, as well as Innovation Pavilion initiatives to promote young high-tech start-ups from India.

Slides

https://www.slideshare.net/AIFrontiers/scaling-deep-learning-with-mxnet (slides 73 to 97 from the full presentation, plus some additional slides for the deep dive)
