Sep 2017 schedule: Tue 12 Sep, 08:30 AM – 05:20 PM IST; Wed 13 Sep, 08:30 AM – 05:30 PM IST
A Naveen Kumar
Training task-specific deep learning models has become easy, given the wide range of available libraries and the documentation around them. The difficulty lies in getting those models production-ready, especially when the application targets mobile platforms.
Wrappers exist for some of these libraries that let them run on a phone, but as of now they are slow and use up almost the entire memory of the device.
In this talk, I will explain what can be done to make inference faster and how to reduce model size. The aim is to give insights into the difficulties that lie ahead and show how to build your own libraries on both iOS and Android.
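To make the size-reduction idea concrete, here is a minimal sketch (not part of the original proposal) of one common approach: freezing a trained TensorFlow 1.x graph and quantizing its weights to 8 bits before shipping the file to a phone. The checkpoint path and the node names "input" and "output" are hypothetical placeholders for your own model.

    # Sketch: shrink a trained TensorFlow 1.x model for mobile deployment.
    # Assumes a checkpoint trained elsewhere; node names are placeholders.
    import tensorflow as tf
    from tensorflow.tools.graph_transforms import TransformGraph

    with tf.Session() as sess:
        saver = tf.train.import_meta_graph("model.ckpt.meta")
        saver.restore(sess, "model.ckpt")

        # Fold variables into constants so the graph is self-contained.
        frozen = tf.graph_util.convert_variables_to_constants(
            sess, sess.graph_def, ["output"])

    # Strip training-only nodes and quantize weights to 8 bits; this
    # typically cuts the on-disk size to roughly a quarter.
    transforms = ["strip_unused_nodes", "fold_constants", "quantize_weights"]
    slim_graph = TransformGraph(frozen, ["input"], ["output"], transforms)

    with tf.gfile.GFile("model_mobile.pb", "wb") as f:
        f.write(slim_graph.SerializeToString())

The resulting self-contained graph file is what the on-device runtime (on either iOS or Android) would load; the talk's platform-specific sections cover that step.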
What is deep learning?
5 minutes: introduction and explanation
What are the difficulties in pushing models into mobile production?
10 minutes
How to solve it on iOS?
5 minutes
How to solve it on Android?
5–10 minutes
Conclusion
5 minutes
Basic understanding of AI and its usage
I am a member of the data science team at Semantics3, which builds data-powered software for ecommerce-focused companies. Over the years I have had the chance to work on various aspects of deep learning; one such case was running models on mobile. We built an app named Flo, which was featured by Apple on their Twitter page for using AI and their framework to make it run fast.