Thu 25 Jul 2019, 09:15 AM – 05:45 PM IST; Fri 26 Jul 2019, 09:20 AM – 05:30 PM IST
Most ML effort stalls at building ad hoc models with only thin layers of customization around them. This is mostly okay, but it usually comes with no guarantees about elasticity, uptime or even accuracy (since updating models is non-trivial), all of which are crucial to the business. This BoF invites the audience to discuss problems, paradigms and best practices around deploying machine learning to production.
## ML in Production - Early Stages vs At Scale
The easiest way to deploy a model is usually to expose a serialized version of it to applications over HTTP. This does not scale well, for several reasons. APIs grow more complex as applications demand ML functionality beyond making predictions. As models grow in size, they become more expensive to host and serve. As web traffic grows, models must remain performant not only in how well and how fast they predict, but also in how available they are. Moreover, not every model can be sensibly deployed behind a REST API in the first place. In many systems there are multiple models and multiple endpoints, which eventually become difficult to manage.
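For concreteness, here is a minimal sketch of that pattern, assuming a scikit-learn model serialized to `model.pkl` with joblib; the file name and the `/predict` route are illustrative, not prescriptive.

```python
# Minimal sketch: a serialized model behind an HTTP endpoint.
# Assumes a scikit-learn model saved as model.pkl (hypothetical name).
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.pkl")  # loaded once, at process startup

@app.route("/predict", methods=["POST"])
def predict():
    # Expects JSON like {"features": [[5.1, 3.5, 1.4, 0.2]]}
    features = request.get_json()["features"]
    prediction = model.predict(features)
    return jsonify({"prediction": prediction.tolist()})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```

This is exactly where the scaling problems above begin: the model is loaded into every worker process, the request schema hardens into an API contract, and availability is bounded by this single service.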
## ML as a Service vs ML as a Product
As ML gets more commoditized, organizations inevitably find themselves drawn into the service vs product debate. MLaaS solutions are highly modular and customizable, but place a knowledge overhead on practitioners. ML products, on the other hand, are well defined, which makes them easier to build in an Agile sense with a relatively lower overhead of knowledge and skills, but they can be too rigid and monolithic. The skills and effort that go into these two paradigms can differ widely, forcing teams and organizations to rethink their approach often.
## ML on Cloud vs ML on Premise
Both the abstractions that cloud services provide over primitive ML infrastructure and the self-contained production environments that on-premise solutions require come with a significant learning curve, and the two paradigms place different but equally demanding constraints on data scientists and application developers. Cloud vs on-premise may not always be a genuine choice, but it becomes relevant when organizations have to prioritize the kind of expertise they want to develop.
## Data Scientists, Application Developers and SysAdmins
This is more a cultural issue than a technical one. Going from a Jupyter notebook to a packaged, deployable solution is messy at best. Data scientists do work with developers and sysadmins more often than before, and the boundaries between job descriptions are thinning, but the collaboration is not as frictionless as we'd like it to be.
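One hedged illustration of what the notebook-to-package transition can look like, with every name (`churn_model`, `churn-model`) hypothetical: the notebook's cells get split into importable modules, and a `setup.py` makes the result installable for developers and reproducible for sysadmins.

```python
# A hypothetical "after" snapshot of a refactored notebook project:
#
# churn_model/
#   __init__.py
#   features.py    # feature engineering pulled out of notebook cells
#   train.py       # python -m churn_model.train writes model.pkl
#   serve.py       # the HTTP layer from the earlier sketch
# tests/
#   test_features.py
# setup.py         # this file, at the repository root
from setuptools import find_packages, setup

setup(
    name="churn-model",
    version="0.1.0",
    packages=find_packages(),
    install_requires=["scikit-learn", "flask", "joblib"],
)
```

Even a small step like this changes who can run, test and deploy the code, which is where the cultural friction between the three roles tends to surface.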