The Fifth Elephant 2019

Gathering of 1000+ practitioners from the data ecosystem


Birds of a Feather: ML in production

Submitted by simrat (@sims) via Abhishek Balaji (@booleanbalaji) on Sunday, 21 July 2019

Session type: Birds of a Feather (BoF) session of 1 hour


Abstract

Most ML effort stagnates at the stage of building ad-hoc models, with only thin layers of customization around them. This may suffice early on, but it offers no guarantees about elasticity, uptime or even accuracy (since updating models is non-trivial) - all of which are crucial to the business. This BoF invites the audience to discuss problems, paradigms and best practices around deploying machine learning to production.

Outline

ML In Production - Early Stages vs At Scale

Usually the easiest way to deploy a model is to expose a serialized version of it to applications over HTTP. This does not scale well, for several reasons. APIs may become more complex as applications demand ML functionality beyond simply making predictions. As models grow in size, they become more expensive to deploy and host. As web traffic grows, models must keep up not only in prediction speed but also in availability. Moreover, not all models can be easily deployed behind a REST API in the first place. And in many cases there are multiple models and multiple endpoints, which eventually become difficult to manage.
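As a concrete starting point for discussion, here is a minimal sketch of the "serialized model behind HTTP" pattern described above, using only the Python standard library. The model is a stand-in (a hand-rolled linear scorer); the proposal does not name a framework, but in practice this slot is usually filled by a pickled scikit-learn or similar estimator.

```python
# Minimal sketch: a pickled model served over HTTP (stdlib only).
# TinyModel is a hypothetical stand-in for a trained estimator.
import json
import pickle
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer
from urllib.request import Request, urlopen


class TinyModel:
    """Stand-in for a trained estimator: y = 2*x + 1."""
    def predict(self, xs):
        return [2 * x + 1 for x in xs]


# "Serialize" the model exactly as an ad-hoc deployment would,
# then load it once at server startup.
MODEL = pickle.loads(pickle.dumps(TinyModel()))


class PredictHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        features = json.loads(body)["x"]
        payload = json.dumps({"y": MODEL.predict(features)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):  # keep the demo quiet
        pass


# Start the server on an ephemeral port in a background thread.
server = ThreadingHTTPServer(("127.0.0.1", 0), PredictHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

# One client call, as an application would make it.
req = Request(f"http://127.0.0.1:{port}/predict",
              data=json.dumps({"x": [1, 2, 3]}).encode(),
              headers={"Content-Type": "application/json"})
resp = json.loads(urlopen(req).read())
server.shutdown()
print(resp)  # {'y': [3, 5, 7]}
```

Every scaling concern listed above is visible even in this toy: the model loads into one process, the API surface is a single hand-written endpoint, and availability is whatever this one server provides.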

ML as a Service vs ML as a Product

As ML gets more commoditized, organizations inevitably find themselves drawn into the service vs product debate. MLaaS solutions are highly modular and customizable, but place a knowledge overhead on practitioners. ML products, on the other hand, are well defined, making them easier to build in an Agile sense with a relatively lower overhead on knowledge and skills - but they can be too rigid and monolithic. The skills and effort that go into these two paradigms can differ widely, often causing teams and organizations to rethink their approach.

ML on Cloud vs ML on Premise

Both the abstractions that cloud services provide around primitive ML infrastructure and the self-contained production environments that on-premise solutions require come with a significant learning curve, and each paradigm places different but equally demanding constraints on data scientists and application developers. Cloud vs on-premise may not always be a choice, but it becomes relevant when organizations have to prioritize what kind of expertise they want to develop.

Data Scientists, Application Developers and SysAdmins

This is more of a cultural issue than a technical one. Going from a Jupyter notebook to a packaged, deployable solution is messy at best. We do see data scientists working with developers and sysadmins more often than before - the boundaries between job descriptions are thinning - but the handover is not as frictionless as we'd like it to be.
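One concrete shape the notebook-to-package step can take is sketched below, under stated assumptions: the exploratory cells become importable, testable functions, and the "run everything" cell becomes an entry point. All names here (train, save_model, load_model) are illustrative, not from the proposal, and the "model" is a toy least-squares fit.

```python
# Hypothetical refactor of notebook cells into an importable module.
import json
import tempfile
from pathlib import Path


def train(pairs):
    """Fit y = a*x + b by least squares on (x, y) pairs (toy 'model')."""
    n = len(pairs)
    sx = sum(x for x, _ in pairs)
    sy = sum(y for _, y in pairs)
    sxx = sum(x * x for x, _ in pairs)
    sxy = sum(x * y for x, y in pairs)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return {"a": a, "b": b}


def save_model(model, path):
    """Persist the fitted parameters as JSON."""
    Path(path).write_text(json.dumps(model))


def load_model(path):
    """Reload persisted parameters."""
    return json.loads(Path(path).read_text())


def main():
    # What used to be the notebook's final cell: train, persist, reload.
    model = train([(0, 1), (1, 3), (2, 5)])
    with tempfile.TemporaryDirectory() as d:
        path = Path(d) / "model.json"
        save_model(model, path)
        return load_model(path)


if __name__ == "__main__":
    print(main())  # {'a': 2.0, 'b': 1.0}
```

The point of the refactor is that each function can now be unit-tested and reviewed by developers and sysadmins independently of the exploration that produced it.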

Speaker bio

  • Jaidev Deshpande
  • Sherin Thomas
  • Simrat Hanspal

Comments

  • Jaidev Deshpande (@jaidevd) 3 months ago (edited 3 months ago)

    • Abhishek Balaji (@booleanbalaji) Proposer Reviewer 3 months ago

      Thanks, Jaidev. I’ve updated the content.
