The Fifth Elephant 2018

The seventh edition of India's best data conference

Rajdeep Dua

@rajdeepd

Building Scalable Machine Learning Pipelines with Apache PredictionIO

Submitted Mar 30, 2018

The talk will help developers and data scientists understand how to build ML pipelines using Apache PredictionIO.
In this talk we will cover how Apache PredictionIO (an open source machine learning server built on top of a state-of-the-art open source stack) helps reduce the time from writing a proof of concept for an ML model to running a production-ready, model-serving microservice with a persisted model. We will also showcase how Apache PredictionIO lets you mix and match multiple models to produce hybrid predictions from multiple algorithms.

Outline

  1. Define Machine Learning

  2. Relationship between Data Mining, Other Fields, and Tools

    • Available Tools
      • Processing Frameworks
        • Apache Spark, Apache Hadoop
      • Algorithm Libraries
        • MLlib, Mahout
      • Data Storage
        • HBase, Cassandra, RDBMSs
  3. A Classic Recommender Example: What is Missing?

  4. A Classic Recommender Example: Beyond Prototyping
    How do you deploy a scalable service that responds to dynamic prediction queries?
    How do you persist the predictive model in a distributed environment?
    How do you make HBase, Spark, and the algorithms talk to each other?
    How should you prepare, or transform, the data for model training?
    How do you update the model with new data without downtime?
    Where should you add business logic?
    How do you make the code configurable, reusable, and maintainable?
    How do you build all of this with a separation of concerns (SoC)?

  5. A Classic Recommender Example: Apache PredictionIO
    PredictionIO is a machine learning server for building and deploying predictive engines to production in a fraction of the time. It is built on Apache Spark, MLlib, and HBase.

  6. Event Server: Collecting Data
    Example event (a minimal sketch of posting an event and querying the deployed engine appears after item 8 below)
    Engine

  7. Functions of an Engine
    A. Train predictive model(s)
    B. Respond to dynamic queries

  8. Deploying on Heroku/AWS/GCE
    Event Server and PIO Engine run as two applications connected to the same PostgreSQL backend
    Event Server has a single dyno: Web
    PIO Engine has two dynos: Web, Train
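
A minimal sketch of how a client talks to the two PredictionIO processes described in items 6-8: the Event Server collects events over REST (by default on port 7070), and a deployed engine answers prediction queries (by default on port 8000). The access key, event fields, and query fields below are placeholders modelled on the recommendation template and will differ per engine.

# Python sketch: post an event to the Event Server, then query the deployed engine.
# Hosts, ports, the access key, and the query fields are placeholder assumptions.
import requests

EVENT_SERVER = "http://localhost:7070"   # default Event Server port
ENGINE = "http://localhost:8000"         # default deployed engine port
ACCESS_KEY = "YOUR_APP_ACCESS_KEY"       # created with `pio app new <appname>`

# 1. Collect data: user u0 rated item i0 with 4 stars.
event = {
    "event": "rate",
    "entityType": "user",
    "entityId": "u0",
    "targetEntityType": "item",
    "targetEntityId": "i0",
    "properties": {"rating": 4},
}
resp = requests.post(
    EVENT_SERVER + "/events.json",
    params={"accessKey": ACCESS_KEY},
    json=event,
)
print("event accepted:", resp.status_code, resp.json())

# 2. Respond to a dynamic query: top 4 recommendations for user u0
#    (the query schema is defined by the engine template's Query class).
query = {"user": "u0", "num": 4}
resp = requests.post(ENGINE + "/queries.json", json=query)
print("predicted results:", resp.json())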

  1. Collaborative Filtering and ALS
    Collaborative Filtering
    Collaborative Filtering (CF) is a family of algorithms that exploit other users’ ratings of items (selection and purchase information can also be used), together with the target user’s history, to recommend items the target user has not rated yet.
    The assumption behind this approach is that other users’ preferences over the items can be used to recommend an item to a user who has not yet seen or purchased it.
    Matrix Factorization
    Both users and items are mapped to a joint latent factor space of dimensionality ‘f’, where a user-item interaction is modeled as an inner product in this space.
    Item i is associated with a vector q
    (where q measures the extent to which the item possesses the latent factors).
    User u is associated with a vector p
    (where p measures the extent of the user’s interest in items that score high on those factors).
    The dot product between q and p captures the interaction between user u and item i, i.e. the user’s interest in the item.
    The key to the model is finding the vectors q and p.

  2. Matrix Factorization: Alternating Least Squares (ALS)
    ALS works by iteratively solving a series of least squares regression problems. In each iteration, one of the user- or item-factor matrices is treated as fixed, while the other one is updated using the fixed factor and the rating data.
    User Factors : p
    Item Factors : q
    The factor matrix that was solved for is, in turn, treated as fixed, while the other one is updated. This process continues until the model has converged (or for a fixed number of iterations).

  3. Demo: ALS (a minimal PySpark sketch follows this outline)

  4. Summary
    Building an ML pipeline is about selecting the algorithm, then training and tuning the model. Taking it to production is key to realizing the true power of ML and AI prediction.
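
A minimal PySpark sketch of the matrix factorization and ALS ideas above (the basis of the ALS demo): fit an ALS model on toy ratings, recover the user factors p and item factors q, and confirm that the predicted interest of a user in an item is the inner product of the two vectors. The column names, rank, and toy data are illustrative assumptions.

# PySpark sketch: ALS matrix factorization on toy explicit ratings.
from pyspark.sql import SparkSession
from pyspark.ml.recommendation import ALS

spark = SparkSession.builder.appName("als-sketch").getOrCreate()

# Toy explicit ratings: (userId, itemId, rating)
ratings = spark.createDataFrame(
    [(0, 0, 4.0), (0, 1, 2.0), (1, 0, 5.0), (1, 2, 1.0), (2, 1, 3.0), (2, 2, 4.0)],
    ["userId", "itemId", "rating"],
)

# rank = f, the dimensionality of the joint latent factor space.
als = ALS(rank=2, maxIter=10, regParam=0.1,
          userCol="userId", itemCol="itemId", ratingCol="rating")
model = als.fit(ratings)

# p (user factors) and q (item factors) learned by ALS.
p = {row["id"]: row["features"] for row in model.userFactors.collect()}
q = {row["id"]: row["features"] for row in model.itemFactors.collect()}

# The model's predicted rating is the dot product of p_u and q_i.
u, i = 0, 2
score = sum(pu * qi for pu, qi in zip(p[u], q[i]))
print("predicted interest of user %d in item %d: %.3f" % (u, i, score))

# Or let the model produce the top-2 recommendations for every user.
model.recommendForAllUsers(2).show(truncate=False)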

Requirements

Internet connection, projector, microphone

Speaker bio

Rajdeep Dua has over 18 years of experience in the cloud and big data space. Currently, he leads the Developer Relations team at Salesforce India. He also works with the engineering teams at Salesforce building scalable AI services, which use Hadoop and Spark to expose big data processing tools to developers. He has worked on the advocacy team for Google’s big data tool BigQuery. He worked on the Greenplum big data platform at VMware in the developer evangelist team, and worked closely with a team on porting Spark to run on VMware’s public and private cloud as a feature set. He has taught Spark and big data at some of the most prestigious tech schools in India.

He has also presented BigQuery and Google App Engine at the W3C conference in Hyderabad (http://wwwconference.org/proceedings/www2011/schedule/www2011_Program.pdf). He has led Developer Relations teams at Google, VMware, and Microsoft, and has spoken at hundreds of other conferences on the cloud.

His contributions to the open source community are related to Docker, Kubernetes, Android, OpenStack, and Cloud Foundry. He has teaching experience in big data at IIIT Hyderabad, ISB, IIIT Delhi, and College of Engineering Pune.

His LinkedIn profile can be found at https://www.linkedin.com/in/rajdeepd.

Twitter: @rajdeepdua

Slides

https://drive.google.com/file/d/1nCeFzyOsMggMIg7kbNHaKup_w2XDKCtO/view?usp=sharing

