An Integrated Weblog Processing and Machine Learning Workflow for Building and Deploying Intent Prediction Models at Scale
Submitted by Dhanesh Padmanabhan (@dhanesh123us) on Monday, 15 June 2015
This talk shares our experience in setting up a scalable infrastructure for weblog processing and machine learning, leveraging technologies such as Hadoop, Vertica, R and Python. It will focus on implementing scalable data models for dynamic intent prediction on web and mobile channels, along with machine learning best practices.
[24]7 Inc provides proactive chat and self-service solutions for enterprises, so they can proactively engage with customers and help them with their purchase decisions or post-purchase and service-related queries. At the heart of these solutions is a predictive platform that predicts customer intent based on user journeys and past browsing history on the web and mobile channels. During the initial client deployments, data scientists hand-crafted the predictive models using machine learning tools such as R and Python, with MySQL for custom ETL and data transformation on the weblogs. This approach had two main challenges: (i) model deployments were slow, since every model required a massive engineering effort to replicate the data and model logic in the predictive platform, and (ii) scalability was a problem, since every client deployment ended up being unique.
We solved these challenges by setting up an integrated framework of processes/tools that standardizes and automates the various aspects of predictive model development:
1. Analytical data model standardization & automation – We created a batch processing framework in Hadoop that creates modeling-ready datasets. The datasets capture user journeys and historical summaries through standardized as well as configurable analytical data transformations. The resulting datasets are stored in Vertica, on which the data scientists and analytics professionals perform their day-to-day analytical tasks.
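The journey transformations described above can be sketched in plain Python. This is a minimal illustration only: the field names, the 30-minute session gap, and the derived features are assumptions, not the production Hadoop jobs or schema.

```python
from collections import defaultdict

# Illustrative session gap; the production cut-off is an assumption here.
SESSION_GAP_SECONDS = 30 * 60

def sessionize(events):
    """Group (visitor_id, timestamp, page) events into sessions per visitor.

    A new session starts whenever the gap since the visitor's previous
    event exceeds SESSION_GAP_SECONDS.
    """
    sessions = defaultdict(list)  # visitor_id -> list of sessions
    for visitor_id, ts, page in sorted(events, key=lambda e: (e[0], e[1])):
        visitor_sessions = sessions[visitor_id]
        if visitor_sessions and ts - visitor_sessions[-1][-1][0] <= SESSION_GAP_SECONDS:
            visitor_sessions[-1].append((ts, page))   # continue current session
        else:
            visitor_sessions.append([(ts, page)])     # start a new session
    return sessions

def journey_features(sessions):
    """Flatten sessions into one modeling-ready row per visitor."""
    rows = []
    for visitor_id, visits in sessions.items():
        last = visits[-1]
        rows.append({
            "visitor_id": visitor_id,
            "num_sessions": len(visits),            # historical summary
            "pages_last_session": len(last),        # current-journey signal
            "visited_checkout": any(p == "checkout" for _, p in last),
        })
    return rows
```

In production these transformations run as Hadoop batch jobs over raw weblogs, with the flattened rows loaded into Vertica; the single-process version here just makes the sessionization logic concrete.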
2. Modeling process automation – A modeling workflow has been developed capturing the various steps of modeling: exploratory data analysis (EDA), feature engineering, model training, and model validation & simulation. The workflow combines machine learning, statistical and optimization techniques to offer several candidate solutions at each step, which data scientists can fine-tune during model development. These modules have been developed as user-defined functions (UDFs) in Vertica using R, C++ and Python.
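The staged workflow above can be sketched as a chain of pluggable steps. This is a simplified sketch, not the Vertica UDF implementation: the stage functions, row fields, and the majority-class stand-in model are all illustrative assumptions.

```python
# Each modeling stage is an ordinary function, so a data scientist can
# override any one stage while the framework runs the rest.

def eda(rows):
    """Exploratory summary: row count and label balance."""
    positives = sum(r["label"] for r in rows)
    return {"n": len(rows), "positive_rate": positives / len(rows)}

def engineer(rows):
    """Feature engineering: derive features from raw journey rows."""
    return [dict(r, pages_per_session=r["pages"] / max(r["sessions"], 1))
            for r in rows]

def train(rows):
    """Train a stand-in majority-class model (a placeholder for the
    real statistical / machine learning learners)."""
    majority = int(sum(r["label"] for r in rows) * 2 >= len(rows))
    return lambda row: majority

def validate(model, rows):
    """Validation: accuracy on a holdout set."""
    correct = sum(model(r) == r["label"] for r in rows)
    return correct / len(rows)

def run_workflow(train_rows, holdout_rows):
    """Chain EDA -> feature engineering -> training -> validation."""
    summary = eda(train_rows)
    model = train(engineer(train_rows))
    accuracy = validate(model, engineer(holdout_rows))
    return {"eda": summary, "accuracy": accuracy}
```

The design point is the staging itself: because every stage consumes and produces plain rows, alternative techniques can be swapped in at any step without touching the rest of the pipeline.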
Apache Oozie, a Hadoop-based workflow scheduler, has been used heavily in the tools above. With this suite of tools, we have cut development and deployment times from three weeks to about one week, and we expect to drive these times down further. We have successfully applied the end-to-end approach to the ecommerce portal of an industrial supplier, and are on course to apply it to the ecommerce portal of a large hotel chain in the US. We will also talk about the shortcomings of this approach and how we plan to address them.
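For readers unfamiliar with Oozie, a workflow is an XML definition that chains Hadoop actions with success/failure transitions. The sketch below is purely illustrative: the action names, scripts, and structure are assumptions, not our production workflow.

```xml
<!-- Illustrative Oozie workflow: run the batch weblog transformation,
     then load the resulting datasets into Vertica. -->
<workflow-app name="weblog-model-prep" xmlns="uri:oozie:workflow:0.4">
    <start to="transform-weblogs"/>

    <action name="transform-weblogs">
        <map-reduce>
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <!-- standardized + configurable journey transformations -->
        </map-reduce>
        <ok to="load-vertica"/>
        <error to="fail"/>
    </action>

    <action name="load-vertica">
        <shell xmlns="uri:oozie:shell-action:0.2">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <exec>load_to_vertica.sh</exec>  <!-- hypothetical loader script -->
        </shell>
        <ok to="end"/>
        <error to="fail"/>
    </action>

    <kill name="fail">
        <message>Failed: ${wf:errorMessage(wf:lastErrorNode())}</message>
    </kill>
    <end name="end"/>
</workflow-app>
```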
Dhanesh Padmanabhan is Director of the Data Science Infrastructure (DSI) team within the [24]7 Innovation Labs Data Sciences Group (DSG). He is responsible for developing the analytics infrastructure and the prediction platform for DSG. He has 11.5 years of data and analytics R&D experience at companies including General Motors R&D, Hewlett-Packard Analytics, and Marketics Technologies (now WNS). He holds a Ph.D. in Mechanical Engineering from the University of Notre Dame.
LinkedIn Profile: https://in.linkedin.com/in/dhanesh123us
Slideshare Article on Modeling Workbench: http://www.slideshare.net/247incindia/enabling-big-data-analytics-with-modeling-workbench
Other Research Activities: https://www.researchgate.net/profile/Dhanesh_Padmanabhan