Optimising Model Performance Using an Automated ML Pipeline for Predicting Purchase Propensity @ Fractal Analytics
Ensemble learning is the process by which multiple machine-learning models are evaluated and combined into a single model that delivers better results than any individual model. Building these models requires experimenting not just with multiple machine-learning algorithms, but also with the various model parameters that shape each individual model.
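As a toy illustration of the "combine" step only (the production pipeline described in this talk ran on Spark; all names and numbers below are hypothetical), averaging per-model propensity scores in plain Python might look like this:

```python
# Illustrative sketch: combine purchase-propensity scores from several
# hypothetical models by simple averaging (one common ensembling choice).

def average_ensemble(model_scores):
    """model_scores: one list of per-customer scores per model."""
    n_models = len(model_scores)
    # zip(*...) pairs up the i-th score from every model.
    return [sum(scores) / n_models for scores in zip(*model_scores)]

# Hypothetical per-customer scores from three different models.
lr_scores = [0.2, 0.8, 0.6]   # e.g. logistic regression
rf_scores = [0.4, 0.9, 0.5]   # e.g. random forest
gb_scores = [0.3, 0.7, 0.4]   # e.g. gradient boosting

ensemble = average_ensemble([lr_scores, rf_scores, gb_scores])
print(ensemble)  # one averaged propensity per customer
```

Averaging is only one option; weighted voting or stacking a meta-model on top are equally valid ways to combine the base models.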
In this talk, we will share how we built an automated machine-learning pipeline to evaluate multiple machine-learning models and model parameters. The purchase-propensity model used multiple ML techniques, ranging from regression to Random-Forest-based classifiers, and helped build a machine-learning ensemble model over hundreds of millions of transaction data points. The system provided the ability to scale, both across the various modelling combinations available and with the size of the datasets involved. We will discuss how we employed Spark best practices during every step of building scalable models.
Performing Exploratory Data Analysis using Spark.
Discussion on commonly encountered issues during feature engineering.
Discussion of various classification techniques, including:
Logistic regression (experimenting with regularization parameters to avoid overfitting).
Addressing technical challenges in performing K-fold cross-validation.
Searching for optimal model parameters using grid search.
Ensemble based approaches (bagging & self-training) using Spark.
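To illustrate the logistic-regression point above: the models in the talk were built on Spark, but the effect of the regularization parameter is easy to show in a minimal, stand-alone sketch (function names and toy data here are ours, not the talk's):

```python
import math

# Minimal sketch of L2-regularised logistic regression trained by
# batch gradient descent, to show how the penalty shrinks the weights.

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, reg_lambda=0.1, lr=0.1, epochs=200):
    """xs: list of feature vectors; ys: list of 0/1 labels."""
    n_features = len(xs[0])
    w = [0.0] * n_features
    b = 0.0
    m = len(xs)
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
            err = p - y
            for j in range(n_features):
                grad_w[j] += err * x[j]
            grad_b += err
        # The reg_lambda * w[j] term is the L2 penalty gradient: larger
        # reg_lambda pulls the weights harder toward zero, which is how
        # regularization discourages overfitting.
        for j in range(n_features):
            w[j] -= lr * (grad_w[j] / m + reg_lambda * w[j])
        b -= lr * grad_b / m
    return w, b

def predict(w, b, x):
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)
```

In Spark ML the equivalent knobs are `regParam` and `elasticNetParam` on `LogisticRegression`; experimenting over those values is exactly what the grid search below automates.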
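On K-fold cross-validation: Spark ML provides a `CrossValidator`, but the underlying splitting logic is worth seeing on its own. A stdlib-only sketch (our own helper, not a Spark API):

```python
# Illustrative sketch of K-fold splitting: partition n indices into k
# contiguous test folds; each fold's complement is its training set.

def k_fold_indices(n, k):
    """Yield (train_idx, test_idx) pairs for k-fold cross-validation."""
    # Distribute the remainder so fold sizes differ by at most one.
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test_idx = list(range(start, start + size))
        train_idx = list(range(0, start)) + list(range(start + size, n))
        yield train_idx, test_idx
        start += size
```

Every record appears in exactly one test fold, so each candidate model is scored on data it never trained on; shuffling the indices first is advisable when the data is ordered.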
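Grid search pairs naturally with cross-validation: every parameter combination is scored, and the best one is kept. Spark ML exposes this via `ParamGridBuilder`; the sketch below (plain Python, hypothetical names) shows the exhaustive search itself:

```python
import itertools

# Illustrative sketch of exhaustive grid search over model parameters.

def grid_search(param_grid, evaluate):
    """Score every parameter combination; return the best one.

    param_grid: dict of parameter name -> list of candidate values.
    evaluate:   callable(params_dict) -> score (higher is better),
                e.g. mean cross-validated accuracy.
    """
    names = sorted(param_grid)
    best_params, best_score = None, float("-inf")
    for values in itertools.product(*(param_grid[n] for n in names)):
        params = dict(zip(names, values))
        score = evaluate(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical usage: a toy scoring function peaking at reg=0.1, lr=0.5.
best, _ = grid_search(
    {"reg": [0.01, 0.1, 1.0], "lr": [0.1, 0.5]},
    lambda p: -(p["reg"] - 0.1) ** 2 - (p["lr"] - 0.5) ** 2,
)
print(best)
```

Since the grid size is the product of all candidate lists, the combinations multiply quickly; evaluating them in parallel is one place where Spark's scalability pays off.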
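On the bagging half of the ensemble bullet: each round trains a model on a bootstrap sample of the data, and the final prediction is a majority vote over the trained models. A stdlib-only sketch (all helpers hypothetical; the talk's implementation used Spark):

```python
import random
from collections import Counter

# Illustrative sketch of bagging: bootstrap sampling + majority vote.

def bootstrap_sample(data, rng):
    """Draw len(data) points with replacement (one bagging round)."""
    return [rng.choice(data) for _ in range(len(data))]

def bagging_predict(models, x):
    """Majority vote over the label each bagged model predicts for x."""
    votes = Counter(model(x) for model in models)
    return votes.most_common(1)[0][0]

# Hypothetical usage: each "model" is a 1-nearest-neighbour classifier
# trained on its own bootstrap sample of (feature, label) pairs.
def train_nearest(sample):
    def model(x):
        return min(sample, key=lambda p: abs(p[0] - x))[1]
    return model

data = [(0.0, 0), (1.0, 0), (4.0, 1), (5.0, 1)]
rng = random.Random(42)
models = [train_nearest(bootstrap_sample(data, rng)) for _ in range(9)]
print(bagging_predict(models, 4.5))
```

Because each bootstrap sample omits roughly a third of the records, the bagged models disagree in useful ways, and the vote reduces variance compared with any single model.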
Padma Chitturi is a Lead Engineer at Fractal Analytics Pvt Ltd and has over five years of experience in large-scale data processing. She authored the book “Apache Spark for Data Science Cookbook”. Currently, she is part of capability development at Fractal, responsible for developing large-scale solutions to analytical problems across multiple business domains. Prior to this, she worked on a real-time processing platform for an airlines product at Amadeus Software Labs. At Impetus, she worked on realizing large-scale deep networks (Jeffrey Dean’s Google Brain work) for image classification on the big-data platform Spark. She works closely with Kafka, Spark, Storm, Cassandra, Hadoop, deep learning, computer vision and real-time streaming, and has been an open-source contributor to Apache Storm.