The Fifth Elephant 2015

A conference on data, machine learning, and distributed and parallel computing

Machine Learning, Distributed and Parallel Computing, and High-Performance Computing are the themes for this year’s edition of The Fifth Elephant.

The deadline for submitting a proposal is 15th June 2015.

We are looking for talks and workshops from academics and practitioners who are in the business of making sense of data, big and small.

Track 1: Discovering Insights and Driving Decisions

This track is about general, novel, fundamental, and advanced techniques for making sense of data and driving decisions from data. This could encompass applications of the following ML paradigms:

  • Statistical Visualizations
  • Unsupervised Learning
  • Supervised Learning
  • Semi-Supervised Learning
  • Active Learning
  • Reinforcement Learning
  • Monte Carlo techniques and probabilistic programming
  • Deep Learning

These techniques apply across various data modalities, including multivariate data, text, speech, time series, images, video, and transactions.

Track 2: Speed at Scale

This track is about tools and processes for collecting, indexing, and processing vast amounts of data. The theme includes:

  • Distributed and Parallel Computing
  • Real Time Analytics and Stream Processing
  • MapReduce and Graph Computing frameworks
  • Kafka, Spark, Hadoop, MPI
  • Stories of parallelizing sequential programs
  • Cost/Security/Disaster Management of Data

Commitment to Open Source

HasGeek believes in open source as the binding force of our community. If you are describing a codebase for developers to work with, we’d like it to be available under a permissive open source license. If your software is commercially licensed or available under a combination of commercial and restrictive open source licenses (such as the various forms of the GPL), please consider picking up a sponsorship. We recognize that there are valid reasons for commercial licensing, but ask that you support us in return for giving you an audience. Your session will be marked on the schedule as a sponsored session.

Workshops

If you are interested in conducting a hands-on session on any of the topics falling under the themes of the two tracks described above, please submit a proposal under the workshops section. We also need you to tell us about your past experience in teaching and/or conducting workshops.

Hosted by

The Fifth Elephant, known as one of the best data science and machine learning conferences in Asia, has transitioned into a year-round forum for conversations about data and ML engineering, data science in production, and data security and privacy practices.

Dhanesh Padmanabhan

@dhanesh123us

An Integrated Weblog Processing and Machine Learning Workflow for Building and Deploying Intent Prediction Models at Scale

Submitted Jun 15, 2015

This talk shares our experience in setting up a scalable infrastructure for weblog processing and machine learning, leveraging technologies such as Hadoop, Vertica, R, and Python. The talk will focus on implementing scalable data models for dynamic intent prediction on web and mobile channels, and on machine learning best practices.

Outline

[24]7 Inc provides proactive chat and self-service solutions for enterprises, so they can proactively engage with customers and help them with their purchase decisions or post-purchase and service-related queries. At the heart of these solutions is a predictive platform that predicts the intent of customers based on user journeys and past browsing history on web and mobile channels. During the initial client deployments, data scientists hand-crafted the predictive models using machine learning tools such as R and Python, with MySQL for custom ETL and data transformation on the weblogs. The main challenges with this approach were: (i) model deployments were slow, since every model required a large engineering effort to replicate the data and model logic in the predictive platform, and (ii) scalability was a problem, since every client deployment ended up highly customized.
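
For context, a hand-crafted intent model of the kind described above might look like the minimal scikit-learn sketch below. The feature names, file path, and target definition are illustrative assumptions, not [24]7’s actual schema or code.

```python
# Hypothetical sketch of a hand-crafted intent model. Feature names, file
# paths, and the target column are illustrative assumptions only.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Session-level features assumed to be produced by custom ETL over weblogs.
sessions = pd.read_csv("sessions.csv")
features = ["pages_viewed", "time_on_site_sec", "cart_adds", "prior_visits"]
target = "engaged"  # assumed: 1 if the visitor accepted chat / converted

X_train, X_test, y_train, y_test = train_test_split(
    sessions[features], sessions[target], test_size=0.2, random_state=42
)

model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

# Probability of intent for held-out sessions; downstream logic could use
# this score to decide whether to offer a proactive chat invitation.
intent_scores = model.predict_proba(X_test)[:, 1]
print("Mean predicted intent:", round(float(intent_scores.mean()), 3))
```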

We solved these challenges by setting up an integrated framework of processes/tools that standardizes and automates the various aspects of predictive model development:

  1. Analytical data model standardization and automation – We created a batch-processing framework in Hadoop that produces modeling-ready datasets. The datasets capture user journeys and historical summaries through standardized as well as configurable data transformations. The resulting datasets are stored in Vertica, where data scientists and analytics professionals perform their day-to-day analytical tasks.
  2. Modeling process automation – A modeling workflow has been developed that covers the main steps of modeling: exploratory data analysis (EDA), feature engineering, model training, and model validation and simulation. The workflow combines machine learning, statistical, and optimization techniques to propose candidate solutions at each step, which data scientists can fine-tune during model development. These modules are implemented as user-defined functions (UDFs) in Vertica using R, C++, and Python.
  3. Model deployment automation – We have also developed a model publishing workflow that converts data transformations and model output into real-time prediction artifacts such as Predictive Model Markup Language (PMML), MVEL, and JavaScript. The deployment also accounts for A/B experimentation setups and other business-condition specifications; a minimal sketch of this publishing step follows this list.
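
As an illustration of the third step, the sketch below exports a trained scikit-learn model to PMML using the open-source sklearn2pmml package. The library choice and feature schema are assumptions for illustration, not the in-house publishing workflow described above.

```python
# Hypothetical sketch of publishing a model as PMML. The sklearn2pmml
# package (JPMML) is an assumed stand-in for the in-house workflow; it
# requires a Java runtime to perform the conversion.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn2pmml import sklearn2pmml
from sklearn2pmml.pipeline import PMMLPipeline

sessions = pd.read_csv("sessions.csv")  # assumed modeling-ready dataset
features = ["pages_viewed", "time_on_site_sec", "cart_adds", "prior_visits"]

pipeline = PMMLPipeline([("classifier", LogisticRegression(max_iter=1000))])
pipeline.fit(sessions[features], sessions["engaged"])

# Serialize the fitted pipeline to PMML so a real-time prediction platform
# can score sessions without depending on the Python training environment.
sklearn2pmml(pipeline, "intent_model.pmml", with_repr=True)
```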

Apache Oozie, a Hadoop-based workflow technology, is used heavily across these tools. With this suite of tools, we have cut development and deployment times from three weeks to about a week, and we expect to drive them down further. We have successfully applied the end-to-end approach to the e-commerce portal of an industrial supplier and are on course to apply it to the e-commerce portal of a large hotel chain in the US. We will also discuss the shortcomings of this approach and how we plan to address them.
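
To make the orchestration concrete, the sketch below submits a workflow to Oozie’s standard REST job-submission endpoint from Python. The host, user, and HDFS application path are made-up values; the actual workflow definitions and properties used by the pipeline described above are not shown here.

```python
# Hypothetical sketch: submitting a workflow job via the Oozie REST API.
# The host name, user, and HDFS paths are assumptions for illustration.
import requests

OOZIE_JOBS_URL = "http://oozie-host.example.com:11000/oozie/v1/jobs"

# Standard Hadoop-style XML configuration; oozie.wf.application.path points
# at the HDFS directory containing the workflow.xml that chains the
# dataset-build, modeling, and publishing actions.
config = """<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <property>
    <name>user.name</name>
    <value>datascience</value>
  </property>
  <property>
    <name>oozie.wf.application.path</name>
    <value>hdfs:///apps/intent-pipeline</value>
  </property>
</configuration>"""

# action=start submits the job and starts it immediately.
resp = requests.post(
    OOZIE_JOBS_URL,
    params={"action": "start"},
    data=config,
    headers={"Content-Type": "application/xml;charset=UTF-8"},
)
resp.raise_for_status()
print("Submitted Oozie job:", resp.json()["id"])
```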

Speaker bio

Dhanesh Padmanabhan is Director of the Data Science Infrastructure (DSI) team within the [24]7 Innovation Lab’s Data Sciences Group (DSG). He is responsible for developing the analytics infrastructure and the prediction platform for DSG. He has 11.5 years of data and analytics R&D experience at companies including General Motors R&D, Hewlett-Packard Analytics, and Marketics Technologies (now WNS). He holds a Ph.D. in Mechanical Engineering from the University of Notre Dame.

LinkedIn profile: https://in.linkedin.com/in/dhanesh123us
SlideShare article on the Modeling Workbench: http://www.slideshare.net/247incindia/enabling-big-data-analytics-with-modeling-workbench
Other research activities: https://www.researchgate.net/profile/Dhanesh_Padmanabhan

