Submissions
MLOps Conference

On DataOps, productionizing ML models, and running experiments at scale.


ML workflows and processes are critical for enabling rapid prototyping and deployment of ML/AI models in any organization. This conference is a platform to present and discuss such workflows and patterns that drive data-driven organizations to operate at scale.

We are accepting experiential talks and written content on the following topics:

  1. ML development workflows.
  2. ML deployment frameworks.
  3. Data lineage.
  4. Model lineage.
  5. Model ethics/bias testing.
  6. A/B testing frameworks.
  7. Model governance.
  8. Explainability/interpretability of models at runtime.
  9. Impact of adopting an MLOps mindset in product organizations.
  10. DataOps workflows.
  11. DataOps frameworks.
  12. Alerting, monitoring and managing models in production.
  13. Growing and managing data teams.
  14. MLOps in research.
  15. Deployment and infrastructure for machine learning.
  16. ROI (Return on Investment) for MLOps.

Who should speak?

  1. MLOps engineers who build and maintain ML workflows and deploy ML models.
  2. Data engineers who build production-scale data pipelines, feature stores, model dashboards, and model-maintenance tooling.
  3. Tech leaders/engineers/scientists/product managers of companies who have built tools and products for ML productivity.
  4. Tech leaders/engineers/scientists/product managers of companies who have built tools, products, and processes for DataOps to support ML.
  5. Tech leaders/engineers/scientists/product managers with experience of products that failed to make a mark in the market due to ML failures.
  6. Investors active in the ML productivity tools and frameworks landscape.
  7. Privacy/ethics stakeholders involved in model governance and testing for ethics/bias.

Content can be submitted in the form of:

  • 15 minute talks
  • 30 minute talks
  • 1,000 word written articles

All content will be peer-reviewed by practitioners from industry.

Make a submission

Submissions close 14 Jul 2021, 11:00 PM

Upendra Singh

Privacy Attacks in Machine Learning Systems - Discover, Detect and Defend

My name is Upendra Singh. I work at Twilio as an Architect. In this talk, I would like to shed some light on a new kind of attack machine learning systems are facing nowadays: privacy attacks. During the talk, we will explain and demonstrate how to discover, detect, and defend against privacy-related vulnerabilities in our machine learning models. We will also explain why it is so critic…
  • 22 comments
  • Submitted
  • 17 Apr 2021

Haridas N

The Process of Open-Sourcing Your ML Service

Pic2Card is an open-source ML service that helps create AdaptiveCards from an image. We recently contributed this service to AdaptiveCards, an open-source card-authoring framework from Microsoft.
  • 12 comments
  • Submitted
  • 17 Apr 2021
Venkata Pingali

Past and Future of Feature Stores

Audience level: Intermediate. Nature: Conceptual. Scribble has built and operated feature stores for companies for the past few years. This is a perspective talk on why feature stores came about, what is being built today, and what we foresee over the next few years.
  • 8 comments
  • Submitted
  • 11 May 2021
Venkata Pingali

Mid-Market Feature Stores: What's the Big Deal?

The majority of feature store products and discussions follow a specific pattern: a large(ish) company, clean interfaces, and large volumes of data. They are very impressive but not a good fit for mid-market enterprises. Scribble has built and operated feature stores for mid-market enterprises for the past few years. That experience has led to a different arch…
  • 1 comment
  • Submitted
  • 11 May 2021

Vishal Gupta

Jupyter to Jupiter : Scaling multi-tenant ML Pipelines

A brief talk summarising the journey of an ML feature from a Jupyter Notebook to production. At Freshworks, given the diverse pool of customers using our products, each feature has dedicated models for each account, churning out millions of predictions every hour. This talk will cover the different tools and measures we’ve used to scale our ML products. Additionally, I’ll also be touching up…
  • 8 comments
  • Submitted
  • 22 May 2021
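The per-account setup the abstract above describes — a dedicated model per account, serving predictions at scale — can be sketched loosely as a routing layer. Everything below (class, account names, fallback behaviour) is a hypothetical illustration, not Freshworks' actual implementation:

```python
# Hypothetical sketch: one trained model per account, with a router that
# dispatches each prediction request to the right model and falls back
# to a shared global model for accounts that do not yet have their own.
class ModelRouter:
    def __init__(self, global_model):
        self.global_model = global_model   # fallback for unknown accounts
        self.models = {}                   # account_id -> model

    def register(self, account_id, model):
        """Attach a dedicated model to one account."""
        self.models[account_id] = model

    def predict(self, account_id, features):
        """Route the request to the account's model, or the global fallback."""
        model = self.models.get(account_id, self.global_model)
        return model(features)

router = ModelRouter(global_model=lambda x: 0.0)
router.register("acct-1", lambda x: sum(x))
print(router.predict("acct-1", [1, 2, 3]))    # dedicated model: 6
print(router.predict("acct-new", [1, 2, 3]))  # falls back to global: 0.0
```

In a real multi-tenant system, the per-account models would live in a model registry or object store rather than in memory, but the routing decision is the same.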

Gaetan Castelein

Using feature stores to build a fraud model

Feature stores enable companies to make the difficult leap from research to production machine learning. At their best, feature stores allow you to define new features, automate the data pipelines to process feature values, and serve data for training and online inference. You can quickly and reliably serve features to your production models so your customers aren’t waiting for predictions.
  • 2 comments
  • Submitted
  • 02 Jun 2021
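The abstract above names the three core feature store capabilities: defining features, automating the pipelines that compute them, and serving values for inference. A minimal sketch of that interface — the class and method names are hypothetical, not any particular product's API:

```python
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

# Hypothetical minimal feature store: register feature definitions,
# materialize values from raw records, and serve them with a
# low-latency lookup at inference time.
@dataclass
class FeatureStore:
    _definitions: Dict[str, Callable[[dict], Any]] = field(default_factory=dict)
    _online: Dict[str, Dict[str, Any]] = field(default_factory=dict)

    def define(self, name: str, fn: Callable[[dict], Any]) -> None:
        """Register a named feature as a function of a raw record."""
        self._definitions[name] = fn

    def materialize(self, entity_id: str, record: dict) -> None:
        """Run every feature pipeline for one entity; cache for serving."""
        self._online[entity_id] = {
            name: fn(record) for name, fn in self._definitions.items()
        }

    def serve(self, entity_id: str) -> Dict[str, Any]:
        """Fetch precomputed feature values for online inference."""
        return self._online[entity_id]

store = FeatureStore()
store.define("txn_count_24h", lambda r: len(r["transactions"]))
store.define("avg_amount", lambda r: sum(r["transactions"]) / len(r["transactions"]))
store.materialize("user-42", {"transactions": [10.0, 30.0]})
print(store.serve("user-42"))  # {'txn_count_24h': 2, 'avg_amount': 20.0}
```

A production store would additionally back `materialize` with scheduled batch/streaming jobs and keep a separate offline copy for generating training sets, which the in-memory sketch omits.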

Sachin Nagargoje

Tackling fraudsters in Email Communication at Twilio using Machine Learning

My name is Sachin Nagargoje. I work at Twilio as a Staff Data Scientist. In this talk, I would like to shed some light on the kinds of attacks we are facing at Twilio nowadays, and how we are tackling them in innovative ways with Machine Learning techniques. I want to showcase the challenges we face and what we do to catch such unwanted communicatio…
  • 5 comments
  • Submitted
  • 02 Jun 2021

Monojit Choudhury

Rethinking Linguistic Diversity and Inclusion in the Context of Technology

Natural Language Processing is undergoing a paradigm shift right now; the models today are ever more powerful, capable, and accurate. Thus, it looks like we are close to achieving the holy grail of AI. However, these assertions are true only for a handful of the world’s languages, which have enough language data to train powerful and large models. What about the rest of the languages? In this talk,…
  • 3 comments
  • Submitted
  • 11 Jun 2021
Sandya Mannarswamy

Opening the NLP Blackbox - Analysis, Evaluation and Testing of NLP Models

Rapid progress in NLP research has seen a swift translation to real-world commercial deployment. While a number of success stories of NLP applications have emerged, failures to translate scientific progress in NLP into real-world software have also been considerable. Evaluation of NLP models is often limited to held-out test-set accuracy on a handful of datasets, and analysis of NLP models is oft…
  • 1 comment
  • Submitted
  • 13 Jun 2021

Milecia McGregor

Tuning Hyperparameters with DVC Experiments

When you start exploring multiple model architectures with different hyperparameter values, you need a way to iterate quickly. There are many ways to handle this, but all of them take time, and you might not be able to return to a particular point to resume or restart training.
  • 3 comments
  • Submitted
  • 14 Jun 2021

swaroopch

Maintaining Machine Learning Model Accuracy Through Monitoring

Machine Learning models begin to lose accuracy as soon as they are put into production. At DoorDash, we implemented a robust monitoring system to diagnose this issue and maintain the accuracy of our forecasts.
  • 3 comments
  • Confirmed
  • 15 Jun 2021
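The kind of monitoring the abstract above describes can be sketched as a rolling check on forecast error. This is a hypothetical illustration, not DoorDash's system; the window size and threshold below are arbitrary placeholders:

```python
from collections import deque

# Hypothetical drift monitor: keep a rolling window of absolute forecast
# errors and flag drift when the mean error crosses a threshold that
# would, in practice, be calibrated at deployment time.
class AccuracyMonitor:
    def __init__(self, window: int, threshold: float):
        self.errors = deque(maxlen=window)  # most recent |error| values
        self.threshold = threshold

    def observe(self, predicted: float, actual: float) -> bool:
        """Record one prediction/outcome pair; return True if drift is detected."""
        self.errors.append(abs(predicted - actual))
        mean_error = sum(self.errors) / len(self.errors)
        return mean_error > self.threshold

monitor = AccuracyMonitor(window=100, threshold=5.0)
# Errors of 1, 2, then 20: the last observation pushes mean error past 5.0.
alerts = [monitor.observe(p, a) for p, a in [(10, 11), (10, 12), (10, 30)]]
print(alerts)  # [False, False, True]
```

A real deployment would feed `observe` from a stream joining predictions with ground-truth outcomes as they arrive, and route the boolean into an alerting pipeline.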

Schaun Wheeler

Story-telling as a method for building production-ready machine-learning systems

This presentation is about the essential but overlooked role storytelling plays in machine learning productionization. Although often assumed to be at best a tangential concern, machine learning systems only become successful by building trust in the model, the data, and the system in which the model and data operate. That trust-building happens through storytelling. Most often,…
  • 2 comments
  • Submitted
  • 16 Jun 2021

Neha Gupta

Automatic rollbacks for MLOps deployments in Kubernetes

While there is plenty of tooling to automate deployment of ML models, most of it requires manually written rules for verifying deployments in production.
  • 2 comments
  • Submitted
  • 18 Jun 2021

Shilpa Shivapuram

Brands' Dilemma: Personalization at the cost of privacy

We are in an era where we are so well connected virtually that we leave behind a humongous digital footprint: what we buy from marketplaces, our app purchases, our entertainment preferences, and much more. These footprints are patterns of our behaviour, which can be private or public. Brands are investing heavily in this data to understand and cater to the…
  • 2 comments
  • Submitted
  • 16 Apr 2021

Mario Rozario

ML Model Scaling with Teradata Vantage

My name is Mario Rozario and I work at Teradata. As part of this talk, I will briefly cover our latest product, Vantage, and some of its inherent strengths. One of the biggest benefits of Vantage is the ability to scale Machine Learning workloads on our parallel technology. With Vantage, users will also be able to take ML workloads that they build elsewhere and scale them here.…
  • 8 comments
  • Rejected
  • 16 Apr 2021

Hosted by

The Fifth Elephant - known as one of the best #datascience and #machinelearning conferences in Asia - is transitioning into a year-round forum for conversations about data and ML engineering; data science in production; and data security and privacy practices.

Supported by

Scribble Data builds feature stores for data science teams that are serious about putting models (ML, or even sub-ML) into production. The ability to systematically transform data is the single biggest determinant of how well these models do. Scribble Data streamlines the feature engineering proces…
