Machine Learning Operations - aka MLOps - is gaining immense popularity. Many in the industry dismissed it as just another wave of terminology that would not last. Yet here we are, with another conference dedicated to MLOps.

The biggest driver for MLOps has been the fact that a large portion (70-90%, according to various reports) of machine learning models never make it to production, or fail well before they get there. It is no longer a question of whether machine learning models are capable - a decade of research and constant breakthroughs has settled that - but a question of business viability and impact. It is the economics of machine learning that has pushed MLOps to centre stage.

In the previous episode of the MLOps conference, we had practitioners talking about:

  • The economics of their machine learning problems
  • The platforms they have built, or work with, to reduce the cost of experiments and to evaluate models in production as data and users keep changing, thereby requiring constant supervision
  • Fairness and bias in machine learning models, and what one can do to keep them in check
  • Technologies and tools designed and built to help the MLOps ecosystem flourish, such as feature stores, model versioning and more.

In this edition, we have taken a slightly different approach. Most conferences look for new speakers and new content. In real life, however, practitioners do not jump on new bandwagons every few months. There is continuity in their systems. And as a system evolves, hypotheses are validated or discarded, which has a profound impact on design and thinking, especially in machine learning and machine learning operations. We want to bring this real-world flavour to the mix by:

  • Inviting a few speakers from our previous edition to talk about how their systems have evolved in the six months since they spoke at the event. What can we learn from their wins and discarded hypotheses?
  • Hosting panel discussions on learning from failures in MLOps.
  • Hosting panel discussions on how to staff MLOps teams and coordinate between data science and engineering teams.

Usually, we are trained to be biased towards learning from success. In engineering and machine learning especially, systems, processes and thinking are often inspired by the successes of unicorn companies. This is not wrong in any way, as there is proof that it works and provides immense value. However, it is not the only way. We can learn a lot from looking at failures (in our personal opinion, a lot more than from looking at success).

As Melvin Conway observed, technical system design is directly influenced by the communication patterns within a company. So how does one set up teams for success? In one of the panel discussions, we will look at how teams can be staffed and how ways of working can be put in place, both of which have a big impact on communication and machine learning systems design.

It is an exciting episode that is very close to our hearts.

Who should participate in the MLOps conference?

  1. Data/MLOps engineers who want to learn about state-of-the-art tools and techniques.
  2. Data scientists who want a deeper understanding of model deployment/governance.
  3. Architects who are building ML workflows that scale.
  4. Tech founders who are building products that require ML or building developer productivity products for ML.
  5. Product managers who are seeking to learn about the process of building ML products.
  6. Directors, VPs and senior tech leadership who are building ML teams.

Contact information: Join The Fifth Elephant Telegram group at https://t.me/fifthel or follow @fifthel on Twitter. For inquiries, contact The Fifth Elephant at fifthelephant.editorial@hasgeek.com or call 7676332020.

Hosted by

The Fifth Elephant - known as one of the best data science and machine learning conferences in Asia - has transitioned into a year-round forum for conversations about data and ML engineering; data science in production; and data security and privacy practices.
