The Fifth Elephant 2019

Gathering of 1000+ practitioners from the data ecosystem

It's Launched! Why do I need to continuously benchmark and monitor my computer vision model?

Submitted by Nitin Gupta (@nitinguptadr) on Friday, 14 June 2019

Session type: Short talk of 20 mins
Status: Rejected


Open-source models such as ResNet, pretrained on datasets like ImageNet, have opened the door to millions of computer vision use cases. But launching an enterprise computer vision application doesn't end when the model is trained - that's just the first step. To build an end-to-end solution, one needs to understand the appropriate steps and best practices to follow.

If you are planning to build and launch a computer vision application, you need to consider what happens to the ML model after it has reached an acceptable level of accuracy and performance for your use case. How exactly are you going to architect your application software? How are you going to deploy and scale models to potentially hundreds or thousands of devices in production? How do you extract useful information from the models for retraining? Where do you store the results and metrics of the predictions? Is your application mission-critical, or can the model be run offline? How do you set up a vision ML pipeline that doesn't break the bank or require an army of engineers and computer vision experts to maintain?
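One lightweight answer to the "where do you store the results and metrics" question is to log every prediction with enough metadata to support later benchmarking and retraining. Below is a minimal sketch, not a prescribed design; all names (`log_prediction`, the record fields, the model version string) are hypothetical:

```python
import json
import time
import uuid

def log_prediction(model_version, image_id, label, confidence, latency_ms,
                   log_path="predictions.jsonl"):
    """Append one prediction record to a JSON-lines log.

    Records like these can later be aggregated into accuracy/latency
    benchmarks, or mined for low-confidence examples to re-label.
    """
    record = {
        "id": str(uuid.uuid4()),
        "timestamp": time.time(),
        "model_version": model_version,   # hypothetical version tag
        "image_id": image_id,
        "label": label,
        "confidence": confidence,
        "latency_ms": latency_ms,
    }
    with open(log_path, "a") as f:
        f.write(json.dumps(record) + "\n")
    return record

# Example: low-confidence predictions are natural retraining candidates
rec = log_prediction("resnet50-v2", "cam3/frame_0042.jpg", "forklift", 0.62, 18.4)
needs_review = rec["confidence"] < 0.7
```

An append-only log like this keeps the serving path cheap while leaving a complete trail for offline analysis; in production you would likely swap the file for a message queue or metrics store.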

In this presentation, Dori will focus specifically on the challenges every enterprise application developer faces when building a computer vision application, and on how to set the enterprise up for success. The talk will draw on examples of how typical software pipelines have been set up and demonstrate best practices for quickly building machine learning computer vision pipelines that can scale to millions of deployments. It will also cover how to effectively benchmark and monitor a machine learning model in order to continuously improve model quality and system performance.


Evolution of Computer Vision

  • Introduction to how computer vision models have evolved
  • Accuracy & performance improvements over the past few years
  • Acceptance of deep learning in enterprise use cases

Why Now?

  • Discussion of the opportunity that Open Source has created for deep learning
  • The overwhelming diversity of choice that open source has created

The Enterprise Challenge

  • Why do enterprises still struggle to productize a single model?
  • What questions need to be answered to create an AI application?
  • What tools & infrastructure are needed?

Setting Up a Robust Pipeline

  • A formula for success to set up a robust ML development pipeline
  • Why benchmarking and monitoring are key steps of the ML development lifecycle
  • Model/system metrics vs. business metrics - which matter?
  • How to extract value and useful data from a model in production
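The outline's benchmarking-and-monitoring point can be made concrete with a drift check: compare a live window of prediction confidences against a benchmark window established at validation time. This is a simplified sketch under assumed inputs; the function name, threshold, and numbers are all illustrative:

```python
from statistics import mean

def check_drift(baseline_confidences, live_confidences, max_drop=0.10):
    """Flag when mean prediction confidence drops versus the baseline.

    A crude but useful monitor: a sustained drop in confidence often
    signals input drift (new lighting, camera angles, or object types)
    and is a cue to sample frames for re-labeling and retraining.
    """
    baseline = mean(baseline_confidences)
    live = mean(live_confidences)
    return {
        "baseline": baseline,
        "live": live,
        "drift_alert": (baseline - live) > max_drop,
    }

# Benchmark window from validation vs. a live production window (made-up values)
status = check_drift([0.91, 0.88, 0.93, 0.90], [0.74, 0.70, 0.78, 0.72])
```

Confidence is only a proxy for accuracy, so in practice a monitor like this would be paired with periodic re-benchmarking on freshly labeled production samples.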


Come with an open mind to learn.

Speaker bio

Nitin Gupta helped found Dori with the vision of enabling the applied AI market with a platform that accelerates the adoption of AI/ML across major vertical industries. He has 12+ years of experience leading product/architecture in complex edge, mobile, and computer vision systems.

Prior to Dori, Nitin was a product lead at Google responsible for commercializing AI/ML systems for VRCore/ARCore products within the Daydream team and also led teams at Pebble and Qualcomm. His Ph.D. work involved novel search algorithms to achieve timing closure for embedded SOC designs.





  • Abhishek Balaji (@booleanbalaji) 11 months ago

    Hi Nitin,

    Thank you for submitting a proposal. We need to see detailed slides and a preview video to evaluate your proposal. Your slides must cover the following:

    • Problem statement/context, which the audience can relate to and understand. The problem statement has to be a problem (based on this context) that can be generalized for all.
    • What were the tools/frameworks available in the market to solve this problem? How did you evaluate these, and what metrics did you use for the evaluation? Why did you pick the option that you did?
    • Explain what the situation was before the solution you picked/built and how it changed after implementing it. Show before-after scenario comparisons and metrics.
    • What compromises/trade-offs did you have to make in this process?
    • What is the one takeaway that you want participants to go back with at the end of this talk? What is it that participants should learn/be cautious about when solving similar problems?

    We need your updated slides and preview video by Jun 27, 2019 to evaluate your proposal. If we do not receive an update, we’d be moving your proposal for evaluation under a future event.

    • Nitin Gupta (@nitinguptadr) Proposer 11 months ago

      Sure I will update with some more details this week. Thanks.

  • Abhishek Balaji (@booleanbalaji) 11 months ago

    Marked as rejected since the proposer hasn't responded to comments/updated content before the deadline. Will be considered for a future event if content is updated.
