Wait, I can explain this! (ML models explaining their predictions)
Today ML/AI is being used in mission-critical applications. However, it is still difficult for a human being to trust a black-box ML algorithm. Wouldn't it be cool if an algorithm could also explain why it predicted a particular result, and thereby strengthen its voice? That's exactly what this talk is about. I'll walk you through how we implemented a model explainer for ZOHO's ML suite, and the story of pushing it into production.
- Motivation behind the problem statement
- Local & Interpretable explanations
- Overall system design
- Case study - Our churn predictor
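To make the "local & interpretable explanations" item concrete: one common approach is to fit a simple, weighted linear surrogate around a single prediction and read its coefficients as feature attributions (the core idea behind LIME). The sketch below is a minimal illustration of that technique, not the actual ZOHO implementation; the function `local_explanation` and all its parameters are hypothetical names for this example.

```python
import numpy as np

def local_explanation(predict_fn, instance, n_samples=500, kernel_width=0.75, seed=0):
    """Fit a weighted linear surrogate around `instance` (LIME-style sketch).

    Perturbs the instance with Gaussian noise, weights the perturbed
    samples by proximity to the instance, and solves weighted least
    squares; the resulting coefficients are the local attributions.
    """
    rng = np.random.default_rng(seed)
    X = instance + rng.normal(scale=0.5, size=(n_samples, instance.size))
    y = predict_fn(X)                                   # query the black box
    d = np.linalg.norm(X - instance, axis=1)
    w = np.exp(-(d ** 2) / kernel_width ** 2)           # proximity kernel
    A = np.hstack([X, np.ones((n_samples, 1))])         # add intercept column
    W = np.sqrt(w)[:, None]                             # weighted least squares
    coef, *_ = np.linalg.lstsq(A * W, y * W.ravel(), rcond=None)
    return coef[:-1]                                    # per-feature attribution

# Toy black-box model whose output depends mostly on feature 0.
black_box = lambda X: 3.0 * X[:, 0] + 0.2 * X[:, 1]
weights = local_explanation(black_box, np.array([1.0, 2.0]))
```

Because the toy model here is itself linear, the surrogate recovers its coefficients (roughly 3.0 and 0.2); for a genuinely nonlinear model, the weights describe only the local behaviour around the chosen instance.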
I own the Machine Learning / Deep Learning product stack at Zoho Corporation. I have shipped high-impact, full-stack ML/DL releases reaching over a million users, and have scaled and tuned the platform accordingly!