Going beyond what and asking why: Explainability in Machine/Deep Learning
Section: Full talk
Technical level: Intermediate
As machine learning methods become increasingly embedded in technologies ranging from high-end aerospace systems to low-end consumer products, there is a gradual but steady increase in the demand for explaining the decisions made by machine learning algorithms. DARPA launched a large initiative in 2016 to further the progress of explainable AI methods, underscoring the need for a concerted effort in this domain. This talk will present an introductory overview of the efforts in machine learning so far in this direction, as well as our recent work in this domain. One of our recent efforts, Grad-CAM++, presented at WACV 2018, provides a methodology to understand what a Convolutional Neural Network (CNN) looks at in an image when making a particular class prediction. In particular, it showed superior performance to competing methods when multiple objects are present in the scene, and also provides more holistic visual explanations (https://arxiv.org/abs/1710.11063). This talk will also present another of our recent efforts: explaining the decisions of a Recurrent Neural Network (RNN) for time series analysis using foundations of causality.
- Motivation: Need for explainability in machine learning
- Where do machine learning algorithms stand today in their explainability? A Review
- Overview of efforts towards explainability in machine learning (focus on deep learning)
- Our work on Grad-CAM++: Towards generalized visual explanations in CNNs
- Our work on using causality for explanations in RNNs for time series analysis
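To give a flavor of the visual-explanation methods covered in the talk, here is a minimal sketch of the Grad-CAM-style weighting scheme that Grad-CAM++ generalizes, using synthetic activations and gradients in NumPy. This is an illustrative sketch, not the authors' implementation: the function name and the random inputs are assumptions for demonstration.

```python
import numpy as np

def grad_cam_map(feature_maps, gradients):
    """Grad-CAM-style saliency map.

    feature_maps: (K, H, W) activations of the last convolutional layer
    gradients:    (K, H, W) gradient of the class score w.r.t. those activations
    """
    # Per-channel importance weights: global average of the gradients (Grad-CAM).
    # Grad-CAM++ replaces this with a pixel-wise weighted average of positive
    # partial derivatives, which handles multiple object instances better.
    weights = gradients.mean(axis=(1, 2))                              # (K,)
    # Weighted combination of feature maps; ReLU keeps only positive evidence.
    cam = np.maximum((weights[:, None, None] * feature_maps).sum(axis=0), 0.0)
    # Normalize to [0, 1] for visualization (guard against an all-zero map).
    if cam.max() > 0:
        cam = cam / cam.max()
    return cam

# Synthetic example: 8 feature maps of size 7x7 with random gradients.
rng = np.random.default_rng(0)
A = rng.random((8, 7, 7))            # stand-in conv activations
G = rng.standard_normal((8, 7, 7))   # stand-in class-score gradients
heatmap = grad_cam_map(A, G)
print(heatmap.shape)  # (7, 7)
```

In practice the heatmap is upsampled to the input-image resolution and overlaid on the image to show which regions drove the class prediction.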
Vineeth N Balasubramanian is an Associate Professor in the Department of Computer Science and Engineering at the Indian Institute of Technology, Hyderabad. His research interests include deep learning, machine learning, computer vision, non-convex optimization, and real-world applications in these areas. He has around 60 research publications in premier peer-reviewed venues including CVPR, ICCV, KDD, ICDM, IEEE TPAMI, and ACM MM, 5 patents under review, and an edited book on Conformal Prediction, a recent development in machine learning. His PhD dissertation at Arizona State University (completed in 2010), on the Conformal Predictions framework, was nominated for the Outstanding PhD Dissertation award at the Department of Computer Science. He was also awarded Gold Medals for Academic Excellence in his Bachelor's program in Mathematics in 1999 and his Master's program in Computer Science in 2003. He is an active reviewer/contributor at conferences such as ICCV, IJCAI, ACM MM, and ACCV, as well as journals including IEEE TNNLS, Machine Learning, and Pattern Recognition. He is a member of the IEEE and ACM, and currently serves as the Secretary of the AAAI India Chapter.
How organizations can leverage 'Large Scale Graph Based Analytics' to derive value from their data
An organization's data is like a living organism - growing, expanding and evolving over time to form complicated and connected systems. This is similar to biological evolution, where life forms evolved from simple unicellular structures to more and more complex multicellular organisms. And as organizations compile more and more data, it is crucial for them to understand that the value of any data…