Explainable AI: Behind the Scenes
Submitted by Manjunath (@manjunath-123) on Tuesday, 30 April 2019
Section: Tutorials | Technical level: Intermediate | Session type: Tutorial
As AI-based systems proliferate into all applications of daily life, better insight into how they work is much sought after. The rationale behind a decision produced by an AI system is often not well understood, nor is it known which parts or features of the input influenced the decision, and to what extent. This presentation provides insights that demystify the black box of AI models.
Research community and students: The tutorial offers insight into the working of deep learning models that otherwise act as black boxes. The different techniques for generating heat maps are detailed with examples, as is the mechanism for generating explanation text.
AI & ML solution providers: They can pick up the steps in the workflow of explanation generation and adopt them when designing solutions to deep learning based business problems.
Architects of products: Layer-wise and neuron-wise visualization of the activations in the network shows how important each neuron is to the decision, and the rest of the network may be pruned accordingly. Test cases are designed to activate the neurons under consideration. These concepts are illustrated with a case study on identifying misclassifications in OCR.
Software vendors: The different components of an explanation, such as the heat map, the text explanation, and misclassification identification, open up multiple options for a software vendor to offer them as a service. These options are illustrated with examples.
Design engineers: The tutorial provides a platform to cross-check the design and debug the code while implementing the solution.
Tool vendors: It helps in developing the tools required to support the end-to-end workflow of an explainable solution.
Introduction: background, positioning of the problem, examples
Relevance-based explanation: heat-map generation with examples, layer-wise relevance propagation, sensitivity analysis
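To give a flavour of the two relevance techniques named above, the following sketch implements the LRP-epsilon redistribution rule and gradient-based sensitivity analysis for a tiny one-hidden-layer ReLU network. The weights and the input are illustrative assumptions, not taken from the tutorial; a real session would apply the same rules layer by layer to a trained model.

```python
import numpy as np

def lrp_layer(a, W, R_out, eps=1e-9):
    """LRP-epsilon rule: redistribute the relevance R_out of a layer's
    outputs onto its inputs a, where the outputs are z = W @ a."""
    z = W @ a
    s = R_out / (z + eps * np.sign(z))
    return a * (W.T @ s)

def sensitivity(x, W1, w2):
    """Sensitivity analysis: squared input gradient of
    f(x) = w2 . relu(W1 @ x), one heat-map value per input feature."""
    z = W1 @ x
    return (W1.T @ (w2 * (z > 0))) ** 2

# Illustrative weights and input (assumptions, not from the tutorial).
W1 = np.array([[1.0, -2.0, 0.5],
               [0.0,  1.0, 1.0]])
w2 = np.array([[1.0, 0.5]])            # output layer as a 1x2 matrix
x  = np.array([1.0, 0.2, 0.3])

a1 = np.maximum(W1 @ x, 0.0)           # hidden ReLU activations
f  = w2 @ a1                           # network output

R_hidden = lrp_layer(a1, w2, f)        # relevance of the hidden neurons
R_input  = lrp_layer(x, W1, R_hidden)  # heat-map values per input feature

print("output:", f)                    # [1.]
print("input relevances:", R_input)    # [ 1.  -0.3  0.3]
print("sensitivity map:", sensitivity(x, W1, w2.ravel()))  # [1. 2.25 1.]
```

Note the conservation property of LRP: the input relevances sum (up to epsilon) to the network output, which the gradient-based map does not guarantee.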
Text explanation: training the LSTM models, vocabulary generation, sentence formation
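The tutorial's pipeline trains an LSTM over an explanation vocabulary; as a library-free stand-in, this sketch shows only the final two steps of such a pipeline, vocabulary lookup and sentence formation, driven by relevance scores. The feature names, scores, and vocabulary entries are invented for illustration.

```python
def form_sentence(relevance, vocab, top_k=2):
    """Verbalize the top-k most relevant features as one sentence."""
    ranked = sorted(relevance, key=relevance.get, reverse=True)[:top_k]
    phrases = [vocab[f] for f in ranked]
    return "The prediction is driven mainly by " + " and ".join(phrases) + "."

# Invented relevance scores and vocabulary for an OCR-style example.
relevance = {"loop": 0.7, "stroke_width": 0.1, "descender": 0.2}
vocab = {"loop": "the closed loop of the character",
         "stroke_width": "the stroke width",
         "descender": "the descender below the baseline"}

print(form_sentence(relevance, vocab))
# The prediction is driven mainly by the closed loop of the character
# and the descender below the baseline.
```

A trained LSTM replaces the template here: conditioned on the same relevance-ranked features, it samples words from the learned vocabulary instead of looking up fixed phrases.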
Visualization of the layers: unwinding the black box, layer-wise activation generation, misclassification-identification case study
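The layer-wise activation capture behind this item, and the neuron-importance ranking it enables for pruning, can be sketched as below for a tiny fully connected ReLU network. The random weights and inputs are placeholders; in the session a trained OCR model would be inspected the same way.

```python
import numpy as np

def forward_with_activations(x, weights):
    """Run a forward pass and record each layer's ReLU activations."""
    activations = []
    a = x
    for W in weights:
        a = np.maximum(W @ a, 0.0)
        activations.append(a)
    return a, activations

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 3)),   # hidden layer: 4 neurons
           rng.standard_normal((2, 4))]   # output layer: 2 classes
X = rng.standard_normal((5, 3))           # a small batch of 5 inputs

# Mean absolute activation per hidden neuron over the batch serves as a
# simple importance score: low-scoring neurons are pruning candidates,
# and test cases can be designed to activate specific neurons.
hidden = np.stack([forward_with_activations(x, weights)[1][0] for x in X])
importance = np.abs(hidden).mean(axis=0)
print("hidden-neuron importance:", importance)
print("pruning order (least important first):", np.argsort(importance))
```

With a deep learning framework, the same capture is done by registering hooks on (or reading intermediate outputs of) each layer rather than rewriting the forward pass.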
Notebook and pen
Dr. Manjunath Ramachandra is a Principal Consultant in the AI research group at Wipro Limited, with a blend of academic and industry experience spanning over two decades. He has filed about 50 patents, chaired 33 conferences, conducted 17 tutorials and workshops, and authored a book and 180 research papers in international conferences and journals. He has represented industry in international standardization bodies such as the Wi-Fi Alliance, served as editor of the regional profiles standard in the Digital Living Network Alliance (DLNA), and acted as the industry liaison officer for the CE Linux Forum. His areas of research include deep learning, NLP, and AI applications.