Adversarial Attacks on Deep Learning
Submitted by Gaurav Goswami (@gauravgoswami) on Thursday, 12 April 2018
Deep learning algorithms are highly popular and are being applied to solve various problems with high accuracy. However, they are not infallible and are, in fact, highly susceptible to adversarial attacks. These attacks can manifest as synthetically generated data or as perturbed data that means one thing to a human observer and something completely different to the deep network. Such attacks make the predictions of deep networks less robust and unreliable for practical deployment. In this talk, I will provide an introduction to deep learning, adversarial perturbations, types of adversarial attacks, and defenses against such attacks.
- What are adversarial attacks?
- Why do adversarial attacks exist?
- What are some of the methods of defending against adversarial attacks?
- What does it mean for a deep learning based system that you may be designing today?
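To make the idea of a perturbed-data attack concrete, here is a minimal sketch of the Fast Gradient Sign Method (FGSM), one well-known way to craft such perturbations. The toy logistic-regression "model", its weights, and the inputs below are illustrative assumptions, not material from the talk; the point is only that a tiny, structured nudge to the input (x + ε · sign(∇ₓ loss)) can flip a confident prediction:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def sign(v):
    return 1.0 if v > 0 else (-1.0 if v < 0 else 0.0)

def predict(w, b, x):
    """Probability of class 1 under a toy logistic model (w, b)."""
    return sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

def fgsm(x, y, w, b, epsilon):
    """Perturb x so the model's cross-entropy loss on (x, y) increases."""
    p = predict(w, b, x)
    # Gradient of the cross-entropy loss w.r.t. the input x is (p - y) * w.
    grad_x = [(p - y) * wi for wi in w]
    return [xi + epsilon * sign(gi) for xi, gi in zip(x, grad_x)]

# Hypothetical model that classifies the clean input correctly (label y = 1).
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.5], 1.0

clean_pred = predict(w, b, x)          # confident, correct prediction
x_adv = fgsm(x, y, w, b, epsilon=0.9)
adv_pred = predict(w, b, x_adv)        # confidence collapses after the attack
```

In a real deep network the gradient is taken through the whole model (e.g. via backpropagation) and ε is kept small enough that the change is imperceptible to a human, which is exactly the mismatch between human and network perception described above.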
Draft slides are included in the slides link. Mathematical and algorithmic details will be trimmed from the talk to fit the material into a crisp format.
Gaurav Goswami is a researcher working primarily in the areas of artificial intelligence, machine learning, computer vision, and face recognition. He has authored more than 20 articles in reputed international peer-reviewed journals, conferences, and book chapters, and has filed 3 US patents for his contributions to AI methodologies. He is the recipient of the 2015-16 IBM Ph.D. fellowship award, third place at the IDRBT Doctoral Colloquium 2014, the best Doctoral Consortium award at IJCB 2014, and the best poster presentation award at the BTAS conference in 2013. He joined IBM in 2017 as an Artificial Intelligence/Machine Learning expert and has been working on real-life problems in AI with both established companies and startups. As part of his role at IBM, he has also delivered sessions on topics in AI and ML at various conferences, meetups, and events.