Machine Learning Security - The Data Scientist's Guide to Hardening ML Models
Submitted by Arjun Bahuguna (@arjunbahuguna) on Tuesday, 30 April 2019
Session type: Tutorial
With the rise in attacks on machine learning models (adversarial images, membership inference, model inversion, information reconstruction, data poisoning, etc.), it is imperative for companies to understand the attack surface of their ML services and published results.
In this 1.5-hour tutorial, the speakers will share insights from their three years of research in privacy-preserving data mining and show how companies like Google and Microsoft cope with threats to their machine learning models and user data privacy. The session will be interactive and will include live demos.
- Learn about attacks happening on ML models today
- Learn how to code defenses against them, using existing libraries
- Real-World Attacks on Machine Learning Systems
  - Membership Inference at AWS, GCP, Azure
  - Data Linkage Attacks at Netflix
  - Dataset Poisoning at Microsoft
  - Attacks on Amazon Alexa
  - Adversarial Image Attacks at Clarifai
  - Attacks using Google’s Prediction API
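To make the membership-inference entry concrete, here is an illustrative sketch (not the tutorial's actual material) of the simplest such attack: an overfit model tends to be more confident on examples it was trained on, so an attacker who can query prediction confidences can guess whether a record was in the training set. The "model" below is simulated by drawing confidences from two assumed distributions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated behaviour of an overfit model: higher top-class confidence
# on memorised training examples than on unseen ones. (Hypothetical
# numbers; a real attack queries a deployed prediction API.)
train_conf = rng.normal(loc=0.95, scale=0.03, size=1000).clip(0, 1)
test_conf = rng.normal(loc=0.70, scale=0.15, size=1000).clip(0, 1)

def is_member(confidence, threshold=0.9):
    """Confidence-thresholding attack: guess 'training member'
    whenever the model's confidence exceeds the threshold."""
    return confidence > threshold

tpr = is_member(train_conf).mean()   # members correctly flagged
fpr = is_member(test_conf).mean()    # non-members wrongly flagged
advantage = tpr - fpr                # > 0 means membership leaks
print(f"TPR={tpr:.2f}  FPR={fpr:.2f}  advantage={advantage:.2f}")
```

Stronger variants train "shadow models" to pick the threshold per class, but the leakage mechanism is the same.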
- Implications for Business Compliance
  - Penalties under International Data Regulation
  - Penalties under Indian Data Regulation
  - Ethical Issues in Data Acquisition
- Why do these attacks occur?
- How do these attacks affect ML pipelines?
- How to customize your defenses for business needs (tradeoffs and tips)
- Insights from 3 years of privacy-preserving machine learning at Next Tech Lab
- Defenses (used in production)
  - Homomorphic Encryption at Microsoft
  - Multi-party Computation at VISA Research
  - Federated Learning at Google
  - Differential Privacy at Google
  - Blockchain-based solutions at OpenMined
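The multi-party computation entry rests on additive secret sharing, whose core fits in a few lines. The sketch below is an illustrative toy, not production MPC: protocols such as SPDZ layer correctness MACs and precomputed multiplication triples on top of this idea, and the modulus here is an arbitrary choice.

```python
import random

Q = 2**31 - 1  # modulus for the share arithmetic (illustrative choice)

def share(secret, n_parties=3):
    """Split `secret` into n additive shares that sum to it mod Q.
    Any subset of fewer than n shares reveals nothing about the secret."""
    shares = [random.randrange(Q) for _ in range(n_parties - 1)]
    shares.append((secret - sum(shares)) % Q)
    return shares

def reconstruct(shares):
    return sum(shares) % Q

# Each party holds one share of each input.
a_shares = share(25)
b_shares = share(17)

# Addition happens share-wise; the inputs are never reconstructed.
sum_shares = [(a + b) % Q for a, b in zip(a_shares, b_shares)]
print(reconstruct(sum_shares))  # → 42
```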
- Defenses (upcoming theoretical research)
  - Zero-knowledge Proofs
  - Garbled Circuits
  - Machine Learning on Secure Enclaves
  - Quantum Defenses
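Of the research-stage defenses, zero-knowledge proofs are the easiest to sketch. Below is an illustrative toy (not the tutorial's material) of a Schnorr proof of knowledge of a discrete logarithm, made non-interactive with the Fiat–Shamir heuristic; the group parameters are deliberately tiny and not secure.

```python
import hashlib
import random

# Toy parameters (NOT secure sizes): p = 2q + 1 with q prime, and
# g = 2^2 mod p generating the order-q subgroup of squares mod p.
q = 1019
p = 2039
g = 4

def prove(x):
    """Prove knowledge of x with y = g^x mod p without revealing x."""
    y = pow(g, x, p)
    r = random.randrange(q)                  # prover's secret nonce
    t = pow(g, r, p)                         # commitment
    c = int(hashlib.sha256(f"{g}{y}{t}".encode()).hexdigest(), 16) % q
    s = (r + c * x) % q                      # response
    return y, t, s

def verify(y, t, s):
    c = int(hashlib.sha256(f"{g}{y}{t}".encode()).hexdigest(), 16) % q
    # g^s == t * y^c (mod p) holds iff the prover knew x,
    # since g^s = g^(r + c*x) = g^r * (g^x)^c.
    return pow(g, s, p) == (t * pow(y, c, p)) % p

y, t, s = prove(x=123)
print(verify(y, t, s))  # → True
```

In an ML setting the same principle lets a party prove a statement about private data (e.g. that a committed model was trained correctly) without disclosing the data itself.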
- Learn to Implement
  - Adversarial Image Attacks
  - A secure MPC pipeline using PyTorch
  - Differential Privacy using TensorFlow
  - SPDZ for TensorFlow
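As a flavour of the adversarial-image exercise, here is a minimal fast gradient sign method (FGSM) sketch against a toy logistic-regression "classifier" in plain NumPy. The weights and input are made-up assumptions for illustration; real attacks perturb image pixels against a deep network, typically via a library like CleverHans.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained logistic-regression classifier: weights w,
# bias b, and an input x whose true label is y = 1.
w = np.array([2.0, -1.0, 0.5, 1.5])
b = -0.5
x = np.array([0.6, 0.1, 0.8, 0.4])
y = 1

def predict(x):
    return sigmoid(w @ x + b)

def fgsm(x, y, eps):
    """FGSM: step the input in the sign of the loss gradient.
    For binary cross-entropy the input gradient is (p - y) * w."""
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

print(predict(x))             # confidently class 1
x_adv = fgsm(x, y, eps=0.4)
print(predict(x_adv))         # pushed below 0.5: now class 0
```

The perturbation budget `eps` controls how visible the change is; against image models, changes far too small for a human to notice routinely flip predictions.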
- Learn to Use Existing Implementations & Frameworks
  - TF Encrypted
  - CleverHans (TensorFlow)
  - PySyft (PyTorch)
  - Microsoft’s PySEAL
  - Google’s RAPPOR
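Google's RAPPOR builds on classic randomized response, which is simple enough to demonstrate directly. The sketch below shows only that core idea with simulated numbers; RAPPOR itself adds Bloom-filter encoding and permanent/instantaneous randomization layers on top.

```python
import random

def randomized_response(truth, p_keep=0.75):
    """Report the true bit with probability p_keep, otherwise a
    uniformly random bit. Each individual report is deniable, yet
    the population aggregate can still be estimated."""
    if random.random() < p_keep:
        return truth
    return random.random() < 0.5

random.seed(0)
true_rate = 0.30                  # fraction holding a sensitive attribute
n = 100_000
truths = [random.random() < true_rate for _ in range(n)]
reports = [randomized_response(t) for t in truths]

# De-bias the aggregate: E[report] = p_keep*rate + (1 - p_keep)*0.5
observed = sum(reports) / n
estimate = (observed - 0.25 * 0.5) / 0.75
print(round(estimate, 2))         # close to the true rate of 0.30
```

The analyst recovers an accurate population estimate while no single report can be pinned to any respondent's true answer.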
Requirements: laptops with TensorFlow and PyTorch pre-installed.
Arjun Bahuguna is an applied cryptography researcher at Next Tech Lab. His interests include statistical learning theory, privacy-enhancing technologies, and distributed systems. His research has won two university gold medals.
Sourav Sharan is a computer vision engineer at Next Tech Lab with two years of experience, focused on deep learning approaches. His interests include facial recognition systems, adversarial image attacks, numerical optimization, and chess.