Privacy Preserving AI: Protecting User Privacy without Compromising Quality of Service
Submitted by upendra singh (@upendrasingh1) on Wednesday, 19 February 2020
There are numerous ways in which an “adversary” can exploit a user's interaction with an AI-based system (for example, recommender systems!). Let us take three use cases:
1. Continuous observation of recommendations, combined with some background information, makes it possible to infer an individual's ratings or even transaction history, especially for neighborhood-based methods. An adversary can infer the rating history of an active user by creating fake neighbours based on background information.
2. There are two possible query types for a location-based service: snapshot and continuous. A snapshot query is submitted once by the user, for example, “Where is the closest tea shop?” A continuous query is submitted at discrete time points by the same user, for example, “Continuously send me petrol price coupons as I travel the highway.” Both types of queries are prevalent in location-based systems. An adversary can infer the current location of the user (from a snapshot query) or their trajectory (from a continuous query).
3. Like other machine learning models, deep learning models are susceptible to several types of attacks. For example, a centralized collection of photos, speech, and video clips from millions of individuals carries privacy risks when shared with others. Learned models can also disclose sensitive information: potential privacy leaks can stem from malicious inference on the model's inputs and outputs.
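To make the first use case concrete, here is a minimal sketch of the fake-neighbour attack, using toy data and a hypothetical user-based collaborative-filtering predictor (not any production recommender): the adversary injects a fake profile that copies their background knowledge about the target, and the "prediction" served to the fake is essentially the target's private rating.

```python
import math

# Toy ratings matrix: rows = users, cols = items; 0 means "unrated".
# User 0 is the target; their rating on item 3 is considered private.
ratings = [
    [5, 1, 4, 2],   # target user
    [3, 3, 3, 0],
    [1, 5, 2, 0],
]

def predict(matrix, user, item, k=1):
    """User-based CF: predict matrix[user][item] from the k most similar
    users (cosine similarity over co-rated items) who rated `item`."""
    sims = []
    for other, row in enumerate(matrix):
        if other == user or row[item] == 0:
            continue
        shared = [i for i in range(len(row))
                  if i != item and matrix[user][i] > 0 and row[i] > 0]
        if not shared:
            continue
        dot = sum(matrix[user][i] * row[i] for i in shared)
        na = math.sqrt(sum(matrix[user][i] ** 2 for i in shared))
        nb = math.sqrt(sum(row[i] ** 2 for i in shared))
        sims.append((dot / (na * nb), other))
    sims.sort(reverse=True)
    top = sims[:k]
    return sum(s * matrix[o][item] for s, o in top) / sum(s for s, _ in top)

# The adversary injects a fake user whose ratings copy the background
# information they hold about the target (items 0-2); item 3 is left unrated.
fake_user = [5, 1, 4, 0]
matrix = ratings + [fake_user]

# The fake user's nearest neighbour is the target itself, so the
# "recommendation" served to the fake leaks the target's private rating.
leaked = predict(matrix, user=3, item=3, k=1)
print(leaked)  # 2.0 — the target's hidden rating on item 3
```

With more items and several fake neighbours the same idea recovers much of a target's rating history, which is why neighborhood-based methods are singled out above.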
There are many such potentially exploitative use cases in which not only is the user's privacy threatened, but the user may also be put in danger.
How can we solve privacy problems in our AI applications?
For recommendation-based, location-based, and deep-learning-based services, privacy-preserving AI calls for methods that preserve as much of the quality of the desired service as possible, while hindering its undesired tracking and exploitative capacities. We will discuss how to solve privacy problems in our AI applications.
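As a first taste of this quality-versus-privacy trade-off, differential privacy (one of the perturbation approaches covered later in the talk) answers aggregate queries with calibrated noise: the answer stays useful, but any single individual's presence is masked. A minimal sketch of the Laplace mechanism, with toy numbers for illustration only:

```python
import math
import random

def laplace_mechanism(true_value, sensitivity, epsilon):
    """Global differential privacy: release true_value plus Laplace noise of
    scale sensitivity/epsilon. Smaller epsilon = stronger privacy, more noise."""
    scale = sensitivity / epsilon
    # Sample Laplace(0, scale) by inverse-transform from a uniform draw.
    u = random.random() - 0.5
    sign = 1 if u >= 0 else -1
    noise = -scale * sign * math.log(1 - 2 * abs(u))
    return true_value + noise

# Example: a counting query ("how many users rated this item?") has
# sensitivity 1 -- adding or removing one person changes the count by at most 1.
random.seed(42)
noisy_count = laplace_mechanism(true_value=1000, sensitivity=1, epsilon=0.5)
print(noisy_count)  # close to 1000, but any individual's contribution is hidden
```

The service still learns an accurate count (the noise has scale 2 here), yet an adversary observing the output cannot confidently infer whether any specific user is in the dataset.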
Note: Will update this over the next few days and weeks
1. First, we will discuss various kinds of threats: reconstruction attacks, model inversion attacks, membership inference attacks, and de-anonymization attacks.
2. Then, various approaches to tackling these attacks, using techniques such as:
- Cryptographic approaches: Homomorphic Encryption, Garbled Circuits, Secret Sharing, Secure Processors
- Perturbation approaches: differential privacy (local and global), dimensionality reduction
- Differential Privacy for Deep Learning, Secure Federated Learning
3. Encrypted Deep Learning: building an encrypted dataset and generating an encrypted prediction with an encrypted neural network on that dataset (hands-on demo!)
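The flavour of encrypted inference can be previewed with additive secret sharing, one of the cryptographic approaches listed above. This is a minimal sketch under assumed conditions (two non-colluding servers, public model weights, a single linear layer), not the speaker's actual demo code:

```python
import random

Q = 2 ** 61 - 1  # large prime; all shares live in the field Z_Q

def share(x):
    """Split integer x into two additive shares. Each share on its own is a
    uniformly random field element and reveals nothing about x."""
    a = random.randrange(Q)
    return a, (x - a) % Q

def reconstruct(a, b):
    return (a + b) % Q

# "Encrypt" a user's feature vector by sharing it between two hypothetical
# non-colluding servers; the model weights are public for simplicity.
features = [3, 1, 4]
weights = [2, 0, 5]
f_shares = [share(v) for v in features]

# Each server computes its half of the dot product on shares only;
# neither server ever sees the user's raw features.
server0 = sum(w * f[0] for w, f in zip(weights, f_shares)) % Q
server1 = sum(w * f[1] for w, f in zip(weights, f_shares)) % Q

# Only when the two result-shares are combined does the prediction appear.
prediction = reconstruct(server0, server1)
print(prediction)  # 26 = 3*2 + 1*0 + 4*5
```

Real encrypted deep learning extends this idea to shared weights, fixed-point encodings of real numbers, and non-linear layers, but the core property is the same: the computation runs on data that each party sees only in encrypted (shared) form.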
The speaker is a seasoned full-stack data scientist with over 12 years of experience in data science, machine learning, and big data engineering.
Senior Principal Architect - Data Sciences @ Epsilon