
Differential Privacy Defenses and Sampling Attacks for Membership Inference


Machine learning models are commonly trained on sensitive and personal data such as pictures, medical records, and financial records. A serious breach of the privacy of this training set occurs when an adversary can decide whether or not a specific data point in her possession was used to train the model. While all previous membership inference attacks rely on access to the posterior probabilities, we present the first attack that relies only on the predicted class label - yet achieves a high success rate.
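As a rough illustration of the label-only setting, one plausible (hypothetical, not the paper's exact method) membership signal is label stability under small perturbations: training points tend to sit farther from the decision boundary, so their predicted label survives more random noise. The sketch below assumes only black-box access to a `predict` function returning a class label.

```python
import random

def label_stability_score(predict, x, n_queries=50, noise=0.1, seed=0):
    """Fraction of small random perturbations of x whose predicted
    label matches predict(x). A higher score suggests x lies far from
    the decision boundary, which can correlate with membership.
    NOTE: an illustrative sketch, not the attack from the paper."""
    rng = random.Random(seed)
    base = predict(x)
    same = 0
    for _ in range(n_queries):
        perturbed = [xi + rng.gauss(0, noise) for xi in x]
        if predict(perturbed) == base:
            same += 1
    return same / n_queries

# Toy demo with a 1-D threshold "model": a point far from the
# boundary at 0.5 keeps its label more often than a point near it.
model = lambda x: int(x[0] > 0.5)
far_score = label_stability_score(model, [0.9])
near_score = label_stability_score(model, [0.52])
```

An attacker would threshold such a score (calibrated on shadow models or known non-members) to decide membership, all without ever seeing posterior probabilities.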


14th ACM Workshop on Artificial Intelligence and Security, co-located with the 28th ACM Conference on Computer and Communications Security

Date last modified

2021-12-08 09:05:01