In the taxonomy below, membership inference attacks are categorized along five dimensions: target model, adversarial knowledge, attack approach, training method, and target domain.
Target Model
The target model category is divided into four subcategories: classification models, generative models, regression models, and embedding models.
Classification Models
The classification models subcategory is divided into two groups: binary-class classifiers and multi-class classifiers.
Binary-Class Classifiers
- A Pragmatic Approach to Membership Inferences on Machine Learning Models – Long et al. – https://experts.illinois.edu/en/publications/a-pragmatic-approach-to-membership-inferences-on-machine-learning
- Demystifying Membership Inference Attacks in Machine Learning as a Service – Truex et al. – https://ieeexplore.ieee.org/document/8634878
- Differentially Private Learning Does Not Bound Membership Inference – Humphries et al. – https://www.arxiv.org/abs/2010.12112v1
- Disparate Vulnerability to Membership Inference Attacks – Kulynych et al. – https://arxiv.org/abs/1906.00389
- Membership Inference Attacks against Machine Learning Models – Shokri et al. – https://arxiv.org/abs/1610.05820
- ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models – Salem et al. – https://arxiv.org/abs/1806.01246
- On the Privacy Risks of Model Explanations – Shokri et al. – https://arxiv.org/abs/1907.00164
- Practical Blind Membership Inference Attack via Differential Comparisons – Hui et al. – https://arxiv.org/abs/2101.01341
- Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference – Leino et al. – https://par.nsf.gov/servlets/purl/10238792
- When Machine Unlearning Jeopardizes Privacy – Chen et al. – https://arxiv.org/abs/2005.02205
- Understanding Membership Inferences on Well-Generalized Learning Models – Long et al. – https://arxiv.org/abs/1802.04889
Multi-Class Classifiers
- A Pragmatic Approach to Membership Inferences on Machine Learning Models – Long et al. – https://experts.illinois.edu/en/publications/a-pragmatic-approach-to-membership-inferences-on-machine-learning
- Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning – Nasr et al. – https://ieeexplore.ieee.org/document/8835245
- Demystifying Membership Inference Attacks in Machine Learning as a Service – Truex et al. – https://ieeexplore.ieee.org/document/8634878
- Disparate Vulnerability to Membership Inference Attacks – Kulynych et al. – https://arxiv.org/abs/1906.00389
- Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability – Truex et al. – https://arxiv.org/pdf/1911.09777
- Exploiting Unintended Feature Leakage in Collaborative Learning – Melis et al. – https://www.cs.cornell.edu/~shmat/shmat_oak19.pdf
- Label-Only Membership Inference Attacks – Choquette-Choo et al. – https://arxiv.org/abs/2007.14321
- Membership Inference Attacks against Adversarially Robust Deep Learning Models – Song et al. – https://www.princeton.edu/~pmittal/publications/liwei-dls19.pdf
- Membership Inference Attack against Differentially Private Deep Learning Model – Rahman et al. – https://www.researchgate.net/publication/324980710_Membership_inference_attack_against_differentially_private_deep_learning_model
- Membership Inference Attacks against Machine Learning Models – Shokri et al. – https://arxiv.org/abs/1610.05820
- Membership Inference Attacks and Defenses in Classification Models – Li et al. – https://arxiv.org/abs/2002.12062
- Membership Inference Attack on Graph Neural Networks – Olatunji et al. – https://arxiv.org/abs/2101.06570
- Membership Leakage in Label-Only Exposures – Li and Zhang – https://arxiv.org/abs/2007.15528
- MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples – Jia et al. – https://arxiv.org/abs/1909.10594
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models – Liu et al. – https://arxiv.org/abs/2102.02551
- ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models – Salem et al. – https://arxiv.org/abs/1806.01246
- Node-Level Membership Inference Attacks Against Graph Neural Networks – He et al. – https://arxiv.org/abs/2102.05429
- On the Difficulty of Membership Inference Attacks – Rezaei and Liu – https://openaccess.thecvf.com/content/CVPR2021/html/Rezaei_On_the_Difficulty_of_Membership_Inference_Attacks_CVPR_2021_paper.html
- On the Effectiveness of Regularization Against Membership Inference Attacks – Kaya et al. – https://arxiv.org/abs/2006.05336
- On the Privacy Risks of Algorithmic Fairness – Chang and Shokri – https://arxiv.org/abs/2011.03731
- On the Privacy Risks of Model Explanations – Shokri et al. – https://arxiv.org/abs/1907.00164
- Practical Blind Membership Inference Attack via Differential Comparisons – Hui et al. – https://arxiv.org/abs/2101.01341
- Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting – Yeom et al. – https://ieeexplore.ieee.org/document/8429311
- Revisiting Membership Inference Under Realistic Assumptions – Jayaraman et al. – https://arxiv.org/abs/2005.10881
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries – Rahimian et al. – https://arxiv.org/abs/2009.00395
- Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference – Leino et al. – https://par.nsf.gov/servlets/purl/10238792
- Systematic Evaluation of Privacy Risks of Machine Learning Models – Song and Mittal – https://www.usenix.org/conference/usenixsecurity21/presentation/song
- Understanding Membership Inferences on Well-Generalized Learning Models – Long et al. – https://arxiv.org/abs/1802.04889
- When Machine Unlearning Jeopardizes Privacy – Chen et al. – https://arxiv.org/abs/2005.02205
- White-box vs Black-box: Bayes Optimal Strategies for Membership Inference – Sablayrolles et al. – https://arxiv.org/abs/1908.11229
Generative Models
The generative models subcategory is divided into two groups: GANs and VAEs.
GANs
- GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models – Chen et al. – https://arxiv.org/abs/1909.03935
- Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection – Wu et al. – https://arxiv.org/abs/1908.07882
- Membership inference attacks against generative models – Hayes et al. – https://arxiv.org/abs/1705.07663
- Monte Carlo and Reconstruction Membership Inference Attacks against Generative Models – Hilprecht et al. – https://petsymposium.org/popets/2019/popets-2019-0067.pdf
- Performing Co-Membership Attacks Against Deep Generative Models – Liu et al. – https://sites.rutgers.edu/jie-gao/wp-content/uploads/sites/375/2021/10/attack-GAN.pdf
- privGAN: Protecting GANs from membership inference attacks at low cost to utility – Mukherjee et al. – https://petsymposium.org/popets/2021/popets-2021-0041.pdf
- This Person (Probably) Exists. Identity Membership Attacks Against GAN Generated Faces – Webster et al. – https://arxiv.org/abs/2107.06018
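Several of the attacks above (for example, the full black-box attack in GAN-Leaks and the Monte Carlo attacks of Hilprecht et al.) reduce membership inference against a generator to a reconstruction or nearest-sample distance: if the generator can produce something very close to the query record, that record is more likely to have been in its training set. A minimal sketch of that signal, where `sample_from_generator` is a hypothetical sampling interface and the Euclidean metric is an assumption:

```python
# Loose sketch of a distance-based membership signal against a generative
# model; `sample_from_generator` is a hypothetical black-box interface and
# the Euclidean metric is an illustrative assumption.
import numpy as np

def reconstruction_score(query, sample_from_generator, n_samples=10000, seed=0):
    """Score a query record: a smaller distance to the nearest generated
    sample is taken as evidence of training-set membership."""
    rng = np.random.RandomState(seed)
    samples = sample_from_generator(n_samples, rng)   # shape (n_samples, d)
    dists = np.linalg.norm(samples - query, axis=1)
    return -dists.min()   # higher score = more member-like
```

The published attacks refine this idea with density estimates, optimization over the latent space, or calibration against a reference generator.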
VAEs
- GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models – Chen et al. – https://arxiv.org/abs/1909.03935
- Monte Carlo and Reconstruction Membership Inference Attacks against Generative Models – Hilprecht et al. – https://petsymposium.org/popets/2019/popets-2019-0067.pdf
- Performing Co-Membership Attacks Against Deep Generative Models – Liu et al. – https://sites.rutgers.edu/jie-gao/wp-content/uploads/sites/375/2021/10/attack-GAN.pdf
Regression Models
The regression models subcategory contains a single group: deep regression.
Deep Regression
- Membership Inference Attacks on Deep Regression Models for Neuroimaging – Gupta et al. – https://arxiv.org/abs/2105.02866
Embedding Models
The embedding models subcategory is divided into three groups: NLP embedding, graph embedding, and image encoder.
NLP Embedding
- Information Leakage in Embedding Models – Song and Raghunathan – https://arxiv.org/abs/2004.00053
- Investigating the Impact of Pre-trained Word Embeddings on Memorization in Neural Networks – Thomas et al. – https://dl.acm.org/doi/10.1007/978-3-030-58323-1_30
- Membership Inference on Word Embedding and Beyond – Mahloujifar et al. – https://arxiv.org/abs/2106.11384
Graph Embedding
- Quantifying Privacy Leakage in Graph Embedding – Duddu et al. – https://arxiv.org/abs/2010.00906
Image Encoder
- EncoderMI: Membership Inference against Pre-trained Encoders in Contrastive Learning – Liu et al. – https://arxiv.org/abs/2108.11023
Adversarial Knowledge
The adversarial knowledge category is divided into two subcategories: black-box attacks and white-box attacks.
Black-Box Attacks
The black-box attacks subcategory is divided into three groups: prediction vector, top-k confidence, and label only.
Prediction Vector
- A Pragmatic Approach to Membership Inferences on Machine Learning Models – Long et al. – https://experts.illinois.edu/en/publications/a-pragmatic-approach-to-membership-inferences-on-machine-learning
- Demystifying Membership Inference Attacks in Machine Learning as a Service – Truex et al. – https://ieeexplore.ieee.org/document/8634878
- Disparate Vulnerability to Membership Inference Attacks – Kulynych et al. – https://arxiv.org/abs/1906.00389
- Membership Inference Attacks against Machine Learning Models – Shokri et al. – https://arxiv.org/abs/1610.05820
- On the Privacy Risks of Model Explanations – Shokri et al. – https://arxiv.org/abs/1907.00164
- Practical Blind Membership Inference Attack via Differential Comparisons – Hui et al. – https://arxiv.org/abs/2101.01341
- Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting – Yeom et al. – https://ieeexplore.ieee.org/document/8429311
- Revisiting Membership Inference Under Realistic Assumptions – Jayaraman et al. – https://arxiv.org/abs/2005.10881
- SocInf: Membership Inference Attacks on Social Media Health Data With Machine Learning – Liu et al. – https://ieeexplore.ieee.org/document/8728167
- Systematic Evaluation of Privacy Risks of Machine Learning Models – Song and Mittal – https://www.usenix.org/conference/usenixsecurity21/presentation/song
- Understanding Membership Inferences on Well-Generalized Learning Models – Long et al. – https://arxiv.org/abs/1802.04889
- When Machine Unlearning Jeopardizes Privacy – Chen et al. – https://arxiv.org/abs/2005.02205
- White-box vs Black-box: Bayes Optimal Strategies for Membership Inference – Sablayrolles et al. – https://arxiv.org/abs/1908.11229
Top-K Confidence
- Membership Inference Attacks against Machine Learning Models – Shokri et al. – https://arxiv.org/abs/1610.05820
- ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models – Salem et al. – https://arxiv.org/abs/1806.01246
Label Only
- Label-Only Membership Inference Attacks – Choquette-Choo et al. – https://arxiv.org/abs/2007.14321
- Membership Leakage in Label-Only Exposures – Li and Zhang – https://arxiv.org/abs/2007.15528
- Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting – Yeom et al. – https://ieeexplore.ieee.org/document/8429311
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries – Rahimian et al. – https://arxiv.org/abs/2009.00395
White-Box Attacks
The white-box attacks subcategory is not further divided into groups.
- Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning – Nasr et al. – https://ieeexplore.ieee.org/document/8835245
- Exploiting Unintended Feature Leakage in Collaborative Learning – Melis et al. – https://www.cs.cornell.edu/~shmat/shmat_oak19.pdf
- GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models – Chen et al. – https://arxiv.org/abs/1909.03935
- Membership inference attacks against generative models – Hayes et al. – https://arxiv.org/abs/1705.07663
- Monte Carlo and Reconstruction Membership Inference Attacks against Generative Models – Hilprecht et al. – https://petsymposium.org/popets/2019/popets-2019-0067.pdf
- On the Difficulty of Membership Inference Attacks – Rezaei and Liu – https://openaccess.thecvf.com/content/CVPR2021/html/Rezaei_On_the_Difficulty_of_Membership_Inference_Attacks_CVPR_2021_paper.html
- Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference – Leino et al. – https://par.nsf.gov/servlets/purl/10238792
Attack Approach
The attack approach category is divided into three subcategories: classifier-based attacks, metric-based attacks, and differential comparisons-based attacks.
Classifier-Based Attacks
The classifier-based attacks subcategory contains a single group: shadow training.
Shadow Training
- A Pragmatic Approach to Membership Inferences on Machine Learning Models – Long et al. – https://experts.illinois.edu/en/publications/a-pragmatic-approach-to-membership-inferences-on-machine-learning
- Auditing Data Provenance in Text-Generation Models – Song and Shmatikov – https://arxiv.org/abs/1811.00513
- Demystifying Membership Inference Attacks in Machine Learning as a Service – Truex et al. – https://ieeexplore.ieee.org/document/8634878
- Membership Inference Attack with Multi-Grade Service Models in Edge Intelligence – Wang et al. – https://dl.acm.org/doi/abs/10.1109/MNET.011.2000246
- Membership Inference Attacks against Machine Learning Models – Shokri et al. – https://arxiv.org/abs/1610.05820
- ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models – Salem et al. – https://arxiv.org/abs/1806.01246
- On the Privacy Risks of Model Explanations – Shokri et al. – https://arxiv.org/abs/1907.00164
- Practical Membership Inference Attack Against Collaborative Inference in Industrial IoT – Chen et al. – https://ieeexplore.ieee.org/document/9302683
- SocInf: Membership Inference Attacks on Social Media Health Data With Machine Learning – Liu et al. – https://ieeexplore.ieee.org/document/8728167
- When Machine Unlearning Jeopardizes Privacy – Chen et al. – https://arxiv.org/abs/2005.02205
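Most of the classifier-based attacks listed above follow the shadow-training recipe of Shokri et al.: train shadow models on data drawn from roughly the same distribution as the target's training data, label their prediction vectors as member or non-member, and fit an attack classifier on those labeled vectors. A minimal sketch of that pipeline follows; the synthetic data, the scikit-learn models, and the single attack classifier (rather than one per class) are illustrative assumptions, not any paper's exact setup.

```python
# Minimal sketch of a shadow-training membership inference attack
# (in the spirit of Shokri et al.); data, model choices, and split
# sizes are illustrative assumptions.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X, y = make_classification(n_samples=6000, n_features=20,
                           n_informative=10, n_classes=2, random_state=0)

def shadow_records(X_in, y_in, X_out):
    """Train one shadow model; return (prediction vector, member label) pairs."""
    shadow = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_in, y_in)
    feats = np.vstack([shadow.predict_proba(X_in), shadow.predict_proba(X_out)])
    labels = np.concatenate([np.ones(len(X_in)), np.zeros(len(X_out))])
    return feats, labels

# Build the attack training set from three disjoint shadow models.
attack_feats, attack_labels = [], []
splits = np.array_split(rng.permutation(len(X)), 6)
for i in range(0, 6, 2):
    tr, te = splits[i], splits[i + 1]
    f, lab = shadow_records(X[tr], y[tr], X[te])
    attack_feats.append(f)
    attack_labels.append(lab)
attack_model = LogisticRegression(max_iter=1000).fit(
    np.vstack(attack_feats), np.concatenate(attack_labels))

# Attack time: query the target model and classify its prediction vector.
# (Stand-in target; in reality the attacker only has query access.)
target = RandomForestClassifier(n_estimators=50, random_state=1).fit(X[:1000], y[:1000])
candidates = X[:5]   # records whose membership we want to infer
membership_scores = attack_model.predict_proba(target.predict_proba(candidates))[:, 1]
print(membership_scores)
```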
Metric-Based Attacks
The metric-based attacks subcategory is divided into six groups: prediction correctness, prediction loss, prediction confidence, prediction entropy, adversarial perturbation, and hypothesis test.
Prediction Correctness
- Demystifying the Membership Inference Attack – Irolla and Châtel – https://ieeexplore.ieee.org/document/8962136
- Label-Only Membership Inference Attacks – Choquette-Choo et al. – https://arxiv.org/abs/2007.14321
- Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting – Yeom et al. – https://ieeexplore.ieee.org/document/8429311
- Quantifying Membership Inference Vulnerability via Generalization Gap and Other Model Metrics – Bentley et al. – https://arxiv.org/abs/2009.05669
- White-box vs Black-box: Bayes Optimal Strategies for Membership Inference – Sablayrolles et al. – https://arxiv.org/abs/1908.11229
Prediction Loss
- Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting – Yeom et al. – https://ieeexplore.ieee.org/document/8429311
- White-box vs Black-box: Bayes Optimal Strategies for Membership Inference – Sablayrolles et al. – https://arxiv.org/abs/1908.11229
Prediction Confidence
- ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models – Salem et al. – https://arxiv.org/abs/1806.01246
Prediction Entropy
- ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models – Salem et al. – https://arxiv.org/abs/1806.01246
- Systematic Evaluation of Privacy Risks of Machine Learning Models – Song and Mittal – https://www.usenix.org/conference/usenixsecurity21/presentation/song
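The prediction loss, prediction confidence, and prediction entropy groups above share one template: compute a per-record statistic from the target model's output and compare it to a threshold (Yeom et al. use the training loss, Salem et al. the top confidence, and Song and Mittal a refined entropy). A minimal sketch of these signals follows; the thresholds are assumptions an attacker would calibrate, for example on shadow models, and Song and Mittal's class-dependent modified entropy is omitted.

```python
# Minimal sketches of metric-based membership signals; the thresholds
# tau_* are assumed to be calibrated by the attacker (e.g., on shadow models).
import numpy as np

def loss_attack(prob_vec, true_label, tau_loss):
    """Low cross-entropy loss on the true label -> predict 'member'."""
    loss = -np.log(prob_vec[true_label] + 1e-12)
    return loss < tau_loss

def confidence_attack(prob_vec, tau_conf):
    """High maximum confidence -> predict 'member'."""
    return prob_vec.max() > tau_conf

def entropy_attack(prob_vec, tau_ent):
    """Low prediction entropy -> predict 'member'."""
    entropy = -np.sum(prob_vec * np.log(prob_vec + 1e-12))
    return entropy < tau_ent

# A confident, correct prediction looks more like a training member.
p = np.array([0.02, 0.95, 0.03])
print(loss_attack(p, true_label=1, tau_loss=0.5),
      confidence_attack(p, tau_conf=0.9),
      entropy_attack(p, tau_ent=0.5))
```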
Adversarial Perturbation
- Label-Only Membership Inference Attacks – Choquette-Choo et al. – https://arxiv.org/abs/2007.14321
- Membership Leakage in Label-Only Exposures – Li and Zhang – https://arxiv.org/abs/2007.15528
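The perturbation-based, label-only attacks above exploit the observation that training members tend to lie farther from the decision boundary, so more perturbation is needed to flip their predicted label. The sketch below estimates a crude noise-robustness score from label-only queries; `predict_label`, the radii, and the trial count are hypothetical placeholders, and the published attacks use more careful estimates such as adversarial (rather than random) perturbations.

```python
# Loose sketch of a label-only, perturbation-based membership signal:
# estimate how robust the predicted label is to random noise.
# `predict_label` is a hypothetical label-only query interface.
import numpy as np

def noise_robustness(predict_label, x, y_true, radii, trials=25, seed=0):
    """Fraction of noisy copies of x still classified as y_true, averaged
    over the noise radii; a higher value suggests 'member'."""
    rng = np.random.RandomState(seed)
    keep = []
    for r in radii:
        noisy = x + r * rng.randn(trials, x.size)
        keep.append(np.mean([predict_label(n) == y_true for n in noisy]))
    return float(np.mean(keep))

# Usage idea: score = noise_robustness(model_api, x, y, radii=[0.1, 0.5, 1.0]);
# predict 'member' when the score exceeds a calibrated threshold.
```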
Hypothesis Test
- A Pragmatic Approach to Membership Inferences on Machine Learning Models – Long et al. – https://experts.illinois.edu/en/publications/a-pragmatic-approach-to-membership-inferences-on-machine-learning
- Understanding Membership Inferences on Well-Generalized Learning Models – Long et al. – https://arxiv.org/abs/1802.04889
Differential Comparisons-Based Attacks
The differential comparisons-based attacks subcategory contains a single group: BLINDMI.
BLINDMI
- Practical Blind Membership Inference Attack via Differential Comparisons – Hui et al. – https://arxiv.org/abs/2101.01341
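BLINDMI's differential comparison operates on sets of prediction vectors rather than on single records: it prepares a generated non-member set and tests how moving a sample from the probed set into that non-member set changes the distance between the two sets (maximum mean discrepancy in the paper). The sketch below is only a loose illustration of that idea; the mean-gap distance, the single-pass decision, and the toy move rule are simplifications, not the published algorithm.

```python
# Deliberately loose illustration of the differential-comparison idea:
# the mean-gap distance stands in for MMD, and the inputs are assumed to
# be prediction (softmax) vectors. Not the BLINDMI algorithm itself.
import numpy as np

def mean_gap(a, b):
    """Toy stand-in for a set distance such as MMD: gap between set means."""
    return np.linalg.norm(a.mean(axis=0) - b.mean(axis=0))

def differential_compare(candidate, probe_set, nonmember_set):
    """Infer 'member' if moving the candidate (a row of probe_set) into the
    non-member set shrinks the gap between the two sets."""
    before = mean_gap(probe_set, nonmember_set)
    remaining = probe_set[~np.all(probe_set == candidate, axis=1)]
    augmented = np.vstack([nonmember_set, candidate])
    return mean_gap(remaining, augmented) < before
```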
Training Method
The training method category is divided into two subcategories: centralized training and federated training.
Centralized Training
The centralized training subcategory is not further divided into groups.
- A Pragmatic Approach to Membership Inferences on Machine Learning Models – Long et al. – https://experts.illinois.edu/en/publications/a-pragmatic-approach-to-membership-inferences-on-machine-learning
- GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models – Chen et al. – https://arxiv.org/abs/1909.03935
- Label-Only Membership Inference Attacks – Choquette-Choo et al. – https://arxiv.org/abs/2007.14321
- Membership inference attacks against generative models – Hayes et al. – https://arxiv.org/abs/1705.07663
- Membership Inference Attacks against Machine Learning Models – Shokri et al. – https://arxiv.org/abs/1610.05820
- Membership Leakage in Label-Only Exposures – Li and Zhang – https://arxiv.org/abs/2007.15528
- ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models – Salem et al. – https://arxiv.org/abs/1806.01246
- Monte Carlo and Reconstruction Membership Inference Attacks against Generative Models – Hilprecht et al. – https://petsymposium.org/popets/2019/popets-2019-0067.pdf
- Practical Blind Membership Inference Attack via Differential Comparisons – Hui et al. – https://arxiv.org/abs/2101.01341
- Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting – Yeom et al. – https://ieeexplore.ieee.org/document/8429311
- Revisiting Membership Inference Under Realistic Assumptions – Jayaraman et al. – https://arxiv.org/abs/2005.10881
- Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference – Leino et al. – https://par.nsf.gov/servlets/purl/10238792
- Systematic Evaluation of Privacy Risks of Machine Learning Models – Song and Mittal – https://www.usenix.org/conference/usenixsecurity21/presentation/song
- Understanding Membership Inferences on Well-Generalized Learning Models – Long et al. – https://arxiv.org/abs/1802.04889
- When Machine Unlearning Jeopardizes Privacy – Chen et al. – https://arxiv.org/abs/2005.02205
- White-box vs Black-box: Bayes Optimal Strategies for Membership Inference – Sablayrolles et al. – https://arxiv.org/abs/1908.11229
Federated Training
The federated training subcategory is divided into two groups: FedAvg and FedSGD.
FedAvg
- Beyond Model-Level Membership Privacy Leakage: an Adversarial Approach in Federated Learning – Chen et al. – https://hhannuaa.github.io/papers/icccn_chen_2020.pdf
- Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning – Nasr et al. – https://ieeexplore.ieee.org/document/8835245
- Digestive neural networks: A novel defense strategy against inference attacks in federated learning – Lee et al. – https://research-information.bris.ac.uk/ws/portalfiles/portal/308852605/Full_text_PDF_final_published_version_.pdf
- GAN Enhanced Membership Inference: A Passive Local Attack in Federated Learning – Zhang et al. – https://ieeexplore.ieee.org/document/9148790
- Source Inference Attacks in Federated Learning – Hu et al. – https://arxiv.org/abs/2109.05659
FedSGD
- Exploiting Unintended Feature Leakage in Collaborative Learning – Melis et al. – https://www.cs.cornell.edu/~shmat/shmat_oak19.pdf
Target Domain
The target domain category is divided into five subcategories: natural language processing (NLP), computer vision (CV), graph, audio, and recommender system.
Natural Language Processing (NLP)
The natural language processing (NLP) subcategory is divided into three groups: text classification, text generation, and word embedding.
Text Classification
- Exploiting Unintended Feature Leakage in Collaborative Learning – Melis et al. – https://www.cs.cornell.edu/~shmat/shmat_oak19.pdf
- On the privacy-utility trade-off in differentially private hierarchical text classification – Wunderlich et al. – https://arxiv.org/abs/2103.02895
- SocInf: Membership Inference Attacks on Social Media Health Data With Machine Learning – Liu et al. – https://ieeexplore.ieee.org/document/8728167
Text Generation
- Auditing Data Provenance in Text-Generation Models – Song and Shmatikov – https://arxiv.org/abs/1811.00513
- Membership Inference Attacks on Sequence-to-Sequence Models: Is My Data In Your Machine Translation System? – Hisamoto et al. – https://arxiv.org/abs/1904.05506
Word Embedding
- Extracting training data from large language models – Carlini et al. – https://arxiv.org/abs/2012.07805
- Information Leakage in Embedding Models – Song and Raghunathan – https://arxiv.org/abs/2004.00053
- Investigating the Impact of Pre-trained Word Embeddings on Memorization in Neural Networks – Thomas et al. – https://dl.acm.org/doi/10.1007/978-3-030-58323-1_30
- Membership Inference Attack Susceptibility of Clinical Language Models – Jagannatha et al. – https://arxiv.org/abs/2104.08305
- Membership Inference on Word Embedding and Beyond – Mahloujifar et al. – https://arxiv.org/abs/2106.11384
Computer Vision (CV)
The computer vision (CV) subcategory is divided into three groups: image classification, image generation, and image segmentation.
Image Classification
- A Pragmatic Approach to Membership Inferences on Machine Learning Models – Long et al. – https://experts.illinois.edu/en/publications/a-pragmatic-approach-to-membership-inferences-on-machine-learning
- Comprehensive Privacy Analysis of Deep Learning: Passive and Active White-box Inference Attacks against Centralized and Federated Learning – Nasr et al. – https://ieeexplore.ieee.org/document/8835245
- Demystifying Membership Inference Attacks in Machine Learning as a Service – Truex et al. – https://ieeexplore.ieee.org/document/8634878
- Effects of Differential Privacy and Data Skewness on Membership Inference Vulnerability – Truex et al. – https://arxiv.org/pdf/1911.09777
- Exploiting Unintended Feature Leakage in Collaborative Learning – Melis et al. – https://www.cs.cornell.edu/~shmat/shmat_oak19.pdf
- Label-Only Membership Inference Attacks – Choquette-Choo et al. – https://arxiv.org/abs/2007.14321
- Membership Inference Attack against Differentially Private Deep Learning Model – Rahman et al. – https://www.researchgate.net/publication/324980710_Membership_inference_attack_against_differentially_private_deep_learning_model
- Membership Inference Attacks against Machine Learning Models – Shokri et al. – https://arxiv.org/abs/1610.05820
- Membership Inference Attacks and Defenses in Classification Models – Li et al. – https://arxiv.org/abs/2002.12062
- Membership Leakage in Label-Only Exposures – Li and Zhang – https://arxiv.org/abs/2007.15528
- MemGuard: Defending against Black-Box Membership Inference Attacks via Adversarial Examples – Jia et al. – https://arxiv.org/abs/1909.10594
- ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine Learning Models – Liu et al. – https://arxiv.org/abs/2102.02551
- ML-Leaks: Model and Data Independent Membership Inference Attacks and Defenses on Machine Learning Models – Salem et al. – https://arxiv.org/abs/1806.01246
- On the Difficulty of Membership Inference Attacks – Rezaei and Liu – https://openaccess.thecvf.com/content/CVPR2021/html/Rezaei_On_the_Difficulty_of_Membership_Inference_Attacks_CVPR_2021_paper.html
- On the Effectiveness of Regularization Against Membership Inference Attacks – Kaya et al. – https://arxiv.org/abs/2006.05336
- On the Privacy Risks of Model Explanations – Shokri et al. – https://arxiv.org/abs/1907.00164
- Practical Blind Membership Inference Attack via Differential Comparisons – Hui et al. – https://arxiv.org/abs/2101.01341
- Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting – Yeom et al. – https://ieeexplore.ieee.org/document/8429311
- Revisiting Membership Inference Under Realistic Assumptions – Jayaraman et al. – https://arxiv.org/abs/2005.10881
- Sampling Attacks: Amplification of Membership Inference Attacks by Repeated Queries – Rahimian et al. – https://arxiv.org/abs/2009.00395
- Stolen Memories: Leveraging Model Memorization for Calibrated White-Box Membership Inference – Leino et al. – https://par.nsf.gov/servlets/purl/10238792
- Systematic Evaluation of Privacy Risks of Machine Learning Models – Song and Mittal – https://www.usenix.org/conference/usenixsecurity21/presentation/song
- Understanding Membership Inferences on Well-Generalized Learning Models – Long et al. – https://arxiv.org/abs/1802.04889
- When Machine Unlearning Jeopardizes Privacy – Chen et al. – https://arxiv.org/abs/2005.02205
- White-box vs Black-box: Bayes Optimal Strategies for Membership Inference – Sablayrolles et al. – https://arxiv.org/abs/1908.11229
Image Generation
- GAN-Leaks: A Taxonomy of Membership Inference Attacks against Generative Models – Chen et al. – https://arxiv.org/abs/1909.03935
- Generalization in Generative Adversarial Networks: A Novel Perspective from Privacy Protection – Wu et al. – https://arxiv.org/abs/1908.07882
- Membership inference attacks against generative models – Hayes et al. – https://arxiv.org/abs/1705.07663
- Monte Carlo and Reconstruction Membership Inference Attacks against Generative Models – Hilprecht et al. – https://petsymposium.org/popets/2019/popets-2019-0067.pdf
- Performing Co-Membership Attacks Against Deep Generative Models – Liu et al. – https://sites.rutgers.edu/jie-gao/wp-content/uploads/sites/375/2021/10/attack-GAN.pdf
- privGAN: Protecting GANs from membership inference attacks at low cost to utility – Mukherjee et al. – https://petsymposium.org/popets/2021/popets-2021-0041.pdf
Image Segmentation
- Membership Inference Attacks are Easier on Difficult Problems – Shafran et al. – https://arxiv.org/abs/2102.07762
- Segmentations-Leak: Membership Inference Attacks and Defenses in Semantic Image Segmentation – He et al. – https://arxiv.org/abs/1912.09685
Graph
The graph subcategory is divided into three groups: knowledge graphs, node classification, and graph classification.
Knowledge Graphs
- Membership Inference Attacks on Knowledge Graphs – Wang and Sun – https://arxiv.org/abs/2104.08273
Node Classification
- Quantifying Privacy Leakage in Graph Embedding – Duddu et al. – https://arxiv.org/abs/2010.00906
- Membership Inference Attack on Graph Neural Networks – Olatunji et al. – https://arxiv.org/abs/2101.06570
- Node-Level Membership Inference Attacks Against Graph Neural Networks – He et al. – https://arxiv.org/abs/2102.05429
Graph Classification
- Adapting Membership Inference Attacks to GNN for Graph Classification: Approaches and Implications – Wu et al. – https://arxiv.org/abs/2110.08760
Audio
The audio subcategory contains a single group: speech recognition.
Speech Recognition
- Evaluating the Vulnerability of End-to-End Automatic Speech Recognition Models To Membership Inference Attacks – Shah et al. – https://www.isca-archive.org/interspeech_2021/shah21_interspeech.pdf
- The Audio Auditor: User-Level Membership Inference in Internet of Things Voice Services – Miao et al. – https://arxiv.org/abs/1905.07082
Recommender System
The recommender system subcategory contains a single group: collaborative filtering.
Collaborative Filtering
- Membership Inference Attacks Against Recommender Systems – Zhang et al. – https://arxiv.org/abs/2109.08045
Thanks for reading!