Note that the resources below are listed in alphabetical order by title. Enjoy!
- A Brief Introduction To AI Model Inversion Attacks – https://cryptopunk4762.com/f/a-brief-introduction-to-ai-model-inversion-attacks
- A Brief Introduction To AI Model Inversion Technical Defenses – https://cryptopunk4762.com/f/a-brief-introduction-to-ai-model-inversion-technical-defenses
- A curated list of resources for model inversion attack (MIA) – https://github.com/AndrewZhou924/Awesome-model-inversion-attack
- A GAN-Based Defense Framework Against Model Inversion Attacks – https://ieeexplore.ieee.org/document/10184476
- A Methodology for Formalizing Model-Inversion Attacks – https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=7536387
- A Review of Confidentiality Threats Against Embedded Neural Network Models – https://arxiv.org/pdf/2105.01401
- A Survey of Privacy Attacks in Machine Learning – https://arxiv.org/pdf/2007.07646
- A Survey on Gradient Inversion: Attacks, Defenses and Future Directions – https://arxiv.org/pdf/2206.07284
- Algorithms that Remember: Model Inversion Attacks and Data Protection Law – https://arxiv.org/abs/1807.04644
- An Attack-Based Evaluation Method for Differentially Private Learning Against Model Inversion Attack – https://ieeexplore.ieee.org/document/8822435
- Are Large Pre-Trained Language Models Leaking Your Personal Information? – https://aclanthology.org/2022.findings-emnlp.148.pdf
- Attacks against Machine Learning Privacy (Part 1): Model Inversion Attacks with the IBM-ART Framework – https://franziska-boenisch.de/posts/2020/12/model-inversion/
- Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks – https://arxiv.org/pdf/2310.06549
- Bilateral Dependency Optimization: Defending Against Model-inversion Attacks – https://arxiv.org/pdf/2206.05483
- Black-Box Face Recovery from Identity Features – https://arxiv.org/pdf/2007.13635
- Boosting Model Inversion Attacks with Adversarial Examples – https://arxiv.org/abs/2306.13965
- Breaching FedMD: Image Recovery via Paired-Logits Inversion Attack – https://arxiv.org/pdf/2304.11436
- Breaking the Black-Box: Confidence-Guided Model Inversion Attack for Distribution Shift – https://arxiv.org/html/2402.18027v1
- Broadening Differential Privacy for Deep Learning Against Model Inversion Attacks – https://ieeexplore.ieee.org/document/9378274
- C2FMI: Corse-to-Fine Black-Box Model Inversion Attack – https://ieeexplore.ieee.org/document/10148574
- Deep Face Recognizer Privacy Attack: Model Inversion Initialization by a Deep Generative Adversarial Data Space Discriminator – https://ieeexplore.ieee.org/document/9306253
- Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps – https://arxiv.org/pdf/1312.6034
- Deep Learning Model Inversion Attacks and Defenses: A Comprehensive Survey – https://arxiv.org/abs/2501.18934
- Defending Model Inversion and Membership Inference Attacks via Prediction Purification – https://arxiv.org/pdf/2005.03915
- Evaluating Gradient Inversion Attacks and Defenses in Federated Learning – https://arxiv.org/abs/2112.00059
- Evaluation Indicator for Model Inversion Attack – https://drive.google.com/file/d/1rl77BGtGHzZ8obWUEOoqunXCjgvpzE8d/view
- Exploiting Explanations for Model Inversion Attacks – https://openaccess.thecvf.com/content/ICCV2021/papers/Zhao_Exploiting_Explanations_for_Model_Inversion_Attacks_ICCV_2021_paper.pdf
- Exploring Model Inversion Attacks in the Black-box Setting – https://petsymposium.org/popets/2023/popets-2023-0012.php
- Exploring Privacy-Preserving Techniques on Synthetic Data as a Defense Against Model Inversion Attacks – https://link.springer.com/chapter/10.1007/978-3-031-49187-0_1
- Extracting Prompts by Inverting LLM Outputs – https://arxiv.org/pdf/2405.15012
- Finding MNEMON: Reviving Memories of Node Embeddings – https://arxiv.org/pdf/2204.06963
- GAMIN: An Adversarial Approach to Black-Box Model Inversion – https://arxiv.org/pdf/1909.11835
- How Model Inversion Attacks Compromise AI Systems – https://securing.ai/ai-security/model-inversion/
- Improved Techniques for Model Inversion Attack – https://www.researchgate.net/publication/344552055_Improved_Techniques_for_Model_Inversion_Attack
- Improving Robustness to Model Inversion Attacks via Mutual Information Regularization – https://arxiv.org/pdf/2009.05241
- Information Leakage in Embedding Models – https://arxiv.org/pdf/2004.00053
- Introduction To AI Model Inversion Risk Mitigation Best Practices – https://cryptopunk4762.com/f/introduction-to-ai-model-inversion-risk-mitigation-best-practices
- Inverting Gradients – How easy is it to break privacy in federated learning? – https://proceedings.neurips.cc/paper/2020/hash/c4ede56bbd98819ae6112b20ac6bf145-Abstract.html
- Inverting Visual Representations with Convolutional Networks – https://arxiv.org/pdf/1506.02753
- KART: Parameterization of Privacy Leakage Scenarios from Pre-trained Language Models – https://arxiv.org/pdf/2101.00036v1
- Knowledge-Enriched Distributional Model Inversion Attacks – https://arxiv.org/pdf/2010.04092
- Label-Only Model Inversion Attacks via Boundary Repulsion – https://arxiv.org/pdf/2203.01925
- Language Model Inversion – https://arxiv.org/abs/2311.13647
- Machine Learning Models that Remember Too Much – https://arxiv.org/pdf/1709.07886
- MIRROR: Model Inversion for Deep Learning Network with High Fidelity – https://www.cs.purdue.edu/homes/an93/static/papers/ndss2022_model_inversion.pdf
- Model Inversion: The Essential Guide – https://www.nightfall.ai/ai-security-101/model-inversion
- Model inversion and membership inference: Understanding new AI security risks and mitigating vulnerabilities – https://www.hoganlovells.com/en/publications/model-inversion-and-membership-inference-understanding-new-ai-security-risks-and-mitigating-vulnerabilities
- Model Inversion Attack by Integration of Deep Generative Models: Privacy-Sensitive Face Generation From a Face Recognition System – https://dl.acm.org/doi/abs/10.1109/TIFS.2022.3140687
- Model Inversion Attack against a Face Recognition System in a Black-Box Setting – http://www.apsipa.org/proceedings/2021/pdfs/0001800.pdf
- Model Inversion Attack with Least Information and an In-depth Analysis of its Disparate Vulnerability – https://openreview.net/pdf?id=x42Lo6Mkcrf
- Model Inversion Attacks against Collaborative Inference – https://dl.acm.org/doi/10.1145/3359789.3359824 (free PDF: http://palms.ee.princeton.edu/system/files/Model+Inversion+Attack+against+Collaborative+Inference.pdf)
- Model Inversion Attacks against Graph Neural Networks – https://arxiv.org/pdf/2209.07807
- Model Inversion Attacks: A Growing Threat to AI Security – https://www.tillion.ai/blog/model-inversion-attacks-a-growing-threat-to-ai-security
- Model Inversion Attacks: A Survey of Approaches and Countermeasures – https://arxiv.org/html/2411.10023v1
- Model Inversion Attacks And The Board – https://www.linkedin.com/pulse/model-inversion-attacks-board-dr-sunando-roy-rjsof
- Model Inversion Attacks for Prediction Systems: Without Knowledge of Non-Sensitive Attributes – https://ieeexplore.ieee.org/document/8476925
- Model Inversion Attacks on Homogeneous and Heterogeneous Graph Neural Networks – https://arxiv.org/pdf/2310.09800
- Model Inversion Attacks that Exploit Confidence Information and Basic Countermeasures – https://dl.acm.org/doi/10.1145/2810103.2813677
- Model Inversion Robustness: Can Transfer Learning Help? – https://openaccess.thecvf.com/content/CVPR2024/papers/Ho_Model_Inversion_Robustness_Can_Transfer_Learning_Help_CVPR_2024_paper.pdf
- Neural Network Inversion in Adversarial Setting via Background Knowledge Alignment – https://dl.acm.org/doi/abs/10.1145/3319535.3354261
- Overlearning Reveals Sensitive Attributes – https://arxiv.org/pdf/1905.11742
- Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks – https://arxiv.org/pdf/2201.12179
- Plug-In Inversion: Model-Agnostic Inversion for Vision with Data Augmentations – https://proceedings.mlr.press/v162/ghiasi22a/ghiasi22a.pdf
- Practical Black Box Model Inversion Attacks Against Neural Nets – https://link.springer.com/chapter/10.1007/978-3-030-93733-1_3
- Practical Defences Against Model Inversion Attacks for Split Neural Networks – https://arxiv.org/pdf/2104.05743
- PRID: Model Inversion Privacy Attacks in Hyperdimensional Learning Systems – https://dl.acm.org/doi/abs/10.1109/DAC18074.2021.9586217
- Privacy and Security Issues in Deep Learning: A Survey – https://ieeexplore.ieee.org/abstract/document/9294026
- Privacy in Pharmacogenetics: An End-to-End Case Study of Personalized Warfarin Dosing – https://www.usenix.org/system/files/conference/usenixsecurity14/sec14-paper-fredrikson-privacy.pdf
- Privacy Preserving Facial Recognition Against Model Inversion Attacks – https://ieeexplore.ieee.org/document/9322508
- Privacy Risk in Machine Learning: Analyzing the Connection to Overfitting – https://ieeexplore.ieee.org/stamp/stamp.jsp?arnumber=8429311
- Privacy Risks of General-Purpose Language Models – https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=9152761
- Privacy Vulnerability of Split Computing to Data-Free Model Inversion Attacks – https://arxiv.org/abs/2107.06304
- Pseudo Label-Guided Model Inversion Attack via Conditional Generative Adversarial Network – https://arxiv.org/pdf/2302.09814
- Reconstructing Training Data from Diverse ML Models by Ensemble Inversion – https://arxiv.org/pdf/2111.03702
- Reducing Risk of Model Inversion Using Privacy-Guided Training – https://arxiv.org/pdf/2006.15877
- Reinforcement Learning-Based Black-Box Model Inversion Attacks – https://arxiv.org/pdf/2304.04625
- ResSFL: A Resistance Transfer Framework for Defending Model Inversion Attack in Split Federated Learning – https://openaccess.thecvf.com/content/CVPR2022/html/Li_ResSFL_A_Resistance_Transfer_Framework_for_Defending_Model_Inversion_Attack_CVPR_2022_paper.html
- Re-thinking Model Inversion Attacks Against Deep Neural Networks – https://arxiv.org/pdf/2304.01669
- Robust or Private? (Model Inversion Part II) – https://gab41.lab41.org/robust-or-private-model-inversion-part-ii-94d54fd8d4a5
- Sentence Embedding Leaks More Information than You Expect: Generative Embedding Inversion Attack to Recover the Whole Sentence – https://arxiv.org/pdf/2305.03010
- SoK: Model Inversion Attack Landscape: Taxonomy, Challenges, and Future Roadmap – https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=10221914
- Sparse Black-Box Inversion Attack with Limited Information – https://ieeexplore.ieee.org/document/10095514
- Text Embedding Inversion Security for Multilingual Language Models – https://arxiv.org/abs/2401.12192
- Text Revealer: Private Text Reconstruction via Model Inversion Attacks against Transformers – https://arxiv.org/pdf/2209.10505
- The Secret Revealer: Generative Model-Inversion Attacks Against Deep Neural Networks – https://arxiv.org/abs/1911.07135
- The Secret Sharer: Evaluating and Testing Unintended Memorization in Neural Networks – https://www.usenix.org/system/files/sec19-carlini.pdf
- Threats to the Model: Model Inversion – https://www.ituonline.com/comptia-securityx/comptia-securityx-1/threats-to-the-model-model-inversion/
- Transferable Embedding Inversion Attack: Uncovering Privacy Risks in Text Embeddings without Model Queries – https://aclanthology.org/2024.acl-long.230/
- Uncovering a model’s secrets (Model Inversion Part I) – https://gab41.lab41.org/uncovering-a-models-secrets-model-inversion-part-i-ce460eab93d6
- Understanding Deep Image Representations by Inverting Them – https://openaccess.thecvf.com/content_cvpr_2015/papers/Mahendran_Understanding_Deep_Image_2015_CVPR_paper.pdf
- UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning – https://arxiv.org/pdf/2108.09033
- Unstoppable Attack: Label-Only Model Inversion via Conditional Diffusion Model – https://arxiv.org/pdf/2307.08424
- Variational Model Inversion Attacks – https://proceedings.neurips.cc/paper/2021/file/50a074e6a8da4662ae0a29edde722179-Paper.pdf
Thanks for reading!