The Big List Of AI Prompt Injection References And Resources
Introduction This curated collection of references and resources serves as a…
A History Of AI Jailbreaking Attacks
Introduction The last couple of years have seen an explosion in research…
What Is AutoAttack? Evaluating Adversarial Robustness
Introduction AutoAttack has become the de facto standard for adversarial robustness…
What Are The Adversarial Attacks That Create Adversarial Examples? Typology And Definitions
Introduction Adversarial Examples exploit vulnerabilities in machine learning systems by leveraging…
Adversarial Examples In Model Extraction
Introduction While primarily known for their use in evasion attacks (causing…
Backdoor Attacks – The Problem Has Outpaced The Solution
The concept of the backdoor, or “trojan”, AI attack was first…
Gradient And Update Leakage (GAUL) In Federated Learning
Introduction Gradient and Update Leakage attacks intercept and analyze gradient updates…
An Introduction To AI Model Extraction
Introduction AI model extraction refers to an attack method where an…
What Are The Types Of AI Model Extraction Attacks?
Introduction Model Extraction Attacks aim to steal model architecture, training hyperparameters, and learned…
What Is Alignment-Aware Extraction?
Introduction Alignment-Aware Extraction goes beyond conventional extraction methods by strategically capturing both the…