Because of the AI industry’s heavy reliance on open-source components, vulnerabilities in widely used libraries, frameworks, or models can have cascading effects across thousands of systems and organizations – compromises in popular open-source…
Author: Brian Colwell
The Open-Source Revolution In AI Development: A Supply Chain Problem
The open-source revolution in AI development has enabled researchers, developers, and organizations to collaborate frictionlessly across the world and build upon one another’s work in real time, which has accelerated…
What Is AI Training Data Extraction? A Combination Of Techniques
Training data extraction attacks represent a significant security vulnerability in machine learning systems: they effectively transform AI models into unintended data storage mechanisms in which sensitive information becomes inadvertently accessible to attackers, creating…
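To make the idea concrete, here is a minimal sketch of a verbatim-memorization probe, not any particular attack from the literature: prompt a causal language model with a known prefix and check whether greedy decoding reproduces a candidate secret. The model name, prefix, and secret string are all hypothetical placeholders.

```python
# Minimal verbatim-memorization probe against a causal LM (Hugging Face
# transformers). The model name, prefix, and candidate secret are
# illustrative assumptions, not values from the article.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"                        # placeholder model under test
PREFIX = "Patient record: John Doe, SSN "  # hypothetical training prefix
CANDIDATE = "078-05-1120"                  # hypothetical secret to test for

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)

# Memorized sequences tend to surface under greedy (low-entropy) decoding,
# because the model assigns them unusually high likelihood.
inputs = tokenizer(PREFIX, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20, do_sample=False)
continuation = tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:])

if CANDIDATE in continuation:
    print("Candidate emitted verbatim -> likely memorized training data.")
else:
    print("No verbatim reproduction under greedy decoding.")
```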
The 2024-2025 MIA Landscape Reveals Relentless Evolution In Membership Inference Attack Sophistication
Membership Inference Attacks (MIAs) were first identified in genomics by Homer et al. (2008) and later formalized for machine learning by Shokri et al. (2017). Since Shokri et al.’s landmark demonstration…
Membership Inference Attacks Leverage AI Model Behaviors
Not only are membership inference attacks practical, cost-effective, and widely applicable in real-world scenarios, but recent advances in generative AI, particularly Large Language Models (LLMs), also create novel challenges for membership privacy that…
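To illustrate the basic mechanism, below is a minimal sketch of the classic loss-threshold membership test (in the spirit of Yeom et al., 2018), rather than any specific attack surveyed here: examples on which the target model’s loss is unusually low are flagged as likely training members. The model, data, and threshold are assumptions to be supplied by the reader.

```python
# Minimal loss-threshold membership inference sketch (PyTorch).
# `model` is any trained classifier; `threshold` would normally be
# calibrated on data known to be outside the training set.
import torch
import torch.nn.functional as F

def membership_score(model: torch.nn.Module,
                     x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    """Per-example cross-entropy loss; lower loss suggests membership."""
    model.eval()
    with torch.no_grad():
        return F.cross_entropy(model(x), y, reduction="none")

def infer_membership(model, x, y, threshold: float) -> torch.Tensor:
    """Flag examples whose loss falls below the calibrated threshold."""
    return membership_score(model, x, y) < threshold
```

Stronger attacks replace the raw loss with shadow-model or likelihood-ratio statistics, but the member/non-member behavioral gap they exploit is the same.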
A Brief Taxonomy Of AI Membership Inference Attacks
In the taxonomy below, membership inference attacks are categorized by target model, adversarial knowledge, attack approach, training method, and target domain. Target Model: The target model category of this membership inference attack…
A Brief Taxonomy Of AI Membership Inference Defenses
In the taxonomy below, membership inference defenses are categorized as confidence masking, regularization, differential privacy, or knowledge distillation. Confidence Masking: Confidence masking in machine learning is a technique where predictions with low…
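As a concrete illustration of one common variant of this defense, the sketch below truncates the returned prediction vector to the top-k classes and rounds the probabilities; the choices of k and rounding precision are illustrative assumptions.

```python
# Confidence-masking sketch: expose only the top-k classes with rounded
# probabilities, coarsening the signal a membership attacker can exploit.
import numpy as np

def mask_confidences(probs: np.ndarray, k: int = 3, decimals: int = 1) -> dict:
    top_k = np.argsort(probs)[::-1][:k]   # indices of the k largest scores
    return {int(i): round(float(probs[i]), decimals) for i in top_k}

probs = np.array([0.01, 0.02, 0.10, 0.03, 0.04, 0.02, 0.01, 0.72, 0.03, 0.02])
print(mask_confidences(probs))  # {7: 0.7, 2: 0.1, 4: 0.0}
```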
The Bitter Reality Of AI Backdoor Attacks
In the rapidly evolving landscape of artificial intelligence, a silent threat lurks beneath the surface of seemingly trustworthy models: backdoor attacks. At its core, a backdoor attack is a method of compromising…
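For a concrete picture of the training-time mechanics, here is a minimal BadNets-style poisoning sketch, assuming image data as NumPy arrays and a hypothetical attacker-chosen target class: a small pixel patch is stamped onto a fraction of training images and their labels are flipped, so the trained model learns to associate the patch with the target class while behaving normally on clean inputs.

```python
# BadNets-style backdoor poisoning sketch (NumPy).
# Assumptions: images of shape (N, H, W) with values in [0, 1], integer
# labels, and an illustrative target class / poisoning rate.
import numpy as np

def poison(images: np.ndarray, labels: np.ndarray,
           target_label: int = 0, rate: float = 0.05) -> tuple:
    """Stamp a 3x3 trigger in the corner of a random subset of images and
    relabel them, binding the trigger to `target_label` during training."""
    images, labels = images.copy(), labels.copy()
    idx = np.random.choice(len(images), int(len(images) * rate), replace=False)
    images[idx, -3:, -3:] = 1.0     # bottom-right white-patch trigger
    labels[idx] = target_label      # flip labels to the attacker's class
    return images, labels
```

At inference time, any input carrying the same patch is steered toward the target class, which is what makes the compromise so hard to spot from clean-data accuracy alone.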
A Brief Introduction To AI Data Poisoning
As machine learning systems have become integrated into safety- and security-sensitive applications at an accelerating pace, the responsible deployment of language models has increasingly presented complex challenges that extend beyond technical implementation: not…
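In its simplest form, data poisoning needs no trigger at all; the sketch below shows plain label flipping, where an attacker who controls part of the labeling pipeline mislabels a fraction of one class as another to degrade the model. The class pair and poisoning rate are illustrative assumptions.

```python
# Label-flipping data-poisoning sketch (NumPy): mislabel a fraction of
# class `src` as class `dst`. Class ids and rate are placeholders.
import numpy as np

def flip_labels(labels: np.ndarray, src: int = 1, dst: int = 7,
                rate: float = 0.1, seed: int = 0) -> np.ndarray:
    rng = np.random.default_rng(seed)
    labels = labels.copy()
    src_idx = np.flatnonzero(labels == src)
    chosen = rng.choice(src_idx, size=int(len(src_idx) * rate), replace=False)
    labels[chosen] = dst  # corrupted labels degrade the learned boundary
    return labels
```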
A History Of Clean-Label AI Data Poisoning Backdoor Attacks
With significant advances in stealth and effectiveness across diverse domains in just seven years, the field of clean-label AI data poisoning has quickly evolved from the first major clean-label attack framework…