Note that the following articles are listed in chronological order. Enjoy, and thanks for reading!
Category: Artificial Intelligence
AI Supply Chain Attacks Are A Pervasive Threat
That artificial intelligence tools, especially LLMs and generative systems, are transforming industries is obvious. What isn’t obvious to most is the level of risk in integrating these tools into critical business management…
Briefly On AI Supply Chain Attack Risk Mitigation
Without a doubt, modern AI supply chains present a complex, difficult-to-assess threat landscape, and many organizations have implicit dependencies on numerous external entities that they neither fully document nor understand. As the…
Supply Chain Threats Exist In The Anatomy Of The AI Data Pipeline
AI data pipelines are the critical pathways through which information flows into AI systems, transforming raw data from a variety of sources into the structured inputs that power machine learning models. These…
Social Engineering Attacks In AI Supply Chains Expose Critical Vulnerabilities
The AI ecosystem faces an escalating threat from sophisticated social engineering attacks that exploit both human psychology and technical vulnerabilities by targeting the collaborative nature of AI development, where trust relationships…
What Exploitable Vulnerabilities Exist In The Open-Source AI Supply Chain?
Because of the AI industry’s heavy reliance on open-source components, vulnerabilities in widely-used libraries, frameworks, or models can have cascading effects across thousands of systems and organizations – compromises in popular open-source…
The Open-Source Revolution In AI Development: A Supply Chain Problem
The open-source revolution in AI development has created the ability for researchers, developers, and organizations to collaborate frictionlessly across the world and build upon one another’s work in real time, which has accelerated…
What Is AI Training Data Extraction? A Combination Of Techniques
A significant security vulnerability in machine learning systems, training data extraction attacks effectively transform AI models into unintended data storage mechanisms – where sensitive information becomes inadvertently accessible to attackers – creating…
The 2024-2025 MIA Landscape Reveals Relentless Evolution In Membership Inference Attack Sophistication
Membership Inference Attacks (MIAs) were first identified in genomics by Homer et al. (2008), and later formalized for machine learning by Shokri et al. (2017). Since Shokri et al.’s landmark demonstration…
Membership Inference Attacks Leverage AI Model Behaviors
Not only are membership inference attacks practical, cost-effective, and widely applicable in real-world scenarios, but recent advances in generative AI, particularly Large Language Models (LLMs), create novel challenges for membership privacy that…