Brian D. Colwell


Category: Prompt Injection & Jailbreaking

A History Of AI Jailbreaking Attacks

Posted on June 7, 2025 by Brian Colwell

The last couple of years have seen an explosion of research into jailbreaking attack methods, and jailbreaking has emerged as the primary attack vector for bypassing Large Language Model (LLM) safeguards. To date,…

A List Of AI Prompt Injection And Jailbreaking Attack Resources

Posted on June 7, 2025 by Brian Colwell

Note that the resources below are listed in alphabetical order by title. Please let me know if there are any sources you would like to see added to this prompt injection and jailbreaking attack…

Browse Topics

  • Artificial Intelligence
    • Adversarial Attacks & Examples
    • Alignment & Ethics
    • Backdoor & Trojan Attacks
    • Federated Learning
    • Model Extraction
    • Prompt Injection & Jailbreaking
    • Watermarking
  • Biotech & Agtech
  • Commodities
    • Agricultural
    • Energies & Energy Metals
    • Gases
    • Gold
    • Industrial Metals
    • Minerals & Metalloids
  • Economics
  • Management
  • Marketing
  • Philosophy
  • Robotics
  • Sociology
    • Group Dynamics
    • Political Science
    • Religious Sociology
    • Sociological Theory
  • Web3 Studies
    • Bitcoin & Cryptocurrencies
    • Blockchain & Cryptography
    • DAOs & Decentralized Organizations
    • NFTs & Digital Identity

Recent Posts

  • A History Of AI Jailbreaking Attacks
    June 7, 2025
  • What Is AutoAttack? Evaluating Adversarial Robustness
    June 7, 2025
  • Introduction To Adversarial Attacks: Typology And Definitions
    June 7, 2025
©2025 Brian D. Colwell