The last couple of years have seen an explosion of research into jailbreaking attack methods, and jailbreaking has emerged as the primary attack vector for bypassing Large Language Model (LLM) safeguards. To date,…
A List Of AI Prompt Injection And Jailbreaking Attack Resources
Note that the resources below are listed in alphabetical order by title. Please let me know if there are any sources you would like to see added to this prompt injection and jailbreaking attack…