ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
-
Updated
Feb 13, 2025
ChatGPT Jailbreaks, GPT Assistants Prompt Leaks, GPTs Prompt Injection, LLM Prompt Security, Super Prompts, Prompt Hack, Prompt Security, Ai Prompt Engineering, Adversarial Machine Learning.
The fastest && easiest LLM security guardrails for CX AI Agents and applications.
MINOTAUR: The STRONGEST Secure Prompt EVER! Prompt Security Challenge, Impossible GPT Security, Prompts Cybersecurity, Prompting Vulnerabilities, FlowGPT, Secure Prompting, Secure LLMs, Prompt Hacker, Cutting-edge Ai Security, Unbreakable GPT Agent, Anti GPT Leak, System Prompt Security.
Bullet-proof your custom GPT system prompt security with KEVLAR, the ultimate prompt protector against rules extraction, prompt injections, and leaks of AI agent secret instructions.
🚀 Unofficial Node.js SDK for Prompt Security's Protection API.
The LLM guardian kernel
This github repository features a variety of unique prompts to jailbreak ChatGPT, and other AI to go against OpenAI policy. Please read the notice at the bottom of the README.md file for more information.
Add a description, image, and links to the prompt-security topic page so that developers can more easily learn about it.
To associate your repository with the prompt-security topic, visit your repo's landing page and select "manage topics."