AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure.
-
Updated
Oct 7, 2025 - TypeScript
AI Red Teaming playground labs to run AI Red Teaming trainings including infrastructure.
A security scanner for your LLM agentic workflows
Whistleblower is a offensive security tool for testing against system prompt leakage and capability discovery of an AI application exposed through API. Built for AI engineers, security researchers and folks who want to know what's going on inside the LLM-based app they use daily
AspGoat is an intentionally vulnerable ASP.NET Core application for learning and practicing web application security.
Code scanner to check for issues in prompts and LLM calls
Open-source LLM Prompt-Injection and Jailbreaking Playground
AI security and prompt injection payload toolkit
Comprehensive taxonomy of AI security vulnerabilities, LLM adversarial attacks, prompt injection techniques, and machine learning security research. Covers 71+ attack vectors including model poisoning, agentic AI exploits, and privacy breaches.
A comprehensive guide to adversarial testing and security evaluation of AI systems, helping organizations identify vulnerabilities before attackers exploit them.
Hackaprompt v1.0 AIRT Agents
Projet issu du codelab Devfest Nantes 2025 “La guerre des prompts” : atelier de 2h pour apprendre à pirater des IA et comment les protéger via des frameworks open source
Spice up your burning op with AI
Run Repello Artemis security scans on your AI assets.
A collection of resources documenting my research and learning journey in AI System Security.
Add a description, image, and links to the ai-red-teaming topic page so that developers can more easily learn about it.
To associate your repository with the ai-red-teaming topic, visit your repo's landing page and select "manage topics."