Medusa
Open-source framework for offensive AI testing and jailbreaking.
What it does
An open-source offensive AI testing framework from Pantheon Security. Provides a library of jailbreaking techniques, prompt injection payloads, and adversarial attack patterns against LLMs.
Security relevance
Medusa aggregates known attack techniques into a reusable framework, making it easier to systematically test LLM defences. Covers jailbreak methods, role-playing attacks, and encoding-based bypasses that map directly to LLM01 and LLM02.
When to use it
Use when you need granular control over offensive testing and want to extend or customise attack patterns. Requires Python expertise and familiarity with adversarial ML concepts. Not a point-and-click tool — this is for hands-on red teamers.
OWASP coverage
Risks addressed — mapped to both OWASP Top 10 standards. 2 in LLM, 1 in Agentic.
The raw record
What Yuntona stores. Single source of truth — fork it on GitHub.
name: Medusa slug: medusa type: Mixed category: AI Red Teaming url: https://github.com/Pantheon-Security/medusa reviewed: 2026-04 added: 2026-04 updated: 2026-04 risks: llm: [LLM01, LLM02] asi: [ASI01] complexity: Expert Required pricing: — audience: Red Team lifecycle: [test] tags: [GitHub, Jailbreak, Open Source]