Giskard
Open-source LLM testing for vulnerabilities, bias, and hallucination.
What it does
An open-source framework for testing LLMs against vulnerabilities, bias, and hallucination. It provides automated test generation, vulnerability scanning, and continuous-testing integration, covering both security and fairness.
Security relevance
Giskard combines security testing (prompt injection, data leakage) with fairness and bias evaluation in a single tool. This dual focus matters because regulatory frameworks like the EU AI Act require both security and fairness assessments, and one tool covering both reduces integration complexity.
When to use it
Use it when you need to evaluate both security vulnerabilities and bias/fairness issues, particularly for EU AI Act compliance. A good fit for teams that want a growing open-source alternative to commercial red teaming platforms.
OWASP coverage
Risks addressed, mapped to both OWASP Top 10 standards: three in the LLM Top 10 (LLM01, LLM02, LLM05) and two in the Agentic Top 10 (ASI01, ASI06).
The raw record
What Yuntona stores. Single source of truth — fork it on GitHub.
name: Giskard
slug: giskard
type: Mixed
category: AI Red Teaming
url: https://www.giskard.ai
reviewed: 2026-04
added: 2026-04
updated: 2026-04
risks:
  llm: [LLM01, LLM02, LLM05]
  asi: [ASI01, ASI06]
complexity: Guided Setup
pricing: —
audience: Builder
lifecycle: [test]
tags: [Bias, Open Source, Testing]
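The risk mapping in the record above can be sanity-checked programmatically. A minimal sketch, assuming the record has been parsed into a Python dict; the field names mirror the raw record, but the ID sets and the `invalid_risk_ids` helper are illustrative assumptions, not part of the Yuntona schema:

```python
# Hypothetical check: verify that the risk IDs stored in a record
# are well-formed OWASP Top 10 identifiers (LLM01..LLM10 for the
# LLM Top 10, ASI01..ASI10 for the Agentic Top 10).
LLM_TOP10 = {f"LLM{i:02d}" for i in range(1, 11)}
AGENTIC_TOP10 = {f"ASI{i:02d}" for i in range(1, 11)}

# Subset of the raw record above, as a parsed dict.
record = {
    "name": "Giskard",
    "slug": "giskard",
    "risks": {
        "llm": ["LLM01", "LLM02", "LLM05"],
        "asi": ["ASI01", "ASI06"],
    },
}

def invalid_risk_ids(rec):
    """Return any risk IDs that are not valid Top 10 entries."""
    bad = [r for r in rec["risks"]["llm"] if r not in LLM_TOP10]
    bad += [r for r in rec["risks"]["asi"] if r not in AGENTIC_TOP10]
    return bad

print(invalid_risk_ids(record))  # → []
```

A forked copy of the record can run this check in CI so that edits to the risk lists are caught before merge.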