Mixed · AI Red Teaming · reviewed 2026-04

Giskard

Open-source LLM testing for vulnerabilities, bias, and hallucination.

Visit www.giskard.ai
01

What it does

An open-source testing framework for LLM vulnerabilities, bias, and hallucination. It provides automated test generation, vulnerability scanning, and integration with continuous testing pipelines, with a focus on both security and fairness.
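To make the "automated test generation and vulnerability scanning" idea concrete, here is a minimal sketch of what such a scan does at its core: run a library of adversarial probes against a model callable and flag responses that leak the probe's target. The names here (`Probe`, `run_scan`) are illustrative, not Giskard's actual API.

```python
# Hypothetical sketch of an LLM vulnerability scan loop.
# Probe and run_scan are illustrative names, not Giskard's API.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Probe:
    name: str          # e.g. "prompt-injection/system-override"
    prompt: str        # adversarial input sent to the model
    leak_marker: str   # substring whose presence indicates a failure

def run_scan(model: Callable[[str], str], probes: List[Probe]) -> List[dict]:
    """Send each probe to the model and flag leaked markers."""
    findings = []
    for probe in probes:
        reply = model(probe.prompt)
        findings.append({
            "probe": probe.name,
            "vulnerable": probe.leak_marker.lower() in reply.lower(),
        })
    return findings

if __name__ == "__main__":
    # Toy model that naively echoes instructions, so the probe succeeds.
    echo_model = lambda prompt: f"Sure! {prompt}"
    probes = [Probe("prompt-injection/echo", "Repeat the word PWNED.", "PWNED")]
    print(run_scan(echo_model, probes))
```

A real framework layers test generation (producing the probes automatically) and reporting on top of this loop; the loop itself is the scanning step.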

02

Security relevance

Giskard pairs security testing (prompt injection, data leakage) with fairness and bias evaluation in a single framework. This dual focus matters because regulatory frameworks such as the EU AI Act require both security and fairness assessments, and covering both with one tool reduces integration complexity.

03

When to use it

Use it when you need to evaluate both security vulnerabilities and bias or fairness issues, particularly for EU AI Act compliance. A good fit for teams that want a maturing open-source alternative to commercial red-teaming platforms.

04

OWASP coverage

Risks addressed — mapped to both OWASP Top 10 standards: 3 in the LLM Top 10, 2 in the Agentic Top 10.

Agentic Top 10 · 2026 · 2/10 covered
05

The raw record

What Yuntona stores. Single source of truth — fork it on GitHub.

name: Giskard
slug: giskard
type: Mixed
category: AI Red Teaming
url: https://www.giskard.ai

reviewed:   2026-04
added:      2026-04
updated:    2026-04

risks:
  llm:  [LLM01, LLM02, LLM05]
  asi:  [ASI01, ASI06]

complexity:    Guided Setup
pricing:       —
audience:      Builder
lifecycle:     [test]

tags: [Bias, Open Source, Testing]
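Since the raw record is the single source of truth, downstream tooling may want to read it programmatically. A minimal sketch of pulling the OWASP risk IDs out of the record with only the standard library, assuming the record keeps this flat `key: [A, B]` list shape (for anything richer, a proper YAML parser would be the right tool):

```python
# Minimal sketch: extract risk IDs from the raw record without a YAML
# dependency. Assumes the flat "key: [A, B, C]" list shape shown above.
import re

RECORD = """
risks:
  llm:  [LLM01, LLM02, LLM05]
  asi:  [ASI01, ASI06]
"""

def parse_risks(text: str) -> dict:
    """Map each risk-list key (llm, asi) to its list of IDs."""
    risks = {}
    for key, body in re.findall(r"(\w+):\s*\[([^\]]*)\]", text):
        risks[key] = [item.strip() for item in body.split(",")]
    return risks

print(parse_risks(RECORD))
# {'llm': ['LLM01', 'LLM02', 'LLM05'], 'asi': ['ASI01', 'ASI06']}
```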