COMPL-AI
Open-source EU AI Act compliance evaluation framework for LLMs. Technical interpretation of regulatory requirements mapped to benchmarks. By ETH Zurich, INSAIT, and LatticeFlow AI.
What it does
An open-source, compliance-centered evaluation framework that provides the first technical interpretation of the EU AI Act, translating broad regulatory requirements into measurable technical requirements for LLMs. Covers six core principles: Human Agency & Oversight, Technical Robustness & Safety, Privacy & Data Governance, Transparency, Diversity & Fairness, and Societal & Environmental Well-being. Built on the UK AI Safety Institute's Inspect framework, it supports API, cloud, and local model providers.
Security relevance
Evaluates LLMs against EU AI Act compliance benchmarks covering robustness, safety, bias, fairness, and transparency. The renewed benchmarking suite addresses saturation and contamination in frontier model evaluation. Support for the EU AI Act Code of Practice Safety & Security chapter is also being added.
When to use it
Use when evaluating LLMs for EU AI Act compliance or when benchmarking foundation models against regulatory safety and fairness requirements.
OWASP coverage
Risks addressed, mapped to both OWASP Top 10 standards (LLM and Agentic): currently 0 in the LLM Top 10 and 0 in the Agentic Top 10.
The raw record
What Yuntona stores. Single source of truth — fork it on GitHub.
name: COMPL-AI
slug: compl-ai
type: Generative
category: Compliance Automation
url: https://compl-ai.org
reviewed: 2026-04
added: 2026-04
updated: 2026-04
risks:
  llm: []
  asi: []
complexity: Plug & Play
pricing: —
audience: GRC · MLEng
lifecycle: [test]
tags: [Academic, Benchmarking, Compliance, EU AI Act, Open Source]
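For consumers of the raw record, a minimal Python sketch of what working with it might look like: the record expressed as a plain dict, plus a small validation pass. The field names mirror the record above, but the REQUIRED set and the validation rules are illustrative assumptions, not Yuntona's actual schema.

```python
# The record above as a Python dict. Field names follow the raw record;
# list-valued fields (audience, lifecycle, tags) are kept as lists.
record = {
    "name": "COMPL-AI",
    "slug": "compl-ai",
    "type": "Generative",
    "category": "Compliance Automation",
    "url": "https://compl-ai.org",
    "reviewed": "2026-04",
    "added": "2026-04",
    "updated": "2026-04",
    "risks": {"llm": [], "asi": []},  # empty: no OWASP risks mapped yet
    "complexity": "Plug & Play",
    "pricing": None,
    "audience": ["GRC", "MLEng"],
    "lifecycle": ["test"],
    "tags": ["Academic", "Benchmarking", "Compliance", "EU AI Act", "Open Source"],
}

# Assumed required fields -- a hypothetical subset, not the official schema.
REQUIRED = {"name", "slug", "type", "category", "url", "risks", "lifecycle", "tags"}

def validate(rec: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = [f"missing field: {k}" for k in sorted(REQUIRED - rec.keys())]
    if not (rec.get("url") or "").startswith("https://"):
        problems.append("url must be https")
    for bucket in ("llm", "asi"):  # both OWASP Top 10 mappings must exist
        if bucket not in rec.get("risks", {}):
            problems.append(f"risks.{bucket} missing")
    return problems

print(validate(record))  # → []
```

A check like this could run in CI on each fork before a pull request, so malformed records are caught before they reach the directory.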