Mixed · Education & Research · reviewed 2026-04

CoSAI (Coalition for Secure AI)

Industry coalition producing open-source AI security frameworks — Risk Map, Agentic Principles, AI Incident Response, Model Signing, and CodeGuard.

Visit coalitionforsecureai.org
01

What it does

The Coalition for Secure AI is an OASIS Open Project that brings together 45+ organisations (Google, Microsoft, IBM, NVIDIA, Anthropic, Palo Alto Networks, Amazon, OpenAI, Cisco, Wiz) to produce open-source AI security frameworks and tools. Key outputs: the CoSAI Risk Map (donated from Google's Secure AI Framework, SAIF), the Principles for Secure-by-Design Agentic Systems, the AI Incident Response Framework, ML Artifact Signing guidance, and Project CodeGuard (secure-by-default rules for AI coding agents). Four active workstreams cover supply chain security, defender frameworks, AI security governance, and agentic system security.

02

Security relevance

CoSAI is becoming the industry standard-setter for AI security, positioned much as OWASP was for web security. The Risk Map provides a shared taxonomy of AI security risks; the Agentic Systems Principles establish baseline expectations for autonomous AI; and the Incident Response Framework (on GitHub at cosai-oasis/ws2-defenders) provides actionable playbooks for when AI systems are compromised. All outputs are open source and free to use.

03

When to use it

Reference when building AI security programmes, writing policies, or establishing governance frameworks. CoSAI outputs carry weight with auditors and regulators due to the breadth of industry backing. Technical participation is free and open to all developers — contribute via GitHub. Subscribe to mailing lists for updates on new framework releases.

04

OWASP coverage

Risks addressed, mapped to both OWASP Top 10 standards: 6 from the LLM Top 10 and 6 from the Agentic (ASI) Top 10.

05

The raw record

What Yuntona stores. Single source of truth — fork it on GitHub.

name: CoSAI (Coalition for Secure AI)
slug: cosai-coalition-for-secure-ai
type: Mixed
category: Education & Research
url: https://coalitionforsecureai.org

reviewed:   2026-04
added:      2026-04
updated:    2026-04

risks:
  llm:  [LLM01, LLM02, LLM03, LLM05, LLM07, LLM08]
  asi:  [ASI01, ASI02, ASI03, ASI04, ASI07, ASI08]

complexity:    Plug & Play
pricing:       —
audience:      All
lifecycle:     [govern]

tags: [Agentic, Framework, Incident Response, Open Source, Standard]
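The raw record above is YAML-shaped, so its risk lists can be checked mechanically before a fork is merged. A minimal sketch (the names `record_risks`, `ID_PATTERN`, and `validate` are illustrative, not part of Yuntona or CoSAI) that copies the record's risk IDs inline and verifies each one matches the LLMnn / ASInn shape both OWASP Top 10 lists use:

```python
import re

# Risk IDs copied from the raw record above.
record_risks = {
    "llm": ["LLM01", "LLM02", "LLM03", "LLM05", "LLM07", "LLM08"],
    "asi": ["ASI01", "ASI02", "ASI03", "ASI04", "ASI07", "ASI08"],
}

# Each OWASP Top 10 list numbers its entries 01-10.
ID_PATTERN = re.compile(r"^(LLM|ASI)(0[1-9]|10)$")


def validate(risks: dict) -> bool:
    """Return True if every risk ID is a well-formed LLMnn / ASInn code."""
    return all(ID_PATTERN.match(rid) for ids in risks.values() for rid in ids)


print(validate(record_risks))  # → True
```

A malformed entry such as `LLM11` or `ASI0` would fail the pattern, flagging the record for review.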