Husn Canary
Canary tokens designed specifically for AI model data leakage detection.
What it does
Husn Canary creates unique, trackable tokens that can be embedded in training data, documents, or knowledge bases. If an AI system later reproduces one of these tokens in its output, you know it has ingested or accessed the planted material.
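Husn Canary's actual API is not shown on this page; the underlying technique can be sketched as follows. The function names and the token format (a recognisable prefix plus a high-entropy UUID) are illustrative assumptions, not the product's real interface.

```python
import uuid

def make_canary(asset_id: str) -> str:
    # A canary token is simply a unique, high-entropy string that would
    # never occur naturally in text; here a UUID4 hex with a prefix.
    # (Format is a hypothetical convention for this sketch.)
    return f"CANARY-{asset_id}-{uuid.uuid4().hex}"

def embed_canary(text: str, token: str) -> str:
    # Append the token as an innocuous-looking footer line; in practice
    # it could be hidden in metadata or low-visibility markup instead.
    return f"{text}\n\nRef: {token}"

doc = embed_canary("Quarterly revenue projections ...",
                   make_canary("finance-q3"))
```

Because each token is unique per asset, a later sighting of the token identifies exactly which document or dataset leaked.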
Security relevance
Addresses LLM06 (Sensitive Information Disclosure) through detection rather than prevention. If your canary token appears in an LLM's output, you have concrete evidence that the model has been trained on or has access to your data. This is particularly valuable for detecting unauthorised training data usage.
When to use it
Use as a detection layer alongside guardrails. Embed canary tokens in sensitive documents before they enter RAG pipelines or training datasets. Monitor for token exposure in LLM outputs. Lightweight to deploy but requires a monitoring strategy.
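The monitoring side of the workflow above can be sketched as a simple scan of LLM outputs against a registry of issued tokens. This is a minimal illustration, assuming a hypothetical token convention of `CANARY-<asset-id>-<32 hex chars>`; a real deployment would persist the registry and alert on matches.

```python
import re
import uuid

# Assumed token format for this sketch (not Husn Canary's real scheme).
CANARY_PATTERN = re.compile(r"CANARY-[\w-]+-[0-9a-f]{32}")

def issue_token(asset_id: str, registry: set) -> str:
    # Record every token at issue time so later matches can be verified
    # against tokens we actually planted.
    token = f"CANARY-{asset_id}-{uuid.uuid4().hex}"
    registry.add(token)
    return token

def scan_output(llm_output: str, registry: set) -> list:
    # Any issued token appearing in model output is concrete evidence
    # that the model was trained on, or retrieved, the planted asset.
    return [t for t in CANARY_PATTERN.findall(llm_output) if t in registry]

registry = set()
token = issue_token("hr-policy", registry)
leaked = scan_output(f"... per the policy, {token} applies ...", registry)
```

Scanning only for tokens present in the registry keeps false positives near zero: random strings that merely resemble the pattern are ignored.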
OWASP coverage
Risks addressed, mapped to both OWASP Top 10 standards: one in the LLM Top 10 (LLM06) and two in the Agentic Top 10 (ASI01, ASI02).
The raw record
What Yuntona stores. Single source of truth — fork it on GitHub.
name: Husn Canary
slug: husn-canary
type: Mixed
category: AI Red Teaming
url: https://www.husncanary.com
reviewed: 2026-04
added: 2026-04
updated: 2026-04
risks:
  llm: [LLM06]
  asi: [ASI01, ASI02]
complexity: Guided Setup
pricing: —
audience: Blue Team
lifecycle: [monitor]
tags: [Canary, Detection, DLP]