Mixed · AI Red Teaming · reviewed 2026-04

Husn Canary

Canary tokens designed specifically for AI model data leakage detection.

Visit www.husncanary.com
01

What it does

Canary tokens designed specifically for detecting AI model data leakage. Husn Canary creates unique, trackable tokens that can be embedded in training data, documents, or knowledge bases to detect when an AI system exposes them.
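As an illustration of the approach (not Husn Canary's actual API, which this card does not document), a canary token is just a unique, high-entropy string planted in a document before that document enters a corpus. The `HUSN-` prefix and hex format below are assumptions for the sketch:

```python
import secrets

def make_canary(prefix: str = "HUSN") -> str:
    """Generate a unique, high-entropy canary token.

    The prefix and 32-hex-digit format are illustrative assumptions,
    not Husn Canary's documented token scheme.
    """
    return f"{prefix}-{secrets.token_hex(16)}"

def embed_canary(document: str, token: str) -> str:
    """Plant the token in a document before it enters a training set
    or RAG knowledge base. Here it is appended as a visible reference
    line; a real deployment might hide it in metadata instead."""
    return f"{document}\n[ref: {token}]"

token = make_canary()
tagged = embed_canary("Q3 revenue projections: ...", token)
```

Because each document gets its own token, a later sighting identifies exactly which asset leaked.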

02

Security relevance

Addresses LLM06 (Sensitive Information Disclosure) through detection rather than prevention. If your canary token appears in an LLM's output, you have concrete evidence that the model has been trained on or has access to your data. This is particularly valuable for detecting unauthorised training data usage.

03

When to use it

Use as a detection layer alongside guardrails. Embed canary tokens in sensitive documents before they enter RAG pipelines or training datasets. Monitor for token exposure in LLM outputs. Lightweight to deploy but requires a monitoring strategy.
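The monitoring side can be sketched as a scan of model outputs against a registry of issued tokens. The registry, token format, and function names below are hypothetical; they assume the same illustrative `HUSN-<32 hex>` scheme as above:

```python
import re

# Hypothetical registry: issued token -> the asset it was planted in.
CANARY_REGISTRY = {
    "HUSN-0f3a9c7e1b2d4a5f6e7d8c9b0a1f2e3d": "finance/q3-projections.docx",
}

# Matches the illustrative HUSN-<32 hex digits> token format.
TOKEN_PATTERN = re.compile(r"HUSN-[0-9a-f]{32}")

def scan_output(llm_output: str) -> list[tuple[str, str]]:
    """Return (token, tagged asset) for every registered canary token
    that appears verbatim in a model's output."""
    return [
        (tok, CANARY_REGISTRY[tok])
        for tok in TOKEN_PATTERN.findall(llm_output)
        if tok in CANARY_REGISTRY
    ]
```

A match is high-signal by construction: the token exists nowhere except the documents you planted it in, so any appearance in an output is concrete evidence of exposure.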

04

OWASP coverage

Risks addressed, mapped to both OWASP Top 10 standards: one in the LLM Top 10, two in the Agentic Top 10.

LLM Top 10 · 2025 · 1/10 covered (LLM06: Sensitive Information Disclosure)
Agentic Top 10 · 2026 · 2/10 covered (ASI01, ASI02)
05

The raw record

What Yuntona stores. Single source of truth — fork it on GitHub.

name: Husn Canary
slug: husn-canary
type: Mixed
category: AI Red Teaming
url: https://www.husncanary.com

reviewed:   2026-04
added:      2026-04
updated:    2026-04

risks:
  llm:  [LLM06]
  asi:  [ASI01, ASI02]

complexity:    Guided Setup
pricing:       —
audience:      Blue Team
lifecycle:     [monitor]

tags: [Canary, Detection, DLP]