What it does
An open-source security toolkit from Protect AI that scans both LLM inputs and outputs in real-time. Provides modular scanners for prompt injection detection, PII redaction, toxicity filtering, and output validation.
Security relevance
LLM Guard sits in the request/response path and applies security scanning at both ends. Input scanners catch injection attempts and sensitive data before it reaches the model. Output scanners detect PII leakage, toxic content, and malformed responses before they reach the user. Modular design means you deploy only the scanners you need.
When to use it
Use when you need runtime input/output scanning for LLM applications. Python library that integrates into your application code — requires API integration and scanner configuration but well-documented. Good starting point for teams implementing guardrails for the first time.
OWASP coverage
Risks addressed — mapped to both OWASP Top 10 standards. 3 in LLM, 2 in Agentic.
The raw record
What Yuntona stores. Single source of truth — fork it on GitHub.
name: LLM Guard (Protect AI) slug: llm-guard-protect-ai type: Mixed category: AI Guardrails & Firewalls url: https://protectai.com/llm-guard reviewed: 2026-04 added: 2026-04 updated: 2026-04 risks: llm: [LLM01, LLM02, LLM06] asi: [ASI01, ASI06] complexity: Guided Setup pricing: — audience: Builder lifecycle: [deploy] tags: [Guardrails, Open Source, Scanner]