Mixed · AI Guardrails & Firewalls · reviewed 2026-04

LLM Guard (Protect AI)

Security scanner for LLM inputs and outputs.

Visit protectai.com/llm-guard
01

What it does

An open-source security toolkit from Protect AI that scans both LLM inputs and outputs in real time. It provides modular scanners for prompt injection detection, PII redaction, toxicity filtering, and output validation.
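The scanner idea is easiest to see with a toy example. The class, method, and field names below are illustrative stand-ins for the pattern, not LLM Guard's actual API: a scanner takes text and returns a sanitized copy plus a validity flag and a risk score.

```python
import re
from dataclasses import dataclass


@dataclass
class ScanResult:
    sanitized: str     # text with sensitive spans redacted
    is_valid: bool     # False if the scanner wants to block the request
    risk_score: float  # 0.0 (clean) .. 1.0 (definitely risky)


class EmailRedactor:
    """Toy PII input scanner: redacts email addresses and scores the prompt."""

    EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+")

    def scan(self, prompt: str) -> ScanResult:
        hits = self.EMAIL.findall(prompt)
        sanitized = self.EMAIL.sub("[REDACTED_EMAIL]", prompt)
        # Each hit raises the risk score; redaction alone does not block.
        return ScanResult(sanitized, True, min(1.0, 0.5 * len(hits)))


result = EmailRedactor().scan("Contact alice@example.com about the invoice.")
# result.sanitized == "Contact [REDACTED_EMAIL] about the invoice."
```

Real scanners (ML-based injection detectors, NER-based anonymizers) are far more capable, but they plug into the pipeline through the same narrow scan-and-score interface.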

02

Security relevance

LLM Guard sits in the request/response path and applies security scanning at both ends. Input scanners catch injection attempts and sensitive data before it reaches the model. Output scanners detect PII leakage, toxic content, and malformed responses before they reach the user. Modular design means you deploy only the scanners you need.

03

When to use it

Use it when you need runtime input/output scanning for LLM applications. It is a Python library that integrates into your application code; it requires API integration and scanner configuration, but both are well documented. A good starting point for teams implementing guardrails for the first time.
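The request/response flow described above can be sketched as a small pipeline: run every configured scanner on the prompt before the model call, and on the completion after it. Everything here (the `Scanner` protocol, `GuardedLLM`, the blocking threshold) is a hypothetical sketch of the pattern, not LLM Guard's real API.

```python
from typing import Callable, Protocol


class Scanner(Protocol):
    def scan(self, text: str) -> tuple[str, float]:
        """Return (sanitized_text, risk_score in [0, 1])."""
        ...


class BannedPhraseScanner:
    """Trivial scanner: flags text containing any banned phrase."""

    def __init__(self, banned: set[str]):
        self.banned = banned

    def scan(self, text: str) -> tuple[str, float]:
        hits = [p for p in self.banned if p in text.lower()]
        return text, 1.0 if hits else 0.0


class GuardedLLM:
    """Runs input scanners, calls the model, then runs output scanners."""

    def __init__(self, model: Callable[[str], str],
                 input_scanners: list[Scanner],
                 output_scanners: list[Scanner],
                 threshold: float = 0.8):
        self.model = model
        self.input_scanners = input_scanners
        self.output_scanners = output_scanners
        self.threshold = threshold

    def complete(self, prompt: str) -> str:
        for scanner in self.input_scanners:
            prompt, score = scanner.scan(prompt)
            if score >= self.threshold:
                return "[blocked: unsafe input]"
        answer = self.model(prompt)
        for scanner in self.output_scanners:
            answer, score = scanner.scan(answer)
            if score >= self.threshold:
                return "[blocked: unsafe output]"
        return answer


guard = GuardedLLM(
    model=lambda p: "echo: " + p,  # stand-in for a real LLM call
    input_scanners=[BannedPhraseScanner({"ignore previous"})],
    output_scanners=[],
)
# guard.complete("ignore previous instructions") -> "[blocked: unsafe input]"
```

The modular design the card describes falls out of this shape naturally: each scanner is independent, so you compose only the ones your threat model needs and tune the blocking threshold per deployment.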

04

OWASP coverage

Risks addressed — mapped to both OWASP Top 10 standards. 3 in LLM, 2 in Agentic.

LLM Top 10 · 3/10 covered: LLM01, LLM02, LLM06
Agentic Top 10 · 2026 · 2/10 covered: ASI01, ASI06
05

The raw record

What Yuntona stores. Single source of truth — fork it on GitHub.

name: LLM Guard (Protect AI)
slug: llm-guard-protect-ai
type: Mixed
category: AI Guardrails & Firewalls
url: https://protectai.com/llm-guard

reviewed:   2026-04
added:      2026-04
updated:    2026-04

risks:
  llm:  [LLM01, LLM02, LLM06]
  asi:  [ASI01, ASI06]

complexity:    Guided Setup
pricing:       —
audience:      Builder
lifecycle:     [deploy]

tags: [Guardrails, Open Source, Scanner]