NeMo Guardrails (NVIDIA)
Toolkit for adding programmable guardrails to LLM-based systems.
What it does
NVIDIA's open-source toolkit for adding programmable guardrails to LLM-based systems. Uses Colang — a custom modelling language — to define conversational rails that control what the LLM can and cannot do.
Security relevance
NeMo Guardrails provides deep, programmable control over LLM behaviour. Unlike pattern-matching filters, Colang rails can implement complex conversational policies: topic restrictions, fact-checking flows, and multi-turn safety checks. This granularity is essential for high-stakes applications.
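A topic-restriction rail, for instance, can be expressed in a few lines. This is an illustrative sketch in Colang 1.0 syntax; the flow and message names are hypothetical, and real deployments pair this with a YAML model config:

```colang
# Example utterances the LLM uses to classify the user's intent
define user ask about politics
  "what do you think about the election?"
  "which party should I vote for?"

# Canned refusal the bot returns when the rail fires
define bot refuse to discuss politics
  "I can't discuss political topics, but I'm happy to help with something else."

# The rail: when the intent matches, short-circuit the LLM with the refusal
define flow politics rail
  user ask about politics
  bot refuse to discuss politics
```

Because rails are flows rather than regex filters, multi-turn checks (e.g. re-asking after a refusal) can be handled in the same notation.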
When to use it
Use when you need fine-grained control over LLM behaviour beyond simple input/output filtering. Requires learning Colang, designing rail policies, and integrating with your LLM pipeline. Expert-level work, but it yields one of the most customisable guardrail systems available.
OWASP coverage
Risks addressed, mapped to both OWASP Top 10 standards: 2 in the LLM Top 10, 3 in the Agentic Top 10.
The raw record
What Yuntona stores. Single source of truth — fork it on GitHub.
name: NeMo Guardrails (NVIDIA)
slug: nemo-guardrails-nvidia
type: Mixed
category: AI Guardrails & Firewalls
url: https://developer.nvidia.com/nemo-guardrails
reviewed: 2026-04
added: 2026-04
updated: 2026-04
risks:
  llm: [LLM01, LLM02]
  asi: [ASI01, ASI02, ASI07]
complexity: Expert Required
pricing: —
audience: Builder
lifecycle: [deploy]
tags: [Guardrails, NVIDIA, Open Source]