Vijil
LLM security testing — automated vulnerability scanning and continuous monitoring for AI applications.
What it does
LLM security testing platform that automates vulnerability scanning for AI applications. Tests for prompt injection, data leakage, jailbreaking, and output safety issues. Provides continuous monitoring to detect when AI behavior drifts from established baselines. Appears in CB Insights' AI agent tech stack Oversight layer under Observability, Evaluation, & Governance.
Security relevance
Fills the gap between one-time security assessments and continuous AI security monitoring. Automated scanning catches vulnerabilities that manual red teaming might miss, while continuous monitoring detects behavioral drift in production. Tests against OWASP LLM Top 10 attack vectors.
When to use it
Use when you need automated, repeatable security testing for LLM applications. Good complement to manual red teaming tools like Promptfoo or Garak. Integrates into CI/CD for continuous validation.
OWASP coverage
Risks addressed — mapped to both OWASP Top 10 standards. 3 in LLM, 2 in Agentic.
The raw record
What Yuntona stores. Single source of truth — fork it on GitHub.
name: Vijil slug: vijil type: Mixed category: AI Red Teaming url: https://www.vijil.ai reviewed: 2026-04 added: 2026-04 updated: 2026-04 risks: llm: [LLM01, LLM02, LLM06] asi: [ASI01, ASI06] complexity: Guided Setup pricing: — audience: Red Team lifecycle: [monitor] tags: [API, Scanner, Testing, Vulnerability]