What it does
An open-source LLM observability platform providing traces, evaluations, and prompt management. Captures every LLM interaction — prompts, completions, latency, costs, and scores — creating a complete audit trail for AI systems. Cited by CB Insights as a leading AI agent observability tool in the emerging Trust & Performance layer of the agent tech stack. Critical for detecting ASI06 (Memory & Context Poisoning) and ASI08 (Cascading Failures) in agentic deployments.
Security relevance
Observability is the foundation of AI security monitoring. Without traces, you can't detect prompt injection attempts, data leakage, or anomalous model behaviour. Langfuse provides the audit trail that security teams need to investigate incidents and demonstrate compliance.
When to use it
Deploy when you need observability and audit trails for LLM applications. SDK integration is straightforward — add tracing calls to your application code. Self-hosted or cloud options. Essential infrastructure for any AI security monitoring programme.
OWASP coverage
Risks addressed — mapped to both OWASP Top 10 standards. 0 in LLM, 2 in Agentic.
The raw record
What Yuntona stores. Single source of truth — fork it on GitHub.
name: Langfuse slug: langfuse type: Generative category: AI Development Tools url: https://langfuse.com reviewed: 2026-04 added: 2026-04 updated: 2026-04 risks: llm: [] asi: [ASI06, ASI08] complexity: Guided Setup pricing: — audience: Builder lifecycle: [monitor] tags: [Observability, Open Source, Tracing]