Lethal Trifecta (Simon Willison)
Essential article on Prompt Injection + Tool Use + Permissions.
What it does
An essential article by Simon Willison (creator of Datasette) that describes the fundamental security threat model for AI agents: the combination of Prompt Injection + Tool Use + Permissions that creates the core vulnerability pattern in agentic AI systems.
Security relevance
This is the conceptual framework every AI security professional should internalise. The 'Lethal Trifecta' explains why agentic AI is fundamentally dangerous: LLMs that can be manipulated (prompt injection) given the ability to act (tool use) with real permissions creates an exploitable attack surface that no single control fully mitigates.
When to use it
Read this as foundational education before designing security controls for any AI agent system. Share with engineering teams building agentic applications. Reference in threat models and architecture reviews. It takes 10 minutes to read and will reshape how you think about AI agent security.
OWASP coverage
Risks addressed — mapped to both OWASP Top 10 standards. 3 in LLM, 3 in Agentic.
The raw record
What Yuntona stores. Single source of truth — fork it on GitHub.
name: Lethal Trifecta (Simon Willison) slug: lethal-trifecta-simon-willison type: Mixed category: Education & Research url: https://simonwillison.net/2025/Jun/16/the-lethal-trifecta reviewed: 2026-04 added: 2026-04 updated: 2026-04 risks: llm: [LLM01, LLM07, LLM08] asi: [ASI01, ASI02, ASI04] complexity: Plug & Play pricing: — audience: All lifecycle: [scope] tags: [Article, Education, Risk]