What it does
A tool for running LLMs locally on your own hardware. Downloads, manages, and serves open-weight models through a simple CLI and a local REST API. Supports a wide range of models, from Llama and Mistral to custom fine-tunes.
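For example, a minimal sketch of prompting a locally served model through Ollama's default REST endpoint (http://localhost:11434); the model name and prompt are illustrative and assume the model has already been pulled:

```python
# Minimal sketch: prompt a locally served model through Ollama's REST API
# (default endpoint http://localhost:11434). Assumes a model such as
# "llama3" was already downloaded with `ollama pull`; the model name and
# prompt are illustrative.
import json
import urllib.request

payload = json.dumps({
    "model": "llama3",   # any locally pulled model tag
    "prompt": "Summarize the OWASP LLM Top 10 in one sentence.",
    "stream": False,     # return one JSON object instead of a stream
}).encode()

req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=payload,
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```

Because the endpoint listens on localhost by default, the prompt and response never leave the machine.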
Security relevance
Essential infrastructure for AI security work. Running models locally means no data leaves your environment — critical for red team labs, sensitive testing, and air-gapped deployments. Also enables testing against specific model versions without API variability.
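A sketch of how version pinning can work in practice: enumerate the locally installed model builds and their digests through Ollama's local /api/tags endpoint, then check a run against a digest recorded earlier. The pinned digest below is a placeholder:

```python
# Minimal sketch: list locally installed model builds so a test run can
# pin to an exact digest instead of a floating tag like "llama3:latest".
# Uses Ollama's local /api/tags endpoint; the pinned digest is a placeholder.
import json
import urllib.request

with urllib.request.urlopen("http://localhost:11434/api/tags") as resp:
    models = json.loads(resp.read())["models"]

for m in models:
    print(f"{m['name']}  digest={m['digest'][:12]}  {m['size']} bytes")

# Fail fast if the exact build used in a previous red-team run is missing.
PINNED_DIGEST = "abc123def456"  # placeholder: record the real digest per run
assert any(m["digest"].startswith(PINNED_DIGEST) for m in models), \
    "pinned model build not installed"
```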
When to use it
Use when you need local LLM inference for testing, red teaming, or sovereign deployment. Requires local installation, sufficient hardware (RAM/GPU depending on model size), and CLI familiarity. Complexity is Guided Setup: the tool itself is straightforward, but choosing the right model and hardware configuration takes some knowledge; a rough sizing sketch follows.
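As a rough guide to hardware sizing, a back-of-the-envelope sketch (a common rule of thumb, not an official Ollama formula): quantized weights take roughly parameter count × bits per weight ÷ 8 bytes, plus overhead for the KV cache and runtime buffers:

```python
# Back-of-the-envelope sizing sketch (rule of thumb, not an Ollama formula):
# weights take params * bits_per_weight / 8 bytes; add ~20% for KV cache
# and runtime overhead. Real usage varies with context length and quant type.
def approx_ram_gb(params_billions: float, bits_per_weight: int = 4) -> float:
    weight_bytes = params_billions * 1e9 * bits_per_weight / 8
    return weight_bytes * 1.2 / 1e9  # add 20% overhead, report in GB

for size_b in (7, 13, 70):
    print(f"{size_b}B model @ 4-bit ~= {approx_ram_gb(size_b):.1f} GB RAM")
```

By this estimate a 7B model at 4-bit quantization needs roughly 4 GB of RAM while a 70B model needs around 40 GB, which is why model choice and hardware configuration have to be decided together.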
OWASP coverage
Mapped against both OWASP Top 10 standards: 0 risks in the LLM Top 10 and 0 in the Agentic Top 10. Ollama is enabling infrastructure rather than a security control, so no individual risks are mapped to it.
The raw record
What Yuntona stores. Single source of truth — fork it on GitHub.
name: Ollama
slug: ollama
type: Generative
category: Foundation Models
url: https://ollama.com
reviewed: 2026-04
added: 2026-04
updated: 2026-04
risks:
  llm: []
  asi: []
complexity: Guided Setup
pricing: —
audience: Builder
lifecycle: [deploy]
tags: [Local, Open Source, Self-Hosted]