~ / directory / ollama
OL
Generative · Foundation Models · reviewed 2026-04

Ollama

Run LLMs locally.

Visit ollama.com
01

What it does

A tool for running LLMs locally on your own hardware. Downloads, manages, and serves open-source models with a simple CLI. Supports a wide range of models from Llama to Mistral to custom fine-tunes.

02

Security relevance

Essential infrastructure for AI security work. Running models locally means no data leaves your environment — critical for red team labs, sensitive testing, and air-gapped deployments. Also enables testing against specific model versions without API variability.

03

When to use it

Use when you need local LLM inference for testing, red teaming, or sovereign deployment. Requires local installation, sufficient hardware (RAM/GPU depending on model size), and CLI familiarity. Guided setup — the tool is straightforward but choosing the right model and hardware configuration requires some knowledge.

04

OWASP coverage

Risks addressed — mapped to both OWASP Top 10 standards. 0 in LLM, 0 in Agentic.

LLM Top 10 · 2025 · 0/10 covered
01
02
03
04
05
06
07
08
09
10
Agentic Top 10 · 2026 · 0/10 covered
01
02
03
04
05
06
07
08
09
10
05

The raw record

What Yuntona stores. Single source of truth — fork it on GitHub.

name: Ollama
slug: ollama
type: Generative
category: Foundation Models
url: https://ollama.com

reviewed:   2026-04
added:      2026-04
updated:    2026-04

risks:
  llm:  []
  asi:  []

complexity:    Guided Setup
pricing:       —
audience:      Builder
lifecycle:     [deploy]

tags: [Local, Open Source, Self-Hosted]