# Config

`summarize` supports an optional JSON config file for defaults.

## Location

Default path: `~/.summarize/config.json`
## Precedence

For model:

- CLI flag `--model`
- Env `SUMMARIZE_MODEL`
- Config file `model`
- Built-in default (`auto`)

For output language:

- CLI flag `--language`/`--lang`
- Config file `output.language` (preferred) or `language` (legacy)
- Built-in default (`auto` = match source content language)
See docs/language.md for supported values.
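The precedence ladders above (and the ones below) all follow the same flag > env > config > default shape. A minimal sketch in Python; the `resolve_model` helper and its argument names are illustrative, not part of summarize:

```python
import os

def resolve_model(cli_flag=None, config=None):
    """Resolve the model following: CLI flag > env > config file > built-in default."""
    if cli_flag:  # 1. CLI flag --model
        return cli_flag
    env = os.environ.get("SUMMARIZE_MODEL")
    if env:  # 2. Env SUMMARIZE_MODEL
        return env
    if config and config.get("model"):  # 3. Config file "model"
        return config["model"]
    return "auto"  # 4. Built-in default
```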
For prompt:

- CLI flag `--prompt`/`--prompt-file`
- Config file `prompt`
- Built-in default prompt

For UI theme:

- CLI flag `--theme`
- Env `SUMMARIZE_THEME`
- Config file `ui.theme`
- Built-in default (`aurora`)

## Format
`~/.summarize/config.json`:

```json
{
  "model": { "id": "google/gemini-3-flash-preview" },
  "output": { "language": "auto" },
  "prompt": "Explain like I am five.",
  "ui": { "theme": "ember" }
}
```
Shorthand (equivalent):

```json
{
  "model": "google/gemini-3-flash-preview"
}
```
`model` can also be `auto`:

```json
{
  "model": { "mode": "auto" }
}
```

Shorthand (equivalent):

```json
{
  "model": "auto"
}
```
## Prompt

`prompt` replaces the built-in summary instructions (same behavior as `--prompt`).

Example:

```json
{
  "prompt": "Explain for a kid. Short sentences. Simple words."
}
```
## Cache

Configure the on-disk SQLite cache (extracted content, transcripts, summaries).

```json
{
  "cache": {
    "enabled": true,
    "maxMb": 512,
    "ttlDays": 30,
    "path": "~/.summarize/cache.sqlite",
    "media": {
      "enabled": true,
      "maxMb": 2048,
      "ttlDays": 7,
      "path": "~/.summarize/cache/media",
      "verify": "size"
    }
  }
}
```
Notes:

- `cache.media` controls the media file cache (yt-dlp downloads).
- `--no-cache` bypasses summary caching only (LLM output); extract/transcript caches still apply. Use `--no-media-cache` for media.
- `verify`: `size` (default), `hash`, or `none`.
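For example, to keep the extract/transcript/summary cache but skip caching downloaded media entirely, a config like this should work (a sketch using only the keys shown above; omitted keys keep their defaults):

```json
{
  "cache": {
    "enabled": true,
    "media": { "enabled": false }
  }
}
```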
## UI theme

Set a default CLI theme:

```json
{
  "ui": { "theme": "moss" }
}
```
## Slides defaults

Enable slides by default and tune extraction parameters:

```json
{
  "slides": {
    "enabled": true,
    "ocr": false,
    "dir": "slides",
    "sceneThreshold": 0.3,
    "max": 20,
    "minDuration": 2
  }
}
```
## Logging (daemon)

Enable JSON log files for the daemon:

```json
{
  "logging": {
    "enabled": true,
    "level": "info",
    "format": "json",
    "file": "~/.summarize/logs/daemon.jsonl",
    "maxMb": 10,
    "maxFiles": 3
  }
}
```
Notes:

- Default: logging is off.
- `format`: `json` (default) or `pretty`.
- `maxMb` is per file; `maxFiles` controls rotation (ring).
- The extension's "Extended logging" setting sends full input/output to daemon logs (large). Cache hits skip content logging.
## Presets

Define presets you can select via `--model <preset>`:

```json
{
  "models": {
    "fast": { "id": "openai/gpt-5-mini" },
    "or-free": {
      "rules": [
        {
          "candidates": [
            "openrouter/google/gemini-2.0-flash-exp:free",
            "openrouter/meta-llama/llama-3.3-70b-instruct:free"
          ]
        }
      ]
    }
  }
}
```
Notes:

- `auto` is reserved and can't be defined as a preset.
- `free` is built-in (OpenRouter `:free` candidates). Override it by defining `models.free` in your config, or regenerate it via `summarize refresh-free`.
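For instance, a sketch of overriding the built-in `free` preset with a single candidate (the model id is just an example; any OpenRouter `:free` model id should work the same way):

```json
{
  "models": {
    "free": {
      "rules": [
        {
          "candidates": ["openrouter/meta-llama/llama-3.3-70b-instruct:free"]
        }
      ]
    }
  }
}
```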
Use a preset as your default model:

```json
{
  "model": "fast"
}
```
Notes:

- For presets, `"mode": "auto"` is optional when `"rules"` is present.
For auto selection with rules:

```json
{
  "model": {
    "mode": "auto",
    "rules": [
      {
        "when": ["video"],
        "candidates": ["google/gemini-3-flash-preview"]
      },
      {
        "when": ["website", "youtube"],
        "bands": [
          {
            "token": { "max": 8000 },
            "candidates": ["openai/gpt-5-mini"]
          },
          {
            "candidates": ["xai/grok-4-fast-non-reasoning"]
          }
        ]
      },
      {
        "candidates": ["openai/gpt-5-mini", "openrouter/openai/gpt-5-mini"]
      }
    ]
  },
  "media": { "videoMode": "auto" }
}
```
Notes:

- Parsed leniently (JSON5), but comments are not allowed.
- Unknown keys are ignored.
- `model.rules` is optional. If omitted, built-in defaults apply.
- `model.rules[].when` (optional) must be an array (e.g. `["video", "youtube"]`).
- `model.rules[]` must use either `candidates` or `bands`.
## Output language

Set a default output language for summaries:

```json
{
  "output": { "language": "auto" }
}
```
Examples:

- `"auto"` (default): match the source language.
- `"en"`, `"de"`: common shorthands.
- `"english"`, `"german"`: common names.
- `"en-US"`, `"pt-BR"`: BCP-47-ish tags.
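For example, to always get German summaries regardless of the source language (a sketch; `"de"` could equally be `"german"` or a BCP-47-ish tag per the list above):

```json
{
  "output": { "language": "de" }
}
```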
## CLI config

```json
{
  "cli": {
    "enabled": ["gemini"],
    "codex": { "model": "gpt-5.2" },
    "claude": { "binary": "/usr/local/bin/claude", "extraArgs": ["--verbose"] }
  }
}
```
Notes:

- `cli.enabled` is an allowlist (`auto` uses CLIs only when set; explicit `--cli`/`--model cli/...` must be included).
- Recommendation: keep `cli.enabled` at `["gemini"]` unless you have a reason to add others (extra latency/variance).
- `cli.<provider>.binary` overrides CLI binary discovery.
- `cli.<provider>.extraArgs` appends extra CLI args.
## OpenAI config

```json
{
  "openai": {
    "baseUrl": "https://my-openai-proxy.example.com/v1",
    "useChatCompletions": true,
    "whisperUsdPerMinute": 0.006
  }
}
```
Notes:

- `openai.baseUrl` overrides the OpenAI-compatible API endpoint. Use this for proxies, gateways, or OpenAI-compatible APIs. Env `OPENAI_BASE_URL` takes precedence.
- `openai.whisperUsdPerMinute` is only used to estimate transcription cost in the finish-line metrics when Whisper transcription runs via OpenAI.
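As a rough reading of `whisperUsdPerMinute` (an assumption on my part: the estimate is simply audio minutes times the configured per-minute rate; the helper below is illustrative, not part of summarize):

```python
def whisper_cost_estimate(audio_seconds: float, usd_per_minute: float = 0.006) -> float:
    """Estimated Whisper transcription cost: audio minutes * configured USD rate."""
    return (audio_seconds / 60.0) * usd_per_minute

# A 20-minute video at the default rate works out to roughly 0.12 USD.
print(round(whisper_cost_estimate(20 * 60), 3))
```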
## Provider base URLs

Override API endpoints for any provider to use proxies, gateways, or compatible APIs:

```json
{
  "openai": { "baseUrl": "https://my-openai-proxy.example.com/v1" },
  "anthropic": { "baseUrl": "https://my-anthropic-proxy.example.com" },
  "google": { "baseUrl": "https://my-google-proxy.example.com" },
  "xai": { "baseUrl": "https://my-xai-proxy.example.com" }
}
```
Or via environment variables (which take precedence over config):
| Provider | Environment Variable(s) |
|---|---|
| OpenAI | `OPENAI_BASE_URL` |
| Anthropic | `ANTHROPIC_BASE_URL` |
| Google | `GOOGLE_BASE_URL` (alias: `GEMINI_BASE_URL`) |
| xAI | `XAI_BASE_URL` |