Talome Documentation

Your server, one message away. AI-powered home server management with 230+ tools, a multi-source app store, and a self-improving codebase.

Self-hosting is a paradox. You get total control over your data, your media, your infrastructure — but you pay for it with an endless tax of YAML files, port conflicts, broken upgrades, and hours spent wiring services together that should just work. Every new app means another compose file, another reverse proxy entry, another set of credentials to manage. The tools exist to build something remarkable, but the friction stops most people from ever getting there.

Talome changes the equation. Instead of configuring your server through files and admin panels, you have a conversation with it.

Linux / macOS:

curl -fsSL https://get.talome.dev | bash

Windows (PowerShell):

irm https://get.talome.dev/install.ps1 | iex

One command. Open http://localhost:3000. Tell it what you want.


What Makes Talome Different

AI-first, not AI-added

Talome's assistant is not a chatbot bolted onto a dashboard. It is the primary interface. 230+ purpose-built tools across 16 domains give it deep, structured access to your system — Docker containers, app configurations, media libraries, network routing, storage health, and its own source code. It does not generate suggestions for you to follow. It executes.

When you say "set up a media stack," the AI chains search_apps to find the right images, install_app five times in parallel, wire_apps to connect download clients, arr_add_root_folder to set media paths, and arr_sync_indexers_from_prowlarr to push indexers — then reports back with URLs. That is not autocomplete. That is an agent with real tools and real authority over your infrastructure.

Tools are organized into domains that activate dynamically. Install Sonarr, and 27 arr tools appear. Configure Home Assistant, and 5 smart home tools unlock. The AI only sees tools it can actually use, which keeps it fast and accurate.
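The domain gating described above can be sketched as a simple registry filter. This is an illustrative sketch, not Talome's actual internals — the type and function names here are assumptions; only the tool names come from the documentation.

```typescript
// Each tool belongs to a domain; a domain is active only when its
// backing app is installed. The model is shown only active tools.
type Tool = { name: string; domain: string };

const registry: Tool[] = [
  { name: "search_apps", domain: "appstore" },
  { name: "arr_add_root_folder", domain: "arr" },
  { name: "arr_sync_indexers_from_prowlarr", domain: "arr" },
  { name: "hass_toggle_light", domain: "smarthome" },
];

// Filter the registry down to tools the AI can actually use right now.
function visibleTools(registry: Tool[], activeDomains: Set<string>): Tool[] {
  return registry.filter((t) => activeDomains.has(t.domain));
}

// e.g. Sonarr is installed, Home Assistant is not.
const active = new Set(["appstore", "arr"]);
const tools = visibleTools(registry, active);
```

With Home Assistant absent, the smart home tools never reach the model's tool list, which is what keeps the context small and the tool selection accurate.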

Multi-source app store

One search, every ecosystem. Talome aggregates apps from multiple sources into a single, unified catalog:

  • CasaOS stores — official and community apps from the CasaOS ecosystem
  • Umbrel stores — official and community apps from Umbrel
  • Your Creations — describe what you need and Talome builds a complete Docker app with compose, config, and optional web UI

Install Jellyfin from Talome's catalog, Pi-hole from CasaOS, and a Bitcoin node from Umbrel — all from the same interface, all managed the same way. No ecosystem lock-in.

Self-improving codebase

Talome can read and modify its own TypeScript source code. The pipeline is plan_change (propose a diff) followed by apply_change (write it), which automatically runs tsc --noEmit. If the compiler fails, the change is rolled back via git stash. If it passes, the change is committed with a full audit trail.

Found a bug in the dashboard? Tell the AI. It reads the component with read_file, proposes a fix, validates the types, and deploys. Your server gets better the more you use it. Every change is reversible with rollback_change.
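The commit-or-rollback decision at the heart of that flow can be sketched like this. The function and parameter names are hypothetical; the real tool names (apply_change, rollback_change) and commands (tsc --noEmit, git stash) are from the text above.

```typescript
// Apply a change, validate it, and either commit with an audit entry
// or roll it back. The callbacks stand in for the real side effects.
type ChangeResult = { status: "committed" | "rolled_back"; audit: string };

function applyAndValidate(
  description: string,
  typeCheck: () => boolean,        // stands in for `tsc --noEmit`
  commit: (msg: string) => void,   // stands in for `git commit`
  rollback: () => void             // stands in for `git stash`
): ChangeResult {
  if (typeCheck()) {
    commit(description);
    return { status: "committed", audit: `committed: ${description}` };
  }
  rollback();
  return { status: "rolled_back", audit: `rolled back: ${description}` };
}

// Simulated run: the type check fails, so the change is rolled back.
const log: string[] = [];
const result = applyAndValidate(
  "fix dashboard widget crash",
  () => false,
  (msg) => log.push(`commit ${msg}`),
  () => log.push("git stash")
);
```

The key property is that a failing compile never leaves the working tree dirty: the rollback path runs before any result is reported.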

Zero-config philosophy

When you install an app, Talome does not hand you a URL and say "go configure it." The AI follows a post-install checklist automatically:

  1. inspect_container — verify volume mounts match your data paths
  2. add_volume_mount + restart_app — fix any incorrect mounts
  3. arr_add_root_folder — configure media paths for arr apps
  4. arr_add_download_client — wire download clients
  5. arr_sync_indexers_from_prowlarr — push indexers
  6. check_service_health — confirm everything is healthy

The goal is that every app works on first launch, not after thirty minutes of manual setup. If a configuration tool fails, the AI falls back to reading and writing config files directly — it never tells you to "open the settings page and configure it manually."
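The checklist above can be modeled as an ordered pipeline with a fallback branch. This is a minimal sketch under assumed names — the step labels match the tools listed, but the runner itself is illustrative.

```typescript
// Run each post-install step in order. If a step's tool fails, record
// a fallback to direct config-file editing instead of giving up.
type Step = { tool: string; run: () => boolean };

function runChecklist(steps: Step[]): string[] {
  const report: string[] = [];
  for (const s of steps) {
    report.push(
      s.run()
        ? `${s.tool}: ok`
        : `${s.tool}: failed, editing config files directly`
    );
  }
  return report;
}

const report = runChecklist([
  { tool: "inspect_container", run: () => true },
  { tool: "arr_add_root_folder", run: () => true },
  { tool: "arr_add_download_client", run: () => false }, // simulate a tool failure
  { tool: "check_service_health", run: () => true },
]);
```

Note that a failed step does not abort the run — every step produces either a success or a fallback entry, matching the "never tell the user to configure it manually" rule.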


See It in Action

Here is a real conversation. The user wants a complete media server. No compose files, no port spreadsheets, no manual API key copying.

Talome Assistant
Set up a complete media stack with Jellyfin, Sonarr, Radarr, Prowlarr, and qBittorrent. My media is at /mnt/media.

Installing 5 apps...

search_apps → found Jellyfin, Sonarr, Radarr, Prowlarr, qBittorrent
install_app x 5 → all containers healthy
wire_apps → connected Sonarr/Radarr to qBittorrent as download client
arr_add_root_folder → /mnt/media/TV on Sonarr, /mnt/media/Movies on Radarr
arr_sync_indexers_from_prowlarr → 3 indexers synced to both apps

Your media stack is running. Jellyfin at :8096, Sonarr at :8989, Radarr at :7878, Prowlarr at :9696, qBittorrent at :8080. All services are healthy. Sonarr and Radarr are using qBittorrent as their download client and share 3 indexers from Prowlarr. Media paths point to /mnt/media. Want me to add Overseerr so family members can request media?

Five apps. Configured and connected. One message.

Behind that conversation, the AI executed 10+ tool calls: searching the app store, pulling Docker images, creating containers with health checks, discovering APIs, configuring download clients via Sonarr/Radarr's API, syncing Prowlarr indexers, and verifying health on every service. The user typed one sentence.



Who Is Talome For?

Media enthusiasts who want Jellyfin, Sonarr, Radarr, and a download client working together without reading five wikis. Ask for a media stack and it is running in under a minute.

Home lab builders who run dozens of containers and are tired of managing them through Portainer and SSH. Talome gives you a single pane of glass with an AI that understands your entire infrastructure.

Privacy-focused users who want Pi-hole, Vaultwarden, and Tailscale configured correctly without being a networking expert. Everything runs locally. No data leaves your server unless you choose a cloud AI provider.

Developers and tinkerers who want to extend their server with custom apps. Describe what you need, and Talome generates the complete Docker stack — compose, config, manifest, and optional web UI.

People who tried self-hosting before and gave up. The gap between "I want Jellyfin" and "Jellyfin is working with hardware transcoding, connected to Sonarr, auto-downloading my shows" used to be hours of work. Talome closes that gap to one conversation.


The Memory System

The AI remembers what you tell it. Not just within a conversation — permanently.

Talome Assistant
Remember that my media is stored at /mnt/nas/media
Remember that I prefer Jellyfin over Plex
Remember that port 8080 is taken by my dev server

Memories are stored locally in SQLite with four types: preference, fact, context, and correction. The top 10 most relevant memories are injected into every conversation, ranked by recency, access frequency, and confidence score. Deduplication at 80% bigram similarity prevents redundant storage.
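The 80% bigram-similarity check can be computed with the standard Dice coefficient over character bigrams. Talome's exact implementation is not shown here — this is a sketch of the usual way such a threshold is evaluated.

```typescript
// Count character bigrams in a string (case-insensitive).
function bigrams(s: string): Map<string, number> {
  const m = new Map<string, number>();
  const t = s.toLowerCase();
  for (let i = 0; i < t.length - 1; i++) {
    const b = t.slice(i, i + 2);
    m.set(b, (m.get(b) ?? 0) + 1);
  }
  return m;
}

// Dice coefficient: 2 * |shared bigrams| / (|bigrams a| + |bigrams b|)
function bigramSimilarity(a: string, b: string): number {
  const ba = bigrams(a);
  const bb = bigrams(b);
  let shared = 0;
  let total = 0;
  for (const [bg, n] of ba) {
    shared += Math.min(n, bb.get(bg) ?? 0);
    total += n;
  }
  for (const n of bb.values()) total += n;
  return total === 0 ? 1 : (2 * shared) / total;
}

// A near-duplicate memory clears the 0.8 threshold and would be skipped.
const isDuplicate =
  bigramSimilarity(
    "media is stored at /mnt/nas/media",
    "my media is stored at /mnt/nas/media"
  ) >= 0.8;
```

Bigram similarity is cheap (linear in string length) and tolerant of small rephrasings, which is why it works well as a pre-insert dedup gate.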

The next time you install a media app, the AI already knows where your media lives, which server you prefer, and which ports to avoid — without you repeating anything.


Automations in Plain English

Talome's automation engine turns natural language into scheduled workflows:

Talome Assistant
Create an automation that checks disk usage every hour and notifies me on Telegram if it exceeds 90%

This creates an automation with:

  • Trigger: Cron schedule (0 * * * *)
  • Step 1: get_disk_usage tool action
  • Step 2: Condition — if any mount exceeds 90%
  • Step 3: send_notification via your configured Telegram channel

Four trigger types (cron schedule, container stopped, disk usage threshold, webhook) and four step types (tool action, AI prompt, condition, notify) combine into workflows that handle backups, monitoring, cleanup, and maintenance — all running unattended.
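The trigger and step taxonomy above maps naturally onto a tagged union. The type and field names in this sketch are assumptions, not Talome's schema — only the tool names, trigger kinds, and the cron expression come from the text.

```typescript
// The four trigger types and four step types as discriminated unions.
type Trigger =
  | { kind: "cron"; schedule: string }
  | { kind: "container_stopped"; app: string }
  | { kind: "disk_threshold"; percent: number }
  | { kind: "webhook"; path: string };

type AutomationStep =
  | { kind: "tool"; name: string }
  | { kind: "prompt"; text: string }
  | { kind: "condition"; check: (ctx: Record<string, number>) => boolean }
  | { kind: "notify"; channel: string; message: string };

// The disk-usage alert from the example above.
const diskAlert: { trigger: Trigger; steps: AutomationStep[] } = {
  trigger: { kind: "cron", schedule: "0 * * * *" }, // top of every hour
  steps: [
    { kind: "tool", name: "get_disk_usage" },
    { kind: "condition", check: (ctx) => Object.values(ctx).some((pct) => pct > 90) },
    { kind: "notify", channel: "telegram", message: "Disk usage above 90%" },
  ],
};

// Evaluate the condition against sample per-mount usage data.
const cond = diskAlert.steps[1];
const shouldNotify =
  cond.kind === "condition" && cond.check({ "/": 42, "/mnt/media": 93 });
```

A discriminated union like this is what lets an engine validate workflows statically while still composing any trigger with any sequence of steps.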


Architecture at a Glance

Talome is a TypeScript monorepo with three packages:

  • apps/core — Hono backend serving the API, AI agent, 230+ tools, Docker orchestration, SQLite database, automation engine, and MCP server
  • apps/dashboard — Next.js 16 frontend with a widget-based dashboard, app store, media views, terminal, and the AI chat interface
  • packages/types — Shared TypeScript types used by both apps

All state lives in a single SQLite database — users, conversations, memories, automations, widgets, audit logs, app records. The backend talks to Docker via the Unix socket. The AI communicates with Anthropic Claude (recommended), OpenAI, or local Ollama models through the Vercel AI SDK. The MCP server exposes every tool to external AI environments — Claude Code, Cursor, and Claude Desktop can manage your server with the same 230+ tools the dashboard uses.

Security

Three security modes control what the AI can do:

  • Permissive — full access, zero friction
  • Cautious (default) — destructive operations require confirmation
  • Locked — read-only, no modifications

Every tool is classified into a tier (read / modify / destructive), and the security gateway enforces the active mode on every call. Every tool execution is logged to an audit trail. No eval(), no arbitrary code execution — all inputs validated with Zod schemas.
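The mode-by-tier decision can be expressed as a small pure function. This mirrors the decision table described above; the code itself is an illustrative sketch, not Talome's source.

```typescript
// Three security modes, three tool tiers, three possible decisions.
type Mode = "permissive" | "cautious" | "locked";
type Tier = "read" | "modify" | "destructive";
type Decision = "allow" | "confirm" | "deny";

// Locked: read-only. Cautious: destructive needs confirmation.
// Permissive: everything goes through.
function gate(mode: Mode, tier: Tier): Decision {
  if (mode === "locked") return tier === "read" ? "allow" : "deny";
  if (mode === "cautious" && tier === "destructive") return "confirm";
  return "allow";
}
```

Because the gateway is a pure function of (mode, tier), it can be enforced uniformly on every tool call and audited trivially.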

