# hiveloom serve
Starts the long-running Hiveloom service. This is the process that owns the admin API, every per-agent MCP endpoint, and the in-process agent runtime. In production it runs under systemd; locally it runs in a terminal.
## Synopsis

```
hiveloom serve [OPTIONS]
```

## Options
| Flag | Default | Description |
|---|---|---|
| `--host <HOST>` | `127.0.0.1` | Bind address. Use `0.0.0.0` only behind a reverse proxy. |
| `--port <PORT>` | `3000` | Listen port for HTTP. |
| `--data-dir <DATA_DIR>` | `/var/lib/hiveloom` | Where the platform DB, tenant DBs, and master key live. Also reads `HIVELOOM_DATA_DIR`. |
| `--no-scheduler` | (off) | Disable the in-process job scheduler. The HTTP API still serves chat and admin requests, but cron-driven `schedule create` jobs do not fire. Useful for ephemeral local sessions or when a separate scheduler process owns that role. |
## Examples

Run on a developer laptop with default settings:

```
hiveloom serve
```

Run on a production VPS, bound to localhost (Caddy terminates TLS in front):

```
hiveloom serve --host 127.0.0.1 --port 3000 --data-dir /var/lib/hiveloom
```

Override the data directory via the environment:

```
HIVELOOM_DATA_DIR=/srv/hiveloom hiveloom serve
```

## Environment variables
| Variable | Purpose |
|---|---|
| `HIVELOOM_DATA_DIR` | Same as `--data-dir`. Wins if both are set. |
| `HIVELOOM_OPENAI_BASE_URL` (alias `HIVELOOM_OPENAI_COMPAT_BASE_URL`) | Points the OpenAI-compatible HTTP client at any other endpoint that implements the OpenAI Chat Completions API: OpenRouter, Groq, Together, DeepSeek, Mistral, vLLM, LiteLLM, Ollama, and so on. Must include `/v1` (or the upstream's equivalent prefix). Process-wide: one Hiveloom instance means one upstream. See Use a different model provider. |
| `HIVELOOM_MEMORY_CURATION_INTERVAL_TURNS` | Cadence of the per-agent automatic memory curator. Default `8`; `0` disables the periodic pass (explicit "remember that …" requests still trigger). |
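As a sketch of how these variables combine, the snippet below points the OpenAI-compatible client at a locally running Ollama server and disables periodic memory curation. The Ollama URL is an assumption for illustration; substitute your actual upstream, and note the required `/v1` suffix.

```shell
# Assumed upstream: a local Ollama server on its default port. The /v1 suffix is required.
export HIVELOOM_OPENAI_BASE_URL="http://127.0.0.1:11434/v1"

# Optional: disable the periodic memory-curation pass for this session.
export HIVELOOM_MEMORY_CURATION_INTERVAL_TURNS=0

# Then start the service as usual:
# hiveloom serve
```

Because the base URL is process-wide, every agent served by this instance will talk to the same upstream.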
## What gets exposed

| Path | Purpose |
|---|---|
| `/healthz` | Liveness check used by Caddy and uptime monitors. |
| `/api/admin/...` | Admin API for tenants, agents, credentials, etc. |
| `/api/agents/<id>/mcp` | Per-agent MCP endpoint for chat clients. |
## In-process workers

`hiveloom serve` runs three things in the same process:

- the HTTP API,
- the agent runtime, which executes chat turns, scheduled-job runs, and event-routed runs against the configured LLM provider,
- the scheduler, which polls `scheduled_jobs` once per second and fires due cron entries through the agent runtime.

The scheduler can be disabled with `--no-scheduler`. The agent runtime is always on; it has no flag, since chat would not work without it.
## Operating it

- Under systemd: `sudo systemctl start hiveloom` (see systemd setup).
- Verify it's healthy: `hiveloom health`.
- Tail logs: `journalctl -u hiveloom -f` or `hiveloom tail`.
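For reference, a minimal systemd unit for the service might look like the sketch below. The binary path, user, and `WantedBy` target are assumptions, not the project's shipped unit; the linked systemd setup page is authoritative.

```ini
# /etc/systemd/system/hiveloom.service — a minimal sketch; adjust paths and user.
[Unit]
Description=Hiveloom service
After=network-online.target
Wants=network-online.target

[Service]
# Bind to localhost; a reverse proxy terminates TLS in front (see below).
ExecStart=/usr/local/bin/hiveloom serve --host 127.0.0.1 --port 3000 --data-dir /var/lib/hiveloom
User=hiveloom
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

After installing or editing the unit, run `sudo systemctl daemon-reload` before `sudo systemctl start hiveloom`.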
Never expose port 3000 directly to the public internet: terminate TLS with Caddy and proxy from `:443` to `127.0.0.1:3000`. See Reverse proxy.
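A minimal Caddyfile for that setup could look like this sketch. The hostname is a placeholder, and the linked Reverse proxy page may ship a fuller configuration:

```
# Caddy obtains and renews TLS for the domain, listens on :443,
# and proxies all traffic to the locally bound Hiveloom service.
hiveloom.example.com {
    reverse_proxy 127.0.0.1:3000
}
```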