# Hiveloom — Full documentation
Multi-tenant open-source AI agent platform.
Source: https://docs.hiveloom.cloud
Generated: 2026-04-27T16:48:06.180Z
---
# Introduction
URL: https://docs.hiveloom.cloud/
# Introduction
Hiveloom is a multi-tenant AI agent platform. One binary, one SQLite file per tenant,
one CLI. Self-host it on a small VPS, manage it from the terminal or a TUI, and
expose agents over HTTP and MCP to clients like Claude Desktop and Cursor.
These docs are **opinionated and linear**. If you follow them top to bottom you will
end up with:
1. A Hiveloom instance reachable at a valid public `https://` URL.
2. An agent that answers your chat messages, backed by the LLM provider of your
choice.
3. That same agent connected to Claude Desktop (and any other MCP client) as a tool
source.
4. A custom markdown skill of your own design, changing how the agent behaves.
## Pick your deployment path
Two production-friendly paths are documented and converge on the same MCP URL
shape (`https://<host>/mcp/<tenant>/<agent>`): [Deploy on a VPS](/deploy)
with Caddy + Let's Encrypt, or [Cloudflare Tunnel](/deploy/cloudflare-tunnel)
for outbound-only HTTPS without opening inbound ports.
## Who this is for
You're comfortable with SSH and a terminal. You have a VPS (Ubuntu/Debian), a domain,
and an LLM API key — Anthropic, OpenAI, or any provider that implements the
OpenAI Chat Completions OpenAPI spec (OpenRouter, Groq, Together, DeepSeek,
Mistral, vLLM, LiteLLM, Ollama, …). You do **not** need to know Rust, Caddy,
or the Model Context Protocol. The docs cover everything.
## The guided journey
The sidebar on the left and the next/previous links at the bottom of every page walk
you through the five-stage journey. Skip ahead if you already have a running
instance; otherwise, start with [Install](/install).
## Agent-discoverable
These docs are also machine-readable. Every page is reachable as raw markdown by
appending `.md` to its URL (for example [`/install.md`](/install.md)), and the full
corpus is indexed at [`/llms.txt`](/llms.txt) and concatenated at
[`/llms-full.txt`](/llms-full.txt). If you're an AI assistant reading this: start
there.
---
# Install
URL: https://docs.hiveloom.cloud/install
# Install
Hiveloom is a single Rust binary. The fastest path is the release installer;
building from source works too.
## Recommended: one-line install
Linux (x86_64 or aarch64), macOS (Intel or Apple Silicon):
```bash
curl -fsSL https://bin.hiveloom.cloud/install.sh | bash
```
That installer:
1. Detects your OS and CPU architecture.
2. Downloads the matching `hiveloom` binary from
`https://bin.hiveloom.cloud/releases/latest/` into `/usr/local/bin`.
3. On Linux, also creates a `hiveloom` system user, prepares
`/var/lib/hiveloom`, and writes `/etc/systemd/system/hiveloom.service`.
It does **not** start the service.
Verify:
```bash
hiveloom --version
```
### Install variants (pin a version, custom dir, no systemd, direct download)
#### Pin a specific version
```bash
curl -fsSL https://bin.hiveloom.cloud/install.sh | bash -s -- --version 0.2.0
```
Available versions are listed under
`https://bin.hiveloom.cloud/releases/`. The `latest` alias always mirrors
the most recent release.
#### Install somewhere other than `/usr/local/bin`
```bash
curl -fsSL https://bin.hiveloom.cloud/install.sh | bash -s -- --install-dir ~/.local/bin
```
Make sure the target directory is on your `$PATH`.
#### Skip the systemd service
For a user-level install (no service, no system user, no data dir):
```bash
curl -fsSL https://bin.hiveloom.cloud/install.sh | bash -s -- --no-service
```
Useful on macOS and on Linux developer laptops where you just want the CLI.
#### Direct binary download
If you don't want to pipe to bash, pick the binary for your platform and
install it yourself:
```bash
# Linux x86_64
curl -fsSL -o hiveloom https://bin.hiveloom.cloud/releases/latest/hiveloom-linux-x86_64
chmod +x hiveloom
sudo mv hiveloom /usr/local/bin/
# Linux aarch64
curl -fsSL -o hiveloom https://bin.hiveloom.cloud/releases/latest/hiveloom-linux-aarch64
# macOS Apple Silicon
curl -fsSL -o hiveloom https://bin.hiveloom.cloud/releases/latest/hiveloom-darwin-aarch64
# macOS Intel
curl -fsSL -o hiveloom https://bin.hiveloom.cloud/releases/latest/hiveloom-darwin-x86_64
```
SHA-256 checksums are published next to each binary as `.sha256`.
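To see the verification step in isolation, here is an offline stand-in that mimics the flow: a fake "binary" takes the place of the downloaded release, and `sha256sum -c` runs the same check you would run after fetching the real binary and its `.sha256` file side by side (on macOS, use `shasum -a 256 -c` instead).

```shell
# Offline stand-in for checksum verification. With a real release you
# would first curl the binary and its .sha256 file into the same directory.
printf 'pretend-binary' > hiveloom-linux-x86_64
sha256sum hiveloom-linux-x86_64 > hiveloom-linux-x86_64.sha256
sha256sum -c hiveloom-linux-x86_64.sha256   # prints "hiveloom-linux-x86_64: OK"
```

If the check prints `FAILED`, the download is corrupt or tampered with; re-download before installing.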
## Build from source
```bash
git clone https://github.com/FrancescoMrn/hiveloom-rust
cd hiveloom-rust
cargo build --release
./target/release/hiveloom --version
```
Requires a stable Rust toolchain (edition 2021).
## Platform support matrix
| Platform | One-line install |
|---|---|
| Linux x86_64 | Yes (`linux-x86_64`) |
| Linux aarch64 | Yes (`linux-aarch64`) |
| macOS aarch64 (Apple Silicon) | Yes (`darwin-aarch64`) |
| macOS x86_64 (Intel) | Yes (`darwin-x86_64`) |
| Windows | No — use WSL2 |
The guided journey assumes **Linux (Ubuntu/Debian)** as the VPS target.
macOS is supported as a developer workstation; skip the deployment pages if
you're not running on Linux.
## What you just installed
One binary with a lot of subcommands. Run `hiveloom --help` to see them.
The most relevant for the next pages:
| Command | What it does |
|---|---|
| `hiveloom serve` | Starts the HTTP service |
| `hiveloom agent` | Manages agents (create, list, show, delete) |
| `hiveloom credential` | Stores LLM provider API keys |
| `hiveloom chat` | Streams a conversation with an agent |
| `hiveloom tls render` | Prints a Caddyfile for a given hostname |
| `hiveloom mcp-identity` | Creates MCP client credentials |
Full list at [`/cli`](/cli).
## Uninstall
```bash
# Stop and disable the service (Linux, if installed)
sudo systemctl stop hiveloom
sudo systemctl disable hiveloom
sudo rm /etc/systemd/system/hiveloom.service
# Remove the binary
sudo rm /usr/local/bin/hiveloom
# Remove the data directory (this deletes all tenants, agents, and credentials)
sudo rm -rf /var/lib/hiveloom
# Remove the service user
sudo userdel hiveloom
```
## Next
Go to [Deploy on a VPS](/deploy) to make your instance reachable from
the internet with Caddy, or to [Cloudflare Tunnel](/deploy/cloudflare-tunnel)
for an outbound-only HTTPS path. If you just want a local playground, skip
straight to [Store an LLM credential](/first-agent/credentials).
---
# Deploy on a VPS
URL: https://docs.hiveloom.cloud/deploy
# Deploy on a VPS
This section walks you from a plain `hiveloom serve` to a production-ish
deployment reachable at your own domain over `https://`, with an automatically
renewed Let's Encrypt certificate. The admin and MCP ports stay off the public internet.
If your goal is "give Claude/Cursor an HTTPS URL quickly" without opening ports
80 and 443, jump to [Cloudflare Tunnel](/deploy/cloudflare-tunnel) instead.
## The five steps
1. [DNS](/deploy/dns): an A or AAAA record for your hostname.
2. [Firewall](/deploy/firewall): SSH + 80 + 443 only.
3. [Reverse proxy](/deploy/reverse-proxy): Caddy is the default; drop in the Hiveloom Caddyfile.
4. [TLS](/deploy/tls): trigger Let's Encrypt issuance, verify HTTPS.
5. [systemd](/deploy/systemd): long-lived, bound to loopback.
Follow them in order. Each page has the full command set — you don't need to
cross-reference anything.
## What Hiveloom contributes
```bash
hiveloom tls render --host hiveloom.example.com --email you@example.com \
| sudo tee /etc/caddy/Caddyfile.d/hiveloom.caddy
```
Prints a ready-to-use Caddyfile to stdout. That is the entire built-in tooling
for this feature — DNS, firewall, Caddy install, and the systemd unit are all
operator actions.
## Smoke test at the end
If everything is wired up correctly:
```bash
curl -s https://hiveloom.example.com/healthz
# {"status":"ok"}
curl -s https://hiveloom.example.com/.well-known/oauth-authorization-server | jq .issuer
# "https://hiveloom.example.com"
```
The OAuth metadata URLs **must** start with `https://`. If they start with
`http://`, the reverse proxy isn't forwarding the right headers — revisit
[Reverse proxy](/deploy/reverse-proxy).
## Alternative path
[Cloudflare Tunnel](/deploy/cloudflare-tunnel): outbound-only HTTPS for MCP clients without opening inbound ports.
---
# Cloudflare Tunnel
URL: https://docs.hiveloom.cloud/deploy/cloudflare-tunnel
# Cloudflare Tunnel
This is the fastest production-friendly path for exposing Hiveloom to Claude,
Cursor, and other remote MCP clients that require a public `https://` URL.
## Why pick this path
- No inbound ports required on the VPS.
- No local reverse proxy required.
- No certificate issuance or renewal on the box.
- Works well when your main goal is "make `/mcp/<tenant>/<agent>` reachable over HTTPS".
Use [Deploy on a VPS](/deploy) instead if you want Caddy and full control of
TLS on your own host.
## Requirements
- A VPS with Hiveloom running on `127.0.0.1:3000`
- A domain managed in Cloudflare
- `cloudflared` installed on the VPS
Verify Hiveloom first:
```bash
curl -s http://127.0.0.1:3000/healthz
# {"status":"ok"}
```
## The quick path
```bash
# 1. Authenticate cloudflared with your Cloudflare account
cloudflared tunnel login
# 2. Create the tunnel
cloudflared tunnel create hiveloom
# 3. Create the DNS route
cloudflared tunnel route dns hiveloom mcp.hiveloom.cloud
```
Then write `/etc/cloudflared/config.yml`:
```yaml
tunnel: hiveloom
credentials-file: /root/.cloudflared/<tunnel-id>.json
ingress:
- hostname: mcp.hiveloom.cloud
service: http://127.0.0.1:3000
originRequest:
httpHostHeader: mcp.hiveloom.cloud
- service: http_status:404
```
Start it as a service:
```bash
sudo cloudflared service install
sudo systemctl enable --now cloudflared
sudo systemctl status cloudflared
```
Your MCP base URL is now:
```text
https://mcp.hiveloom.cloud
```
And an agent endpoint looks like:
```text
https://mcp.hiveloom.cloud/mcp/default/support-bot
```
## Why `httpHostHeader` matters
Hiveloom builds OAuth and MCP metadata from forwarded headers. In the current
server code, public URLs are derived from:
- `X-Forwarded-Proto` or `X-Forwarded-Protocol`
- `X-Forwarded-Host` or `Host`
Cloudflare forwards `X-Forwarded-Proto`, and `httpHostHeader` makes sure the
upstream `Host` matches your public hostname. That keeps OAuth discovery and
protected-resource metadata on `https://mcp.hiveloom.cloud/...` instead of
falling back to loopback.
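The fallback order can be sketched in shell. This is a model of the documented rule, not Hiveloom's actual Rust code; the variable names stand in for the corresponding HTTP headers:

```shell
# Model of the documented precedence for deriving public URLs:
# X-Forwarded-Proto wins over X-Forwarded-Protocol (default http),
# X-Forwarded-Host wins over the upstream Host header.
X_FORWARDED_PROTO="https"
X_FORWARDED_HOST="mcp.hiveloom.cloud"
HOST="127.0.0.1:3000"   # what Host would be without httpHostHeader
proto="${X_FORWARDED_PROTO:-${X_FORWARDED_PROTOCOL:-http}}"
host="${X_FORWARDED_HOST:-$HOST}"
echo "issuer: ${proto}://${host}"   # prints "issuer: https://mcp.hiveloom.cloud"
```

Drop either forwarded header and the derived URL degrades to `http://` or the loopback host, which is exactly the failure mode MCP clients reject.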
## Verify before touching an MCP client
```bash
curl -s https://mcp.hiveloom.cloud/healthz
curl -s https://mcp.hiveloom.cloud/.well-known/oauth-authorization-server | jq .issuer
```
Expect:
```text
"https://mcp.hiveloom.cloud"
```
If that issuer comes back as `http://...`, fix the tunnel config before you
continue.
## Why this works for Claude and remote MCP clients
Anthropic's MCP connector docs currently say:
- the MCP server must be publicly exposed through HTTP
- the MCP server URL must start with `https://`
That is exactly what this tunnel provides.
## Create an MCP identity
Once HTTPS is correct:
```bash
hiveloom mcp-identity create \
--tenant default \
--name claude-desktop \
--agent support-bot
```
If the printed MCP URL still shows `http://127.0.0.1:3000`, keep the same
`/mcp/<tenant>/<agent>` path and replace only the scheme + host with your
public hostname:
```text
https://mcp.hiveloom.cloud/mcp/default/support-bot
```
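That rewrite is mechanical, as this sketch with hypothetical values shows; only the scheme and host change, never the path:

```shell
# Rewrite a loopback MCP URL to the public hostname, keeping the
# /mcp/<tenant>/<agent> path intact (example values, substitute your own).
printed="http://127.0.0.1:3000/mcp/default/support-bot"
public_host="mcp.hiveloom.cloud"
path="${printed#*://*/}"   # strips "http://127.0.0.1:3000/"
echo "https://${public_host}/${path}"
# prints "https://mcp.hiveloom.cloud/mcp/default/support-bot"
```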
## Troubleshooting
- `cloudflared` shows `502` or `Unable to reach the origin service`:
Hiveloom is not listening on `127.0.0.1:3000`, or the tunnel points at the
wrong port.
- OAuth discovery returns `http://127.0.0.1:3000`:
`httpHostHeader` is missing or wrong in `config.yml`.
- Tunnel works, but the MCP client still rejects it:
fetch `/.well-known/oauth-authorization-server` and verify every URL starts
with `https://`.
- You want the classic reverse-proxy route instead:
use [Deploy on a VPS](/deploy).
## Next
- [Discover the MCP endpoint](/chat-client/mcp-endpoint)
- [Connect Claude Desktop](/chat-client/claude-desktop)
---
# DNS
URL: https://docs.hiveloom.cloud/deploy/dns
# DNS
Before Caddy (or any proxy) can obtain a Let's Encrypt certificate, your hostname
must resolve to the VPS. Do this first — ACME fails hard if DNS is wrong.
## 1. Create an A record
At your DNS provider, create an `A` record (and optionally an `AAAA` record for
IPv6) for the hostname you want to use. The value is your VPS's public IP.
Replace `hiveloom.example.com` below with your hostname.
## 2. Wait for propagation
Most providers propagate within seconds; some take minutes. Verify from a machine
that is **not** the VPS:
```bash
dig +short hiveloom.example.com
# Must print your VPS's public IP.
```
If `dig` returns nothing, wait another minute and try again. If it returns the
wrong IP, fix the record before proceeding.
From a different vantage point — useful to rule out local DNS caching:
```bash
dig +short hiveloom.example.com @1.1.1.1
dig +short hiveloom.example.com @8.8.8.8
```
All three should agree.
## 3. AAAA is optional but recommended
If your VPS has an IPv6 address, add an `AAAA` record pointing at it. Let's Encrypt
will validate whichever family responds first; having both means your site is
reachable from IPv6-only clients.
## Common failure modes
- **Wrong IP**: you put the gateway/NAT address instead of the VPS's public address.
`curl ifconfig.me` from the VPS prints the right value.
- **Proxied by Cloudflare**: if the record shows the orange cloud icon in
Cloudflare's dashboard, Let's Encrypt will see Cloudflare's IP, not yours, and
ACME HTTP-01 will fail. Set the record to "DNS only" (grey cloud) for the initial
issuance, or use DNS-01 challenges.
- **TTL too high**: if you previously pointed the name somewhere else, downstream
resolvers may still have the old record cached. Wait for TTL to expire, or lower
TTL the day before the change.
Next: [Firewall](/deploy/firewall).
---
# Firewall
URL: https://docs.hiveloom.cloud/deploy/firewall
# Firewall
Once Caddy terminates TLS, the only public ports you need are SSH, TCP/80
(for ACME HTTP-01 challenges and the HTTPS redirect), and TCP/443.
## UFW (Ubuntu / Debian default)
Before running `ufw enable` on a remote box:
- Double-check the SSH port number. An off-by-one here will cut your session.
- Keep a second SSH session open in another terminal as insurance.
- Confirm no other service you care about is listening on a port you're about to
block (`ss -tlnp`).
```bash
# Detect your actual SSH port first — don't assume 22
sudo grep -i '^Port' /etc/ssh/sshd_config /etc/ssh/sshd_config.d/*.conf 2>/dev/null
# Replace 22 below with whatever Port is set to.
sudo ufw default deny incoming
sudo ufw default allow outgoing
sudo ufw allow 22/tcp comment "ssh" # use YOUR actual ssh port
sudo ufw allow 80/tcp comment "caddy acme+redirect"
sudo ufw allow 443/tcp comment "caddy https"
sudo ufw enable
sudo ufw status numbered
```
## firewalld (RHEL / Fedora)
```bash
sudo firewall-cmd --permanent --add-service=ssh
sudo firewall-cmd --permanent --add-service=http
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
```
## Cloud-provider security groups
Whatever firewall the VPS provider runs *in front of* your host (AWS Security
Groups, Hetzner Cloud Firewall, DigitalOcean Cloud Firewall, etc.) needs the same
rule shape: allow inbound `22` (or your SSH port), `80`, `443` from `0.0.0.0/0`
and `::/0`; deny the rest.
**UFW / firewalld only cover the host-level layer.** If port 80 is blocked at the
cloud edge, ACME challenges cannot complete and certificate issuance fails.
## Do not expose Hiveloom's upstream port
Hiveloom defaults to `:3000`. You do **not** want that open to the public; Caddy
sits in front of it and proxies from loopback. The [systemd](/deploy/systemd) page
covers binding Hiveloom to `127.0.0.1:3000`.
Next: [Reverse proxy](/deploy/reverse-proxy).
---
# Reverse proxy
URL: https://docs.hiveloom.cloud/deploy/reverse-proxy
# Reverse proxy (Caddy)
Caddy terminates TLS, obtains a free Let's Encrypt certificate, and proxies traffic
to Hiveloom running on localhost. One command to install on Debian/Ubuntu:
```bash
sudo apt install -y debian-keyring debian-archive-keyring apt-transport-https curl
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' \
| sudo gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' \
| sudo tee /etc/apt/sources.list.d/caddy-stable.list
sudo apt update
sudo apt install -y caddy
```
For other distros, see the [Caddy install docs](https://caddyserver.com/docs/install).
Confirm:
```bash
caddy version # should print v2.x
systemctl is-active caddy # should print "active"
```
## Drop in the Hiveloom Caddyfile
Hiveloom ships a helper that prints a ready-to-use Caddyfile for your hostname:
```bash
sudo mkdir -p /etc/caddy/Caddyfile.d
hiveloom tls render --host hiveloom.example.com --email you@example.com \
| sudo tee /etc/caddy/Caddyfile.d/hiveloom.caddy
sudo systemctl reload caddy
```
Replace `hiveloom.example.com` and `you@example.com`. The email is used by Let's
Encrypt for renewal notifications.
The first request to `https://hiveloom.example.com/` will trigger Caddy's ACME
flow. Give it 10–30 seconds, then go to [TLS](/deploy/tls) to verify.
## Already running Nginx, Traefik, or Cloudflare Tunnel?
You don't need Caddy. Point your existing proxy at Hiveloom on `127.0.0.1:3000`
and forward the right headers.
See [Bring your own reverse proxy](/deploy/reverse-proxy/byo-proxy) for the required forwarded headers, a working Nginx config, and the Cloudflare Tunnel route.
Next: [TLS](/deploy/tls).
---
# TLS
URL: https://docs.hiveloom.cloud/deploy/tls
# TLS / Let's Encrypt
If the previous pages are done — DNS points at the VPS, firewall allows 80+443,
Caddy is running with the Hiveloom Caddyfile — the first request to your hostname
triggers ACME issuance automatically.
## Trigger issuance
```bash
curl -I https://hiveloom.example.com/
```
Expect the first request to stall for 5–30 seconds while Caddy negotiates the
certificate. Subsequent requests are instant.
## Verify
```bash
curl -s https://hiveloom.example.com/healthz
# {"status":"ok"}
curl -s https://hiveloom.example.com/.well-known/oauth-authorization-server | jq
```
The OAuth metadata URLs **must** start with `https://`. If they start with
`http://`, the proxy isn't forwarding `X-Forwarded-Proto: https`. Revisit
[Reverse proxy](/deploy/reverse-proxy).
## Iterate without triggering rate limits
Let's Encrypt rate-limits heavily on repeat attempts for the same hostname. When
iterating, use staging:
```bash
hiveloom tls render --host hiveloom.example.com --email you@example.com --acme-env staging
```
The resulting cert is not publicly trusted; add `-k` to curl. Swap back to
production with a fresh render + reload once things look right.
## Port scan
From another host:
```bash
nmap -Pn -p 1-10000 hiveloom.example.com
```
Only SSH, 80, and 443 should answer. If port 3000 (or any other upstream) answers,
something is wrong — see [Firewall](/deploy/firewall) and
[systemd](/deploy/systemd).
## Common ACME failures
**`context deadline exceeded`** — two causes:
1. DNS doesn't resolve to this VPS from Let's Encrypt's vantage.
2. TCP/80 is blocked upstream (cloud security group, not UFW).
Port 80 must be reachable *from the internet*, not just from the VPS itself.
Check your cloud provider's security group as well as UFW.
**`rate limit exceeded`** — you've tried too many times. Wait an hour and use
`--acme-env staging` until you have a clean run, then switch back.
**Certificate issued but OAuth metadata still says `http://`** — the proxy isn't
forwarding `X-Forwarded-Proto: https`. Re-check
[Reverse proxy](/deploy/reverse-proxy).
Next: [systemd](/deploy/systemd).
---
# systemd
URL: https://docs.hiveloom.cloud/deploy/systemd
# systemd
With Caddy sitting in front, Hiveloom should bind to loopback only. The public
internet reaches Hiveloom **through Caddy**, not directly.
## 1. Create the service user (if not already)
```bash
sudo useradd --system --home-dir /var/lib/hiveloom --shell /usr/sbin/nologin hiveloom
sudo mkdir -p /var/lib/hiveloom
sudo chown hiveloom:hiveloom /var/lib/hiveloom
```
## 2. Write the unit file
Save as `/etc/systemd/system/hiveloom.service`:
```ini
[Unit]
Description=Hiveloom service
After=network-online.target
Wants=network-online.target
[Service]
Type=simple
User=hiveloom
Group=hiveloom
ExecStart=/usr/local/bin/hiveloom serve --host 127.0.0.1 --port 3000 --data-dir /var/lib/hiveloom
Restart=on-failure
RestartSec=5
Environment=HIVELOOM_DATA_DIR=/var/lib/hiveloom
# Hardening (optional but recommended)
NoNewPrivileges=yes
ProtectSystem=strict
ProtectHome=yes
ReadWritePaths=/var/lib/hiveloom
PrivateTmp=yes
[Install]
WantedBy=multi-user.target
```
Note the `--host 127.0.0.1` flag: Hiveloom listens on loopback only. Caddy
proxies from there.
If you installed Hiveloom with the one-line installer on Linux, you already
have a `hiveloom.service` unit. In that case, verify the current `ExecStart`
line before replacing anything:
```bash
sudo systemctl cat hiveloom
```
The installer-generated unit already binds to `127.0.0.1` by default because
`hiveloom serve` defaults to `--host 127.0.0.1`.
## 3. Enable and start
```bash
sudo systemctl daemon-reload
sudo systemctl enable --now hiveloom
sudo systemctl status hiveloom
```
Verify it's bound to loopback and not the public interface:
```bash
ss -tlnp | grep :3000
# Must show "127.0.0.1:3000", not "0.0.0.0:3000"
```
## 4. End-to-end smoke test
```bash
# Health over HTTPS — hits Caddy, Caddy hits Hiveloom on loopback
curl -s https://hiveloom.example.com/healthz
# {"status":"ok"}
# Metadata — URLs must be https://
curl -s https://hiveloom.example.com/.well-known/oauth-authorization-server | jq .issuer
# "https://hiveloom.example.com"
```
If both succeed, your VPS deployment is done. Time to create your first agent.
## Logs
Systemd captures Hiveloom's stderr/stdout:
```bash
sudo journalctl -u hiveloom -f # follow live
sudo journalctl -u hiveloom --since "10 min ago"
```
Hiveloom also writes its own structured logs — see
[`hiveloom logs`](/cli).
## Upgrading the binary
Stop the service, replace the binary, restart:
```bash
sudo systemctl stop hiveloom
sudo cp /path/to/new/hiveloom /usr/local/bin/hiveloom
sudo systemctl start hiveloom
sudo systemctl status hiveloom
```
For a more thorough upgrade path (backups, migration, rollback), see
[Operations](/operations).
Next: [Create your first agent](/first-agent/credentials).
---
# Bring your own reverse proxy
URL: https://docs.hiveloom.cloud/deploy/reverse-proxy/byo-proxy
# Bring your own reverse proxy
If you already run Nginx, Traefik, Cloudflare Tunnel, or a Kubernetes ingress,
you don't need Caddy. Point your proxy at Hiveloom bound to `127.0.0.1:3000`
(see [systemd](/deploy/systemd)) and make sure the following forwarded headers
are set.
## Required forwarded headers
| Header | Required value | Consumed by |
|---|---|---|
| `X-Forwarded-Proto` | `https` | Used to construct `https://` URLs in OAuth metadata. |
| `X-Forwarded-Host` | your public hostname | Hostname in public URLs. |
| `Host` | your public hostname | Fallback if `X-Forwarded-Host` is absent. |
| `X-Forwarded-For` | the client IP | Optional; logged on the Hiveloom side. |
If these are wrong, the OAuth metadata returns `http://` URLs and MCP clients
reject the auth server.
## Minimal Nginx snippet
```nginx
server {
listen 443 ssl http2;
listen [::]:443 ssl http2;
server_name hiveloom.example.com;
# ssl_certificate / ssl_certificate_key — via certbot or your own CA
location / {
proxy_pass http://127.0.0.1:3000;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-Host $host;
proxy_set_header X-Forwarded-Proto https;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_read_timeout 300s;
}
}
```
## Cloudflare Tunnel
Cloudflare Tunnel can replace your reverse proxy entirely for the public edge.
Point the tunnel at `http://127.0.0.1:3000` and set `httpHostHeader` to your
public hostname so Hiveloom builds correct OAuth and MCP URLs. Cloudflare
forwards `X-Forwarded-Proto`, and Hiveloom already uses that header when
constructing public metadata.
Full working example: [Cloudflare Tunnel](/deploy/cloudflare-tunnel).
---
# Create your first agent
URL: https://docs.hiveloom.cloud/first-agent
# Create your first agent
By the end of this section you'll have a named agent answering your messages
from the CLI, backed by the LLM provider of your choice.
## The three steps
1. [Store an LLM credential](/first-agent/credentials): stash an Anthropic or OpenAI key in the encrypted vault.
2. [Create the agent](/first-agent/create): one command. Pick a model and a name.
3. [Chat from the CLI](/first-agent/chat-cli): `hiveloom chat <agent>`, a streaming REPL.
## What you'll have at the end
A tenant-local agent reachable over the CLI. The next section,
[Connect a chat client](/chat-client), exposes the same agent over MCP so
Claude Desktop and Cursor can talk to it too.
---
# Store an LLM credential
URL: https://docs.hiveloom.cloud/first-agent/credentials
# Store an LLM credential
Agents need a language model to answer with. Hiveloom stores provider secrets in
an encrypted vault. Secrets **never** go on the command line as arguments.
## Quick rule
Hiveloom picks the provider from the agent's model ID:
- Models starting with `claude-` use the credential named `anthropic`.
- Everything else uses the credential named `openai`.
For the guided path, store **one** of these:
| Provider | Credential name | Model ID examples |
|---|---|---|
| Anthropic | `anthropic` | `claude-sonnet-4-20250514` |
| OpenAI | `openai` | `gpt-4o`, `gpt-4o-mini` |
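The selection rule amounts to a single prefix match, sketched here as a shell function (a model of the documented rule, not the actual router code):

```shell
# Model of provider selection: claude-* goes to the Anthropic client,
# everything else to the OpenAI Chat Completions client.
pick_credential() {
  case "$1" in
    claude-*) echo anthropic ;;
    *)        echo openai ;;
  esac
}
pick_credential claude-sonnet-4-20250514   # prints "anthropic"
pick_credential gpt-4o-mini                # prints "openai"
pick_credential anthropic/claude-sonnet-4  # prints "openai" (no claude- prefix)
```

The third case matters later: OpenRouter-style model IDs like `anthropic/claude-sonnet-4` do not start with `claude-`, so they route through the `openai` credential.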
## Set the credential
Three ways to feed the secret — never as a CLI argument:
```bash
# 1. From a file
hiveloom credential set anthropic --from-file ~/anthropic.key
# 2. From an environment variable
hiveloom credential set anthropic --from-env ANTHROPIC_API_KEY
# 3. From stdin (convenient for one-off pipes)
echo "sk-ant-..." | hiveloom credential set anthropic
```
For OpenAI, replace `anthropic` with `openai`.
## Want OpenRouter, Groq, Ollama, or another OpenAPI-compatible provider?
Hiveloom ships an OpenAPI-spec client that can be pointed at any provider that
implements OpenAI's Chat Completions schema — same `openai` credential,
different base URL.
See [Use a different model provider](/first-agent/credentials/providers): set `HIVELOOM_OPENAI_BASE_URL` and keep the `openai` credential. That page includes a nine-row provider matrix and the caveats.
## Verify
```bash
hiveloom credential list
```
The table shows credential names and metadata but never the values.
## Rotate or remove
```bash
cat ~/new-anthropic.key | hiveloom credential rotate anthropic
hiveloom credential remove anthropic
```
## Troubleshooting
- **`credential 'anthropic' not found`** — the agent model starts with
`claude-`, so Hiveloom is looking for the credential named `anthropic`.
- **`credential 'openai' not found`** — the agent model does **not** start with
`claude-`, so Hiveloom is looking for `openai`.
- **Auth failure at chat time** — the key is stored but invalid or revoked.
Rotate it.
Next: [Create the agent](/first-agent/create).
---
# Create the agent
URL: https://docs.hiveloom.cloud/first-agent/create
# Create the agent
With the credential in place, one command creates an agent:
```bash
hiveloom agent create --name support-bot
```
That creates an agent with:
- name `support-bot`
- model `claude-sonnet-4-20250514`
- scope mode `dual`
- system prompt `You are a helpful assistant.`
To pin a specific model and prompt:
```bash
hiveloom agent create \
--name support-bot \
--model claude-sonnet-4-20250514 \
--system-prompt "You are a friendly product-support agent for Hiveloom."
```
Use an OpenAI-family model if you stored the `openai` credential:
```bash
hiveloom agent create \
--name support-bot \
--model gpt-4o-mini \
--system-prompt "You are a friendly product-support agent for Hiveloom."
```
Provider selection is implicit:
- `claude-*` models use the `anthropic` credential.
- all other model IDs use the `openai` credential.
## Confirm
```bash
hiveloom agent list
```
Expect a table row for `support-bot` with its ID, model, and creation timestamp.
Inspect details:
```bash
hiveloom agent show <name>
```
## Interactive alternative
If you prefer a wizard:
```bash
hiveloom
```
Choose **Setup** from the TUI and follow the five guided steps: service →
API key → agent → MCP → test chat.
## Common errors
- **`credential 'anthropic' not found`** — you skipped
[Store an LLM credential](/first-agent/credentials), or the model family and
stored credential do not match.
- **Provider rejects the model** — the model ID is syntactically accepted by
the CLI, but your provider account does not have access to it.
- **`tenant 'default' not found`** — the service is not running yet. Start it
with `hiveloom serve` or `sudo systemctl start hiveloom`.
## Edit, version, delete
```bash
hiveloom agent edit <name>      # change name, model, system prompt
hiveloom agent versions <name>  # list version history
hiveloom agent rollback <name> --to-version N
hiveloom agent delete <name>
```
Every change is versioned; rollback is safe.
Next: [Chat with it from the CLI](/first-agent/chat-cli).
---
# Chat with the agent
URL: https://docs.hiveloom.cloud/first-agent/chat-cli
# Chat with the agent
```bash
hiveloom chat support-bot
```
A stdin/stdout conversation loop. Type a message, press Enter, get a reply.
Conversation context is maintained across turns within the session.
Exit with `Ctrl-C`, `Ctrl-D`, or `/exit`.
## Example session
```text
$ hiveloom chat support-bot
Chatting with support-bot (Ctrl-C to exit)
you: Hi, what are you?
I'm a friendly product-support agent for Hiveloom. I can help you with
questions about installing, running, or configuring your Hiveloom instance.
How can I help?
you: How do I add a markdown skill?
…
```
## Interactive TUI alternative
From inside `hiveloom`:
```
Main menu → Chat → (pick agent) → type
```
Same agent, same conversation history, richer UI.
## Troubleshooting
- **Silent hang after `> `** — the LLM provider is slow or unreachable. Check
`hiveloom logs` for provider errors.
- **`401 unauthorized`** from the provider — credential is invalid. Rotate it
per the [Credentials page](/first-agent/credentials#rotate-or-remove).
- **`context length exceeded`** — your conversation is longer than the model's
window. Start a new session, or inspect
[`hiveloom compaction-log`](/operations) for compaction signals.
Your first agent is running. Next: connect it to an MCP client so Claude Desktop
and Cursor can use it as a tool source.
Next: [Discover the MCP endpoint](/chat-client/mcp-endpoint).
---
# Use a different model provider
URL: https://docs.hiveloom.cloud/first-agent/credentials/providers
# Use a different model provider (OpenAPI-spec compatible)
Hiveloom only ships two HTTP clients in the binary: one that speaks the
Anthropic Messages protocol, and one that speaks the **OpenAI Chat
Completions protocol** as published in OpenAI's OpenAPI spec
(`github.com/openai/openai-openapi`). Anything that conforms to that same
OpenAPI spec — OpenRouter, Groq, Together, DeepSeek, Mistral, Cerebras,
Fireworks, vLLM, LiteLLM, Ollama, etc. — works without touching the binary.
You point the OpenAI client at a different base URL and store the upstream
key under the `openai` credential name.
## One env var
`hiveloom serve` reads `HIVELOOM_OPENAI_BASE_URL` (alias:
`HIVELOOM_OPENAI_COMPAT_BASE_URL`) and uses it instead of
`https://api.openai.com/v1` for every non-`claude-*` model. Set it on the
serve process:
```ini
# /etc/systemd/system/hiveloom.service.d/30-llm-base-url.conf
[Service]
Environment=HIVELOOM_OPENAI_BASE_URL=https://openrouter.ai/api/v1
```
```bash
sudo systemctl daemon-reload && sudo systemctl restart hiveloom
```
The base URL must include the `/v1` (or equivalent) path prefix — Hiveloom
appends `/chat/completions` to whatever you give it.
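The construction is plain string concatenation, sketched here (a model of the documented behavior, not the actual Rust code):

```shell
# Hiveloom appends /chat/completions to whatever base URL you configure.
unset HIVELOOM_OPENAI_BASE_URL
base="${HIVELOOM_OPENAI_BASE_URL:-https://api.openai.com/v1}"
echo "${base}/chat/completions"   # prints "https://api.openai.com/v1/chat/completions"

export HIVELOOM_OPENAI_BASE_URL="https://openrouter.ai/api/v1"
base="${HIVELOOM_OPENAI_BASE_URL:-https://api.openai.com/v1}"
echo "${base}/chat/completions"   # prints "https://openrouter.ai/api/v1/chat/completions"
```

This is why the `/v1` prefix must be part of the value you set: omit it and the request goes to `https://openrouter.ai/api/chat/completions`, which does not exist.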
## Worked examples
All of these only need `--name openai` for the credential and the upstream's
own model IDs in `hiveloom agent create --model …`. The credential is still
named `openai` because it goes through the OpenAI-spec client; the actual
upstream is decided by the base URL.
| Provider | `HIVELOOM_OPENAI_BASE_URL` | Example `--model` |
|---|---|---|
| OpenAI (default) | unset | `gpt-4o-mini` |
| OpenRouter | `https://openrouter.ai/api/v1` | `anthropic/claude-sonnet-4`, `meta-llama/llama-3.1-70b-instruct` |
| Groq | `https://api.groq.com/openai/v1` | `llama-3.1-70b-versatile` |
| Together | `https://api.together.xyz/v1` | `meta-llama/Llama-3.1-70B-Instruct-Turbo` |
| DeepSeek | `https://api.deepseek.com/v1` | `deepseek-chat` |
| Mistral | `https://api.mistral.ai/v1` | `mistral-large-latest` |
| Ollama (local) | `http://127.0.0.1:11434/v1` | `llama3:70b` |
| LiteLLM proxy | your proxy's `/v1` URL | whatever LiteLLM exposes |
| vLLM | `http://<host>:<port>/v1` | the model you launched vLLM with |
For Ollama and other unauthenticated local servers, store any non-empty
string under the `openai` credential — most local runners ignore the
`Authorization` header but the binary still sends it.
## Caveats
- One base URL per `hiveloom serve` process. You can't route different
agents to different upstreams from a single instance — run a separate
Hiveloom instance (or a LiteLLM proxy in front) if you need that.
- The router decides Anthropic vs. OpenAPI client purely by `claude-` model
prefix. To call Claude through OpenRouter, ask for the OpenAI-spec model
name (e.g. `anthropic/claude-sonnet-4`) — that doesn't start with
`claude-`, so it goes through the OpenAPI client and hits OpenRouter.
- Tool-calling and streaming behavior depend on the upstream's spec
conformance. Most of the providers above implement function/tool calls
faithfully; a few don't, in which case the agent loop will skip tool use
for that model.
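The prefix rule in the first and second caveats can be sketched in shell (illustrative only; the actual router lives inside the binary):

```shell
# pick the HTTP client by model-name prefix, as the caveats describe (sketch)
route_for_model() {
  case "$1" in
    claude-*) echo "anthropic" ;;      # Anthropic Messages client
    *)        echo "openai-compat" ;;  # OpenAI-spec client + configured base URL
  esac
}
route_for_model "claude-sonnet-4-6"          # → anthropic
route_for_model "anthropic/claude-sonnet-4"  # → openai-compat
```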
## Back to the basics
Return to [Store an LLM credential](/first-agent/credentials) for the
Anthropic and OpenAI quickstart and the rotate/remove commands.
---
# Connect a chat client
URL: https://docs.hiveloom.cloud/chat-client
# Connect a chat client
Your agent already works from the CLI. This section connects it to the chat
clients people actually use — Claude Desktop, Cursor, and any other MCP client —
over the public HTTPS endpoint you set up in [Deploy](/deploy).
## Steps
1. [Discover the MCP endpoint](/chat-client/mcp-endpoint): generate the URL and the per-client setup code.
2. [Claude Desktop](/chat-client/claude-desktop): paste the URL, paste the setup code, you're done.
3. [Cursor](/chat-client/cursor): same flow on the IDE surface.
Start with [Discover the MCP endpoint](/chat-client/mcp-endpoint) — both client
pages reference its output.
---
# Discover the MCP endpoint
URL: https://docs.hiveloom.cloud/chat-client/mcp-endpoint
# Discover the MCP endpoint
Hiveloom exposes every agent as a Model Context Protocol server at a stable,
per-tenant URL. External clients (Claude Desktop, Cursor, ChatGPT, custom
scripts) connect to that URL and authenticate via OAuth.
## HTTPS matters
For local testing, `http://127.0.0.1:3000/...` is fine.
For remote MCP clients and Claude's hosted MCP connector flow, use a **public**
`https://` URL. Anthropic's MCP connector docs require the server URL to start
with `https://` and the server to be publicly reachable.
The two supported paths in these docs are:
- [Deploy on a VPS](/deploy) with Caddy and Let's Encrypt
- [Cloudflare Tunnel](/deploy/cloudflare-tunnel) for outbound-only HTTPS
## One command
```bash
hiveloom mcp-identity create \
--tenant default \
--name my-desktop \
--agent support-bot
```
Output:
```text
Created MCP identity 'my-desktop' (abc123…)
Setup code: 7f3ab2c19e4d0815…
MCP URL: https://hiveloom.example.com/mcp/default/support-bot
Add the URL to your MCP client (Claude Desktop, Cursor, etc.).
Enter the setup code in the browser when prompted.
```
Write down the **Setup code** and the **MCP URL**. You'll paste them into
[Claude Desktop](/chat-client/claude-desktop) or [Cursor](/chat-client/cursor)
on the next page.
Setup codes expire after 24 hours. If you lose the code, issue a new one:
```bash
hiveloom mcp-identity reissue-setup-code --tenant default --name my-desktop
```
## URL shape
```text
https://<host>/mcp/<tenant>/<agent>
```
If you're running locally without TLS, use `http://127.0.0.1:3000` as the
origin instead — the path is the same.
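Filled in with the example values from `hiveloom mcp-identity create` above, the pieces assemble like this:

```shell
# assemble the MCP URL from its three variable parts
host="hiveloom.example.com"; tenant="default"; agent="support-bot"
echo "https://${host}/mcp/${tenant}/${agent}"
# → https://hiveloom.example.com/mcp/default/support-bot
```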
## Manage identities
```bash
hiveloom mcp-identity list --tenant default
hiveloom mcp-identity show --tenant default --name my-desktop
hiveloom mcp-identity revoke --tenant default --name my-desktop
```
Each identity represents a single client connection. You can create one per
device / user / workflow and revoke them individually.
## Map to a person (optional)
If you're multi-user, map an identity to a person ID so conversations are
attributed:
```bash
hiveloom mcp-identity map --tenant default --name my-desktop --person-id alice
```
## Next
Pick your client:
- [Claude Desktop](/chat-client/claude-desktop)
- [Cursor](/chat-client/cursor)
Or use it from any MCP-speaking client — the URL and setup code are all they
need.
---
# Connect Claude Desktop
URL: https://docs.hiveloom.cloud/chat-client/claude-desktop
# Connect Claude Desktop
Before you start: you need the MCP URL and setup code from
[Discover the MCP endpoint](/chat-client/mcp-endpoint).
## 1. Edit Claude Desktop's config
Claude Desktop reads its MCP config from a JSON file:
- **macOS**: `~/Library/Application Support/Claude/claude_desktop_config.json`
- **Windows**: `%APPDATA%\Claude\claude_desktop_config.json`
- **Linux**: `~/.config/Claude/claude_desktop_config.json`
Add (or merge into) the `mcpServers` section:
```json
{
"mcpServers": {
"hiveloom-support-bot": {
"url": "https://hiveloom.example.com/mcp/default/support-bot"
}
}
}
```
Replace the URL with the one `hiveloom mcp-identity create` printed.
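A stray comma or brace in a hand-edited config is the most common failure; you can check that the file parses before restarting (a sketch using Python's stdlib against a temp copy — point it at your platform's real config path instead):

```shell
# write a sample config and confirm it is valid JSON
cfg=$(mktemp)
cat > "$cfg" <<'EOF'
{
  "mcpServers": {
    "hiveloom-support-bot": {
      "url": "https://hiveloom.example.com/mcp/default/support-bot"
    }
  }
}
EOF
python3 -m json.tool "$cfg" >/dev/null && echo "config parses"
```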
## 2. Restart Claude Desktop
Fully quit and reopen the app. On first connect, Claude Desktop:
1. Hits Hiveloom's `/.well-known/oauth-authorization-server` for discovery.
2. Opens a browser window to the `/oauth/authorize` endpoint.
3. Prompts you for the **setup code**.
4. You paste the code and submit; the browser window closes and Claude Desktop
   is connected.
## 3. Verify
In a conversation, Claude Desktop should now show three tools from the
Hiveloom agent:
- **chat** — send a message to the agent.
- **memory** — search stored memories.
- **list_conversations** — list prior conversations.
Ask Claude to use the `chat` tool:
> "Use the hiveloom-support-bot `chat` tool to say hello."
Claude routes the request to the Hiveloom agent and returns the agent's reply.
## Troubleshooting
- **Tools don't appear after restart** — Claude Desktop failed to reach the
MCP URL. Open the MCP log in Claude Desktop's settings and look for 4xx/5xx
errors. Most common cause: TLS misconfig (revisit [TLS](/deploy/tls)).
- **OAuth browser page fails with `http://` URLs in discovery** — the reverse
proxy isn't forwarding `X-Forwarded-Proto: https`. Revisit
[Reverse proxy](/deploy/reverse-proxy).
- **Setup code says "expired"** — issue a new one:
```bash
hiveloom mcp-identity reissue-setup-code --tenant default --name my-desktop
```
Next: [Connect Cursor](/chat-client/cursor) for the IDE surface, or jump to
[Write your first skill](/skills/first-skill) to customise the agent.
---
# Connect Cursor
URL: https://docs.hiveloom.cloud/chat-client/cursor
# Connect Cursor
Same setup code, same MCP URL — different client config location.
## 1. Open Cursor's MCP settings
- **Cursor → Settings → Cursor Settings → Features → MCP Servers**.
- Or edit `~/.cursor/mcp.json` directly.
Add:
```json
{
"mcpServers": {
"hiveloom-support-bot": {
"url": "https://hiveloom.example.com/mcp/default/support-bot"
}
}
}
```
Replace the URL with the one from
[Discover the MCP endpoint](/chat-client/mcp-endpoint).
## 2. Authorize
Reload Cursor. It will open a browser to Hiveloom's OAuth flow. Paste the setup
code when prompted. The window closes and the server appears as connected in
Cursor's MCP panel.
## 3. Use it in Composer
In a Cursor Composer chat, reference the tool:
> "@hiveloom-support-bot chat: What can you do?"
Cursor routes the call to the Hiveloom agent and streams the reply back into
the Composer.
## Troubleshooting
Same failure modes as [Claude Desktop](/chat-client/claude-desktop) — the OAuth
flow is identical because both clients follow the MCP spec. Re-read the
troubleshooting section there if things don't connect.
## You're done with the MVP
If you've reached this page, you have:
1. A Hiveloom instance reachable over HTTPS with a valid certificate.
2. A credential stored and an agent created.
3. That agent connected to Claude Desktop and Cursor over MCP.
Next: teach the agent something specific to your domain by writing a skill —
[Write your first skill](/skills/first-skill).
---
# Write your first skill
URL: https://docs.hiveloom.cloud/skills
# Write your first skill
A skill is any markdown file. Hiveloom stores it verbatim and exposes it to the
agent through **progressive loading**: every turn the agent sees a one-line
summary of each installed skill, and fetches the full body only when it
decides the skill applies. A hundred installed skills cost roughly a hundred
summary lines per turn, not a hundred full bodies.
## Where to start
- [Write your first skill](/skills/first-skill): a short walkthrough — file, install, test.
- [Skills reference](/skills/reference): file format, discovery rules, lifecycle.
- [Skill examples](/skills/examples): three annotated worked skills you can adapt.
---
# Write your first skill
URL: https://docs.hiveloom.cloud/skills/first-skill
# Write your first skill
A **skill** is a markdown file attached to an agent. The agent sees a one-line
summary of every installed skill in its system prompt and pulls in the full
body on demand via the built-in `hiveloom_load_skill` tool — Hiveloom's
"progressive loading". No HTTP endpoint, no plugin runtime — it's just
markdown.
## 1. Author the skill
Save the following as `pirate.md`:
```markdown
# Pirate voice
Always load this skill before replying — it defines the required pirate persona
that must shape every response.
You are a pirate. Reply in the voice of a 17th-century Caribbean pirate. Use
"arr", "matey", "ye landlubber", and similar phrasing. Keep answers short
and colorful. Never break character — if asked whether you are an AI, reply
that you are a parrot who has seen things.
```
The first paragraph is what the agent sees in its system prompt every turn —
make it tell the agent **when to load** this skill. The detailed instructions
land in context only when `hiveloom_load_skill` is called.
## 2. Install it on the agent
```bash
hiveloom capability add support-bot \
--name pirate \
--description "Always load before replying — pirate persona that shapes every response." \
--from-file pirate.md
```
The `--description` you pass here is the per-turn summary the agent reads to
decide whether to load the full body. Make it specific, and — for skills that
must apply unconditionally — say so.
Verify:
```bash
hiveloom capability list support-bot
```
The `pirate` skill should appear in the table.
## 3. Observe the change
```bash
hiveloom chat support-bot
```
> `Hi, what can you do?`
**Before** the skill: a generic support-bot reply.
**After** the skill: the agent calls `hiveloom_load_skill({"skill_name":"pirate"})`
on the first turn, then answers in pirate voice. Same agent, same conversation
UI, new behavior. The tool call is visible in `hiveloom logs` and in the
`conversation_turns` table if you want to confirm.
## 4. Iterate
Edit `pirate.md`, then reinstall:
```bash
hiveloom capability edit support-bot pirate --from-file pirate.md
```
Start a new chat session to pick up the change.
## 5. Remove
To switch the skill off, remove it:
```bash
hiveloom capability remove support-bot pirate
```
(A toggle that leaves the file in place but inactive is on the roadmap;
for now "remove" then re-add is the workflow.)
## Next
See the [Skills reference](/skills/reference) for the full file format and
lifecycle, or browse [annotated examples](/skills/examples) covering
style/persona, tool-calling, and retrieval patterns.
---
# Skills reference
URL: https://docs.hiveloom.cloud/skills/reference
# Skills reference
A skill is any markdown file. Hiveloom stores the file verbatim and exposes it
to the agent through **progressive loading**: every turn the agent sees a one-line
summary of each installed skill in its system prompt, alongside an internal
`hiveloom_load_skill` tool. The full body is fetched only when the agent calls
that tool — typically because the user asked for a procedure the summary hints at.
This keeps the standing context small and predictable: a hundred skills cost
roughly a hundred summary lines per turn, not a hundred full bodies.
## File format
There is no required structure, but the **first paragraph (or the `--description`
you pass to `capability add`) is what the agent sees by default**. Make it
specific enough that the model knows when to reach for the skill — then put the
detailed procedure further down.
```markdown
# Skill name
Short description of when this skill applies. Becomes the per-turn summary.
## Instructions
Specific directives to the agent: how to behave, what to emphasise, what to
avoid.
## Examples (optional)
Concrete input → output pairs that the agent uses as few-shot grounding.
```
Frontmatter is **not** parsed by Hiveloom. The `--description` flag on
`capability add` (or, failing that, the first non-empty line of the body —
truncated to 240 characters) is what becomes the per-turn summary. The rest of
the file is delivered verbatim only when `hiveloom_load_skill` is called.
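The fallback rule — first non-empty line, truncated to 240 characters — can be sketched in shell (illustrative, not Hiveloom's implementation):

```shell
# derive the per-turn summary when no --description was given (sketch)
skill=$(mktemp)
printf '# Pirate voice\n\nAlways load this skill before replying.\n' > "$skill"
grep -m1 . "$skill" | cut -c1-240   # first non-empty line, capped at 240 chars
# → # Pirate voice
```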
## Discovery
Skills are registered explicitly via `hiveloom capability add`; Hiveloom does
**not** scan a folder. This means:
- You can store skill files anywhere (git repo, shared drive, `/etc/hiveloom/skills`).
- `hiveloom capability add ... --from-file PATH` imports the file's contents
into the tenant store. Moving or deleting the source file after import has
no effect on the installed skill.
- To re-sync after editing the source file, use
`hiveloom capability edit --from-file PATH`.
## Lifecycle — all capability commands
| Command | Effect |
|---|---|
| `hiveloom capability add <agent> --from-file PATH.md --name N` | Install. |
| `hiveloom capability list <agent>` | Show installed skills. |
| `hiveloom capability show <agent> <name>` | Show one skill's content. |
| `hiveloom capability edit <agent> <name>` | Edit (interactive or `--from-file`). |
| `hiveloom capability remove <agent> <name>` | Uninstall. |
## Size and performance
Progressive loading means a skill's body costs context only on the turn the
agent loads it, not every turn. The standing cost is one summary line per
installed skill. So:
- **Summaries are cheap, but visible every turn.** Make them concise and
trigger-worthy ("Use this when …") — that's what the agent reads to decide
whether to load the full body.
- **Bodies can be longer than they used to be**, since they don't compete with
conversation history on every turn. They only land in context when actually
needed.
- **Very long skills (>2000 tokens)** still hurt the turn that loads them, so
prefer one focused skill per task over a single sprawling document.
See [Operations](/operations) for how Hiveloom manages context overall.
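To get a rough sense of a body's cost on the turn that loads it, the common ~4-characters-per-token heuristic is enough (an approximation, not any provider's tokenizer):

```shell
# rough token estimate for a skill body: bytes / 4 (heuristic only)
body="You are a pirate. Reply in the voice of a 17th-century Caribbean pirate."
bytes=$(printf '%s' "$body" | wc -c)
echo "$(( bytes / 4 )) tokens (approx.)"
```

Run it over the whole file (`wc -c < skill.md`) to see whether a skill is drifting toward the >2000-token range.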
### Forcing a skill to apply every turn
Progressive loading means an agent **may not load a skill** if the summary
doesn't make the connection clear. If a skill needs to apply unconditionally
(e.g. a persona that should colour every reply), say so explicitly in the
summary — for example: "Always load this skill before replying. Defines the
required Acme house style." The agent will then call `hiveloom_load_skill`
on each turn.
## What skills are not
- **Skills are not tools.** They don't invoke external APIs. They only shape
what the agent says. For tool-calling, see the `capability add
--cap-endpoint URL` path in the [CLI reference](/cli).
- **Skills are not fine-tuning.** They're per-request prompt injection. If
you uninstall one, the effect disappears on the next turn.
- **Skills are not tenant-global.** They're attached to a specific agent.
Cloning a skill to another agent means running `capability add` again.
## Next
[Examples](/skills/examples) — three annotated worked skills you can adapt.
---
# Skill examples
URL: https://docs.hiveloom.cloud/skills/examples
# Skill examples
Three worked examples. Copy-paste, edit, and run `hiveloom capability add`
with each.
## 1. Persona / style
Shape the agent's tone and voice. Persona skills must apply on every turn,
so the summary is written to force a load before each reply.
```markdown
# House style
Always load this skill before composing any reply. Defines the mandatory
Acme Corp brand voice that every message must follow.
You are writing on behalf of Acme Corp. Use active voice, British spelling,
and address the reader as "you". Avoid marketing clichés: no "seamless",
"robust", or "cutting-edge". Prefer specifics over superlatives — say
"responds in under 200ms" rather than "blazingly fast".
## Forbidden
- Em dashes (use commas or semicolons).
- Exclamation marks outside of quoted material.
- First-person plural ("we"). Use "Acme" instead.
## Sign-off
Never sign messages. The agent doesn't have a name.
```
Install:
```bash
hiveloom capability add support-bot \
--name house-style \
--description "Always load before replying — defines mandatory Acme brand voice" \
--from-file house-style.md
```
The pointed `--description` is what the agent sees every turn. Without the
"always load before replying" phrasing the model may skip loading on tone-only
turns and lose the persona.
## 2. Tool invocation
Teach the agent when to use a specific capability. (Adds to a tool-calling
capability you've already installed via `--cap-endpoint`.)
```markdown
# Tickets-database lookups
When the user mentions a ticket number (format `T-` followed by digits,
e.g. `T-12345`), call the `tickets.lookup` tool with `{"id": "<digits>"}`
before replying. Use the returned status, assignee, and last-update fields
to ground your answer.
If the tool returns `null`, tell the user the ticket doesn't exist — do not
invent details.
## Examples
User: "What's the status of T-8821?"
→ Call tickets.lookup({"id": "8821"}). Reply with `status` and `assignee`.
User: "Is my account active?"
→ Do not call tickets.lookup — this is not a ticket question.
```
## 3. Retrieval-style knowledge
Inline a small corpus of facts.
```markdown
# Hiveloom product facts
Use these facts when answering questions about Hiveloom itself.
## Pricing
Hiveloom is open-source and free. There is no managed offering in the OSS
distribution.
## License
MIT. Source code will be published publicly in a future release.
## Deployment target
Designed to run on a single small VPS (2 vCPU / 4 GB RAM). Kubernetes is
not required.
## Supported LLM providers
Anthropic Messages API natively. Anything that implements the OpenAI Chat
Completions OpenAPI spec via `HIVELOOM_OPENAI_BASE_URL`: OpenAI itself,
OpenRouter, Groq, Together, DeepSeek, Mistral, Cerebras, Fireworks, vLLM,
LiteLLM, and local runners like Ollama.
## What it is not
- A no-code chatbot builder. Primary interface is the CLI.
- A managed cloud. Self-hosted only.
```
For larger knowledge bases, prefer a tool-calling capability backed by a
retrieval service. Progressive loading keeps inline skills off the standing
context, but a multi-MB knowledge dump still hurts the turn that loads it.
## Install all three
```bash
hiveloom capability add support-bot --name house-style --description "Brand voice" --from-file house-style.md
hiveloom capability add support-bot --name tickets-lookup --description "Ticket lookups" --from-file tickets-lookup.md
hiveloom capability add support-bot --name product-facts --description "Product facts" --from-file product-facts.md
hiveloom capability list support-bot
```
That's the whole skill system. More examples arrive as we build out
community-contributed skills.
---
# CLI reference
URL: https://docs.hiveloom.cloud/cli
# CLI reference
One page per top-level `hiveloom` subcommand. Pick a group below; each entry
corresponds to a per-command reference page. `hiveloom --help` prints the
same content from the binary.
## Service & operator
- Start the HTTP service.
- Liveness + readiness probe.
- Per-tenant runtime snapshot.
- Self-diagnostic.
- Recent agent activity.
- Live process / scheduler view.
- Render the Caddyfile.
## Tenants & identity
- Create / list / delete tenants.
- Bearer-token issuance.
- Per-client MCP credentials.
## Agents
- Create / edit / version agents.
- Attach skills + tool capabilities.
- Store provider keys.
- Stream a conversation.
- YAML-manifest apply.
## Scheduling & events
- Cron-driven agent runs.
- Event-driven agent runs.
## Backup, upgrade, observability
- Create / list / restore archives.
- In-place binary upgrade.
- Revert to the previous binary.
- Token-budget audit trail.
## Global flags
Available on most commands:
| Flag | Description |
|---|---|
| `--tenant <slug>` | Tenant (default: `default`) |
| `--endpoint <url>` | API endpoint (default: auto-detected) |
| `--token <token>` | Bearer token for remote access |
| `--json` | Output as JSON instead of a human table |
## Secrets contract
Secrets are **never** passed as CLI arguments. All credential-accepting commands
take:
- `--from-env VAR` — read from an environment variable, or
- `--from-file PATH` — read from a file, or
- stdin via `echo "secret" | hiveloom <command>`.
---
# hiveloom agent
URL: https://docs.hiveloom.cloud/cli/agent
# `hiveloom agent`
The primary command for managing agent definitions. Each agent is a named,
versioned bundle of: a system prompt, a model selection, a credential, and
optional capabilities (skills) and chat-surface bindings.
## Synopsis
```bash
hiveloom agent <subcommand> [GLOBAL FLAGS]
```
### Global flags
| Flag | Default | Description |
|---|---|---|
| `--tenant <slug>` | `default` | Target tenant. |
| `--endpoint <url>` | — | API endpoint. |
| `--token <token>` | — | Bearer token for remote access. |
| `--json` | — | JSON output. |
### Subcommands
| Subcommand | Purpose |
|---|---|
| `create` | Create a new agent. |
| `list` | List agents in the tenant. |
| `show` | Show one agent (system prompt, model, capabilities). |
| `edit` | Edit an existing agent in `$EDITOR`. Bumps the version. |
| `delete` | Delete an agent. |
| `versions` | List historical versions for an agent. |
| `rollback` | Roll an agent back to a previous version. |
| `export` | Export the agent definition as a YAML manifest. |
| `reflect` | Trigger agent self-reflection (placeholder). |
| `bind` | Bind the agent to a chat surface. |
| `compaction` | View or update the agent's compaction config. |
Run `hiveloom agent --help` for arguments.
## Examples
Create an agent and verify:
```bash
hiveloom agent create \
--name pirate \
--model anthropic:claude-sonnet-4-6 \
--credential anthropic-default \
--system "Reply only in pirate-speak."
hiveloom agent list
```
Show the full definition:
```bash
hiveloom agent show pirate --json
```
Edit and roll back if it goes wrong:
```bash
hiveloom agent edit pirate # opens $EDITOR; bumps version
hiveloom agent versions pirate # lists v1, v2, ...
hiveloom agent rollback pirate v1
```
Export for source control or to recreate elsewhere:
```bash
hiveloom agent export pirate > pirate.yaml
hiveloom apply --file pirate.yaml --tenant acme
```
## Versioning
Every `edit` (or `apply` of a changed manifest) creates a new version of
the agent and atomically promotes it. `rollback` switches the active
version pointer; older versions are retained until they are explicitly
purged.
## See also
- [Create your first agent](/first-agent/create) — guided walkthrough.
- [`hiveloom apply`](/cli/apply) — manifest-driven create/update.
- [`hiveloom capability`](/cli/capability) — attach skills to an agent.
---
# hiveloom apply
URL: https://docs.hiveloom.cloud/cli/apply
# `hiveloom apply`
Reads a manifest file and reconciles the tenant's agent set against it.
Use this to manage agents in source control or to recreate a tenant's
state on a fresh instance.
## Synopsis
```bash
hiveloom apply --file <path> [OPTIONS]
```
## Options
| Flag | Default | Description |
|---|---|---|
| `-f, --file <path>` | — *(required)* | Path to the manifest. YAML or JSON. |
| `--tenant <slug>` | `default` | Target tenant. |
| `--endpoint <url>` | — | API endpoint. |
| `--token <token>` | — | Bearer token. |
| `--prune` | off | Delete agents present in the tenant but absent from the manifest. |
| `--json` | — | JSON-formatted result. |
## Manifest shape
The manifest lists the desired agents:
```yaml
agents:
- name: pirate
model: anthropic:claude-sonnet-4-6
credential: anthropic-default
system: |
Reply only in pirate-speak.
capabilities:
- file: ./skills/pirate-voice.md
- name: reviewer
model: anthropic:claude-sonnet-4-6
credential: anthropic-default
system: |
Code reviewer focussed on diff hygiene.
```
Generate one from an existing agent with
[`hiveloom agent export`](/cli/agent).
## Examples
Apply without pruning (safe default):
```bash
hiveloom apply --file ./agents.yaml --tenant acme
```
Reconcile destructively (deletes anything not in the manifest):
```bash
hiveloom apply --file ./agents.yaml --tenant acme --prune
```
Capture the result for CI:
```bash
hiveloom apply --file ./agents.yaml --json | tee apply-result.json
```
## What changes
- New agents are created at version 1.
- Existing agents whose definition differs get a new version.
- Without `--prune`, agents not in the manifest are left untouched. With
`--prune`, they are deleted.
- Capability files are read at apply time and inlined into the agent's
capability bindings.
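The `--prune` decision is a set difference between the agents live in the tenant and the agents named in the manifest. A sketch with `comm` (illustrative names, not Hiveloom's output):

```shell
# which agents would --prune delete? live set minus manifest set (sketch)
live=$(mktemp); manifest=$(mktemp)
printf '%s\n' pirate reviewer helper | sort > "$live"
printf '%s\n' pirate reviewer | sort > "$manifest"
comm -23 "$live" "$manifest"   # lines only in the live set are deletion candidates
# → helper
```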
---
# hiveloom auth
URL: https://docs.hiveloom.cloud/cli/auth
# `hiveloom auth`
Manages bearer tokens for the admin API. Use these tokens with
`--token "$TOKEN"` (or `HIVELOOM_TOKEN` env var) on any other CLI command
that hits a remote instance, and as the `Authorization: Bearer` header
when calling the admin API directly.
## Synopsis
```bash
hiveloom auth <subcommand> [GLOBAL FLAGS]
```
### Global flags
| Flag | Description |
|---|---|
| `--endpoint <url>` | API endpoint. |
| `--token <token>` | Existing bearer token for remote access. (You need a token to create more.) |
| `--json` | JSON output. |
### Subcommands
| Subcommand | Purpose |
|---|---|
| `token-create` | Mint a new bearer token. The plaintext is shown **once** — capture it now. |
| `token-list` | List token IDs and metadata (never the secret). |
| `token-revoke` | Revoke a token by ID. |
Run `hiveloom auth --help` for arguments.
## Examples
Create a token on a fresh local instance (no auth required for the first
token):
```bash
hiveloom auth token-create --name "ops-laptop"
# Output includes the token plaintext exactly once. Store it in a secret manager.
```
Use the token from another machine:
```bash
export HIVELOOM_TOKEN="hlk_..."
hiveloom status --endpoint https://hiveloom.example.com --token "$HIVELOOM_TOKEN"
```
List token IDs so you can revoke a leaked one:
```bash
hiveloom auth token-list
hiveloom auth token-revoke tok_abc123
```
## Security model
- Tokens are stored hashed; the plaintext only exists in your hands.
- Revocation is immediate — Hiveloom checks the hash on every request.
- A revoked token cannot be re-issued; create a new one.
- A leaked token grants the same admin scope as the issuer; rotate by
creating a new token before revoking the old.
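The "stored hashed" property can be sketched with a plain SHA-256 (Hiveloom's actual scheme isn't specified here; production systems may prefer a keyed or deliberately slow hash):

```shell
# store only the hash; verify a request by re-hashing the presented token
token="hlk_example_token"
stored=$(printf '%s' "$token" | sha256sum | cut -d' ' -f1)

presented="hlk_example_token"
if [ "$(printf '%s' "$presented" | sha256sum | cut -d' ' -f1)" = "$stored" ]; then
  echo "token accepted"
fi
```

Because only the hash is kept, a database leak does not reveal usable tokens — which is why the plaintext is shown exactly once at creation.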
---
# hiveloom backup
URL: https://docs.hiveloom.cloud/cli/backup
# `hiveloom backup`
Captures and restores a complete instance archive: platform DB, every
tenant DB, and the encryption master key. Run this **before every
upgrade**, and on a schedule for disaster recovery.
## Synopsis
```bash
hiveloom backup <subcommand> [GLOBAL FLAGS]
```
### Global flags
| Flag | Description |
|---|---|
| `--endpoint <url>` | API endpoint. |
| `--token <token>` | Bearer token. |
| `--json` | JSON output. |
### Subcommands
| Subcommand | Purpose |
|---|---|
| `create` | Create a new backup archive. |
| `list` | List available backups. |
| `restore` | Restore from a backup archive. |
## Examples
Create a dated archive:
```bash
hiveloom backup create --output /var/backups/hiveloom-$(date +%F).tar.gz
```
List existing archives:
```bash
hiveloom backup list
```
Restore (stops the service, replaces the data dir, restarts it):
```bash
hiveloom backup restore --input /var/backups/hiveloom-2026-04-25.tar.gz
```
## What's in the archive
| File | What it is |
|---|---|
| `platform.db` | Tenant + agent metadata. |
| `tenants/<tenant>.db` | Per-tenant agent state, conversations, capabilities. |
| `master.key` | Symmetric key used to encrypt credentials at rest. |
The master key is in the archive so a restore is self-contained. Treat
the archive as a top-tier secret: anyone with it can decrypt every stored
credential. Store it in encrypted off-VPS storage.
## See also
- [Upgrade](/operations/upgrade) — backup is step 1.
- [Backup & restore](/operations/backup-restore) — full disaster-recovery
walkthrough.
---
# hiveloom capability
URL: https://docs.hiveloom.cloud/cli/capability
# `hiveloom capability`
Skills are markdown documents that extend an agent's
behaviour. `capability` attaches them to a specific agent and manages the
binding.
## Synopsis
```bash
hiveloom capability <subcommand> [GLOBAL FLAGS]
```
### Global flags
| Flag | Default | Description |
|---|---|---|
| `--tenant <slug>` | `default` | Tenant slug. |
| `--endpoint <url>` | — | API endpoint. |
| `--token <token>` | — | Bearer token. |
| `--json` | — | JSON output. |
### Subcommands
| Subcommand | Purpose |
|---|---|
| `add` | Attach a skill file to an agent. |
| `list` | List capabilities attached to an agent. |
| `show` | Show one capability's metadata + body. |
| `edit` | Edit a capability in `$EDITOR`. |
| `remove` | Detach a capability from an agent. |
## Examples
Attach a markdown skill file:
```bash
hiveloom capability add \
--agent pirate \
--file ./skills/pirate-voice.md
```
List attached skills:
```bash
hiveloom capability list --agent pirate
```
Inspect one:
```bash
hiveloom capability show cap_abc123 --agent pirate
```
Edit live (bumps the agent's version on save):
```bash
hiveloom capability edit cap_abc123 --agent pirate
```
Detach without deleting the source file:
```bash
hiveloom capability remove cap_abc123 --agent pirate
```
## Skill file format
See [Skills reference](/skills/reference) for the file format and
lifecycle. Three worked examples (persona, tool-calling,
retrieval-style knowledge injection) are at
[Skill examples](/skills/examples).
---
# hiveloom chat
URL: https://docs.hiveloom.cloud/cli/chat
# `hiveloom chat`
Sends a message to an agent and streams the reply. Use it as a smoke test
before connecting Claude Desktop or Cursor, or as a scratchpad while
iterating on a system prompt or skill.
## Synopsis
```bash
hiveloom chat <agent> [OPTIONS]
```
## Arguments
| Argument | Description |
|---|---|
| `<agent>` | Agent name or ID. |
## Options
| Flag | Default | Description |
|---|---|---|
| `--tenant <slug>` | `default` | Tenant slug. |
| `--endpoint <url>` | — | API endpoint for a remote instance. |
| `--token <token>` | — | Bearer token for remote access. |
## Examples
Interactive session against the default tenant:
```bash
hiveloom chat pirate
> Tell me a joke.
< Arrr, ye landlubber...
> /exit
```
One-shot, piped from another command:
```bash
echo "Summarise the last commit" | hiveloom chat reviewer
```
Remote instance:
```bash
hiveloom chat pirate \
--endpoint https://hiveloom.example.com \
--token "$HIVELOOM_TOKEN"
```
## Tips
- Use `Ctrl-D` (EOF) or type `/exit` to leave an interactive session.
- The first chat against an agent will fail clearly if the agent's
credential is invalid; rotate it with
[`hiveloom credential rotate`](/cli/credential).
- For a richer interactive experience (multi-agent, history), use
[`hiveloom interactive`](/cli) — note: not yet covered by a dedicated
reference page.
---
# hiveloom compaction-log
URL: https://docs.hiveloom.cloud/cli/compaction-log
# `hiveloom compaction-log`
Reads the compaction event log — a record of every time an agent's
context was summarised to fit the model's token budget. Use this when an
agent's replies get vague or short and you suspect aggressive
compaction.
## Synopsis
```bash
hiveloom compaction-log [OPTIONS]
```
## Options
| Flag | Default | Description |
|---|---|---|
| `--agent <name>` | — | Filter by agent (ID or name). |
| `--tenant <slug>` | `default` | Tenant slug. |
| `--since <duration>` | `24h` | Lookback window. Examples: `1h`, `6h`, `24h`, `7d`, `30d`. |
| `--limit <n>` | `50` | Maximum events. |
| `--json` | — | JSON output. |
| `--endpoint <url>` | — | API endpoint. |
| `--token <token>` | — | Bearer token. |
## Examples
Last 24 hours, default tenant, all agents:
```bash
hiveloom compaction-log
```
Last week for one agent, JSON, fed to `jq`:
```bash
hiveloom compaction-log \
--agent pirate \
--since 7d \
--limit 500 \
--json \
| jq 'select(.trigger=="token-budget")'
```
## What's logged
Each event records: when it fired, which agent, what triggered it
(`token-budget`, `manual`, `tool-result-truncation`), how many tokens
were retained, and how many were summarised. Per-agent compaction
behaviour is configured via
[`hiveloom agent compaction`](/cli/agent).
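The `--json` stream is one object per line, so quick aggregations work with standard tools. A sketch with fabricated events (the exact JSON key names are an assumption; check them against your instance's output):

```shell
# Three fabricated compaction events in the one-object-per-line shape
# that --json emits; the "trigger" key name is an assumption.
printf '%s\n' \
  '{"trigger":"token-budget","tokens_retained":4000}' \
  '{"trigger":"manual","tokens_retained":6000}' \
  '{"trigger":"token-budget","tokens_retained":3500}' \
  | grep -c '"trigger":"token-budget"'
# → 2
```

On a live instance, replace the `printf` with `hiveloom compaction-log --json`.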
## When to use it
- An agent's replies suddenly look vague → check if compaction is firing
too aggressively.
- An agent stops referencing earlier turns → compaction may be dropping
the relevant context. Tune the agent's compaction config or raise the
budget.
- After a release that changed compaction defaults → confirm the new
behaviour matches expectations on a representative agent.
---
# hiveloom credential
URL: https://docs.hiveloom.cloud/cli/credential
# `hiveloom credential`
Stores the provider API keys that agents use for inference. Hiveloom
**never accepts a secret as a CLI flag** — every value is read from an
environment variable, a file, or stdin.
## Synopsis
```bash
hiveloom credential <subcommand> [GLOBAL FLAGS]
```
### Global flags
| Flag | Default | Description |
|---|---|---|
| `--tenant <slug>` | `default` | Tenant slug. |
| `--endpoint <url>` | — | API endpoint. |
| `--token <token>` | — | Bearer token for remote access. |
| `--json` | — | JSON output. |
### Subcommands
| Subcommand | Purpose |
|---|---|
| `set` | Store a credential. |
| `list` | List credential names. **Never** prints values. |
| `rotate` | Replace the secret value of an existing credential. |
| `remove` | Delete a credential. |
## Storing a credential
The value comes from one of three sources:
| Source | Flag |
|---|---|
| Environment variable | `--from-env <VAR>` |
| File on disk | `--from-file <path>` |
| stdin | (omit both flags; pipe the secret in) |
### Examples
From an env var (most common):
```bash
export ANTHROPIC_API_KEY="sk-ant-..."
hiveloom credential set --name anthropic-default --from-env ANTHROPIC_API_KEY
unset ANTHROPIC_API_KEY # don't leave it in your shell history
```
From a file (e.g. piped from a secret manager):
```bash
op read "op://Engineering/anthropic/api-key" > /tmp/anth.key
hiveloom credential set --name anthropic-default --from-file /tmp/anth.key
shred -u /tmp/anth.key
```
From stdin:
```bash
echo "sk-ant-..." | hiveloom credential set --name anthropic-default
```
## List, rotate, remove
```bash
hiveloom credential list # names only
hiveloom credential rotate --name anthropic-default --from-env ANTHROPIC_API_KEY
hiveloom credential remove --name anthropic-default
```
## Credential names and model IDs
Hiveloom picks the HTTP client purely from the model ID: IDs starting with
`claude-` use the credential named `anthropic`; everything else uses `openai`.
For OpenRouter, Groq, Ollama, vLLM, LiteLLM, and other OpenAPI-spec-compatible
upstreams, redirect the client by setting `HIVELOOM_OPENAI_BASE_URL` on the
serve process.
[Use a different model provider →](/first-agent/credentials/providers)
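You can sanity-check the routing rule locally with a throwaway shell function (the model IDs are illustrative):

```shell
# Mirror of the rule above: claude-* → credential "anthropic",
# everything else → credential "openai".
pick_credential() {
  case "$1" in
    claude-*) echo anthropic ;;
    *)        echo openai ;;
  esac
}
pick_credential claude-3-opus   # → anthropic
pick_credential gpt-4o          # → openai
```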
## Storage
Credentials are encrypted at rest with the per-instance master key under
`<data-dir>/master.key`. Plaintext only exists in memory at request time
and is scrubbed from logs. They never leave the tenant container.
---
# hiveloom doctor
URL: https://docs.hiveloom.cloud/cli/doctor
# `hiveloom doctor`
Inspects the on-disk state of a Hiveloom installation and prints a list of
findings. Unlike `health` and `status`, `doctor` runs against the data
directory directly — it does not require the service to be running.
## Synopsis
```bash
hiveloom doctor [OPTIONS]
```
## Options
| Flag | Default | Description |
|---|---|---|
| `--data-dir <path>` | `/var/lib/hiveloom` | Data directory to inspect. |
| `--json` | — | JSON output. |
## Examples
Default install:
```bash
hiveloom doctor
```
Custom data directory:
```bash
hiveloom doctor --data-dir /srv/hiveloom
```
Pipe into a diagnostics tarball:
```bash
hiveloom doctor --json > doctor.json
```
## What it checks
- Data directory exists with sensible permissions.
- Platform DB (`platform.db`) is openable and passes `PRAGMA integrity_check`.
- Master key file is present and readable by the running user.
- Per-tenant databases exist for every active tenant in the platform DB.
- WAL/SHM files are not orphaned from a prior crash.
Each finding is tagged `ok`, `warn`, or `fail`. Run `doctor` first when
[`/healthz`](/cli/health) is returning 503 and you don't yet know whether
the issue is config, data, or runtime.
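In automation you might gate on `doctor --json`. A sketch over fabricated findings; the `status` field name is an assumption about the JSON shape, so verify it against real output first:

```shell
# Fabricated findings standing in for `hiveloom doctor --json` output.
findings='[{"check":"platform-db","status":"ok"},{"check":"wal-files","status":"warn"}]'
# Fail the job only on hard failures; warnings pass.
if printf '%s' "$findings" | grep -q '"status":"fail"'; then
  echo "doctor reported failures" >&2
  exit 1
fi
echo "no failures"
```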
---
# hiveloom event
URL: https://docs.hiveloom.cloud/cli/event
# `hiveloom event`
Manages event subscriptions: when a platform event matches a subscription,
the subscribed agent is invoked with a templated prompt. Use this for
inbox-style flows where the agent reacts to inbound traffic instead of
being chatted with directly.
## Synopsis
```bash
hiveloom event <subcommand> [GLOBAL FLAGS]
```
### Global flags
| Flag | Default | Description |
|---|---|---|
| `--tenant <slug>` | `default` | Tenant slug. |
| `--endpoint <url>` | — | API endpoint. |
| `--token <token>` | — | Bearer token. |
| `--json` | — | JSON output. |
### Subcommands
| Subcommand | Purpose |
|---|---|
| `subscribe` | Create an event subscription on an agent. |
| `list` | List subscriptions for an agent. |
| `show` | Show one subscription. |
| `enable` | Enable a disabled subscription. |
| `disable` | Disable temporarily; subscription is kept. |
| `delete` | Delete the subscription. |
## Examples
Subscribe a `triage` agent to a custom inbound event type, with a shared
secret used to authenticate webhook deliveries:
```bash
hiveloom event subscribe triage \
--event-type ticket.created \
--auth-token "$INBOUND_TOKEN"
```
The agent is the positional argument (name or ID). `--auth-token` is required;
inbound webhooks must present a value whose SHA-256 hash matches the stored
one, or they're dropped.
Optionally narrow the subscription with `--source-filter <value>`, which
matches the event payload's top-level `source` field.
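For illustration only (Hiveloom performs this check server-side), the comparison amounts to hashing the inbound value and comparing digests:

```shell
# Store only the digest; compare each inbound presentation against it.
# The secret below is made up.
stored_hash="$(printf '%s' 's3cret-inbound-token' | sha256sum | cut -d' ' -f1)"
inbound='s3cret-inbound-token'
if [ "$(printf '%s' "$inbound" | sha256sum | cut -d' ' -f1)" = "$stored_hash" ]; then
  echo accepted
else
  echo dropped
fi
# → accepted
```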
List + inspect:
```bash
hiveloom event list triage
hiveloom event show triage
```
Toggle:
```bash
hiveloom event disable triage
hiveloom event enable triage
```
Delete:
```bash
hiveloom event delete triage
```
## What happens at delivery time
When a webhook delivers an event whose type matches an active subscription,
Hiveloom verifies the auth token, checks the source filter (if any), and
runs the subscribed agent through the full agent loop with a synthetic
internal conversation. The conversation surface is
`internal` / `event:<event-type>:<event-id>`. A failure on one
subscription is logged and does not block other subscribers from running.
## Event types
Event types are free-form strings: pick whatever makes sense for the source
system (e.g. `ticket.created`, `pagerduty.incident`, `github.issue.opened`).
Hiveloom doesn't enforce a registry — the type is just the routing key.
## See also
- [`hiveloom schedule`](/cli/schedule) — clock-driven invocations.
---
# hiveloom health
URL: https://docs.hiveloom.cloud/cli/health
# `hiveloom health`
Hits the instance's `/healthz` endpoint and prints a one-line summary. This is
the same check Caddy and uptime monitors should use.
## Synopsis
```bash
hiveloom health [OPTIONS]
```
## Options
| Flag | Description |
|---|---|
| `--endpoint <url>` | API endpoint. Defaults to the local socket Hiveloom is bound to. |
| `--token <token>` | Bearer token for remote API access. Not required for `/healthz`. |
| `--json` | Emit a JSON object instead of a human line. |
## Examples
Local instance:
```bash
hiveloom health
```
Remote instance over HTTPS:
```bash
hiveloom health --endpoint https://hiveloom.example.com
```
Machine-readable, for cron or CI:
```bash
hiveloom health --json
```
## What "healthy" means
`/healthz` returns 200 when the service is up and the platform DB is
reachable. If you see 503, the platform DB or master key is unavailable;
check `journalctl -u hiveloom`. If you see a connection error, the service
isn't running — start it with `sudo systemctl start hiveloom`.
For a deeper offline check of the on-disk state (DB integrity, file
permissions), run [`hiveloom doctor`](/cli/doctor) instead.
---
# hiveloom logs / tail
URL: https://docs.hiveloom.cloud/cli/logs
# `hiveloom logs` and `hiveloom tail`
Two related commands that read the same agent log stream:
- `hiveloom logs` prints the last N entries and exits.
- `hiveloom tail` streams new entries until you Ctrl-C.
Both filter to a single tenant and optionally a single agent.
## Synopsis
```bash
hiveloom logs [OPTIONS]
hiveloom tail [OPTIONS]
```
## Options
| Flag | Applies to | Default | Description |
|---|---|---|---|
| `--tenant <slug>` | both | `default` | Tenant slug. |
| `--agent <id>` | both | — | Filter by agent ID. Omit to see all agents in the tenant. |
| `--limit <n>` | `logs` only | `50` | Maximum entries to show. |
| `--endpoint <url>` | both | — | API endpoint for a remote instance. |
| `--token <token>` | both | — | Bearer token for remote access. |
| `--json` | both | — | One JSON object per line. |
## Examples
Last 50 entries across all agents in the default tenant:
```bash
hiveloom logs
```
Last 200 entries for a specific agent, JSON for `jq`:
```bash
hiveloom logs --agent agt_abc123 --limit 200 --json | jq 'select(.level=="error")'
```
Live-stream a remote instance:
```bash
hiveloom tail \
--endpoint https://hiveloom.example.com \
--token "$ADMIN_TOKEN" \
--agent agt_abc123
```
## Where these logs come from
These are the **agent runtime** logs — tool invocations, model calls,
compaction events. For the HTTP service's stdout/stderr (request lines,
panics), use `journalctl -u hiveloom` instead.
Credentials never appear in this stream by design; if you ever spot one,
treat it as a bug and report it.
---
# hiveloom mcp-identity
URL: https://docs.hiveloom.cloud/cli/mcp-identity
# `hiveloom mcp-identity`
An MCP identity is a per-person credential that a chat client (Claude
Desktop, Cursor) uses to talk to one of your agent's MCP endpoints.
Identities are created server-side, then claimed by the user via a
short-lived setup code.
## Synopsis
```bash
hiveloom mcp-identity <subcommand> [GLOBAL FLAGS]
```
### Global flags
| Flag | Description |
|---|---|
| `--tenant <slug>` | Tenant slug. |
| `--endpoint <url>` | API endpoint. |
| `--token <token>` | Bearer token for remote access. |
| `--json` | JSON output. |
### Subcommands
| Subcommand | Purpose |
|---|---|
| `create` | Issue a new MCP identity. Returns a setup code the user redeems. |
| `list` | List MCP identities for a tenant. |
| `show` | Show details for one identity. |
| `map` | Map an identity to a person (free-form label). |
| `unmap` | Remove the person mapping. |
| `revoke` | Revoke an identity. Existing client connections stop working. |
| `reissue-setup-code` | Generate a fresh setup code if the original expired. |
## Examples
Issue an identity for Alice and hand her the setup code:
```bash
hiveloom mcp-identity create \
--tenant acme \
--label "alice@acme.com"
# Output: setup code AB12-CD34-EF56 (valid 24h)
```
Alice pastes the setup code into her chat client. After redemption:
```bash
hiveloom mcp-identity list --tenant acme
hiveloom mcp-identity show id_xyz789 --tenant acme
```
Revoke when Alice leaves:
```bash
hiveloom mcp-identity revoke id_xyz789 --tenant acme
```
If the setup code expired before redemption:
```bash
hiveloom mcp-identity reissue-setup-code id_xyz789 --tenant acme
```
## See also
- [Connect Claude Desktop](/chat-client/claude-desktop) — end-to-end setup
including how the setup code is used by the client.
- [MCP endpoint](/chat-client/mcp-endpoint) — how to find the URL the
identity authenticates against.
---
# hiveloom rollback
URL: https://docs.hiveloom.cloud/cli/rollback
# `hiveloom rollback`
Reverts the binary to the version installed before the most recent
`hiveloom upgrade`. Use this when an upgrade introduces a regression and
you need to be back online fast.
## Synopsis
```bash
hiveloom rollback [OPTIONS]
```
## Options
| Flag | Description |
|---|---|
| `--json` | JSON output. |
There are no other flags — rollback is "go back to the previous version,"
nothing more.
## Examples
```bash
sudo systemctl stop hiveloom
sudo hiveloom rollback
sudo systemctl start hiveloom
hiveloom health
```
## Behaviour
- Restores the prior `hiveloom` binary saved by the previous upgrade.
- Does **not** roll back the data directory — schema migrations applied
by the failed upgrade remain. If the new schema is incompatible with
the old binary, restore from a backup instead:
[`hiveloom backup restore`](/cli/backup).
- Has no effect if no prior version is recorded (e.g. the current binary
was installed by the one-line installer rather than by `upgrade`).
## See also
- [Upgrade](/operations/upgrade) — the full procedure including when to
rollback vs. when to restore from backup.
---
# hiveloom schedule
URL: https://docs.hiveloom.cloud/cli/schedule
# `hiveloom schedule`
Creates and manages cron-style scheduled jobs for an agent. Each job has a
prompt template and a schedule expression; at each fire, Hiveloom runs
the prompt against the agent and persists the result.
## Synopsis
```bash
hiveloom schedule <subcommand> [GLOBAL FLAGS]
```
### Global flags
| Flag | Default | Description |
|---|---|---|
| `--tenant <slug>` | `default` | Tenant slug. |
| `--endpoint <url>` | — | API endpoint. |
| `--token <token>` | — | Bearer token. |
| `--json` | — | JSON output. |
### Subcommands
| Subcommand | Purpose |
|---|---|
| `create` | Create a scheduled job. |
| `list` | List jobs for an agent. |
| `show` | Show one job, including last run + next fire. |
| `pause` | Pause a job (kept on disk; stops firing). |
| `resume` | Resume a paused job. |
| `delete` | Delete a job. |
## Examples
Daily 09:00 summary fed to a `summariser` agent:
```bash
hiveloom schedule create summariser \
--cron "0 9 * * *" \
--context "Summarise yesterday's events."
```
The agent is the positional argument; you can pass either its name or its ID.
The initial prompt is `--context`; if omitted, a default "you are running as a
scheduled autonomous agent" message is used.
Inspect:
```bash
hiveloom schedule list summariser
hiveloom schedule show summariser
```
Pause for the weekend, resume Monday:
```bash
hiveloom schedule pause summariser
hiveloom schedule resume summariser
```
Delete:
```bash
hiveloom schedule delete summariser
```
## Cron format
Two formats are accepted:
- 5-field standard cron: `minute hour dom month dow` (e.g. `0 9 * * *`).
- 6-field cron with seconds: `second minute hour dom month dow` (e.g.
`*/30 * * * * *` — every 30 seconds).
Schedules are evaluated in the timezone passed via `--timezone` at create
time, defaulting to `UTC`. Use `--one-time-at <timestamp>` instead of `--cron`
for a single fire.
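For one-off jobs, an RFC 3339 UTC timestamp is a reasonable shape to pass (an assumption; these docs don't pin the exact accepted format). GNU `date` computes one:

```shell
# One hour from now, UTC, RFC 3339 / ISO 8601 shape.
fire_at="$(date -u -d '+1 hour' +%Y-%m-%dT%H:%M:%SZ)"
echo "$fire_at"
# e.g. 2026-04-27T17:48:06Z
# then: hiveloom schedule create summariser --one-time-at "$fire_at" --context "..."
```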
## What happens at fire time
The in-process scheduler in `hiveloom serve` (see
[`serve --no-scheduler`](/cli/serve)) polls due jobs once per second. When a
job fires, Hiveloom:
1. Loads the agent's current version, capabilities, and the LLM credential
matching its model family (`anthropic` for `claude-*`, otherwise `openai`).
2. Opens an internal conversation with `surface_type=internal` and
`surface_ref=scheduled-job:<job-id>`.
3. Runs the agent loop with the supplied `--context` (or the default message)
as the first user turn, decrypting the credential through the vault.
4. Marks the conversation `concluded` once the loop returns.
Per-agent concurrency is enforced: a second tick for the same agent is
skipped while a previous run is still in flight. Jobs without a matching
credential fail the run; the rest of the schedule keeps ticking.
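The concurrency rule behaves like a non-blocking lock per agent. A standalone sketch with `flock` (purely illustrative; Hiveloom enforces this in-process, not with lock files):

```shell
# A second tick for the same agent would fail to take the lock and be
# skipped, exactly like the in-flight rule described above.
lock="$(mktemp)"
(
  flock -n 9 || { echo 'skipped: previous run in flight'; exit 0; }
  echo 'run fired'
) 9>"$lock"
# → run fired
```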
## See also
- [`hiveloom event`](/cli/event) — react to *events* rather than fire on
a clock.
---
# hiveloom serve
URL: https://docs.hiveloom.cloud/cli/serve
# `hiveloom serve`
Starts the long-running Hiveloom service. This is the process that owns the
admin API, every per-agent MCP endpoint, and the in-process agent runtime. In
production it runs under systemd; locally it runs in a terminal.
## Synopsis
```bash
hiveloom serve [OPTIONS]
```
## Options
| Flag | Default | Description |
|---|---|---|
| `--host <addr>` | `127.0.0.1` | Bind address. Use `0.0.0.0` only behind a reverse proxy. |
| `--port <port>` | `3000` | Listen port for HTTP. |
| `--data-dir <path>` | `/var/lib/hiveloom` | Where the platform DB, tenant DBs, and master key live. Also reads `HIVELOOM_DATA_DIR`. |
| `--no-scheduler` | (off) | Disable the in-process job scheduler. The HTTP API still serves chat and admin requests, but cron-driven `schedule create` jobs do not fire. Useful for ephemeral local sessions or when a separate scheduler process owns that role. |
## Examples
Run on a developer laptop, default settings:
```bash
hiveloom serve
```
Run a production VPS bound to localhost (Caddy terminates TLS in front):
```bash
hiveloom serve --host 127.0.0.1 --port 3000 --data-dir /var/lib/hiveloom
```
Override the data directory via env:
```bash
HIVELOOM_DATA_DIR=/srv/hiveloom hiveloom serve
```
## Environment variables
| Variable | Purpose |
|---|---|
| `HIVELOOM_DATA_DIR` | Same as `--data-dir`. Wins if both are set. |
| `HIVELOOM_OPENAI_BASE_URL` (alias `HIVELOOM_OPENAI_COMPAT_BASE_URL`) | Point the OpenAI-spec HTTP client at any other endpoint that implements the OpenAI Chat Completions OpenAPI spec — OpenRouter, Groq, Together, DeepSeek, Mistral, vLLM, LiteLLM, Ollama, … Must include `/v1` (or the upstream's equivalent prefix). Process-wide; one Hiveloom instance ⇒ one upstream. See [Use a different model provider](/first-agent/credentials/providers). |
| `HIVELOOM_MEMORY_CURATION_INTERVAL_TURNS` | Cadence of the per-agent automatic memory curator. Default `8`; `0` disables the periodic pass (explicit "remember that …" requests still trigger). |
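In production, a systemd drop-in is the tidiest way to set these, following the same pattern these docs use elsewhere (the OpenRouter URL is just an example upstream):

```ini
# /etc/systemd/system/hiveloom.service.d/30-provider.conf
[Service]
Environment=HIVELOOM_OPENAI_BASE_URL=https://openrouter.ai/api/v1
```

Apply it with `sudo systemctl daemon-reload && sudo systemctl restart hiveloom`.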
## What gets exposed
| Path | Purpose |
|---|---|
| `/healthz` | Liveness check used by Caddy and uptime monitors. |
| `/api/admin/...` | Admin API for tenants, agents, credentials, etc. |
| `/api/agents/<agent>/mcp` | Per-agent MCP endpoint for chat clients. |
## In-process workers
`hiveloom serve` runs three things in the same process:
- the HTTP API,
- the **agent runtime** (executes chat turns, scheduled-job runs, and event-routed runs against the configured LLM provider),
- the **scheduler**, which polls `scheduled_jobs` once per second and fires due cron entries through the agent runtime.
The scheduler can be disabled with `--no-scheduler`. The agent runtime is
always on — it has no flag, since chat would not work without it.
## Operating it
- Under systemd: `sudo systemctl start hiveloom` (see [systemd setup](/deploy/systemd)).
- Verify it's healthy: `hiveloom health`.
- Tail logs: `journalctl -u hiveloom -f` or `hiveloom tail`.
Never expose port 3000 directly to the public internet — terminate TLS with
Caddy and proxy from `:443` to `127.0.0.1:3000`. See
[Reverse proxy](/deploy/reverse-proxy).
---
# hiveloom status
URL: https://docs.hiveloom.cloud/cli/status
# `hiveloom status`
A higher-fidelity summary than [`hiveloom health`](/cli/health): version,
uptime, listening address, data directory, and tenant/agent counts. Useful as
the first command after a deploy or upgrade.
## Synopsis
```bash
hiveloom status [OPTIONS]
```
## Options
| Flag | Description |
|---|---|
| `--endpoint <url>` | API endpoint. Auto-detected for a local instance. |
| `--token <token>` | Bearer token for remote access. |
| `--json` | JSON output for scripts. |
## Examples
```bash
hiveloom status
```
```bash
hiveloom status --endpoint https://hiveloom.example.com --token "$ADMIN_TOKEN"
```
JSON, piped to `jq`:
```bash
hiveloom status --json | jq '.version, .tenants, .uptime_seconds'
```
## When to use it vs. `health`
- `health` answers "is the process up?" — fast, no auth, suitable for an
uptime probe.
- `status` answers "what version and how much is running?" — needs admin
auth on remote instances, gives you the numbers you'd want post-deploy.
For deeper diagnostics (DB integrity, file permissions, orphaned WAL
files), use [`hiveloom doctor`](/cli/doctor).
---
# hiveloom tenant
URL: https://docs.hiveloom.cloud/cli/tenant
# `hiveloom tenant`
Manages the tenant boundary inside a Hiveloom instance. Every agent,
credential, and conversation lives inside exactly one tenant; the
default tenant is named `default` and is the implicit target of every
other CLI command unless you pass `--tenant`.
## Synopsis
```bash
hiveloom tenant <subcommand> [GLOBAL FLAGS]
```
### Global flags
| Flag | Description |
|---|---|
| `--endpoint <url>` | API endpoint. Auto-detected for a local instance. |
| `--token <token>` | Bearer token for remote access. |
| `--json` | JSON output instead of a table. |
### Subcommands
| Subcommand | Purpose |
|---|---|
| `create` | Create a new tenant. |
| `list` | List all tenants. |
| `show` | Show details for one tenant. |
| `enable` | Enable a previously disabled tenant. |
| `disable` | Disable a tenant (kept on disk; agents stop responding). |
| `delete` | Soft-delete a tenant (kept on disk for restore; agents stop). |
Run `hiveloom tenant <subcommand> --help` for the per-subcommand argument
list.
## Examples
```bash
# Create a new tenant
hiveloom tenant create --slug acme --name "Acme Corp"
# List tenants in a human table
hiveloom tenant list
# Show details for one
hiveloom tenant show acme --json | jq
# Disable temporarily; can be re-enabled later
hiveloom tenant disable acme
# Soft-delete; stops responding but data is retained on disk
hiveloom tenant delete acme
```
## Behaviour notes
- Tenant data lives in a per-tenant SQLite database under
  `<data-dir>/tenants/<slug>.db`. Soft-delete leaves the file intact.
- Disabling a tenant rejects new admin and MCP requests for that tenant
but does not destroy data.
- Hard removal must be done by deleting the tenant DB file by hand after a
soft-delete — there is no `--force` flag.
---
# hiveloom tls
URL: https://docs.hiveloom.cloud/cli/tls
# `hiveloom tls`
Hiveloom does not run a TLS terminator itself. Instead, this command emits
configuration for an external proxy. Today there is one subcommand:
- `hiveloom tls render` — print a complete Caddyfile.
## Synopsis
```bash
hiveloom tls <subcommand>
```
## `hiveloom tls render`
Prints a Caddyfile to stdout that:
- Terminates TLS for `--host` using Let's Encrypt.
- Proxies all traffic to `127.0.0.1:<--upstream-port>`.
- Forwards `X-Forwarded-Proto` so OAuth metadata renders `https://` URLs.
It does **not** install Caddy and does **not** apply the configuration —
you pipe the output where you want it.
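The rendered Caddyfile has roughly this shape (illustrative only; run `hiveloom tls render` for the authoritative output):

```text
hiveloom.example.com {
    tls ops@example.com
    reverse_proxy 127.0.0.1:3000 {
        header_up X-Forwarded-Proto {scheme}
    }
}
```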
### Synopsis
```bash
hiveloom tls render --host <domain> --email <address> [OPTIONS]
```
### Options
| Flag | Default | Description |
|---|---|---|
| `--host <domain>` | — *(required)* | Public hostname. DNS must already point at the VPS — Hiveloom does not verify this. |
| `--email <address>` | — *(required)* | Contact email used by Let's Encrypt for renewal notices. |
| `--acme-env <env>` | `production` | Use `staging` while testing to avoid Let's Encrypt rate limits. |
| `--upstream-port <port>` | `3000` | Hiveloom upstream port Caddy should proxy to. |
### Examples
Render and write directly into Caddy's config directory:
```bash
hiveloom tls render \
--host hiveloom.example.com \
--email ops@example.com \
| sudo tee /etc/caddy/Caddyfile
sudo systemctl reload caddy
```
Stage a certificate first to avoid Let's Encrypt rate limits:
```bash
hiveloom tls render \
--host hiveloom.example.com \
--email ops@example.com \
--acme-env staging \
> /tmp/Caddyfile.staging
```
Once staging works, re-render with `--acme-env production`.
## See also
- [Reverse proxy](/deploy/reverse-proxy) — full guided setup.
- [TLS](/deploy/tls) — common Let's Encrypt failure modes.
---
# hiveloom top
URL: https://docs.hiveloom.cloud/cli/top
# `hiveloom top`
A live, terminal-based dashboard. Refreshes on an interval and shows
per-agent activity: in-flight chat sessions, request rate, last error.
Quit with `q` or Ctrl-C.
## Synopsis
```bash
hiveloom top [OPTIONS]
```
## Options
| Flag | Default | Description |
|---|---|---|
| `--endpoint <url>` | — | API endpoint for a remote instance. |
| `--token <token>` | — | Bearer token for remote access. |
| `--interval <seconds>` | `2` | Refresh interval in seconds. |
## Examples
Local instance, default 2 s refresh:
```bash
hiveloom top
```
Slower refresh on a busy production VPS:
```bash
hiveloom top --interval 10
```
Remote:
```bash
hiveloom top \
--endpoint https://hiveloom.example.com \
--token "$ADMIN_TOKEN"
```
## When to use it
`top` is for live observation while you reproduce a bug or watch a deploy
land. For after-the-fact analysis use [`hiveloom logs`](/cli/logs); for
batch metrics, scrape `--json` output of `hiveloom status` on a schedule.
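A crontab sketch of that batch-metrics approach (the output path is an example):

```text
# m h dom mon dow  command
*/5 * * * * hiveloom status --json >> /var/log/hiveloom-status.ndjson
```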
---
# hiveloom upgrade
URL: https://docs.hiveloom.cloud/cli/upgrade
# `hiveloom upgrade`
Upgrades the `hiveloom` binary in place. Downloads the matching release
artifact, swaps the binary, and restarts the service if it was running
under the install script's systemd unit.
## Synopsis
```bash
hiveloom upgrade [OPTIONS]
```
## Options
| Flag | Description |
|---|---|
| `--check` | Print whether an upgrade is available; do not install. |
| `--version <version>` | Target version (e.g. `0.3.0`). Defaults to `latest`. |
| `--json` | JSON output. |
## Examples
See whether a newer release exists:
```bash
hiveloom upgrade --check
```
Install the latest release:
```bash
sudo hiveloom upgrade
```
Pin a specific version:
```bash
sudo hiveloom upgrade --version 0.3.0
```
## Pre-upgrade checklist
Always:
1. `hiveloom backup create --output ...` — see [`hiveloom backup`](/cli/backup).
2. `sudo systemctl stop hiveloom` — if using a manual install path.
3. `hiveloom upgrade --version <version>` — replace the binary.
4. `sudo systemctl start hiveloom` — restart.
5. `hiveloom health` — verify.
If something goes wrong, fall back with [`hiveloom rollback`](/cli/rollback).
Schema migrations run automatically on the next `serve` after the
upgrade. Migrations are forward-compatible within a major version;
cross-major upgrades may require an explicit migration step — read the
release notes.
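If you script upgrades, a small guard for the cross-major case might look like this (version strings are examples; `.version` matches the `status --json` field used elsewhere in these docs):

```shell
current="0.3.2"   # e.g. from `hiveloom status --json | jq -r .version`
target="1.0.0"
cur_major="${current%%.*}"
tgt_major="${target%%.*}"
if [ "$cur_major" != "$tgt_major" ]; then
  echo 'cross-major upgrade: read the release notes before proceeding'
fi
```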
## See also
- [Upgrade](/operations/upgrade) — full operations walkthrough including
rollback recipes.
---
# Operations
URL: https://docs.hiveloom.cloud/operations
# Operations
Day-two concerns: keeping your instance safe, upgrading it, tuning what the
agent remembers, and diagnosing problems.
- [Backup & restore](/operations/backup-restore): per-tenant SQLite + master key, archived to disk.
- [Upgrade](/operations/upgrade): stop, swap binary, start, verify.
- [Automatic memory curation](/operations/memory-curation): what gets remembered between turns, and how to tune the cadence.
- [Troubleshooting](/operations/troubleshooting): common symptoms and the fastest fix.
- [Cloudflare Pages](/operations/cloudflare-pages): deploy this docs site to Cloudflare Pages.
## Fast links
- `hiveloom backup create` before every upgrade
- `hiveloom health` after every restart
- `hiveloom logs` and `journalctl -u hiveloom -f` when something feels off
---
# Backup & restore
URL: https://docs.hiveloom.cloud/operations/backup-restore
# Backup & restore
A Hiveloom backup is a single tarball containing everything stateful in
your instance. Make one before every upgrade and on a schedule for
disaster recovery.
## Where the data lives
| Path | What it is |
|---|---|
| `<data-dir>/platform.db` | Tenants, agents, schedules, auth tokens. |
| `<data-dir>/tenants/<slug>.db` | Per-tenant agent state, conversations, capabilities. |
| `<data-dir>/master.key` | Symmetric key for credentials at rest. |
The default `<data-dir>` is `/var/lib/hiveloom`; check
`hiveloom doctor --data-dir <path>` if you've changed it.
## Create a backup
```bash
hiveloom backup create --output /var/backups/hiveloom-$(date +%F).tar.gz
hiveloom backup list
```
Both commands hit the running service and operate online — there is no
need to stop `hiveloom serve` first.
The archive contains all three pieces (platform DB, tenant DBs, master
key). **Treat it as top-tier secret material**: anyone with this archive
can decrypt every stored credential. Store it in encrypted off-VPS
storage, not on the same VPS.
### Recommended cadence
- Manual: before every upgrade.
- Automated: nightly `cron` running
  `hiveloom backup create --output <backup-dir>/hiveloom-$(date +%F).tar.gz`,
then a sync to off-VPS storage. Rotate to keep ~30 days locally.
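The rotation half can be a plain `find`. The sketch below runs against a throwaway temp directory so it's safe to try anywhere; point it at your real backup directory in the cron job:

```shell
backup_dir="$(mktemp -d)"
touch -d '40 days ago' "$backup_dir/hiveloom-2026-03-18.tar.gz"   # stale
touch "$backup_dir/hiveloom-2026-04-27.tar.gz"                    # fresh
# Delete archives older than ~30 days.
find "$backup_dir" -name 'hiveloom-*.tar.gz' -mtime +30 -delete
ls "$backup_dir"
# → hiveloom-2026-04-27.tar.gz
```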
## Restore from a backup
```bash
sudo systemctl stop hiveloom
hiveloom backup restore --input /var/backups/hiveloom-2026-04-25.tar.gz
sudo systemctl start hiveloom
hiveloom health
```
Restore is destructive: it replaces the running data directory's
contents with what's in the archive. Anything created after the archive
was made is lost.
## Verifying a backup
After creating an archive, do a dry-run integrity check on a non-prod
host:
1. Spin up a fresh VPS or container with the same Hiveloom binary
version.
2. `hiveloom backup restore --input <archive>` against an empty
`/var/lib/hiveloom`.
3. `hiveloom doctor` — every check should be `ok`.
4. `hiveloom tenant list` and `hiveloom agent list --tenant <slug>` —
confirm the tenant and agent counts match what you expect.
This is the only way to know the archive actually decrypts and
deserialises before you need it.
## Common failure modes
| Symptom | Cause |
|---|---|
| `master key missing` after restore | Restored only the DBs, not the master key. Restart from the full archive. |
| Credentials decrypt to gibberish | Master key is from a different instance; ensure the archive came from this exact instance. |
| `restore` reports "data dir not empty" | Stop the service and confirm `<data-dir>` is empty (or use a fresh path). |
---
# Cloudflare Pages
URL: https://docs.hiveloom.cloud/operations/cloudflare-pages
# Cloudflare Pages
This docs site is already configured for static export:
- `next.config.mjs` sets `output: 'export'`
- `npm run build` writes the site to `out/`
- `npm run postbuild` generates `llms.txt`, raw markdown, and `sitemap.xml`
That makes Cloudflare Pages a good fit for `docs.hiveloom.cloud`.
## Recommended: Git-integrated Pages
Cloudflare's current Next.js static export guide uses:
- framework preset: `Next.js (Static HTML Export)`
- build command: `npx next build`
- build output directory: `out`
For this repo, use the repo-local commands instead:
```bash
cd hiveloom-cloud/docs/nextra
npm ci
npm run build
npm run postbuild
npm run check-links
```
## One-time setup in Cloudflare
1. Push the docs source to the Git repo you want Cloudflare Pages to watch.
2. In Cloudflare, open `Workers & Pages` and create a new Pages project.
3. Choose `Import an existing Git repository`.
4. Select the repo and branch you want to deploy from, usually `main`.
5. Set these build settings:
```text
Framework preset: Next.js (Static HTML Export)
Root directory: hiveloom-cloud/docs/nextra
Build command: npm ci && npm run build && npm run postbuild
Build output directory: out
Node.js version: 20
```
6. Create the project and let the first build finish.
7. In the Pages project, open `Custom domains` and add `docs.hiveloom.cloud`.
8. If the DNS zone is already in the same Cloudflare account, Cloudflare can
create the DNS record for you.
## Pre-deploy checklist
Run this locally before pushing:
```bash
cd hiveloom-cloud/docs/nextra
npm ci
npm run build
npm run postbuild
npm run check-links
```
That verifies:
- the static export succeeds
- `llms.txt` and the raw markdown artifacts are regenerated
- internal links still resolve
## What gets published
The built `out/` directory contains:
- HTML pages
- static assets from `public/`
- `llms.txt`
- `llms-full.txt`
- one `.md` file per page
- `sitemap.xml`
## Optional: direct upload
If you do not want Git integration, Cloudflare Pages also supports direct
upload of a prebuilt static directory. In that mode:
1. Build the site locally.
2. Use the generated `out/` directory as the upload artifact.
3. Create the Pages project as a direct-upload project.
Git integration is usually better for this repo because every push produces a
repeatable deployment and preview URLs.
## Exact settings for this repo
Copy these values as-is:
```text
Project name: hiveloom-docs
Production branch: main
Framework preset: Next.js (Static HTML Export)
Root directory: hiveloom-cloud/docs/nextra
Build command: npm ci && npm run build && npm run postbuild
Build output directory: out
Environment variable: NODE_VERSION=20
Custom domain: docs.hiveloom.cloud
```
## After the first deploy
Smoke test the live site:
```bash
curl -I https://docs.hiveloom.cloud/
curl -I https://docs.hiveloom.cloud/llms.txt
curl -I https://docs.hiveloom.cloud/install.md
```
If those return `200`, the static site is live and the AI-readable artifacts are
published too.
---
# Automatic memory curation
URL: https://docs.hiveloom.cloud/operations/memory-curation
# Automatic memory curation
After every chat turn the agent loop runs a lightweight curator pass that
asks the LLM whether the recent conversation contains durable, reusable
facts (user preferences, names, settings explicitly flagged with phrases
like "remember that", "keep in mind", etc.). When the curator decides yes,
it persists at most three short entries to the per-agent memory store via
the internal `hiveloom_memory_write` tool. Transient task details, tool
payloads, and secrets are explicitly excluded.
## Defaults
- The curator runs whenever the turn count is a multiple of **8** (so it isn't
invoked on every reply), or unconditionally on turns that contain explicit
"remember" phrasing.
- Each write is capped at 120 characters for the key and 1000 for the
value; oversized writes are dropped with a warning.
- A failure in the curator is logged and never breaks the user-facing turn.
## Tune the cadence
```ini
# /etc/systemd/system/hiveloom.service.d/40-memory-curation.conf
[Service]
Environment=HIVELOOM_MEMORY_CURATION_INTERVAL_TURNS=8
```
Set to `0` to disable the periodic pass and only act on explicit "remember"
requests.
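The cadence and size rules above can be sketched as follows. This is an illustrative model, not the actual implementation; the phrase list and function names are assumptions:

```python
REMEMBER_PHRASES = ("remember that", "keep in mind")
MAX_KEY_CHARS, MAX_VALUE_CHARS = 120, 1000

def should_run_curator(turn: int, message: str, interval: int = 8) -> bool:
    """True when the curator pass should run after this turn."""
    if any(phrase in message.lower() for phrase in REMEMBER_PHRASES):
        return True  # explicit "remember" phrasing always triggers a pass
    # interval == 0 disables the periodic pass entirely
    return interval > 0 and turn % interval == 0

def accept_write(key: str, value: str) -> bool:
    """Oversized writes are dropped (the service logs a warning)."""
    return len(key) <= MAX_KEY_CHARS and len(value) <= MAX_VALUE_CHARS
```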
## Inspect what was saved
Via the `memory` MCP tool, or directly against the tenant database:
```bash
sqlite3 /var/lib/hiveloom/tenants//store.db \
"SELECT scope, key, value FROM memory_entries WHERE archived = 0"
```
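The same query works from any SQLite client; here it is from Python against an in-memory stand-in. The column names come from the query above, but the table definition and sample rows are assumptions for illustration:

```python
import sqlite3

# Illustrative schema: column names match the query, types are assumed.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE memory_entries (scope TEXT, key TEXT, value TEXT, archived INTEGER)"
)
conn.executemany(
    "INSERT INTO memory_entries VALUES (?, ?, ?, ?)",
    [
        ("user", "preferred_name", "Ada", 0),
        ("user", "stale_setting", "unused", 1),  # archived, so excluded below
    ],
)
rows = conn.execute(
    "SELECT scope, key, value FROM memory_entries WHERE archived = 0"
).fetchall()
# rows == [("user", "preferred_name", "Ada")]
```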
---
# Troubleshooting
URL: https://docs.hiveloom.cloud/operations/troubleshooting
# Troubleshooting
Find the symptom, follow the fix. If your issue isn't here, run
[`hiveloom doctor`](/cli/doctor) first — most problems show up there.
## DNS
| Symptom | Most likely cause | Fix |
|---|---|---|
| `dig +short hiveloom.example.com` returns nothing | DNS A/AAAA record not set, or not yet propagated. | Add the record at your DNS provider; wait up to TTL (usually a few minutes). See [DNS](/deploy/dns). |
| Caddy logs `tls: no certificate available` | DNS resolves to a different IP than the VPS. | Verify the A record points at the VPS public IP; some providers use proxied/CDN records that hide it. |
## TLS / Let's Encrypt
| Symptom | Cause | Fix |
|---|---|---|
| Caddy logs `acme: error 429` (rate limited) | Too many cert requests for the same domain in 7 days. | Use `--acme-env staging` while iterating: `hiveloom tls render ... --acme-env staging`. See [TLS](/deploy/tls). |
| `acme: context deadline exceeded` | Cloud provider blocks port 80 at the edge (e.g. some VPS firewalls). | Open port 80 on the VPS firewall and the cloud provider's network firewall. See [Firewall](/deploy/firewall). |
| `acme: HTTP-01 challenge failed` | Caddy isn't reachable on port 80, or DNS is wrong. | `curl -I http://hiveloom.example.com/` should return a Caddy banner. |
## Reverse proxy
| Symptom | Cause | Fix |
|---|---|---|
| OAuth metadata shows `http://` URLs over HTTPS | Reverse proxy isn't forwarding `X-Forwarded-Proto`. | Use the Caddyfile from [`hiveloom tls render`](/cli/tls) — it sets the header correctly. |
| Browser hangs on `https://hiveloom.example.com/` | Caddy not running, or not bound to `:443`. | `sudo systemctl status caddy`, `sudo ss -tlnp \| grep :443`. |
| 502 Bad Gateway from Caddy | Hiveloom isn't running, or it's bound to a different port than Caddy expects. | `hiveloom health`; confirm `serve --port` matches Caddy's upstream port. |
## Service state
| Symptom | Cause | Fix |
|---|---|---|
| `/healthz` returns 503 | Platform DB unreachable, or master key missing. | `hiveloom doctor`. If master key is missing on a new install, the service writes one on first start; check file perms. |
| `hiveloom health` errors with "connection refused" | Service not running. | `sudo systemctl start hiveloom`; check `journalctl -u hiveloom`. |
| `hiveloom doctor` reports "WAL files orphaned" | Crashed in the middle of a write. | Restart cleanly; SQLite recovers. If the service won't start, restore from the most recent backup. |
## Provider credentials
| Symptom | Cause | Fix |
|---|---|---|
| Agent replies are empty / `provider authentication failed` | LLM provider key revoked, expired, or wrong. | `hiveloom credential rotate --name --from-env `. See [`hiveloom credential`](/cli/credential). |
| `model not found` errors | Plain model ID required (no `:` prefix), and the credential name must match: `anthropic` for `claude-*`, `openai` for everything else. | Check `hiveloom credential list` and `hiveloom agent show `. See [Store an LLM credential](/first-agent/credentials). |
| Non-OpenAI upstream (OpenRouter, Groq, Ollama, …) returns 404 / wrong model | `HIVELOOM_OPENAI_BASE_URL` not set on the serve process, or set without the `/v1` suffix. | Set it in a systemd drop-in and restart. See [Use a different model provider](/first-agent/credentials/providers). |
| Replies start truncating mid-sentence | Token budget hit; aggressive compaction. | Inspect `hiveloom compaction-log` and tune `hiveloom agent compaction`. |
## MCP / chat clients
| Symptom | Cause | Fix |
|---|---|---|
| Claude Desktop shows no tools for the agent | Wrong tenant/agent slug in MCP URL. | Re-derive with [`hiveloom mcp-identity show`](/cli/mcp-identity). |
| Setup code rejected as expired | 24h window passed. | `hiveloom mcp-identity reissue-setup-code `. |
| Cursor disconnects after every restart | Bearer header not persisted by the client config. | Re-check the MCP config snippet on [Cursor](/chat-client/cursor). |
## When all else fails
1. `hiveloom doctor --json > doctor.json` — captures the on-disk state.
2. `journalctl -u hiveloom -n 500 --no-pager > svc.log` — recent service
logs.
3. `hiveloom logs --limit 500 --json > agents.log` — recent agent
activity.
4. File an issue at
[github.com/FrancescoMrn/hiveloom-rust](https://github.com/FrancescoMrn/hiveloom-rust)
with `doctor.json` + `svc.log` + a redacted snippet of `agents.log`.
Strip any tenant slugs, hostnames, or user content you don't want
public.
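Redaction can be as simple as a substitution pass over the captured logs before attaching them. A hypothetical helper, assuming lowercase dotted hostnames; the regex and placeholder strings are not part of Hiveloom:

```python
import re

def redact(snippet: str, tenant_slugs: list[str]) -> str:
    """Blank out tenant slugs and hostnames before attaching logs to an issue."""
    for slug in tenant_slugs:
        snippet = snippet.replace(slug, "<tenant>")
    # Anything shaped like a dotted hostname becomes a placeholder.
    return re.sub(r"\b[a-z0-9-]+(?:\.[a-z0-9-]+)+\b", "<host>", snippet)
```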
---
# Upgrade
URL: https://docs.hiveloom.cloud/operations/upgrade
# Upgrade
Upgrades are deliberate: a backup, a binary swap, a controlled restart.
Schema migrations run automatically on the next `serve`.
## Procedure
1. **Back up.**
```bash
hiveloom backup create --output /var/backups/hiveloom-$(date +%F).tar.gz
```
See [Backup & restore](/operations/backup-restore) for what the archive
contains and where to store it.
2. **Stop the service.**
```bash
sudo systemctl stop hiveloom
```
3. **Replace the binary.** Either via the install script:
```bash
curl -fsSL https://bin.hiveloom.cloud/install.sh | bash -s -- --version 0.3.0
```
…or via the in-place upgrade subcommand:
```bash
sudo hiveloom upgrade --version 0.3.0
```
`hiveloom upgrade --check` tells you whether a newer release is
available before committing.
4. **Restart and migrate.**
```bash
sudo systemctl start hiveloom
```
Migrations run on first `serve` after the swap. They are forward-compatible
inside a major version. Cross-major upgrades will note any manual step in
the release notes.
5. **Verify.**
```bash
hiveloom health
hiveloom status
```
Both should be green. If `health` reports 503, see
[Troubleshooting](/operations/troubleshooting).
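The comparison behind `upgrade --check` amounts to ordering two version strings. A minimal sketch, assuming plain `major.minor.patch` releases with no pre-release tags:

```python
def parse_version(version: str) -> tuple[int, ...]:
    """Parse "0.3.0" into (0, 3, 0) so tuples compare numerically."""
    return tuple(int(part) for part in version.split("."))

def upgrade_available(installed: str, latest: str) -> bool:
    """True when `latest` is strictly newer than the installed release."""
    return parse_version(latest) > parse_version(installed)
```

Comparing tuples rather than raw strings matters once any component reaches two digits: `"0.10.0" > "0.9.1"` is false as strings but true numerically.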
## Rollback
If the new binary regresses on something you depend on:
```bash
sudo systemctl stop hiveloom
sudo hiveloom rollback
sudo systemctl start hiveloom
hiveloom health
```
`rollback` swaps back to the previous binary saved by the most recent
upgrade. It does **not** undo schema migrations: if the new schema is
incompatible with the old binary, restore the pre-upgrade backup
instead:
```bash
sudo systemctl stop hiveloom
hiveloom backup restore --input /var/backups/hiveloom-pre-upgrade.tar.gz
sudo systemctl start hiveloom
```
## Time budget
Plan for roughly 5 minutes of downtime for a routine point-release upgrade on a
single VPS. Cross-major upgrades may take longer if migrations need to
rewrite tables; back up first and watch `journalctl -u hiveloom` during the
restart.
## See also
- [`hiveloom upgrade`](/cli/upgrade) — flag reference.
- [`hiveloom rollback`](/cli/rollback).
- [`hiveloom backup`](/cli/backup).