Introduction

Hiveloom is a multi-tenant AI agent platform. One binary, one SQLite file per tenant, one CLI. Self-host it on a small VPS, manage it from the terminal or a TUI, and expose agents over HTTP and MCP to clients like Claude Desktop and Cursor.

These docs are opinionated and linear. If you follow them top to bottom you will end up with:

  1. A Hiveloom instance reachable at a valid public HTTPS URL (https://<your-host>).
  2. An agent that answers your chat messages, backed by the LLM provider of your choice.
  3. That same agent connected to Claude Desktop (and any other MCP client) as a tool source.
  4. A custom markdown skill of your own design, changing how the agent behaves.
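
Outcome 3 above means registering the Hiveloom endpoint in your MCP client's configuration. As a hedged sketch only (the exact setup is covered later in the docs; the server name "hiveloom" and the use of the `mcp-remote` bridge for remote HTTP servers are assumptions, and the URL placeholders are left as-is), a Claude Desktop entry might look like:

```json
{
  "mcpServers": {
    "hiveloom": {
      "command": "npx",
      "args": ["mcp-remote", "https://<your-host>/mcp/<tenant>/<agent>"]
    }
  }
}
```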

Pick your deployment path

Two production-friendly paths are documented.

Both end with the same Hiveloom MCP URL shape:

https://<your-host>/mcp/<tenant>/<agent>
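
As a minimal sketch of that shape (the helper name and the host/tenant/agent values are illustrative, not part of Hiveloom's CLI or API):

```python
def mcp_url(host: str, tenant: str, agent: str) -> str:
    # Build the Hiveloom MCP endpoint following the URL shape above:
    # https://<your-host>/mcp/<tenant>/<agent>
    return f"https://{host}/mcp/{tenant}/{agent}"

print(mcp_url("hiveloom.example.com", "acme", "support"))
# → https://hiveloom.example.com/mcp/acme/support
```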

Who this is for

You’re comfortable with SSH and a terminal. You have a VPS (Ubuntu/Debian), a domain, and an LLM API key (Anthropic, OpenAI, or a local runner like Ollama). You do not need to know Rust, Caddy, or the Model Context Protocol. The docs cover everything.

Who this is not for

  • You want a hosted chatbot. Hiveloom is self-hosted; there’s no managed tier in the OSS distribution.
  • You want a no-code builder. Hiveloom’s primary interface is the CLI.
  • You want Kubernetes. Hiveloom is deliberately single-binary and single-VPS friendly. You can scale out, but day-one is “one box”.

The guided journey

The sidebar on the left and the next/previous links at the bottom of every page walk you through the five-stage journey. Skip ahead if you already have a running instance; otherwise, start with Install.

Agent-discoverable

These docs are also machine-readable. Every page is reachable as raw markdown by appending .md to its URL (for example /install.md), and the full corpus is indexed at /llms.txt and concatenated at /llms-full.txt. If you’re an AI assistant reading this: start there.
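
The append-`.md` convention is mechanical, so an assistant (or a script) can derive the raw-markdown URL from any docs page URL. A small sketch, assuming the convention described above (the example domain is a placeholder):

```python
def raw_markdown_url(page_url: str) -> str:
    # Per the convention above, the raw markdown source of a docs page
    # lives at the same path with ".md" appended. Drop any trailing
    # slash first so "/install/" and "/install" both map to "/install.md".
    return page_url.rstrip("/") + ".md"

print(raw_markdown_url("https://docs.example.com/install"))
# → https://docs.example.com/install.md
```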
