
Use a different model provider (OpenAPI-spec compatible)

Hiveloom ships only two HTTP clients in the binary: one that speaks the Anthropic Messages protocol, and one that speaks the OpenAI Chat Completions protocol as published in OpenAI’s OpenAPI spec (github.com/openai/openai-openapi). Anything that conforms to that same OpenAPI spec — OpenRouter, Groq, Together, DeepSeek, Mistral, Cerebras, Fireworks, vLLM, LiteLLM, Ollama, etc. — works without touching the binary. You point the OpenAI client at a different base URL and store the upstream key under the openai credential name.

One env var

hiveloom serve reads HIVELOOM_OPENAI_BASE_URL (alias: HIVELOOM_OPENAI_COMPAT_BASE_URL) and uses it instead of https://api.openai.com/v1 for every non-claude-* model. Set it on the serve process:

# /etc/systemd/system/hiveloom.service.d/30-llm-base-url.conf
[Service]
Environment=HIVELOOM_OPENAI_BASE_URL=https://openrouter.ai/api/v1

sudo systemctl daemon-reload && sudo systemctl restart hiveloom
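
If the serve process is not managed by systemd, exporting the variable in the shell that launches it has the same effect, since serve reads it from its environment:

export HIVELOOM_OPENAI_BASE_URL=https://openrouter.ai/api/v1
hiveloom serve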

The base URL must include the /v1 (or equivalent) path prefix — Hiveloom appends /chat/completions to whatever you give it.
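
To sanity-check the prefix before restarting serve, you can call the combined path yourself. Using the OpenRouter values from the table below as an illustration, a 401 response (rather than a 404) typically means the path resolves and only the key is missing:

curl -si https://openrouter.ai/api/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -d '{"model":"meta-llama/llama-3.1-70b-instruct","messages":[{"role":"user","content":"ping"}]}'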

Worked examples

Every example below needs only the --name openai credential and the upstream’s own model ID in hiveloom agent create --model …. The credential is still named openai because requests go through the OpenAI-spec client; the actual upstream is decided by the base URL.

Provider          HIVELOOM_OPENAI_BASE_URL        Example --model
OpenAI (default)  unset                           gpt-4o-mini
OpenRouter        https://openrouter.ai/api/v1    anthropic/claude-sonnet-4, meta-llama/llama-3.1-70b-instruct
Groq              https://api.groq.com/openai/v1  llama-3.1-70b-versatile
Together          https://api.together.xyz/v1     meta-llama/Llama-3.1-70B-Instruct-Turbo
DeepSeek          https://api.deepseek.com/v1     deepseek-chat
Mistral           https://api.mistral.ai/v1       mistral-large-latest
Ollama (local)    http://127.0.0.1:11434/v1       llama3:70b
LiteLLM proxy     your proxy’s /v1 URL            whatever LiteLLM exposes
vLLM              http://<host>:<port>/v1         the model you launched vLLM with
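
For example, with the Groq base URL from the table set on the serve process, the agent is created with the model ID exactly as Groq spells it (any other agent create flags are whatever your setup already uses):

# serve restarted with HIVELOOM_OPENAI_BASE_URL=https://api.groq.com/openai/v1
hiveloom agent create --model llama-3.1-70b-versatile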

For Ollama and other unauthenticated local servers, store any non-empty string under the openai credential — most local runners ignore the Authorization header but the binary still sends it.
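
Because the header is always sent, it is worth confirming the local server tolerates it before wiring up an agent. A minimal check against Ollama’s OpenAI-compatible endpoint, using the model ID from the table and a placeholder token:

curl -s http://127.0.0.1:11434/v1/chat/completions \
  -H 'Content-Type: application/json' \
  -H 'Authorization: Bearer placeholder' \
  -d '{"model":"llama3:70b","messages":[{"role":"user","content":"ping"}]}'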

Caveats

  • One base URL per hiveloom serve process. A single instance can’t route different agents to different upstreams; run a separate Hiveloom instance, or put a LiteLLM proxy in front, if you need that (see the sketch after this list).
  • The router decides between the Anthropic client and the OpenAI-spec client purely by the claude- model prefix. To call Claude through OpenRouter, ask for the OpenAI-spec model name (e.g. anthropic/claude-sonnet-4); that doesn’t start with claude-, so it goes through the OpenAI-spec client and hits OpenRouter.
  • Tool-calling and streaming behavior depend on the upstream’s spec conformance. Most of the providers above implement function/tool calls faithfully; a few don’t, in which case the agent loop will skip tool use for that model.
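
For the single-upstream caveat, a minimal sketch of the proxy approach: run LiteLLM in front of your providers (with their keys in the proxy’s environment) and point the one base URL at it. The file name, model aliases, and port below are illustrative; the entries follow LiteLLM’s model_list config format, so check LiteLLM’s own docs for the exact fields your providers need.

# litellm-config.yaml
model_list:
  - model_name: gpt-4o-mini
    litellm_params:
      model: openai/gpt-4o-mini
  - model_name: groq-llama
    litellm_params:
      model: groq/llama-3.1-70b-versatile

litellm --config litellm-config.yaml --port 4000

Then set HIVELOOM_OPENAI_BASE_URL=http://127.0.0.1:4000/v1 as in the systemd drop-in above; each agent picks its upstream by model name alone.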

Back to the basics

Return to Store an LLM credential for the Anthropic and OpenAI quickstart and the rotate/remove commands.
