API Reference

ClosedRouter is a fully OpenAI-compatible proxy. Point your existing SDK or HTTP client at our endpoint and route requests to every major AI provider through a single API key.

Quickstart

1. Get an API key

Sign up at accounts.closedrouter.net (invite-only beta). Your API key is generated automatically.

2. Make your first request

bash
curl https://api.closedrouter.net/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "messages": [
      {"role": "user", "content": "Hello! What model am I talking to?"}
    ]
  }'
Response
{
  "id": "chatcmpl-abc123",
  "object": "chat.completion",
  "model": "openai/gpt-4o-mini",
  "choices": [{
    "message": {
      "role": "assistant",
      "content": "You're talking to GPT-4o-mini, routed through ClosedRouter!"
    },
    "finish_reason": "stop"
  }],
  "usage": {
    "prompt_tokens": 12,
    "completion_tokens": 14,
    "total_tokens": 26
  }
}
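
The curl request above can also be made from plain Python with only the standard library. This is a minimal sketch, not part of the official SDK; the helper names are illustrative, and the network call is commented out because it needs a real key:

```python
import json
import urllib.request

API_URL = "https://api.closedrouter.net/v1/chat/completions"

def build_chat_request(model: str, user_content: str) -> dict:
    """Build the minimal chat-completions payload shown above."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_content}],
    }

def send_chat_request(payload: dict, api_key: str) -> dict:
    """POST the payload and return the decoded JSON response."""
    req = urllib.request.Request(
        API_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_chat_request("openai/gpt-4o-mini",
                             "Hello! What model am I talking to?")
# send_chat_request(payload, "YOUR_API_KEY")  # requires a valid key
```

The response dict has the same shape as the JSON shown above, so the reply text is at `response["choices"][0]["message"]["content"]`.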

3. Swap providers instantly

Change the model field to route to any provider. Same request shape, different model:

bash
# Google Gemini
curl https://api.closedrouter.net/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "google/gemini-2.5-flash", "messages": [{"role": "user", "content": "Hi"}]}'

# xAI Grok
curl https://api.closedrouter.net/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "x-ai/grok-3", "messages": [{"role": "user", "content": "Hi"}]}'

# Zhipu GLM
curl https://api.closedrouter.net/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "z-ai/glm-5.1", "messages": [{"role": "user", "content": "Hi"}]}'
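
Because only the model field changes, the three requests above share one body shape. A small sketch of generating them programmatically (the helper name is illustrative):

```python
def payloads_for(models, prompt):
    """Build one identically shaped request body per model ID."""
    return [
        {"model": m, "messages": [{"role": "user", "content": prompt}]}
        for m in models
    ]

bodies = payloads_for(
    ["google/gemini-2.5-flash", "x-ai/grok-3", "z-ai/glm-5.1"], "Hi"
)
# Each body can be POSTed to /v1/chat/completions unchanged.
```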

Authentication

Include your API key in the Authorization header as a Bearer token:

bash
Authorization: Bearer sk-xxxxxxxxxxxxxxxx
Keep your key secret. Never expose it in client-side code. Use environment variables or a secrets manager.
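
A common pattern is to read the key from an environment variable when building headers. A sketch, where the variable name CLOSEDROUTER_API_KEY is our own convention, not something the API prescribes:

```python
import os

def auth_headers(api_key=None):
    """Headers for every ClosedRouter request.

    Falls back to the CLOSEDROUTER_API_KEY environment variable
    (an illustrative name) when no key is passed explicitly.
    """
    key = api_key or os.environ["CLOSEDROUTER_API_KEY"]
    return {
        "Authorization": f"Bearer {key}",
        "Content-Type": "application/json",
    }
```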

Models

Models are identified by provider/model-name. Browse all available models at closedrouter.net/models.

| Model ID | Provider | Context |
|---|---|---|
| openai/gpt-4o-mini | OpenAI | 128K |
| google/gemini-2.5-flash | Google | 1M |
| x-ai/grok-3 | xAI | 131K |
| z-ai/glm-5.1 | Zhipu | 128K |
| deepseek/deepseek-chat-v3.1 | DeepSeek | 128K |
| moonshotai/kimi-k2.5 | Moonshot | 128K |

List models

bash
curl https://api.closedrouter.net/v1/models \
  -H "Authorization: Bearer YOUR_API_KEY"
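
Assuming the listing follows the standard OpenAI shape of `{"data": [{"id": ...}, ...]}` (which the compatibility claim implies but this page does not show), the provider/model naming makes it easy to filter by provider:

```python
def models_by_provider(models_response, provider):
    """Return the model IDs belonging to one provider.

    Assumes the OpenAI-style listing {"data": [{"id": "provider/model"}, ...]}.
    """
    return [
        m["id"]
        for m in models_response.get("data", [])
        if m["id"].startswith(provider + "/")
    ]

sample = {"data": [{"id": "openai/gpt-4o-mini"}, {"id": "x-ai/grok-3"}]}
# models_by_provider(sample, "openai") -> ["openai/gpt-4o-mini"]
```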

Chat completions

POST /v1/chat/completions

Drop-in replacement for the OpenAI chat completions endpoint. Supports all standard parameters.

| Parameter | Type | Description |
|---|---|---|
| model | string | Required. Model ID in provider/model format. |
| messages | array | Required. Array of message objects with role and content. |
| temperature | number | Sampling temperature (0-2). Default: 1. |
| max_tokens | integer | Max tokens in the response. |
| top_p | number | Nucleus sampling threshold. |
| stream | boolean | Stream partial results via SSE. Default: false. |
| stop | string or array | Stop sequences. |
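
Since the optional parameters default server-side, a request body only needs the keys you actually set. A sketch of assembling one (the helper name is ours):

```python
def chat_payload(model, messages, *, temperature=None, max_tokens=None,
                 top_p=None, stream=None, stop=None):
    """Assemble a request body, omitting parameters left unset so the
    server-side defaults apply."""
    payload = {"model": model, "messages": messages}
    optional = {"temperature": temperature, "max_tokens": max_tokens,
                "top_p": top_p, "stream": stream, "stop": stop}
    payload.update({k: v for k, v in optional.items() if v is not None})
    return payload
```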

Streaming

Set stream: true to receive partial results as server-sent events:

bash
curl https://api.closedrouter.net/v1/chat/completions \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "openai/gpt-4o-mini",
    "stream": true,
    "messages": [{"role": "user", "content": "Count to 5"}]
  }'
SSE response
data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":"1,"},"index":0}]}
data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":"2,"},"index":0}]}
data: {"id":"chatcmpl-abc","object":"chat.completion.chunk","choices":[{"delta":{"content":"3,"},"index":0}]}
data: [DONE]
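
Consuming the stream means reading `data:` lines, decoding each chunk, and concatenating the `delta.content` pieces until the `[DONE]` sentinel. A sketch of that parsing step, assuming the chunk shape shown above:

```python
import json

def collect_stream(sse_lines):
    """Join delta.content pieces from SSE data lines, stopping at [DONE]."""
    parts = []
    for line in sse_lines:
        if not line.startswith("data: "):
            continue  # skip blank keep-alive lines and comments
        data = line[len("data: "):]
        if data.strip() == "[DONE]":
            break
        chunk = json.loads(data)
        delta = chunk["choices"][0].get("delta", {})
        if "content" in delta:
            parts.append(delta["content"])
    return "".join(parts)
```

Applied to the example stream above, this yields the assembled text `"1,2,3,"`.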

Using the OpenAI SDK

Point the SDK's base URL (base_url in Python, baseURL in Node) at ClosedRouter. Everything else stays the same.

Python

python
from openai import OpenAI

client = OpenAI(
    base_url="https://api.closedrouter.net/v1",
    api_key="YOUR_API_KEY"
)

response = client.chat.completions.create(
    model="openai/gpt-4o-mini",
    messages=[{"role": "user", "content": "Hello!"}]
)

print(response.choices[0].message.content)

TypeScript / Node

typescript
import OpenAI from "openai";

const client = new OpenAI({
  baseURL: "https://api.closedrouter.net/v1",
  apiKey: "YOUR_API_KEY",
});

const response = await client.chat.completions.create({
  model: "openai/gpt-4o-mini",
  messages: [{ role: "user", content: "Hello!" }],
});

console.log(response.choices[0].message.content);

For AI agents and tools

ClosedRouter works with any tool that accepts an OpenAI-compatible endpoint:

env
OPENAI_BASE_URL=https://api.closedrouter.net/v1
OPENAI_API_KEY=YOUR_API_KEY

Errors

| Status | Code | Meaning |
|---|---|---|
| 401 | authentication_error | Invalid or missing API key. |
| 403 | forbidden | Account does not have access to this model. |
| 404 | model_not_found | The requested model does not exist. |
| 429 | rate_limit_exceeded | Too many requests. Retry after the interval given in the rate-limit headers. |
| 500 | upstream_error | The upstream provider returned an error. |
| 502 | bad_gateway | Could not reach the upstream provider. |
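
Of these, 429 and the 5xx errors are transient and worth retrying with backoff, while 4xx authentication and not-found errors are not. A sketch of that policy (which statuses to retry is our judgment call, not an API guarantee):

```python
import time

RETRYABLE = {429, 500, 502}  # transient per the table above (our reading)

def with_retries(call, max_attempts=3, base_delay=1.0):
    """Retry call() on retryable statuses with exponential backoff.

    `call` must return a (status, body) tuple; non-retryable statuses
    are returned immediately.
    """
    for attempt in range(max_attempts):
        status, body = call()
        if status not in RETRYABLE or attempt == max_attempts - 1:
            return status, body
        time.sleep(base_delay * 2 ** attempt)  # 1s, 2s, 4s, ...
```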

Rate limits

During closed beta, rate limits are generous:

| Limit | Value |
|---|---|
| Requests per minute | 60 |
| Tokens per minute | 200,000 |
| Max context length | Model-dependent |

Rate limit headers are included in every response:

headers
x-ratelimit-limit: 60
x-ratelimit-remaining: 58
x-ratelimit-reset: 30
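
A client can check these headers before sending the next request. A sketch, assuming (from the example above) that x-ratelimit-reset is a relative number of seconds until the window resets:

```python
def seconds_until_reset(headers):
    """Return how long to wait when the window is exhausted, else 0.

    Assumes x-ratelimit-reset holds relative seconds, as in the
    example response above.
    """
    remaining = int(headers.get("x-ratelimit-remaining", 1))
    if remaining > 0:
        return 0
    return int(headers.get("x-ratelimit-reset", 0))
```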