LobeChat works well as a self-hosted chat frontend, a personal multi-model workspace, or a shared team chat surface. When connecting it to Crazyrouter, the most reliable route is LobeChat’s OpenAI path with the proxy URL pointed at Crazyrouter’s OpenAI-compatible base.

Overview

Using LobeChat’s OpenAI settings, you can route chat traffic through Crazyrouter:
  • Recommended protocol: OpenAI-compatible API
  • Recommended route: the LobeChat OpenAI provider
  • Base URL: https://crazyrouter.com/v1
  • Auth method: sk-... token
  • Recommended first validation model: gpt-5.4
If you self-host LobeChat, you can also preconfigure Crazyrouter as the default OpenAI upstream through environment variables.

Best For

  • teams or individuals who want a stable chat frontend
  • self-hosted AI chat workspaces
  • users who want conversation history with multiple model choices
  • admins who want to ship a default model configuration to internal users

Protocol Used

Recommended protocol: OpenAI-compatible API.

When connecting Crazyrouter in LobeChat, use this OpenAI-compatible base URL:
https://crazyrouter.com/v1
Do not enter:
  • https://crazyrouter.com
  • https://crazyrouter.com/v1/chat/completions
LobeChat documents OPENAI_PROXY_URL as the base URL for OpenAI API requests, and its defaults and official examples follow the {address}/v1 pattern. For Crazyrouter, https://crazyrouter.com/v1 is therefore the correct first-pass setup. If your own reverse proxy already appends /v1, adjust that layer so the suffix is not duplicated.
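The duplicated-suffix pitfall can be sketched as a small normalization helper. This is an illustration, not part of LobeChat or Crazyrouter; it simply shows what "exactly one /v1" means for the final URL:

```python
# Illustrative sketch: normalize a base URL so the final value ends in
# exactly one /v1, catching the duplicated-suffix case described above.

def normalize_base_url(url: str) -> str:
    """Return the URL with trailing slashes removed and exactly one /v1 suffix."""
    url = url.rstrip("/")
    # Strip any repeated /v1 suffixes, then append a single one.
    while url.endswith("/v1"):
        url = url[: -len("/v1")]
    return url + "/v1"

print(normalize_base_url("https://crazyrouter.com"))        # https://crazyrouter.com/v1
print(normalize_base_url("https://crazyrouter.com/v1/v1"))  # https://crazyrouter.com/v1
```

Whatever layer you control last (LobeChat setting, gateway, or reverse proxy) should produce a value that this kind of check would leave unchanged.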

Prerequisites

| Item | Details |
| --- | --- |
| Crazyrouter account | Register first at crazyrouter.com |
| Crazyrouter token | Create a dedicated sk-... token for LobeChat |
| LobeChat | Hosted or self-hosted is fine; use a current stable build |
| Available models | Allow at least one verified OpenAI-compatible chat model such as gpt-5.4 |
Recommended starting whitelist:
  • gpt-5.4
  • claude-sonnet-4-6
  • gemini-3-pro-preview
If you also plan to connect Crazyrouter to Cursor, Codex, or Claude Code, keep LobeChat on its own token. It makes cost tracking much easier.
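The cost-tracking benefit of a dedicated token can be shown with a toy aggregation. The log records and field names below are invented for the example; real Crazyrouter log fields may differ:

```python
# Hypothetical illustration of per-token cost tracking. With one token per
# tool (lobechat, cursor, ...), summing cost by token name separates each
# tool's spend cleanly.
from collections import defaultdict

def cost_by_token(records):
    """Sum cost per token name from a list of usage records."""
    totals = defaultdict(float)
    for rec in records:
        totals[rec["token"]] += rec["cost"]
    return dict(totals)

logs = [
    {"token": "lobechat", "model": "gpt-5.4", "cost": 0.012},
    {"token": "cursor", "model": "claude-sonnet-4-6", "cost": 0.030},
    {"token": "lobechat", "model": "claude-sonnet-4-6", "cost": 0.021},
]
print(cost_by_token(logs))
```

If LobeChat shared a token with an IDE tool, the two lines of spend would collapse into one bucket and this breakdown would be impossible.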

5-Minute Quick Start

Step 1: Create a dedicated LobeChat token

In the Crazyrouter dashboard, create a token named lobechat. For the first rollout, allow only the models you actually need, such as gpt-5.4 and claude-sonnet-4-6.

Step 2: Open the language model settings

In LobeChat, open Settings → Language Model from the avatar menu or settings entry.

Step 3: Configure the OpenAI path

In the OpenAI configuration, enter:
  • API Key: your sk-... token
  • API Proxy URL: https://crazyrouter.com/v1
Also enable the custom proxy URL option if your version exposes it.

Step 4: Pick one baseline model

Save the settings and choose gpt-5.4 as the default model. Do not start with a large model list.

Step 5: Run the first validation prompt

Start a new conversation and send "Reply only OK". If the reply returns successfully and the request appears in the Crazyrouter logs, the integration is working.
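The same validation can be run outside LobeChat, which is handy for separating frontend problems from routing problems. This sketch assumes an OpenAI-compatible POST /chat/completions endpoint; the API key below is a placeholder, not a real token:

```python
# Minimal sketch of the "Reply only OK" validation done directly against the
# OpenAI-compatible endpoint. build_validation_request() only constructs the
# request; send_validation() performs the actual network call.
import json
import urllib.request

BASE_URL = "https://crazyrouter.com/v1"
API_KEY = "sk-xxx"  # placeholder: use your dedicated lobechat token

def build_validation_request(model="gpt-5.4"):
    """Build URL, headers, and JSON body for a one-shot 'Reply only OK' test."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = {"model": model, "messages": [{"role": "user", "content": "Reply only OK"}]}
    return url, headers, body

def send_validation():
    """Send the request (network call; run manually with a real token)."""
    url, headers, body = build_validation_request()
    req = urllib.request.Request(url, data=json.dumps(body).encode(), headers=headers)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["choices"][0]["message"]["content"]

print(build_validation_request()[0])  # https://crazyrouter.com/v1/chat/completions
```

If this direct call succeeds but LobeChat still fails, the problem is in the LobeChat settings rather than the token or the route.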

Self-Hosted Quick Config

If you deploy LobeChat with Docker, you can preconfigure the default OpenAI path like this:
services:
  lobechat:
    image: lobehub/lobe-chat
    ports:
      - "3210:3210"
    environment:
      - OPENAI_API_KEY=sk-xxx
      - OPENAI_PROXY_URL=https://crazyrouter.com/v1
      - OPENAI_MODEL_LIST=gpt-5.4,claude-sonnet-4-6,gemini-3-pro-preview
If you do not want end users to replace the key freely, combine this with the config controls your deployed LobeChat version supports for hosted defaults or restricted client-side customization.
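The OPENAI_MODEL_LIST value above is a comma-separated list. As an illustration of how such a value can back a simple whitelist check (this is not LobeChat's actual parser, which supports richer syntax such as display names):

```python
# Illustrative only: parse a comma-separated model list like the
# OPENAI_MODEL_LIST value above into a whitelist check.

def parse_model_list(raw: str) -> list[str]:
    """Split a comma-separated model list, dropping blanks and whitespace."""
    return [m.strip() for m in raw.split(",") if m.strip()]

ALLOWED = parse_model_list("gpt-5.4,claude-sonnet-4-6,gemini-3-pro-preview")

def is_allowed(model: str) -> bool:
    return model in ALLOWED

print(is_allowed("gpt-5.4"))       # True
print(is_allowed("gpt-4o-mini"))   # False
```

Keeping this list short in the environment config is the self-hosted equivalent of the token whitelist recommended earlier.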
| Use case | Recommended model | Why |
| --- | --- | --- |
| Default main chat model | gpt-5.4 | Verified successfully in production on March 23, 2026, and suited for the main LobeChat baseline |
| Higher-quality writing / code help | claude-sonnet-4-6 | Strong long-form writing and reasoning |
| Gemini fallback path | gemini-3-pro-preview | Useful as a second vendor-compatible validation path |
Recommended order: get gpt-5.4 working first, then expand to claude-sonnet-4-6 and gemini-3-pro-preview.

Token Setup Best Practices

| Setting | Recommendation | Notes |
| --- | --- | --- |
| Dedicated token | Required | Do not share the same token with IDE or CLI tools |
| Model whitelist | Strongly recommended | Allow only the models the chat frontend should use |
| IP restriction | Recommended for fixed self-hosted egress | Use carefully on changing home or mobile networks |
| Quota cap | Strongly recommended | Team chat frontends can burn through a shared token quickly |
| Environment separation | Recommended | Use different tokens for staging and production |
| Default model control | Recommended | Keep premium models out of the default path unless needed |

Verification Checklist

  • API Key is saved correctly
  • API Proxy URL is set to https://crazyrouter.com/v1
  • the custom proxy URL option is enabled if required by your version
  • the first model is selected successfully
  • the first chat request succeeds
  • streaming works normally
  • the request appears in the Crazyrouter logs
  • token quota and model whitelist match your plan
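For the streaming check, it helps to know what "streaming works normally" looks like on the wire: OpenAI-compatible APIs stream responses as server-sent events, where each "data:" line carries a JSON chunk and "data: [DONE]" ends the stream. A sketch of reassembling the streamed text from such lines:

```python
# Reassemble streamed text from OpenAI-style SSE lines. Each "data:" line is
# a JSON chunk whose choices[0].delta.content holds the next text fragment;
# "data: [DONE]" terminates the stream.
import json

def collect_stream(lines):
    """Concatenate content deltas from SSE 'data:' lines."""
    text = []
    for line in lines:
        if not line.startswith("data:"):
            continue
        payload = line[len("data:"):].strip()
        if payload == "[DONE]":
            break
        chunk = json.loads(payload)
        delta = chunk["choices"][0]["delta"].get("content", "")
        text.append(delta)
    return "".join(text)

sample = [
    'data: {"choices":[{"delta":{"content":"O"}}]}',
    'data: {"choices":[{"delta":{"content":"K"}}]}',
    "data: [DONE]",
]
print(collect_stream(sample))  # OK
```

If streaming stalls mid-response, the Crazyrouter logs will usually show whether the upstream chunks stopped or the client dropped the connection.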

Common Errors and Fixes

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| Settings cannot be saved or validation fails | Wrong API key or wrong proxy URL | Recheck the sk-... value and https://crazyrouter.com/v1 |
| 401 unauthorized | Token expired, was removed, or is invalid | Generate a new token and replace it |
| 403 / model not allowed | Selected model is missing from the token whitelist | Allow that model in Crazyrouter |
| 404 | Proxy URL was entered as the root domain or a full endpoint path | Change it to https://crazyrouter.com/v1 |
| Settings look correct but you still get 404s | Your own gateway, reverse proxy, or deployment layer already appends /v1 once | Inspect the upstream chain and remove the duplicated suffix from one layer |
| Model looks selectable but requests fail | LobeChat cached an old model list or the default model is not valid | Switch back to gpt-5.4, refresh, and reselect |
| Chat works but streaming is unstable | Client-version or model compatibility issue | Use gpt-5.4 as the baseline and upgrade LobeChat |
| Usage spikes when multiple users share the app | One shared token has no quota cap | Set a quota cap for the team frontend or split tokens |
| Users can still switch away from Crazyrouter after self-hosting | You only set defaults, not restrictions | Add deployment-side controls or disable client customization where supported |
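The status-code rows above can be condensed into a small triage helper. The mapping is a summary of this guide, not an official Crazyrouter error catalog:

```python
# Hypothetical triage helper mirroring the common-errors table: map an HTTP
# status from a failed request to the most likely fix described in this guide.

FIXES = {
    401: "Token expired, removed, or invalid: generate a new token.",
    403: "Model missing from the token whitelist: allow it in Crazyrouter.",
    404: "Proxy URL is the root domain or a full endpoint path: use https://crazyrouter.com/v1.",
}

def suggest_fix(status: int) -> str:
    """Return the likely fix for a status code, or a generic fallback."""
    return FIXES.get(status, "Check the Crazyrouter logs for details.")

print(suggest_fix(404))
```

For anything outside these codes (timeouts, connection resets), start from the Crazyrouter logs rather than the LobeChat UI.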

Performance and Cost Tips

  • Keep only gpt-5.4 in the first rollout
  • Use a cheaper default model for team chat only if you have separately validated that route in your own deployment
  • If you also enable knowledge features, plugins, or long-context sessions, give LobeChat its own quota cap
  • Separate staging and production tokens so internal testing does not affect real usage budgets
  • If usage looks suspicious, check the Crazyrouter logs first for long sessions or multiple users sharing one token

FAQ

Which Base URL should I use in LobeChat?

Use https://crazyrouter.com/v1.

Why should I not enter only the root domain here?

Because LobeChat’s OpenAI proxy settings work best when you give it the OpenAI-compatible base directly, not just the root domain.

What if my own reverse proxy already appends /v1?

Then do not add another /v1 in the final URL you expose to LobeChat. Check the proxy chain first so you do not end up with a duplicated suffix.

Which model should I test first?

Start with gpt-5.4.

Can LobeChat use multiple models with Crazyrouter?

Yes, but it is better to validate one model first and expand after that.

Should I hardcode the token in environment variables for self-hosting?

If you want tighter control over upstream routing and cost, yes. If you want each user to bring their own key, keep the client-side entry open.
If your goal is a stable chat frontend and a shared workspace, LobeChat is a strong choice. If your goal is agentic coding and automated edits, Cursor, Claude Code, Codex, and Cline should still take priority.