This guide connects LobeChat to Crazyrouter through LobeChat’s OpenAI path, with the proxy URL pointed at Crazyrouter’s OpenAI-compatible base.
Overview
Using LobeChat’s OpenAI settings, you can route chat traffic through Crazyrouter:

- Recommended protocol: OpenAI-compatible API
- Recommended route: the LobeChat OpenAI provider
- Base URL: https://crazyrouter.com/v1
- Auth method: sk-... token
- Recommended first validation model: gpt-5.4
Best For
- teams or individuals who want a stable chat frontend
- self-hosted AI chat workspaces
- users who want conversation history with multiple model choices
- admins who want to ship a default model configuration to internal users
Protocol Used
Recommended protocol: OpenAI-compatible API
When connecting Crazyrouter in LobeChat, use this OpenAI-compatible base URL:
https://crazyrouter.com/v1
LobeChat documents OPENAI_PROXY_URL as the OpenAI API request base URL, and its common defaults and examples are /v1-style. The local http://127.0.0.1:4000/api/status payload also exposes an official example using {address}/v1. So for Crazyrouter, https://crazyrouter.com/v1 is the correct first-pass setup. If your own reverse proxy already adds /v1, adjust that layer to avoid a duplicated suffix.

Prerequisites
| Item | Details |
|---|---|
| Crazyrouter account | Register first at crazyrouter.com |
| Crazyrouter token | Create a dedicated sk-... token for LobeChat |
| LobeChat | Hosted or self-hosted is fine; use a current stable build |
| Available models | Allow at least one verified OpenAI-compatible chat model such as gpt-5.4 |
gpt-5.4, claude-sonnet-4-6, gemini-3-pro-preview
5-Minute Quick Start
Create a dedicated LobeChat token
In the Crazyrouter dashboard, create a token named lobechat. For the first rollout, allow only the models you actually need, such as gpt-5.4 and claude-sonnet-4-6.

Open the language model settings
In LobeChat, open Settings → Language Model from the avatar menu or settings entry.

Configure the OpenAI path
In the OpenAI configuration, enter:

- API Key: your sk-... token
- API Proxy URL: https://crazyrouter.com/v1
Pick one baseline model
Save the settings and choose gpt-5.4 as the default model first. Do not start with a large model list.

Self-Hosted Quick Config
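A minimal Docker sketch follows. OPENAI_API_KEY and OPENAI_PROXY_URL are the environment variables LobeChat documents for the OpenAI path; the container name, port, and token value here are common defaults and placeholders to adjust for your own deployment:

```shell
# Sketch: self-hosted LobeChat with Crazyrouter preset as the OpenAI upstream.
# lobehub/lobe-chat and port 3210 are LobeChat's usual Docker defaults;
# the token value is a placeholder for your real sk-... token.
docker run -d --name lobe-chat \
  -p 3210:3210 \
  -e OPENAI_API_KEY=sk-your-lobechat-token \
  -e OPENAI_PROXY_URL=https://crazyrouter.com/v1 \
  lobehub/lobe-chat
```

With these defaults baked in, users who open the app start on the Crazyrouter path without touching client settings.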
If you deploy LobeChat with Docker, you can preconfigure the default OpenAI path through environment variables at deploy time.

Recommended Model Setup
| Use case | Recommended model | Why |
|---|---|---|
| Default main chat model | gpt-5.4 | Verified successfully in production on March 23, 2026, and suited for the main LobeChat baseline |
| Higher-quality writing / code help | claude-sonnet-4-6 | Strong long-form writing and reasoning |
| Gemini fallback path | gemini-3-pro-preview | Useful as a second vendor-compatible validation path |
Get gpt-5.4 working first, then expand to claude-sonnet-4-6 and gemini-3-pro-preview.
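When you do expand, you can confirm which models the lobechat token actually sees before selecting them in the client. This sketch uses the standard OpenAI-compatible model listing route; the token value is a placeholder:

```shell
# Sketch: list the models visible to the lobechat token.
# /v1/models is the standard OpenAI-compatible listing endpoint.
curl -s https://crazyrouter.com/v1/models \
  -H "Authorization: Bearer sk-your-lobechat-token"
```

If claude-sonnet-4-6 or gemini-3-pro-preview is missing from the response, adjust the token’s model whitelist in Crazyrouter first.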
Token Setup Best Practices
| Setting | Recommendation | Notes |
|---|---|---|
| Dedicated token | Required | Do not share the same token with IDE or CLI tools |
| Model whitelist | Strongly recommended | Allow only the models the chat frontend should use |
| IP restriction | Recommended for fixed self-hosted egress | Use carefully on changing home or mobile networks |
| Quota cap | Strongly recommended | Team chat frontends can burn through a shared token quickly |
| Environment separation | Recommended | Use different tokens for staging and production |
| Default model control | Recommended | Keep premium models out of the default path unless needed |
Verification Checklist
- API Key is saved correctly
- API Proxy URL is set to https://crazyrouter.com/v1
- the custom proxy URL option is enabled if required by your version
- the first model is selected successfully
- the first chat request succeeds
- streaming works normally
- the request appears in the Crazyrouter logs
- token quota and model whitelist match your plan
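Several checklist items (key, proxy URL, model access, streaming) can be sanity-checked outside LobeChat with one direct request. This sketch follows the standard OpenAI-compatible chat completions shape; the token value is a placeholder:

```shell
# Sketch: verify token, base URL, model access, and streaming in one request.
# Replace the placeholder token with your real sk-... value.
curl -N https://crazyrouter.com/v1/chat/completions \
  -H "Authorization: Bearer sk-your-lobechat-token" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "gpt-5.4",
    "stream": true,
    "messages": [{"role": "user", "content": "ping"}]
  }'
```

A streamed data: response confirms the path end to end; a 401, 403, or 404 maps directly onto the error table in the next section.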
Common Errors and Fixes
| Symptom | Likely cause | Fix |
|---|---|---|
| settings cannot be saved or validation fails | wrong API key or wrong proxy URL | recheck the sk-... value and https://crazyrouter.com/v1 |
| 401 unauthorized | token expired, was removed, or is invalid | generate a new token and replace it |
| 403 / model not allowed | selected model is missing from the token whitelist | allow that model in Crazyrouter |
| 404 | proxy URL was entered as the root domain or a full endpoint path | change it to https://crazyrouter.com/v1 |
| settings look correct but you still get 404s | your own gateway, reverse proxy, or deployment layer already appends /v1 once | inspect the upstream chain and remove the duplicated suffix from one layer |
| model looks selectable but requests fail | LobeChat cached an old model list or the default model is not valid | switch back to gpt-5.4, refresh, and reselect |
| chat works but streaming is unstable | client-version or model compatibility issue | use gpt-5.4 as the baseline and upgrade LobeChat |
| usage spikes when multiple users share the app | one shared token has no quota cap | set a quota cap for the team frontend or split tokens |
| users can still switch away from Crazyrouter after self-hosting | you only set defaults, not restrictions | add deployment-side controls or disable client customization where supported |
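The duplicated-/v1 row comes up often in layered deployments. The rule is that exactly one layer contributes the suffix; a small shell sketch of deriving the URL to hand to LobeChat (the flag value is something you set per your own proxy chain):

```shell
# Sketch: derive the URL to give LobeChat when an upstream layer may
# already append /v1. Only one layer should contribute the suffix.
base="https://crazyrouter.com/v1"
upstream_appends_v1=false   # set to true if your own reverse proxy adds /v1

if [ "$upstream_appends_v1" = true ]; then
  # Strip our copy so the final upstream URL is not .../v1/v1
  base="${base%/v1}"
fi

echo "LobeChat API Proxy URL: $base"
```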
Performance and Cost Tips
- Keep only gpt-5.4 in the first rollout
- Use a cheaper default model for team chat only if you have separately validated that route in your own deployment
- If you also enable knowledge features, plugins, or long-context sessions, give LobeChat its own quota cap
- Separate staging and production tokens so internal testing does not affect real usage budgets
- If usage looks suspicious, check the Crazyrouter logs first for long sessions or multiple users sharing one token
FAQ
Which Base URL should I use in LobeChat?
Use https://crazyrouter.com/v1.
Why should I not enter only the root domain here?
Because LobeChat’s OpenAI proxy settings work best when you give it the OpenAI-compatible base directly, not just the root domain.
What if my own reverse proxy already appends /v1?
Then do not add another /v1 in the final URL you expose to LobeChat. Check the proxy chain first so you do not end up with a duplicated suffix.
Which model should I test first?
Start with gpt-5.4.
Can LobeChat use multiple models with Crazyrouter?
Yes, but it is better to validate one model first and expand after that.
Should I hardcode the token in environment variables for self-hosting?
If you want tighter control over upstream routing and cost, yes. If you want each user to bring their own key, keep the client-side entry open.

If your goal is a stable chat frontend and a shared workspace, LobeChat is a strong choice. If your goal is agentic coding and automated edits, Cursor, Claude Code, Codex, and Cline should still take priority.