Overview
Using NextChat’s OpenAI settings, you can route requests through Crazyrouter:

- Recommended protocol: OpenAI-compatible API
- Recommended route: the default NextChat OpenAI flow
- Base URL: https://crazyrouter.com
- Auth method: sk-... token
- Recommended first validation model: gpt-5.4
NextChat’s official README documents BASE_URL as the OpenAI API request base URL and uses root-domain style examples. The local http://127.0.0.1:4000/api/status response also exposes a root-style server_address. So for Crazyrouter, the safest first-pass setup is https://crazyrouter.com; only switch to a more specific path if your customized build explicitly requires it.

Best For
- users who want a lightweight chat frontend quickly
- personal or small-team web chat with Crazyrouter
- admins who want to ship default model and upstream settings through env vars
- users who want the simplest possible OpenAI-compatible first setup
Protocol Used
Recommended protocol: OpenAI-compatible API
When connecting Crazyrouter in NextChat, start with these endpoints:

https://crazyrouter.com/v1/chat/completions
https://crazyrouter.com/v1/models
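Before wiring up NextChat, you can sanity-check both endpoints directly from a terminal. This is a minimal sketch that assumes Crazyrouter exposes a standard OpenAI-compatible surface and that your token is exported as CRAZYROUTER_KEY (a placeholder name, not something NextChat or Crazyrouter defines):

```shell
# List the models your token is allowed to use (expects a JSON "data" array).
curl -s https://crazyrouter.com/v1/models \
  -H "Authorization: Bearer $CRAZYROUTER_KEY"

# Send one minimal chat request against the baseline model.
curl -s https://crazyrouter.com/v1/chat/completions \
  -H "Authorization: Bearer $CRAZYROUTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-5.4", "messages": [{"role": "user", "content": "ping"}]}'
```

If both calls return JSON rather than a 401 or 404, the same base URL and key should work in NextChat.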
Prerequisites
| Item | Details |
|---|---|
| Crazyrouter account | Register first at crazyrouter.com |
| Crazyrouter token | Create a dedicated sk-... token for NextChat |
| NextChat | Hosted or self-hosted is fine; use a current stable version |
| Available models | Allow at least one verified OpenAI-compatible chat model such as gpt-5.4 |
Verified model examples: gpt-5.4, claude-sonnet-4-6, gemini-3-pro-preview
5-Minute Quick Start
Create a dedicated NextChat token

In the Crazyrouter dashboard, create a token named nextchat. For the first rollout, allow only baseline models such as gpt-5.4 and claude-sonnet-4-6.

Open NextChat settings

In NextChat, open the Settings panel from the bottom-left icon or settings entry.

Enter the endpoint and key

In the OpenAI-related settings, enter:

API Key: your sk-... token
Base URL: https://crazyrouter.com

Set the first model

In the Model field, manually enter or select gpt-5.4. If your version supports a custom model list, add more models later.

Self-Hosted Quick Config
A common Docker setup looks like this:

| Variable | Recommended value | Notes |
|---|---|---|
| OPENAI_API_KEY | sk-xxx | default Crazyrouter token |
| BASE_URL | https://crazyrouter.com | use the root domain first, matching the official BASE_URL pattern for baseline validation |
| CUSTOM_MODELS | +gpt-5.4,+claude-sonnet-4-6,+gemini-3-pro-preview | adds selectable models in the UI |
| HIDE_USER_API_KEY | 1 | prevents end users from entering their own key |
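The table above maps directly onto a docker run invocation. A sketch, assuming the widely used yidadaa/chatgpt-next-web image and its default port 3000; check your own deployment for the exact image name, tag, and port:

```shell
# Self-hosted NextChat pointed at Crazyrouter, with a fixed model list
# and user-supplied keys disabled. Replace sk-xxx with your real token.
docker run -d -p 3000:3000 \
  -e OPENAI_API_KEY="sk-xxx" \
  -e BASE_URL="https://crazyrouter.com" \
  -e CUSTOM_MODELS="+gpt-5.4,+claude-sonnet-4-6,+gemini-3-pro-preview" \
  -e HIDE_USER_API_KEY=1 \
  yidadaa/chatgpt-next-web
```

Because these values are baked into the container environment, the frontend settings panel may not show them; that behavior is expected and covered in the errors table below.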
Recommended Model Setup
| Use case | Recommended model | Why |
|---|---|---|
| Default main chat model | gpt-5.4 | Verified successfully in production on March 23, 2026, and suited for the main NextChat baseline |
| Higher-quality long-form and explanation | claude-sonnet-4-6 | Better for longer text and more complex explanations |
| Gemini fallback path | gemini-3-pro-preview | Useful as a second vendor-compatible validation path |
Start with gpt-5.4 first, then expand the list with CUSTOM_MODELS.
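A staged rollout of the model list can be expressed as two CUSTOM_MODELS values. This sketch assumes the -all / +model syntax from NextChat’s README, where -all hides the built-in defaults and each +name adds one selectable model:

```shell
# Phase 1: hide built-in defaults, expose only the baseline model.
CUSTOM_MODELS="-all,+gpt-5.4"

# Phase 2: once the baseline is validated, widen the list.
CUSTOM_MODELS="-all,+gpt-5.4,+claude-sonnet-4-6,+gemini-3-pro-preview"
```

Starting narrow keeps the first rollout easy to debug: any failure is against one known model rather than a mixed list.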
Token Setup Best Practices
| Setting | Recommendation | Notes |
|---|---|---|
| Dedicated token | Required | Do not share it with LobeChat, Cursor, or Codex |
| Model whitelist | Strongly recommended | Allow only the models the frontend should expose |
| IP restriction | Consider it for fixed self-hosted egress | Use carefully on changing personal networks |
| Quota cap | Strongly recommended | Multi-user chat traffic can grow fast |
| Environment separation | Recommended | Use separate tokens for demo, staging, and production |
| User-supplied key entry | Disable by default | Set HIDE_USER_API_KEY=1 if you want centralized cost control |
Verification Checklist
- API Key is saved correctly
- Base URL is set to https://crazyrouter.com
- the first model is set to gpt-5.4
- the first chat request succeeds
- CUSTOM_MODELS works if you use self-hosted deployment
- streaming output works normally
- the request appears in the Crazyrouter logs
- token quota and model whitelist match your rollout plan
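The connectivity items in the checklist can be exercised from the command line before opening the UI. A sketch, again assuming an OpenAI-compatible surface and a CRAZYROUTER_KEY placeholder variable:

```shell
# Check that the base URL and token resolve (expect 200).
curl -s -o /dev/null -w "models endpoint: %{http_code}\n" \
  https://crazyrouter.com/v1/models \
  -H "Authorization: Bearer $CRAZYROUTER_KEY"

# Check that a streamed chat request against gpt-5.4 starts returning
# server-sent events ("data:" lines) rather than an error body.
curl -sN https://crazyrouter.com/v1/chat/completions \
  -H "Authorization: Bearer $CRAZYROUTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-5.4", "stream": true, "messages": [{"role": "user", "content": "ping"}]}' \
  | head -n 5
```

If both checks pass here but NextChat still fails, the problem is in the frontend configuration rather than the token or route.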
Common Errors and Fixes
| Symptom | Likely cause | Fix |
|---|---|---|
| 401 unauthorized | token is wrong, expired, or pasted with extra spaces | create a new token and replace it |
| 403 / model not allowed | the model is not in the token whitelist | allow that model in Crazyrouter |
| 404 | you entered a full endpoint path, or your version expects a different env var name | switch back to https://crazyrouter.com and check whether your build uses BASE_URL or another setting name |
| the self-hosted UI does not show these settings | your deployment style fixes them in environment variables instead of the frontend settings panel | inspect OPENAI_API_KEY, BASE_URL, and CUSTOM_MODELS in the deployment config directly |
| no models appear in the UI | CUSTOM_MODELS is missing or old config is cached | validate with a manual gpt-5.4 entry first, then refresh and recheck env vars |
| request goes out but the model fails | default model name is wrong or unavailable | fall back to gpt-5.4 for baseline testing |
| users can still switch to their own key | HIDE_USER_API_KEY=1 was not set | add that variable in deployment config |
| usage grows too quickly | many users share one broad token | split tokens, reduce whitelist scope, and add quota caps |
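The first three rows of the table can be distinguished with one status-code probe. A sketch using curl’s write-out variable, with CRAZYROUTER_KEY as a placeholder for your token:

```shell
# Print only the HTTP status: 401 means a bad or expired token,
# 403 means the model is outside the token whitelist,
# 404 usually means a wrong base URL or a doubled endpoint path.
curl -s -o /dev/null -w "%{http_code}\n" \
  https://crazyrouter.com/v1/chat/completions \
  -H "Authorization: Bearer $CRAZYROUTER_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-5.4", "messages": [{"role": "user", "content": "ping"}]}'
```

Matching the printed status against the table above is usually faster than re-reading frontend error messages, which vary between NextChat versions.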
Performance and Cost Tips
- Default to exposing only gpt-5.4 in the first rollout
- Keep premium models out of the default list until you actually need them
- On public or semi-public deployments, hide user-supplied key entry whenever possible
- Separate demo traffic from production traffic with different tokens
- If usage looks abnormal, check Crazyrouter logs first for long sessions or many users sharing one key
FAQ
Which URL should I use in NextChat?
Start with https://crazyrouter.com.
Why does this guide not recommend /v1 first?
Because the official NextChat docs show BASE_URL in a root-domain style, and the local 4000 environment also exposes a root-style server_address. That makes the root domain the safest first-pass setup.
Which model should I test first?
Start with gpt-5.4.
Do I have to configure CUSTOM_MODELS?
No. You can manually enter gpt-5.4 first, validate the connection, and add a model list later.
Why do some self-hosted deployments only let me change env vars, not UI settings?
Because different NextChat deployment modes expose configuration differently. Some self-hosted setups pin upstream routing, keys, and models in environment variables, and the frontend only consumes those values.

Should I hide user-provided keys in self-hosted deployments?

Yes, if you want centralized upstream routing and predictable cost control.

If you want the lightest possible chat frontend with minimal deployment friction, NextChat is a strong fit. If you need richer collaboration or more advanced app-building features, LobeChat or Dify is usually the better next step.