NextChat (formerly ChatGPT Next Web) is a good fit when you want a lightweight, fast, easy-to-deploy chat frontend. When connecting it to Crazyrouter, the recommended approach is to stay on NextChat’s default OpenAI-compatible route and point the service URL at the Crazyrouter root domain.

Overview

Using NextChat’s OpenAI settings, you can route requests through Crazyrouter:
  • Recommended protocol: OpenAI-compatible API
  • Recommended route: the default NextChat OpenAI flow
  • Base URL: https://crazyrouter.com
  • Auth method: sk-... token
  • Recommended first validation model: gpt-5.4
NextChat’s official README documents BASE_URL as the base URL for OpenAI API requests and uses root-domain-style examples. A local deployment’s http://127.0.0.1:4000/api/status response likewise exposes a root-style server_address. For Crazyrouter, the safest first-pass setup is therefore https://crazyrouter.com; switch to a more specific path only if your customized build explicitly requires it.
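To see why the root domain is the right BASE_URL, note that NextChat appends the OpenAI-style request path itself. The sketch below is illustrative only; NextChat’s real routing code may normalize URLs differently, but the effect is the same:

```python
def compose_endpoint(base_url: str, path: str = "/v1/chat/completions") -> str:
    """Append the OpenAI-style request path to a configured base URL.

    Illustrative sketch: the frontend adds the /v1/... path itself,
    so the base URL should not already contain it.
    """
    return base_url.rstrip("/") + path

# Root domain -> correct endpoint
print(compose_endpoint("https://crazyrouter.com"))
# -> https://crazyrouter.com/v1/chat/completions

# Full endpoint as base -> doubled path, a typical cause of 404s
print(compose_endpoint("https://crazyrouter.com/v1/chat/completions"))
# -> https://crazyrouter.com/v1/chat/completions/v1/chat/completions
```

This is why entering the full /v1/chat/completions path as the base URL produces 404s: the path gets appended twice.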

Best For

  • users who want a lightweight chat frontend quickly
  • personal or small-team web chat with Crazyrouter
  • admins who want to ship default model and upstream settings through env vars
  • users who want the simplest possible OpenAI-compatible first setup

Protocol Used

Recommended protocol: OpenAI-compatible API

When connecting Crazyrouter in NextChat, start with:
https://crazyrouter.com
Do not start with:
  • https://crazyrouter.com/v1/chat/completions
  • https://crazyrouter.com/v1/models
If an older or customized NextChat build requires a more specific API host setting, adjust to that version after you validate the baseline setup with the root domain.
The exact entry point varies by deployment style: hosted usage often exposes a settings panel, while self-hosted usage often pushes the values through environment variables only. Regardless of the UI, keep the first-pass validation minimal: an API key, the root-domain BASE_URL, and one model (gpt-5.4).
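The minimal first request is a standard OpenAI-compatible chat completion. The builder below only assembles the request (it does not send it); sending it with curl or an HTTP client is left to the reader, and the header and body shapes follow the standard OpenAI chat completions format:

```python
import json

def build_validation_request(api_key: str, base_url: str = "https://crazyrouter.com"):
    """Assemble (url, headers, body) for a minimal OpenAI-compatible
    chat completion request. Sketch of the wire format only; nothing
    is sent over the network here."""
    url = base_url.rstrip("/") + "/v1/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = {
        "model": "gpt-5.4",
        "messages": [{"role": "user", "content": "Reply only OK"}],
    }
    return url, headers, json.dumps(body)

url, headers, body = build_validation_request("sk-xxx")
print(url)  # https://crazyrouter.com/v1/chat/completions
```

If this exact request fails, the problem is in the token, the base URL, or the model whitelist rather than in NextChat itself.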

Prerequisites

| Item | Details |
| --- | --- |
| Crazyrouter account | Register first at crazyrouter.com |
| Crazyrouter token | Create a dedicated sk-... token for NextChat |
| NextChat | Hosted or self-hosted is fine; use a current stable version |
| Available models | Allow at least one verified OpenAI-compatible chat model, such as gpt-5.4 |
Recommended starting whitelist:
  • gpt-5.4
  • claude-sonnet-4-6
  • gemini-3-pro-preview

5-Minute Quick Start

1. Create a dedicated NextChat token

In the Crazyrouter dashboard, create a token named nextchat. For the first rollout, allow only baseline models such as gpt-5.4 and claude-sonnet-4-6.

2. Open NextChat settings

In NextChat, open the Settings panel from the bottom-left icon or settings entry.

3. Enter the endpoint and key

In the OpenAI-related settings, enter:
  • API Key: your sk-...
  • Base URL: https://crazyrouter.com

4. Set the first model

In the Model field, manually enter or select gpt-5.4. If your version supports a custom model list, add more models later.

5. Run the first validation prompt

Start a new chat and send "Reply only OK". Once it works, add more models gradually.
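A successful reply comes back in the standard OpenAI chat completion shape. The helper below extracts the assistant text; the sample payload is illustrative, not a real Crazyrouter response:

```python
def extract_reply(response_json: dict) -> str:
    """Pull the assistant text out of an OpenAI-style chat completion."""
    return response_json["choices"][0]["message"]["content"].strip()

# Illustrative payload in the standard response shape
sample = {
    "id": "chatcmpl-example",
    "model": "gpt-5.4",
    "choices": [
        {"index": 0, "message": {"role": "assistant", "content": "OK"}}
    ],
}

assert extract_reply(sample) == "OK"  # validation passed
```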

Self-Hosted Quick Config

A common Docker setup looks like this:
```yaml
# docker-compose.yml
services:
  nextchat:
    image: yidadaa/chatgpt-next-web
    ports:
      - "3000:3000"
    environment:
      - OPENAI_API_KEY=sk-xxx
      - BASE_URL=https://crazyrouter.com
      - CUSTOM_MODELS=+gpt-5.4,+claude-sonnet-4-6,+gemini-3-pro-preview
      - HIDE_USER_API_KEY=1
```
Common environment variables:
| Variable | Recommended value | Notes |
| --- | --- | --- |
| OPENAI_API_KEY | sk-xxx | default Crazyrouter token |
| BASE_URL | https://crazyrouter.com | use the root domain first, matching the official BASE_URL pattern for baseline validation |
| CUSTOM_MODELS | +gpt-5.4,+claude-sonnet-4-6,+gemini-3-pro-preview | adds selectable models in the UI |
| HIDE_USER_API_KEY | 1 | prevents end users from entering their own key |
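The `+model` syntax in CUSTOM_MODELS adds entries to the selectable list; NextChat builds also support other forms such as `-model` for removal, so check your build's README for the full syntax. A rough sketch of the additive case only:

```python
def parse_custom_models(value: str) -> list:
    """Parse a CUSTOM_MODELS-style string into the models to expose.

    Sketch of the additive `+model` case only; real NextChat builds
    support additional forms (e.g. `-model` to hide a model).
    """
    added = []
    for entry in value.split(","):
        entry = entry.strip()
        if entry.startswith("+"):
            added.append(entry[1:])
    return added

models = parse_custom_models("+gpt-5.4,+claude-sonnet-4-6,+gemini-3-pro-preview")
print(models)  # ['gpt-5.4', 'claude-sonnet-4-6', 'gemini-3-pro-preview']
```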
Recommended models by use case:

| Use case | Recommended model | Why |
| --- | --- | --- |
| Default main chat model | gpt-5.4 | Verified successfully in production on March 23, 2026; suited as the main NextChat baseline |
| Higher-quality long-form and explanation | claude-sonnet-4-6 | Better for longer text and more complex explanations |
| Gemini fallback path | gemini-3-pro-preview | Useful as a second vendor-compatible validation path |

Recommended order: validate gpt-5.4 first, then expand the list with CUSTOM_MODELS.

Token Setup Best Practices

| Setting | Recommendation | Notes |
| --- | --- | --- |
| Dedicated token | Required | Do not share it with LobeChat, Cursor, or Codex |
| Model whitelist | Strongly recommended | Allow only the models the frontend should expose |
| IP restriction | Consider it for fixed self-hosted egress | Use carefully on changing personal networks |
| Quota cap | Strongly recommended | Multi-user chat traffic can grow fast |
| Environment separation | Recommended | Use separate tokens for demo, staging, and production |
| User-supplied key entry | Disable by default | Set HIDE_USER_API_KEY=1 if you want centralized cost control |

Verification Checklist

  • API Key is saved correctly
  • Base URL is set to https://crazyrouter.com
  • the first model is set to gpt-5.4
  • the first chat request succeeds
  • CUSTOM_MODELS works if you use a self-hosted deployment
  • streaming output works normally
  • the request appears in the Crazyrouter logs
  • token quota and model whitelist match your rollout plan
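Several checklist items can be verified from an OpenAI-style GET /v1/models listing. The helper below checks a parsed response body; the sample payload is illustrative, not a real Crazyrouter response:

```python
def whitelist_covers(models_json: dict, required: list) -> bool:
    """Check that every required model id appears in an OpenAI-style
    GET /v1/models response body: {"data": [{"id": ...}, ...]}."""
    available = {m["id"] for m in models_json.get("data", [])}
    return all(model in available for model in required)

# Illustrative listing in the standard response shape
sample = {"data": [{"id": "gpt-5.4"}, {"id": "claude-sonnet-4-6"}]}

assert whitelist_covers(sample, ["gpt-5.4"])
assert not whitelist_covers(sample, ["gemini-3-pro-preview"])
```

If a model you expect is missing from the listing, fix the token whitelist in Crazyrouter before debugging NextChat.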

Common Errors and Fixes

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| 401 unauthorized | token is wrong, expired, or pasted with extra spaces | create a new token and replace it |
| 403 / model not allowed | the model is not in the token whitelist | allow that model in Crazyrouter |
| 404 | you entered a full endpoint path, or your version expects a different env var name | switch back to https://crazyrouter.com and check whether your build uses BASE_URL or another setting name |
| self-hosted UI does not show these settings | your deployment fixes them in environment variables, not the frontend settings panel | inspect OPENAI_API_KEY, BASE_URL, and CUSTOM_MODELS in the deployment config directly |
| no models appear in the UI | CUSTOM_MODELS is missing or old config is cached | validate with a manual gpt-5.4 entry first, then refresh and recheck env vars |
| request goes out but the model fails | default model name is wrong or unavailable | fall back to gpt-5.4 for baseline testing |
| users can still switch to their own key | HIDE_USER_API_KEY=1 was not set | add that variable in the deployment config |
| usage grows too quickly | many users share one broad token | split tokens, reduce whitelist scope, and add quota caps |
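The status-code rows above can be condensed into a small triage helper. The messages are this guide's suggestions, not output from Crazyrouter or NextChat:

```python
def triage(status: int) -> str:
    """Map an upstream HTTP status to the likely first fix,
    following the common-errors table above."""
    fixes = {
        401: "Token wrong/expired or pasted with spaces: create a new token.",
        403: "Model not in the token whitelist: allow it in Crazyrouter.",
        404: "Base URL probably contains a path: use https://crazyrouter.com.",
    }
    return fixes.get(status, "Check Crazyrouter logs and deployment env vars.")

print(triage(404))
```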

Performance and Cost Tips

  • Default to exposing only gpt-5.4 in the first rollout
  • Keep premium models out of the default list until you actually need them
  • On public or semi-public deployments, hide user-supplied key entry whenever possible
  • Separate demo traffic from production traffic with different tokens
  • If usage looks abnormal, check Crazyrouter logs first for long sessions or many users sharing one key

FAQ

Which URL should I use in NextChat?

Start with https://crazyrouter.com.

Why does this guide not recommend /v1 first?

Because the official NextChat docs show BASE_URL in a root-domain style, and a local deployment on port 4000 also exposes a root-style server_address in its /api/status response. That makes the root domain the safest first-pass setup.

Which model should I test first?

Start with gpt-5.4.

Do I have to configure CUSTOM_MODELS?

No. You can manually enter gpt-5.4 first, validate the connection, and add a model list later.

Why do some self-hosted deployments only let me change env vars, not UI settings?

Because different NextChat deployment modes expose configuration differently. Some self-hosted setups pin upstream routing, keys, and models in environment variables, and the frontend only consumes those values.

Should I hide user-provided keys in self-hosted deployments?

Yes, if you want centralized upstream routing and predictable cost control.

If you want the lightest possible chat frontend with minimal deployment friction, NextChat is a strong fit. If you need richer collaboration or more advanced app-building features, LobeChat or Dify is usually the better next step.