The recommended way to connect Cherry Studio to Crazyrouter is OpenAI provider mode, because it usually gives you the cleanest model management, streaming, and general chat compatibility.
## Overview

Using Cherry Studio’s custom provider support, you can add Crazyrouter as an OpenAI-compatible upstream:

- Recommended protocol: OpenAI-compatible API
- Recommended provider type: OpenAI
- API URL: `https://crazyrouter.com`
- Auth method: `sk-...` token
- Recommended first validation model: `gpt-5.4`
Cherry Studio’s official docs explain that if the upstream endpoint is something like `https://xxx.com/v1/chat/completions`, you usually enter only the root URL in API URL. For Crazyrouter, start with `https://crazyrouter.com`, not `https://crazyrouter.com/v1`.

## Best For
- desktop users who want one place for multiple models
- teams or individuals doing daily chat, writing, translation, and research
- users who want to combine Crazyrouter with MCP tools
- users who want fast switching between cheap and strong models
## Protocol Used

Recommended protocol: OpenAI-compatible API

The recommended setup is to add a custom OpenAI provider and enter the Crazyrouter root domain in API URL. Cherry Studio then appends the standard OpenAI paths itself:

- `https://crazyrouter.com/v1/chat/completions`
- `https://crazyrouter.com/v1/models`

Entering a full endpoint path that ends with `#` forces Cherry Studio to use that exact address, which is only needed in special non-standard routing scenarios. Normal Crazyrouter chat setup does not need that.
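The URL rules above can be sketched as a small helper. This is an illustrative model of how an OpenAI-compatible client typically derives the request URL from the entered API URL, not Cherry Studio's actual implementation; the `#` exact-path convention follows the behavior described in this guide's FAQ.

```python
def resolve_endpoint(api_url: str, standard_path: str = "/v1/chat/completions") -> str:
    """Sketch: derive the full request URL from the API URL field.

    - A trailing '#' means "use this exact address, append nothing".
    - Otherwise the client appends the standard OpenAI path to the root URL.
    """
    if api_url.endswith("#"):
        return api_url.rstrip("#")  # exact-path mode for non-standard upstreams
    return api_url.rstrip("/") + standard_path


# Root-URL mode: the client appends the standard path itself.
print(resolve_endpoint("https://crazyrouter.com"))
# Exact-path mode: only for special non-standard routing scenarios.
print(resolve_endpoint("https://example.com/custom/chat#"))
```

This is why entering only `https://crazyrouter.com` is enough: the standard `/v1/...` suffix is added by the client.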
## Prerequisites
| Item | Details |
|---|---|
| Crazyrouter account | Register first at crazyrouter.com |
| Crazyrouter token | Create a dedicated sk-... token for Cherry Studio |
| Cherry Studio | Use a current stable desktop build |
| Available models | Allow at least one verified chat model such as gpt-5.4 |
Typical models to allow: `gpt-5.4`, `claude-sonnet-4-6`, `gemini-3-pro-preview`
## 5-Minute Quick Start

1. **Create a dedicated Cherry Studio token.** In the Crazyrouter dashboard, create a new token named `cherry-studio`. For the first pass, allow only 2 to 4 models you know you actually need.
2. **Add a custom provider.** Open Cherry Studio, go to Settings → Model Services, then click Add or + Add. Choose the provider type OpenAI and name it Crazyrouter.
3. **Enter the connection settings.** In the provider settings, enter the API Key (your `sk-...` token) and the API URL (`https://crazyrouter.com`), then click Check to validate the token and URL.
4. **Fetch and add models.** Open Manage, fetch the model list, and add the models you want to use. Seeing a model in the popup does not always mean it is already enabled in the provider list, so you may still need to click + to add it. For first-run validation, start with only one baseline model: `gpt-5.4`.

## Recommended Model Setup
| Use case | Recommended model | Why |
|---|---|---|
| Default chat model | gpt-5.4 | Verified successfully in production on March 23, 2026, and suited for the OpenAI-compatible baseline |
| Higher-quality code / long-form help | claude-sonnet-4-6 | Strong reasoning, writing, and code explanation |
| Gemini fallback path | gemini-3-pro-preview | Useful as a second vendor-compatible validation path |
Get `gpt-5.4` working first, then add `claude-sonnet-4-6` and `gemini-3-pro-preview`.
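Once the baseline model is added, the first request is a standard OpenAI-style chat completion. The sketch below builds such a request body locally (field names follow the standard OpenAI chat completions shape; no network call is made):

```python
import json


def build_chat_request(model: str, user_text: str, stream: bool = True) -> dict:
    """Minimal OpenAI-compatible chat completions request body (sketch)."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": user_text}],
        "stream": stream,  # desktop chat clients typically stream responses
    }


body = build_chat_request("gpt-5.4", "Say hello in one sentence.")
print(json.dumps(body, indent=2))
```

To test outside Cherry Studio, you could POST this body to `https://crazyrouter.com/v1/chat/completions` with an `Authorization: Bearer sk-...` header.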
## Token Setup Best Practices
| Setting | Recommendation | Notes |
|---|---|---|
| Dedicated token | Required | Do not share the same token with Cursor, Claude Code, or Codex |
| Model whitelist | Strongly recommended | Allow only the models Cherry Studio will actually use |
| IP restriction | Depends on your environment | Fine for fixed office networks; risky on laptops that change networks often |
| Quota cap | Strongly recommended | Desktop chat apps can generate lots of multi-turn traffic |
| Dev / shared machine separation | Recommended | Use separate tokens for personal, shared, and demo devices |
| Multi-key rotation | Use carefully | Cherry Studio supports multiple keys, but debugging is easier with a single key first |
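The whitelist and quota rows above amount to a simple gate on the router side. The sketch below is purely illustrative (function name, signature, and messages are invented; the actual Crazyrouter implementation is not public), but it shows why a 403 points at the whitelist and a quota cap protects against runaway multi-turn traffic:

```python
def check_token(model: str, allowed_models: set[str], used: float, quota: float):
    """Illustrative gate: whitelist check first, then the quota cap."""
    if model not in allowed_models:
        return (403, "model not allowed for this token")
    if used >= quota:
        return (429, "quota cap reached")
    return (200, "ok")


allowed = {"gpt-5.4", "claude-sonnet-4-6"}
print(check_token("gpt-5.4", allowed, used=1.2, quota=10.0))        # (200, 'ok')
print(check_token("gemini-3-pro-preview", allowed, 1.2, 10.0))      # 403: not whitelisted
```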
## Verification Checklist

- Check succeeds in the provider settings
- API URL is set to `https://crazyrouter.com`
- the provider toggle is enabled
- at least one model was added from Manage
- the first chat request succeeds
- streaming output renders normally
- MCP services work if you enabled them
- the request appears in the Crazyrouter logs
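The first two checklist items can be sanity-checked locally before you even click Check. A minimal sketch (the exact token format is an assumption; the guide only specifies the `sk-...` prefix):

```python
import re


def sanity_check(api_url: str, api_key: str) -> list:
    """Return a list of problems with the entered settings (empty = looks fine)."""
    problems = []
    if api_url.rstrip("/") != "https://crazyrouter.com":
        problems.append("API URL should be the root: https://crazyrouter.com")
    if api_key != api_key.strip():
        problems.append("API key has leading/trailing whitespace")
    if not re.fullmatch(r"sk-\S+", api_key.strip()):
        problems.append("API key should be an sk-... token")
    return problems


print(sanity_check("https://crazyrouter.com/v1", " sk-abc123"))  # two problems
print(sanity_check("https://crazyrouter.com", "sk-abc123"))      # []
```

Pasted whitespace around the key is one of the most common reasons Check fails, which is why it gets an explicit test here.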
## MCP Usage Tips

Cherry Studio supports MCP services, but the cleanest rollout is a two-step process:

- verify the Crazyrouter chat model first
- add MCP services only after the model path is stable
- start with a model that handles tool use reliably
- attach only one MCP service for the first test
- use a simple read-only tool call before trying more complex actions
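For context on what that first read-only tool call looks like on the wire: MCP uses JSON-RPC 2.0, and tool invocations go through the `tools/call` method. The sketch below builds such a request (the tool name `list_files` and its argument are hypothetical examples, not a real MCP service):

```python
import json


def mcp_tool_call(tool_name: str, arguments: dict, request_id: int = 1) -> dict:
    """JSON-RPC 2.0 request shape used by MCP tool calls (sketch)."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }


# Hypothetical read-only tool: list files before trying anything that writes.
print(json.dumps(mcp_tool_call("list_files", {"path": "."})))
```

Starting with a read-only call like this keeps a failed first test harmless and makes it obvious whether the problem is the MCP service or the model route.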
## Common Errors and Fixes
| Symptom | Likely cause | Fix |
|---|---|---|
| Check fails | wrong API key or pasted value contains spaces | regenerate the token and paste a clean sk-... value |
| 401 unauthorized | token expired, was deleted, or is invalid | create a new token in Crazyrouter and replace it |
| 403 / model not allowed | the selected model is not in the token whitelist | allow that model in the token settings |
| 404 | API URL was entered as a full endpoint path, or the provider type is wrong | switch back to the OpenAI provider type and set the URL to https://crazyrouter.com |
| Check still fails even though the URL and key look right | Cherry Studio often validates against the last conversational model currently present in the provider list, and that model may be invalid | keep only gpt-5.4, remove suspicious models, and run Check again |
| model list does not load | wrong URL, network problem, or provider not enabled | run Check, then confirm the provider toggle and network connectivity |
| connection works but chat still fails | the added model is incompatible or unavailable | keep only gpt-5.4 for baseline testing |
| streaming behaves oddly | model-level or client-version compatibility issue | switch back to gpt-5.4 and upgrade Cherry Studio |
| MCP tool calls fail | the issue is in the MCP service, not the model route | disable MCP, confirm the model works, then fix MCP separately |
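The status-code rows in the table follow a fixed decision order, which can be sketched as a small triage helper (messages paraphrased from the table; this is an illustration, not a diagnostic tool):

```python
def triage(status: int) -> str:
    """Map the HTTP status from a failed request to the first fix to try."""
    fixes = {
        401: "token expired or invalid: create a new token in Crazyrouter",
        403: "model not in the token whitelist: allow it in the token settings",
        404: "API URL is a full endpoint path or the provider type is wrong: "
             "use provider type OpenAI and set the URL to https://crazyrouter.com",
    }
    return fixes.get(status, "keep only gpt-5.4, re-run Check, and inspect the Crazyrouter logs")


print(triage(404))
```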
## Performance and Cost Tips
- Start with only 1 to 2 models so the first rollout stays easy to debug
- Default to `gpt-5.4`, then switch to `claude-sonnet-4-6` for harder long-form and code tasks
- If you keep long conversations open, give Cherry Studio its own quota cap
- Since Cherry Studio makes model switching easy, reserve expensive models for heavier work only
- If usage looks abnormal, check the Crazyrouter logs first for retries or long multi-turn sessions
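The "reserve expensive models for heavier work" tip boils down to a tiny routing rule you apply by hand when switching models. A sketch (the task categories are invented examples):

```python
def pick_model(task: str) -> str:
    """Reserve the stronger, more expensive model for heavier work only."""
    heavy_tasks = {"code-review", "long-form", "refactor"}
    return "claude-sonnet-4-6" if task in heavy_tasks else "gpt-5.4"


print(pick_model("translation"))  # gpt-5.4
print(pick_model("long-form"))    # claude-sonnet-4-6
```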
## FAQ

**Which URL should I enter in Cherry Studio?**
Start with `https://crazyrouter.com`.

**Why is /v1 not the recommended first setting here?**
Because Cherry Studio’s own docs explain that many OpenAI-compatible upstreams only need the root URL, and the client appends the standard path itself. Crazyrouter should be set up that way first.

**When would I need a full endpoint path ending with #?**
Only when your upstream is not using a standard OpenAI route pattern. For normal Crazyrouter chat setup, do not use that mode first.

**Which model should I test first?**
Start with `gpt-5.4`.

**Can I configure many models right away?**
Yes, but it is better not to do that on day one. Get one baseline model working first, then expand.

**Can Cherry Studio use MCP together with Crazyrouter?**
Yes, but validate the chat model first and enable MCP after that. It makes troubleshooting much easier.

If your main goal is desktop multi-model chat and light MCP workflows, Cherry Studio is a strong option. If your main goal is agentic coding inside an IDE or terminal, Cursor, Claude Code, Codex, or Cline should still be the higher priority tools.