Cherry Studio is a desktop AI client that works well for multi-model chat, research notes, and MCP-assisted workflows. When connecting it to Crazyrouter, the most reliable path is Cherry Studio’s custom OpenAI provider mode, because it usually gives you the cleanest model management, streaming, and general chat compatibility.

Overview

Using Cherry Studio’s custom provider support, you can add Crazyrouter as an OpenAI-compatible upstream:
  • Recommended protocol: OpenAI-compatible API
  • Recommended provider type: OpenAI
  • API URL: https://crazyrouter.com
  • Auth method: sk-... token
  • Recommended first validation model: gpt-5.4
Cherry Studio’s official docs explain that if the upstream endpoint is something like https://xxx.com/v1/chat/completions, you usually enter only the root URL in API URL. For Crazyrouter, start with https://crazyrouter.com, not https://crazyrouter.com/v1.
Cherry Studio appends the standard remaining path automatically. You only need to enter a full custom path ending with # when the upstream is not using the normal .../v1/chat/completions route shape. Normal Crazyrouter chat setup does not need that.
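The URL rule above can be sketched as a small helper. This is an illustrative sketch of the documented behavior, not Cherry Studio's actual source:

```python
def resolve_endpoint(api_url: str) -> str:
    """Mirror of the documented URL rule (illustrative only):
    a trailing '#' means "use this exact path as-is"; otherwise
    the standard chat path is appended to the root URL."""
    if api_url.endswith("#"):
        return api_url[:-1]  # custom full path, used verbatim
    return api_url.rstrip("/") + "/v1/chat/completions"

print(resolve_endpoint("https://crazyrouter.com"))
# → https://crazyrouter.com/v1/chat/completions
```

This is why entering https://crazyrouter.com/v1 would produce a doubled path, while the bare root works.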

Best For

  • desktop users who want one place for multiple models
  • teams or individuals doing daily chat, writing, translation, and research
  • users who want to combine Crazyrouter with MCP tools
  • users who want fast switching between cheap and strong models

Protocol Used

Recommended protocol: OpenAI-compatible API

The recommended setup is to add a custom OpenAI provider and enter the Crazyrouter root domain in API URL:
https://crazyrouter.com
Do not start with:
  • https://crazyrouter.com/v1/chat/completions
  • https://crazyrouter.com/v1/models
Cherry Studio only needs a full custom path with a trailing # in special non-standard routing scenarios. Normal Crazyrouter chat setup does not need that.

Prerequisites

  • Crazyrouter account: register first at crazyrouter.com
  • Crazyrouter token: create a dedicated sk-... token for Cherry Studio
  • Cherry Studio: use a current stable desktop build
  • Available models: allow at least one verified chat model, such as gpt-5.4
Recommended starting whitelist:
  • gpt-5.4
  • claude-sonnet-4-6
  • gemini-3-pro-preview

5-Minute Quick Start

1. Create a dedicated Cherry Studio token

In the Crazyrouter dashboard, create a new token named cherry-studio. For the first pass, allow only 2 to 4 models you know you actually need.
2. Add a custom provider

Open Cherry Studio, go to Settings → Model Services, then click Add or + Add. Choose the provider type OpenAI and name it Crazyrouter.
3. Enter the connection settings

In the provider settings, enter:
  • API Key: your sk-...
  • API URL: https://crazyrouter.com
Then click Check to validate the token and URL.
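If you want to sanity-check the same values outside the app, here is a rough sketch of what a Check-style validation amounts to. The endpoint path and header shape are assumed to follow the standard OpenAI convention; nothing here is Cherry Studio's actual implementation:

```python
def build_check_request(api_url: str, api_key: str) -> dict:
    """Assemble the request a 'Check'-style validation would send:
    GET {root}/v1/models with a Bearer token (assumed shapes only)."""
    if not api_key.startswith("sk-") or any(c.isspace() for c in api_key):
        raise ValueError("API key should be a clean sk-... value with no spaces")
    return {
        "method": "GET",
        "url": api_url.rstrip("/") + "/v1/models",
        "headers": {"Authorization": f"Bearer {api_key}"},
    }

req = build_check_request("https://crazyrouter.com", "sk-example123")
print(req["url"])
# → https://crazyrouter.com/v1/models
```

Note the guard against whitespace in the key: a stray space from copy-paste is one of the most common causes of a failed Check.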
4. Fetch and add models

Open Manage, fetch the model list, and add the models you want to use. Seeing a model in the popup does not always mean it is already enabled in the provider list, so you may still need to click + to add it. For first-run validation, start with only one baseline model: gpt-5.4.
5. Enable the provider and send the first prompt

Turn on the provider toggle, go back to the chat screen, pick gpt-5.4, and send a simple prompt such as Reply only OK. If you get a normal reply, the setup is working.
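The smoke test above boils down to a single standard chat-completions request. A minimal sketch of the body such a request carries (standard OpenAI shape assumed; the streaming flag reflects how a chat client typically behaves):

```python
import json

def first_prompt_body(model: str = "gpt-5.4") -> str:
    """JSON body for the 'Reply only OK' smoke test, in the
    standard OpenAI chat-completions shape (assumed)."""
    return json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "Reply only OK"}],
        "stream": True,
    })

body = json.loads(first_prompt_body())
print(body["model"])  # → gpt-5.4
```

If this shape of request succeeds against your token, the provider, URL, and model whitelist are all configured correctly.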
Recommended models:
  • Default chat model: gpt-5.4 (verified successfully in production on March 23, 2026, and suited as the OpenAI-compatible baseline)
  • Higher-quality code and long-form help: claude-sonnet-4-6 (strong reasoning, writing, and code explanation)
  • Gemini fallback path: gemini-3-pro-preview (useful as a second vendor-compatible validation path)
Recommended order: get gpt-5.4 working first, then add claude-sonnet-4-6 and gemini-3-pro-preview.

Token Setup Best Practices

  • Dedicated token: required. Do not share the same token with Cursor, Claude Code, or Codex.
  • Model whitelist: strongly recommended. Allow only the models Cherry Studio will actually use.
  • IP restriction: depends on your environment. Fine for fixed office networks; risky on laptops that change networks often.
  • Quota cap: strongly recommended. Desktop chat apps can generate lots of multi-turn traffic.
  • Dev / shared machine separation: recommended. Use separate tokens for personal, shared, and demo devices.
  • Multi-key rotation: use carefully. Cherry Studio supports multiple keys, but debugging is easier with a single key first.

Verification Checklist

  • Check succeeds in the provider settings
  • API URL is set to https://crazyrouter.com
  • the provider toggle is enabled
  • at least one model was added from Manage
  • the first chat request succeeds
  • streaming output renders normally
  • MCP services work if you enabled them
  • the request appears in the Crazyrouter logs

MCP Usage Tips

Cherry Studio supports MCP services, but the cleanest rollout is a two-step process:
  1. verify the Crazyrouter chat model first
  2. add MCP services only after the model path is stable
That keeps model-routing issues separate from MCP tool issues.

If you enable MCP:
  • start with a model that handles tool use reliably
  • attach only one MCP service for the first test
  • use a simple read-only tool call before trying more complex actions
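As an example of the "one service, read-only first" approach, a single MCP server entry might look like the following. This is a hypothetical config sketch: the exact fields depend on how Cherry Studio registers MCP servers, /path/to/notes is a placeholder, and the filesystem server is just one commonly used stdio server:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/notes"]
    }
  }
}
```

A first test could be a simple directory listing before any write-capable tools are enabled.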

Common Errors and Fixes

  • Check fails: the API key is wrong or the pasted value contains spaces. Regenerate the token and paste a clean sk-... value.
  • 401 unauthorized: the token expired, was deleted, or is invalid. Create a new token in Crazyrouter and replace it.
  • 403 / model not allowed: the selected model is not in the token whitelist. Allow that model in the token settings.
  • 404: the API URL was entered as a full endpoint path, or the provider type is wrong. Switch back to the OpenAI provider type and set the URL to https://crazyrouter.com.
  • Check still fails even though the URL and key look right: Cherry Studio often validates against the last conversational model currently present in the provider list, and that model may be invalid. Keep only gpt-5.4, remove suspicious models, and run Check again.
  • Model list does not load: wrong URL, a network problem, or the provider is not enabled. Run Check, then confirm the provider toggle and network connectivity.
  • Connection works but chat still fails: the added model is incompatible or unavailable. Keep only gpt-5.4 for baseline testing.
  • Streaming behaves oddly: a model-level or client-version compatibility issue. Switch back to gpt-5.4 and upgrade Cherry Studio.
  • MCP tool calls fail: the issue is in the MCP service, not the model route. Disable MCP, confirm the model works, then fix MCP separately.
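The HTTP-status rows above can be folded into a tiny triage helper, handy when scanning Crazyrouter logs. The messages paraphrase the fixes above; the mapping itself is just a sketch:

```python
# Paraphrased fixes for the common HTTP statuses described above.
ERROR_HINTS = {
    401: "token expired, deleted, or invalid: create a new Crazyrouter token",
    403: "model not in the token whitelist: allow it in the token settings",
    404: ("API URL set to a full endpoint path, or wrong provider type: "
          "use the OpenAI provider type with https://crazyrouter.com"),
}

def triage(status: int) -> str:
    # Anything else: fall back to the baseline advice from the table.
    return ERROR_HINTS.get(status, "keep only gpt-5.4, remove other models, re-run Check")

print(triage(403))
```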

Performance and Cost Tips

  • Start with only 1 to 2 models so the first rollout stays easy to debug
  • Default to gpt-5.4, then switch to claude-sonnet-4-6 for harder long-form and code tasks
  • If you keep long conversations open, give Cherry Studio its own quota cap
  • Since Cherry Studio makes model switching easy, reserve expensive models for heavier work only
  • If usage looks abnormal, check the Crazyrouter logs first for retries or long multi-turn sessions

FAQ

Which URL should I enter in Cherry Studio?

Start with https://crazyrouter.com. Cherry Studio's own docs explain that many OpenAI-compatible upstreams only need the root URL because the client appends the standard path itself, so Crazyrouter should be set up that way first.

When would I need a full endpoint path ending with #?

Only when your upstream does not use a standard OpenAI route pattern. For normal Crazyrouter chat setup, you do not need that mode.

Which model should I test first?

Start with gpt-5.4.

Can I configure many models right away?

Yes, but it is better not to do that on day one. Get one baseline model working first, then expand.

Can Cherry Studio use MCP together with Crazyrouter?

Yes, but validate the chat model first and enable MCP after that. It makes troubleshooting much easier.
If your main goal is desktop multi-model chat and light MCP workflows, Cherry Studio is a strong option. If your main goal is agentic coding inside an IDE or terminal, Cursor, Claude Code, Codex, or Cline should still be the higher-priority tools.