n8n is a strong fit when you want to place LLM calls inside automation flows, agents, approval chains, and business-system integrations. When connecting it to Crazyrouter, the recommended rollout is to get one minimal workflow working with n8n’s OpenAI credential path first, then decide whether you need the more flexible HTTP Request fallback.

Overview

Using n8n’s OpenAI credentials or AI nodes, you can route workflow model traffic through Crazyrouter:
  • Recommended protocol: OpenAI-compatible API
  • Recommended route: n8n OpenAI API credential + node-level Base URL
  • Base URL: https://crazyrouter.com/v1
  • Auth method: sk-... token
  • Recommended first validation model: gpt-5.4
If a specific node does not expose the parameters or newer model features you need, fall back to the HTTP Request node and call Crazyrouter directly.

Best For

  • teams putting AI inside automation workflows
  • users connecting forms, databases, approvals, webhooks, and LLMs
  • builders using AI Agent nodes with tool orchestration
  • internal automation stacks that need visual workflow control

Protocol Used

Recommended protocol: OpenAI-compatible API. When connecting Crazyrouter in n8n, use:
https://crazyrouter.com/v1
Do not enter:
  • https://crazyrouter.com
  • https://crazyrouter.com/v1/chat/completions
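The rule above (root domain and full endpoint paths are both wrong, only the /v1 base is right) can be sketched as a small check. This is a hypothetical helper for illustration, not part of n8n or Crazyrouter:

```python
# Hypothetical helper: normalize a user-entered base URL to the
# form n8n expects (https://crazyrouter.com/v1).
def normalize_base_url(url: str) -> str:
    url = url.rstrip("/")
    # Strip a full endpoint path if the user pasted one.
    suffix = "/chat/completions"
    if url.endswith(suffix):
        url = url[: -len(suffix)]
    # Append /v1 if only the root domain was entered.
    if not url.endswith("/v1"):
        url += "/v1"
    return url

print(normalize_base_url("https://crazyrouter.com"))
print(normalize_base_url("https://crazyrouter.com/v1/chat/completions"))
```

Both inputs normalize to the recommended https://crazyrouter.com/v1.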

Prerequisites

Item | Details
Crazyrouter account | Register first at crazyrouter.com
Crazyrouter token | Create a dedicated sk-... token for n8n
n8n | Use a current stable version with AI nodes available
Available models | Allow at least one verified chat model such as gpt-5.4
Recommended starting whitelist:
  • gpt-5.4
  • claude-sonnet-4-6
  • gemini-3-pro-preview
  • text-embedding-3-large if you will use vector or retrieval-related flows

5-Minute Quick Start

1. Create a dedicated n8n token

In the Crazyrouter dashboard, create a token named n8n. For the first rollout, allow only gpt-5.4 and claude-sonnet-4-6.
2. Add an OpenAI credential

In n8n, go to Settings → Credentials → Add Credential, then choose OpenAI API or the equivalent OpenAI credential type in your version.
3. Enter the credential and node values

First fill in the credential:
  • API Key: your sk-...
Then, in the Options of the OpenAI, OpenAI Chat Model, or related AI node, set:
  • Base URL: https://crazyrouter.com/v1
4. Build a minimal workflow

Create a workflow such as Manual Trigger → OpenAI Chat Model → Output, and set the model to gpt-5.4.
5. Run the first validation

Send a simple input such as "Reply only OK" through the OpenAI Chat Model node and execute the workflow manually. Once it succeeds, add AI Agent nodes or more complex logic later.
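Under the hood, the node sends an OpenAI-style chat completions body. A sketch of the payload the first validation run corresponds to (field names follow the OpenAI-compatible API this guide recommends; the helper itself is illustrative):

```python
import json

# Sketch of the chat completions body the OpenAI Chat Model node
# produces for the first validation run.
def first_validation_body(model: str = "gpt-5.4") -> dict:
    return {
        "model": model,
        "messages": [{"role": "user", "content": "Reply only OK"}],
    }

body = first_validation_body()
print(json.dumps(body))
```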

Minimal Workflow Example

OpenAI Chat Model node

{
  "node": "OpenAI Chat Model",
  "parameters": {
    "model": "gpt-5.4",
    "messages": [
      { "role": "user", "content": "{{ $json.input }}" }
    ]
  }
}
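The `{{ $json.input }}` value above is an n8n expression that resolves against the incoming item's JSON. A rough sketch of how that interpolation behaves (a simplified stand-in, not n8n's actual expression engine):

```python
import re

# Simplified stand-in for n8n's {{ $json.<key> }} interpolation:
# replace each placeholder with the matching field from the item.
def render(template: str, item: dict) -> str:
    pattern = re.compile(r"\{\{\s*\$json\.(\w+)\s*\}\}")
    return pattern.sub(lambda m: str(item.get(m.group(1), "")), template)

print(render("{{ $json.input }}", {"input": "Reply only OK"}))
```

So whatever the trigger passes in as `input` becomes the user message content.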

AI Agent workflow

Start with the simplest path:
  1. Manual Trigger
  2. AI Agent
  3. Output
In the AI Agent, choose the Crazyrouter credential and attach only one simple tool for the first test. That keeps model issues separate from tool-chain issues.

HTTP Request Fallback

Use the HTTP Request node if you need:
  • newer parameters not yet exposed by n8n AI nodes
  • tighter control over the request body
  • easier debugging of headers, body, or streaming behavior
Example:
Method: POST
URL: https://crazyrouter.com/v1/chat/completions
Headers:
  Authorization: Bearer sk-xxx
  Content-Type: application/json
Body:
  {
    "model": "gpt-5.4",
    "messages": [
      {"role": "user", "content": "{{ $json.input }}"}
    ]
  }
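Reproducing the same call outside n8n helps isolate whether a failure is in the workflow or in the request itself. A sketch using Python's standard library (sk-xxx is a placeholder token, and the request is only constructed here, not sent):

```python
import json
import urllib.request

# Build the same request the HTTP Request node sends.
# sk-xxx is a placeholder; substitute a real Crazyrouter token.
def build_request(token: str, user_input: str) -> urllib.request.Request:
    body = {
        "model": "gpt-5.4",
        "messages": [{"role": "user", "content": user_input}],
    }
    return urllib.request.Request(
        "https://crazyrouter.com/v1/chat/completions",
        data=json.dumps(body).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_request("sk-xxx", "Reply only OK")
print(req.full_url)
```

Sending it with `urllib.request.urlopen(req)` should return the same JSON the HTTP Request node receives.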
Recommended order: first validate with the native OpenAI credential and AI nodes; only switch to HTTP Request when node coverage is not enough.
Depending on the n8n version, Base URL may appear on the credential page or in the specific OpenAI / Chat Model node Options. Current official docs more clearly document node-level Base URL overrides, so for Crazyrouter the safest first setup is: put the API key in the credential, then set Base URL on the node.
Use case | Recommended model | Why
Default workflow model | gpt-5.4 | Verified successfully in production on March 23, 2026, and suited as the main n8n baseline
Higher-quality complex agent flows | claude-sonnet-4-6 | Better for more complex explanation and long-form tasks
Gemini fallback path | gemini-3-pro-preview | Useful as a second vendor-compatible validation path
Retrieval / vector-related flows | text-embedding-3-large | Good for later vector or search-enhanced pipelines
Recommended order: get gpt-5.4 working in a minimal flow, then expand to agents, tools, and batch jobs.
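The routing table above reduces to a small lookup that falls back to the validated baseline. A sketch (the use-case keys are this guide's categories, not an n8n feature):

```python
# Map the use cases above to the models this guide recommends.
MODEL_ROUTES = {
    "default": "gpt-5.4",
    "complex_agent": "claude-sonnet-4-6",
    "gemini_fallback": "gemini-3-pro-preview",
    "embedding": "text-embedding-3-large",
}

def pick_model(use_case: str) -> str:
    # Unknown use cases fall back to the validated baseline.
    return MODEL_ROUTES.get(use_case, MODEL_ROUTES["default"])

print(pick_model("complex_agent"))
```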

Token Setup Best Practices

Setting | Recommendation | Notes
Dedicated token | Required | Do not share it with chat frontends, CLI tools, or SDK demos
Model whitelist | Strongly recommended | Allow only the models the workflows really need
IP restriction | Recommended for fixed server egress | Use carefully if local dev and cloud runs are mixed
Quota cap | Strongly recommended | Retries, loops, and bulk tasks can burn through budget quickly
Environment separation | Required | Use different tokens for dev, staging, and production
Node tiering | Recommended | Use cheaper models for high-volume jobs and stronger models only where needed
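Environment separation is easiest to enforce by reading a per-environment variable rather than hard-coding a key into a workflow. A sketch of one convention (the variable names are illustrative, not required by n8n or Crazyrouter):

```python
import os

# Illustrative convention: one token variable per environment,
# e.g. CRAZYROUTER_TOKEN_DEV / _STAGING / _PROD.
def token_for(env: str) -> str:
    var = f"CRAZYROUTER_TOKEN_{env.upper()}"
    token = os.environ.get(var)
    if not token:
        raise RuntimeError(f"{var} is not set")
    return token

os.environ["CRAZYROUTER_TOKEN_DEV"] = "sk-dev-example"  # demo only
print(token_for("dev"))
```

Failing loudly when a token is missing prevents a dev workflow from silently falling back to a production key.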

Verification Checklist

  • the OpenAI API credential is saved successfully
  • node-level Base URL is set to https://crazyrouter.com/v1
  • the OpenAI Chat Model node executes successfully
  • the first minimal workflow returns a valid response
  • the AI Agent node works too if you need agent flows
  • the request appears in the Crazyrouter logs
  • token quota and model whitelist match your plan
  • dev, staging, and production workflows use separate tokens

Common Errors and Fixes

Symptom | Likely cause | Fix
credential test fails | wrong API key, or Base URL was entered in the wrong place for your n8n version | recheck the sk-... value and confirm Base URL is set where your version actually supports it
401 unauthorized | token expired, was deleted, or pasted with extra spaces | create a new token and replace it
403 / model not allowed | the workflow uses a model that is not whitelisted | allow that model in Crazyrouter
404 | Base URL was entered as the root domain or a full endpoint path | change it to https://crazyrouter.com/v1
workflow keeps retrying and cost grows fast | node retries, loops, or batch logic is misconfigured | limit retries, split flows, and add quota caps
AI Agent starts but tool use is unstable | the tool chain is too complex for first-pass validation | keep only one simple tool until the baseline works
native AI node is missing parameters you need | n8n's node wrapper does not expose that feature yet | switch that step to an HTTP Request node
batch jobs are slow or expensive | model choice or flow design is a poor fit for the task | move large-volume runs back to the smallest validated baseline first
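For automated triage in an error branch, the HTTP-status rows above reduce to a small lookup. A sketch (the messages paraphrase the fixes above; the helper is illustrative):

```python
# Map common HTTP statuses from Crazyrouter to the fixes above.
STATUS_FIXES = {
    401: "token expired, deleted, or pasted with extra spaces; create a new token",
    403: "model not on the token's whitelist; allow it in Crazyrouter",
    404: "Base URL is wrong; set it to https://crazyrouter.com/v1",
}

def suggest_fix(status: int) -> str:
    return STATUS_FIXES.get(
        status, "check n8n execution history and Crazyrouter logs"
    )

print(suggest_fix(404))
```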

Performance and Cost Tips

  • Keep gpt-5.4 as the default automation baseline during initial rollout
  • Add stricter caps and alerts for looped, scheduled, or batch workflows
  • Keep production and debugging flows on separate tokens
  • Validate AI Agent flows with fewer tools and shorter chains before scaling up
  • If cost spikes, check both Crazyrouter logs and n8n execution history to see whether retries or loops are the cause
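Runaway retries are easiest to stop with an explicit attempt cap around the model call. A sketch of a bounded-retry wrapper (the limits and the flaky call are illustrative):

```python
# Illustrative bounded retry: stop after max_attempts instead of
# letting a misconfigured loop burn through quota.
def call_with_cap(call, max_attempts: int = 3):
    last_error = None
    for _ in range(max_attempts):
        try:
            return call()
        except Exception as exc:  # in practice, catch specific errors
            last_error = exc
    raise RuntimeError(f"gave up after {max_attempts} attempts") from last_error

attempts = []
def flaky():
    # Fails once, then succeeds, simulating a transient error.
    attempts.append(1)
    if len(attempts) < 2:
        raise ValueError("transient")
    return "OK"

print(call_with_cap(flaky))
```

In n8n terms, this is the same discipline as limiting node retry counts and adding quota caps on the token.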

FAQ

Which Base URL should I use in n8n?

Use https://crazyrouter.com/v1.

Should I put Base URL in the credential or in the node?

Check the UI in your current n8n version. Current official docs more clearly mention node-level Base URL options; if your credential page also supports it, that can work too, but the safest first setup is credential for the API key and node for Base URL.

Should I start with native AI nodes or HTTP Request?

Start with native OpenAI credentials and AI nodes. Use HTTP Request only when you need deeper control or unsupported parameters.

Which model should I test first?

Start with gpt-5.4.

Why did workflow costs suddenly increase?

Usually the cause is not a single request. It is more often loops, batches, retries, or many executions sharing one token.

Does n8n really need multiple tokens?

Yes. At minimum, separate development from production. High-volume jobs should often get their own token too.
Once n8n is connected to Crazyrouter, the real challenge is not just making one request work. It is controlling retries, loops, batch execution, and quota boundaries across automation flows.