LangChain is one of the most common LLM application frameworks for chat flows, prompt chains, tool use, RAG, agent-style workflows, and application-level orchestration. For Crazyrouter, the most reliable route is LangChain’s official OpenAI-compatible component stack.

Overview

With LangChain’s OpenAI components, you can send requests to Crazyrouter with:
  • recommended protocol: OpenAI-compatible API
  • base URL: https://crazyrouter.com/v1
  • auth variable: OPENAI_API_KEY
  • main Python package: langchain-openai
  • main JavaScript / TypeScript package: @langchain/openai
If you are integrating Crazyrouter into real application code rather than just chatting in a desktop client, LangChain is often the most natural engineering path.

Best For

  • developers integrating Crazyrouter into Python or Node.js applications
  • teams building prompt chains, RAG, tool calling, or workflow orchestration
  • users who want an abstraction layer instead of manually writing raw HTTP requests
  • projects that want to keep future model switching flexible

Protocol Used

Recommended protocol: OpenAI-compatible API. Core Crazyrouter settings:
OPENAI_API_KEY=sk-xxx
BASE_URL=https://crazyrouter.com/v1
In LangChain, that usually maps to:
  • Python: api_key + base_url
  • JavaScript / TypeScript: apiKey + configuration.baseURL

Prerequisites

| Item | Notes |
| --- | --- |
| Crazyrouter account | Create one at crazyrouter.com |
| Crazyrouter token | Create a dedicated token for your LangChain project |
| Python | Prefer Python 3.10+ |
| Node.js | Prefer Node.js 18+ |
| LangChain packages | langchain-openai for Python and @langchain/openai for JS / TS |
| Allowed models | Allow at least one chat model; if you use vector search, allow an embedding model too |
Suggested starter allowlist:
  • gpt-5.4
  • claude-sonnet-4-6
  • gemini-3-pro-preview
  • text-embedding-3-large

Full Python Path

Windows PowerShell

py -m venv .venv
.\.venv\Scripts\Activate.ps1
python -m pip install -U pip
pip install -U langchain-openai langchain-community
python --version
pip --version
If you want the FAISS example too:
pip install faiss-cpu

macOS / Linux

python3 -m venv .venv
source .venv/bin/activate
python -m pip install -U pip
pip install -U langchain-openai langchain-community
python --version
pip --version
If you want the FAISS example too:
pip install faiss-cpu

Full JavaScript / TypeScript Path

Windows PowerShell

npm init -y
npm install @langchain/openai @langchain/core
node -v
npm -v

macOS / Linux

npm init -y
npm install @langchain/openai @langchain/core
node -v
npm -v

Detailed Setup

Step 1: Create a LangChain-specific Crazyrouter token

For the first pass, allow only:
  • gpt-5.4
  • claude-sonnet-4-6
  • text-embedding-3-large
Add more models only after the basic route is already stable.
Step 2: Set the environment variable first

export OPENAI_API_KEY=sk-xxx
echo $OPENAI_API_KEY
If you want a persistent setup:
echo 'export OPENAI_API_KEY=sk-xxx' >> ~/.bashrc
source ~/.bashrc
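Before constructing any LangChain client, it can help to fail fast when the variable is missing. A minimal sketch of such a guard (the helper name and the `sk-` prefix check are our own conventions, not a Crazyrouter requirement):

```python
import os

def check_openai_key() -> str:
    """Fail fast with a readable error if OPENAI_API_KEY is missing or malformed."""
    key = os.environ.get("OPENAI_API_KEY", "")
    if not key.startswith("sk-"):
        raise RuntimeError(
            "OPENAI_API_KEY is not set or does not start with 'sk-'; "
            "export it before constructing any LangChain client"
        )
    return key
```

Calling this once at startup turns a confusing downstream 401 into an immediate, explicit configuration error.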
Step 3: Run the smallest Python chat validation

Create test_langchain_chat.py:
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-5.4",
    api_key="sk-xxx",
    base_url="https://crazyrouter.com/v1",
    temperature=0,
)

response = llm.invoke("Reply only OK")
print(response.content)
Run:
python test_langchain_chat.py
Step 4: Switch to the env-var version

Once the route works, avoid hard-coding the key:
import os
from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-5.4",
    api_key=os.environ["OPENAI_API_KEY"],
    base_url="https://crazyrouter.com/v1",
    temperature=0,
)
Step 5: Run the smallest JavaScript / TypeScript chat validation

Create test-langchain-chat.mjs:
import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-5.4",
  apiKey: process.env.OPENAI_API_KEY,
  configuration: {
    baseURL: "https://crazyrouter.com/v1",
  },
  temperature: 0,
});

const response = await llm.invoke("Reply only OK");
console.log(response.content);
Run:
node test-langchain-chat.mjs
Step 6: Add embeddings, prompts, and RAG gradually

Recommended order:
  1. single-turn chat first
  2. prompt + parser second
  3. embeddings third
  4. RAG or agent flows last

Python Examples

Minimal Chat Example

from langchain_openai import ChatOpenAI

llm = ChatOpenAI(
    model="gpt-5.4",
    api_key="sk-xxx",
    base_url="https://crazyrouter.com/v1",
    temperature=0.7,
)

response = llm.invoke("What is LangChain?")
print(response.content)

Embeddings Example

from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    model="text-embedding-3-large",
    api_key="sk-xxx",
    base_url="https://crazyrouter.com/v1",
)

vectors = embeddings.embed_documents(["Text one", "Text two"])
print(len(vectors), len(vectors[0]))
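Once `embed_documents` returns vectors, the usual next step is comparing them. A minimal stdlib-only sketch of cosine similarity, which you could apply to `vectors[0]` and `vectors[1]` from the example above:

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity between two equal-length embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# e.g. cosine_similarity(vectors[0], vectors[1]) after the embeddings call above
```

In production you would normally let a vector store (such as FAISS below) handle this, but a hand-rolled check is useful for verifying that the embedding route returns sensible values.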

Prompt Chain Example

from langchain_openai import ChatOpenAI
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.output_parsers import StrOutputParser

llm = ChatOpenAI(
    model="gpt-5.4",
    api_key="sk-xxx",
    base_url="https://crazyrouter.com/v1",
)

prompt = ChatPromptTemplate.from_template("Explain {topic} in simple terms")
chain = prompt | llm | StrOutputParser()

result = chain.invoke({"topic": "quantum computing"})
print(result)
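The `|` in `prompt | llm | StrOutputParser()` works because LangChain runnables overload the pipe operator: each component's output becomes the next one's input. A toy stdlib sketch of that idea (this is an illustration only, not LangChain's real `Runnable` class):

```python
class MiniRunnable:
    """Toy stand-in for LangChain's Runnable, only to illustrate what `|` does."""

    def __init__(self, fn):
        self.fn = fn

    def invoke(self, value):
        return self.fn(value)

    def __or__(self, other):
        # Compose left-to-right: the output of self becomes the input of other.
        return MiniRunnable(lambda value: other.invoke(self.invoke(value)))

prompt = MiniRunnable(lambda d: "Explain {topic} in simple terms".format(**d))
fake_llm = MiniRunnable(lambda text: "model output for: " + text)
chain = prompt | fake_llm
# chain.invoke({"topic": "RAG"}) -> "model output for: Explain RAG in simple terms"
```

Understanding this composition model makes the RAG example below much easier to read.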

Minimal RAG Example

from langchain_openai import ChatOpenAI, OpenAIEmbeddings
from langchain_community.vectorstores import FAISS
from langchain_core.prompts import ChatPromptTemplate
from langchain_core.runnables import RunnablePassthrough

llm = ChatOpenAI(
    model="gpt-5.4",
    api_key="sk-xxx",
    base_url="https://crazyrouter.com/v1",
)

embeddings = OpenAIEmbeddings(
    model="text-embedding-3-large",
    api_key="sk-xxx",
    base_url="https://crazyrouter.com/v1",
)

texts = [
    "Crazyrouter supports multiple AI model protocols",
    "Crazyrouter supports OpenAI-compatible routing",
]

vectorstore = FAISS.from_texts(texts, embeddings)
retriever = vectorstore.as_retriever()

prompt = ChatPromptTemplate.from_template(
    "Answer the question based on the following context:\n{context}\n\nQuestion: {question}"
)

chain = {"context": retriever, "question": RunnablePassthrough()} | prompt | llm

result = chain.invoke("Which protocol is the best fit for LangChain on Crazyrouter?")
print(result.content)

JavaScript / TypeScript Example

Minimal Chat Example

import { ChatOpenAI } from "@langchain/openai";

const llm = new ChatOpenAI({
  model: "gpt-5.4",
  apiKey: process.env.OPENAI_API_KEY,
  configuration: {
    baseURL: "https://crazyrouter.com/v1",
  },
  temperature: 0,
});

const response = await llm.invoke("Hello");
console.log(response.content);

Model Recommendations

| Use case | Recommended model | Why |
| --- | --- | --- |
| first-pass validation | gpt-5.4 | verified successfully in production on March 23, 2026, and best for proving that LangChain and Crazyrouter are connected |
| higher-quality long-form and complex chains | claude-sonnet-4-6 | better for complex explanation, summaries, and heavier reasoning |
| Gemini fallback path | gemini-3-pro-preview | useful as a second compatibility-validation path |
| vector retrieval | text-embedding-3-large | strong first embedding baseline |

Token Setup Best Practices

| Setting | Recommendation | Notes |
| --- | --- | --- |
| dedicated token | Required | do not share LangChain project tokens with desktop clients |
| model allowlist | Strongly recommended | start with only the chat model plus embedding model you need |
| quota cap | Strongly recommended | chains, RAG, and agents can multiply spend quickly |
| environment split | Recommended | separate dev, staging, and production |
| leak response | Rotate immediately | never commit the key to Git |
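One way to enforce the environment split in code is to resolve the token from a per-environment variable. A sketch under assumed naming (the `CRAZYROUTER_KEY_*` scheme is only a suggestion, not a Crazyrouter convention):

```python
import os

def resolve_api_key(env: str) -> str:
    """Look up a per-environment token, e.g. CRAZYROUTER_KEY_DEV / CRAZYROUTER_KEY_PROD.

    The variable naming scheme here is a suggestion, not a Crazyrouter convention.
    """
    var = "CRAZYROUTER_KEY_" + env.upper()
    key = os.environ.get(var)
    if not key:
        raise KeyError(var + " is not set for environment '" + env + "'")
    return key
```

The resolved key can then be passed as `api_key` when constructing `ChatOpenAI`, keeping dev, staging, and production tokens (and their quota caps) cleanly separated.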

Verification Checklist

  • Python or Node.js runtime is ready
  • OPENAI_API_KEY is set correctly
  • langchain-openai or @langchain/openai is installed
  • the chat model points at https://crazyrouter.com/v1 via base_url (Python) or configuration.baseURL (JS / TS)
  • the first Reply only OK request succeeds
  • Crazyrouter logs show the matching request
  • if embeddings are used, the embedding model is also allowed
  • if RAG is used, it was first validated on a tiny dataset
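If an item on the checklist fails, it can help to take LangChain out of the picture and hit the route directly. A minimal stdlib sketch that builds the raw OpenAI-compatible request (assuming the standard /chat/completions path; sending it is left commented out so nothing fires without your review):

```python
import json
import os
import urllib.request

def build_chat_request(base_url: str, model: str, content: str) -> urllib.request.Request:
    """Build a raw OpenAI-compatible /chat/completions request, bypassing LangChain."""
    payload = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": content}],
    }).encode("utf-8")
    return urllib.request.Request(
        base_url.rstrip("/") + "/chat/completions",
        data=payload,
        headers={
            "Authorization": "Bearer " + os.environ.get("OPENAI_API_KEY", ""),
            "Content-Type": "application/json",
        },
    )

req = build_chat_request("https://crazyrouter.com/v1", "gpt-5.4", "Reply only OK")
# Uncomment to actually send the request:
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp))
```

If this raw request succeeds while the LangChain version fails, the problem is in the client configuration rather than the token or the route.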

Common Errors And Fixes

| Symptom | Likely cause | Fix |
| --- | --- | --- |
| 401 unauthorized | wrong, expired, or badly pasted OPENAI_API_KEY | generate a new token and set it again |
| 404 | wrong base_url or missing /v1 | use https://crazyrouter.com/v1 |
| model not found | wrong model name or the token does not allow it | switch back to gpt-5.4 and check the allowlist |
| embeddings fail | the embedding model was not allowed | add text-embedding-3-large to the token allowlist |
| RAG fails in many places | too many components were added at once | go back to single-turn chat, then rebuild step by step |
| spend rises too quickly | chains, retrieval, and multi-turn agent loops are stacking cost | reduce scope and separate budgets by environment |

FAQ

Which protocol should LangChain use with Crazyrouter?

Use the OpenAI-compatible route.

What base URL should I set?

Use https://crazyrouter.com/v1.

Which Python package should I use?

Prefer langchain-openai.

Which JavaScript / TypeScript package should I use?

Prefer @langchain/openai.

Why not jump straight into agents or large RAG pipelines?

Because once a LangChain flow becomes complex, debugging becomes much harder. Minimal chat first, then layer features gradually.
If you are integrating Crazyrouter into a real application codebase, LangChain is still one of the most important framework guides to keep detailed and accurate.