POST /v1/embeddings
Create Embeddings
curl --request POST \
  --url https://crazyrouter.com/v1/embeddings \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "model": "<string>",
  "input": [
    "<string>"
  ],
  "encoding_format": "<string>",
  "dimensions": 123
}
'

Overview

Convert input text into high-dimensional vectors for use in semantic search, clustering, recommendations, and more. Fully compatible with the OpenAI Embeddings API format.

Supported Models

| Model | Dimensions | Description |
| --- | --- | --- |
| text-embedding-3-large | 3072 | High accuracy, recommended for production |
| text-embedding-3-small | 1536 | Cost-effective |
| text-embedding-ada-002 | 1536 | Classic model |

Request Parameters

model (string, required)
Embedding model name, e.g. text-embedding-3-large

input (string | string[], required)
Text to embed. Supports a single string or an array of strings.

encoding_format (string, default: "float")
Return format: float or base64.

dimensions (integer, optional)
Output vector dimensions (only supported by text-embedding-3-* models).
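When encoding_format is base64, OpenAI-compatible endpoints conventionally return each vector as base64-encoded little-endian float32 bytes rather than a JSON array. A minimal decoding sketch, assuming that convention (the helper name is ours):

```python
import base64
import struct

def decode_embedding(b64: str) -> list[float]:
    """Decode a base64-encoded embedding into a list of floats.

    Assumes the payload packs the vector as little-endian float32,
    the usual encoding for base64 embeddings in OpenAI-compatible APIs.
    """
    raw = base64.b64decode(b64)
    count = len(raw) // 4  # 4 bytes per float32 value
    return list(struct.unpack(f"<{count}f", raw))
```

Decoding client-side this way avoids the JSON overhead of large float arrays on big batches.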

Response Format

{
  "object": "list",
  "data": [
    {
      "object": "embedding",
      "index": 0,
      "embedding": [0.0023064255, -0.009327292, ...]
    }
  ],
  "model": "text-embedding-3-large",
  "usage": {
    "prompt_tokens": 8,
    "total_tokens": 8
  }
}
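The vectors in `data[].embedding` are typically compared with cosine similarity for semantic search or clustering. A minimal pure-Python sketch (the function name is illustrative, not part of the API):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two embedding vectors, in [-1, 1].

    Higher values indicate more semantically similar texts.
    """
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)
```

For large corpora you would normally hand this off to a vector database or NumPy rather than pure Python, but the math is the same.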

Code Examples

from openai import OpenAI

client = OpenAI(
    api_key="sk-xxx",
    base_url="https://crazyrouter.com/v1"
)

response = client.embeddings.create(
    model="text-embedding-3-large",
    input="Crazyrouter is an AI model gateway"
)

embedding = response.data[0].embedding
print(f"Vector dimensions: {len(embedding)}")
print(f"First 5 values: {embedding[:5]}")
Batch requests support up to 2048 texts per call. Each text should not exceed 8191 tokens.
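To stay within the 2048-texts-per-call limit on large corpora, inputs can be chunked client-side and sent as separate requests. A sketch under that assumption (the helper name is ours; the endpoint accepts `input` arrays as documented above):

```python
def chunk_inputs(texts: list[str], batch_size: int = 2048) -> list[list[str]]:
    """Split a list of texts into batches of at most batch_size items."""
    return [texts[i:i + batch_size] for i in range(0, len(texts), batch_size)]

# Each batch can then be embedded with one call, e.g.:
# for batch in chunk_inputs(corpus):
#     response = client.embeddings.create(
#         model="text-embedding-3-large",
#         input=batch,
#     )
```

Note that the per-text 8191-token limit still applies to every element inside a batch; overly long texts must be truncated or split before chunking.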