# OpenClaw Custom Model Provider Guide
This tutorial walks you through configuring custom model providers in OpenClaw to connect various third-party AI model services.
## Prerequisites
Before you begin, make sure you have:
- OpenClaw installed and running
- An API key from at least one model provider (e.g., PinCC, Alibaba Cloud Bailian, DeepSeek, etc.)
## Core Concepts
OpenClaw supports connecting any API endpoint compatible with the OpenAI, Anthropic, or Google protocols via `models.providers`, including:
- Official APIs: Anthropic, OpenAI, Google Gemini
- Third-party relays: PinCC, SiliconFlow, etc.
- Chinese LLMs: Alibaba Cloud Bailian (Qwen), DeepSeek, Moonshot (Kimi), MiniMax
- Local inference: Ollama, vLLM, LM Studio
## Supported API Protocols
| Protocol | Use Case |
|---|---|
| `anthropic-messages` | Anthropic Claude series and compatible APIs |
| `openai-completions` | OpenAI and most OpenAI-compatible providers |
| `openai-responses` | OpenAI Responses API |
| `google-generative-ai` | Google Gemini series |
## Configuration Modes
`models.mode` supports two modes:

- `merge` (default): custom providers coexist with built-in providers
- `replace`: completely disable built-in providers and use only custom configurations

In most cases, keep the default `merge` mode.
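If you do want to hide the built-in providers, the same configuration structure applies with the mode switched. A minimal sketch (the provider name, URL, and model ID are placeholders, not real values):

```json
{
  "models": {
    "mode": "replace",
    "providers": {
      "my-provider": {
        "baseUrl": "https://example.com/v1",
        "apiKey": "your-api-key",
        "api": "openai-completions",
        "models": [
          { "id": "my-model", "name": "My Model" }
        ]
      }
    }
  }
}
```

With `"mode": "replace"`, only the providers listed here are available for selection.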
## Configuration File Location
The OpenClaw configuration file is located at:
```
~/.openclaw/openclaw.json
```

Directory structure:

```
~/.openclaw/
├── openclaw.json   # Core configuration
├── env             # Environment variables / API Keys
├── backups/        # Configuration backups
└── logs/           # Log files
```
## Basic Configuration Structure

The basic structure for model provider configuration in `openclaw.json`:

```json
{
"models": {
"mode": "merge",
"providers": {
"provider-name": {
"baseUrl": "API base URL",
"apiKey": "your-api-key",
"api": "protocol-type",
"models": [
{
"id": "model-id",
"name": "Model Display Name",
"contextWindow": 200000,
"maxTokens": 8192
}
]
}
}
}
}
```

## Model Field Reference
| Field | Type | Description |
|---|---|---|
| `id` | string | Model ID (required); must match the provider's model identifier exactly |
| `name` | string | Model display name (required) |
| `contextWindow` | number | Context window size (in tokens) |
| `maxTokens` | number | Maximum output tokens |
| `reasoning` | boolean | Whether this is a reasoning model (some models require `false`) |
| `input` | array | Input types, e.g., `["text", "image"]` |
| `cost` | object | Pricing info with `input`/`output`/`cacheRead`/`cacheWrite` |
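Putting the fields together, a fully specified model entry might look like the following sketch (the ID, name, and numbers are illustrative placeholders, not values for a real model):

```json
{
  "id": "example-model-v1",
  "name": "Example Model v1",
  "contextWindow": 131072,
  "maxTokens": 8192,
  "reasoning": false,
  "input": ["text", "image"],
  "cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
}
```

Only `id` and `name` are required; the remaining fields are optional metadata.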
## Configuration Examples
### Example 1: PinCC (Multi-model Aggregation Platform)
PinCC supports Claude, OpenAI, and Gemini models with a single API key.
#### Step 1: Get a PinCC API Key
- Navigate to the API Keys page in your dashboard
- Click Create Key and copy the generated key
#### Step 2: Configure Claude Models

```json
{
"models": {
"mode": "merge",
"providers": {
"pincc-claude": {
"baseUrl": "your-pincc-relay-url",
"apiKey": "your-pincc-token",
"api": "anthropic-messages",
"authHeader": true,
"models": [
{
"id": "claude-opus-4-6",
"name": "Claude Opus 4.6",
"api": "anthropic-messages",
"reasoning": false,
"input": ["text"],
"contextWindow": 200000,
"maxTokens": 8192,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
},
{
"id": "claude-sonnet-4-6",
"name": "Claude Sonnet 4.6",
"api": "anthropic-messages",
"reasoning": false,
"input": ["text"],
"contextWindow": 200000,
"maxTokens": 8192,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
},
{
"id": "claude-haiku-4-5",
"name": "Claude Haiku 4.5",
"api": "anthropic-messages",
"reasoning": false,
"input": ["text"],
"contextWindow": 200000,
"maxTokens": 8192,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
}
]
}
}
}
}
```

#### Step 3: Configure OpenAI Models (Optional)

Add the following under `providers`:

```json
{
"pincc-openai": {
"baseUrl": "your-pincc-relay-url/v1",
"apiKey": "your-pincc-token",
"auth": "token",
"api": "openai-responses",
"authHeader": true,
"models": [
{
"id": "gpt-5.4",
"name": "gpt-5.4",
"api": "openai-responses",
"reasoning": false,
"input": ["text"],
"contextWindow": 128000,
"maxTokens": 16384,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
},
{
"id": "gpt-5.3-codex",
"name": "gpt-5.3-codex",
"api": "openai-responses",
"reasoning": false,
"input": ["text"],
"contextWindow": 128000,
"maxTokens": 16384,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
},
{
"id": "gpt-5.2",
"name": "gpt-5.2",
"api": "openai-responses",
"reasoning": false,
"input": ["text"],
"contextWindow": 128000,
"maxTokens": 16384,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
}
]
}
}
```

#### Step 4: Configure Google Gemini Models (Optional)

```json
{
"pincc-gemini": {
"baseUrl": "your-pincc-relay-url/v1beta",
"apiKey": "your-pincc-token",
"auth": "token",
"api": "google-generative-ai",
"authHeader": true,
"models": [
{
"id": "gemini-3.1-pro-preview",
"name": "Gemini 3.1 Pro Preview",
"api": "google-generative-ai",
"reasoning": false,
"input": ["text"],
"contextWindow": 1000000,
"maxTokens": 65536,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
},
{
"id": "gemini-3-pro-preview",
"name": "Gemini 3 Pro Preview",
"api": "google-generative-ai",
"reasoning": false,
"input": ["text"],
"contextWindow": 1000000,
"maxTokens": 65536,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
},
{
"id": "gemini-3-flash-preview",
"name": "Gemini 3 Flash Preview",
"api": "google-generative-ai",
"reasoning": false,
"input": ["text"],
"contextWindow": 1000000,
"maxTokens": 65536,
"cost": { "input": 0, "output": 0, "cacheRead": 0, "cacheWrite": 0 }
}
]
}
}
```
---
### Example 2: Alibaba Cloud Bailian (Qwen)
```json
{
"models": {
"mode": "merge",
"providers": {
"bailian": {
"baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
"apiKey": "your-bailian-api-key",
"api": "openai-completions",
"models": [
{
"id": "qwen3.5-plus",
"name": "Qwen 3.5 Plus",
"contextWindow": 131072,
"maxTokens": 8192,
"reasoning": false
}
]
}
}
}
}
```

Note: Bailian models require `reasoning` set to `false`; otherwise responses may be empty.
### Example 3: DeepSeek

```json
{
"models": {
"mode": "merge",
"providers": {
"deepseek": {
"baseUrl": "https://api.deepseek.com/v1",
"apiKey": "your-deepseek-api-key",
"api": "openai-completions",
"models": [
{
"id": "deepseek-chat",
"name": "DeepSeek Chat",
"contextWindow": 65536,
"maxTokens": 8192
}
]
}
}
}
}
```

### Example 4: Local Models (Ollama)

```json
{
"models": {
"mode": "merge",
"providers": {
"ollama": {
"baseUrl": "http://localhost:11434/v1",
"apiKey": "ollama",
"api": "openai-completions",
"models": [
{
"id": "qwen3:32b",
"name": "Qwen3 32B (Local)",
"contextWindow": 32768,
"maxTokens": 4096
}
]
}
}
}
}
```

## Setting the Default Model
After configuring providers, set the default model for new sessions in `openclaw.json`:

```json
{
"agents": {
"defaults": {
"maxConcurrent": 4,
"model": {
"primary": "pincc-claude/claude-sonnet-4-6"
}
}
}
}
```

The model reference format is `provider-id/model-id`, for example:

- `pincc-claude/claude-opus-4-6`
- `bailian/qwen3.5-plus`
- `deepseek/deepseek-chat`
- `ollama/qwen3:32b`
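If you script against these references, the provider and model parts can be separated at the first slash. A small shell sketch using one of the example references above (model IDs may contain `:`, as in the Ollama example, but not `/`):

```shell
# Split a model reference of the form provider-id/model-id at the first slash.
ref="pincc-claude/claude-sonnet-4-6"
provider="${ref%%/*}"   # text before the first '/'
model="${ref#*/}"       # text after the first '/'
echo "provider=$provider model=$model"
```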
## Verification
After configuration, follow these steps to verify:
### Step 1: Restart the OpenClaw Gateway

```shell
openclaw gateway restart
```

### Step 2: List the Configured Models

```shell
openclaw models list --provider pincc-claude
```

### Step 3: Send a Test Message
Launch OpenClaw, select your configured model, and send a test message.
## Debugging
### Diagnostic Command

```shell
openclaw doctor
```

This checks configuration syntax, provider connectivity, model availability, and auth status.
### Common Issues
| Issue | Solution |
|---|---|
| Invalid API key | Verify the key is copied correctly (no extra spaces) and that your account has a balance |
| Model unavailable | Check the JSON format and model ID accuracy |
| Gateway won't start | Run `openclaw doctor` to check config syntax |
| State issues after switching providers | Delete `~/.openclaw/agents/main/agent/models.json` and restart |
| Empty responses | Some models (e.g., Bailian) need `"reasoning": false` |
### Debugging Tips
Use the minimal configuration approach:

- Start with a minimal config (`baseUrl`, `apiKey`, `api`, and one model with just `id` and `name`)
- Verify with `openclaw models list --provider your-provider`
- Add more models and optional fields after confirming basic connectivity
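Such a minimal starting config might look like this sketch (the provider name, URL, and model ID are placeholders to replace with your provider's actual values):

```json
{
  "models": {
    "mode": "merge",
    "providers": {
      "your-provider": {
        "baseUrl": "https://example.com/v1",
        "apiKey": "your-api-key",
        "api": "openai-completions",
        "models": [
          { "id": "your-model-id", "name": "Your Model" }
        ]
      }
    }
  }
}
```

Once this connects, grow the config incrementally rather than pasting in a large block all at once.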
## Important Notes
- Replace all `apiKey` values with your actual keys
- Do not modify `~/.openclaw/agents/main/agent/models.json` directly; it gets overwritten by `openclaw.json`
- Make incremental changes to `openclaw.json` rather than full replacements
- OpenClaw sends significant context with each request; monitor your token usage and billing
- The config parser is strict: trailing commas, missing quotes, or nesting errors will cause startup failures
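The strict-parser point is easy to check locally before restarting the gateway. A quick sketch using Python's bundled JSON validator (any strict JSON parser works the same way):

```shell
# Valid JSON parses cleanly; a trailing comma makes a strict parser fail.
echo '{"models": {"mode": "merge"}}'  | python3 -m json.tool > /dev/null && echo "valid"
echo '{"models": {"mode": "merge",}}' | python3 -m json.tool > /dev/null 2>&1 || echo "trailing comma rejected"
```

Running the same check against `~/.openclaw/openclaw.json` after each edit catches syntax errors before they can stop the gateway from starting.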