OpenRouter + OpenClaw: Access 300+ Models in 2 Minutes
Most people set up OpenClaw with one provider. Anthropic. Maybe OpenAI. Then they're stuck.
Want to try Llama 3 for a coding task? New API key. New config. New billing dashboard. Want to benchmark Claude vs GPT-4 vs Gemini on the same prompt? Three accounts. Three keys. Three invoices.
OpenRouter fixes this. One API key. 300+ models. 60+ providers. Same OpenClaw setup.
Here's how to wire it up.
Why OpenRouter for OpenClaw
Three reasons this matters:
- One key, every model. Claude Sonnet, GPT-4.1, Gemini Flash, Llama 3.1, Mistral, DeepSeek — all through a single sk-or- key. No juggling provider dashboards.
- Free tiers exist. Some models on OpenRouter have free usage tiers. If you're building a PoC or just learning OpenClaw, you can experiment without burning credits.
- Fallback routing. If one provider goes down, OpenRouter can route to another. Your agents stay up.
If you've been locked into a single model because switching providers felt like a chore — this is the fix.
Prerequisites
You need three things:
- OpenRouter account — Sign up at openrouter.ai
- API key — Generate one at openrouter.ai/settings/keys
- OpenClaw running — Either self-hosted or via Clawctl
Your API key will look like this: sk-or-v1-abc123...
Keep it safe. Treat it like a password.
Step 1: Get Your API Key
Go to openrouter.ai/settings/keys.
Click Create Key. Give it a name like "openclaw-production" so you remember what it's for. Copy the key — you won't see it again.
Add credits. Even $5 is enough to get started. Some models (like certain Llama and Mistral variants) have free tiers, but most production models charge per token.
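Before you touch any OpenClaw config, it's worth confirming the key is live and the credits landed. A minimal sketch using OpenRouter's key-info endpoint; the /api/v1/auth/key path is taken from OpenRouter's API docs at the time of writing, so double-check it there if the call errors:

# Ask OpenRouter about this key: label, usage so far, and spend limit.
# Endpoint path assumed from current OpenRouter docs; verify at openrouter.ai/docs if it 404s.
curl -s https://openrouter.ai/api/v1/auth/key \
  -H "Authorization: Bearer sk-or-v1-your-key-here"

A JSON payload describing the key means it works; a 401 means it was mistyped or revoked.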
Step 2: Configure OpenClaw
Add OpenRouter as a provider in your OpenClaw config:
Option A: Environment variable (recommended)
export OPENROUTER_API_KEY="sk-or-v1-your-key-here"
Option B: Configuration file
In your openclaw.json:
{
  "credentials": {
    "openrouter": {
      "apiKey": "sk-or-v1-your-key-here"
    }
  }
}
Don't commit your key to git. Use environment variables or a secrets manager in production.
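One low-friction way to follow that advice locally: keep the key in a .env file that git ignores, and load it into your shell before starting OpenClaw. A minimal sketch, assuming you're using Option A (the OPENROUTER_API_KEY environment variable):

# store the key outside version control
echo 'OPENROUTER_API_KEY=sk-or-v1-your-key-here' >> .env
echo '.env' >> .gitignore

# export everything in .env into the current shell, then start OpenClaw
set -a; source .env; set +a
openclaw start

In production, swap the .env file for your platform's secrets manager and inject the variable at deploy time.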
Step 3: Pick Your Model
This is where it gets fun. OpenRouter gives you access to models from every major provider — through one API.
Configure your agent's model:
{
  "agents": {
    "list": [
      {
        "id": "main",
        "model": "openrouter/anthropic/claude-sonnet-4-5",
        "workspace": "/path/to/workspace"
      }
    ]
  }
}
Top model picks for OpenClaw agents:
| Model | Best For | Speed | Cost |
|---|---|---|---|
| anthropic/claude-sonnet-4-5 | General agent tasks, coding | Fast | $$ |
| openai/gpt-4.1 | Broad reasoning, tool use | Fast | $$ |
| google/gemini-2.0-flash-exp | Speed-critical tasks | Very fast | $ |
| meta-llama/llama-3.1-70b | Open-source, privacy-first | Medium | $ |
| mistralai/mistral-large | European hosting, multilingual | Medium | $$ |
| deepseek/deepseek-chat | Cost-efficient coding | Fast | $ |
OpenRouter model IDs use the format provider/model-name; in your OpenClaw config, add the openrouter/ prefix (so anthropic/claude-sonnet-4-5 becomes openrouter/anthropic/claude-sonnet-4-5). Check the full list at openrouter.ai/models.
Recommendation: Start with anthropic/claude-sonnet-4-5. It's the best all-rounder for agentic work. Swap to cheaper models for low-stakes tasks once you know your workload.
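Model IDs change as providers ship new versions, so pull the live list instead of guessing. OpenRouter's public models endpoint returns every ID it currently serves; a quick sketch using curl and jq (jq is assumed to be installed):

# list every model ID OpenRouter currently serves
curl -s https://openrouter.ai/api/v1/models | jq -r '.data[].id'

# or narrow it down, e.g. to Anthropic models only
curl -s https://openrouter.ai/api/v1/models | jq -r '.data[].id' | grep '^anthropic/'

Remember to add the openrouter/ prefix when you drop one of these IDs into your OpenClaw config.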
Step 4: Test the Connection
Fire up OpenClaw and verify everything works:
openclaw start
Test with a simple prompt:
openclaw chat "What model are you? Tell me your exact model ID."
You should see a response confirming the model you configured. If you picked Claude Sonnet through OpenRouter, it'll behave exactly like native Anthropic — because it IS Claude, just routed through OpenRouter's API.
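If the response looks wrong, or OpenClaw errors out, take OpenClaw out of the loop and call OpenRouter directly with the same key and model. A minimal sketch against the OpenAI-compatible chat completions endpoint; note that the model field drops the openrouter/ prefix because you're talking to OpenRouter itself (double-check the exact model ID at openrouter.ai/models):

# same prompt, sent straight to OpenRouter
curl -s https://openrouter.ai/api/v1/chat/completions \
  -H "Authorization: Bearer $OPENROUTER_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{
    "model": "anthropic/claude-sonnet-4-5",
    "messages": [{"role": "user", "content": "What model are you? Tell me your exact model ID."}]
  }'
# If this works but OpenClaw does not, the problem is in your OpenClaw config.
# If this fails too, check the key and your OpenRouter credits.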
Step 5: Set Up Cost Controls
OpenRouter charges per token, just like direct provider APIs. Set limits so you don't wake up to a surprise bill:
{
  "agents": {
    "list": [
      {
        "id": "main",
        "model": "openrouter/anthropic/claude-sonnet-4-5",
        "tokenLimits": {
          "maxInputTokens": 100000,
          "maxOutputTokens": 4096,
          "maxTokensPerDay": 500000
        }
      }
    ]
  }
}
You can also set spending limits directly on OpenRouter's dashboard under Settings > Limits. Belt and suspenders.
Step 6: Multi-Model Setup (Advanced)
Here's the real power move. Run different models for different agents:
{
  "agents": {
    "list": [
      {
        "id": "analyst",
        "model": "openrouter/anthropic/claude-sonnet-4-5",
        "systemPrompt": "You analyze data and produce reports. Be thorough and precise."
      },
      {
        "id": "web-scraper",
        "model": "openrouter/google/gemini-2.0-flash-exp",
        "systemPrompt": "You process web data quickly. Prioritize speed over depth."
      }
    ]
  }
}
Why this works:
- Analyst agent uses Claude Sonnet — better at reasoning, worth the extra cost
- Web scraper uses Gemini Flash — fast, cheap, good enough for data processing
- One API key manages both. One bill. One dashboard.
This is the setup pattern that lets you build composable multi-agent systems without managing three different provider accounts.
Step 7: Production Security
Lock it down before deploying:
{
  "gateway": {
    "host": "127.0.0.1",
    "port": 3000,
    "authToken": "your-secure-gateway-token"
  },
  "security": {
    "egressControl": {
      "enabled": true,
      "allowedDomains": [
        "openrouter.ai"
      ]
    }
  }
}
Egress control matters. Your agent only needs to talk to openrouter.ai. Block everything else. This prevents data exfiltration if an agent gets a malicious prompt.
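A quick way to confirm the allowlist is doing its job is to test egress from wherever the agent runs: OpenRouter should be reachable, everything else should not. A rough sketch; how blocked traffic fails (an error versus a timeout) depends on how your egress control is enforced:

# should succeed: openrouter.ai is on the allowlist
curl -s -o /dev/null -w "openrouter.ai: %{http_code}\n" https://openrouter.ai/api/v1/models

# should fail or time out: example.com is not on the allowlist
curl -s --max-time 5 https://example.com -o /dev/null \
  && echo "example.com: reachable (allowlist is NOT working)" \
  || echo "example.com: blocked (as intended)"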
Common Issues
"Invalid API key" Error
Check that:
- Your key starts with sk-or- (not sk-ant- or sk-)
- The key hasn't been revoked on OpenRouter's dashboard
- You have credits on your OpenRouter account
"Model not found" Error
In your OpenClaw config, the full ID is openrouter/ plus the OpenRouter model ID (provider/model-name):
- ✅ openrouter/anthropic/claude-sonnet-4-5
- ❌ claude-sonnet-4-5
- ❌ anthropic/claude-sonnet
Check openrouter.ai/models for exact model IDs.
Slow Responses
Some models on OpenRouter are hosted by third parties with variable latency. If speed matters:
- Use google/gemini-2.0-flash-exp for fastest responses
- Stick to top-tier providers (Anthropic, OpenAI, Google) for consistent latency
- Check the "Latency" column on OpenRouter's model page
Free Tier Rate Limits
Free models have lower rate limits. If you're hitting limits:
- Add $5-10 in credits and use paid models
- Or reduce request frequency with token limits
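If you'd rather ride out the limits than add credits, the usual pattern is to back off and retry when OpenRouter returns HTTP 429. A minimal sketch against the chat completions endpoint; the retry count and sleep times are arbitrary, and the model ID is the free-tier one from the cost table below (confirm the exact ID on openrouter.ai/models):

# retry on HTTP 429 with exponential backoff, 5 attempts max
for attempt in 1 2 3 4 5; do
  status=$(curl -s -o response.json -w "%{http_code}" \
    https://openrouter.ai/api/v1/chat/completions \
    -H "Authorization: Bearer $OPENROUTER_API_KEY" \
    -H "Content-Type: application/json" \
    -d '{"model": "meta-llama/llama-3.1-8b", "messages": [{"role": "user", "content": "ping"}]}')
  [ "$status" != "429" ] && break
  sleep $((2 ** attempt))
done
cat response.json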
Cost Comparison
Same model, different routes:
| Route | Claude Sonnet Input | Claude Sonnet Output |
|---|---|---|
| Direct Anthropic | $3/M tokens | $15/M tokens |
| Via OpenRouter | $3/M tokens | $15/M tokens |
OpenRouter doesn't mark up most major models. You pay the same price but get the flexibility to switch models without changing your setup.
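To turn those per-token prices into a budget, multiply your expected monthly volume by the rates in the table. A quick worked example with illustrative volumes (20M input tokens and 2M output tokens per month on Claude Sonnet):

# rough monthly estimate: volume (millions of tokens) x rate ($ per million)
awk 'BEGIN {
  in_tokens_m  = 20;  in_rate  = 3.00     # $3/M input tokens
  out_tokens_m = 2;   out_rate = 15.00    # $15/M output tokens
  printf "input: $%.2f  output: $%.2f  total: $%.2f/month\n",
         in_tokens_m * in_rate, out_tokens_m * out_rate,
         in_tokens_m * in_rate + out_tokens_m * out_rate
}'
# prints: input: $60.00  output: $30.00  total: $90.00/month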
Budget-friendly alternatives on OpenRouter:
| Model | Input | Output | When to Use |
|---|---|---|---|
| deepseek/deepseek-chat | $0.14/M | $0.28/M | Coding tasks, cost-sensitive |
| meta-llama/llama-3.1-8b | Free tier | Free tier | PoC, testing, learning |
| google/gemini-2.0-flash-exp | $0.10/M | $0.40/M | High-volume, speed-critical |
With Clawctl
If you're on Clawctl, the setup is even simpler. No config files.
- Open your Dashboard
- Click the Setup Wizard
- Select OpenRouter as your provider
- Paste your sk-or- API key
- Clawctl validates, encrypts, and deploys
That's it. Your key is stored encrypted (AES-256-GCM), injected into your agent at runtime, and never exposed in logs or API responses.
Clawctl also handles:
- Egress control — Only openrouter.ai is allowed by default
- Human approvals — 70+ high-risk actions require explicit approval
- Audit trail — Full searchable log of every agent action
- Cost monitoring — Token usage tracking with alerts
You get the model flexibility of OpenRouter with the production security of Clawctl. Best of both.
Set up OpenRouter on Clawctl →
What's Next
Once you're running OpenRouter + OpenClaw:
- Benchmark models — Try the same prompt across Claude, GPT-4, and Gemini. Pick the best fit for each task (a quick loop for this is sketched after this list).
- Build multi-agent setups — Use Claude for reasoning, Gemini for speed, Llama for privacy-sensitive tasks.
- Explore free tiers — Test new models without spending a dime.
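For the benchmarking idea above, you don't even need to touch your OpenClaw config: loop the same prompt over a few model IDs straight against OpenRouter's API and compare the answers. A rough sketch using the model IDs from this guide (double-check them on openrouter.ai/models):

# one prompt, three models, three answers to compare side by side
PROMPT="Summarize the tradeoffs between REST and gRPC in three bullet points."
for model in anthropic/claude-sonnet-4-5 openai/gpt-4.1 google/gemini-2.0-flash-exp; do
  echo "=== $model ==="
  curl -s https://openrouter.ai/api/v1/chat/completions \
    -H "Authorization: Bearer $OPENROUTER_API_KEY" \
    -H "Content-Type: application/json" \
    -d "{\"model\": \"$model\", \"messages\": [{\"role\": \"user\", \"content\": \"$PROMPT\"}]}" \
    | jq -r '.choices[0].message.content'
done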
300+ models. One key. Go build something.