Complete reference for integrating with 300+ AI models through the OpenRouter TypeScript SDK using the callModel pattern
Use the skills CLI to install this skill with one command. Auto-detects all installed AI assistants.
Method 1 - skills CLI

```shell
npx skills i OpenRouterTeam/agent-skills/skills/typescript-sdk
```

Method 2 - openskills (supports sync & update)

```shell
npx openskills install OpenRouterTeam/agent-skills
```

Auto-detects Claude Code, Cursor, Codex CLI, Gemini CLI, and more. One install, works everywhere.
A comprehensive TypeScript SDK for interacting with OpenRouter's unified API, providing access to 300+ AI models through a single, type-safe interface. This skill enables AI agents to leverage the callModel pattern for text generation, tool usage, streaming, and multi-turn conversations.
```shell
npm install @openrouter/sdk
```

Get your API key from openrouter.ai/settings/keys, then initialize:

```typescript
import OpenRouter from '@openrouter/sdk';

const client = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY
});
```

The SDK supports two authentication methods: API keys for server-side applications and an OAuth PKCE flow for user-facing applications.
The primary authentication method uses API keys from your OpenRouter account.
```shell
export OPENROUTER_API_KEY=sk-or-v1-your-key-here
```

```typescript
import OpenRouter from '@openrouter/sdk';

const client = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY
});
```

The client automatically uses this key for all subsequent requests:

```typescript
// API key is automatically included
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Hello!'
});
```

Retrieve information about the currently configured API key:
```typescript
const keyInfo = await client.apiKeys.getCurrentKeyMetadata();
console.log('Key name:', keyInfo.name);
console.log('Created:', keyInfo.createdAt);
```

Programmatically manage API keys:

```typescript
// List all keys
const keys = await client.apiKeys.list();

// Create a new key
const newKey = await client.apiKeys.create({
  name: 'Production API Key'
});

// Get a specific key by hash
const key = await client.apiKeys.get({
  hash: 'sk-or-v1-...'
});
```

For user-facing applications where users should control their own API keys, OpenRouter supports OAuth with PKCE (Proof Key for Code Exchange). This flow allows users to generate API keys through a browser authorization flow without your application handling their credentials.
Generate an authorization code and URL to start the OAuth flow:
```typescript
const authResponse = await client.oAuth.createAuthCode({
  callbackUrl: 'https://myapp.com/auth/callback'
});

// authResponse contains:
// - authorizationUrl: URL to redirect the user to
// - code: The authorization code for later exchange
console.log('Redirect user to:', authResponse.authorizationUrl);
```

Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| callbackUrl | string | Yes | Your application's callback URL after user authorization |
Browser Redirect:

```typescript
// In a browser environment
window.location.href = authResponse.authorizationUrl;

// Or in a server-rendered app, return a redirect response
res.redirect(authResponse.authorizationUrl);
```

After the user authorizes your application, they are redirected back to your callback URL with an authorization code. Exchange this code for an API key:
```typescript
// In your callback handler
const code = req.query.code; // From the redirect URL

const apiKeyResponse = await client.oAuth.exchangeAuthCodeForAPIKey({
  code: code
});

// apiKeyResponse contains:
// - key: The user's API key
// - Additional metadata about the key
const userApiKey = apiKeyResponse.key;

// Store securely for this user's future requests
await saveUserApiKey(userId, userApiKey);
```

Parameters:

| Parameter | Type | Required | Description |
|---|---|---|---|
| code | string | Yes | The authorization code from the OAuth redirect |
```typescript
import OpenRouter from '@openrouter/sdk';
import express from 'express';

const app = express();
```
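To make the handshake order concrete, here is a self-contained simulation of the two-step flow. The stubbed client, fake authorization code, key format, and user-key map below are illustrative stand-ins, not real SDK behavior (the real oAuth methods are async and talk to OpenRouter's servers):

```typescript
// Minimal simulation of the PKCE handshake order with a stubbed client.
type AuthCodeResponse = { authorizationUrl: string; code: string };
type ApiKeyResponse = { key: string };

const stubClient = {
  oAuth: {
    // Stand-in for client.oAuth.createAuthCode
    createAuthCode(opts: { callbackUrl: string }): AuthCodeResponse {
      return {
        authorizationUrl:
          'https://openrouter.ai/auth?callback_url=' +
          encodeURIComponent(opts.callbackUrl),
        code: 'fake-auth-code',
      };
    },
    // Stand-in for client.oAuth.exchangeAuthCodeForAPIKey
    exchangeAuthCodeForAPIKey(opts: { code: string }): ApiKeyResponse {
      return { key: 'sk-or-v1-key-for-' + opts.code };
    },
  },
};

const userKeys = new Map<string, string>();

// Step 1: start the flow and send the user to the authorization URL.
const auth = stubClient.oAuth.createAuthCode({
  callbackUrl: 'https://myapp.com/auth/callback',
});

// Step 2: after the redirect, exchange the code and store the key per user.
const exchanged = stubClient.oAuth.exchangeAuthCodeForAPIKey({ code: auth.code });
userKeys.set('user-123', exchanged.key);
```

The essential property is that the key only ever exists server-side after the exchange; the browser sees only the authorization URL and the short-lived code.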
The callModel function is the primary interface for text generation. It provides a unified, type-safe way to interact with any supported model.
```typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Explain quantum computing in one sentence.',
});

const text = await result.getText();
```

The SDK accepts flexible input types for the input parameter:
A simple string becomes a user message:
```typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Hello, how are you?'
});
```

For multi-turn conversations:
```typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: [
    { role: 'user', content: 'What is the capital of France?' },
    { role: 'assistant', content: 'The capital of France is Paris.' },
    { role: 'user', content: 'What is its population?' }
  ]
});
```

Including images and text:
```typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: [
    {
      role: 'user',
      content: [
        { type: 'text', text: 'What is in this image?' },
        { type: 'image_url', image_url: { url: 'https://example.com/image.png' } }
      ]
    }
  ]
});
```

Use the instructions parameter for system-level guidance:
```typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  instructions: 'You are a helpful coding assistant. Be concise.',
  input: 'How do I reverse a string in Python?'
});
```

The result object provides multiple methods for consuming the response:
| Method | Purpose |
|---|---|
| getText() | Get complete text after all tools complete |
| getResponse() | Full response object with token usage |
| getTextStream() | Stream text deltas as they arrive |
| getReasoningStream() | Stream reasoning tokens (for o1/reasoning models) |
| getToolCallsStream() | Stream tool calls as they complete |
```typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Write a haiku about coding'
});

const text = await result.getText();
console.log(text);
```

```typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Hello!'
});

const response = await result.getResponse();
console.log('Text:', response.text);
console.log('Token usage:', response.usage);
```

```typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Write a short story'
});

for await (const delta of result.getTextStream()) {
  process.stdout.write(delta);
}
```

Create strongly-typed tools using Zod schemas for automatic validation and type inference.
```typescript
import { tool } from '@openrouter/sdk';
import { z } from 'zod';

const weatherTool = tool({
  name: 'get_weather',
  description: 'Get current weather for a location',
  inputSchema: z.object({
    location: z.string().describe('City name, e.g. "Paris"')
  }),
  execute: async ({ location }) => {
    // Illustrative stub; fetch from a real weather provider in practice
    return { location, tempC: 18, conditions: 'partly cloudy' };
  }
});

const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'What is the weather in Paris?',
  tools: [weatherTool]
});

const text = await result.getText();
// The SDK automatically executes the tool and continues the conversation
```

Standard execute functions that return a result:
```typescript
const calculatorTool = tool({
  name: 'calculate',
  description: 'Perform mathematical calculations',
  inputSchema: z.object({
    expression: z.string()
  }),
  execute: async ({ expression }) => {
    // Caution: eval is unsafe for untrusted input; prefer a math parser in production
    return { result: eval(expression) };
  }
});
```

Yield progress events using eventSchema:
```typescript
const searchTool = tool({
  name: 'web_search',
  description: 'Search the web',
  inputSchema: z.object({ query: z.string() }),
  eventSchema: z.object({
    type: z.literal('progress'),
    message: z.string()
  }),
  outputSchema: z.object({
    results: z.array(z.string())
  }),
  // Generator tools yield progress events matching eventSchema,
  // then return a final result matching outputSchema
  execute: async function* ({ query }) {
    yield { type: 'progress', message: `Searching for "${query}"...` };
    return { results: ['...'] };
  }
});
```
Set execute: false to handle tool calls yourself:
```typescript
const manualTool = tool({
  name: 'user_confirmation',
  description: 'Request user confirmation',
  inputSchema: z.object({ message: z.string() }),
  execute: false
});
```

Control automatic tool execution with stop conditions:
```typescript
import { stepCountIs, maxCost, hasToolCall } from '@openrouter/sdk';

const result = client.callModel({
  model: 'openai/gpt-5.2',
  input: 'Research this topic thoroughly',
  tools: [searchTool, analyzeTool],
  stopWhen: [
    stepCountIs(10),         // Stop after 10 turns
    maxCost(1.00),           // Stop if cost exceeds $1.00
    hasToolCall('finalize')  // Stop when a specific tool is called
  ]
});
```

| Condition | Description |
|---|---|
| stepCountIs(n) | Stop after n turns |
| maxCost(amount) | Stop when cost exceeds amount |
| hasToolCall(name) | Stop when specific tool is called |
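Conceptually, each built-in condition is a predicate over the run's state, and the run stops as soon as any condition fires. The sketch below is a simplified model of those semantics, not the SDK's actual implementation or types:

```typescript
// Simplified model: stop conditions as predicates over run state.
type RunState = { turns: number; totalCost: number; lastToolCalls: string[] };
type StopCondition = (state: RunState) => boolean;

// Stop after n turns
const stepCountIs = (n: number): StopCondition => (s) => s.turns >= n;

// Stop when accumulated cost exceeds amount
const maxCost = (amount: number): StopCondition => (s) => s.totalCost > amount;

// Stop when a specific tool has been called
const hasToolCall = (name: string): StopCondition =>
  (s) => s.lastToolCalls.includes(name);

// A run stops as soon as any condition fires.
const shouldStop = (conds: StopCondition[], s: RunState): boolean =>
  conds.some((c) => c(s));

const conds = [stepCountIs(10), maxCost(1.0), hasToolCall('finalize')];
console.log(shouldStop(conds, { turns: 3, totalCost: 0.2, lastToolCalls: [] })); // false
console.log(shouldStop(conds, { turns: 3, totalCost: 1.5, lastToolCalls: [] })); // true
```

Modeling conditions as composable predicates explains why they can be freely mixed in the stopWhen array.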
```typescript
const customStop = (context) => {
  return context.messages.length > 20;
};

const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Complex task',
  tools: [myTool],
  stopWhen: customStop
});
```

Compute parameters based on conversation context:
```typescript
const result = client.callModel({
  model: (ctx) => ctx.numberOfTurns > 3 ? 'openai/gpt-4' : 'openai/gpt-4o-mini',
  temperature: (ctx) => ctx.numberOfTurns > 1 ? 0.3 : 0.7,
  input: 'Hello!'
});
```

| Property | Type | Description |
|---|---|---|
| numberOfTurns | number | Current turn count |
| messages | array | All messages so far |
| instructions | string | Current system instructions |
| totalCost | number | Accumulated cost |
Tools can modify parameters for subsequent turns, enabling skills and context-aware behavior:
```typescript
const skillTool = tool({
  name: 'load_skill',
  description: 'Load a specialized skill',
  inputSchema: z.object({
    skill: z.string().describe('Name of the skill to load')
  }),
  nextTurnParams: {
    instructions: (params, context) => {
      // Illustrative: append the loaded skill's guidance to the current instructions
      return `${context.instructions}\nYou now have the ${params.skill} skill loaded.`;
    }
  }
});
```
Control model behavior with these parameters:
```typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Write a creative story',
  temperature: 0.7,        // Creativity (0-2, default varies by model)
  maxOutputTokens: 1000,   // Maximum tokens to generate
  topP: 0.9,               // Nucleus sampling parameter
  frequencyPenalty: 0.5,   // Reduce repetition
  presencePenalty: 0.5     // Encourage new topics
});
```
All streaming methods support concurrent consumers from a single result object:
```typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Write a detailed explanation'
});

// Consumer 1: Stream text to console
const textPromise = (async () => {
  for await (const delta of result.getTextStream()) {
    process.stdout.write(delta);
  }
})();

// Consumer 2: Collect the full response concurrently
const response = await result.getResponse();
await textPromise;
```
```typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Search for information about TypeScript',
  tools: [searchTool]
});

for await (const toolCall of result.getToolCallsStream()) {
  console.log(`Tool called: ${toolCall.name}`);
  console.log('Arguments:', toolCall.arguments);
}
```
Convert between ecosystem formats for interoperability:
```typescript
import { fromChatMessages, toChatMessage } from '@openrouter/sdk';

// OpenAI messages → OpenRouter format
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: fromChatMessages(openaiMessages)
});

// Response → OpenAI chat message format
const response = await result.getResponse();
const chatMsg = toChatMessage(response);
```

```typescript
import { fromClaudeMessages, toClaudeMessage } from '@openrouter/sdk';

// Claude messages → OpenRouter format
const result = client.callModel({
  model: 'anthropic/claude-3-opus',
  input: fromClaudeMessages(claudeMessages)
});

// Response → Claude message format
const response = await result.getResponse();
const claudeMsg = toClaudeMessage(response);
```

The SDK uses the OpenResponses format for messages. Understanding these shapes is essential for building robust agents.
Messages contain a role property that determines the message type:
| Role | Description |
|---|---|
| user | User-provided input |
| assistant | Model-generated responses |
| system | System instructions |
| developer | Developer-level directives |
| tool | Tool execution results |
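To make the tool-related roles concrete, here is a hand-assembled exchange showing how arguments and results travel as JSON-encoded strings inside the messages. The function name, call id, and values are made up for illustration:

```typescript
// An assistant message requesting a tool call; arguments are a JSON string.
const toolCallId = 'call_abc123';

const assistantMsg = {
  role: 'assistant' as const,
  content: null,
  tool_calls: [{
    id: toolCallId,
    type: 'function' as const,
    function: {
      name: 'get_weather',
      arguments: JSON.stringify({ location: 'Paris' }), // JSON-encoded
    },
  }],
};

// Decode the arguments, "execute" the tool, and answer with a tool-role message.
const args = JSON.parse(assistantMsg.tool_calls[0].function.arguments);
const toolMsg = {
  role: 'tool' as const,
  tool_call_id: toolCallId, // links the result back to the request
  content: JSON.stringify({ location: args.location, tempC: 18 }),
};
```

The tool_call_id linkage is what lets the model match each result to the request it made, even when several tools run in one turn.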
Simple text content from user or assistant:
```typescript
interface TextMessage {
  role: 'user' | 'assistant';
  content: string;
}
```

Messages with mixed content types:
```typescript
interface MultimodalMessage {
  role: 'user';
  content: Array<
    | { type: 'input_text'; text: string }
    | { type: 'input_image'; imageUrl: string; detail?: 'auto' | 'low' | 'high' }
  >;
}
```
When the model requests a tool execution:
```typescript
interface ToolCallMessage {
  role: 'assistant';
  content?: null;
  tool_calls?: Array<{
    id: string;
    type: 'function';
    function: {
      name: string;
      arguments: string; // JSON-encoded arguments
    };
  }>;
}
```
Result returned after tool execution:
```typescript
interface ToolResultMessage {
  role: 'tool';
  tool_call_id: string;
  content: string; // JSON-encoded result
}
```

The complete response object from getResponse():
```typescript
interface OpenResponsesNonStreamingResponse {
  output: Array<ResponseMessage>;
  usage?: {
    inputTokens: number;
    outputTokens: number;
    cachedTokens?: number;
  };
  finishReason?: string;
  warnings?: Array<{
    message: string; // assumed shape; additional fields may exist
  }>;
}
```
Output messages in the response array:
```typescript
// Text/content message
interface ResponseOutputMessage {
  type: 'message';
  role: 'assistant';
  content: string | Array<ContentPart>;
  reasoning?: string; // For reasoning models (o1, etc.)
}

// Tool result in output
interface FunctionCallOutputMessage {
  type: 'function_call_output'; // assumed tag; remaining fields follow the tool result shape
  output: string;
}
```
When tool calls are parsed from the response:
```typescript
interface ParsedToolCall {
  id: string;
  name: string;
  arguments: unknown; // Validated against inputSchema
}
```

After a tool completes execution:
```typescript
interface ToolExecutionResult {
  toolCallId: string;
  toolName: string;
  result: unknown; // Validated against outputSchema
  preliminaryResults?: unknown[]; // From generator tools
  error?: Error;
}
```

Available in custom stop condition callbacks:
```typescript
interface StepResult {
  stepType: 'initial' | 'continue';
  text: string;
  toolCalls: ParsedToolCall[];
  toolResults: ToolExecutionResult[];
  response: OpenResponsesNonStreamingResponse;
  usage?: {
    inputTokens: number;
    outputTokens: number; // assumed to mirror the response usage shape
  };
}
```
Available to tools and dynamic parameter functions:
```typescript
interface TurnContext {
  numberOfTurns: number; // Turn count (1-indexed)
  turnRequest?: OpenResponsesRequest; // Current request being made
  toolCall?: OpenResponsesFunctionToolCall; // Current tool call (in tool context)
}
```

The SDK provides multiple streaming methods that yield different event types.
The getFullResponsesStream() method yields these event types:
```typescript
type EnhancedResponseStreamEvent =
  | ResponseCreatedEvent
  | ResponseInProgressEvent
  | OutputTextDeltaEvent
  | OutputTextDoneEvent
  | ReasoningDeltaEvent
  | ReasoningDoneEvent
  | FunctionCallArgumentsDeltaEvent
  | FunctionCallArgumentsDoneEvent
  | ResponseCompletedEvent
  | ToolPreliminaryResultEvent;
```

| Event Type | Description | Payload |
|---|---|---|
| response.created | Response object initialized | { response: ResponseObject } |
| response.in_progress | Generation has started | {} |
| response.output_text.delta | Text chunk received | { delta: string } |
| response.output_text.done | Text generation complete | { text: string } |
| response.reasoning.delta | Reasoning chunk (o1 models) | { delta: string } |
| response.reasoning.done | Reasoning complete | { reasoning: string } |
| response.function_call_arguments.delta | Tool argument chunk | { delta: string } |
| response.function_call_arguments.done | Tool arguments complete | { arguments: string } |
| response.completed | Full response complete | { response: ResponseObject } |
| tool.preliminary_result | Generator tool progress | { toolCallId: string; result: unknown } |
```typescript
interface OutputTextDeltaEvent {
  type: 'response.output_text.delta';
  delta: string;
}
```

For reasoning models (o1, etc.):
```typescript
interface ReasoningDeltaEvent {
  type: 'response.reasoning.delta';
  delta: string;
}
```

```typescript
interface FunctionCallArgumentsDeltaEvent {
  type: 'response.function_call_arguments.delta';
  delta: string;
}
```

From generator tools that yield progress:
```typescript
interface ToolPreliminaryResultEvent {
  type: 'tool.preliminary_result';
  toolCallId: string;
  result: unknown; // Matches the tool's eventSchema
}
```

```typescript
interface ResponseCompletedEvent {
  type: 'response.completed';
  response: OpenResponsesNonStreamingResponse;
}
```

The getToolStream() method yields:
```typescript
type ToolStreamEvent =
  | { type: 'delta'; content: string }
  | { type: 'preliminary_result'; toolCallId: string; result: unknown };
```

```typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Analyze this data',
  tools: [analysisTool]
});

for await (const event of result.getFullResponsesStream()) {
  switch (event.type) {
    case 'response.output_text.delta':
      process.stdout.write(event.delta);
      break;
    case 'tool.preliminary_result':
      console.log('Tool progress:', event.result);
      break;
    case 'response.completed':
      console.log('Done');
      break;
  }
}
```
The getNewMessagesStream() yields OpenResponses format updates:
```typescript
type MessageStreamUpdate =
  | ResponsesOutputMessage // Text/content updates
  | OpenResponsesFunctionCallOutput; // Tool results
```

```typescript
const result = client.callModel({
  model: 'openai/gpt-5-nano',
  input: 'Research this topic',
  tools: [searchTool]
});

const allMessages: MessageStreamUpdate[] = [];
for await (const message of result.getNewMessagesStream()) {
  allMessages.push(message);
}
```
Beyond callModel, the client provides access to other API endpoints:
```typescript
const client = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY
});

// List available models
const models = await client.models.list();

// Chat completions (alternative to callModel)
const completion = await client.chat.send({
  model: 'openai/gpt-5-nano',
  messages: [{ role: 'user', content: 'Hello!' }] // illustrative payload
});
```
The SDK provides specific error types with actionable messages:
```typescript
try {
  const result = await client.callModel({
    model: 'openai/gpt-5-nano',
    input: 'Hello!'
  });
  const text = await result.getText();
} catch (error) {
  if (error.statusCode === 429) {
    // Rate limited: back off and retry
  } else if (error.statusCode === 401) {
    // Unauthorized: check your API key
  } else {
    throw error;
  }
}
```
| Code | Meaning | Action |
|---|---|---|
| 400 | Bad request | Check request parameters |
| 401 | Unauthorized | Verify API key |
| 402 | Payment required | Add credits |
| 429 | Rate limited | Implement exponential backoff |
| 500 | Server error | Retry with backoff |
| 503 | Service unavailable | Try alternative model |
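The actions in the table above can be encoded as a small helper that decides which statuses merit a retry and how long to wait. The retryable set and delay numbers below are illustrative defaults, not SDK-provided values:

```typescript
// Statuses worth retrying: 429/500 per the table; 503 may also warrant
// switching to an alternative model before giving up.
const RETRYABLE = new Set([429, 500, 503]);

function shouldRetry(statusCode: number, attempt: number, maxRetries = 3): boolean {
  return RETRYABLE.has(statusCode) && attempt < maxRetries;
}

// Exponential backoff: 500ms, 1s, 2s, 4s, ... capped at capMs.
function backoffMs(attempt: number, baseMs = 500, capMs = 8000): number {
  return Math.min(baseMs * 2 ** attempt, capMs);
}

console.log(shouldRetry(429, 0)); // true — rate limited, retry with backoff
console.log(shouldRetry(401, 0)); // false — fix the API key instead
console.log(backoffMs(0), backoffMs(3), backoffMs(10)); // 500 4000 8000
```

Capping the delay keeps worst-case latency bounded while still spacing out retries under sustained rate limiting.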
```typescript
import OpenRouter, { tool, stepCountIs } from '@openrouter/sdk';
import { z } from 'zod';

const client = new OpenRouter({
  apiKey: process.env.OPENROUTER_API_KEY
});
```
The callModel pattern provides automatic tool execution, type safety, and multi-turn handling.
Zod provides runtime validation and excellent TypeScript inference:
```typescript
import { z } from 'zod';

const schema = z.object({
  name: z.string().min(1),
  age: z.number().int().positive()
});
```

Always set reasonable limits to prevent runaway costs:

```typescript
stopWhen: [stepCountIs(20), maxCost(5.00)]
```

Implement retry logic for transient failures:
```typescript
async function callWithRetry(params, maxRetries = 3) {
  for (let i = 0; i < maxRetries; i++) {
    try {
      return await client.callModel(params).getText();
    } catch (error) {
      if (i === maxRetries - 1) throw error;
      // Exponential backoff: 1s, 2s, 4s, ...
      await new Promise((resolve) => setTimeout(resolve, 2 ** i * 1000));
    }
  }
}
```

Streaming provides better UX and allows early termination:

```typescript
for await (const delta of result.getTextStream()) {
  // Process incrementally
}
```

SDK Status: Beta - Report issues on GitHub