linear-rate-limits - Agent Skill

Handle Linear API rate limiting, complexity budgets, and quotas.
Use when dealing with 429 errors, implementing throttling,
or optimizing request patterns to stay within limits.
Trigger: "linear rate limit", "linear throttling", "linear 429",
"linear API quota", "linear complexity", "linear request limits".
Use the skills CLI to install this skill with one command. Auto-detects all installed AI assistants.

Method 1 - skills CLI:

```shell
npx skills i jeremylongshore/claude-code-plugins-plus-skills/plugins/saas-packs/linear-pack/skills/linear-rate-limits
```

Method 2 - openskills (supports sync & update):

```shell
npx openskills install jeremylongshore/claude-code-plugins-plus-skills
```

Both auto-detect Claude Code, Cursor, Codex CLI, Gemini CLI, and more. One install, works everywhere.

Installation Path
Download and extract to the skills directory for your assistant (Claude Code, Cursor, OpenCode, Gemini CLI, or Codex CLI). For Claude Code:

~/.claude/skills/linear-rate-limits/
Linear Rate Limits
Overview
Linear uses the leaky bucket algorithm with two rate limiting dimensions. Understanding both is critical for reliable integrations:
| Budget | Limit | Refill Rate |
|---|---|---|
| Requests | 5,000/hour per API key | ~83/min constant refill |
| Complexity | 250,000 points/hour | ~4,167/min constant refill |
| Max single query | 10,000 points | Hard reject if exceeded |
Complexity scoring: each scalar property = 0.1 pt, each object = 1 pt; connections multiply their children's cost by the `first` argument (default 50), then round up.
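The scoring rule above can be turned into a back-of-envelope estimator. `estimateConnectionCost` is an illustrative helper, not part of `@linear/sdk`; the authoritative cost of a real query comes from the `x-complexity` response header.

```typescript
// Illustrative estimator for the documented scoring rule:
// 0.1 pt per scalar field, 1 pt per object, and a connection
// multiplies its children's cost by the `first` argument, rounded up.
function estimateConnectionCost(first: number, fieldsPerNode: number): number {
  const perNode = 1 + 0.1 * fieldsPerNode; // 1 pt for the object + 0.1 pt per field
  return Math.ceil(first * perNode);
}

// issues(first: 50) selecting 5 scalar fields per issue:
console.log(estimateConnectionCost(50, 5)); // 75 points
```

Comparing the estimate against the `x-complexity` header on an actual response is a quick way to calibrate before running a query at scale.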
Prerequisites
@linear/sdk installed
Understanding of HTTP response headers
Familiarity with async/await patterns
Instructions
Step 1: Read Rate Limit Headers

Linear returns rate limit info on every response.

```typescript
const response = await fetch("https://api.linear.app/graphql", {
  method: "POST",
  headers: {
    Authorization: process.env.LINEAR_API_KEY!,
    "Content-Type": "application/json",
  },
  body: JSON.stringify({ query: "{ viewer { id } }" }),
});

// Key headers
const headers = {
  requestsRemaining: response.headers.get("x-ratelimit-requests-remaining"),
  requestsLimit: response.headers.get("x-ratelimit-requests-limit"),
  requestsReset: response.headers.get("x-ratelimit-requests-reset"),
  complexityRemaining: response.headers.get("x-ratelimit-complexity-remaining"),
  complexityLimit: response.headers.get("x-ratelimit-complexity-limit"),
  queryComplexity: response.headers.get("x-complexity"),
};
console.log(`Requests: ${headers.requestsRemaining}/${headers.requestsLimit}`);
console.log(`Complexity: ${headers.complexityRemaining}/${headers.complexityLimit}`);
console.log(`This query cost: ${headers.queryComplexity} points`);
```
Step 2: Exponential Backoff with Jitter
```typescript
import { LinearClient } from "@linear/sdk";

class RateLimitedClient {
  private client: LinearClient;

  constructor(apiKey: string) {
    this.client = new LinearClient({ apiKey });
  }

  async withRetry<T>(fn: () => Promise<T>, maxRetries = 5): Promise<T> {
    for (let attempt = 0; attempt < maxRetries; attempt++) {
      try {
        return await fn();
      } catch (error: any) {
        const isRateLimited =
          error.status === 429 ||
          error.message?.includes("rate") ||
          error.type === "ratelimited";
        if (!isRateLimited || attempt === maxRetries - 1) throw error;
        // Exponential backoff: 1s, 2s, 4s, 8s, 16s + jitter
        const delay = 1000 * Math.pow(2, attempt) + Math.random() * 500;
        console.warn(`Rate limited (attempt ${attempt + 1}/${maxRetries}), waiting ${Math.round(delay)}ms`);
        await new Promise(r => setTimeout(r, delay));
      }
    }
    throw new Error("Unreachable");
  }

  get sdk() { return this.client; }
}
```
Step 3: Request Queue with Token Bucket
Prevent bursts by spacing requests evenly.
```typescript
class RequestQueue {
  private queue: Array<{ fn: () => Promise<any>; resolve: Function; reject: Function }> = [];
  private processing = false;
  private intervalMs: number;

  constructor(requestsPerSecond = 10) {
    this.intervalMs = 1000 / requestsPerSecond;
  }

  async enqueue<T>(fn: () => Promise<T>): Promise<T> {
    return new Promise((resolve, reject) => {
      this.queue.push({ fn, resolve, reject });
      if (!this.processing) this.processQueue();
    });
  }

  private async processQueue() {
    this.processing = true;
    while (this.queue.length > 0) {
      const { fn, resolve, reject } = this.queue.shift()!;
      try {
        resolve(await fn());
      } catch (error) {
        reject(error);
      }
      if (this.queue.length > 0) {
        await new Promise(r => setTimeout(r, this.intervalMs));
      }
    }
    this.processing = false;
  }
}

// Usage: 8 requests/second max
const queue = new RequestQueue(8);
const client = new LinearClient({ apiKey: process.env.LINEAR_API_KEY! });
const teamResults = await Promise.all(
  teamIds.map(id => queue.enqueue(() => client.team(id)))
);
```
Step 4: Reduce Query Complexity
```typescript
// HIGH COMPLEXITY (~12,500 pts):
// 250 issues * (1 issue + 50 labels * 0.1 per field) = expensive
// const heavy = await client.issues({ first: 250 });

// LOW COMPLEXITY (~55 pts):
// 50 issues * (5 fields * 0.1 + 1 object) = cheap
const light = await client.issues({
  first: 50,
  filter: { team: { id: { eq: teamId } } },
});

// Use rawRequest for minimal field selection
const minimal = await client.client.rawRequest(`
  query { issues(first: 50) { nodes { id identifier title priority } } }
`);

// Sort by updatedAt to get fresh data first, avoid paginating everything
const fresh = await client.issues({
  first: 50,
  orderBy: "updatedAt",
  filter: { updatedAt: { gte: lastSyncTime } },
});
```
Step 5: Batch Mutations
Combine multiple mutations into one GraphQL request.
```typescript
// Instead of 100 separate issueUpdate calls (~100 requests):
async function batchUpdatePriority(client: LinearClient, issueIds: string[], priority: number) {
  const chunkSize = 20; // Keep each batch under the complexity limit
  for (let i = 0; i < issueIds.length; i += chunkSize) {
    const chunk = issueIds.slice(i, i + chunkSize);
    const mutations = chunk.map((id, j) =>
      `u${j}: issueUpdate(id: "${id}", input: { priority: ${priority} }) { success }`
    ).join("\n");
    await queue.enqueue(() =>
      client.client.rawRequest(`mutation BatchUpdate { ${mutations} }`)
    );
  }
}

// Batch archive
async function batchArchive(client: LinearClient, issueIds: string[]) {
  for (let i = 0; i < issueIds.length; i += 20) {
    const chunk = issueIds.slice(i, i + 20);
    const mutations = chunk.map((id, j) =>
      `a${j}: issueArchive(id: "${id}") { success }`
    ).join("\n");
    await client.client.rawRequest(`mutation { ${mutations} }`);
  }
}
```
Step 6: Rate Limit Monitor
```typescript
class RateLimitMonitor {
  private remaining = { requests: 5000, complexity: 250000 };

  update(headers: Headers) {
    const reqRemaining = headers.get("x-ratelimit-requests-remaining");
    const cxRemaining = headers.get("x-ratelimit-complexity-remaining");
    if (reqRemaining) this.remaining.requests = parseInt(reqRemaining);
    if (cxRemaining) this.remaining.complexity = parseInt(cxRemaining);
  }

  isLow(): boolean {
    return this.remaining.requests < 100 || this.remaining.complexity < 5000;
  }

  getStatus() {
    return {
      requests: this.remaining.requests,
      complexity: this.remaining.complexity,
      healthy: !this.isLow(),
    };
  }
}
```
Error Handling
| Error | Cause | Solution |
|---|---|---|
| HTTP 429 | Request or complexity budget exceeded | Parse headers, back off exponentially |
| Query complexity too high | Single query > 10,000 pts | Reduce `first` to 50, remove nested relations |
| Burst of 429s on startup | Init fetches too much data | Stagger startup queries, cache static data |
| Timeout on SDK call | Server under load | Add 30s timeout, retry once |
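The last row of the table (30s timeout, retry once) can be sketched with `Promise.race`. `withTimeout` is an illustrative wrapper, not an SDK API:

```typescript
// Race the call against a timer; retry once on failure.
// Sketch only: `withTimeout` and its defaults are illustrative names.
async function withTimeout<T>(
  fn: () => Promise<T>,
  timeoutMs = 30_000,
  retries = 1
): Promise<T> {
  for (let attempt = 0; ; attempt++) {
    let timer: ReturnType<typeof setTimeout> | undefined;
    try {
      return await Promise.race([
        fn(),
        new Promise<never>((_, reject) => {
          timer = setTimeout(() => reject(new Error("timeout")), timeoutMs);
        }),
      ]);
    } catch (error) {
      if (attempt >= retries) throw error; // already retried once
    } finally {
      clearTimeout(timer);
    }
  }
}
```

Wrap slow SDK calls such as `await withTimeout(() => client.issues({ first: 50 }))`; combine with the exponential-backoff wrapper from Step 2 for 429s.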
Examples
Rate Limit Status Check
```shell
# -D - dumps response headers to stdout; -o /dev/null discards the body
curl -s -D - -o /dev/null -X POST https://api.linear.app/graphql \
  -H "Authorization: $LINEAR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"query": "{ viewer { id } }"}' | grep -i ratelimit
```
Safe Bulk Import
```typescript
const rlClient = new RateLimitedClient(process.env.LINEAR_API_KEY!);
const items = [ /* issues to import */ ];

for (let i = 0; i < items.length; i++) {
  await rlClient.withRetry(() =>
    rlClient.sdk.createIssue({ teamId: "team-uuid", title: items[i].title })
  );
  if ((i + 1) % 50 === 0) console.log(`Imported ${i + 1}/${items.length}`);
}
```
Resources