Use when you have a spec or requirements for a multi-step task, before touching code
Use the skills CLI to install this skill with one command. Auto-detects all installed AI assistants.
Method 1 - skills CLI
npx skills i obra/superpowers/skills/writing-plans
Method 2 - openskills (supports sync & update)
npx openskills install obra/superpowers
Auto-detects Claude Code, Cursor, Codex CLI, Gemini CLI, and more. One install, works everywhere.
Installation Path
Download and extract the skill into the skills directory used by your AI assistant.
Write comprehensive implementation plans assuming the engineer has zero context for our codebase and questionable taste. Document everything they need to know: which files to touch for each task, code, testing, docs they might need to check, how to test it. Give them the whole plan as bite-sized tasks. DRY. YAGNI. TDD. Frequent commits.
Assume they are a skilled developer, but know almost nothing about our toolset or problem domain. Assume they don't know good test design very well.
Announce at start: "I'm using the writing-plans skill to create the implementation plan."
Context: This should be run in a dedicated worktree (created by the brainstorming skill).
Save plans to: docs/plans/YYYY-MM-DD-<feature-name>.md
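As a small illustration of that naming convention (the feature name here is hypothetical), the dated path can be built like so:

```python
from datetime import date

feature = "user-auth"  # hypothetical feature name
plan_path = f"docs/plans/{date.today():%Y-%m-%d}-{feature}.md"
# e.g. docs/plans/2025-06-01-user-auth.md, depending on the current date
```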
Each step is one action (2-5 minutes).
Every plan MUST start with this header:
# [Feature Name] Implementation Plan
> **For Claude:** REQUIRED SUB-SKILL: Use superpowers:executing-plans to implement this plan task-by-task.
**Goal:** [One sentence describing what this builds]
**Architecture:** [2-3 sentences about approach]
**Tech Stack:** [Key technologies/libraries]
---

### Task N: [Component Name]
**Files:**
- Create: `exact/path/to/file.py`
- Modify: `exact/path/to/existing.py:123-145`
- Test: `tests/exact/path/to/test.py`
**Step 1: Write the failing test**
```python
def test_specific_behavior():
    result = function(input)
    assert result == expected  # fails until the behavior is implemented
```
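For concreteness, here is a hedged sketch of how that test template might be filled in; the `slugify` function, its behavior, and the step pairing are illustrative assumptions, not part of the original template:

```python
import re


# Step 1: the failing test, written before slugify() exists in the codebase.
def test_slugify_lowercases_and_hyphenates():
    result = slugify("Hello World!")
    assert result == "hello-world"


# A later step: the minimal implementation that makes the test pass.
def slugify(text: str) -> str:
    """Lowercase the text and collapse runs of non-alphanumerics into single hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
```

Each such test-plus-implementation pair is sized to fit the 2-5 minute steps described above.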
After saving the plan, offer execution choice:
"Plan complete and saved to docs/plans/<filename>.md. Two execution options:
1. Subagent-Driven (this session) - I dispatch fresh subagent per task, review between tasks, fast iteration
2. Parallel Session (separate) - Open new session with executing-plans, batch execution with checkpoints
Which approach?"
If Subagent-Driven chosen: stay in this session and dispatch a fresh subagent per task, reviewing the result between tasks.
If Parallel Session chosen: have the user open a new session with the executing-plans skill for batch execution with checkpoints.