planning — Feature Planning Pipeline (community skill for Claude Code, hospital-management-system-v3)

v1.0.0

About this Skill

Recommended scenario: ideal for AI agents that need a feature planning pipeline. Localized summary: Feature Planning Pipeline — generate quality plans through systematic discovery, synthesis, verification, and decomposition. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Features

Feature Planning Pipeline
Generate quality plans through systematic discovery, synthesis, verification, and decomposition.

thanhquan3010
Updated: 3/14/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 10/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
Review Score: 10/11
Quality Score: 64
Canonical Locale: en
Detected Body Locale: en


Why use this skill

Recommendation: planning helps agents run a feature planning pipeline. The Feature Planning Pipeline generates quality plans through systematic discovery, synthesis, verification, and decomposition, and supports Claude Code, Cursor, and Windsurf workflows.

Best for

Recommended scenario: ideal for AI agents that need a feature planning pipeline.

Actionable use cases for planning

Use case: applying the Feature Planning Pipeline
Use case: generating quality plans through systematic discovery, synthesis, verification, and decomposition

! Safety and Limitations

  • Limitation: skip the worktree only for a quick fix on main that won't create new beads.
  • Limitation: after the PR merges, pull main and remove the worktree.
  • Limitation: requires repository-specific context from the skill documentation.

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment


Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is planning?

planning is the Feature Planning Pipeline skill: it generates quality plans through systematic discovery, synthesis, verification, and decomposition. It supports Claude Code, Cursor, and Windsurf workflows.

How do I install planning?

Run the command: npx killer-skills add thanhquan3010/hospital-management-system-v3/planning. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for planning?

Key use cases include applying the Feature Planning Pipeline to generate quality plans through systematic discovery, synthesis, verification, and decomposition.

Which IDEs are compatible with planning?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for planning?

Limitation: skip the worktree only for a quick fix on main that won't create new beads. Limitation: after the PR merges, pull main and remove the worktree. Limitation: requires repository-specific context from the skill documentation.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add thanhquan3010/hospital-management-system-v3/planning. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use planning immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material


Upstream Source

planning


SKILL.md
Readonly
Supporting Evidence

Feature Planning Pipeline

Generate quality plans through systematic discovery, synthesis, verification, and decomposition.

Pipeline Overview

USER REQUEST → Worktree Setup → Discovery → Synthesis → Verification → Decomposition → Validation → Track Planning → Ready Plan
| Phase | Tool | Output |
| --- | --- | --- |
| 0. Worktree Setup | bd worktree | Isolated feature branch |
| 1. Discovery | Parallel sub-agents, gkg, Librarian, exa | Discovery Report |
| 2. Synthesis | Oracle | Approach + Risk Map |
| 3. Verification | Spikes via MULTI_AGENT_WORKFLOW | Validated Approach + Learnings |
| 4. Decomposition | file-beads skill | .beads/*.md files |
| 5. Validation | bv + Oracle | Validated dependency graph |
| 6. Track Planning | bv --robot-plan | Execution plan with parallel tracks |

Phase 0: Worktree Setup (Mandatory)

Why: Beads are tracked in git. Without worktrees, branch switching causes conflicts when PRs merge.

Always create a worktree before creating beads for a feature:

bash
# From main repo root
bd worktree create .worktrees/<feature-name> --branch feature/<feature-name>
cd .worktrees/<feature-name>

This creates a redirect file so all beads operations share the main repo's .beads/ database. No merge conflicts when PR lands.

After PR merges:

bash
cd <main-repo>
git pull
bd worktree remove .worktrees/<feature-name>

Skip worktree only if: Quick fix on main that won't create new beads.

Phase 1: Discovery (Parallel Exploration)

Launch parallel sub-agents to gather codebase intelligence:

Task() → Agent A: Architecture snapshot (gkg repo_map)
Task() → Agent B: Pattern search (find similar existing code)
Task() → Agent C: Constraints (package.json, tsconfig, deps)
Librarian → External patterns ("how do similar projects do this?")
exa → Library docs (if external integration needed)

Discovery Report Template

Save to history/<feature>/discovery.md:

markdown
# Discovery Report: <Feature Name>

## Architecture Snapshot

- Relevant packages: ...
- Key modules: ...
- Entry points: ...

## Existing Patterns

- Similar implementation: <file> does X using Y pattern
- Reusable utilities: ...
- Naming conventions: ...

## Technical Constraints

- Node version: ...
- Key dependencies: ...
- Build requirements: ...

## External References

- Library docs: ...
- Similar projects: ...

Phase 2: Synthesis (Oracle)

Feed Discovery Report to Oracle for gap analysis:

oracle(
  task: "Analyze gap between current codebase and feature requirements",
  context: "Discovery report attached. User wants: <feature>",
  files: ["history/<feature>/discovery.md"]
)

Oracle produces:

  1. Gap Analysis - What exists vs what's needed
  2. Approach Options - 1-3 strategies with tradeoffs
  3. Risk Assessment - LOW / MEDIUM / HIGH per component

Risk Classification

| Level | Criteria | Verification |
| --- | --- | --- |
| LOW | Pattern exists in codebase | Proceed |
| MEDIUM | Variation of existing pattern | Interface sketch, type-check |
| HIGH | Novel or external integration | Spike required |

Risk Indicators

Pattern exists in codebase? ─── YES → LOW base
                            └── NO  → MEDIUM+ base

External dependency? ─── YES → HIGH
                     └── NO  → Check blast radius

Blast radius >5 files? ─── YES → HIGH
                       └── NO  → MEDIUM
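The indicator tree above can be sketched as a small classifier. This is an illustrative reading of the rules (the function and argument names are not part of the skill), in which an external dependency or a large blast radius dominates the presence of a known pattern:

```python
def classify_risk(pattern_exists: bool, external_dep: bool, blast_radius: int) -> str:
    """Apply the risk indicators: external deps are HIGH, then blast
    radius >5 files is HIGH, then a known pattern lowers risk to LOW."""
    if external_dep:
        return "HIGH"
    if blast_radius > 5:
        return "HIGH"
    if pattern_exists:
        return "LOW"
    return "MEDIUM"
```

The ordering is one plausible interpretation; the skill leaves the interaction between the three indicators to Oracle's judgment.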

Save to history/<feature>/approach.md:

markdown
# Approach: <Feature Name>

## Gap Analysis

| Component | Have | Need | Gap |
| --------- | ---- | ---- | --- |
| ... | ... | ... | ... |

## Recommended Approach

<Description>

### Alternative Approaches

1. <Option A> - Tradeoff: ...
2. <Option B> - Tradeoff: ...

## Risk Map

| Component | Risk | Reason | Verification |
| ----------- | ---- | ---------------- | ------------ |
| Stripe SDK | HIGH | New external dep | Spike |
| User entity | LOW | Follows existing | Proceed |

Phase 3: Verification (Risk-Based)

For HIGH Risk Items → Create Spike Beads

Spikes are mini-plans executed via MULTI_AGENT_WORKFLOW:

bash
bd create "Spike: <question to answer>" -t epic -p 0
bd create "Spike: Test X" -t task --blocks <spike-epic>
bd create "Spike: Verify Y" -t task --blocks <spike-epic>

Spike Bead Template

markdown
# Spike: <specific question>

**Time-box**: 30 minutes
**Output location**: .spikes/<spike-id>/

## Question

Can we <specific technical question>?

## Success Criteria

- [ ] Working throwaway code exists
- [ ] Answer documented (yes/no + details)
- [ ] Learnings captured for main plan

## On Completion

Close with: `bd close <id> --reason "YES: <approach>" or "NO: <blocker>"`

Execute Spikes

Use the MULTI_AGENT_WORKFLOW:

  1. bv --robot-plan to parallelize spikes
  2. Task() per spike with time-box
  3. Workers write to .spikes/<feature>/<spike-id>/
  4. Close with learnings: bd close <id> --reason "<result>"
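The time-boxed, parallel dispatch described above can be sketched as follows. A minimal illustration only, assuming a `run_spike` callable stands in for `Task()`; the real workflow dispatches sub-agents, not threads:

```python
from concurrent.futures import ThreadPoolExecutor, TimeoutError as FutureTimeout

def run_spikes(spikes, run_spike, time_box_s=30 * 60):
    """Run each spike in parallel and enforce the time-box.
    `spikes` maps spike-id -> question; `run_spike(spike_id, question)`
    is the worker. Returns spike-id -> result, or 'TIMEOUT' if the
    time-box elapses before the worker finishes."""
    results = {}
    with ThreadPoolExecutor(max_workers=max(1, len(spikes))) as pool:
        futures = {sid: pool.submit(run_spike, sid, q) for sid, q in spikes.items()}
        for sid, fut in futures.items():
            try:
                results[sid] = fut.result(timeout=time_box_s)
            except FutureTimeout:
                results[sid] = "TIMEOUT"
    return results
```

A timed-out result maps naturally onto closing the spike bead with a "NO: <blocker>" reason.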

Aggregate Spike Results

oracle(
  task: "Synthesize spike results and update approach",
  context: "Spikes completed. Results: ...",
  files: ["history/<feature>/approach.md"]
)

Update approach.md with validated learnings.

Phase 4: Decomposition (file-beads skill)

Load the file-beads skill and create beads with embedded learnings:

bash
skill("file-beads")

Bead Requirements

Each bead MUST include:

  • Spike learnings embedded in description (if applicable)
  • Reference to .spikes/ code for HIGH risk items
  • Clear acceptance criteria
  • File scope for track assignment
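The requirements above can be spot-checked with a small lint. This is an illustrative check, not part of the file-beads skill; it only looks for a context/learnings section and checklist-style acceptance criteria:

```python
def bead_is_complete(body: str) -> bool:
    """Minimal completeness lint for a bead's markdown body:
    require a Context or Learnings section plus an Acceptance
    Criteria section containing at least one checkbox item."""
    has_criteria = "## Acceptance Criteria" in body and "- [ ]" in body
    has_context = "## Context" in body or "## Learnings" in body
    return has_criteria and has_context
```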

Example Bead with Learnings

markdown
# Implement Stripe webhook handler

## Context

Spike bd-12 validated: Stripe SDK works with our Node version.
See `.spikes/billing-spike/webhook-test/` for working example.

## Learnings from Spike

- Must use `stripe.webhooks.constructEvent()` for signature verification
- Webhook secret stored in `STRIPE_WEBHOOK_SECRET` env var
- Raw body required (not parsed JSON)

## Acceptance Criteria

- [ ] Webhook endpoint at `/api/webhooks/stripe`
- [ ] Signature verification implemented
- [ ] Events: `checkout.session.completed`, `invoice.paid`

Phase 5: Validation

Run bv Analysis

bash
bv --robot-suggest    # Find missing dependencies
bv --robot-insights   # Detect cycles, bottlenecks
bv --robot-priority   # Validate priorities

Fix Issues

bash
bd dep add <from> <to>        # Add missing deps
bd dep remove <from> <to>     # Break cycles
bd update <id> --priority X   # Adjust priorities

Oracle Final Review

oracle(
  task: "Review plan completeness and clarity",
  context: "Plan ready. Check for gaps, unclear beads, missing deps.",
  files: [".beads/"]
)

Phase 6: Track Planning

This phase creates an execution-ready plan so the orchestrator can spawn workers immediately without re-analyzing beads.

Step 1: Get Parallel Tracks

bash
bv --robot-plan 2>/dev/null | jq '.plan.tracks'

Step 2: Assign File Scopes

For each track, determine the file scope based on beads in that track:

bash
# For each bead, check which files it touches
bd show <bead-id>   # Look at description for file hints

Rules:

  • File scopes must NOT overlap between tracks
  • Use glob patterns: packages/sdk/**, apps/server/**
  • If overlap unavoidable, merge into single track
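The no-overlap rule can be checked mechanically for simple scopes. A minimal sketch, assuming scopes are pure directory-prefix globs like `packages/sdk/**` (mid-path wildcards are out of scope here):

```python
def scopes_overlap(scope_a: str, scope_b: str) -> bool:
    """Return True when two `dir/**`-style scopes can match the same
    file: for directory-prefix globs, that means one directory prefix
    equals or contains the other."""
    a = scope_a.removesuffix("/**")
    b = scope_b.removesuffix("/**")
    return a == b or a.startswith(b + "/") or b.startswith(a + "/")
```

Any pair of tracks for which this returns True should be merged into a single track, per the rules above.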

Step 3: Generate Agent Names

Assign unique adjective+noun names to each track:

  • BlueLake, GreenCastle, RedStone, PurpleBear, etc.
  • Names are memorable identifiers, NOT role descriptions
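Name assignment can be sketched as pairing word lists. The lists below are illustrative; any memorable adjective+noun pairing works:

```python
import itertools

ADJECTIVES = ["Blue", "Green", "Red", "Purple"]
NOUNS = ["Lake", "Castle", "Stone", "Bear"]

def agent_names(count: int) -> list:
    """Return `count` unique adjective+noun identifiers, drawn from
    the cartesian product of the two word lists."""
    pairs = (a + n for a, n in itertools.product(ADJECTIVES, NOUNS))
    return list(itertools.islice(pairs, count))
```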

Step 4: Create Execution Plan

Save to history/<feature>/execution-plan.md:

markdown
# Execution Plan: <Feature Name>

Epic: <epic-id>
Generated: <date>

## Tracks

| Track | Agent | Beads (in order) | File Scope |
| ----- | ----------- | --------------------- | ----------------- |
| 1 | BlueLake | bd-10 → bd-11 → bd-12 | `packages/sdk/**` |
| 2 | GreenCastle | bd-20 → bd-21 | `packages/cli/**` |
| 3 | RedStone | bd-30 → bd-31 → bd-32 | `apps/server/**` |

## Track Details

### Track 1: BlueLake - <track-description>

**File scope**: `packages/sdk/**`
**Beads**:

1. `bd-10`: <title> - <brief description>
2. `bd-11`: <title> - <brief description>
3. `bd-12`: <title> - <brief description>

### Track 2: GreenCastle - <track-description>

**File scope**: `packages/cli/**`
**Beads**:

1. `bd-20`: <title> - <brief description>
2. `bd-21`: <title> - <brief description>

### Track 3: RedStone - <track-description>

**File scope**: `apps/server/**`
**Beads**:

1. `bd-30`: <title> - <brief description>
2. `bd-31`: <title> - <brief description>
3. `bd-32`: <title> - <brief description>

## Cross-Track Dependencies

- Track 2 can start after bd-11 (Track 1) completes
- Track 3 has no cross-track dependencies

## Key Learnings (from Spikes)

Embedded in beads, but summarized here for orchestrator reference:

- <learning 1>
- <learning 2>

Validation

Before finalizing, verify:

bash
# No cycles in the graph
bv --robot-insights 2>/dev/null | jq '.Cycles'

# All beads assigned to tracks
bv --robot-plan 2>/dev/null | jq '.plan.unassigned'
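The cycle check that bv performs can be approximated with a depth-first search over bead dependencies. A sketch assuming a hypothetical `deps` mapping of bead-id to the bead-ids it depends on (bv derives this from the real graph):

```python
def find_cycle(deps):
    """Return one dependency cycle as a list of bead-ids (first id
    repeated at the end), or None if the graph is acyclic. `deps`
    maps bead-id -> list of bead-ids it depends on."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {node: WHITE for node in deps}
    path = []

    def visit(node):
        color[node] = GRAY          # node is on the current DFS path
        path.append(node)
        for dep in deps.get(node, []):
            if color.get(dep, WHITE) == GRAY:
                return path[path.index(dep):] + [dep]  # back edge: cycle
            if color.get(dep, WHITE) == WHITE and dep in deps:
                cycle = visit(dep)
                if cycle:
                    return cycle
        color[node] = BLACK
        path.pop()
        return None

    for node in deps:
        if color[node] == WHITE:
            cycle = visit(node)
            if cycle:
                return cycle
    return None
```

A non-None result corresponds to a `bd dep remove` fix in Phase 5.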

Output Artifacts

| Artifact | Location | Purpose |
| --- | --- | --- |
| Discovery Report | history/<feature>/discovery.md | Codebase snapshot |
| Approach Document | history/<feature>/approach.md | Strategy + risks |
| Spike Code | .spikes/<feature>/ | Reference implementations |
| Spike Learnings | Embedded in beads | Context for workers |
| Beads | .beads/*.md | Executable work items |
| Execution Plan | history/<feature>/execution-plan.md | Track assignments for orchestrator |

Quick Reference

Tool Selection

| Need | Tool |
| --- | --- |
| Codebase structure | mcp__gkg__repo_map |
| Find definitions | mcp__gkg__search_codebase_definitions |
| Find usages | mcp__gkg__get_references |
| Semantic search | mcp__morph_mcp__warpgrep_codebase_search |
| External patterns | librarian |
| Library docs | mcp__MCP_DOCKER__resolve-library-id, mcp__MCP_DOCKER__get-library-docs |
| Web research | mcp__MCP_DOCKER__web_search_exa |
| Gap analysis | oracle |
| Create beads | skill("file-beads") + bd create |
| Validate graph | bv --robot-* |

Common Mistakes

  • Skipping discovery → Plan misses existing patterns
  • No risk assessment → Surprises during execution
  • No spikes for HIGH risk → Blocked workers
  • Missing learnings in beads → Workers re-discover same issues
  • No bv validation → Broken dependency graph

Related skills

Looking for an alternative to planning or another community skill for your workflow? Explore these related open-source skills.

See all

openclaw-release-maintainer

openclaw

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

widget-generator

f

Generate customizable widget plugins for the prompts.chat feed system

flags

vercel

The React Framework

138.4k
0
Browser

pr-review

pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

98.6k
0
Developer