fixme-handle-plan-review — a Claude Code community skill

v1.0.0

About This Skill

Best for: AI agents that need to handle plan review feedback. Summary: part of a Claude Code skill suite for automated task execution - config-driven pipelines with plan/review/execute/review cycles, ticket state, and PR comment resolution.

Features

Plan Review Feedback
Validate review findings against the codebase and classify each using the unified finding taxonomy.
Resolve inputs in this order:
Argument: if file paths are passed as arguments, use them
Conversation context: if findings and plan are in the current conversation, use them

Core Topics

denis-pingin

Updated: 4/23/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 10/11

This page remains useful for teams, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
Review Score: 10/11
Quality Score: 70
Canonical Locale: en
Detected Body Locale: en


Why Use This Skill

Recommended because: fixme-handle-plan-review helps agents handle plan review feedback. It is part of a Claude Code skill suite for automated task execution - config-driven pipelines with plan/review/execute/review cycles, ticket state, and PR comment resolution.

Recommended

Best for: AI agents that need to handle plan review feedback.

Use Cases for fixme-handle-plan-review

Use case: applying plan review feedback
Use case: validating review findings against the codebase and classifying each using the unified finding taxonomy
Use case: resolving findings and plan inputs in priority order

! Security and Limitations

  • Limitation: verify the finding's characterization of what the code does - do not trust it blindly
  • Limitation: the issue's validity is not in question - only the approach to resolving it
  • Limitation: some findings depend on intent, constraints, or decisions not captured in the plan, spec, or codebase

Why this page is reference-only

  • The current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide the next action before you keep reading repository material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.


FAQ & Installation Steps


? Frequently Asked Questions

What is fixme-handle-plan-review?

Best for: AI agents that need to handle plan review feedback. It is part of a Claude Code skill suite for automated task execution - config-driven pipelines with plan/review/execute/review cycles, ticket state, and PR comment resolution.

How do I install fixme-handle-plan-review?

Run the command: npx killer-skills add denis-pingin/fixme/fixme-handle-plan-review. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for fixme-handle-plan-review?

Key use cases include: applying plan review feedback, validating review findings against the codebase and classifying each using the unified finding taxonomy, and resolving findings and plan inputs in priority order.

Which IDEs are compatible with fixme-handle-plan-review?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for fixme-handle-plan-review?

Limitation: verify the finding's characterization of what the code does - do not trust it blindly. Limitation: the issue's validity is not in question - only the approach to resolving it. Limitation: some findings depend on intent, constraints, or decisions not captured in the plan, spec, or codebase.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add denis-pingin/fixme/fixme-handle-plan-review. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use fixme-handle-plan-review immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

fixme-handle-plan-review

Install fixme-handle-plan-review, a skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md
Supporting Evidence

Fixme Directory

All .fixme/ paths in this document are relative to the fixme root directory. When dispatched by fixme-task, the Fixme dir is provided in the <project> block of the dispatch prompt - use it as the base for all .fixme/ paths (e.g., <fixme-dir>/plans/, <fixme-dir>/decisions.md). When running standalone, resolve by running node ~/.claude/skills/fixme-tickets-md/scripts/fixme-tools.cjs root and using the fixme_dir field.

Plan Review Feedback

Validate review findings against the codebase and classify each using the unified finding taxonomy.

Input Resolution

Resolve inputs in this order:

  1. Argument: if file paths are passed as arguments, use them
  2. Conversation context: if findings and plan are in the current conversation, use them
  3. IDE context: if the user has a file open/selected, use it
  4. Ask: prompt the user for the findings and plan locations

Read the plan, the findings, and the spec/context document (if referenced) before proceeding.
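The resolution order above is a first-match fallback chain. A minimal Python sketch of the idea (the source names, lambdas, and file paths are illustrative stand-ins, not a real API):

```python
from typing import Callable, Optional

def resolve_inputs(sources: list[tuple[str, Callable[[], Optional[str]]]]) -> tuple[str, str]:
    """Return (source_name, value) from the first source that yields a value."""
    for name, fetch in sources:
        value = fetch()
        if value is not None:
            return name, value
    raise ValueError("no source produced the findings/plan location")

# Arguments win over conversation context, which wins over IDE context,
# with an explicit user prompt as the last resort.
source, path = resolve_inputs([
    ("argument", lambda: None),               # no file paths were passed
    ("conversation", lambda: "findings.md"),  # findings present in the chat
    ("ide", lambda: "open-file.md"),
    ("ask", lambda: "user-supplied-path"),
])
```

Because the chain stops at the first hit, a lower-priority source is never consulted once a higher one resolves.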

If a decision log exists at .fixme/decisions.md, read it. Also read the plan's Locked Decisions section in its Context. These are settled user choices from prior ASK_USER and FIX_UNCLEAR questions.

Classification

  • FIX - real issue that affects correctness, performance, security, or maintainability. Either a single clear fix approach exists, OR one approach clearly dominates all alternatives on merit (grounded in concrete tradeoffs, not editorial labels like "simpler"). If the reviewer presented multiple options, you MUST independently evaluate each before classifying as FIX - see Multi-Option Discipline.
  • FIX_UNCLEAR - real issue, but the fix approach is ambiguous. Multiple viable strategies exist with genuine tradeoffs. This is the default classification whenever the reviewer offered 2+ options and your own independent evaluation does not produce a clear winner on the dimensions that matter (performance on common vs. rare paths, correctness, maintainability, user-visible impact). The issue's validity is not in question - only the approach to resolving it.
  • ASK_USER - insufficient context to determine whether the finding is even valid. Depends on intent, constraints, or decisions not captured in the plan, spec, or codebase. Requires human input to determine validity (not just approach).
  • REJECT_FALSE_POSITIVE - finding is factually wrong. The plan is correct, the reviewer misunderstood the plan's approach, the codebase state, or the spec constraints.
  • REJECT_WONT_FIX - finding is technically valid but intentionally out of scope, contradicts a locked decision (without revealing new concrete problems), or would be net-negative to address.
  • REJECT_ALREADY_FIXED - the issue described is already addressed by the plan's current state or a prior revision.
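The six labels above form a closed taxonomy. One hedged way to encode it, with descriptions abbreviated from the list (the helper name is hypothetical):

```python
from enum import Enum

class Classification(Enum):
    FIX = "real issue with one clearly winning fix approach"
    FIX_UNCLEAR = "real issue, but the fix approach is ambiguous"
    ASK_USER = "validity itself needs human input"
    REJECT_FALSE_POSITIVE = "finding is factually wrong"
    REJECT_WONT_FIX = "valid but out of scope or net-negative to address"
    REJECT_ALREADY_FIXED = "already addressed by the plan's current state"

def needs_user_question(c: Classification) -> bool:
    """Only ASK_USER and FIX_UNCLEAR carry a Question field in the output."""
    return c in (Classification.ASK_USER, Classification.FIX_UNCLEAR)
```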

Process

For each finding:

  1. Read the actual code referenced by the finding
  2. Verify the finding's characterization of what the code does - do not trust it blindly
  3. Check whether the plan's context/spec explains the approach
  4. Check finding against locked decisions. Distinguish between [confirmed] decisions (user explicitly chose) and [assumed] decisions (user accepted recommendation by default or never explicitly answered):
    • Finding contradicts a [confirmed] decision:
      • If the finding reveals a concrete problem (bug, security issue, data loss): classify ASK_USER. Explain what new evidence suggests the previous decision may need revisiting, and recommend a path forward.
      • If the finding merely disagrees with the approach: classify REJECT_WONT_FIX. The user explicitly made this call.
    • Finding contradicts an [assumed] decision:
      • If the finding reveals a concrete problem: classify ASK_USER. The user never explicitly confirmed this decision, and new evidence suggests it's wrong.
      • If the finding offers a materially better alternative: classify ASK_USER. The user accepted this by default - they deserve to see the better option. Present both the assumed approach and the proposed alternative.
      • If the finding is a minor stylistic disagreement: classify REJECT_WONT_FIX.
    • Finding identifies an [assumed] decision that should have been confirmed (the reviewer flagged it as an Assumption Validity issue): classify ASK_USER. Present the decision and its alternatives to the user for explicit confirmation.
  5. Assess whether the suggested change would actually improve the outcome
  6. Classify and document
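Step 4's confirmed/assumed matrix can be sketched as a routing function. The flag names are hypothetical, chosen only to mirror the prose above:

```python
def route_locked_decision(status: str,
                          concrete_problem: bool,
                          better_alternative: bool = False,
                          flagged_assumption: bool = False) -> str:
    """Classify a finding that touches a locked decision.

    status: "confirmed" (user explicitly chose) or
            "assumed" (user accepted a recommendation by default).
    """
    if status == "confirmed":
        # Only new concrete evidence (bug, security issue, data loss)
        # can reopen an explicit user call; mere disagreement cannot.
        return "ASK_USER" if concrete_problem else "REJECT_WONT_FIX"
    # Assumed decisions reopen on a concrete problem, a materially better
    # alternative, or a reviewer-flagged Assumption Validity issue.
    if concrete_problem or better_alternative or flagged_assumption:
        return "ASK_USER"
    return "REJECT_WONT_FIX"  # minor stylistic disagreement
```

The asymmetry is deliberate: a [confirmed] decision only reopens on new concrete evidence, while an [assumed] one also reopens for a materially better alternative the user never saw.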

Multi-Option Discipline

When a finding's Suggestion presents 2+ plausible fix approaches (including "drop the fix" or "add a comment" as options), apply this discipline before classifying. This section exists because the default failure mode is to anchor on whichever option the reviewer labeled "simpler" and collapse the decision without evaluation.

  1. Independently evaluate every option. For each, assess concrete tradeoffs: correctness, performance on common vs. rare code paths, maintainability, user-visible behavior, security, effort, risk. Read the referenced code yourself. Do not outsource this evaluation to the reviewer - the reviewer's preference is a hypothesis, not the answer.

  2. Strike editorial shortcuts from your reasoning. Words like "simpler", "easier", "cleaner", "lighter touch", "just X" are anchors, not arguments. A "simpler" option that makes every request pay an extra I/O round-trip is not simpler in the dimension that matters. If your justification for picking an option reduces to "the reviewer called it simpler", you have not done the evaluation.

  3. Classify based on the evaluation outcome:

    • One option clearly dominates on the dimensions that matter, with no material downside → FIX. The Approach field records that option and cites WHY it wins on the concrete tradeoff (e.g. "hoist with guard: same performance as inline duplication, and eliminates the overlap duplication"), not on editorial language.
    • Multiple options are viable with genuine tradeoffs, or no option clearly dominates → FIX_UNCLEAR. The Question field presents every option with full Approach/Pros/Cons/Impact/Effort and a researched Recommendation (per the fixme-howto-present-decisions format). Let the user choose. This is the default when your evaluation does not produce a clear winner.
    • Every option is strictly worse than the status quo (including "drop the fix" as an option) → REJECT_WONT_FIX, with per-option disqualifying flaws listed. "Simpler to not do it" is not a disqualifying flaw.
  4. "Drop the fix" or "just add a comment" is not a free answer. These resolutions require either proving the original concern was invalid (→ REJECT_FALSE_POSITIVE with evidence) OR proving every alternative is strictly worse than leaving the code alone (→ REJECT_WONT_FIX with a per-option evaluation). Collapsing a multi-option finding into "drop it" because one option was labeled "simpler" is the exact failure mode this section exists to prevent.

  5. Default to FIX_UNCLEAR when uncertain. If you have evaluated every option and cannot confidently name a winner, that is FIX_UNCLEAR. The handler's job is to protect the user's ability to choose the best option, not to save them the decision by picking the path of least resistance.
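One way to make the dominance test concrete: score each option on the dimensions that matter and emit FIX only when a single viable option beats every other on all of them. A sketch under the simplifying assumption that tradeoffs can be expressed as per-dimension scores (higher is better); real evaluations are qualitative, so treat this as a mental model, not a tool:

```python
def dominates(a: dict[str, int], b: dict[str, int]) -> bool:
    """a is at least as good as b on every dimension, strictly better on one."""
    return all(a[d] >= b[d] for d in a) and any(a[d] > b[d] for d in a)

def classify_multi_option(options: dict[str, dict[str, int]],
                          status_quo: dict[str, int]) -> str:
    # "Drop the fix" is modeled by the status quo: any option it strictly
    # dominates is disqualified before the comparison among options.
    viable = {n: s for n, s in options.items() if not dominates(status_quo, s)}
    if not viable:
        return "REJECT_WONT_FIX"  # every option worse than doing nothing
    for name, scores in viable.items():
        if all(dominates(scores, other)
               for n, other in viable.items() if n != name):
            return "FIX"          # one option wins on concrete tradeoffs
    return "FIX_UNCLEAR"          # genuine tradeoffs remain: the user chooses
```

A single surviving option classifies as FIX by vacuous dominance, matching the "single clear fix approach" case; incomparable survivors fall through to FIX_UNCLEAR, the stated default.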

Output Format

Per Finding

| Field | Description |
| --- | --- |
| Finding | One-line summary of the reviewer's concern |
| Classification | FIX / FIX_UNCLEAR / ASK_USER / REJECT_FALSE_POSITIVE / REJECT_WONT_FIX / REJECT_ALREADY_FIXED |
| Confidence | HIGH / MEDIUM / LOW |
| Why | 1-2 sentences. For FIX: what breaks or degrades. For FIX_UNCLEAR: what breaks AND what makes the fix approach ambiguous (name the competing approaches). For REJECT_*: why it's wrong, irrelevant, or already covered. For ASK_USER: what's unknown and why it matters |
| Question | (ASK_USER and FIX_UNCLEAR only) For ASK_USER: a self-contained briefing on whether this is a real issue. For FIX_UNCLEAR: a self-contained briefing presenting the competing fix approaches. See Question Guidelines below |
| Approach | (FIX only) Concrete steps to resolve - name files, functions, patterns. No hand-waving. For FIX_UNCLEAR: omitted (user chooses approach first) |
| Risk | (FIX only) What could go wrong with the fix itself |
| Blast radius | (FIX only) Which files/tests/behaviors are affected |

Output Ordering

Group related findings that would be addressed by the same fix. Order: FIX (HIGH confidence first), then FIX_UNCLEAR, then ASK_USER, then REJECT_* items.
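The ordering rule can be expressed as a sort key. The dictionary field names below mirror the output table but are assumptions about how findings are stored:

```python
CLASS_RANK = {"FIX": 0, "FIX_UNCLEAR": 1, "ASK_USER": 2}  # REJECT_* -> 3
CONF_RANK = {"HIGH": 0, "MEDIUM": 1, "LOW": 2}

def ordering_key(finding: dict) -> tuple[int, int]:
    cls = finding["classification"]
    group = CLASS_RANK.get(cls, 3)  # all REJECT_* variants sort last
    # Confidence only breaks ties within the FIX group (HIGH first).
    conf = CONF_RANK[finding["confidence"]] if cls == "FIX" else 0
    return (group, conf)

findings = [
    {"classification": "REJECT_WONT_FIX", "confidence": "HIGH"},
    {"classification": "FIX", "confidence": "MEDIUM"},
    {"classification": "ASK_USER", "confidence": "LOW"},
    {"classification": "FIX", "confidence": "HIGH"},
]
findings.sort(key=ordering_key)
```

Grouping related findings that share a fix would happen before this sort, so grouped items travel together.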

Decision Presentation Guidelines (ASK_USER and FIX_UNCLEAR)

The full guidelines are preloaded from the fixme-howto-present-decisions skill. Follow them exactly for all ASK_USER and FIX_UNCLEAR Question fields.

Key requirements (see preloaded skill for complete spec):

  • The Question field must be the FULL structured decision block - ## Decision: heading, **Context**:, **The question**:, **Options**: with all 5 sub-fields (Approach, Pros, Cons, Impact, Effort), and **Recommendation**: with research evidence
  • Never compress the Question field into a flat paragraph or omit sub-fields
  • Every file reference must be a clickable markdown link with absolute path and line numbers
  • Blank line between every section - no dense walls of text
  • Recommendation must show what was investigated and cross-reference the Options section's tradeoffs

Rules

  • Read the actual code before classifying. Don't trust the finding's characterization of what the code does.
  • A finding that's technically correct but would make the code worse is REJECT_WONT_FIX. Explain the tradeoff.
  • If a finding is ambiguous or context is lacking, classify as ASK_USER rather than guessing. If the finding is clearly valid but the fix approach is unclear, classify as FIX_UNCLEAR. A wrong FIX wastes implementation time. A wrong REJECT hides a real issue. ASK_USER or FIX_UNCLEAR costs only a question.
  • If two findings would be resolved by the same change, group them and note it.
  • Locked decisions are presumed correct. A finding that contradicts a locked decision is REJECT_WONT_FIX unless it reveals a concrete problem not visible when the decision was made - in which case ASK_USER with new evidence.
  • Multi-option findings default to FIX_UNCLEAR. Collapsing multiple alternatives into a single "simpler" FIX approach - or into REJECT_WONT_FIX or "add a comment" - requires an independent evaluation that names concrete tradeoffs, not editorial labels. See Multi-Option Discipline.

Routing Directive

End your output with a structured routing block that tells the orchestrator exactly what to do next. This is mandatory.

```
---
HANDLER_RESULT: CLEAN | HAS_FIX | HAS_ASK_USER
FIX_COUNT: <number>
FIX_UNCLEAR_COUNT: <number>
ASK_USER_COUNT: <number>
NEXT_ACTION: PLAN_LOOP_EXIT | PLAN_REVISION | ASK_USER_BATCH
```
  • CLEAN (0 FIX, 0 FIX_UNCLEAR, 0 ASK_USER): orchestrator exits the plan loop and proceeds to fixme-execute-plan
  • HAS_FIX (1+ FIX, 0 FIX_UNCLEAR, 0 ASK_USER): orchestrator dispatches fixme-write-plan in plan revision mode with the FIX items
  • HAS_ASK_USER (1+ FIX_UNCLEAR or ASK_USER): orchestrator batches questions to user before routing FIX items. FIX_UNCLEAR questions ask about approach. ASK_USER questions ask about validity.
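The three routing outcomes above are fully determined by the counts. A minimal sketch of the mapping (the function name is illustrative):

```python
def routing_block(fix: int, fix_unclear: int, ask_user: int) -> str:
    """Derive HANDLER_RESULT and NEXT_ACTION from classification counts."""
    if fix_unclear or ask_user:
        result, action = "HAS_ASK_USER", "ASK_USER_BATCH"  # questions first
    elif fix:
        result, action = "HAS_FIX", "PLAN_REVISION"        # revise the plan
    else:
        result, action = "CLEAN", "PLAN_LOOP_EXIT"         # proceed to execution
    return (f"HANDLER_RESULT: {result}\n"
            f"FIX_COUNT: {fix}\n"
            f"FIX_UNCLEAR_COUNT: {fix_unclear}\n"
            f"ASK_USER_COUNT: {ask_user}\n"
            f"NEXT_ACTION: {action}")
```

Note the precedence: any open question (FIX_UNCLEAR or ASK_USER) routes to the user batch even when FIX items also exist, since those answers may change how the FIX items are handled.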

Related Skills

Looking for an alternative to fixme-handle-plan-review or another community skill for your workflow? Explore these related open-source skills.

View all

openclaw-release-maintainer

openclaw

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

333.8k
0
AI

widget-generator

f

Generates customizable widget plugins for the prompts.chat feed system

149.6k
0
AI

flags

vercel

The React framework

138.4k
0
Browser

pr-review

pytorch

Tensors and dynamic neural networks in Python with strong GPU acceleration

98.6k
0
Developer