implement-plan-with-docs — a skill for Claude Code (from Elijah-J/InfoScraper)

v1.0.0

About this Skill

Recommended scenario: ideal for AI agents that need to implement a plan with docs. Summary: Implement Plan With Docs executes the feature plan at $ARGUMENTS to the letter. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Features

Implement Plan With Docs
Implementation steps — numbered headings under ## Implementation Steps (e.g., ### 1. Step Name)
Commit boundaries — steps whose name contains "Commit" or verification blocks labeled "After commit N"
Verification commands — code blocks under ### Verification commands or After commit N sections
Code patterns — the ## Code Patterns from Docs section, containing verbatim or adapted snippets to follow

# Core Topics

Elijah-J
Updated: 4/15/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 8/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

Original recommendation layer
Concrete use-case guidance
Explicit limitations and caution
Review Score
8/11
Quality Score
49
Canonical Locale
en
Detected Body Locale
en


Why use this skill

Recommendation: implement-plan-with-docs helps agents implement a plan with docs. It executes the feature plan at $ARGUMENTS to the letter and supports Claude Code, Cursor, and Windsurf workflows.

Best for

Recommended scenario: ideal for AI agents that need to implement a plan with docs.

Practical Use Cases for implement-plan-with-docs

Use case: Applying Implement Plan With Docs
Use case: Applying implementation steps — numbered headings under ## Implementation Steps (e.g., ### 1. Step Name)
Use case: Applying commit boundaries — steps whose name contains "Commit" or verification blocks labeled "After commit N"

Security and Limitations

  • Limitation: Requires repository-specific context from the skill documentation
  • Limitation: Works best when the underlying tools and dependencies are already configured

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.
  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is implement-plan-with-docs?

Recommended scenario: ideal for AI agents that need to implement a plan with docs. Summary: Implement Plan With Docs executes the feature plan at $ARGUMENTS to the letter. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

How do I install implement-plan-with-docs?

Run the command: npx killer-skills add Elijah-J/InfoScraper/implement-plan-with-docs. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for implement-plan-with-docs?

Key use cases include: applying Implement Plan With Docs, applying implementation steps (numbered headings under ## Implementation Steps, e.g., ### 1. Step Name), and applying commit boundaries (steps whose name contains "Commit" or verification blocks labeled "After commit N").

Which IDEs are compatible with implement-plan-with-docs?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for implement-plan-with-docs?

Limitation: Requires repository-specific context from the skill documentation. Limitation: Works best when the underlying tools and dependencies are already configured.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add Elijah-J/InfoScraper/implement-plan-with-docs. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use implement-plan-with-docs immediately in the current project.

Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material


Upstream Source

implement-plan-with-docs

Implement Plan With Docs: execute the feature plan at $ARGUMENTS to the letter. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

SKILL.md
Supporting Evidence

Implement Plan With Docs

Execute the feature plan at $ARGUMENTS to the letter. The plan is the spec — implement what it says, not what you think is better. The local documentation corpus (output/docs/, output/repos/) is available as a safety net to resolve ambiguities or catch plan errors, but proactive corpus searching is not the primary activity.

Process

  1. Parse the plan — read $ARGUMENTS as a file path. If the path does not point to an existing file, try prepending feature_plans/. Read the plan and extract:

    • Implementation steps — numbered headings under ## Implementation Steps (e.g., ### 1. Step Name)
    • Commit boundaries — steps whose name contains "Commit" or verification blocks labeled "After commit N"
    • Verification commands — code blocks under ### Verification commands or **After commit N** sections
    • Prescriptive gotchas — entries in ## Gotchas & Warnings tagged (prescriptive → Step N). Each maps to a specific implementation step.
    • Code patterns — the ## Code Patterns from Docs section, containing verbatim or adapted snippets to follow
    • Gaps — anything flagged **NOT IN LOCAL DOCS**
    • E2E test protocol — commands under ### Before/after live test, ### Manual smoke test, or similar ## Testing Strategy subsections. Extract the concrete commands and their expected outcomes (metrics to check, output to verify).
    • Pre-commit baselines — any test protocol items that require measurements on the unmodified codebase (e.g., "run before the commit", "before/after delta", "baseline comparison"). These cannot be captured after implementation begins.

    Create tasks to track each implementation step, each commit boundary, and the e2e test protocol (if present). Summarize the plan structure to the user: N steps, M commits, any gaps.

  2. Pre-implementation scan — before writing code, read every file the plan says to create or modify. Verify they exist and match the plan's assumptions (function signatures, line numbers, import structures). Check that modules the plan imports from actually exist. If reality has drifted from the plan (e.g., a function was renamed, a line number shifted), note the drift and determine how to adapt while preserving intent. This is the earliest opportunity to catch stale plans — flag any drift to the user before proceeding.

    Run pre-commit baselines. If step 1 extracted pre-commit baselines, run them now on the unmodified codebase. Record the commands and their output — these measurements are destroyed once implementation begins and cannot be recaptured. Report the baseline results to the user before proceeding to step 3.

  3. Implement each step — walk through the plan's implementation steps in order. For each step:

    • Read the step's full description, code patterns, notes, and source references from the plan
    • Identify target files; Read any you have not already read
    • Implement using Write (new files) or Edit (modifications)
    • Follow the plan's code snippets as closely as possible — ruff-compliant formatting differences are acceptable, but structural changes are not
    • When a step references a code pattern from the ## Code Patterns from Docs section, use that pattern
    • When a step says "same pattern as X" or "follow the existing convention" without giving explicit code, search the codebase or corpus to find the referenced pattern (this is ambiguity resolution, not deviation)
    • After multi-file steps, run ruff check on the modified files
    • When implementing a step that has a corresponding prescriptive gotcha, verify the gotcha is addressed in your implementation
    • Mark each task as completed as you finish each step
  4. Consult the corpus when needed — search the local documentation corpus in exactly two situations:

    Ambiguity resolution — the plan references a pattern, convention, or API without giving the exact code. Search output/docs/ or output/repos/ to find it. Use Grep or Read for simple lookups. Use python -m shared.index search "<terms>" for broader searches. This is completing the plan, not challenging it.

    Contradiction detection — during implementation, you encounter a concrete mismatch between the plan and observable reality. Examples: wrong function signature, missing module, incorrect import path, API that doesn't accept the arguments the plan specifies. When this happens:

    1. Verify the ground truth — Read the actual file, search the docs
    2. Classify the mismatch:
      • (a) Minor adaptation — typo, import path, signature drift, off-by-one line number. The plan's intent is clear; only the surface detail is wrong.
      • (b) Architectural conflict — the plan's approach is fundamentally incompatible with the actual codebase structure. A design decision needs to change.
    3. For (a): fix while preserving intent. Note the adaptation in the completion report.
    4. For (b): launch a doc-researcher agent to gather comprehensive corpus evidence. Include the specific contradiction in the agent prompt. Based on the evidence, either:
      • Proceed with a justified deviation — but you must cite the corpus file(s) that justify it and note the deviation prominently in the completion report
      • Stop and ask the user — if the evidence is ambiguous or the deviation would change the plan's architecture

    The bar for deviation is high. The plan was built from corpus evidence and reviewed. "I think X is better" is not evidence. A deviation requires: (1) a concrete mismatch with observable reality, and (2) a cited corpus file or code observation that proves the plan wrong.

    Do not proactively search the corpus before each step. The plan is the spec. The corpus is the safety net.

  5. Verify and commit at plan boundaries — when you reach a commit boundary defined in the plan:

    1. Run pip install -e ".[dev]" if pyproject.toml was modified since the last install
    2. Run every verification command from the plan's verification block for that commit, in order
    3. If any command fails: diagnose the failure, fix the specific issue, re-run verification. Maximum 3 fix-verify cycles per failure — if still failing after 3 attempts, stop and report the failure to the user
    4. Stage all files created or modified since the last commit
    5. Commit with a descriptive message following the repo's conventions: imperative mood first line, detail in body, Co-Authored-By trailer
    6. Push to remote
  6. Final verification — after all commits, run the full test suite one last time:

    ```bash
    pytest -v
    ruff check .
    ```

    Then run the plan's e2e test protocol if one was extracted in Step 1. This means executing every command from the manual smoke test or live test section, verifying the expected outcomes (pages discovered, output files created, content correctly converted, sync-mode skips, etc.). --help checks verify wiring; e2e tests verify the feature actually works. Both are required when the plan specifies them.

  7. Completion report — output directly to the conversation:

    ## Implementation Report: <plan title>
    
    ### Steps Completed
    - Step 1: <name> — done
    - Step 2: <name> — done
    ...
    
    ### Commits
    - `<hash>` <first line of commit message>
    - `<hash>` <first line of commit message>
    
    ### Verification Results
    - Commit 1: pytest N passed, ruff clean, [specific checks] OK
    - Commit 2: pytest N passed, ruff clean, [specific checks] OK
    
    ### Deviations
    None. Plan implemented as specified.
    — OR —
    - **Step N: <description>** — <what changed and why>
      Evidence: <corpus citation or code observation>
    
    ### Pre-Commit Baselines
    None required by plan.
    — OR —
    - **<protocol item>**: <recorded output/metrics from unmodified codebase>
    - **Post-commit comparison**: <delta vs. baseline>
    
    ### Gaps Encountered
    <If plan flagged NOT IN LOCAL DOCS gaps, note whether they were hit>
    

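The extraction rules in step 1 can be sketched as a small parser. This is a minimal illustration, not part of the skill itself: the function name `parse_plan` and its return shape are assumptions, and only three of the extracted categories are shown.

```python
import re
from pathlib import Path

def parse_plan(path: str) -> dict:
    """Sketch of step 1: extract steps, commit boundaries, and
    verification blocks from a plan file.

    Mirrors the lookup rule above: if the path does not point to an
    existing file, retry with feature_plans/ prepended.
    """
    plan = Path(path)
    if not plan.exists():
        plan = Path("feature_plans") / path
    text = plan.read_text(encoding="utf-8")

    # Numbered step headings, e.g. "### 1. Step Name"
    steps = re.findall(r"^### (\d+)\.\s+(.+)$", text, flags=re.M)

    # Commit boundaries: steps whose name contains "Commit"
    commits = [(num, name) for num, name in steps if "Commit" in name]

    # Verification commands: fenced code blocks directly under
    # "### Verification commands"
    verification = re.findall(
        r"^### Verification commands\s*\n`{3}\w*\n(.*?)`{3}",
        text,
        flags=re.M | re.S,
    )
    return {"steps": steps, "commits": commits, "verification": verification}
```

On a plan with two steps where the second is named "Commit parser", this sketch would report one commit boundary and one verification block.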
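The pre-commit baseline capture described in step 2 can be sketched like this. The `capture_baselines` function and the `baselines.json` output path are illustrative assumptions; the skill itself only requires that baseline commands and their output be recorded before implementation begins.

```python
import datetime
import json
import subprocess

def capture_baselines(commands: list[str],
                      out_path: str = "baselines.json") -> list[dict]:
    """Run each baseline command on the unmodified codebase and
    record its output. These measurements are destroyed once
    implementation begins, so persist them immediately.
    """
    records = []
    for cmd in commands:
        result = subprocess.run(
            cmd, shell=True, capture_output=True, text=True
        )
        records.append({
            "command": cmd,
            "returncode": result.returncode,
            "stdout": result.stdout,
            "stderr": result.stderr,
            "captured_at": datetime.datetime.now().isoformat(),
        })
    with open(out_path, "w", encoding="utf-8") as fh:
        json.dump(records, fh, indent=2)
    return records
```

The JSON file doubles as the evidence to report back to the user before step 3, and as the "before" half of any before/after delta the plan's test protocol asks for.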
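The commit-boundary protocol in step 5 amounts to a bounded retry loop. A minimal sketch, assuming the diagnose-and-fix work between attempts is driven externally (by the agent) and that `git` is available; `MAX_FIX_CYCLES` encodes the three-attempt budget:

```python
import subprocess

MAX_FIX_CYCLES = 3  # the plan's fix-verify budget per failure

def run_verification(commands: list[str]) -> bool:
    """Run the plan's verification commands in order;
    stop at the first failure."""
    for cmd in commands:
        if subprocess.run(cmd, shell=True).returncode != 0:
            print(f"verification failed: {cmd}")
            return False
    return True

def verify_and_commit(commands: list[str], message: str) -> bool:
    """Retry verification up to MAX_FIX_CYCLES times, then stage,
    commit, and push. Returns False when the budget is exhausted,
    at which point the skill says to stop and report to the user."""
    for attempt in range(1, MAX_FIX_CYCLES + 1):
        if run_verification(commands):
            subprocess.run(["git", "add", "-A"], check=True)
            subprocess.run(["git", "commit", "-m", message], check=True)
            subprocess.run(["git", "push"], check=True)
            return True
        print(f"attempt {attempt}/{MAX_FIX_CYCLES} failed")
    return False  # stop and report to the user
```

Note that staging, committing, and pushing happen only after a fully green verification pass, which is exactly the ordering rules 2 and 3 under Critical Rules demand.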
Critical Rules

  1. The plan is the spec. Implement what it says. Do not redesign, optimize, or "improve" unless demonstrably wrong with corpus evidence.
  2. Commit at the plan's boundaries, not yours. The plan groups changes into independently revertable units. Do not commit early or merge commit boundaries.
  3. Run verification before every commit. Never commit without passing the plan's verification commands.
  4. Fix forward, not sideways. When verification fails, fix the specific failure. Do not refactor adjacent code to "prevent" the problem.
  5. Deviations require evidence. A cited corpus file or concrete code observation. Not opinion.
  6. Stop on architectural conflicts. If the plan's approach is fundamentally incompatible with the codebase, stop and report to the user. Do not silently redesign.
  7. Never skip a step. If a step seems redundant or already done, verify explicitly (Read the file, check the code). Note "already present" in the report if confirmed.
  8. Track prescriptive gotchas. Each prescriptive gotcha maps to a specific implementation step. When implementing that step, verify the gotcha is addressed. A missed gotcha is a bug.

Related Skills

Looking for an alternative to implement-plan-with-docs or another community skill for your workflow? Explore these related open-source skills.

See all

openclaw-release-maintainer

openclaw

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

widget-generator

f

Generate customizable widget plugins for the prompts.chat feed system

flags

vercel

The React Framework

138.4k
0
Browser

pr-review

pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

98.6k
0
Developer