receiving-code-review — AI agent skill from obra/superpowers for Claude Code and other IDEs

Verified
v1.0.0

About this Skill

Ideal for code review agents that require rigorous technical verification and validation of feedback. Use when receiving code review feedback, before implementing suggestions, especially if the feedback seems unclear or technically questionable. The skill demands technical rigor and verification, not performative agreement or blind implementation.

# Core Topics

obra
Updated: 3/26/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Landing Page Review Score: 10/11

Killer-Skills keeps this page indexable because it adds recommendation, limitations, and review signals beyond the upstream repository text.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
  • Locale and body language aligned

Review Score: 10/11
Quality Score: 71
Canonical Locale: en
Detected Body Locale: en


Core Value

Enables agents to evaluate code review feedback technically, using verification and evaluation protocols that put technical correctness ahead of social comfort and applying core principles such as 'Verify before implementing' and 'Ask before assuming' so that suggestions are implemented accurately.

Ideal Agent Persona

Ideal for Code Review Agents requiring rigorous technical verification and validation of feedback

Capabilities Granted for receiving-code-review

Evaluating code review feedback for technical soundness
Verifying suggested changes against codebase reality
Responding to code review feedback with technically accurate implementations

Prerequisites & Limits

  • Requires technical expertise in code review and analysis
  • Limited to code review feedback reception and implementation

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Curated Collection Review

Reviewed In Curated Collections

This section shows how Killer-Skills has already collected, reviewed, and maintains this skill inside first-party curated collections. For operators and crawlers alike, this is a stronger signal than the upstream README on its own.

Reviewed Collection

Claude Code Workflow Tools to Install First

Reviewed 2026-04-17

Reviewed on 2026-04-17 for setup clarity, maintainer reliability, review coverage, and operator handoff readiness. We kept the tools that make Claude Code easier to trial and easier to standardize.

People landing here usually already know they want Claude Code. What they need next is a smaller list tied to review, guardrails, and handoff instead of another broad skills roundup.

6 entries · Killer-Skills editorial review with monthly collection checks.
Reviewed Collection

Windsurf Workflow Tools to Install First

Reviewed 2026-04-17

Reviewed on 2026-04-17 for setup clarity, maintainer reliability, review support, and handoff readiness. We kept the tools that make Windsurf easier to trial, explain, and standardize.

People landing here usually already know they want Windsurf. What they need next is a smaller list tied to coding speed, review support, rules sync, and handoff instead of another broad skills roundup.

5 entries · Killer-Skills editorial review with monthly collection checks.
Reviewed Collection

12 Official AI Agent Skills & Trusted Tools to Install First

Reviewed 2026-04-16

Reviewed on 2026-04-16 for first-party ownership, documentation quality, install clarity, and production relevance. This is the safest collection to use as a default starting point.

We prioritize this page because it lets users verify trust first and then move into one clear installation path instead of bouncing across more repo lists.

12 entries · Maintained through Killer-Skills editorial review with trust, install-path, and operator checks.
After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment


Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is receiving-code-review?

receiving-code-review is an AI agent skill for code review agents that require rigorous technical verification and validation of feedback. Use it when receiving code review feedback, before implementing suggestions, especially if the feedback seems unclear or technically questionable. It demands technical rigor and verification, not performative agreement or blind implementation.

How do I install receiving-code-review?

Run the command: npx killer-skills add obra/superpowers/receiving-code-review. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.
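For reference, the exact command, run from your project root:

  # the CLI detects your IDE or AI agent and configures the skill automatically
  npx killer-skills add obra/superpowers/receiving-code-review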

What are the use cases for receiving-code-review?

Key use cases include: Evaluating code review feedback for technical soundness, Verifying suggested changes against codebase reality, Responding to code review feedback with technically accurate implementations.

Which IDEs are compatible with receiving-code-review?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for receiving-code-review?

Requires technical expertise in code review and analysis. Limited to code review feedback reception and implementation.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add obra/superpowers/receiving-code-review. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use receiving-code-review immediately in the current project.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

receiving-code-review

Install receiving-code-review, an AI agent skill for agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md
Supporting Evidence

Code Review Reception

Overview

Code review requires technical evaluation, not emotional performance.

Core principle: Verify before implementing. Ask before assuming. Technical correctness over social comfort.

The Response Pattern

WHEN receiving code review feedback:

1. READ: Complete feedback without reacting
2. UNDERSTAND: Restate requirement in own words (or ask)
3. VERIFY: Check against codebase reality
4. EVALUATE: Technically sound for THIS codebase?
5. RESPOND: Technical acknowledgment or reasoned pushback
6. IMPLEMENT: One item at a time, test each

Forbidden Responses

NEVER:

  • "You're absolutely right!" (explicit CLAUDE.md violation)
  • "Great point!" / "Excellent feedback!" (performative)
  • "Let me implement that now" (before verification)

INSTEAD:

  • Restate the technical requirement
  • Ask clarifying questions
  • Push back with technical reasoning if wrong
  • Just start working (actions > words)

Handling Unclear Feedback

IF any item is unclear:
  STOP - do not implement anything yet
  ASK for clarification on unclear items

WHY: Items may be related. Partial understanding = wrong implementation.

Example:

your human partner: "Fix 1-6"
You understand 1,2,3,6. Unclear on 4,5.

❌ WRONG: Implement 1,2,3,6 now, ask about 4,5 later
✅ RIGHT: "I understand items 1,2,3,6. Need clarification on 4 and 5 before proceeding."

Source-Specific Handling

From your human partner

  • Trusted - implement after understanding
  • Still ask if scope unclear
  • No performative agreement
  • Skip to action or technical acknowledgment

From External Reviewers

BEFORE implementing:
  1. Check: Technically correct for THIS codebase?
  2. Check: Breaks existing functionality?
  3. Check: Reason for current implementation?
  4. Check: Works on all platforms/versions?
  5. Check: Does reviewer understand full context?

IF suggestion seems wrong:
  Push back with technical reasoning

IF can't easily verify:
  Say so: "I can't verify this without [X]. Should I [investigate/ask/proceed]?"

IF conflicts with your human partner's prior decisions:
  Stop and discuss with your human partner first

your human partner's rule: "External feedback - be skeptical, but check carefully"

YAGNI Check for "Professional" Features

IF reviewer suggests "implementing properly":
  grep codebase for actual usage

  IF unused: "This endpoint isn't called. Remove it (YAGNI)?"
  IF used: Then implement properly

your human partner's rule: "You and reviewer both report to me. If we don't need this feature, don't add it."
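
A minimal usage check, using a hypothetical endpoint handler name in place of whatever the reviewer is pointing at:

  # hypothetical symbol name; look for call sites anywhere in the repo
  grep -rn "exportMetricsCsv" src/
  # only the definition (and maybe its test) shows up -> nothing calls it, YAGNI applies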

Implementation Order

FOR multi-item feedback:
  1. Clarify anything unclear FIRST
  2. Then implement in this order:
     - Blocking issues (breaks, security)
     - Simple fixes (typos, imports)
     - Complex fixes (refactoring, logic)
  3. Test each fix individually
  4. Verify no regressions
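
One way to keep fixes isolated, assuming a hypothetical npm project with a configured test script:

  # after applying a single review item (hypothetical project and commit message)
  npm test                                              # verify the fix in isolation
  git commit -am "review: item 3 - guard null payload"  # one commit per item
  # repeat for the next item; a failing suite stops the batch before regressions stack up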

When To Push Back

Push back when:

  • Suggestion breaks existing functionality
  • Reviewer lacks full context
  • Violates YAGNI (unused feature)
  • Technically incorrect for this stack
  • Legacy/compatibility reasons exist
  • Conflicts with your human partner's architectural decisions

How to push back:

  • Use technical reasoning, not defensiveness
  • Ask specific questions
  • Reference working tests/code
  • Involve your human partner if architectural

Signal if uncomfortable pushing back out loud: "Strange things are afoot at the Circle K"

Acknowledging Correct Feedback

When feedback IS correct:

✅ "Fixed. [Brief description of what changed]"
✅ "Good catch - [specific issue]. Fixed in [location]."
✅ [Just fix it and show in the code]

❌ "You're absolutely right!"
❌ "Great point!"
❌ "Thanks for catching that!"
❌ "Thanks for [anything]"
❌ ANY gratitude expression

Why no thanks: Actions speak. Just fix it. The code itself shows you heard the feedback.

If you catch yourself about to write "Thanks": DELETE IT. State the fix instead.

Gracefully Correcting Your Pushback

If you pushed back and were wrong:

✅ "You were right - I checked [X] and it does [Y]. Implementing now."
✅ "Verified this and you're correct. My initial understanding was wrong because [reason]. Fixing."

❌ Long apology
❌ Defending why you pushed back
❌ Over-explaining

State the correction factually and move on.

Common Mistakes

| Mistake | Fix |
| --- | --- |
| Performative agreement | State requirement or just act |
| Blind implementation | Verify against codebase first |
| Batch without testing | One at a time, test each |
| Assuming reviewer is right | Check if breaks things |
| Avoiding pushback | Technical correctness > comfort |
| Partial implementation | Clarify all items first |
| Can't verify, proceed anyway | State limitation, ask for direction |

Real Examples

Performative Agreement (Bad):

Reviewer: "Remove legacy code"
❌ "You're absolutely right! Let me remove that..."

Technical Verification (Good):

Reviewer: "Remove legacy code"
✅ "Checking... build target is 10.15+, this API needs 13+. Need legacy for backward compat. Current impl has wrong bundle ID - fix it or drop pre-13 support?"

YAGNI (Good):

Reviewer: "Implement proper metrics tracking with database, date filters, CSV export"
✅ "Grepped codebase - nothing calls this endpoint. Remove it (YAGNI)? Or is there usage I'm missing?"

Unclear Item (Good):

your human partner: "Fix items 1-6"
You understand 1,2,3,6. Unclear on 4,5.
✅ "Understand 1,2,3,6. Need clarification on 4 and 5 before implementing."

GitHub Thread Replies

When replying to inline review comments on GitHub, reply in the comment thread (gh api repos/{owner}/{repo}/pulls/{pr}/comments/{id}/replies), not as a top-level PR comment.
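
A sketch with placeholder owner, repo, PR number, comment id, and reply text:

  # reply inside the existing review thread, not as a new top-level PR comment
  gh api repos/acme/widgets/pulls/123/comments/456789012/replies \
    -f body="Fixed in a1b2c3d - bundle ID corrected, pre-13 fallback kept."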

The Bottom Line

External feedback = suggestions to evaluate, not orders to follow.

Verify. Question. Then implement.

No performative agreement. Technical rigor always.

Related Skills

Looking for an alternative to receiving-code-review or another official skill for your workflow? Explore these related open-source skills.

View All

flags (facebook)

Use when you need to check feature flag states, compare channels, or debug why a feature behaves differently across release channels.

extract-errors (facebook)

extract-errors is a React error handling skill that automates the process of extracting and assigning error codes, ensuring accurate and up-to-date error messages in React applications.

fix (facebook)

fix is a code optimization skill that automates formatting and linting using yarn prettier and linc.

flow (facebook)

Use when you need to run Flow type checking, or when seeing Flow type errors in React code.