Killer-Skills

sd-review — comprehensive code analysis: parallel defect review and refactoring analysis

v1.0.0
GitHub

About this Skill

sd-review is a code analysis skill that combines defect review and refactoring analysis for comprehensive code evaluation. It is well suited to code review agents that need both defect detection and structural analysis.

Features

Combines defect review and refactoring analysis for comprehensive code evaluation
Dispatches up to 5 reviewer agents in parallel for efficient code verification
Verifies findings against actual code for accurate results
Compiles a unified report for streamlined code review
Performs analysis only, with no code modifications


kslhunter
Updated: 3/6/2026

Quality Score

60 (Excellent, Top 5%), based on code quality & docs
Installation
Universal install (auto-detects Cursor, Windsurf, and VS Code):

> npx killer-skills add kslhunter/simplysm/sd-review

Agent Capability Analysis

The sd-review MCP Server by kslhunter is an open-source community integration for Claude and other AI agents, enabling automated code review and capability expansion.

Ideal Agent Persona

Perfect for Code Review Agents needing comprehensive defect and refactoring analysis capabilities.

Core Value

Empowers agents to run up to 5 reviewer agents in parallel, analyzing correctness, safety, API conventions, and structure, then compiling defect review and refactoring analysis into a unified report, all without modifying the code.

Capabilities Granted by the sd-review MCP Server

Automating code defect reviews
Generating refactoring suggestions
Verifying API conventions
Analyzing code structure for simplification

Prerequisites & Limits

  • Analysis only, no code modifications
  • Requires multiple reviewer agents for parallel verification
Project

  • SKILL.md (8.1 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

# Tags

[No tags]
SKILL.md

sd-review

Overview

Comprehensive code analysis combining defect review (correctness, safety, API, conventions) and refactoring analysis (structure, simplification). Dispatches up to 5 reviewer agents in parallel, verifies findings against actual code, and compiles a unified report.

Analysis only — no code modifications.

Principles

  • Breaking changes are irrelevant: Reviewers must NOT dismiss, soften, or deprioritize findings because the suggested fix would cause a breaking change. Correctness, safety, usability, architecture, and maintainability always take priority over API stability. If something is wrong, report it — regardless of breaking change impact.

Usage

  • /sd-review packages/solid — full review (all perspectives)
  • /sd-review packages/solid focus on bugs — selective review based on request
  • /sd-review packages/solid focus on refactoring — structural analysis only
  • /sd-review — if no argument, ask the user for the target path

Target Selection

  • With argument: review source code at the given path
  • Without argument: ask the user for the target path

Important: Review ALL source files under the target path. Do not use git status or git diff to limit scope.
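Under an assumption about which file extensions count as source, the all-files rule can be sketched as plain filesystem enumeration, with no git scoping:

```python
from pathlib import Path

# Hypothetical extension filter -- adjust to the project's actual languages.
SOURCE_EXTS = {".ts", ".tsx", ".js", ".py"}

def list_source_files(target: str) -> list[str]:
    """Enumerate every source file under target; never scope by git diff/status."""
    root = Path(target)
    return sorted(
        str(p) for p in root.rglob("*")
        if p.is_file() and p.suffix in SOURCE_EXTS
    )
```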

Reviewer Perspectives

| Reviewer | Prompt Template | Perspective |
|---|---|---|
| Code Reviewer | code-reviewer-prompt.md | Correctness & Safety — bugs, security, logic errors, architectural defects (circular deps, boundary violations) |
| API Reviewer | api-reviewer-prompt.md | Usability & DX — naming, types, consistency |
| Convention Checker | convention-checker-prompt.md | Project rules — Grep-based systematic check against convention files (prohibited patterns, naming rules, export rules) |
| Code Simplifier | code-simplifier-prompt.md | Simplification — complexity, duplication, readability |
| Structure Analyzer | structure-analyzer-prompt.md | Organization — responsibility separation, abstraction levels, module structure |

Reviewer Selection

By default, run all 5 reviewers. If the user specifies a focus in natural language, select only the relevant reviewer(s):

| User says | Run |
|---|---|
| "bugs", "security", "safety", "architecture", "dependencies", "boundaries" | Code Reviewer only |
| "API", "naming", "types", "DX" | API Reviewer only |
| "conventions", "rules", "patterns" | Convention Checker only |
| "simplify", "complexity", "duplication", "readability" | Code Simplifier only |
| "structure", "responsibility", "module", "organization", "abstraction" | Structure Analyzer only |
| "defects", "correctness" | Code + API + Convention |
| "refactoring", "refactor", "maintainability" | Simplifier + Structure |
| (no specific focus) | All 5 |

Use judgment for ambiguous requests.
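The routing table above can be sketched as a keyword lookup; the reviewer labels and the naive substring matching are illustrative assumptions, not part of the skill:

```python
ALL_REVIEWERS = ["code", "api", "convention", "simplifier", "structure"]

# Keyword sets taken from the selection table; reviewer names are labels only.
KEYWORD_MAP = {
    "bugs": ["code"], "security": ["code"], "safety": ["code"],
    "architecture": ["code"], "dependencies": ["code"], "boundaries": ["code"],
    "api": ["api"], "naming": ["api"], "types": ["api"], "dx": ["api"],
    "conventions": ["convention"], "rules": ["convention"], "patterns": ["convention"],
    "simplify": ["simplifier"], "complexity": ["simplifier"],
    "duplication": ["simplifier"], "readability": ["simplifier"],
    "structure": ["structure"], "responsibility": ["structure"],
    "module": ["structure"], "organization": ["structure"], "abstraction": ["structure"],
    "defects": ["code", "api", "convention"], "correctness": ["code", "api", "convention"],
    "refactoring": ["simplifier", "structure"], "refactor": ["simplifier", "structure"],
    "maintainability": ["simplifier", "structure"],
}

def select_reviewers(request: str) -> list[str]:
    """Return reviewers whose keywords appear in the request; default to all 5.

    Substring matching is deliberately naive; a real agent would use judgment
    for ambiguous requests, as the skill instructs.
    """
    hits: list[str] = []
    for word, reviewers in KEYWORD_MAP.items():
        if word in request.lower():
            hits.extend(r for r in reviewers if r not in hits)
    return hits or ALL_REVIEWERS
```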

Workflow

Step 1: Dispatch Reviewers

Read the prompt template files from this skill's directory. Replace [TARGET_PATH] with the actual target path. Then dispatch using Agent(general-purpose):

Agent(subagent_type=general-purpose, prompt=<filled template>)

Run selected reviewers in parallel (multiple Agent calls in a single message).
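The template-filling step can be sketched as follows; the template filenames and the [TARGET_PATH] placeholder come from this skill, while the helper itself is a hypothetical stand-in for what the orchestrating agent does before each Agent call:

```python
from pathlib import Path

# Template filenames from the Reviewer Perspectives table.
PROMPT_TEMPLATES = {
    "code": "code-reviewer-prompt.md",
    "api": "api-reviewer-prompt.md",
    "convention": "convention-checker-prompt.md",
    "simplifier": "code-simplifier-prompt.md",
    "structure": "structure-analyzer-prompt.md",
}

def fill_template(skill_dir: str, reviewer: str, target_path: str) -> str:
    """Read the reviewer's template and substitute the [TARGET_PATH] placeholder."""
    template = Path(skill_dir, PROMPT_TEMPLATES[reviewer]).read_text()
    return template.replace("[TARGET_PATH]", target_path)
```

Each filled template then becomes the prompt of one Agent(general-purpose) call, with all selected reviewers dispatched in a single message so they run in parallel.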

Step 2: Verify Findings

After collecting results from all reviewers, Read the actual code for each finding and verify.

For defect findings (Code, API, Convention reviewers):

  • Valid: the issue is real AND within scope → include in the report
  • Invalid — self-contradicted: the reviewer's own analysis shows the issue is mitigated (e.g., "exploitability is limited because..."). Drop it.
  • Invalid — type-only: reports a type definition as a runtime issue without showing actual runtime code that triggers it. Drop it.
  • Invalid — out of scope: the issue is about code outside the target path (e.g., how other packages use this code). Drop it.
  • Invalid — duplicate: another reviewer already reported the same issue. Keep only the one from the correct domain.
  • Invalid — bikeshedding: minor style preference on stable, well-commented code (magic numbers with clear comments, small interface field duplication, naming when used consistently). Drop it.
  • Invalid — severity inflated: downgrade or drop findings where the stated severity doesn't match the actual impact.
  • Invalid — already handled: the issue is already handled elsewhere in the codebase (provide evidence). Drop it.
  • Invalid — intentional pattern: a by-design architectural decision. Drop it.
  • Invalid — misread: the reviewer misinterpreted the code. Drop it.

For refactoring findings (Simplifier, Structure reviewers):

Check 1 — Scope:

  • Is this about code structure? Not bugs, conventions, documentation, or performance → if not, drop (out of scope)
  • Is the issue within the target path? → if not, drop (out of target)
  • Already reported by another reviewer? → keep the better-scoped one (duplicate)
  • Minor style preference with no real structural impact? → drop (bikeshedding)

Check 2 — Duplication reality (for duplication findings):

  • Count actual duplicated lines. If < 30 lines total, drop — not worth extracting.
  • Compare side by side. If the "duplicates" have meaningful behavioral differences (different guards, parameters, error handling), drop — not true duplication.
  • Check if "similar types" are an intentional Input/Normalized pattern (optional props → required internal def with defaults applied, a children → cell rename). If yes, drop — by design.

Check 3 — Separation benefit (for "too big", "mixed responsibilities", "mixed abstraction" findings):

  • Is the piece proposed for extraction < ~150 lines AND directly depends on the rest of the file (renders, calls, or shares state)? If yes, drop — splitting adds overhead without benefit.
  • Do all the abstractions serve a single cohesive domain concept (all functions called from one entry point, all types used together)? If yes, drop — it's cohesion, not mixing.
  • Would a realistic consumer reuse the extracted piece independently? If no, drop.

Check 4 — Not by design: Is this an established pattern used consistently across the codebase? (Provider+Component, Factory+Product, Input/Output types) If yes, drop.
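Check 2's line-count gate is simple arithmetic; a sketch, assuming the duplicated snippets have already been collected as strings:

```python
def worth_extracting(snippets: list[str], min_total_lines: int = 30) -> bool:
    """Apply the duplication-reality threshold: fewer than 30 total duplicated
    lines is not worth extracting, so the finding is dropped."""
    total = sum(len(s.strip().splitlines()) for s in snippets)
    return total >= min_total_lines
```

The side-by-side behavioral comparison and the Input/Normalized pattern check remain judgment calls that this gate does not capture.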

Step 3: Invalid Findings Report

Present the filtered-out (invalid) findings to the user:

## Review: <target-path>

### Invalid Findings
[findings filtered out — grouped by rejection reason]

If there are no valid findings, report that the review found no actionable issues and end.

Step 4: User Confirmation

Present each verified finding to the user one at a time, ordered by severity (CRITICAL → WARNING → INFO → HIGH → MEDIUM → LOW).

For each finding, explain:

  1. What the problem is — the current code behavior and why it's an issue
  2. How it could be fixed — possible solution approaches (if multiple exist, list them briefly)
  3. Ask: address this or skip?

Collect only findings the user confirms. If the user skips all findings, report that and end.
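The severity ordering above can be sketched as a rank-based sort; treating findings as plain dicts with a severity field is an assumption:

```python
# Order from Step 4. Two label scales appear side by side, presumably because
# defect and refactoring reviewers may report with different vocabularies.
SEVERITY_ORDER = ["CRITICAL", "WARNING", "INFO", "HIGH", "MEDIUM", "LOW"]

def by_severity(findings: list[dict]) -> list[dict]:
    """Sort findings for one-at-a-time presentation, most severe first.

    Unknown labels sort last rather than raising.
    """
    rank = {s: i for i, s in enumerate(SEVERITY_ORDER)}
    return sorted(findings, key=lambda f: rank.get(f["severity"], len(SEVERITY_ORDER)))
```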

Step 5: Brainstorm Handoff

Pass only the user-confirmed findings to sd-brainstorm.

Each finding includes: source reviewer, file:line, evidence, issue, and suggestion.

sd-brainstorm will handle prioritization, grouping, approach exploration, and design.
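The handoff payload can be sketched as a projection onto those five fields; the exact field names are assumptions about the shape sd-brainstorm expects:

```python
def to_handoff(confirmed: list[dict]) -> list[dict]:
    """Keep only the fields the handoff needs, in a stable order.

    Field names are hypothetical: source reviewer, file:line location,
    evidence, issue, and suggestion, per the list above.
    """
    keys = ("reviewer", "location", "evidence", "issue", "suggestion")
    return [{k: f.get(k, "") for k in keys} for f in confirmed]
```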

Common Mistakes

| Mistake | Fix |
|---|---|
| Using git diff to limit review scope | Review ALL source files under target path |
| Skipping verification step | Always verify reviewer findings against actual code |
| Reporting unverified issues | Only include verified findings in final report |
| Running all reviewers for focused requests | Match reviewer selection to user's request |
| Reporting bugs as refactoring | Ask: "Is the behavior wrong?" If yes → defect, not refactoring |
| Reporting style as refactoring | Ask: "Is this structural?" If no → lint, not refactoring |
| Presenting valid findings as final report | Valid findings must be confirmed by user, then handed off to sd-brainstorm |
| Dumping all findings at once for user confirmation | Present findings one at a time with problem explanation and solution approaches |

Completion Criteria

Report invalid findings, then hand off all valid findings to sd-brainstorm. No code modifications during review.

Related Skills

Looking for an alternative to sd-review, or building a community AI agent? Explore these related open-source MCP servers.


widget-generator

f · 149.6k · Design

widget-generator is an open-source AI agent skill for creating widget plugins that are injected into prompt feeds on prompts.chat. It supports two rendering modes: standard prompt widgets using default PromptCard styling and custom render widgets built as full React components.

chat-sdk

lobehub · 73.0k · Communication

chat-sdk is a unified TypeScript SDK for building chat bots across multiple platforms, providing a single interface for deploying bot logic.

zustand

lobehub · 72.8k · Communication

The ultimate space for work and life — to find, build, and collaborate with agent teammates that grow with you. We are taking agent harness to the next level — enabling multi-agent collaboration, effortless agent team design, and introducing agents as the unit of work interaction.

data-fetching

lobehub · 72.8k · Communication