KS
Killer-Skills

reviewing-code — a code-review skill that focuses on substantive issues (bugs, performance, complexity), supports external review with Codex and Gemini, and is distributed via yadm-managed dotfiles

v1.0.0
GitHub

About this Skill

reviewing-code is a skill, distributed in a yadm-managed dotfiles repository, that focuses code review on substantive issues such as bugs, performance, and complexity, with optional external review by Codex or Gemini.

Features

Utilizes yadm for dotfiles management
Prioritizes substantive code review issues like bugs and performance
Supports external review with Codex and Gemini
Checks for Codex availability with bash commands
Runs local branch review with the codex --config model_reasoning_effort="high" review --base BASE_BRANCH command

Author: tdhopper
Updated: 2/26/2026

Quality Score: 57 (Excellent, Top 5%), based on code quality & docs
Installation
Universal install (auto-detect); works with Cursor, Windsurf, and VS Code:
> npx killer-skills add tdhopper/dotfiles2.0/references/network-inventory.md

Agent Capability Analysis

The reviewing-code MCP server by tdhopper is an open-source community integration for Claude and other AI agents, enabling task automation and capability expansion.

Ideal Agent Persona

Perfect for Code Review Agents needing advanced bug detection and performance optimization capabilities.

Core Value

Empowers agents to streamline code review, focusing on substantive issues like bugs, performance, and complexity. It can delegate external review to tools like Codex and Gemini, skips linting concerns entirely, and ships in a yadm-managed dotfiles repository.

Capabilities Granted for reviewing-code MCP Server

Automating code reviews for bug detection
Optimizing code performance with external reviewers
Distributing review configuration through yadm-managed dotfiles

Prerequisites & Limits

  • Requires yadm for dotfiles management
  • Codex or Gemini availability for external review
  • Limited to substantive issues, excluding linting concerns
Project files

SKILL.md (2.2 KB)
.cursorrules (1.2 KB)
package.json (240 B)

SKILL.md

Code Review

Focus on substantive issues: bugs, missing tests, complexity, performance, duplication, incomplete implementations. Skip linting concerns (formatting, imports, naming style).

External Review (Optional)

Check for external reviewers and use if available. Priority: Codex > Gemini

```bash
command -v codex >/dev/null 2>&1 && echo "Codex available"
command -v gemini >/dev/null 2>&1 && echo "Gemini available"
```

If Codex available:

  • Local branch: codex --config model_reasoning_effort="high" review --base BASE_BRANCH
  • Remote PR: gh pr diff NUMBER | codex review --config model_reasoning_effort="high" -

If only Gemini: Pipe diff to gemini with review prompt.
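The reviewer-priority logic above (Codex > Gemini, else local-only review) can be sketched as a small shell helper. The function names, the default base branch, and the gemini prompt flag are assumptions for illustration, not part of the skill itself:

```shell
#!/usr/bin/env sh
# Sketch of the reviewer-priority dispatch (Codex > Gemini > none).
# pick_reviewer and run_external_review are illustrative names.

pick_reviewer() {
  if command -v codex >/dev/null 2>&1; then
    echo "codex"
  elif command -v gemini >/dev/null 2>&1; then
    echo "gemini"
  else
    echo "none"
  fi
}

run_external_review() {
  base="${1:-origin/master}"   # base branch to diff against (assumed default)
  case "$(pick_reviewer)" in
    codex)
      codex --config model_reasoning_effort="high" review --base "$base"
      ;;
    gemini)
      # Pipe the diff to gemini with a review prompt.
      git diff "$base...HEAD" | gemini -p "Review this diff for bugs, performance, and complexity."
      ;;
    none)
      echo "No external reviewer available; proceeding with local review only." >&2
      ;;
  esac
}
```

For a remote PR with Codex, substitute `gh pr diff NUMBER | codex review --config model_reasoning_effort="high" -` for the local-branch invocation, as shown above.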

Workflow

  1. Get the diff

    • PR: gh pr view NUMBER --json title,body,files then gh pr diff NUMBER
    • Branch: git diff origin/master...HEAD
    • Uncommitted: git diff
  2. Gather context - Read PR description, commit messages, project CLAUDE.md

  3. Review each file for:

    • Completeness: All code paths handled? Stubs left behind?
    • Tests: Added? Meaningful? Edge cases covered?
    • Complexity: Justified abstractions? Simpler alternatives?
    • Performance: Hot path regressions? Unbatched I/O?
    • Duplication: Similar code already exists? (rg "pattern")
  4. Synthesize external review (if used) with your findings. Consensus issues = high confidence.
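Step 1's three diff sources can be wrapped in one helper; the `get_diff` name and its mode arguments are illustrative, not part of the skill:

```shell
#!/usr/bin/env sh
# Sketch of step 1: select the diff source. Mode names are assumptions.
get_diff() {
  case "${1:-uncommitted}" in
    pr)          gh pr diff "$2" ;;                # remote PR by number
    branch)      git diff origin/master...HEAD ;;  # local branch vs master
    uncommitted) git diff ;;                       # working-tree changes
    *)           echo "usage: get_diff [pr NUMBER|branch|uncommitted]" >&2
                 return 2 ;;
  esac
}
```

Typical usage: `get_diff pr 123` or plain `get_diff` for uncommitted work-in-progress changes.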

Output Format

```markdown
## Summary
[1-2 sentences]

## External Reviewer
[If used: Codex or Gemini]

## Key Findings

### Must Address
1. **[Issue]** (`file:line`) [Models]
   - Details
   - **Risk**: Why it matters

### Should Consider
2. **[Issue]** (`file:line`)
   - Details

### Minor Notes
- Observations

## Tests
[Coverage and quality assessment]

## Complexity
[Net impact on codebase complexity]
```

Numbering: Single sequence across all sections. Model attribution: [Codex + Claude], [Claude], etc.

Scope

In scope: Logic errors, missing error handling, test gaps, performance regressions, unnecessary complexity, duplication, incomplete implementations, project guideline violations.

Out of scope (linters handle): Formatting, import order, naming style, type annotations, docstring format.

Related Skills

Looking for an alternative to reviewing-code or building a community AI agent? Explore these related open-source MCP servers.

View All

widget-generator

f

widget-generator is an open-source AI agent skill for creating widget plugins that are injected into prompt feeds on prompts.chat. It supports two rendering modes: standard prompt widgets using default PromptCard styling and custom render widgets built as full React components.

149.6k
0
Design

chat-sdk

lobehub

chat-sdk is a unified TypeScript SDK for building chat bots across multiple platforms, providing a single interface for deploying bot logic.

73.0k
0
Communication

zustand

lobehub

The ultimate space for work and life — to find, build, and collaborate with agent teammates that grow with you. We are taking agent harness to the next level — enabling multi-agent collaboration, effortless agent team design, and introducing agents as the unit of work interaction.

72.8k
0
Communication

data-fetching

lobehub


72.8k
0
Communication