evaluate — /evaluate Code Quality Evaluation skill for Claude Code (paycrux/ai-skills, community)

v1.0.0

About this Skill

evaluate is an AI agent skill that provides /evaluate, a code quality evaluation command for Claude Code. It is ideal for AI agents that need checklist-driven code review.

Features

/evaluate — Code Quality Evaluation
Evaluate code quality across 5 domains using a checklist. Runs directly in the main conversation — no sub-agents.
/evaluate — auto-detect recently changed files via git diff --name-only HEAD~1
/evaluate <file-or-directory> — evaluate a specific file or directory
/evaluate <task-folder-name> — evaluate based on task-plan in docs/<task-folder-name>/

Core Topics

paycrux
Updated: 4/7/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 8/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
Review Score: 8/11
Quality Score: 46
Canonical Locale: ko
Detected Body Locale: ko


Core Value

evaluate helps agents run /evaluate, a code quality evaluation command that checks code across 5 domains using a checklist. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Ideal Agent Persona

Ideal for AI agents that need /evaluate — code quality evaluation.

Capabilities Granted for evaluate

Applying /evaluate — Code Quality Evaluation
Evaluating code quality across 5 domains using a checklist, directly in the main conversation
Auto-detecting recently changed files via git diff --name-only HEAD~1

Prerequisites & Limits

  • /evaluate — auto-detect recently changed files via git diff --name-only HEAD~1
  • If no argument → get changed file list via git diff --name-only HEAD~1
  • Hooks called at top level only (no conditional/loop hooks)

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.
  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment


Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is evaluate?

evaluate is an AI agent skill that provides /evaluate, a code quality evaluation command for Claude Code. It is ideal for AI agents that need checklist-driven code review.

How do I install evaluate?

Run the command: npx killer-skills add paycrux/ai-skills/evaluate. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for evaluate?

Key use cases include: applying the /evaluate code quality evaluation, evaluating code across 5 domains using a checklist directly in the main conversation, and auto-detecting recently changed files via git diff --name-only HEAD~1.

Which IDEs are compatible with evaluate?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for evaluate?

/evaluate auto-detects recently changed files via git diff --name-only HEAD~1; if no argument is given, the changed file list also comes from git diff --name-only HEAD~1. Hooks are called at top level only (no conditional/loop hooks).

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add paycrux/ai-skills/evaluate. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use evaluate immediately in the current project.

Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material


Upstream Source

evaluate

ai-skills — /evaluate Code Quality Evaluation. Evaluate code quality across 5 domains using a checklist. This AI agent skill supports Claude Code, Cursor, and Windsurf.

SKILL.md
Supporting Evidence

/evaluate — Code Quality Evaluation

Evaluate code quality across 5 domains using a checklist. Runs directly in the main conversation — no sub-agents.

Argument Parsing

  • /evaluate — auto-detect recently changed files via git diff --name-only HEAD~1
  • /evaluate <file-or-directory> — evaluate a specific file or directory
  • /evaluate <task-folder-name> — evaluate based on task-plan in docs/<task-folder-name>/

Step 1: Determine Targets

  1. If argument provided → set that path as the target
  2. If no argument → get changed file list via git diff --name-only HEAD~1
  3. If a task-plan folder name is given → check docs/<folder>/progress.md for changed files

Read all target files.
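The target-resolution steps above can be sketched as follows. This is a minimal illustration, not the skill's actual implementation: the function name `determine_targets` and the loose `progress.md` token scan are assumptions (the real progress.md format is defined by the task-plan workflow, not here).

```python
import subprocess
from pathlib import Path
from typing import List, Optional

def determine_targets(argument: Optional[str] = None) -> List[str]:
    """Resolve evaluation targets per the three /evaluate argument forms."""
    if argument is None:
        # No argument: fall back to files changed in the last commit.
        out = subprocess.run(
            ["git", "diff", "--name-only", "HEAD~1"],
            capture_output=True, text=True, check=True,
        ).stdout
        return [line for line in out.splitlines() if line.strip()]

    plan_dir = Path("docs") / argument
    if (plan_dir / "progress.md").exists():
        # Task-plan folder: collect file paths recorded in progress.md
        # (naive whitespace scan; a hypothetical placeholder for real parsing).
        text = (plan_dir / "progress.md").read_text(encoding="utf-8")
        return [tok for tok in text.split() if "/" in tok and Path(tok).exists()]

    # Otherwise treat the argument as a file or directory path.
    return [argument]
```

All target files returned by this step are then read before evaluation begins.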

Step 2: Evaluate by Checklist

Evaluate each target file against the checklist below. Each violation gets a severity:

  • CRITICAL: causes runtime errors, infinite loops, memory leaks, or security vulnerabilities
  • MAJOR: severely hinders maintainability, high bug probability
  • MINOR: a better pattern exists but current code has no functional issues
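One way to record violations with these severities is a small data structure like the sketch below. The `Violation` shape is an assumption for illustration; the skill itself only prescribes the three severity levels and, per the Rules section, a file path and line number for every violation.

```python
from dataclasses import dataclass
from enum import Enum

class Severity(Enum):
    CRITICAL = "CRITICAL"  # runtime errors, infinite loops, leaks, security holes
    MAJOR = "MAJOR"        # severely hinders maintainability, high bug probability
    MINOR = "MINOR"        # a better pattern exists; no functional issue

@dataclass
class Violation:
    file: str       # rules require a file path for every violation
    line: int       # ...and a line number
    domain: str     # e.g. "React / Accessibility", "Security"
    message: str
    severity: Severity
```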

React / Accessibility

  • Hooks called at top level only (no conditional/loop hooks)
  • State immutability (no direct mutation, spread at all levels)
  • Stable list keys (no array index for reorderable lists)
  • No useEffect for derivable values or event-driven logic
  • Prop drilling ≤ 2 levels (else context/composition)
  • useMemo/useCallback only when measured or passing to memoized children
  • ARIA attributes present on interactive elements
  • Keyboard navigability (focus visible, tab order)
  • Color contrast ≥ 4.5:1
  • Form labels and error announcements

Engineering / Performance

  • No circular dependencies (direct or via barrel files)
  • No side effects in pure functions
  • No nested conditionals 3+ levels (use early return/guard)
  • No nested ternary 2+ levels
  • Single function ≤ 50 lines, single file ≤ 300 lines
  • DRY: no 10+ similar lines repeated in 2+ places
  • if-else chain 3+ on same variable → use mapper object
  • No hardcoded magic numbers/strings
  • Export signature changes verified against all call sites
  • Unnecessary re-renders (new object/array/function created in render passed to children)
  • Bundle size impact of new dependencies
  • Large data loaded entirely into memory when streaming/pagination possible
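The "if-else chain 3+ on same variable → use mapper object" item can be illustrated with a hypothetical status-label example (the names `label_before`, `label_after`, and `STATUS_LABELS` are invented for this sketch):

```python
# Before: an if-elif chain branching 3+ times on the same variable.
def label_before(status: str) -> str:
    if status == "open":
        return "Open"
    elif status == "in_progress":
        return "In progress"
    elif status == "closed":
        return "Closed"
    return "Unknown"

# After: a mapper object keyed by the variable's values.
STATUS_LABELS = {
    "open": "Open",
    "in_progress": "In progress",
    "closed": "Closed",
}

def label_after(status: str) -> str:
    return STATUS_LABELS.get(status, "Unknown")
```

The mapper keeps each value-to-result pair on one line, so adding a case is a data change rather than a control-flow change.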

Security

  • No XSS (dangerouslySetInnerHTML, unescaped user input)
  • No SQL/NoSQL injection
  • No hardcoded secrets/credentials
  • Authentication/authorization checks on protected routes
  • Input validation at system boundaries

Step 3: Compose Report

Use the template at ${CLAUDE_SKILL_DIR}/templates/report.md.

Grade Criteria

| Grade | Criteria |
| --- | --- |
| A | No CRITICAL/MAJOR, MINOR ≤ 2 |
| B | No CRITICAL, MAJOR 1-2 |
| C | No CRITICAL, MAJOR 3+ |
| D | CRITICAL 1 |
| F | CRITICAL 2+ |

Overall grade follows the lowest grade among all domains.
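The criteria table and the lowest-grade rule can be expressed as a small sketch. Note the table leaves "no CRITICAL, no MAJOR, MINOR > 2" unstated; this sketch assumes that case falls to B, which is an interpretation, not something the skill specifies.

```python
from typing import List

GRADE_ORDER = "ABCDF"  # A best, F worst

def domain_grade(critical: int, major: int, minor: int) -> str:
    """Map violation counts for one domain to a letter grade per the table."""
    if critical >= 2:
        return "F"
    if critical == 1:
        return "D"
    if major >= 3:
        return "C"
    if major >= 1:
        return "B"
    # No CRITICAL/MAJOR: A requires MINOR <= 2; MINOR > 2 assumed B.
    return "A" if minor <= 2 else "B"

def overall_grade(domain_grades: List[str]) -> str:
    # Overall grade follows the lowest (worst) grade among all domains.
    return max(domain_grades, key=GRADE_ORDER.index)
```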

Step 4: Save Report

Follow the save policy at ${CLAUDE_SKILL_DIR}/templates/save-policy.md.

Step 5: User Review & Fix Suggestions

Present the report, then wait for user judgment — do not auto-fix.

Use the prompt template at ${CLAUDE_SKILL_DIR}/templates/review-prompt.md.

Handling Based on User Response

| User Choice | Action |
| --- | --- |
| Fix all | Fix in CRITICAL → MAJOR → MINOR order directly, then re-run /evaluate |
| Selective fix | Fix only specified items directly, then re-run /evaluate |
| Keep as-is | End without fixes. If linked to task-plan, proceed to completion |

Fix Principles

  • Fix CRITICAL first — MINOR only when explicitly requested
  • Fix scope is limited to violation items — no surrounding code refactoring
  • Re-evaluation after fixes is max 2 times — if violations remain, defer to user judgment
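The cap of two re-evaluations can be sketched as a loop; `evaluate` and `fix` are placeholder callables here (the real flow is driven by the agent, with CRITICAL-first ordering assumed to happen inside `fix`).

```python
MAX_REEVALUATIONS = 2

def fix_cycle(evaluate, fix):
    """Fix, re-evaluate, and stop after at most two re-runs.

    `evaluate` returns the list of remaining violations; `fix` attempts
    to resolve them. Both are stand-ins for the agent's actual actions.
    """
    violations = evaluate()
    for _ in range(MAX_REEVALUATIONS):
        if not violations:
            return violations  # clean: nothing left to fix
        fix(violations)
        violations = evaluate()
    # Violations may remain after two re-runs: defer to user judgment.
    return violations
```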

Task-plan Integration

After evaluation, if a related task-plan folder (docs/*/) exists, append a summary to progress.md:

```markdown
### /evaluate result — {YYYY-MM-DD}
- Overall grade: {A/B/C/D/F}
- CRITICAL: {N}, MAJOR: {N}, MINOR: {N}
- Report: `{evaluate report path}`
```
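Rendering that template could look like the sketch below; the function name `progress_entry` is an assumption for illustration, and appending the result to progress.md is left to the caller.

```python
from datetime import date

def progress_entry(grade: str, critical: int, major: int, minor: int,
                   report_path: str) -> str:
    """Fill the progress.md summary template with today's date and counts."""
    return (
        f"### /evaluate result — {date.today():%Y-%m-%d}\n"
        f"- Overall grade: {grade}\n"
        f"- CRITICAL: {critical}, MAJOR: {major}, MINOR: {minor}\n"
        f"- Report: `{report_path}`\n"
    )
```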

Rules

  • Include file path and line number for every violation
  • Overall grade follows the lowest grade among all domains
  • Must confirm with user about fixes after evaluation — no auto-fixing
  • If no violations found, honestly report "No violations"
  • Write all deliverables in Korean

Related Skills

Looking for an alternative to evaluate or another community skill for your workflow? Explore these related open-source skills.

View All

  • openclaw-release-maintainer (openclaw) — Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
  • widget-generator (f) — Generate customizable widget plugins for the prompts.chat feed system
  • flags (vercel) — The React Framework
  • pr-review (pytorch) — Tensors and Dynamic neural networks in Python with strong GPU acceleration