code-reviewer — a Claude Code validation skill from gvinokur/qatar-prode

v1.0.0

About this Skill

Recommended scenario: ideal for AI agents that need a code review (validation) skill. Summary: Code Reviewer (Validation Skill) is a complete workflow for validating code quality before final PR review and merge. It supports Claude Code, Cursor, and Windsurf workflows.

Features

Code Reviewer (Validation Skill)
Complete workflow for validating code quality before final PR review and merge.

Author: gvinokur
Updated: 4/8/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 10/11

This page remains useful for teams, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review

Review Score: 10/11
Quality Score: 55
Canonical Locale: en
Detected Body Locale: en


Why use this skill

Recommendation: code-reviewer gives agents a complete code review (validation) workflow for validating code quality before final PR review and merge.

Best for

Recommended scenario: ideal for AI agents that need a code review (validation) skill.

Actionable use cases for code-reviewer

Use case: applying the Code Reviewer (Validation Skill)
Use case: applying the complete workflow for validating code quality before final PR review and merge
Use case: applying Step 0, reading the story context file (mandatory)

Security and Limitations

  • Limitation: if WORKTREE_PATH or STORY_NUMBER is unknown (fresh session after /compact or /clear), bootstrap from the context file first
  • Limitation: only validate when the user says "code looks good" after testing in Vercel Preview, not before
  • Limitation: all checks must pass, including CI/CD and SonarCloud quality gates

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide the next action before you keep reading repository material

Killer-Skills should not stop at surfacing repository instructions. It should help you decide whether to install this skill, when to cross-check it against trusted collections, and when to move into workflow rollout.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is code-reviewer?

code-reviewer is a validation skill that provides a complete workflow for validating code quality before final PR review and merge. It supports Claude Code, Cursor, and Windsurf workflows.

How do I install code-reviewer?

Run the command: npx killer-skills add gvinokur/qatar-prode/code-reviewer. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for code-reviewer?

Key use cases include applying the Code Reviewer (Validation Skill) workflow, validating code quality before final PR review and merge, and bootstrapping a review session by reading the story context file first (Step 0, mandatory).

Which IDEs are compatible with code-reviewer?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for code-reviewer?

Limitation: if WORKTREE_PATH or STORY_NUMBER is unknown (fresh session after /compact or /clear), bootstrap from the context file first. Limitation: only validate when the user says "code looks good" after testing in Vercel Preview, not before. Limitation: all checks must pass, including CI/CD and SonarCloud quality gates.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add gvinokur/qatar-prode/code-reviewer. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use code-reviewer immediately in the current project.

Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

code-reviewer

SKILL.md

Code Reviewer (Validation Skill)

Complete workflow for validating code quality before final PR review and merge.

Step 0: Read Story Context File (MANDATORY)

First action before anything else:

```typescript
// Find and read the context file
const contextFile = `${WORKTREE_PATH}/plans/STORY-${STORY_NUMBER}-context.md`
Read({ file_path: contextFile })
```

Extract from the context file:

  • STORY_NUMBER — the story being reviewed
  • WORKTREE_PATH — absolute path to the story's worktree
  • PR_NUMBER — the PR to review

If you don't know WORKTREE_PATH or STORY_NUMBER yet (fresh session after /compact or /clear):

```bash
# Find the context file from the most recent story worktree
ls /Users/gvinokur/Personal/qatar-prode-story-*/plans/STORY-*-context.md 2>/dev/null | tail -1
```

Read that file to bootstrap your session.

Why this matters: After a /compact or /clear, conversation history is gone. The context file is the single source of truth for story metadata.
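As a sketch of the bootstrap step, story metadata can be recovered from the context-file path itself. The path layout (`<worktree>/plans/STORY-<n>-context.md`) comes from this skill's conventions; the helper name below is illustrative, not part of the skill.

```typescript
// Hypothetical helper: parse WORKTREE_PATH and STORY_NUMBER out of a
// context-file path found by the `ls` fallback above.
function parseContextPath(
  contextFile: string
): { worktreePath: string; storyNumber: number } | null {
  const match = contextFile.match(/^(.*)\/plans\/STORY-(\d+)-context\.md$/);
  if (!match) return null; // not a story context file
  return { worktreePath: match[1], storyNumber: Number(match[2]) };
}

const parsed = parseContextPath(
  "/Users/gvinokur/Personal/qatar-prode-story-42/plans/STORY-42-context.md"
);
// parsed.storyNumber is 42; parsed.worktreePath is the worktree root
```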


Personas

Persona A — Quality Officer

Activates: after `gh pr ready` is run
Role: fetches SonarCloud results, enforces 0 new issues of any severity, blocks merge until quality gates pass
Enforcement: NEVER approve story complete if any new SonarCloud issue exists, regardless of severity

Persona B — The Librarian

Activates: after Persona A confirms 0 new SonarCloud issues
Role: performs the Pre-Merge Documentation Audit (Section 7.5) — reads source files alongside their CODE-STRUCTURE layer entries and verifies the accuracy of all signatures, Calls:, and Renders: lines

Hard Gate: story complete is PROHIBITED until Persona B's Section 7.5 checklist is 100% verified; it CANNOT be called until Section 7.5 passes.


Overview

After implementation is complete, code is committed/pushed, and user has tested in Vercel Preview and is satisfied, run final SonarCloud validation. This is a hard gate - all issues must be resolved before proceeding to merge.

Note: Tests, lint, and build are run BEFORE commit (see /implementer Section 9). This validation phase focuses on SonarCloud analysis and quality gates.

Critical Rules

  1. ONLY validate when user says "code looks good" after testing in Vercel Preview - Not before
  2. Tests/lint/build already passed - These were run before commit (/implementer Section 9)
  3. 0 new SonarCloud issues of ANY severity - Low, medium, high, or critical
  4. 80% coverage on new code - SonarCloud enforces this automatically
  5. NEVER auto-fix issues - Always show user and ask permission
  6. All checks must pass - CI/CD, SonarCloud quality gates
  7. Keep PR in DRAFT until ready to merge - Only mark as ready for review when user explicitly requests it or asks to merge
  8. After all SonarCloud issues are resolved and before calling story complete, you MUST run the Pre-Merge Documentation Audit (Section 7.5). This is non-negotiable — story complete cannot be called until the Section 7.5 checklist is complete.

When to Run Validation

Default workflow (Vercel Preview testing):

  1. Implementation complete → Commit & push
  2. User tests in Vercel Preview
  3. User says "code looks good" or "I'm satisfied" (after testing in preview)
  4. NOW run this validation workflow (SonarCloud analysis)

Trigger phrases from user (after Vercel Preview testing):

  • "Code looks good" (tested in Vercel Preview)
  • "I'm satisfied with the implementation"
  • "Ready to merge"
  • "Let's check quality gates"
  • "Looks good in preview"

DO NOT validate:

  • During implementation
  • Before user has tested in Vercel Preview
  • When user is still iterating on functionality
  • Before commit (tests/lint/build happen before commit, not here)

Complete Validation Workflow

Prerequisites (already completed in implementation phase):

  • ✅ Tests run and passing (done before commit - /implementer Section 9 Step 3)
  • ✅ Linting passed (done before commit - /implementer Section 9 Step 3)
  • ✅ Build succeeded (done before commit - /implementer Section 9 Step 3)
  • ✅ Code committed and pushed
  • ✅ Vercel Preview deployment created

1. Plan Reconciliation (MANDATORY)

🛑 BEFORE running final validation, reconcile plan with implementation 🛑

Purpose: Ensure plan documentation accurately reflects what was actually built.

Step A.5: Audit via Git History (do this FIRST)

Before reading the plan, get the full picture of what actually changed on this branch:

```bash
# All commits on this branch
git -C ${WORKTREE_PATH} log origin/main..HEAD --oneline

# All source files changed across the entire branch lifetime
git -C ${WORKTREE_PATH} diff origin/main..HEAD --name-only
```

Then:

  1. Group commits by phase: identify initial implementation vs. post-feedback iterations
  2. For each post-feedback commit: check whether a plan amendment already covers the change; if not, add one in Step D below
  3. Use the full file diff list (not memory) as the authoritative record of what changed
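The grouping in step 1 starts from the `--oneline` log output. As a minimal sketch (the function name is an assumption; the phase grouping itself remains a judgment call), the log can be parsed into commit records first:

```typescript
// Parse `git log --oneline` output into { hash, subject } records so the
// commits can then be grouped into initial vs. post-feedback phases.
function parseOnelineLog(log: string): { hash: string; subject: string }[] {
  return log
    .split("\n")
    .filter((line) => line.trim() !== "")
    .map((line) => {
      const [hash, ...rest] = line.split(" ");
      return { hash, subject: rest.join(" ") };
    });
}
```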

Step A: Read Plan Document

```typescript
Read({
  file_path: `${WORKTREE_PATH}/plans/STORY-${STORY_NUMBER}-plan.md`
})

// Also check for change plans
Read({
  file_path: `${WORKTREE_PATH}/plans/STORY-${STORY_NUMBER}-change-1.md` // if exists
})
```

Step B: Compare Plan to Implementation

Review each section of the plan against actual code:

Technical Approach:

  • Does the code follow the approach described?
  • Were there architectural deviations?
  • Are all components/files mentioned in the plan present?

Implementation Steps:

  • Were all steps completed as described?
  • Were steps skipped or done differently?

Files Created/Modified:

  • Do the actual files match the plan?
  • Were additional files created?

Testing Strategy:

  • Were tests created as described?

Step C: Identify Gaps

Ask yourself:

  1. Are there code changes not mentioned in the plan?
  2. Are there amendments that should have been added but weren't?
  3. Does the plan contradict the actual implementation?
  4. Would a future developer be confused by plan vs. code?
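Question 1 above can be partly mechanized. A rough sketch (names are assumptions; a plan may legitimately describe a file without naming its exact path, so treat hits as candidates for review, not verdicts):

```typescript
// Flag changed files that the plan text never mentions. `changedFiles`
// would come from `git diff origin/main..HEAD --name-only`; `planText`
// is the STORY-<n>-plan.md contents.
function filesMissingFromPlan(changedFiles: string[], planText: string): string[] {
  return changedFiles.filter((file) => !planText.includes(file));
}
```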

Step D: Update Plan if Needed

If gaps found, add missing amendments:

```typescript
Edit({
  file_path: `${WORKTREE_PATH}/plans/STORY-${STORY_NUMBER}-plan.md`,
  old_string: `## Testing Strategy`,
  new_string: `## Implementation Amendments

### Amendment X: [Title]
**Date:** ${TODAY}
**Reason:** [Why this was needed - discovered during reconciliation]
**Change:** [What was actually done]

## Testing Strategy`
})
```

Commit plan updates:

```bash
git -C ${WORKTREE_PATH} add plans/STORY-${STORY_NUMBER}-plan.md
git -C ${WORKTREE_PATH} commit -m "docs: reconcile plan with implementation

Added amendments for changes discovered during implementation.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>"
git -C ${WORKTREE_PATH} push
```

Step E: Reconciliation Checklist

  • I have read the original plan document completely
  • I have compared plan to actual implementation
  • All deviations are documented (in amendments or change plans)
  • No contradictions exist between plan and code
  • Future developers can understand what was built and why
  • All amendments have clear reason and change description
  • Plan amendments are committed and pushed (if updates were needed)

Only proceed to user satisfaction verification after completing this checklist.

2. Verify User Satisfaction from Vercel Preview

Confirm user has tested in Vercel Preview and is satisfied with:

  • Functionality works as expected in preview environment
  • UI looks correct in preview environment
  • Edge cases are handled
  • No obvious bugs
  • User has explicitly said "code looks good" or similar

Only proceed when user explicitly confirms satisfaction after testing in Vercel Preview.

3. Wait for CI/CD Checks

Prerequisite: PR must already be marked as ready (Section 7 must have run). SonarCloud does not run on draft PRs.

```bash
# Wait for Vercel and SonarCloud
./scripts/github-projects-helper pr wait-checks ${PR_NUMBER}
```

Once checks complete, immediately fetch SonarCloud issues:

```bash
# Get detailed SonarCloud analysis
./scripts/github-projects-helper pr sonar-issues ${PR_NUMBER}
```

4. Analyze SonarCloud Results

Get SonarCloud issues using helper script:

```bash
./scripts/github-projects-helper pr sonar-issues ${PR_NUMBER}
```

This command will:

  • ✅ Fetch coverage percentage on new code
  • ✅ Fetch all new issues from SonarCloud for the PR
  • ✅ Categorize by severity (BLOCKER, CRITICAL, MAJOR, MINOR, INFO)
  • ✅ Categorize by type (BUG, VULNERABILITY, CODE_SMELL, SECURITY_HOTSPOT)
  • ✅ Show detailed issue descriptions with file locations and line numbers
  • ✅ Show the rule violated for each issue
  • ✅ Provide direct link to SonarCloud report

Quality Gate Criteria:

  • 0 new issues of ANY severity (including MINOR)
  • 80%+ coverage on new code
  • Security rating: A
  • Maintainability: B or higher
  • < 5% duplicated code

IMPORTANT: ALL new issues must be fixed, regardless of severity. Even MINOR code smells must be resolved before merge.

See sonarcloud-guide.md for common issues table and severity interpretation.
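The criteria above can be summarized as a single pass/fail decision. A minimal sketch, assuming a metrics shape for illustration (the real data comes from the `pr sonar-issues` helper output, not this interface):

```typescript
// Hypothetical metrics shape; field names are assumptions.
interface SonarMetrics {
  newIssues: number;              // issues of ANY severity, including MINOR
  newCodeCoverage: number;        // percent coverage on new code
  securityRating: string;         // "A" through "E"
  maintainabilityRating: string;  // "A" through "E"
  duplicatedCodePct: number;
}

// Encodes the quality gate criteria listed above.
function qualityGatePasses(m: SonarMetrics): boolean {
  return (
    m.newIssues === 0 &&
    m.newCodeCoverage >= 80 &&
    m.securityRating === "A" &&
    ["A", "B"].includes(m.maintainabilityRating) &&
    m.duplicatedCodePct < 5
  );
}
```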

5. Handle Quality Gate Failures

If SonarCloud reports new issues:

  1. Fetch and present issues to user (using helper script output)
  2. Wait for user permission:
    • User says "yes, fix them" → Proceed to fix
    • User says "no, I'll fix manually" → Stop, wait for user
    • User wants specific fixes only → Fix only those
  3. Fix issues (if authorized):
    • Read the code with issues
    • Apply fixes for each issue
    • Run tests to verify fixes
    • Commit and push
    • Wait for re-analysis
  4. Verify fixes:
```bash
./scripts/github-projects-helper pr wait-checks ${PR_NUMBER}
./scripts/github-projects-helper pr sonar-issues ${PR_NUMBER}
```

If coverage is below 80% on new code:

Do NOT just add random tests to hit a number. Follow this analytical workflow:

Step 1: Get the line-level coverage report for changed files

Run vitest with coverage, scoped to the changed files (use the same --coverage flag but restrict to only new/modified source files to keep output focused):

```bash
# Get changed source files (excluding tests)
CHANGED=$(git -C ${WORKTREE_PATH} diff origin/main..HEAD --name-only | grep -E '^app/' | grep -v '__tests__' | tr '\n' ' ')

# Run coverage on only those files
npm --prefix ${WORKTREE_PATH} run test -- --coverage --reporter=verbose \
  --coverage.include="${CHANGED}"
```

This produces a text table showing lines, branches, functions, statements per file. Read the output to find which lines are uncovered.
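Once that table is parsed into per-file rows, picking out the files that need attention is mechanical. A sketch, assuming a row shape for illustration (the actual vitest table has more columns):

```typescript
// Hypothetical parsed coverage row; only line coverage is shown here.
interface CoverageRow {
  file: string;
  linesPct: number;
}

// List files under the 80% new-code threshold described above.
function filesBelowThreshold(rows: CoverageRow[], threshold = 80): string[] {
  return rows.filter((r) => r.linesPct < threshold).map((r) => r.file);
}
```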

Step 2: Read the uncovered lines in context

For each file below threshold, open the source file and read the specific uncovered lines. Ask:

  • What code path does this line represent? (e.g., the else branch of an auth check, the error path of an async call, a specific conditional in a render)
  • What scenario would trigger this path? (e.g., "user is not admin", "server returns empty array", "provider key not in lookup table")
  • Is this a meaningful edge case or dead code? If it's genuinely unreachable (e.g., TypeScript-narrowed impossible branch), note it and skip rather than writing a pointless test.

Step 3: Write one test per uncovered scenario

Group uncovered lines by the scenario they represent, not by line number. A scenario is the answer to "what must be true about the inputs for execution to reach this line?"

Example reasoning (DO this):

  • Line 45 is throw new Error('Unauthorized') → scenario: "caller is not an admin"
  • Line 62 is the fallback ?? rawKey in a label lookup → scenario: "provider key not in the known labels map"
  • Lines 80-85 are the loading spinner branch → scenario: "data fetch is still in flight"

Anti-pattern (DO NOT do this):

  • "Line 45 is uncovered, I'll add a test that reaches line 45" — this misses the point
  • Adding a test that exercises multiple uncovered lines from unrelated scenarios in one it() block

For each scenario, write a focused test with a name that reads as a sentence describing the scenario, e.g.:

it('throws Unauthorized when caller is not an admin', ...)
it('falls back to raw key when provider is not in PROVIDER_LABELS', ...)
it('shows loading spinner while the data fetch is in flight', ...)
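The fallback scenario above might be exercised like this. The `PROVIDER_LABELS` map and `labelFor` helper are hypothetical units under test, echoing the examples earlier in this section; in the project the check would live in a vitest `it(...)` block with the scenario-sentence name.

```typescript
// Hypothetical unit under test: a label lookup with a raw-key fallback.
const PROVIDER_LABELS: Record<string, string> = {
  github: "GitHub",
  sonarcloud: "SonarCloud",
};

function labelFor(rawKey: string): string {
  return PROVIDER_LABELS[rawKey] ?? rawKey; // fallback path = the uncovered scenario
}

// Scenario: "falls back to raw key when provider is not in PROVIDER_LABELS"
if (labelFor("unknown-provider") !== "unknown-provider") {
  throw new Error("fallback scenario not covered");
}
```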

Step 4: Verify coverage improvement before committing

Re-run coverage locally after writing tests. Confirm the new tests actually cover the targeted lines before committing:

```bash
npm --prefix ${WORKTREE_PATH} run test -- --coverage \
  --coverage.include="${CHANGED}"
```

Do not commit if coverage did not improve — something is wrong with the test setup.

Step 5: Present analysis to user before adding tests

Before writing any tests, present to the user:

  • Which files are below threshold
  • The uncovered scenarios identified (not raw line numbers)
  • The proposed test names/descriptions

Wait for user approval, then implement.

6. Validate Vercel Deployment

Check deployment:

```bash
# Get Vercel preview URL
gh pr view ${PR_NUMBER} --json statusCheckRollup --jq '.statusCheckRollup[] | select(.name | contains("vercel")) | .targetUrl'
```

Verify:

  • ✅ Deployment successful
  • ✅ Preview URL accessible
  • ✅ Application loads without errors

If deployment fails:

  • Review deployment logs
  • Fix build/runtime errors
  • Commit, push, re-validate

7. Mark PR as Ready for Review

CRITICAL: This step runs IMMEDIATELY when "code looks good" is received — BEFORE waiting for CI/CD (Section 3). SonarCloud only runs on non-draft PRs, so this must come first.

Trigger: User says "code looks good" / "I'm satisfied" / "looks good in preview" (any phrase from the "When to Run Validation" section above).

Action: Run immediately after confirming user satisfaction (Section 2):

```bash
gh pr ready ${PR_NUMBER}
```

This removes the DRAFT status, which triggers SonarCloud analysis. Then proceed to Section 3 (Wait for CI/CD).

DO NOT mark as ready for review:

  • ❌ During planning phase
  • ❌ During implementation phase
  • ❌ Before user has tested in Vercel Preview
  • ❌ When user is still iterating on functionality

7.5. Pre-Merge Documentation Audit (MANDATORY)

🛑 This section is a hard gate before story complete. Do NOT proceed to Section 8 until the checklist at the bottom of this section is fully checked off. 🛑

Purpose: Verify CODE-STRUCTURE layer files accurately reflect the current implementation — not the initial implementation, and not stale entries from before feedback-driven changes.

Key distinction: This is NOT a presence check ("was the layer file touched on this branch?"). It is an accuracy check — read both the source file and its layer entry, and verify they match the current code. A function documented during the initial task but whose signature changed during a feedback session will still have a stale entry even though the layer file was technically "updated."

Prerequisite: SonarCloud must report 0 new issues before running this audit.

Step A: Get all changed source files

```bash
git -C ${WORKTREE_PATH} diff origin/main..HEAD --name-only | grep -E '^app/'
```

Step B: Audit layer files directly

For each changed source file, read it alongside its layer file entry and verify accuracy:

```bash
# Get the list of changed source files
CHANGED_FILES=$(git -C ${WORKTREE_PATH} diff origin/main..HEAD --name-only | grep -E '^app/')
echo "${CHANGED_FILES}"
```

For each file in CHANGED_FILES:

  1. Read the source file: Read({ file_path: "${WORKTREE_PATH}/${file}" })
  2. Identify the corresponding layer file (db.md, actions.md, utils.md, pages.md, or components-[domain].md)
  3. Read the layer file entry for that file
  4. Compare: do all function signatures, Calls: lines, and Renders: lines match the actual code?
  5. Note any drift found

If drift is found: apply corrections directly to the layer files using the Edit tool. If no drift: note "no documentation drift found" explicitly.
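One narrow slice of the comparison in step 4 can be automated: checking that each documented signature still appears verbatim in the source. This is only a sketch; the function name and the verbatim-match heuristic are assumptions, and the real audit remains a manual read-through, since Calls: and Renders: lines need semantic judgment.

```typescript
// Return documented signatures that no longer appear in the source file,
// as candidates for drift (a rename or reformat also triggers a hit).
function findStaleSignatures(documentedSignatures: string[], sourceText: string): string[] {
  return documentedSignatures.filter((sig) => !sourceText.includes(sig));
}
```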

Step C: Apply call graph updates if flagged

If Gemini's Call Graph Assessment says YES, update CODE-STRUCTURE.md ## Call Graph with the new flows identified.

Step D: Commit any updates

```bash
git -C ${WORKTREE_PATH} add docs/code-structure/ CODE-STRUCTURE.md
git -C ${WORKTREE_PATH} commit -m "docs: pre-merge CODE-STRUCTURE audit

Verified and corrected layer file entries against final implementation.
Captures signature/relationship changes from post-feedback iterations.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>"
git -C ${WORKTREE_PATH} push
```

If no changes were needed, explicitly note "no documentation drift found" — do not skip this confirmation.

After applying corrections, re-read the updated layer file entries and verify they now match the source. Only proceed to the checklist below once all entries are confirmed accurate.

Section 7.5 Checklist (must be complete before Section 8)

  • SonarCloud reports 0 new issues (prerequisite — Sonar must be clean first)
  • Every changed app/ source file has been read alongside its layer file entry
  • All function/component signatures match current code (not plan or earlier iteration)
  • Calls: and Renders: lines reflect current code, not original implementation
  • Removed or renamed exports are removed from layer files
  • CODE-STRUCTURE.md call graph reflects current cross-layer flows
  • All modified layer file Last updated: headers updated to today
  • Updates committed and pushed (or "no drift found" explicitly confirmed)

8. Final Quality Gate Confirmation

Prerequisites before presenting final summary:

  • ✅ Pre-Merge Documentation Audit complete (Section 7.5 checklist fully checked off)

Present this summary to the user and STOP. Fill in real values — do not use placeholder text:

## Validation Summary — Story #${STORY_NUMBER}

### Pre-Merge Checks
| Check | Status | Details |
|-------|--------|---------|
| Tests | ✓ Pass | [N] tests passing |
| Lint | ✓ Pass | No lint errors |
| Build | ✓ Pass | Build succeeded |

### CI/CD
| Check | Status | Details |
|-------|--------|---------|
| Vercel | ✓ Deployed | [preview URL] |
| SonarCloud | ✓ Pass | 0 new issues, [X]% coverage on new code |

### Documentation Audit (Section 7.5)
| Check | Status | Details |
|-------|--------|---------|
| CODE-STRUCTURE drift | ✓ Clean | [or: X corrections applied] |
| Call graph | ✓ Up to date | [or: updated] |

### Plan Reconciliation
| Check | Status | Details |
|-------|--------|---------|
| Plan vs. implementation | ✓ Aligned | [or: N amendments added] |

SonarCloud report: [URL]
PR: #${PR_NUMBER} — [PR URL]

🛑 STOP HERE. DO NOT PROCEED FURTHER. 🛑

After presenting the summary above, WAIT for the user. Do not run any additional commands.

What you MUST NOT do after presenting the summary:

  • ❌ Do NOT run story complete — only run when user explicitly says "merge", "complete the story", or similar
  • ❌ Do NOT ask "would you like me to merge?" or "shall I complete the story?"
  • ❌ Do NOT suggest next steps that imply proceeding

To merge: Wait for user to explicitly say "merge" / "merge this" / "complete the story" / "story complete", then run via /git-ops Section 4:

```bash
# Complete story (merge + cleanup) — ONLY on explicit user request
./scripts/github-projects-helper story complete ${STORY_NUMBER} --project ${PROJECT_NUMBER}
```

Quality Gate Enforcement

Zero Tolerance for New Issues

NO EXCUSES for skipping SonarCloud issues:

  • ❌ "It's just a low severity issue" → Fix it
  • ❌ "It's minor code smell" → Fix it
  • ❌ "It doesn't affect functionality" → Fix it
  • ❌ "We can fix it later" → Fix it now
  • 0 new issues is the only acceptable outcome

Common Mistakes

| Mistake | Why It's Wrong | Correct Approach |
|---------|----------------|------------------|
| Auto-fixing issues | User loses context and control | Show issues, ask permission |
| Ignoring low severity | Accumulates technical debt | Fix ALL new issues |
| Validating too early | User hasn't tested yet | Wait for user satisfaction |
| Skipping re-validation | Don't confirm fixes worked | Always re-check after fixes |
| Merging with issues | Fails quality standards | 0 new issues before merge |
| Calling story complete before Section 7.5 | CODE-STRUCTURE entries remain stale | Run Section 7.5 first — it's a hard gate |
