audit-review

v1.0.0

About this skill

Best fit: AI agents that need to produce an audit report that helps an implementer safely change/refine CoordExp without guessing. This skill supports Claude Code, Cursor, and Windsurf workflows.

Capabilities

Produce an audit report that helps an implementer safely change/refine CoordExp without guessing.
Optimize for correctness, reproducibility, and contract/pipeline integrity rather than style.
Use the repo's source-of-truth order: openspec/specs/ -> docs/ -> openspec/changes/<active-change>/ -> progress/.
Deliver findings under an output contract, including “Confirmed OK / ruled out” notes to prevent backtracking.

Author: Pein2017
Updated: 4/27/2026





Safety and limitations

  • Limitation: Suggest next actions for the implementer only; do not implement changes yourself.
  • Limitation: This is a read-only audit; the upstream guardrails below apply.
  • Limitation: Do not modify production code/configs/specs. No apply_patch against src/, configs/, openspec/, etc.



FAQ and installation steps

Frequently asked questions

What is audit-review?

audit-review produces an audit report that helps an implementer safely change/refine CoordExp without guessing. It supports Claude Code, Cursor, and Windsurf workflows.

How do I install audit-review?

Run: npx killer-skills add Pein2017/CoordExp. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What can audit-review be used for?

Key use cases: producing an audit report that helps an implementer safely change/refine CoordExp without guessing; optimizing for correctness, reproducibility, and contract/pipeline integrity rather than style; following the repo's source-of-truth order openspec/specs/ -> docs/ -> openspec/changes/<active-change>/ -> progress/.

Which IDEs are compatible with audit-review?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. For unified installation, use the Killer-Skills CLI.

Does audit-review have limitations?

Yes. It only suggests next actions for an implementer (it does not implement changes itself), it is a read-only audit bound by the upstream guardrails, and it must not modify production code/configs/specs or run apply_patch against src/, configs/, openspec/, etc.

How to install this skill

  1. Open a terminal

    Open a terminal or command prompt in your project directory.

  2. Run the install command

    Run: npx killer-skills add Pein2017/CoordExp. The CLI automatically detects your IDE or agent and configures the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use audit-review in the current project right away.
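For reference, the install step above is a single command run from the project root:

```bash
# The CLI detects your IDE/agent and wires up the skill for this project.
npx killer-skills add Pein2017/CoordExp
```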


Upstream Repository Material

The section below is adapted from the upstream repository. Use it as supporting material alongside the fit, use-case, and installation summary on this page.

SKILL.md

Audit Review

Overview

Produce an audit report that helps an implementer safely change/refine CoordExp without guessing. Optimize for correctness, reproducibility, and contract/pipeline integrity rather than style refactors.

Use the repo's source-of-truth order: openspec/specs/ -> docs/ -> openspec/changes/<active-change>/ -> progress/. Use progress/ for evidence, diagnostics, benchmark scope, and history; do not answer current-behavior questions from it when docs/specs cover the contract.

Output Contract (What You Deliver)

  • Severity-ranked findings (P0/P1/P2) with concrete evidence handles (path:line, config keys, exact commands, or tool output).
  • “Confirmed OK / ruled out” notes to prevent backtracking.
  • Verification steps: exact commands/tests to reproduce or validate each claim.
  • Open questions: smallest set of clarifications required to remove ambiguity.
  • Suggested next actions for an implementer (do not implement changes yourself).

Use references/report-template.md if you want a ready-made skeleton.
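If you want to start from that skeleton without touching the repo, one option consistent with the guardrails below is to copy it out to /tmp/ (the destination filename is arbitrary):

```bash
# Copy the bundled skeleton to /tmp/ so the worktree stays clean while you draft.
cp references/report-template.md /tmp/audit-report.md
```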

Guardrails (Read-Only Audit)

  • Do not modify production code/configs/specs. No apply_patch against src/, configs/, openspec/, etc.
  • Prefer read-only exploration: rg, find, git diff, sed, python -m pytest, python -m py_compile (a session sketch follows this list).
  • If you must write a temporary test or probe:
    • Default: write under /tmp/ so the repo stays clean.
    • If you need it under temp/ for sharing, ask the user first and keep artifacts minimal.
  • Always check and report git status --porcelain at the start; if the worktree is dirty and it matters, ask before proceeding.
  • For Python code exploration: Serena MCP is mandatory (symbol-aware navigation; provide relative_path constraints).
  • Never invent results. If a claim cannot be verified, label it as a hypothesis and keep it out of severity-ranked findings.
  • Do not conflate benchmark scopes. Report val200, limit=200, first-200, proxy view, full-val, raw-text vs coord-token, checkpoint ids, and launch shape when they affect the claim.
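A minimal sketch of the read-only exploration mentioned above; the search pattern, file paths, and test selector are illustrative, not prescribed by the skill:

```bash
# Nothing below writes to the repository.
rg -n "run_pipeline" src/                    # locate a symbol by text search (pattern illustrative)
sed -n '1,40p' src/infer/pipeline.py         # read a file slice
python -m py_compile src/infer/pipeline.py   # syntax check; the module body is not executed
python -m pytest tests -k geometry -q        # existing targeted tests (path/selector illustrative)
```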

Workflow (Breadth Pass -> Depth Pass -> Report)

Step 0: Clarify The Ask (Smallest Unblocking Questions)

  • If scope is ambiguous, ask 1–3 questions max:
    • What exact artifact(s) are we auditing: path(s) or concept?
    • Is the goal “spec/design review only” or “implementation vs spec audit”?
    • Any constraints: time budget, no-network, specific configs/datasets, must-pass tests?

Assume the deliverable is a report for a separate implementer unless the user explicitly asks you to change code.

Step 1: Snapshot + Map The Surface Area (Breadth Pass)

  • Safety snapshot:
    • Run git status --porcelain and note any dirty files.
    • If auditing a change/PR, capture git diff --name-only (or the change-directory file listing) to bound the search.
  • Identify entrypoints and contracts:
    • Docs/specs: docs/AGENT_INDEX.md, docs/catalog.yaml, docs/PROJECT_CONTEXT.md, docs/SYSTEM_OVERVIEW.md, docs/IMPLEMENTATION_MAP.md, relevant openspec/specs/, relevant domain docs.
    • Progress: progress/index.yaml, progress/README.md, and the matching category router when you need empirical evidence.
    • Code: likely entrypoints (src/bootstrap/, src/config/loader.py, src/datasets/geometry.py, src/trainers/, src/infer/, src/eval/, public_data/).
    • Tests: locate tests adjacent to the target area and any policy scans.
  • Grep for relevant context (fast, wide net; a command sketch follows this list):
    • Use rg to find: config keys, CLI flags, spec terms, artifact filenames, error messages.
    • Use references/grep-seeds.md when you need good starting patterns.
  • Build a short “context index”:
    • Key files with a 1-line reason each.
    • Key symbols to inspect (class/function names) with file paths.
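A sketch of the snapshot-plus-grep pass above; the grep patterns are illustrative stand-ins for the curated seeds in references/grep-seeds.md:

```bash
# Safety snapshot: record worktree state and bound the search.
git status --porcelain    # note any dirty files up front
git diff --name-only      # when auditing a change/PR, list the touched files

# Wide-net context greps (patterns illustrative).
rg -n "experiment_manifest" src/ docs/   # trace an artifact filename to producers and consumers
rg -n "deprecated" src/config/           # surface deprecated-key policy handling
```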

Step 2: Inspect The Highest-Risk Flows (Depth Pass)

Pick 3–5 top risk areas based on impact and likelihood, then deep-dive with evidence:

  • Pipeline and process flow:
    • Trace data flow: input -> transforms -> packing -> training/infer/eval -> artifacts.
    • Verify invariant-sensitive steps (geometry, ordering, normalization).
    • Route geometry checks through src/datasets/geometry.py, not ad hoc bbox math.
  • Configuration and contracts:
    • Check strict parsing / unknown-key behavior (fail-fast vs silently ignored).
    • Check backward-compat surfaces (stable CLI contracts, deprecated keys policy).
    • Check that stable workflows stay YAML-first instead of adding CLI flags.
  • Artifacts and eval validity:
    • Verify training manifests: resolved_config.json, runtime_env.json, effective_runtime.json, pipeline_manifest.json, experiment_manifest.json, run_metadata.json.
    • Verify infer/eval artifacts: summary.json, resolved_config.json, resolved_config.path, gt_vs_pred.jsonl, gt_vs_pred_scored.jsonl, metrics.json, and guarded companions when enabled.
    • For current infer behavior inspect src/infer/pipeline.py::run_pipeline; for current eval behavior inspect src/eval/detection.py::evaluate_and_save.
  • Determinism and reproducibility:
    • Look for ordering-dependent behavior, random seeds, multiprocess I/O, filesystem-dependent nondeterminism.
  • Silent failure policy:
    • Ensure unexpected exceptions are not swallowed in core paths; best-effort behavior should be narrow and justified.
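For the artifact checks above, a small presence check rules out missing training manifests quickly; the run directory is hypothetical, and the manifest names are the ones listed in this step:

```bash
# Check a training run directory for the expected manifests (RUN_DIR is hypothetical).
RUN_DIR=runs/example-run
for f in resolved_config.json runtime_env.json effective_runtime.json \
         pipeline_manifest.json experiment_manifest.json run_metadata.json; do
  [ -f "$RUN_DIR/$f" ] && echo "OK       $f" || echo "MISSING  $f"
done
```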

Step 3: Validate Or Falsify With Targeted Tests (Optional, But High Value)

  • Prefer running existing targeted tests first.
  • If a hypothesis needs a minimal repro, write a temporary test (see the sketch after this list):
    • Put it in /tmp/ and run it with PYTHONPATH=. so the repo stays unchanged.
    • Keep it tiny and single-purpose; delete it afterwards (or ask before deleting if the user wants to keep it).
  • When tests are too expensive to run, provide a verification plan with expected artifacts and failure signals.
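A sketch of the /tmp/ probe pattern from this step; the imported symbol and the asserted invariant are placeholders, not real CoordExp APIs:

```bash
# Write a single-purpose probe under /tmp/ and run it from the repo root.
cat > /tmp/test_probe.py <<'EOF'
# Hypothetical repro: swap the import and assertion for the symbol under audit.
from src.datasets.geometry import clamp_bbox  # placeholder symbol, assumed for illustration

def test_bbox_stays_in_bounds():
    assert clamp_bbox((0, 0, 10, 10), width=8, height=8) == (0, 0, 8, 8)
EOF
PYTHONPATH=. python -m pytest /tmp/test_probe.py -q
rm /tmp/test_probe.py  # delete afterwards, per the guardrails
```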

Step 4: Write The Audit Report

  • Lead with findings (ranked). Each finding must include:
    • Evidence handle (path:line, config key, or command output summary).
    • Why it matters (correctness/repro/eval validity/maintainability).
    • Suggested fix direction (for the implementer) and how to verify.
  • Add “confirmed OK / ruled out” checks that reduce backtracking.
  • End with open questions (only what’s truly needed).

Resources (optional)

Open these only when helpful (progressive disclosure):

  • references/report-template.md: audit report skeleton (P0/P1/P2 + evidence + verification).
  • references/grep-seeds.md: high-signal rg starting points for broad context discovery.
  • references/pipeline-checklist.md: checklist for pipeline/process correctness and reproducibility risks.

Related skills

Looking for an alternative to audit-review or another community skill for your workflow? Explore these related open-source skills.

openclaw-release-maintainer

openclaw

🦞 OpenClaw Release Maintainer: use this skill for release and publish-time workflows. It covers ai, assistant, and crustacean workflows. Supports Claude Code, Cursor, and Windsurf.

widget-generator

f

Generate customizable widget plugins for the prompts.chat feed system. This skill guides creation of widget plugins for prompts.chat. It covers ai, artificial-intelligence, and awesome-list workflows. Supports Claude Code and Cursor.

flags

vercel

The React Framework. Use this Feature Flags skill when adding or changing framework feature flags in Next.js internals. It covers blog, browser, and compiler workflows. Supports Claude Code, Cursor, and Windsurf.

pr-review

pytorch

Usage modes: if the user invokes /pr-review with no arguments, do not perform a review. It covers autograd, deep-learning, and gpu workflows. Supports Claude Code, Cursor, and Windsurf.