ai-verify — ai-agents, ai-engineering, community, ide skills, claude-code, developer-tools, devsecops, github-copilot

v1.0.0

About This Skill

Ideal for code-review agents that need comprehensive content analysis and a verification protocol. Use when you need to PROVE a claim with evidence, run quality/security scans, or validate that work is actually complete. Evidence before claims -- no "should work" allowed.

# Core Topics

arcasilesgroup
Updated: 3/18/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 6/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

Concrete use-case guidance · Explicit limitations and caution · Quality floor passed for review

  • Review Score: 6/11
  • Quality Score: 54
  • Canonical Locale: en
  • Detected Body Locale: en


Core Value

Equips agents to enforce quality, security, and governance through git hooks and multi-mode scanning, using a verification protocol and scanners to keep every claim evidence-based. Technical capabilities include command-based proof and exit-code checks.

Suitable Agent Types

Ideal for code-review agents that need comprehensive content analysis and a verification protocol.

Key Capabilities · ai-verify

Verify code quality before claiming a feature works
Scan for security vulnerabilities before deploying
Validate governance and compliance in AI workspaces

! Usage Limits & Requirements

  • Requires git repository access
  • Limited to repositories with compatible git hooks

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.
  • The page lacks a strong recommendation layer.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Next Steps After the Review

Decide on an action first, then continue to the upstream repository material.

The primary value of Killer-Skills should not stop at "opening the repository readme for you"; it should first help you judge whether this skill is worth installing, whether it should go back to a trusted collection for re-review, and whether it has already reached the workflow-adoption stage.

Lab Demo

Browser Sandbox Environment


Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.


FAQ & Installation Steps

The questions and steps below mirror the page's structured data, making the content easier for search engines to understand.

? FAQ

What is ai-verify?

Ideal for code-review agents that need comprehensive content analysis and a verification protocol. Use when you need to PROVE a claim with evidence, run quality/security scans, or validate that work is actually complete. Evidence before claims -- no "should work" allowed.

How do I install ai-verify?

Run: npx killer-skills add arcasilesgroup/ai-engineering/ai-verify. It supports 19+ IDEs/Agents, including Cursor, Windsurf, VS Code, and Claude Code.

Which scenarios suit ai-verify?

Typical scenarios include: verifying code quality before claiming a feature works, scanning for security vulnerabilities before deploying, and validating governance and compliance in AI workspaces.

Which IDEs or Agents does ai-verify support?

The skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. A single Killer-Skills CLI command installs it everywhere.

What are ai-verify's limitations?

It requires git repository access and is limited to repositories with compatible git hooks.

Installation Steps

  1. Open a terminal

    Open a terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add arcasilesgroup/ai-engineering/ai-verify. The CLI detects your IDE or AI Agent automatically and completes the configuration.

  3. Start using the skill

    ai-verify is now enabled and can be invoked immediately in the current project.

! Reference-Page Mode

This page is still useful as an installation and lookup reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review verdict above first, then decide whether to continue to the upstream repository material.

Upstream Repository Material


Upstream Source

ai-verify

Install ai-verify, an AI Agent Skill for AI agent workflows and automation. See the review verdict, use cases, and installation path.

SKILL.md
Readonly
Supporting Evidence

Verify

Purpose

Evidence before claims. This skill has two faces: (1) a verification protocol that proves claims with commands, and (2) a multi-mode scanner for quality, security, and governance. Both share the same principle: run the command, read the output, check the exit code. No guessing.

When to Use

  • Before claiming "it works" (run the test, show the output)
  • Before claiming "it's secure" (run the scan, show the findings)
  • Before claiming "Done!" (verify every acceptance criterion with evidence)
  • When running quality/security/governance scans on a codebase

Process

Verification Protocol (claim mode)

For every claim, follow IRRV:

I -- IDENTIFY: What command proves this claim?

  • "Tests pass" -> uv run pytest tests/ -v
  • "No lint errors" -> ruff check .
  • "No secrets" -> gitleaks protect --staged
  • "File exists" -> ls -la path/to/file

R -- RUN: Execute the FULL command. Not a subset. Not from memory. Fresh execution.

R -- READ: Read the FULL output. Check:

  • Exit code (0 = success, non-zero = failure)
  • Warning lines (even with exit code 0)
  • Actual numbers (test count, coverage %, finding count)

V -- VERIFY: Does the output CONFIRM the claim?

  • If yes: report with evidence (exact command + key output lines)
  • If no: report the discrepancy. Do not claim success.
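The IRRV loop above can be sketched in Python. Note this is a minimal illustration of the protocol, not part of the skill itself; `verify_claim` and its result shape are hypothetical.

```python
import subprocess

def verify_claim(claim, command):
    """IRRV sketch: RUN the identified command fresh, READ its full
    output and exit code, and VERIFY before reporting. The claim is
    confirmed only when the exit code is 0."""
    result = subprocess.run(command, capture_output=True, text=True)
    return {
        "claim": claim,
        "command": " ".join(command),
        "exit_code": result.returncode,
        # Key output lines to quote as evidence in the report
        "output_tail": result.stdout.strip().splitlines()[-5:],
        "confirmed": result.returncode == 0,
    }

# Example: prove "file exists" with a fresh execution, not from memory
print(verify_claim("README exists", ["ls", "-la", "README.md"]))
```

If `confirmed` comes back `False`, the protocol says to report the discrepancy verbatim rather than claim success.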

Forbidden words (never use these without evidence):

  • "should work", "probably fine", "seems to", "looks good"
  • "Done!", "Perfect!", "All set!"
  • "I believe", "I think", "most likely"
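A completion report can be screened for the forbidden words mechanically. This is a sketch assuming a simple case-insensitive substring match; the skill's actual enforcement may differ.

```python
# Forbidden phrases from the protocol above, lowercased for matching
FORBIDDEN_PHRASES = [
    "should work", "probably fine", "seems to", "looks good",
    "done!", "perfect!", "all set!",
    "i believe", "i think", "most likely",
]

def flag_unhedged_claims(report: str) -> list[str]:
    """Return every forbidden phrase that appears in a completion report."""
    text = report.lower()
    return [p for p in FORBIDDEN_PHRASES if p in text]

print(flag_unhedged_claims("Done! The tests should work now."))
# → ['should work', 'done!']
```

An empty result does not make a report valid by itself; each claim still needs its command-based evidence.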

Scan Modes (7 parallel modes)

| Mode | Command | What it assesses |
|---|---|---|
| governance | /ai-verify governance | Integrity, compliance, ownership boundaries |
| security | /ai-verify security | OWASP SAST, secret detection, dependency vulns |
| quality | /ai-verify quality | Coverage, complexity, duplication, lint |
| performance | /ai-verify performance | N+1 queries, O(n^2), memory leaks, bundle size |
| a11y | /ai-verify a11y | WCAG 2.1 AA compliance |
| feature | /ai-verify feature | Spec vs code gaps, disconnected implementations |
| architecture | /ai-verify architecture | Drift, coupling, cohesion, boundaries |
| platform | /ai-verify platform | All 7 modes aggregated -> GO/NO-GO |

Auto-detect: when invoked without a mode, infer from context.
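One plausible shape for that auto-detection is keyword inference over the request. The keyword lists and fallback here are assumptions for illustration, not the skill's actual heuristics.

```python
# Hypothetical context-to-mode hints; first matching mode wins
MODE_HINTS = {
    "security": ["secret", "cve", "vulnerability", "owasp"],
    "quality": ["coverage", "lint", "complexity", "duplication"],
    "performance": ["slow", "n+1", "memory leak", "bundle"],
    "a11y": ["wcag", "accessibility", "screen reader"],
}

def infer_mode(context: str, default: str = "platform") -> str:
    """Pick the first mode whose keywords appear in the request;
    fall back to the aggregate platform mode."""
    text = context.lower()
    for mode, hints in MODE_HINTS.items():
        if any(h in text for h in hints):
            return mode
    return default

print(infer_mode("check the repo for leaked secrets"))  # → security
```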

Scan Output Contract

Every scan mode produces:

```markdown
## Score: N/100
## Verdict: PASS | WARN | FAIL

## Findings
| # | Severity | Category | Description | Location | Remediation |

## Gate Check
- Blocker findings: N (threshold: 0)
- Critical findings: N (threshold: 0)
```
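A scanner emitting that contract might look like the sketch below. The PASS/WARN boundaries (>= 80 PASS, >= 60 WARN) are illustrative assumptions; only the FAIL-on-blocker/critical rule comes from the gate thresholds in this document.

```python
def render_scan_report(score, findings):
    """Render the scan output contract: score, verdict, findings
    table, and gate check, as a markdown string."""
    blockers = sum(1 for f in findings if f["severity"] == "blocker")
    criticals = sum(1 for f in findings if f["severity"] == "critical")
    if blockers or criticals or score < 60:
        verdict = "FAIL"
    else:
        verdict = "PASS" if score >= 80 else "WARN"
    lines = [
        f"## Score: {score}/100",
        f"## Verdict: {verdict}",
        "",
        "## Findings",
        "| # | Severity | Category | Description | Location | Remediation |",
    ]
    for i, f in enumerate(findings, 1):
        lines.append(
            f"| {i} | {f['severity']} | {f['category']} | "
            f"{f['description']} | {f['location']} | {f['remediation']} |"
        )
    lines += [
        "",
        "## Gate Check",
        f"- Blocker findings: {blockers} (threshold: 0)",
        f"- Critical findings: {criticals} (threshold: 0)",
    ]
    return "\n".join(lines)
```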

Scan Thresholds

| Mode | Blocker if... | Critical if... |
|---|---|---|
| governance | Any integrity FAIL | Any compliance FAIL |
| security | Critical/high CVE | Any secret detected |
| quality | Coverage < 80% | Blocker/critical lint |
| performance | N+1 in critical path | O(n^2) in hot path |
| architecture | Circular dependency | Critical drift from spec |
| platform | Any blocker in ANY mode | Score < 60 |
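The platform row's GO/NO-GO decision can be sketched as follows. The input shape is an assumption; per the table, any blocker in any mode or a score below 60 should block.

```python
def platform_gate(mode_results):
    """mode_results: {mode: {"score": int, "blockers": int}}.
    Returns "NO-GO" on any blocker in any mode, or when the
    average score across modes falls below 60; otherwise "GO"."""
    if any(r["blockers"] > 0 for r in mode_results.values()):
        return "NO-GO"
    avg = sum(r["score"] for r in mode_results.values()) / len(mode_results)
    return "NO-GO" if avg < 60 else "GO"

print(platform_gate({
    "security": {"score": 90, "blockers": 0},
    "quality": {"score": 75, "blockers": 0},
}))  # → GO
```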

Verification Checklist (use before claiming DONE)

- [ ] Every acceptance criterion verified with a command
- [ ] All tests pass (exact count reported)
- [ ] Lint/format clean (zero warnings)
- [ ] No secrets in staged files
- [ ] Coverage maintained or improved (exact % reported)
- [ ] No forbidden words used in the completion report
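The checklist lends itself to a small runner that executes every check fresh and only allows a DONE claim when all exit codes are 0. The commands below are echo placeholders standing in for your stack's real tools (e.g. `uv run pytest tests/ -v`, `ruff check .`).

```python
import subprocess

# Placeholder checklist; swap in your stack's actual commands
CHECKLIST = {
    "tests pass": ["echo", "12 passed in 0.4s"],   # e.g. uv run pytest tests/ -v
    "lint clean": ["echo", "All checks passed!"],  # e.g. ruff check .
}

def run_checklist(checks):
    """Run every check as a fresh execution and record pass/fail
    by exit code; DONE may be claimed only if all checks pass."""
    results = {
        name: subprocess.run(cmd, capture_output=True).returncode == 0
        for name, cmd in checks.items()
    }
    return all(results.values()), results

ok, detail = run_checklist(CHECKLIST)
print("DONE" if ok else "NOT DONE", detail)
```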

Common Mistakes

  • Claiming success without running the command
  • Running a subset of tests instead of the full suite
  • Ignoring warnings when exit code is 0
  • Using forbidden words ("should work") instead of evidence
  • Not checking exit codes
  • Reporting coverage from memory instead of from the tool output

Integration

  • Called by: /ai-dispatch (post-task review), ai-build agent (after implementation), user directly
  • Calls: stack-specific tools (pytest, ruff, gitleaks, etc.)
  • Read-only: never modifies source code -- produces findings with remediation

$ARGUMENTS

Related Skills

Looking for an alternative to ai-verify, or a complementary community Skill to pair with it? Explore these related open-source skills.

View all

  • openclaw-release-maintainer (openclaw) -- Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞 -- 333.8k · AI
  • widget-generator (f) -- Generates customizable plugin widgets for the prompts.chat feedback system -- 149.6k · AI
  • flags (vercel) -- React framework -- 138.4k · Browser
  • pr-review (pytorch) -- Tensors and dynamic neural networks in Python with strong GPU acceleration -- 98.6k · Developer Tools