quality-metrics — community skill · ai-news-influencer · IDE skills

v1.0.0

About This Skill

For AI agents that need advanced quality monitoring and DORA metrics analysis to optimize software development. Measure quality effectively with actionable metrics. Use when establishing quality dashboards, defining KPIs, or evaluating test effectiveness.

natea

Updated: 1/13/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 6/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
Review Score: 6/11
Quality Score: 56
Canonical Locale: en
Detected Body Locale: en


Core Value

Gives agents the ability to track key performance indicators such as deployment frequency, lead time, and change failure rate; to use engagement metrics to refine strategy; and to raise code quality through quality gates and thresholds.

Suitable Agent Types

For AI agents that need advanced quality monitoring and DORA metrics analysis to optimize software development.

Key Capabilities · quality-metrics

  • Monitor Twitter for relevant news and trends to inform development
  • Generate engaging LinkedIn content based on quality metrics and DORA principles
  • Analyze bug escape rate and mean time to detect (MTTD) to improve processes

! Usage Limitations and Thresholds

  • Requires access to the Twitter and LinkedIn APIs for social-media monitoring and content generation
  • Focuses only on DORA metrics, which may not suit every development environment or team

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.
  • The page lacks a strong recommendation layer.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Next Steps After the Review

Decide on an action first, then continue to the upstream repository material.

The primary value of Killer-Skills should not stop at "opening the repository README for you". It should first help you judge whether this skill is worth installing, whether it should go back to a trusted collection for re-review, and whether it has already reached the workflow-adoption stage.

Lab Demo

Browser Sandbox Environment

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

FAQ and Installation Steps

The questions and steps below are consistent with the page's structured data, which helps search engines understand the content.

? FAQ

What is quality-metrics?

For AI agents that need advanced quality monitoring and DORA metrics analysis to optimize software development. Measure quality effectively with actionable metrics. Use when establishing quality dashboards, defining KPIs, or evaluating test effectiveness.

How do I install quality-metrics?

Run: npx killer-skills add natea/ai-news-influencer/quality-metrics. It supports 19+ IDEs/Agents, including Cursor, Windsurf, VS Code, and Claude Code.

Which scenarios is quality-metrics suited for?

Typical scenarios include: monitoring Twitter for relevant news and trends to inform development, generating engaging LinkedIn content based on quality metrics and DORA principles, and analyzing bug escape rate and mean time to detect (MTTD) to improve processes.

Which IDEs or Agents does quality-metrics support?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Install it universally with a single Killer-Skills CLI command.

What are quality-metrics's limitations?

It requires access to the Twitter and LinkedIn APIs for social-media monitoring and content generation, and it focuses only on DORA metrics, which may not suit every development environment or team.

Installation Steps

  1. Open a terminal

    Open a terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add natea/ai-news-influencer/quality-metrics. The CLI detects your IDE or AI Agent automatically and completes the configuration.

  3. Start using the skill

    quality-metrics is now enabled and can be invoked immediately in the current project.

! Reference-Page Mode

This page can still serve as an installation and lookup reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review conclusions above first, then decide whether to continue to the upstream repository material.

Upstream Repository Material


Upstream Source

quality-metrics

Install quality-metrics, an AI Agent Skill for AI agent workflows and automation. See the review conclusions, use cases, and installation path.

SKILL.md
Readonly
Supporting Evidence

Quality Metrics

<default_to_action> When measuring quality or building dashboards:

  1. MEASURE outcomes (bug escape rate, MTTD) not activities (test count)
  2. FOCUS on DORA metrics: Deployment frequency, Lead time, MTTD, MTTR, Change failure rate
  3. AVOID vanity metrics: 100% coverage means nothing if tests don't catch bugs
  4. SET thresholds that drive behavior (quality gates block bad code)
  5. TREND over time: Direction matters more than absolute numbers

Quick Metric Selection:

  • Speed: Deployment frequency, lead time for changes
  • Stability: Change failure rate, MTTR
  • Quality: Bug escape rate, defect density, test effectiveness
  • Process: Code review time, flaky test rate

Critical Success Factors:

  • Metrics without action are theater
  • What you measure is what you optimize
  • Trends matter more than snapshots

</default_to_action>

Quick Reference Card

When to Use

  • Building quality dashboards
  • Defining quality gates
  • Evaluating testing effectiveness
  • Justifying quality investments

Meaningful vs Vanity Metrics

| ✅ Meaningful | ❌ Vanity |
|---|---|
| Bug escape rate | Test case count |
| MTTD (detection) | Lines of test code |
| MTTR (recovery) | Test executions |
| Change failure rate | Coverage % (alone) |
| Lead time for changes | Requirements traced |

DORA Metrics

| Metric | Elite | High | Medium | Low |
|---|---|---|---|---|
| Deploy Frequency | On-demand | Weekly | Monthly | Yearly |
| Lead Time | < 1 hour | < 1 week | < 1 month | > 6 months |
| Change Failure Rate | < 5% | < 15% | < 30% | > 45% |
| MTTR | < 1 hour | < 1 day | < 1 week | > 1 month |
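The tier boundaries above can be encoded directly. A minimal sketch for two of the metrics, assuming hours as the unit for lead time; the function names are illustrative, and since the table leaves gaps between tiers (e.g. between "< 1 month" and "> 6 months"), this sketch treats anything above the Medium bound as Low:

```typescript
// Sketch of the DORA tier boundaries from the table above.
type Tier = "Elite" | "High" | "Medium" | "Low";

// Lead time for changes: < 1 hour, < 1 week, < 1 month, else Low.
function classifyLeadTime(hours: number): Tier {
  if (hours < 1) return "Elite";
  if (hours < 24 * 7) return "High";
  if (hours < 24 * 30) return "Medium";
  return "Low";
}

// Change failure rate, as a percentage: < 5%, < 15%, < 30%, else Low.
function classifyChangeFailureRate(percent: number): Tier {
  if (percent < 5) return "Elite";
  if (percent < 15) return "High";
  if (percent < 30) return "Medium";
  return "Low";
}
```

A team deploying weekly with a 20% change failure rate would score High on speed but only Medium on stability, which is exactly the kind of mixed signal a trend dashboard should surface.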

Quality Gate Thresholds

| Metric | Blocking Threshold | Warning |
|---|---|---|
| Test pass rate | 100% | - |
| Critical coverage | > 80% | > 70% |
| Security critical | 0 | - |
| Performance p95 | < 200ms | < 500ms |
| Flaky tests | < 2% | < 5% |

Core Metrics

Bug Escape Rate

Bug Escape Rate = (Production Bugs / Total Bugs Found) × 100

Target: < 10% (90% caught before production)

Test Effectiveness

Test Effectiveness = (Bugs Found by Tests / Total Bugs) × 100

Target: > 70%

Defect Density

Defect Density = Defects / KLOC

Good: < 1 defect per KLOC

Mean Time to Detect (MTTD)

MTTD = Time(Bug Reported) - Time(Bug Introduced)

Target: < 1 day for critical, < 1 week for others
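The ratio formulas above can be sketched as plain functions (the names are illustrative, not part of any skill API):

```typescript
// Plain-function sketch of the core metric formulas above.
function bugEscapeRate(productionBugs: number, totalBugs: number): number {
  return (productionBugs / totalBugs) * 100; // target: < 10%
}

function testEffectiveness(bugsFoundByTests: number, totalBugs: number): number {
  return (bugsFoundByTests / totalBugs) * 100; // target: > 70%
}

function defectDensity(defects: number, linesOfCode: number): number {
  return defects / (linesOfCode / 1000); // defects per KLOC; good: < 1
}
```

For example, 5 production bugs out of 50 total bugs found gives a 10% escape rate, right at the target; 12 defects in a 24 KLOC codebase gives a density of 0.5.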

Dashboard Design

```typescript
// Agent generates quality dashboard
await Task("Generate Dashboard", {
  metrics: {
    delivery: ['deployment-frequency', 'lead-time', 'change-failure-rate'],
    quality: ['bug-escape-rate', 'test-effectiveness', 'defect-density'],
    stability: ['mttd', 'mttr', 'availability'],
    process: ['code-review-time', 'flaky-test-rate', 'coverage-trend']
  },
  visualization: 'grafana',
  alerts: {
    critical: { bug_escape_rate: '>20%', mttr: '>24h' },
    warning: { coverage: '<70%', flaky_rate: '>5%' }
  }
}, "qe-quality-analyzer");
```

Quality Gate Configuration

```json
{
  "qualityGates": {
    "commit": {
      "coverage": { "min": 80, "blocking": true },
      "lint": { "errors": 0, "blocking": true }
    },
    "pr": {
      "tests": { "pass": "100%", "blocking": true },
      "security": { "critical": 0, "blocking": true },
      "coverage_delta": { "min": 0, "blocking": false }
    },
    "release": {
      "e2e": { "pass": "100%", "blocking": true },
      "performance_p95": { "max_ms": 200, "blocking": true },
      "bug_escape_rate": { "max": "10%", "blocking": false }
    }
  }
}
```
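Each check in the configuration above pairs a threshold with a blocking flag. A minimal evaluator sketch: the GateCheck shape and function name are assumptions for illustration, not the Killer-Skills API, and threshold comparison is assumed to have happened upstream:

```typescript
// Illustrative sketch: evaluate a flat list of gate checks.
interface GateCheck {
  name: string;
  passed: boolean;
  blocking: boolean; // blocking failures fail the gate; others only warn
}

function evaluateGate(checks: GateCheck[]): { pass: boolean; warnings: string[] } {
  const pass = checks.every(c => c.passed || !c.blocking);
  const warnings = checks.filter(c => !c.passed && !c.blocking).map(c => c.name);
  return { pass, warnings };
}
```

Under this scheme a PR with all blocking checks green but a negative coverage delta passes with a warning rather than being blocked, matching the "blocking": false entries above.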

Agent-Assisted Metrics

```typescript
// Calculate quality trends
await Task("Quality Trend Analysis", {
  timeframe: '90d',
  metrics: ['bug-escape-rate', 'mttd', 'test-effectiveness'],
  compare: 'previous-90d',
  predictNext: '30d'
}, "qe-quality-analyzer");

// Evaluate quality gate
await Task("Quality Gate Evaluation", {
  buildId: 'build-123',
  environment: 'staging',
  metrics: currentMetrics,
  policy: qualityPolicy
}, "qe-quality-gate");
```

Agent Coordination Hints

Memory Namespace

aqe/quality-metrics/
├── dashboards/*         - Dashboard configurations
├── trends/*             - Historical metric data
├── gates/*              - Gate evaluation results
└── alerts/*             - Triggered alerts

Fleet Coordination

```typescript
const metricsFleet = await FleetManager.coordinate({
  strategy: 'quality-metrics',
  agents: [
    'qe-quality-analyzer',        // Trend analysis
    'qe-test-executor',           // Test metrics
    'qe-coverage-analyzer',       // Coverage data
    'qe-production-intelligence', // Production metrics
    'qe-quality-gate'             // Gate decisions
  ],
  topology: 'mesh'
});
```

Common Traps

| Trap | Problem | Solution |
|---|---|---|
| Coverage worship | 100% coverage, bugs still escape | Measure bug escape rate instead |
| Test count focus | Many tests, slow feedback | Measure execution time |
| Activity metrics | Busy work, no outcomes | Measure outcomes (MTTD, MTTR) |
| Point-in-time | Snapshot without context | Track trends over time |


Remember

Measure outcomes, not activities. Bug escape rate > test count. MTTD/MTTR > coverage %. Trends > snapshots. Set gates that block bad code. What you measure is what you optimize.

With Agents: Agents track metrics automatically, analyze trends, trigger alerts, and make gate decisions. Use agents to maintain continuous quality visibility.

Related Skills

Looking for alternatives to quality-metrics, or similar community Skills to pair with it? Explore the related open-source skills below.

View All

  • openclaw-release-maintainer (openclaw): Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
  • widget-generator (f): Generates customizable plugin widgets for the prompts.chat feedback system
  • flags (vercel): React framework
  • pr-review (pytorch): Tensors and dynamic neural networks in Python with strong GPU acceleration