quality-metrics

v1.0.0
GitHub

About this Skill

Perfect for AI Agentic API Testing Agents needing advanced quality metrics analysis and DORA metrics integration. AI Agentic API Testing Platform - Automated testing with specialized ephemeral agents

Core Topics

proffesor-for-testing
Updated: 3/1/2026

Quality Score

Top 5%
61
Excellent
Based on code quality & docs
Installation
Universal Install (Auto-Detect)
> npx killer-skills add proffesor-for-testing/sentinel-api-testing/quality-metrics
Supports 19+ Platforms
Cursor
Windsurf
VS Code
Trae
Claude
OpenClaw
+12 more

Agent Capability Analysis

The quality-metrics skill by proffesor-for-testing is an open-source community AI agent skill for Claude Code and other IDE workflows. It helps agents execute tasks with better context, repeatability, and domain-specific guidance.

Ideal Agent Persona

Perfect for AI Agentic API Testing Agents needing advanced quality metrics analysis and DORA metrics integration.

Core Value

Empowers agents to measure outcomes with metrics like bug escape rate and MTTD, focus on key DORA metrics such as deployment frequency and lead time, and set thresholds that drive behavior. It avoids vanity metrics and trends quality over time, working with API testing protocols and JSON data.

Capabilities Granted for quality-metrics

Automating quality metrics analysis for AI-powered applications
Generating DORA metrics dashboards for DevOps teams
Debugging low-quality code deployments using MTTD and MTTR metrics

Prerequisites & Limits

  • Requires specialized ephemeral agents for automated testing
  • Limited to measuring outcomes and DORA metrics, not activities or vanity metrics
Project
SKILL.md
5.8 KB
.cursorrules
1.2 KB
package.json
240 B
SKILL.md

Quality Metrics

<default_to_action> When measuring quality or building dashboards:

  1. MEASURE outcomes (bug escape rate, MTTD) not activities (test count)
  2. FOCUS on DORA metrics: Deployment frequency, Lead time, MTTD, MTTR, Change failure rate
  3. AVOID vanity metrics: 100% coverage means nothing if tests don't catch bugs
  4. SET thresholds that drive behavior (quality gates block bad code)
  5. TREND over time: Direction matters more than absolute numbers

Quick Metric Selection:

  • Speed: Deployment frequency, lead time for changes
  • Stability: Change failure rate, MTTR
  • Quality: Bug escape rate, defect density, test effectiveness
  • Process: Code review time, flaky test rate

Critical Success Factors:

  • Metrics without action are theater
  • What you measure is what you optimize
  • Trends matter more than snapshots </default_to_action>
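The "trend over time" guidance above can be sketched as a small helper that compares the most recent window of measurements against the previous one. The function name, window size, and the lower-is-better assumption are illustrative, not part of the skill's API:

```typescript
// Sketch: direction matters more than absolute numbers.
// Compares the mean of the last `window` values against the window before it.
function trendDirection(series: number[], window = 3): "improving" | "worsening" | "flat" {
  const recent = series.slice(-window);
  const prior = series.slice(-2 * window, -window);
  const mean = (xs: number[]) => xs.reduce((a, b) => a + b, 0) / xs.length;
  const delta = mean(recent) - mean(prior);
  if (Math.abs(delta) < 1e-9) return "flat";
  // For metrics like bug escape rate, lower is better
  return delta < 0 ? "improving" : "worsening";
}

// Bug escape rate over six periods, falling steadily
console.log(trendDirection([18, 16, 15, 12, 10, 9])); // "improving"
```

An agent could run this over any metric stored in the trends namespace to decide whether a dashboard panel should alert on direction rather than on a single snapshot.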

Quick Reference Card

When to Use

  • Building quality dashboards
  • Defining quality gates
  • Evaluating testing effectiveness
  • Justifying quality investments

Meaningful vs Vanity Metrics

| ✅ Meaningful | ❌ Vanity |
| --- | --- |
| Bug escape rate | Test case count |
| MTTD (detection) | Lines of test code |
| MTTR (recovery) | Test executions |
| Change failure rate | Coverage % (alone) |
| Lead time for changes | Requirements traced |

DORA Metrics

| Metric | Elite | High | Medium | Low |
| --- | --- | --- | --- | --- |
| Deploy Frequency | On-demand | Weekly | Monthly | Yearly |
| Lead Time | < 1 hour | < 1 week | < 1 month | > 6 months |
| Change Failure Rate | < 5% | < 15% | < 30% | > 45% |
| MTTR | < 1 hour | < 1 day | < 1 week | > 1 month |
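As a rough illustration, the Deploy Frequency row of the table could be encoded as a classifier. The function name and the deploys-per-year cutoffs are assumptions made for this sketch, not part of the skill:

```typescript
type DoraLevel = "Elite" | "High" | "Medium" | "Low";

// Map a team's deploys per year onto the Deploy Frequency band:
// on-demand (≥ daily) → Elite, weekly → High, monthly → Medium, else Low.
function classifyDeployFrequency(deploysPerYear: number): DoraLevel {
  if (deploysPerYear >= 365) return "Elite";
  if (deploysPerYear >= 52) return "High";
  if (deploysPerYear >= 12) return "Medium";
  return "Low";
}

console.log(classifyDeployFrequency(400)); // "Elite"
console.log(classifyDeployFrequency(24));  // "Medium"
```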

Quality Gate Thresholds

| Metric | Blocking Threshold | Warning |
| --- | --- | --- |
| Test pass rate | 100% | - |
| Critical coverage | > 80% | > 70% |
| Security critical | 0 | - |
| Performance p95 | < 200ms | < 500ms |
| Flaky tests | < 2% | < 5% |

Core Metrics

Bug Escape Rate

Bug Escape Rate = (Production Bugs / Total Bugs Found) × 100

Target: < 10% (90% caught before production)

Test Effectiveness

Test Effectiveness = (Bugs Found by Tests / Total Bugs) × 100

Target: > 70%

Defect Density

Defect Density = Defects / KLOC

Good: < 1 defect per KLOC

Mean Time to Detect (MTTD)

MTTD = Time(Bug Reported) - Time(Bug Introduced)

Target: < 1 day for critical, < 1 week for others
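The four formulas above translate directly into code. A minimal sketch (function names are illustrative):

```typescript
// Bug Escape Rate = (Production Bugs / Total Bugs Found) × 100
function bugEscapeRate(productionBugs: number, totalBugs: number): number {
  return (productionBugs / totalBugs) * 100;
}

// Test Effectiveness = (Bugs Found by Tests / Total Bugs) × 100
function testEffectiveness(bugsFoundByTests: number, totalBugs: number): number {
  return (bugsFoundByTests / totalBugs) * 100;
}

// Defect Density = Defects / KLOC (thousand lines of code)
function defectDensity(defects: number, linesOfCode: number): number {
  return defects / (linesOfCode / 1000);
}

// MTTD = Time(Bug Reported) - Time(Bug Introduced), here in hours
function mttdHours(introduced: Date, reported: Date): number {
  return (reported.getTime() - introduced.getTime()) / 3_600_000;
}

// 8 bugs escaped out of 100 found → 8%, under the < 10% target
console.log(bugEscapeRate(8, 100)); // 8
```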

Dashboard Design

```typescript
// Agent generates quality dashboard
await Task("Generate Dashboard", {
  metrics: {
    delivery: ['deployment-frequency', 'lead-time', 'change-failure-rate'],
    quality: ['bug-escape-rate', 'test-effectiveness', 'defect-density'],
    stability: ['mttd', 'mttr', 'availability'],
    process: ['code-review-time', 'flaky-test-rate', 'coverage-trend']
  },
  visualization: 'grafana',
  alerts: {
    critical: { bug_escape_rate: '>20%', mttr: '>24h' },
    warning: { coverage: '<70%', flaky_rate: '>5%' }
  }
}, "qe-quality-analyzer");
```

Quality Gate Configuration

```json
{
  "qualityGates": {
    "commit": {
      "coverage": { "min": 80, "blocking": true },
      "lint": { "errors": 0, "blocking": true }
    },
    "pr": {
      "tests": { "pass": "100%", "blocking": true },
      "security": { "critical": 0, "blocking": true },
      "coverage_delta": { "min": 0, "blocking": false }
    },
    "release": {
      "e2e": { "pass": "100%", "blocking": true },
      "performance_p95": { "max_ms": 200, "blocking": true },
      "bug_escape_rate": { "max": "10%", "blocking": false }
    }
  }
}
```

Agent-Assisted Metrics

```typescript
// Calculate quality trends
await Task("Quality Trend Analysis", {
  timeframe: '90d',
  metrics: ['bug-escape-rate', 'mttd', 'test-effectiveness'],
  compare: 'previous-90d',
  predictNext: '30d'
}, "qe-quality-analyzer");

// Evaluate quality gate
await Task("Quality Gate Evaluation", {
  buildId: 'build-123',
  environment: 'staging',
  metrics: currentMetrics,
  policy: qualityPolicy
}, "qe-quality-gate");
```

Agent Coordination Hints

Memory Namespace

aqe/quality-metrics/
├── dashboards/*         - Dashboard configurations
├── trends/*             - Historical metric data
├── gates/*              - Gate evaluation results
└── alerts/*             - Triggered alerts

Fleet Coordination

```typescript
const metricsFleet = await FleetManager.coordinate({
  strategy: 'quality-metrics',
  agents: [
    'qe-quality-analyzer',        // Trend analysis
    'qe-test-executor',           // Test metrics
    'qe-coverage-analyzer',       // Coverage data
    'qe-production-intelligence', // Production metrics
    'qe-quality-gate'             // Gate decisions
  ],
  topology: 'mesh'
});
```

Common Traps

| Trap | Problem | Solution |
| --- | --- | --- |
| Coverage worship | 100% coverage, bugs still escape | Measure bug escape rate instead |
| Test count focus | Many tests, slow feedback | Measure execution time |
| Activity metrics | Busy work, no outcomes | Measure outcomes (MTTD, MTTR) |
| Point-in-time | Snapshot without context | Track trends over time |

Remember

Measure outcomes, not activities. Bug escape rate > test count. MTTD/MTTR > coverage %. Trends > snapshots. Set gates that block bad code. What you measure is what you optimize.

With Agents: Agents track metrics automatically, analyze trends, trigger alerts, and make gate decisions. Use agents to maintain continuous quality visibility.

FAQ & Installation Steps


Frequently Asked Questions

What is quality-metrics?

Perfect for AI Agentic API Testing Agents needing advanced quality metrics analysis and DORA metrics integration. AI Agentic API Testing Platform - Automated testing with specialized ephemeral agents

How do I install quality-metrics?

Run the command: npx killer-skills add proffesor-for-testing/sentinel-api-testing/quality-metrics. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for quality-metrics?

Key use cases include: Automating quality metrics analysis for AI-powered applications, Generating DORA metrics dashboards for DevOps teams, Debugging low-quality code deployments using MTTD and MTTR metrics.

Which IDEs are compatible with quality-metrics?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for quality-metrics?

Requires specialized ephemeral agents for automated testing. Limited to measuring outcomes and DORA metrics, not activities or vanity metrics.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add proffesor-for-testing/sentinel-api-testing/quality-metrics. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use quality-metrics immediately in the current project.

Related Skills

Looking for an alternative to quality-metrics or another community skill for your workflow? Explore these related open-source skills.

View All

widget-generator

f

Generate customizable widget plugins for the prompts.chat feed system

149.6k
0
Design

linear

lobehub

Linear issue management. MUST USE when: (1) user mentions LOBE-xxx issue IDs (e.g. LOBE-4540), (2) user says linear, linear issue, link linear, (3) creating PRs that reference Linear issues. Provides

73.4k
0
Communication

testing

lobehub

Testing guide using Vitest. Use when writing tests (.test.ts, .test.tsx), fixing failing tests, improving test coverage, or debugging test issues. Triggers on test creation, test debugging, mock setup

73.3k
0
Communication

zustand

lobehub

Zustand state management guide. Use when working with store code (src/store/**), implementing actions, managing state, or creating slices. Triggers on Zustand store development, state management questions, or action implementation.

72.8k
0
Communication