qa — for Claude Code

Tags: clincher, community, ide skills, agency, ai-agent, ansible, batteries-included, caprover, deployment

v1.0.0

About this skill

Best-fit scenario: ideal for AI agents that need /qa: systematic QA testing. Localized description: 🦞 [2026.03.10] Hardened OpenClaw deployment on a single VPS: one command, production-ready. It covers agency, agents, and ai-agent workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Capabilities

/qa: Systematic QA Testing
Parse the user's request for these parameters:
Parameter    Default      Override example
----------   ----------   -----------------
Target URL   (required)   https://myapp.com, http://localhost:3000

musexmachine

Updated: 4/9/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 10/11

This page remains useful for teams, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review

Review Score: 10/11
Quality Score: 72
Canonical Locale: en
Detected Body Locale: en


Why use this skill

Recommendation: qa helps agents run /qa: systematic QA testing. 🦞 [2026.03.10] Hardened OpenClaw deployment on a single VPS: one command, production-ready. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Best suited for

Best-fit scenario: ideal for AI agents that need /qa: systematic QA testing.

Supported use cases for qa

Use case: applying /qa: Systematic QA Testing
Use case: parsing the user's request for the documented parameters
Use case: applying parameter defaults and override examples

! Safety and limitations

  • Limitation: Requires repository-specific context from the skill documentation
  • Limitation: Works best when the underlying tools and dependencies are already configured

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is qa?

Best-fit scenario: ideal for AI agents that need /qa: systematic QA testing. Localized description: 🦞 [2026.03.10] Hardened OpenClaw deployment on a single VPS: one command, production-ready. It covers agency, agents, and ai-agent workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

How do I install qa?

Run the command: npx killer-skills add musexmachine/clincher/qa. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for qa?

Key use cases include: applying /qa: Systematic QA Testing, parsing the user's request for the documented parameters, and applying parameter defaults and override examples.

Which IDEs are compatible with qa?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for qa?

Limitation: Requires repository-specific context from the skill documentation. Limitation: Works best when the underlying tools and dependencies are already configured.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add musexmachine/clincher/qa. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use qa immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

qa

🦞 [2026.03.10] Hardened OpenClaw deployment on a single VPS: one command, production-ready. It covers agency, agents, and ai-agent workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

SKILL.md
Supporting Evidence

/qa: Systematic QA Testing

You are a QA engineer. Test web applications like a real user — click everything, fill every form, check every state. Produce a structured report with evidence.

Setup

Parse the user's request for these parameters:

Parameter    Default               Override example
----------   -------------------   -----------------
Target URL   (required)            https://myapp.com, http://localhost:3000
Mode         full                  --quick, --regression .gstack/qa-reports/baseline.json
Output dir   .gstack/qa-reports/   Output to /tmp/qa
Scope        Full app              Focus on the billing page
Auth         None                  Sign in to user@example.com, Import cookies from cookies.json

Find the browse binary:

bash
B=$(browse/bin/find-browse 2>/dev/null || ~/.claude/skills/gstack/browse/bin/find-browse 2>/dev/null)
if [ -z "$B" ]; then
  echo "ERROR: browse binary not found"
  exit 1
fi

Create output directories:

bash
REPORT_DIR=".gstack/qa-reports"
mkdir -p "$REPORT_DIR/screenshots"

Modes

Full (default)

Systematic exploration. Visit every reachable page. Document 5-10 well-evidenced issues. Produce health score. Takes 5-15 minutes depending on app size.

Quick (--quick)

30-second smoke test. Visit homepage + top 5 navigation targets. Check: page loads? Console errors? Broken links? Produce health score. No detailed issue documentation.

Regression (--regression <baseline>)

Run full mode, then load baseline.json from a previous run. Diff: which issues are fixed? Which are new? What's the score delta? Append regression section to report.


Workflow

Phase 1: Initialize

  1. Find browse binary (see Setup above)
  2. Create output directories
  3. Copy report template from qa/templates/qa-report-template.md to output dir
  4. Start a timer for duration tracking (see the sketch after this list)
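
A minimal sketch of step 4, assuming a POSIX shell; the variable names are illustrative, not part of the skill:

bash
# Phase 1: record the wall-clock start time.
START_TS=$(date +%s)

# Later, in Phase 6: elapsed minutes for the report metadata.
DURATION_MIN=$(( ($(date +%s) - START_TS) / 60 ))
echo "Duration: ~${DURATION_MIN} min"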

Phase 2: Authenticate (if needed)

If the user specified auth credentials:

bash
$B goto <login-url>
$B snapshot -i                  # find the login form
$B fill @e3 "user@example.com"
$B fill @e4 "[REDACTED]"        # NEVER include real passwords in report
$B click @e5                    # submit
$B snapshot -D                  # verify login succeeded

If the user provided a cookie file:

bash
$B cookie-import cookies.json
$B goto <target-url>

If 2FA/OTP is required: Ask the user for the code and wait.

If CAPTCHA blocks you: Tell the user: "Please complete the CAPTCHA in the browser, then tell me to continue."

Phase 3: Orient

Get a map of the application:

bash
$B goto <target-url>
$B snapshot -i -a -o "$REPORT_DIR/screenshots/initial.png"
$B links              # map navigation structure
$B console --errors   # any errors on landing?

Detect framework (note in report metadata):

  • __next in HTML or _next/data requests → Next.js
  • csrf-token meta tag → Rails
  • wp-content in URLs → WordPress
  • Client-side routing with no page reloads → SPA

For SPAs: The links command may return few results because navigation is client-side. Use snapshot -i to find nav elements (buttons, menu items) instead.
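
The detection heuristics above could be scripted as a quick pass over the landing-page HTML. This is a sketch, not part of the skill, and assumes the target is reachable with curl; the marker strings come from the list above:

bash
# Hypothetical detector: classify the framework from markers in the raw HTML.
HTML=$(curl -fsSL "$TARGET_URL")

if   echo "$HTML" | grep -q '__next';     then echo "framework: Next.js"
elif echo "$HTML" | grep -q 'csrf-token'; then echo "framework: Rails"
elif echo "$HTML" | grep -q 'wp-content'; then echo "framework: WordPress"
else echo "framework: unknown (possibly SPA; check client-side routing)"
fi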

Phase 4: Explore

Visit pages systematically. At each page:

bash
$B goto <page-url>
$B snapshot -i -a -o "$REPORT_DIR/screenshots/page-name.png"
$B console --errors

Then follow the per-page exploration checklist (see qa/references/issue-taxonomy.md):

  1. Visual scan — Look at the annotated screenshot for layout issues
  2. Interactive elements — Click buttons, links, controls. Do they work?
  3. Forms — Fill and submit. Test empty, invalid, edge cases
  4. Navigation — Check all paths in and out
  5. States — Empty state, loading, error, overflow
  6. Console — Any new JS errors after interactions?
  7. Responsiveness — Check mobile viewport if relevant:
    bash
    $B viewport 375x812
    $B screenshot "$REPORT_DIR/screenshots/page-mobile.png"
    $B viewport 1280x720

Depth judgment: Spend more time on core features (homepage, dashboard, checkout, search) and less on secondary pages (about, terms, privacy).

Quick mode: Only visit homepage + top 5 navigation targets from the Orient phase. Skip the per-page checklist — just check: loads? Console errors? Broken links visible?
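
A quick-mode pass could be scripted roughly as below, using only the browse commands shown earlier. TOP_NAV is a hypothetical list of URLs collected during the Orient phase, not part of the skill:

bash
# Hypothetical --quick smoke loop: homepage plus top navigation targets.
TOP_NAV=("$TARGET_URL" "$TARGET_URL/about" "$TARGET_URL/pricing")  # illustrative

for url in "${TOP_NAV[@]}"; do
  $B goto "$url"
  $B console --errors   # loads without JS errors?
done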

Phase 5: Document

Document each issue immediately when found — don't batch them.

Two evidence tiers:

Interactive bugs (broken flows, dead buttons, form failures):

  1. Take a screenshot before the action
  2. Perform the action
  3. Take a screenshot showing the result
  4. Use snapshot -D to show what changed
  5. Write repro steps referencing screenshots
bash
$B screenshot "$REPORT_DIR/screenshots/issue-001-step-1.png"
$B click @e5
$B screenshot "$REPORT_DIR/screenshots/issue-001-result.png"
$B snapshot -D

Static bugs (typos, layout issues, missing images):

  1. Take a single annotated screenshot showing the problem
  2. Describe what's wrong
bash
$B snapshot -i -a -o "$REPORT_DIR/screenshots/issue-002.png"

Write each issue to the report immediately using the template format from qa/templates/qa-report-template.md.

Phase 6: Wrap Up

  1. Compute health score using the rubric below
  2. Write "Top 3 Things to Fix" — the 3 highest-severity issues
  3. Write console health summary — aggregate all console errors seen across pages
  4. Update severity counts in the summary table
  5. Fill in report metadata — date, duration, pages visited, screenshot count, framework
  6. Save baseline — write baseline.json with:
    json
    {
      "date": "YYYY-MM-DD",
      "url": "<target>",
      "healthScore": N,
      "issues": [{ "id": "ISSUE-001", "title": "...", "severity": "...", "category": "..." }],
      "categoryScores": { "console": N, "links": N, ... }
    }

Regression mode: After writing the report, load the baseline file. Compare (a diff sketch follows this list):

  • Health score delta
  • Issues fixed (in baseline but not current)
  • New issues (in current but not baseline)
  • Append the regression section to the report
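
One way to compute the diff, as a sketch assuming the current run is also saved in the baseline format and jq is available; current.json is an illustrative name:

bash
# Fixed: issue IDs in the baseline but not the current run.
comm -23 <(jq -r '.issues[].id' baseline.json | sort) \
         <(jq -r '.issues[].id' current.json | sort)

# New: issue IDs in the current run but not the baseline.
comm -13 <(jq -r '.issues[].id' baseline.json | sort) \
         <(jq -r '.issues[].id' current.json | sort)

# Score delta (assumes integer health scores).
echo "delta: $(( $(jq '.healthScore' current.json) - $(jq '.healthScore' baseline.json) ))"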

Health Score Rubric

Compute each category score (0-100), then take the weighted average.

Console (weight: 15%)

  • 0 errors → 100
  • 1-3 errors → 70
  • 4-10 errors → 40
  • 10+ errors → 10

Links (weight: 10%)

  • 0 broken links → 100
  • Each broken link → -15 (minimum 0)
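
Expressed as a tiny helper (a sketch; the function name is illustrative):

bash
# 100 minus 15 per broken link, floored at 0.
links_score() {
  local s=$(( 100 - 15 * $1 ))
  (( s < 0 )) && s=0
  echo "$s"
}
links_score 3   # → 55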

Per-Category Scoring (Visual, Functional, UX, Content, Performance, Accessibility)

Each category starts at 100. Deduct per finding:

  • Critical issue → -25
  • High issue → -15
  • Medium issue → -8
  • Low issue → -3

Minimum 0 per category.
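
The same deductions as a quick lookup (a hypothetical helper, not part of the skill):

bash
# Map a finding's severity to its per-category deduction.
deduction() {
  case "$1" in
    critical) echo 25 ;;
    high)     echo 15 ;;
    medium)   echo 8  ;;
    low)      echo 3  ;;
  esac
}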

Weights

Category        Weight
-------------   ------
Console         15%
Links           10%
Visual          10%
Functional      20%
UX              15%
Performance     10%
Content         5%
Accessibility   15%

Final Score

score = Σ (category_score × weight)
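
A worked example with illustrative category scores (not from a real run), using the weights above; bash 4+ associative arrays assumed:

bash
declare -A SCORE=(  [console]=70 [links]=85 [visual]=92 [functional]=75
                    [ux]=85 [performance]=90 [content]=100 [accessibility]=60 )
declare -A WEIGHT=( [console]=15 [links]=10 [visual]=10 [functional]=20
                    [ux]=15 [performance]=10 [content]=5 [accessibility]=15 )

TOTAL=0
for c in "${!SCORE[@]}"; do
  TOTAL=$(( TOTAL + SCORE[$c] * WEIGHT[$c] ))
done
echo "health score: $(( TOTAL / 100 ))"   # 7895 / 100 → 78

Here Σ (category_score × weight) = 78.95, truncated to 78 by the integer arithmetic.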


Framework-Specific Guidance

Next.js

  • Check console for hydration errors (Hydration failed, Text content did not match); a grep sketch follows this list
  • Monitor _next/data requests in network — 404s indicate broken data fetching
  • Test client-side navigation (click links, don't just goto) — catches routing issues
  • Check for CLS (Cumulative Layout Shift) on pages with dynamic content
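
The hydration check in the first bullet could be scripted as a grep over the console output. A sketch, assuming `$B console --errors` prints one error per line:

bash
$B console --errors | grep -Ei 'hydration failed|text content did not match' \
  && echo "possible hydration issue"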

Rails

  • Check for N+1 query warnings in console (if development mode)
  • Verify CSRF token presence in forms
  • Test Turbo/Stimulus integration — do page transitions work smoothly?
  • Check for flash messages appearing and dismissing correctly

WordPress

  • Check for plugin conflicts (JS errors from different plugins)
  • Verify admin bar visibility for logged-in users
  • Test REST API endpoints (/wp-json/); a curl sketch follows this list
  • Check for mixed content warnings (common with WP)
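
A sketch for the REST API check, using curl with the endpoint path from the bullet above:

bash
# A 200 with a JSON body suggests the REST API root is exposed.
curl -sS -o /dev/null -w '%{http_code}\n' "$TARGET_URL/wp-json/"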

General SPA (React, Vue, Angular)

  • Use snapshot -i for navigation — links command misses client-side routes
  • Check for stale state (navigate away and back — does data refresh?)
  • Test browser back/forward — does the app handle history correctly?
  • Check for memory leaks (monitor console after extended use)

Important Rules

  1. Repro is everything. Every issue needs at least one screenshot. No exceptions.
  2. Verify before documenting. Retry the issue once to confirm it's reproducible, not a fluke.
  3. Never include credentials. Write [REDACTED] for passwords in repro steps.
  4. Write incrementally. Append each issue to the report as you find it. Don't batch.
  5. Never read source code. Test as a user, not a developer.
  6. Check console after every interaction. JS errors that don't surface visually are still bugs.
  7. Test like a user. Use realistic data. Walk through complete workflows end-to-end.
  8. Depth over breadth. 5-10 well-documented issues with evidence > 20 vague descriptions.
  9. Never delete output files. Screenshots and reports accumulate — that's intentional.
  10. Use snapshot -C for tricky UIs. Finds clickable divs that the accessibility tree misses.

Output Structure

.gstack/qa-reports/
├── qa-report-{domain}-{YYYY-MM-DD}.md    # Structured report
├── screenshots/
│   ├── initial.png                        # Landing page annotated screenshot
│   ├── issue-001-step-1.png               # Per-issue evidence
│   ├── issue-001-result.png
│   └── ...
└── baseline.json                          # For regression mode

Report filenames use the domain and date: qa-report-myapp-com-2026-03-12.md
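
Deriving that filename could look like this sketch; the sed slug rule is illustrative:

bash
# e.g. https://myapp.com → qa-report-myapp-com-2026-03-12.md
DOMAIN=$(echo "$TARGET_URL" | sed -E 's#^https?://##; s#/.*$##; s#[.:]#-#g')
REPORT_FILE="qa-report-${DOMAIN}-$(date +%F).md"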

Related skills

Looking for an alternative to qa or another community skill for your workflow? Explore these related open-source skills.

Show all

openclaw-release-maintainer

openclaw

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

widget-generator

f

Create customizable widget plugins for the prompts.chat news feed system

flags

vercel

The React Framework

138.4k
0
Browser

pr-review

pytorch

Tensors and Dynamic neural networks in Python with strong GPU acceleration

98.6k
0
Developer