qa-playswag — for Claude Code

v1.0.0

About This Skill

Best for: AI agents that need PlaySwag, an API coverage analyzer. Summary: qa-skills / PlaySwag analyzes an OpenAPI spec against your test suite and generates an HTML report with coverage gaps plus ready-made QA automation tasks. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Features

PlaySwag — API Coverage Analyzer
Usage: /playswag [spec-file] [tests-dir] [--js|--ts|--py]
All arguments are optional — missing ones will be asked interactively.
Step 1 — Gather Required Inputs
If $ARGUMENTS contains a path to a .json, .yaml, or .yml file — use it as SPEC_FILE.

# Core Topics

AZANIR
Updated: 3/25/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 10/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
Review Score: 10/11
Quality Score: 70
Canonical Locale: en
Detected Body Locale: en


Why Use This Skill

Recommended description: qa-playswag helps agents run PlaySwag, an API coverage analyzer. It analyzes an OpenAPI spec against your test suite and generates an HTML report with coverage gaps plus ready-made QA automation tasks.

Best For

Best suited for AI agents that need PlaySwag, an API coverage analyzer.

Actionable Use Cases for qa-playswag

Use case: Applying PlaySwag — API Coverage Analyzer
Use case: Running /playswag [spec-file] [tests-dir] [--js|--ts|--py]
Use case: Letting the skill ask interactively for any missing arguments

! Security & Limitations

  • Limitation: TypeScript (npx tsx analyze.ts) requires tsx
  • Limitation: Python (python3 analyze.py) requires Python 3
  • Limitation: Requires repository-specific context from the skill documentation

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is qa-playswag?

qa-playswag packages PlaySwag, an API coverage analyzer. It analyzes an OpenAPI spec against your test suite and generates an HTML report with coverage gaps plus ready-made QA automation tasks. The skill supports Claude Code, Cursor, and Windsurf workflows.

How do I install qa-playswag?

Run the command: npx killer-skills add AZANIR/qa-skills/qa-playswag. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for qa-playswag?

Key use cases include: applying PlaySwag — API Coverage Analyzer; running /playswag [spec-file] [tests-dir] [--js|--ts|--py]; and letting the skill ask interactively for any missing arguments.

Which IDEs are compatible with qa-playswag?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for qa-playswag?

Limitations: TypeScript (npx tsx analyze.ts) requires tsx; Python (python3 analyze.py) requires Python 3; the skill requires repository-specific context from its documentation.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add AZANIR/qa-skills/qa-playswag. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use qa-playswag immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

qa-playswag

Install qa-playswag, an AI agent skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md
Supporting Evidence

PlaySwag — API Coverage Analyzer

Analyze an OpenAPI spec against your test suite and generate an HTML report with coverage gaps + ready-made QA automation tasks.

Usage: /playswag [spec-file] [tests-dir] [--js|--ts|--py]

All arguments are optional — missing ones will be asked interactively.


Step 1 — Gather Required Inputs

Spec file

If $ARGUMENTS contains a path to a .json, .yaml, or .yml file — use it as SPEC_FILE.

Otherwise, ask the user:

"What is the path to your OpenAPI/Swagger spec file? (JSON or YAML)"

Verify the file exists:

```bash
test -f "$SPEC_FILE" && echo "found" || echo "not found"
```

If not found, show the error and ask again.


Tests directory

If $ARGUMENTS contains a second path argument — use it as TESTS_DIR.

Otherwise, auto-detect by checking these directories (relative to the spec file's parent dir, then the cwd): tests/, test/, e2e/, __tests__/, specs/, src/tests/, and finally . (cwd).

If none of these exist, ask the user:

"Where are your test files? (provide directory path, or press Enter to use current directory)"


Language / Runner

Determine LANG from:

  1. --py flag in $ARGUMENTS → LANG=py
  2. --ts flag in $ARGUMENTS → LANG=ts
  3. --js flag in $ARGUMENTS → LANG=js
  4. Auto-detect: check if node is available → LANG=js (recommended default)
  5. Auto-detect: check if python3 is available → LANG=py
  6. If unclear, ask the user:

"Which runner should I use to analyze?"

  • JavaScript (node analyze.js) — runs anywhere with Node.js installed, no extra packages needed ✅
  • TypeScript (npx tsx analyze.ts) — requires tsx
  • Python (python3 analyze.py) — requires Python 3

Step 2 — Locate Scripts

Find skill scripts dir. Try in order:

```bash
# Project-local (preferred)
SCRIPTS=".claude/skills/qa-playswag/scripts"

# Global install
SCRIPTS="$HOME/.claude/skills/qa-playswag/scripts"
```

Use whichever exists:

```bash
[ -f "$SCRIPTS/analyze.js" ] && echo "found" || echo "not found"
```

Step 3 — Run the Analyzer

analyze.js is pure CommonJS — runs with any Node.js 14+, zero dependencies needed for JSON specs. For YAML specs it auto-tries js-yaml, yaml, then falls back to python3.

```bash
node "$SCRIPTS/analyze.js" "$SPEC_FILE" "$TESTS_DIR" [options]
```

TypeScript (LANG=ts) — try runners in order:

```bash
npx --yes tsx "$SCRIPTS/analyze.ts" "$SPEC_FILE" "$TESTS_DIR" [options]
```

If tsx fails, try ts-node or compile+run. If all fail → auto-fallback to analyze.js.

Python (LANG=py):

```bash
python3 "$SCRIPTS/analyze.py" "$SPEC_FILE" "$TESTS_DIR" [options]
```

If PyYAML missing and spec is YAML — pip3 install pyyaml first. If Python fails → auto-fallback to analyze.js (if node available).

Multi-spec & URL support

```bash
# Multiple spec files (merge endpoints, dedup by method+path)
node "$SCRIPTS/analyze.js" spec1.yaml spec2.yaml -- "$TESTS_DIR"

# Spec from URL (fetched and cached in /tmp for 5 min)
node "$SCRIPTS/analyze.js" https://api.example.com/openapi.json "$TESTS_DIR"
```
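The merge-and-dedup step for multi-spec input can be sketched as follows. `merge_endpoints` and the endpoint dict shape are illustrative assumptions, not the actual analyze.js internals:

```python
# Sketch: combine endpoints from several specs, deduplicating by a
# normalized "METHOD path" key; the first occurrence wins.
def merge_endpoints(*specs):
    seen = {}
    for endpoints in specs:
        for ep in endpoints:
            key = f"{ep['method'].upper()} {ep['path']}"
            seen.setdefault(key, ep)
    return list(seen.values())

spec1 = [{"method": "get", "path": "/api/users"}]
spec2 = [{"method": "GET", "path": "/api/users"},
         {"method": "POST", "path": "/api/users"}]
merged = merge_endpoints(spec1, spec2)
print(len(merged))  # 2 unique method+path pairs
```

Normalizing the method to uppercase is what lets `get /api/users` and `GET /api/users` collapse into one endpoint.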

CLI Options (all runtimes)

| Flag | Description |
|------|-------------|
| `--fail-under <pct>` | Exit 1 if endpoint coverage < pct% (CI quality gate) |
| `--output <dir>`, `-o <dir>` | Output directory (default: `./playswag-report`) |
| `--format <list>` | Comma-separated: html, json, tasks, badge, junit (default: all) |
| `--json-only` | Shorthand for `--format json` |
| `--include <patterns>` | Only analyze matching paths (comma-sep, wildcard `*`) |
| `--exclude <patterns>` | Skip matching paths |
| `--include-tags <tags>` | Only analyze endpoints with these tags |
| `--exclude-tags <tags>` | Skip endpoints with these tags |
| `--history` | Append to playswag-history.json and show delta |

Exit Codes

| Code | Meaning |
|------|---------|
| 0 | Success (and coverage >= threshold, if specified) |
| 1 | Coverage below `--fail-under` threshold |
| 2 | Fatal error (spec not found, parse failure) |
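The `--fail-under` gate behind exit codes 0 and 1 can be sketched like this. The `endpointCoverage` key is an assumption for illustration; check the fields in your own summary.json:

```python
# Sketch: apply a --fail-under style quality gate to parsed summary data.
def gate(summary, fail_under):
    coverage = summary.get("endpointCoverage", 0.0)  # assumed field name
    if fail_under is not None and coverage < fail_under:
        return 1  # below threshold
    return 0      # success

print(gate({"endpointCoverage": 72.5}, 80))  # 72.5 < 80 -> 1
print(gate({"endpointCoverage": 91.0}, 80))  # -> 0
```

Exit code 2 (fatal error) would be raised earlier, before any summary exists, so it is out of scope for this sketch.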

CI/CD Example (GitHub Actions)

```yaml
- name: API coverage gate
  run: node .cursor/skills/qa-playswag/scripts/analyze.js openapi.yaml tests/ --fail-under 80 --json-only
```

Step 4 — Show Results

After the script runs, read ./playswag-report/summary.json and display:

## PlaySwag Coverage Report

**Spec:** {spec-file}
**Tests:** {tests-dir}

| Metric                    | Value      |
|---------------------------|------------|
| 📊 Endpoint Coverage      | XX% (A/B)  |
| ✗ Uncovered endpoints     | N          |
| 🗑 Deprecated still tested | N          |
| 📋 QA Tasks created       | N          |
| ⚠ Unmatched test calls    | N          |

### Uncovered Endpoints (N) — by priority:

🔴 HIGH (POST/PUT/DELETE/auth):
  - POST /api/users — Create user [users]
  - DELETE /api/orders/{id} — Cancel order [orders]

🟡 MEDIUM (GET):
  - GET /api/reports — List reports [reports]

### Files:
→ HTML Report: ./playswag-report/report.html
→ QA Tasks:    ./playswag-report/tasks.md
→ Badge:       ./playswag-report/playswag-badge.svg
→ Summary:     ./playswag-report/summary.json

Open: open ./playswag-report/report.html

Step 5 — Error Handling

| Error | What to do |
|-------|------------|
| Spec file not found | Ask user to confirm path |
| JSON parse error | Show first 5 lines of file, ask user to fix |
| YAML parse error | Suggest `npm i js-yaml` or `pip3 install pyyaml` |
| `node` not found | Use Python script |
| `python3` not found | Inform user, suggest `brew install python3` |
| No test files found | Run anyway (0% coverage), inform user |
| Script error | Show stderr, offer manual analysis via Read+Grep |

Manual fallback (if all scripts fail):

  1. Read spec with Read tool
  2. Scan tests with Grep for URL/request patterns
  3. Print text summary in chat
  4. Write tasks.md manually

Notes

Supported spec formats

| Version | Format | Features |
|---------|--------|----------|
| OpenAPI 2.0 (Swagger) | JSON/YAML | basePath, definitions, $ref resolution |
| OpenAPI 3.0 | JSON/YAML | servers[].url, components.schemas, $ref resolution |
| OpenAPI 3.1 | JSON/YAML | Same as 3.0 |

Test files detected

.spec.ts/js, .test.ts/js, .e2e.ts/js, .spec.tsx/jsx, test_*.py, *_test.py
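The detection globs above can be restated as fnmatch patterns; this is an illustrative re-statement of the list, not the analyzer's actual code:

```python
import fnmatch
import os

# Test-file patterns mirroring the list above.
PATTERNS = ["*.spec.ts", "*.spec.js", "*.test.ts", "*.test.js",
            "*.e2e.ts", "*.e2e.js", "*.spec.tsx", "*.spec.jsx",
            "test_*.py", "*_test.py"]

def is_test_file(filename):
    name = os.path.basename(filename)
    return any(fnmatch.fnmatch(name, p) for p in PATTERNS)

print(is_test_file("users.spec.ts"))   # True
print(is_test_file("test_orders.py"))  # True
print(is_test_file("helpers.py"))      # False
```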

HTTP clients detected

Playwright request, axios, fetch, supertest (request(app)), got, ky, httpx, requests, cy.request (Cypress), Node.js http/https, RestAssured (given().when()), aiohttp session, .request('METHOD', path)

Template literal URLs (/api/users/${id}) are partially matched by extracting the static prefix.
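The static-prefix extraction for template-literal URLs can be sketched as below; the regex is illustrative, not the analyzer's exact implementation:

```python
import re

# Sketch: keep only the static part of a URL before the first ${...}
# placeholder, so "/api/users/${id}" is matched by its prefix.
def static_prefix(url):
    return re.split(r"\$\{", url, maxsplit=1)[0]

print(static_prefix("/api/users/${id}"))  # "/api/users/"
print(static_prefix("/api/orders"))       # unchanged: "/api/orders"
```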

Output files

| File | Description |
|------|-------------|
| report.html | Interactive HTML report with filters, search, tag coverage, copy/export/print |
| summary.json | Machine-readable coverage data with coverageByTag, status code and parameter metrics |
| tasks.md | Markdown QA automation tasks sorted by priority |
| playswag-badge.svg | Shields.io-style SVG badge |
| playswag-junit.xml | JUnit XML for CI (uncovered = failure) |
| playswag-history.json | Coverage history for trend tracking (`--history`) |

Coverage dimensions

| Dimension | Method |
|-----------|--------|
| Endpoint | Regex-matched API calls in tests vs spec paths |
| Status code | Assertion patterns (`.expect(200)`, `toHaveStatus()`, `assert status_code ==`) |
| Parameter | Name-match scan (param names from spec appearing in test files) |
| By tag | Per-tag breakdown (spec `tags` field) |
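The endpoint dimension can be sketched as follows: spec paths with `{param}` placeholders become regexes, and URLs scraped from tests are matched against them. This is a simplified assumption; the real matching also handles basePath / servers[].url stripping:

```python
import re

# Sketch: turn "/api/orders/{id}" into a regex and test scraped URLs.
def path_to_regex(spec_path):
    pattern = re.sub(r"\{[^}]+\}", r"[^/]+", spec_path)
    return re.compile("^" + pattern + "$")

spec_paths = ["/api/users", "/api/orders/{id}"]
test_urls = ["/api/orders/42", "/api/users"]

covered = {p for p in spec_paths
           if any(path_to_regex(p).match(u) for u in test_urls)}
print(sorted(covered))  # both spec paths are covered
```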

QA task example

From tasks.md:

```markdown
### TASK-001: Cover `POST /api/users`

| Field | Value |
|-------|-------|
| **Priority** | High |
| **Endpoint** | `POST /api/users` |
| **Auth required** | Yes |

**Acceptance Criteria:**
- [ ] Happy path -> `201`
- [ ] Invalid input -> 400/422
- [ ] Unauthenticated -> 401
```
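How uncovered endpoints could become prioritized tasks.md entries can be sketched like this; the priority rule (write methods = High, GET = Medium) follows the report layout above, but `make_tasks` and its field names are illustrative, not the analyzer's schema:

```python
# Sketch: rank uncovered endpoints and emit numbered task headings.
HIGH_METHODS = {"POST", "PUT", "PATCH", "DELETE"}

def make_tasks(uncovered):
    # High-priority (write) methods sort first.
    ranked = sorted(uncovered,
                    key=lambda ep: ep["method"] not in HIGH_METHODS)
    lines = []
    for i, ep in enumerate(ranked, 1):
        prio = "High" if ep["method"] in HIGH_METHODS else "Medium"
        lines.append(
            f"### TASK-{i:03d}: Cover `{ep['method']} {ep['path']}` ({prio})")
    return "\n".join(lines)

out = make_tasks([{"method": "GET", "path": "/api/reports"},
                  {"method": "POST", "path": "/api/users"}])
print(out)
```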

Known limitations

  • Static analysis only — regex-based; cannot detect dynamic URL construction or runtime-generated paths beyond template literal prefixes.
  • Status code coverage is assertion-based (scans for .expect(200) etc.), not runtime-verified.
  • Parameter coverage uses name-matching heuristic — a param name appearing in a test file is counted as "used", even if it's unrelated.
  • No auth flow verification: the authRequired flag comes from spec security definitions, but actual auth testing is not verified.
  • YAML requires either js-yaml/yaml npm package, PyYAML, or python3 with PyYAML as fallback.

vs MichalFidor/playswag (npm)

| | npm playswag | This skill |
|---|--------------|-----------|
| Approach | Runtime (Playwright intercepts HTTP) | Static (regex scan of test files) |
| Spec | $ref, servers | Resolved $ref; basePath / servers[].url; multi-spec merge |
| Languages | TypeScript tests | JS / TS / Python (same canonical regex set) |
| Output | HTML | HTML + JSON + Markdown + SVG + JUnit XML |
| Coverage | Endpoint only | Endpoint + status code + parameter + by-tag |
| CI | | --fail-under, JUnit XML, --history delta |

Analyzer behavior

  • basePath (Swagger 2.0) and servers[].url path (OAS 3.x) are collected and used when matching test URLs to spec paths (direct match, strip base, or base + template).
  • $ref in the spec is resolved (JSON Pointer #/...) before parsing operations, so request body fields in QA tasks are populated from components.schemas / definitions.
  • Multi-spec (-- separator): endpoints are merged and deduplicated by method:path.
  • URL specs: fetched via curl/wget/node http, cached in /tmp for 5 minutes.
  • YAML fallback (when no js-yaml in Node): the temp spec path is passed via PLAYSWAG_YAML_FILE to Python — not embedded in the shell string.
  • analyze.js is the safest runner — no build step, no extra packages, works in any Node.js 14+ project.
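The same-document JSON Pointer resolution described above can be sketched as follows; this handles only local `#/...` refs, which is all the bullet claims:

```python
# Sketch: walk a "#/components/schemas/User" pointer through the spec,
# applying RFC 6901 escape rules (~1 -> "/", ~0 -> "~").
def resolve_ref(doc, ref):
    assert ref.startswith("#/")
    node = doc
    for part in ref[2:].split("/"):
        part = part.replace("~1", "/").replace("~0", "~")
        node = node[part]
    return node

spec = {"components": {"schemas": {"User": {"type": "object"}}}}
print(resolve_ref(spec, "#/components/schemas/User"))  # {'type': 'object'}
```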

Related Skills

Looking for an alternative to qa-playswag or another community skill for your workflow? Explore these related open-source skills.

View All

openclaw-release-maintainer

openclaw

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

333.8k
0
AI

widget-generator

f

Creates customizable widget plugins for the prompts.chat feed system

149.6k
0
AI

flags

vercel

React framework

138.4k
0
Browser

pr-review

pytorch

Tensors and dynamic neural networks in Python with strong GPU acceleration

98.6k
0
Developer