qa-playswag: PlaySwag API Coverage Analyzer, a Claude Code skill from AZANIR/qa-skills

v1.0.0

About this skill

Ideal for AI agents that need an API coverage analyzer. PlaySwag analyzes an OpenAPI spec against your test suite and generates an HTML report with coverage gaps plus ready-made QA automation tasks. This skill supports Claude Code, Cursor, and Windsurf workflows.

Capabilities

PlaySwag — API Coverage Analyzer
Usage: /playswag [spec-file] [tests-dir] [--js|--ts|--py]
All arguments are optional — missing ones will be asked interactively.
Step 1 — Gather Required Inputs
If $ARGUMENTS contains a path to a .json, .yaml, or .yml file — use it as SPEC_FILE.

Author: AZANIR · Updated: 3/25/2026

Skill Overview

Start with fit, limitations, and setup before diving into the repository.


Why use this skill

qa-playswag helps agents analyze API coverage: it compares an OpenAPI spec against your test suite and generates an HTML report with coverage gaps plus ready-made QA automation tasks.

Best fit

Ideal for AI agents that need an API coverage analyzer.

Use cases for qa-playswag

  • Running the PlaySwag API Coverage Analyzer against a spec and a test suite
  • Invoking /playswag [spec-file] [tests-dir] [--js|--ts|--py]
  • Letting the skill prompt interactively for any missing arguments

! Safety and limitations

  • Limitation: the TypeScript runner (npx tsx analyze.ts) requires tsx
  • Limitation: the Python runner (python3 analyze.py) requires Python 3
  • Limitation: requires repository-specific context from the skill documentation

About The Source

The section below comes from the upstream repository. Use it as supporting material alongside the fit, use-case, and installation summary on this page.


FAQ and installation steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently asked questions

What is qa-playswag?

qa-playswag is an AI agent skill for API coverage analysis. It analyzes an OpenAPI spec against your test suite and generates an HTML report with coverage gaps plus ready-made QA automation tasks. It supports Claude Code, Cursor, and Windsurf workflows.

How do I install qa-playswag?

Run: npx killer-skills add AZANIR/qa-skills/qa-playswag. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What can qa-playswag be used for?

Key scenarios: running the PlaySwag API Coverage Analyzer, invoking /playswag [spec-file] [tests-dir] [--js|--ts|--py], and interactive prompting for any missing arguments.

Which IDEs are compatible with qa-playswag?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. For a unified installation, use the Killer-Skills CLI.

Does qa-playswag have limitations?

Yes: the TypeScript runner (npx tsx analyze.ts) requires tsx; the Python runner (python3 analyze.py) requires Python 3; and the skill requires repository-specific context from the skill documentation.

How to install this skill

  1. Open a terminal

    Open a terminal or command prompt in your project directory.

  2. Run the install command

    Run: npx killer-skills add AZANIR/qa-skills/qa-playswag. The CLI automatically detects your IDE or agent and configures the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use qa-playswag in the current project right away.

! Source Notes

This page is still useful for installation and source reference. Before using it, compare the fit, limitations, and upstream repository notes above.

Upstream Repository Material

SKILL.md (readonly)

PlaySwag — API Coverage Analyzer

Analyze an OpenAPI spec against your test suite and generate an HTML report with coverage gaps + ready-made QA automation tasks.

Usage: /playswag [spec-file] [tests-dir] [--js|--ts|--py]

All arguments are optional — missing ones will be asked interactively.


Step 1 — Gather Required Inputs

Spec file

If $ARGUMENTS contains a path to a .json, .yaml, or .yml file — use it as SPEC_FILE.

Otherwise, ask the user:

"What is the path to your OpenAPI/Swagger spec file? (JSON or YAML)"

Verify the file exists:

```bash
test -f "$SPEC_FILE" && echo "found" || echo "not found"
```

If not found, show the error and ask again.


Tests directory

If $ARGUMENTS contains a second path argument — use it as TESTS_DIR.

Otherwise, auto-detect by checking (relative to the spec file's parent dir, then cwd): tests/, test/, e2e/, __tests__/, specs/, src/tests/, . (cwd).

If none of these exist, ask the user:

"Where are your test files? (provide directory path, or press Enter to use current directory)"
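The auto-detect order above can be sketched as a small shell helper. This is an assumption about the skill's internal logic, not its actual code; `detect_tests_dir` and the demo path are hypothetical names.

```shell
# Hypothetical helper: return the first existing candidate tests directory,
# falling back to the base directory itself (mirrors the order listed above).
detect_tests_dir() {
  base="$1"
  for cand in tests test e2e __tests__ specs src/tests; do
    if [ -d "$base/$cand" ]; then
      echo "$base/$cand"
      return 0
    fi
  done
  echo "$base"  # nothing matched: use the base directory (cwd)
}

# Demo with a throwaway project layout
mkdir -p /tmp/playswag-demo-proj/__tests__
detect_tests_dir /tmp/playswag-demo-proj   # prints /tmp/playswag-demo-proj/__tests__
```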


Language / Runner

Determine LANG from:

  1. --py flag in $ARGUMENTS → LANG=py
  2. --ts flag in $ARGUMENTS → LANG=ts
  3. --js flag in $ARGUMENTS → LANG=js
  4. Auto-detect: check if node is available → LANG=js (recommended default)
  5. Auto-detect: check if python3 is available → LANG=py
  6. If unclear, ask the user:

"Which runner should I use to analyze?"

  • JavaScript (node analyze.js) — runs anywhere with Node.js installed, no extra packages needed ✅
  • TypeScript (npx tsx analyze.ts) — requires tsx
  • Python (python3 analyze.py) — requires Python 3
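The flag-then-autodetect order can be sketched as follows, under the assumption (implied by the numbered list) that explicit flags win over autodetection; `detect_lang` is a hypothetical helper name.

```shell
# Hypothetical sketch of runner selection: explicit flags first, then autodetect.
detect_lang() {
  args=" $* "
  case "$args" in
    *" --py "*) echo py; return ;;
    *" --ts "*) echo ts; return ;;
    *" --js "*) echo js; return ;;
  esac
  # Autodetect: prefer Node.js (recommended default), then Python 3
  if command -v node >/dev/null 2>&1; then echo js
  elif command -v python3 >/dev/null 2>&1; then echo py
  else echo unknown  # neither found: ask the user
  fi
}

detect_lang openapi.yaml tests/ --py   # prints "py"
```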

Step 2 — Locate Scripts

Find skill scripts dir. Try in order:

```bash
# Project-local (preferred)
SCRIPTS=".claude/skills/qa-playswag/scripts"

# Global install
SCRIPTS="$HOME/.claude/skills/qa-playswag/scripts"
```

Use whichever exists:

```bash
[ -f "$SCRIPTS/analyze.js" ] && echo "found" || echo "not found"
```

Step 3 — Run the Analyzer

analyze.js is pure CommonJS — runs with any Node.js 14+, zero dependencies needed for JSON specs. For YAML specs it auto-tries js-yaml, yaml, then falls back to python3.

```bash
node "$SCRIPTS/analyze.js" "$SPEC_FILE" "$TESTS_DIR" [options]
```

TypeScript (LANG=ts) — try runners in order:

```bash
npx --yes tsx "$SCRIPTS/analyze.ts" "$SPEC_FILE" "$TESTS_DIR" [options]
```

If tsx fails, try ts-node or compile+run. If all fail → auto-fallback to analyze.js.

Python (LANG=py):

```bash
python3 "$SCRIPTS/analyze.py" "$SPEC_FILE" "$TESTS_DIR" [options]
```

If PyYAML missing and spec is YAML — pip3 install pyyaml first. If Python fails → auto-fallback to analyze.js (if node available).

Multi-spec & URL support

```bash
# Multiple spec files (merge endpoints, dedup by method+path)
node "$SCRIPTS/analyze.js" spec1.yaml spec2.yaml -- "$TESTS_DIR"

# Spec from URL (fetched and cached in /tmp for 5 min)
node "$SCRIPTS/analyze.js" https://api.example.com/openapi.json "$TESTS_DIR"
```

CLI Options (all runtimes)

| Flag | Description |
|------|-------------|
| `--fail-under <pct>` | Exit 1 if endpoint coverage < pct% (CI quality gate) |
| `--output <dir>`, `-o <dir>` | Output directory (default: `./playswag-report`) |
| `--format <list>` | Comma-separated: html, json, tasks, badge, junit (default: all) |
| `--json-only` | Shorthand for `--format json` |
| `--include <patterns>` | Only analyze matching paths (comma-separated, wildcard `*`) |
| `--exclude <patterns>` | Skip matching paths |
| `--include-tags <tags>` | Only analyze endpoints with these tags |
| `--exclude-tags <tags>` | Skip endpoints with these tags |
| `--history` | Append to `playswag-history.json` and show delta |

Exit Codes

| Code | Meaning |
|------|---------|
| 0 | Success (and coverage >= threshold, if specified) |
| 1 | Coverage below `--fail-under` threshold |
| 2 | Fatal error (spec not found, parse failure) |
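In a shell script, these exit codes map naturally onto a `case`. This is a hedged sketch: `run_gate` is a hypothetical wrapper, and `true`/`false` stand in for the real analyzer invocation so the mapping itself can be demonstrated.

```shell
# Sketch: translate the analyzer's exit code into a CI message.
# "$@" stands in for: node "$SCRIPTS/analyze.js" "$SPEC_FILE" "$TESTS_DIR" --fail-under 80
run_gate() {
  "$@"
  case $? in
    0) echo "coverage OK" ;;
    1) echo "coverage below threshold" ;;
    2) echo "fatal: spec missing or unparsable" ;;
    *) echo "unexpected exit code" ;;
  esac
}

run_gate true    # exits 0 -> prints "coverage OK"
run_gate false   # exits 1 -> prints "coverage below threshold"
```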

CI/CD Example (GitHub Actions)

```yaml
- name: API coverage gate
  run: node .cursor/skills/qa-playswag/scripts/analyze.js openapi.yaml tests/ --fail-under 80 --json-only
```

Step 4 — Show Results

After the script runs, read ./playswag-report/summary.json and display:

## PlaySwag Coverage Report

**Spec:** {spec-file}
**Tests:** {tests-dir}

| Metric                    | Value      |
|---------------------------|------------|
| 📊 Endpoint Coverage      | XX% (A/B)  |
| ✗ Uncovered endpoints     | N          |
| 🗑 Deprecated still tested | N          |
| 📋 QA Tasks created       | N          |
| ⚠ Unmatched test calls    | N          |

### Uncovered Endpoints (N) — by priority:

🔴 HIGH (POST/PUT/DELETE/auth):
  - POST /api/users — Create user [users]
  - DELETE /api/orders/{id} — Cancel order [orders]

🟡 MEDIUM (GET):
  - GET /api/reports — List reports [reports]

### Files:
→ HTML Report: ./playswag-report/report.html
→ QA Tasks:    ./playswag-report/tasks.md
→ Badge:       ./playswag-report/playswag-badge.svg
→ Summary:     ./playswag-report/summary.json

Open: open ./playswag-report/report.html
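If you need the headline number programmatically without extra dependencies, a grep one-liner over `summary.json` works. Note that the `endpointCoverage` field name below is an assumption for illustration only; check the actual keys in your generated `summary.json` (jq would be cleaner where available).

```shell
# Write a stand-in summary.json (the "endpointCoverage" key is hypothetical).
cat > /tmp/playswag-summary-demo.json <<'EOF'
{ "endpointCoverage": 72, "uncovered": 9 }
EOF

# Dependency-free extraction of a numeric field.
grep -o '"endpointCoverage": *[0-9]*' /tmp/playswag-summary-demo.json | grep -o '[0-9]*$'
# prints: 72
```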

Step 5 — Error Handling

| Error | What to do |
|-------|------------|
| Spec file not found | Ask user to confirm path |
| JSON parse error | Show first 5 lines of file, ask user to fix |
| YAML parse error | Suggest `npm i js-yaml` or `pip3 install pyyaml` |
| `node` not found | Use Python script |
| `python3` not found | Inform user, suggest `brew install python3` |
| No test files found | Run anyway (0% coverage), inform user |
| Script error | Show stderr, offer manual analysis via Read+Grep |

Manual fallback (if all scripts fail):

  1. Read spec with Read tool
  2. Scan tests with Grep for URL/request patterns
  3. Print text summary in chat
  4. Write tasks.md manually

Notes

Supported spec formats

| Format | Syntax | Features |
|--------|--------|----------|
| OpenAPI 2.0 (Swagger) | JSON/YAML | `basePath`, `definitions`, `$ref` resolution |
| OpenAPI 3.0 | JSON/YAML | `servers[].url`, `components.schemas`, `$ref` resolution |
| OpenAPI 3.1 | JSON/YAML | Same as 3.0 |

Test files detected

.spec.ts/js, .test.ts/js, .e2e.ts/js, .spec.tsx/jsx, test_*.py, *_test.py
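A `find` expression covering the patterns above can be sketched like this (an illustration, slightly broader than the exact extension list; the demo directory is a throwaway):

```shell
# Demo layout with matching and non-matching files
mkdir -p /tmp/playswag-find-demo
: > /tmp/playswag-find-demo/users.spec.ts
: > /tmp/playswag-find-demo/test_orders.py
: > /tmp/playswag-find-demo/helpers.ts        # should NOT match

find /tmp/playswag-find-demo -type f \
  \( -name '*.spec.*' -o -name '*.test.*' -o -name '*.e2e.*' \
     -o -name 'test_*.py' -o -name '*_test.py' \) | sort
# prints the two test files, not helpers.ts
```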

HTTP clients detected

Playwright request, axios, fetch, supertest (request(app)), got, ky, httpx, requests, cy.request (Cypress), Node.js http/https, RestAssured (given().when()), aiohttp session, .request('METHOD', path)

Template literal URLs (/api/users/${id}) are partially matched by extracting the static prefix.
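The static-prefix extraction for template-literal URLs can be sketched with `sed`; this illustrates the idea only, not the analyzer's actual regex.

```shell
# Strip everything from the first ${...} placeholder onward, keeping the static prefix.
url='/api/users/${id}/orders'
prefix=$(printf '%s' "$url" | sed 's/\${.*//')
echo "$prefix"   # prints: /api/users/
```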

Output files

| File | Description |
|------|-------------|
| `report.html` | Interactive HTML report with filters, search, tag coverage, copy/export/print |
| `summary.json` | Machine-readable coverage data with `coverageByTag`, status code and parameter metrics |
| `tasks.md` | Markdown QA automation tasks sorted by priority |
| `playswag-badge.svg` | Shields.io-style SVG badge |
| `playswag-junit.xml` | JUnit XML for CI (uncovered = failure) |
| `playswag-history.json` | Coverage history for trend tracking (`--history`) |

Coverage dimensions

| Dimension | Method |
|-----------|--------|
| Endpoint | Regex-matched API calls in tests vs spec paths |
| Status code | Assertion patterns (`.expect(200)`, `toHaveStatus()`, `assert status_code ==`) |
| Parameter | Name-match scan (param names from spec appearing in test files) |
| By tag | Per-tag breakdown (spec `tags` field) |
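The status-code dimension, for example, boils down to grepping assertion patterns out of test files. A minimal sketch (the analyzer's real pattern set is broader; the demo file is a stand-in):

```shell
# Stand-in test file with one supertest-style assertion and one non-assertion line
cat > /tmp/playswag-assert-demo.spec.js <<'EOF'
await request(app).post('/api/users').expect(201);
const data = await res.json();
EOF

# Pull out .expect(NNN) style status-code assertions
grep -oE '\.expect\([0-9]{3}\)' /tmp/playswag-assert-demo.spec.js
# prints: .expect(201)
```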

QA task example

From tasks.md:

```markdown
### TASK-001: Cover `POST /api/users`

| Field | Value |
|-------|-------|
| **Priority** | High |
| **Endpoint** | `POST /api/users` |
| **Auth required** | Yes |

**Acceptance Criteria:**
- [ ] Happy path -> `201`
- [ ] Invalid input -> 400/422
- [ ] Unauthenticated -> 401
```

Known limitations

  • Static analysis only — regex-based; cannot detect dynamic URL construction or runtime-generated paths beyond template literal prefixes.
  • Status code coverage is assertion-based (scans for .expect(200) etc.), not runtime-verified.
  • Parameter coverage uses name-matching heuristic — a param name appearing in a test file is counted as "used", even if it's unrelated.
  • No auth-flow verification: the authRequired flag comes from spec security definitions, but actual auth testing is not verified.
  • YAML requires either js-yaml/yaml npm package, PyYAML, or python3 with PyYAML as fallback.

vs MichalFidor/playswag (npm)

| | npm playswag | This skill |
|---|--------------|------------|
| Approach | Runtime (Playwright intercepts HTTP) | Static (regex scan of test files) |
| Spec | `$ref`, `servers` | Resolved `$ref`; `basePath` / `servers[].url`; multi-spec merge |
| Languages | TypeScript tests | JS / TS / Python (same canonical regex set) |
| Output | HTML | HTML + JSON + Markdown + SVG + JUnit XML |
| Coverage | Endpoint only | Endpoint + status code + parameter + by-tag |
| CI | (none) | `--fail-under`, JUnit XML, `--history` delta |

Analyzer behavior

  • basePath (Swagger 2.0) and servers[].url path (OAS 3.x) are collected and used when matching test URLs to spec paths (direct match, strip base, or base + template).
  • $ref in the spec is resolved (JSON Pointer #/...) before parsing operations, so request body fields in QA tasks are populated from components.schemas / definitions.
  • Multi-spec (-- separator): endpoints are merged and deduplicated by method:path.
  • URL specs: fetched via curl/wget/node http, cached in /tmp for 5 minutes.
  • YAML fallback (when no js-yaml in Node): the temp spec path is passed via PLAYSWAG_YAML_FILE to Python — not embedded in the shell string.
  • analyze.js is the safest runner — no build step, no extra packages, works in any Node.js 14+ project.
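The multi-spec dedup step amounts to unique-by-key on `method:path`; in shell terms the idea looks like this (illustrative only, not the analyzer's code):

```shell
# Merged endpoint keys from two specs; sort -u keeps one entry per method:path.
printf '%s\n' \
  'GET:/api/users' \
  'POST:/api/users' \
  'GET:/api/users' \
  'DELETE:/api/orders/{id}' | sort -u
# prints 3 unique keys
```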

Related skills

Looking for an alternative to qa-playswag or another community skill for your workflow? Explore these related open-source skills.

  • openclaw-release-maintainer (openclaw): use this skill for release and publish-time workflows; covers ai, assistant, and crustacean topics.
  • widget-generator (f): generates customizable widget plugins for the prompts.chat feed system; covers ai, artificial-intelligence, and awesome-list topics.
  • flags (vercel): use this skill when adding or changing framework feature flags in Next.js internals; covers blog, browser, and compiler topics.
  • pr-review (pytorch): /pr-review usage modes; with no arguments it does not perform a review; covers autograd, deep-learning, and gpu topics.