qa-testing-strategy — QA Testing Strategy skill for Claude Code (splice-app, community)

v1.0.0

About this Skill

Recommended scenario: ideal for AI agents that need a QA testing strategy (Jan 2026). Summary: risk-based quality engineering strategy for modern software delivery. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Features

QA Testing Strategy (Jan 2026)
Risk-based quality engineering strategy for modern software delivery.
Create or update a risk-based test strategy (what to test, where, and why)
Define quality gates and release criteria (merge vs deploy)
Select the smallest effective layer (unit → integration → contract → E2E)

# Core Topics

gouravsingh311

Updated: 4/2/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 10/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
Review Score: 10/11
Quality Score: 55
Canonical Locale: en
Detected Body Locale: en

Why use this skill?

Recommendation: qa-testing-strategy helps agents apply a QA testing strategy (Jan 2026) — a risk-based quality engineering strategy for modern software delivery. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Best for

Recommended scenario: ideal for AI agents that need a QA testing strategy (Jan 2026).

Actionable use cases for qa-testing-strategy

Use case: applying the QA Testing Strategy (Jan 2026) skill
Use case: applying a risk-based quality engineering strategy to modern software delivery
Use case: creating or updating a risk-based test strategy (what to test, where, and why)

! Safety and limitations

  • Limitation: the decision tree assumes you can state the feature type under test ("Need to test: [Feature Type]")
  • Limitation: requires repository-specific context from the skill documentation

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment

Experience this agent in a zero-setup browser environment powered by WebContainers. No installation required.

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is qa-testing-strategy?

A risk-based quality engineering skill for modern software delivery, aimed at AI agents that need a QA testing strategy (Jan 2026). It supports Claude Code, Cursor, and Windsurf workflows.

How do I install qa-testing-strategy?

Run the command: npx killer-skills add gouravsingh311/splice-app/qa-testing-strategy. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for qa-testing-strategy?

Key use cases include: applying the QA Testing Strategy (Jan 2026) skill, applying a risk-based quality engineering strategy to modern software delivery, and creating or updating a risk-based test strategy (what to test, where, and why).

Which IDEs are compatible with qa-testing-strategy?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for qa-testing-strategy?

The main limitation is that the skill requires repository-specific context from its documentation; the decision tree also assumes you can state the feature type under test.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add gouravsingh311/splice-app/qa-testing-strategy. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use qa-testing-strategy immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

qa-testing-strategy

# QA Testing Strategy (Jan 2026) Risk-based quality engineering strategy for modern software delivery. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

SKILL.md
Readonly
Supporting Evidence

QA Testing Strategy (Jan 2026)

Risk-based quality engineering strategy for modern software delivery.

Core references: curated links in data/sources.json (SLOs/error budgets, contracts, E2E, OpenTelemetry). Start with references/operational-playbook.md for a compact, navigable overview.

Scope

  • Create or update a risk-based test strategy (what to test, where, and why)
  • Define quality gates and release criteria (merge vs deploy)
  • Select the smallest effective layer (unit → integration → contract → E2E)
  • Make failures diagnosable (artifacts, logs/traces, ownership)
  • Operationalize reliability (flake SLO, quarantines, suite budgets)

Use Instead

| Need | Skill |
| --- | --- |
| Debug failing tests or incidents | qa-debugging |
| Test LLM agents/personas | qa-agent-testing |
| Perform security audit/threat model | software-security-appsec |
| Design CI/CD pipelines and infra | ops-devops-platform |

Quick Reference

| Test Type | Goal | Typical Use |
| --- | --- | --- |
| Unit | Prove logic and invariants fast | Pure functions, core business rules |
| Component | Validate UI behavior in isolation | UI components and state transitions |
| Integration | Validate boundaries with real deps | API + DB, queues, external adapters |
| Contract | Prevent breaking changes cross-team | OpenAPI/AsyncAPI/JSON Schema/Protobuf |
| E2E | Validate critical user journeys | 1–2 “money paths” per product area |
| Performance | Enforce budgets and capacity | Load, stress, soak, regression trends |
| Visual | Catch UI regressions | Layout/visual diffs on stable pages |
| Accessibility | Automate WCAG checks | axe smoke + targeted manual audits |
| Security | Catch common web vulns early | DAST smoke + critical checks in CI |

Default Workflow

  1. Clarify scope and risk: critical journeys, failure modes, and non-functional risks (latency, data loss, auth).
  2. Define quality signals: SLOs/error budgets, contract/schema checks, and what blocks merge vs blocks deploy.
  3. Choose the smallest effective layer (unit → integration → contract → E2E).
  4. Make failures diagnosable: artifacts + correlation IDs (logs/traces/screenshots), clear ownership, deflake runbook.
  5. Operationalize: flake SLO, quarantine with expiry, suite budgets (PR gate vs scheduled), dashboards.
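The merge-vs-deploy split in step 2 can be sketched as data. This is an illustrative config, not part of the skill; the gate names are invented for the example.

```javascript
// Illustrative quality-gate config: which checks block merge vs deploy.
// Gate names are hypothetical examples, not defined by this skill.
const gates = [
  { name: 'lint+typecheck', blocksMerge: true,  blocksDeploy: true },
  { name: 'unit',           blocksMerge: true,  blocksDeploy: true },
  { name: 'contract',       blocksMerge: true,  blocksDeploy: true },
  { name: 'e2e-critical',   blocksMerge: false, blocksDeploy: true },
  { name: 'load',           blocksMerge: false, blocksDeploy: false }, // scheduled, informational
];

// Checks that must be green before a PR can merge.
const mergeGates = gates.filter((g) => g.blocksMerge).map((g) => g.name);
// Checks that must be green before a deploy can proceed.
const deployGates = gates.filter((g) => g.blocksDeploy).map((g) => g.name);
```

Writing the gates down as data makes the "blocks merge vs blocks deploy" decision explicit and easy to audit in review.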

Test Pyramid

```text
         /\
        /E2E\         5-10%  - Critical journeys
       /------\
      /Integr. \      15-25% - API, DB, queues
     /----------\
    / Component  \    20-30% - UI modules
   /--------------\
  /      Unit      \  40-60% - Logic and invariants
 /------------------\
```

Decision Tree: Test Strategy

```text
Need to test: [Feature Type]
│
├─ Pure business logic/invariants? → Unit tests (mock boundaries)
│
├─ UI component/state transitions? → Component tests
│  └─ Cross-page user journey? → E2E tests
│
├─ API Endpoint?
│  ├─ Single service boundary? → Integration tests (real DB/deps)
│  └─ Cross-service compatibility? → Contract tests (schema/versioning)
│
├─ Event-driven/API schema evolution? → Contract + backward-compat tests
│
└─ Performance-critical? → k6 load testing
```
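The tree above can also be expressed as a small lookup function. This is a sketch: the flag names are simplified inventions, not part of the skill.

```javascript
// Map a feature's characteristics to the smallest effective test layer,
// mirroring the decision tree above. Flag names are illustrative.
function chooseTestLayer(feature) {
  if (feature.pureLogic) return 'unit';
  if (feature.uiComponent) return feature.crossPageJourney ? 'e2e' : 'component';
  if (feature.apiEndpoint) return feature.crossService ? 'contract' : 'integration';
  if (feature.schemaEvolution) return 'contract';
  if (feature.performanceCritical) return 'load';
  return 'unit'; // default to the cheapest layer
}
```

Keeping the mapping explicit like this makes "smallest effective layer" a reviewable decision rather than a habit.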

Core QA Principles

Definition of Done

  • Strategy is risk-based: critical journeys + failure modes explicit
  • Test portfolio is layered: fast checks catch most defects
  • CI is economical: fast pre-merge gates, heavy suites scheduled
  • Failures are diagnosable: actionable artifacts (logs/trace/screenshots)
  • Flakes managed with SLO and deflake runbook

Shift-Left Gates (Pre-Merge)

  • Contracts: OpenAPI/AsyncAPI/JSON Schema validation
  • Static checks: lint, typecheck, secret scanning
  • Fast tests: unit + key integration (avoid full E2E as PR gate)

Shift-Right (Post-Deploy)

  • Synthetic checks for critical paths (monitoring-as-tests)
  • Canary analysis: compare SLO signals and key metrics before ramping
  • Feature flags for safe rollouts and fast rollback
  • Convert incidents into regression tests (prefer lower layers first)

CI Economics

| Budget | Target |
| --- | --- |
| PR gate | p50 ≤ 10 min, p95 ≤ 20 min |
| Mainline health | ≥ 99% green builds/day |
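The p50/p95 budgets can be checked against recorded gate durations with a nearest-rank percentile helper. A sketch; the sample durations are invented.

```javascript
// Nearest-rank percentile over recorded PR-gate durations (minutes).
function percentile(values, p) {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

// Invented sample: ten recent PR-gate runs, in minutes.
const durations = [5, 6, 7, 8, 9, 11, 12, 14, 18, 19];
const withinBudget =
  percentile(durations, 50) <= 10 && percentile(durations, 95) <= 20;
```

Tracking the percentiles over a rolling window (rather than a single run) is what makes the budget a trend signal instead of noise.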

Flake Management

  • Define: test fails without product change, passes on rerun
  • Track weekly: flaky_failures / total_test_executions (where flaky_failure = fail_then_pass_on_rerun)
  • SLO: Suite flake rate ≤ 1% weekly
  • Quarantine policy with owner and expiry
  • Use the deflake runbook: template-flaky-test-triage-deflake-runbook.md
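The weekly flake-rate metric above can be computed directly from execution records. A sketch; the record shape is an assumption for illustration.

```javascript
// Weekly suite flake rate: a flaky failure is a test execution that
// failed, then passed on rerun without any product change.
function flakeRate(executions) {
  const flaky = executions.filter((e) => e.failed && e.passedOnRerun).length;
  return flaky / executions.length;
}

// Invented sample week: 200 executions, one flaky failure, one real one.
const week = [
  { failed: true, passedOnRerun: true },   // flaky
  { failed: true, passedOnRerun: false },  // real failure
  ...Array.from({ length: 198 }, () => ({ failed: false, passedOnRerun: false })),
];
const meetsSlo = flakeRate(week) <= 0.01; // SLO: suite flake rate ≤ 1% weekly
```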

Common Patterns

AAA Pattern

```javascript
it('should apply discount', () => {
  // Arrange
  const order = { total: 150 };
  // Act
  const result = calculateDiscount(order);
  // Assert
  expect(result.discount).toBe(15);
});
```
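For the AAA example to pass, calculateDiscount would need an implementation along these lines. This is hypothetical; the 10%-over-100 rule is invented purely to make the asserted numbers work.

```javascript
// Hypothetical implementation backing the AAA example above:
// a flat 10% discount on orders over 100. The rule is illustrative.
function calculateDiscount(order) {
  const discount = order.total > 100 ? order.total * 0.10 : 0;
  return { ...order, discount };
}
```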

Page Object Model (E2E)

```typescript
// Page is the Playwright page type (from @playwright/test).
class LoginPage {
  constructor(private readonly page: Page) {}

  async login(email: string, password: string) {
    await this.page.fill('[data-testid="email"]', email);
    await this.page.fill('[data-testid="password"]', password);
    await this.page.click('[data-testid="submit"]');
  }
}
```

Anti-Patterns

| Anti-Pattern | Problem | Solution |
| --- | --- | --- |
| Testing implementation | Breaks on refactor | Test behavior |
| Shared mutable state | Flaky tests | Isolate test data |
| sleep() in tests | Slow, unreliable | Use proper waits |
| Everything E2E | Slow, expensive | Use test pyramid |
| Ignoring flaky tests | False confidence | Fix or quarantine |

Do / Avoid

Do

  • Write tests against stable contracts and user-visible behavior
  • Treat flaky tests as P1 reliability work
  • Make "how to debug this failure" part of every suite

Avoid

  • "Everything E2E" as default
  • Sleeps/time-based waits (use event-based)
  • Coverage % as primary quality KPI
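The event-based alternative to sleeps can be as small as a polling helper that resolves when a condition holds. A sketch, not from the skill's templates.

```javascript
// Poll a condition until it holds or the timeout expires, instead of a
// fixed sleep(). Rejects on timeout so failures are loud, not silent.
async function waitFor(condition, { timeoutMs = 2000, intervalMs = 20 } = {}) {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    if (await condition()) return;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`waitFor: condition not met within ${timeoutMs}ms`);
}
```

Frameworks such as Playwright ship equivalent built-in waits; a helper like this is mainly for plain unit/integration code.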

Feature Matrix vs Test Matrix Gate (Release Blocking)

Before release, run a coverage audit that maps product features/backlog IDs to direct test evidence.

Gate Rules

  • Every release-scoped feature must map to at least one direct automated test, or an explicit waiver with owner/date.
  • Evidence must include file path and test identifier (suite/spec/case).
  • "Covered indirectly" is not accepted without written rationale and risk acknowledgment.
  • If critical features have no direct evidence, release is blocked.

Minimal Audit Output

  • feature/backlog id
  • coverage status (direct, indirect, none)
  • evidence reference
  • risk level
  • owner and due date for gaps
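The gate rules can be enforced mechanically over rows shaped like the minimal audit output above. A sketch; the field names are assumptions.

```javascript
// Release-blocking check: every critical feature needs direct test
// evidence, or an explicit waiver (which would carry owner and date).
// Row shape mirrors the minimal audit output fields.
function releaseBlocked(auditRows) {
  return auditRows.some(
    (row) => row.risk === 'critical' && row.coverage !== 'direct' && !row.waiver
  );
}

// Invented example rows; evidence = file path + test identifier.
const rows = [
  { id: 'FEAT-101', coverage: 'direct', evidence: 'checkout.spec.ts#pays', risk: 'critical' },
  { id: 'FEAT-102', coverage: 'indirect', evidence: null, risk: 'low' },
];
```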

Resources

| Resource | Purpose |
| --- | --- |
| comprehensive-testing-guide.md | End-to-end playbook across layers |
| operational-playbook.md | Testing pyramid, BDD, CI gates |
| shift-left-testing.md | Contract-first, BDD, continuous testing |
| test-automation-patterns.md | Reliable patterns and anti-patterns |
| playwright-webapp-testing.md | Playwright patterns |
| chaos-resilience-testing.md | Chaos engineering |
| observability-driven-testing.md | OpenTelemetry, trace-based |
| contract-testing-2026.md | Pact, Specmatic |
| synthetic-test-data.md | Privacy-safe, ephemeral test data |
| test-environment-management.md | Environment provisioning and lifecycle |
| quality-metrics-dashboard.md | Quality metrics and dashboards |
| compliance-testing.md | SOC2, HIPAA, GDPR, PCI-DSS testing |
| feature-matrix-vs-test-matrix-gate.md | Release-blocking feature-to-test coverage audit |

Templates

| Template | Purpose |
| --- | --- |
| template-test-case-design.md | Given/When/Then and test oracles |
| test-strategy-template.md | Risk-based strategy |
| template-flaky-test-triage.md | Flake triage runbook |
| template-jest-vitest.md | Unit test patterns |
| template-api-integration.md | API + DB integration tests |
| template-playwright.md | Playwright E2E |
| template-visual-testing.md | Visual regression testing |
| template-k6-load-testing.md | k6 performance |
| automation-pipeline-template.md | CI stages, budgets, gates |
| template-cucumber-gherkin.md | BDD feature files and steps |
| template-release-coverage-audit.md | Feature matrix vs test matrix release audit |

Data

| File | Purpose |
| --- | --- |
| sources.json | External references |

Ops Gate: Release-Safe Verification Sequence

Use this sequence for feature branches that touch user flows, pricing, localization, or analytics.

```bash
# 1) Static checks
npm run lint
npm run typecheck

# 2) Fast correctness
npm run test:unit

# 3) Critical path checks
npm run test:e2e -- --grep "@critical"

# 4) Instrumentation gate (if configured)
npm run test:analytics-gate

# 5) Production build
npm run build
```

If a Gate Fails

  1. Capture exact failing command and first error line.
  2. Classify: environment issue, baseline known failure, or regression.
  3. Re-run only the failed gate once after fix.
  4. Do not continue to later gates while earlier required gates are red.

Agent Output Contract for QA Handoff

Always report:

  • commands run,
  • pass/fail per gate,
  • whether failures are pre-existing or introduced,
  • next blocking action.

Fact-Checking

  • Use web search/web fetch to verify current external facts, versions, pricing, deadlines, regulations, or platform behavior before final answers.
  • Prefer primary sources; report source links and dates for volatile information.
  • If web access is unavailable, state the limitation and mark guidance as unverified.

Related skills

Looking for an alternative to qa-testing-strategy or another community skill for your workflow? Explore these related open-source skills.

See all

  • openclaw-release-maintainer (openclaw) — Your own personal AI assistant. Any OS. Any platform. The lobster way. 🦞
  • widget-generator (f) — Generate customizable widget add-ons for the prompts.chat feed system.
  • flags (vercel) — The React framework.
  • pr-review (pytorch) — Tensors and dynamic neural networks in Python with strong GPU acceleration.