pipeline-qa — a Claude Code skill from the UnifAI community repository. Topics: a2a-protocol, agent-orchestration, ai-agents, kubernetes, langgraph, IDE skills

v1.0.0

About this skill

Best fit: ideal for AI agents that need to test the code changes from Phase 3 (Implementation). Summary: production-grade multi-agent orchestration engine covering a2a-protocol, agent-orchestration, and ai-agents workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf.


Key topics

redhat-community-ai-tools
Updated: 4/29/2026

Skill Overview

Start with fit, limitations, and setup before diving into the repository.


Why use this skill

Recommendation: pipeline-qa helps agents test the code changes from Phase 3 (Implementation) as part of a production-grade multi-agent orchestration engine. This AI agent skill supports Claude Code, Cursor, and Windsurf.

Best suited for

Best fit: ideal for AI agents that need to test the code changes from Phase 3 (Implementation).

Actionable use cases for pipeline-qa

Use case: testing the code changes from Phase 3 (Implementation)
Use case: verifying behavior against the approved design from Phase 2
Use case: in a revision loop, addressing previous test failures or QA issues

Security & limitations

  • Limitation: Tests must be independent and reproducible.
  • Limitation: If tests fail, analyze the failures and fix them; do not proceed until all tests pass.



FAQ and installation steps

These questions and steps mirror the structured data on this page.

Frequently asked questions

What is pipeline-qa?

A QA skill for AI agents that need to test the code changes from Phase 3 (Implementation), part of a production-grade multi-agent orchestration engine. It covers a2a-protocol, agent-orchestration, and ai-agents workflows and supports Claude Code, Cursor, and Windsurf.

How do I install pipeline-qa?

Run the command: npx killer-skills add redhat-community-ai-tools/UnifAI. It works with Cursor, Windsurf, VS Code, Claude Code, and more than 19 other IDEs.

What can I use pipeline-qa for?

Key scenarios: testing the code changes from Phase 3 (Implementation); verifying behavior against the approved design from Phase 2; and, in a revision loop, addressing previous test failures or QA issues.

Which IDEs are compatible with pipeline-qa?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer Skills CLI for a unified installation.

Are there limitations to pipeline-qa?

Tests must be independent and reproducible. If tests fail, analyze the failures and fix them; do not proceed until all tests pass.

How to install the skill

  1. Open a terminal

    Open your terminal or command line in the project directory.

  2. Run the installation command

    Run: npx killer-skills add redhat-community-ai-tools/UnifAI. The CLI detects your IDE or agent automatically and sets up the skill.

  3. Use the skill

    The skill is now active. Your AI agent can use pipeline-qa immediately in the current project.

Source Notes

This page remains useful for installation and source reference. Before using the skill, review the fit, limitations, and upstream repository notes above.

Upstream Repository Material

The section below is adapted from the upstream repository. Use it as supporting material alongside the fit, use-case, and installation summary on this page.

Upstream Source

pipeline-qa

Production-grade multi-agent orchestration engine. It covers a2a-protocol, agent-orchestration, and ai-agents workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf.

SKILL.md

Pipeline QA Agent

You are a senior QA automation engineer with deep expertise in Python and pytest. Your job is to ensure the implemented code has comprehensive, high-quality tests and that all tests pass.

Input

  • The code changes from Phase 3 (Implementation).
  • The approved design from Phase 2 for understanding expected behavior.
  • If this is a revision loop: previous test failures or QA issues.

QA Process

Step 1: Analyze Test Coverage

Identify what needs testing:

  • New domain logic (unit tests).
  • New use cases / application services (unit tests with mocked ports).
  • New adapters (integration tests with real or test-double infrastructure).
  • Edge cases identified in the design.
  • Error paths and exception handling.

Step 2: Write Missing Tests

Follow these pytest standards:

Structure:

  • Tests in tests/ directory, mirroring the source structure.
  • tests/unit/ for unit tests, tests/integration/ for integration tests.
  • File naming: test_*.py
  • Function naming: test_<behavior_being_tested> — names describe expected outcome.

Fixtures:

  • Use pytest.fixture and conftest.py for shared setup.
  • Appropriate fixture scopes (function, class, module, session).
  • No manual setup/teardown — use fixtures instead.
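A minimal sketch of these fixture conventions; the `InMemoryOrderRepo` test double and the order domain are hypothetical names, not part of this pipeline:

```python
import pytest


class InMemoryOrderRepo:
    """Hypothetical test double standing in for a database-backed repository."""

    def __init__(self):
        self._orders = {}

    def save(self, order_id, order):
        self._orders[order_id] = order

    def get(self, order_id):
        return self._orders.get(order_id)


@pytest.fixture
def order_repo():
    # function scope (the default): each test gets a fresh repository,
    # replacing any manual setup/teardown
    return InMemoryOrderRepo()


def test_saved_order_is_retrievable(order_repo):
    order_repo.save("o-1", {"status": "new"})
    assert order_repo.get("o-1") == {"status": "new"}
```

In a real suite the fixture would live in `conftest.py` so every test module can request it by name.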

Assertions:

  • Clear, meaningful assertions that validate behavior, not implementation.
  • No assert True, no overly generic checks.
  • Prefer assert result.status == expected over vague validations.
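As an illustration of behavior-focused assertions; the `apply_discount` function is invented for this example:

```python
def apply_discount(total: float, code: str) -> dict:
    # hypothetical function under test
    rate = 0.10 if code == "SAVE10" else 0.0
    status = "applied" if rate else "rejected"
    return {"status": status, "total": round(total * (1 - rate), 2)}


def test_valid_code_applies_ten_percent_discount():
    result = apply_discount(100.0, "SAVE10")
    # assert on observable behavior, not on implementation details
    assert result["status"] == "applied"
    assert result["total"] == 90.0
```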

Parametrize:

  • Use @pytest.mark.parametrize for testing multiple input/output combinations.
  • Use markers (@pytest.mark) for categorization.
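A short parametrize sketch; the `normalize_status` helper is hypothetical:

```python
import pytest


def normalize_status(raw: str) -> str:
    # hypothetical helper under test: trims whitespace and lowercases
    return raw.strip().lower()


@pytest.mark.parametrize(
    ("raw", "expected"),
    [
        ("ACTIVE", "active"),
        ("  Paused ", "paused"),
        ("done", "done"),
    ],
)
def test_normalize_status_handles_casing_and_whitespace(raw, expected):
    assert normalize_status(raw) == expected
```

Each tuple becomes its own test case in the pytest report, so one failing input does not hide the others.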

Isolation:

  • Tests must be independent and reproducible.
  • No shared mutable state.
  • No dependency on execution order.
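One way to meet these isolation rules is to push all mutable state into fixtures; a toy sketch with a hypothetical `cart` fixture:

```python
import pytest


@pytest.fixture
def cart():
    # fresh state per test: no module-level mutable globals,
    # so the tests pass in any execution order
    return []


def test_adding_an_item_grows_the_cart(cart):
    cart.append("book")
    assert len(cart) == 1


def test_empty_cart_has_no_items(cart):
    # independent of whether the test above ran first
    assert len(cart) == 0
```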

Mocking:

  • Mock at port boundaries, not inside domain logic.
  • Use unittest.mock or pytest-mock for test doubles.
  • Domain tests should NOT mock domain internals.
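A sketch of mocking at a port boundary with `unittest.mock`; the `ShipOrder` use case and its `notifier` port are hypothetical names:

```python
from unittest.mock import Mock


class ShipOrder:
    """Hypothetical application service depending on an outbound notifier port."""

    def __init__(self, notifier):
        self._notifier = notifier  # port boundary: the only thing mocked in tests

    def execute(self, order_id: str) -> str:
        # domain logic runs for real; only the port is a test double
        self._notifier.send(f"order {order_id} shipped")
        return "shipped"


def test_ship_order_notifies_through_the_port():
    notifier = Mock()
    assert ShipOrder(notifier).execute("o-42") == "shipped"
    notifier.send.assert_called_once_with("order o-42 shipped")
```

Note that nothing inside `ShipOrder` itself is patched; the mock replaces only the collaborator behind the port.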

Step 3: Run Tests

Execute the test suite:

```bash
uv run pytest -xvs
```

If tests fail, analyze the failures and fix them. Do not proceed until all tests pass.

Step 4: Evaluate Overall Test Quality

Check:

  • Are all new code paths covered?
  • Are edge cases tested?
  • Are error paths tested?
  • Are tests readable and maintainable?
  • Is there test duplication that should be refactored?

Output Format

Wrap the entire output inside a ## PHASE 5: QA header.

Test Coverage Analysis

| Component | Type | Tests Exist? | Tests Added |
| --- | --- | --- | --- |

Tests Written

For each new test file:

  • File path
  • What it tests
  • Number of test cases

Test Execution Results

<paste pytest output summary>

Test Quality Assessment

  • Quality score (1-10)
  • Strengths
  • Issues found (with severity)

Verdict

One of:

  • PASS — All tests pass, coverage is adequate. Pipeline complete.
  • FAIL — Issues found (list them). Loop back to Coder with specific failures.

If the verdict is FAIL, clearly list every issue the Coder must address, distinguishing between:

  • Test bugs (QA will fix in the next iteration)
  • Code bugs (Coder must fix)

Related skills

Looking for an alternative to pipeline-qa or another community skill for your workflow? Explore these related open-source skills.

View all

openclaw-release-maintainer

openclaw

Summary: 🦞 OpenClaw Release Maintainer. Use this skill for release and publish-time workflows. It covers ai, assistant, and crustacean workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf.

333.8k
0
Artificial Intelligence

widget-generator

f

Summary: Generate customizable widget plugins for the prompts.chat feed system. This skill guides the creation of widget plugins for prompts.chat. It covers ai, artificial-intelligence, and awesome-list workflows. This AI agent skill supports Claude Code.

149.6k
0
Artificial Intelligence

flags

vercel

Summary: The React Framework. Use this skill when adding or changing framework feature flags in Next.js internals. It covers blog, browser, and compiler workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf.

138.4k
0
Browser

pr-review

pytorch

Summary: If the user invokes /pr-review with no arguments, do not perform a review. It covers autograd, deep-learning, and gpu workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf.

98.6k
0
Developer