qa — seeknal skill for Claude Code

Tags: community, ide skills, analytics-engineering, data-engineering, data-science, duckdb, feature-engineering, feature-management

v1.0.0

About this Skill

qa is an AI agent skill for QA automation: medallion end-to-end (E2E) pipeline testing. It is ideal for AI agents that need automated testing of seeknal pipelines across all source types.

Features

QA Automation: Medallion E2E Pipeline Testing
Run automated end-to-end tests for seeknal pipelines across all source types.
/qa skill (you are here)
Step 0: Parse input — if .md, spawn spec-interpreter to generate .yml
Orchestrator: discovers specs, health-checks infra, fans out workers

Core Topics

Maintainer: mta-tech
Updated: 4/29/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Landing Page Review Score: 10/11

Killer-Skills keeps this page indexable because it adds recommendation, limitations, and review signals beyond the upstream repository text.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
  • Locale and body language aligned

Review Score: 10/11
Quality Score: 52
Canonical Locale: en
Detected Body Locale: en


Core Value

qa helps agents run QA automation: medallion E2E pipeline testing. Seeknal is an all-in-one platform for data and AI/ML engineering. This skill runs automated end-to-end tests for seeknal pipelines across all source types and supports Claude Code.

Ideal Agent Persona

Ideal for AI agents that need QA automation: medallion E2E pipeline testing.

Capabilities Granted for qa

Running QA Automation: Medallion E2E Pipeline Testing
Running automated end-to-end tests for seeknal pipelines across all source types
Invoking the /qa skill

Prerequisites & Limits

  • Requires repository-specific context from the skill documentation
  • Works best when the underlying tools and dependencies are already configured

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.

Labs Demo

Experience this skill in a zero-setup browser sandbox powered by WebContainers. No installation required.

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is qa?

qa is an AI agent skill for QA automation: medallion E2E pipeline testing of seeknal pipelines. It is ideal for AI agents that need automated end-to-end testing across all seeknal source types.

How do I install qa?

Run the command: npx killer-skills add mta-tech/seeknal/qa. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for qa?

Key use cases include running medallion E2E pipeline tests, executing automated end-to-end tests for seeknal pipelines across all source types, and invoking the /qa skill.

Which IDEs are compatible with qa?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for qa?

Requires repository-specific context from the skill documentation. Works best when the underlying tools and dependencies are already configured.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add mta-tech/seeknal/qa. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use qa immediately in the current project.

Upstream Repository Material


Upstream Source

qa

Install qa, an AI agent skill for seeknal QA automation workflows. Review the use cases, limitations, and setup path before rollout.

SKILL.md

QA Automation: Medallion E2E Pipeline Testing

Run automated end-to-end tests for seeknal pipelines across all source types.

Architecture

/qa skill (you are here)
  ↓
Step 0: Parse input — if .md, spawn spec-interpreter to generate .yml
  ↓
Orchestrator: discovers specs, health-checks infra, fans out workers
  ↓
Worker agents (parallel): one per spec file, each scaffolds + executes + validates
  ↓
seeknal CLI + DAGBuilder: actual pipeline execution against live infrastructure

Input modes:

  • /qa — run all specs in qa/specs/
  • /qa qa/specs/foo.yml — run a specific YAML spec
  • /qa qa/specs/a.yml,qa/specs/b.yml — run multiple YAML specs (comma-separated)
  • /qa specs/feature.md — interpret feature spec, generate YAML, then run it

Default Infrastructure Credentials

Before running any health checks or spawning workers, export these environment variables in your shell:

bash
export LAKEKEEPER_URL="http://172.19.0.9:8181"
export LAKEKEEPER_WAREHOUSE_ID="c008ea5c-fb89-11f0-aa64-c32ca2f52144"
export LAKEKEEPER_WAREHOUSE="seeknal-warehouse"
export KEYCLOAK_TOKEN_URL="http://172.19.0.9:8080/realms/atlas/protocol/openid-connect/token"
export KEYCLOAK_CLIENT_ID="duckdb"
export KEYCLOAK_CLIENT_SECRET="duckdb-secret-change-in-production"
export AWS_ACCESS_KEY_ID="minioadmin"
export AWS_SECRET_ACCESS_KEY="CHANGE_THIS_STRONG_PASSWORD"
export AWS_ENDPOINT_URL="http://172.19.0.9:9000"
export AWS_REGION="us-east-1"
export PG_HOST="localhost"
export PG_PORT="5432"
export PG_USER="seeknal"
export PG_PASSWORD="seeknal_pass"
export PG_DATABASE="seeknal_test"

These are the canonical credentials for atlas-dev-server (Lakekeeper, Keycloak, MinIO) and local PostgreSQL. Individual specs may override these in their env: sections.

Execution Flow

Step 0: Parse Input

Check if the /qa skill received a file argument.

Case A — No argument: Set target_specs = null, proceed to Step 1 (discovers all qa/specs/*.yml).

Case B — Argument contains .yml:

Split the argument by comma (,) to get a list of paths. Trim whitespace from each path.

  • Single spec: /qa qa/specs/foo.yml → target_specs = ["qa/specs/foo.yml"]
  • Multiple specs: /qa qa/specs/a.yml,qa/specs/b.yml → target_specs = ["qa/specs/a.yml", "qa/specs/b.yml"]
  • With spaces: /qa qa/specs/a.yml, qa/specs/b.yml → same result (trim whitespace)

For each path in the list:

  1. Validate the file exists. If not found, abort with: Error: File not found: {path}
  2. Validate it ends with .yml. If not, abort with: Error: Expected .yml file: {path}

Set target_specs = [list of validated paths], skip to Step 1.
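The split, trim, and validate rules above can be sketched as a small shell helper. This is an illustrative sketch only: the skill applies these checks agent-side, and the function name `parse_spec_args` is hypothetical.

```shell
# Sketch of Case B argument parsing: split on commas, trim whitespace,
# and validate each path. Prints validated paths, one per line;
# exits nonzero with an error message on the first invalid path.
parse_spec_args() {
  echo "$1" | tr ',' '\n' | while read -r path; do
    # `read -r` with default IFS already trims surrounding whitespace
    if [ ! -f "$path" ]; then
      echo "Error: File not found: $path" >&2
      exit 1
    fi
    case "$path" in
      *.yml) echo "$path" ;;
      *) echo "Error: Expected .yml file: $path" >&2; exit 1 ;;
    esac
  done
}
```

The `exit 1` inside the piped `while` loop terminates only that subshell, so the function returns a nonzero status without killing the caller's shell.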

Case C — Argument ends with .md:

  1. Validate file exists. If not found, abort with:

    Error: File not found: {path}
    
  2. Warn if not in specs/ directory:

    Note: Input file is not from specs/ directory. Proceeding anyway.
    
  3. Derive output name from the .md filename:

    • Strip date prefix matching YYYY-MM-DD- pattern
    • Strip type prefix matching feat-, fix-, refactor-
    • Use remaining slug as the spec name
    • Output path: qa/specs/{derived-name}.yml

    Examples:

    | Input | Output |
    |-------|--------|
    | specs/named-refs-common-config.md | qa/specs/named-refs-common-config.yml |
    | specs/2026-02-20-feat-source-defaults-environment-switching.md | qa/specs/source-defaults-environment-switching.yml |
    | specs/fix-integration-security-issues.md | qa/specs/integration-security-issues.yml |
  4. Spawn spec-interpreter agent using the Task tool:

    Task(
      subagent_type="general-purpose",
      name="spec-interpreter",
      prompt=<see interpreter prompt below>
    )
    

    Interpreter prompt template:

    You are a QA spec interpreter agent. Your job is to:
    
    1. Read the feature spec at: {md_file_path}
    2. Read .claude/agents/qa-spec-interpreter.md for detailed instructions
    3. Follow those instructions to generate a QA test spec YAML
    4. Write the output to: {generated_spec_path}
    
    IMPORTANT: Read .claude/agents/qa-spec-interpreter.md first for detailed instructions.
    Feature spec: {md_file_path}
    Output YAML: {generated_spec_path}
    

    Wait for the agent to complete (do NOT use run_in_background — this must finish before proceeding).

  5. Validate generated YAML. Read the output file and confirm it is parseable YAML with required fields (name, source_type, pipeline, validation). If validation fails:

    Error: Generated spec at {path} is not valid YAML or missing required fields.
    
  6. Print summary:

    Spec generated: {generated_spec_path}
      Source type: {source_type}
      Pipeline nodes: {count}
      Features tested: {count}
    
  7. Set target_specs = [generated_spec_path], continue to Step 1.
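The Case C filename derivation can be sketched as a shell function. This is a hypothetical helper mirroring the documented rules; the skill's spec-interpreter performs this step internally.

```shell
# Derive the output spec path from a feature-spec .md filename:
# strip a YYYY-MM-DD- date prefix and a feat-/fix-/refactor- type
# prefix, then place the remaining slug under qa/specs/.
derive_spec_path() {
  name=$(basename "$1" .md)
  name=$(echo "$name" | sed -E 's/^[0-9]{4}-[0-9]{2}-[0-9]{2}-//')
  name=$(echo "$name" | sed -E 's/^(feat|fix|refactor)-//')
  echo "qa/specs/${name}.yml"
}
```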

Step 1: Discover Specs

If target_specs is set (from Step 0): Use only those spec files. Skip filesystem discovery.

If target_specs is null (no argument): Read all YAML files from qa/specs/ directory:

bash
ls qa/specs/*.yml

For each spec file, read it and extract:

  • name: Test scenario name
  • source_type: csv, iceberg, or postgresql
  • infrastructure.requires: List of required services
  • description: What the test validates

Display discovery summary:

Discovered N spec(s):
  - csv-medallion (csv) - requires: none
  - iceberg-medallion (iceberg) - requires: lakekeeper
  - postgresql-medallion (postgresql) - requires: postgresql, lakekeeper
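A minimal discovery pass can be sketched in shell, assuming the flat one-line keys used by the spec template. A real implementation would use a proper YAML parser; `discover_specs` is a hypothetical helper.

```shell
# Grep-based discovery sketch: pull `name` and `source_type` from each
# spec under qa/specs/ and print one summary line per spec.
discover_specs() {
  for spec in qa/specs/*.yml; do
    [ -f "$spec" ] || continue
    # cut takes everything after the first colon; xargs trims whitespace
    name=$(grep -m1 '^name:' "$spec" | cut -d: -f2- | xargs)
    src=$(grep -m1 '^source_type:' "$spec" | cut -d: -f2- | xargs)
    echo "  - ${name} (${src})"
  done
}
```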

Step 2: Infrastructure Health Check

For each unique infrastructure requirement across all specs:

lakekeeper: Check atlas-dev-server Lakekeeper is reachable (accepts 200 or 401 as healthy — auth is handled by seeknal at runtime):

bash
HTTP_CODE=$(curl -s --connect-timeout 5 -o /dev/null -w '%{http_code}' 'http://172.19.0.9:8181/catalog/v1/config?warehouse=seeknal-warehouse' 2>/dev/null)
if echo "$HTTP_CODE" | grep -qE '^(200|401)'; then echo "OK"; else echo "UNREACHABLE"; fi

postgresql: Check local PostgreSQL is reachable:

bash
pg_isready -h localhost -p 5432 -U seeknal -d seeknal_test 2>/dev/null && echo "OK" || echo "UNREACHABLE"

If a required service is unreachable, mark that spec as SKIPPED (best-effort — other specs still run).

Display health check results:

Infrastructure health:
  lakekeeper: OK
  postgresql: OK

Specs to run: N (M skipped due to unavailable infrastructure)
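The per-service gate can be sketched as a dispatcher mapping each infrastructure requirement to its probe, using the probes and default endpoints documented above. `check_service` is a hypothetical helper name.

```shell
# Map an infrastructure requirement to its health probe and report
# OK / UNREACHABLE (or UNKNOWN for an unrecognized requirement).
check_service() {
  case "$1" in
    lakekeeper)
      # 200 or 401 both count as healthy; auth is handled by seeknal at runtime
      code=$(curl -s --connect-timeout 5 -o /dev/null -w '%{http_code}' \
        'http://172.19.0.9:8181/catalog/v1/config?warehouse=seeknal-warehouse' 2>/dev/null)
      case "$code" in 200|401) echo "OK" ;; *) echo "UNREACHABLE" ;; esac
      ;;
    postgresql)
      pg_isready -h localhost -p 5432 -U seeknal -d seeknal_test >/dev/null 2>&1 \
        && echo "OK" || echo "UNREACHABLE"
      ;;
    *) echo "UNKNOWN" ;;
  esac
}
```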

Step 3: Clean Previous Runs

Remove existing run directories for specs that will execute:

bash
rm -rf qa/runs/csv-medallion qa/runs/iceberg-medallion qa/runs/postgresql-medallion

Step 4: Create Team and Fan Out Workers

Create a team for coordinated execution:

TeamCreate: team_name="qa-medallion-run"

For each spec that passes health checks:

  1. Create a task via TaskCreate describing what the worker should do
  2. Spawn a worker agent using the Task tool:
Task(
  subagent_type="general-purpose",
  team_name="qa-medallion-run",
  name="worker-{spec_name}",
  prompt=<see worker prompt below>,
  run_in_background=true
)

Worker prompt template (adapt per spec):

You are a QA worker agent. Your job is to:

1. Read the spec file at: qa/specs/{spec_name}.yml
2. Create the project directory at: qa/runs/{spec_name}/
3. Follow the qa-worker agent instructions in .claude/agents/qa-worker.md
4. Scaffold the seeknal project from the spec
5. Execute the pipeline
6. Validate all outputs
7. Report results back

IMPORTANT: Read .claude/agents/qa-worker.md first for detailed instructions.
Spec file: qa/specs/{spec_name}.yml
Output dir: qa/runs/{spec_name}/

Workers run in parallel (one per spec). Use run_in_background=true for parallelism.

Step 5: Collect Results

Wait for all worker agents to complete. Check their output files or messages.

Each worker reports a structured result:

  • Spec name
  • DAG validation: PASS/FAIL (node count, edge count)
  • Execution: PASS/FAIL (exit code)
  • Output validation: PASS/FAIL (file existence, row counts)
  • Feature coverage: list of features tested
  • Errors: any error messages

Step 6: Display Summary

Format and display the final QA summary table:

=== QA Automation Results ===

| Spec                  | Source     | DAG  | Execution | Outputs | Features | Status |
|-----------------------|------------|------|-----------|---------|----------|--------|
| csv-medallion         | csv        | PASS | PASS      | PASS    | 6/6      | PASS   |
| iceberg-medallion     | iceberg    | PASS | PASS      | PASS    | 10/10    | PASS   |
| postgresql-medallion  | postgresql | PASS | PASS      | PASS    | 15/15    | PASS   |

Overall: 3/3 PASSED

Step 7: Cleanup

Shut down all team members and delete the team:

SendMessage(type="shutdown_request", recipient="worker-csv-medallion")
SendMessage(type="shutdown_request", recipient="worker-iceberg-medallion")
SendMessage(type="shutdown_request", recipient="worker-postgresql-medallion")
TeamDelete()

Report final result to user:

  • Overall PASS/FAIL
  • Link to qa/runs/ for manual inspection
  • Any failed specs with error details

Adding New Tests

Option A: Write a YAML spec manually

Create a YAML file in qa/specs/:

yaml
name: my-new-test
description: What this test validates
source_type: csv|iceberg|postgres
infrastructure:
  requires: []
seed_data:
  my_file.csv: |
    col1,col2
    val1,val2
pipeline:
  bronze: [...]
  silver: [...]
  gold: [...]
validation:
  dag:
    expected_nodes: N
    expected_edges: [...]
  execution:
    success: true
  outputs:
    - node: transform.name
      min_rows: N
features_tested:
  - feature_name

No code changes needed. The next /qa run will automatically discover and execute it.
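Before committing a hand-written spec, a quick grep-based sanity check for the required top-level fields can look like this. It is a sketch only, not a full YAML validation, and `validate_spec` is a hypothetical name.

```shell
# Confirm the required top-level keys (name, source_type, pipeline,
# validation) are present in a spec file; report the first missing one.
validate_spec() {
  for key in name source_type pipeline validation; do
    if ! grep -q "^${key}:" "$1"; then
      echo "missing required field: $key"
      return 1
    fi
  done
  echo "spec OK: $1"
}
```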

Option B: Generate from a feature spec

Pass a feature implementation spec (.md file from specs/) directly:

/qa specs/my-feature.md

The spec-interpreter agent will read the feature spec, generate a QA test spec YAML at qa/specs/{feature-name}.yml, and then execute it. The generated spec persists for future reuse — subsequent /qa runs (with no args) will include it automatically.

Related Skills

Looking for an alternative to qa or another community skill for your workflow? Explore these related open-source skills.

  • openclaw-release-maintainer (openclaw): an AI agent skill for openclaw release maintenance.
  • widget-generator (f): generates customizable widget plugins for the prompts.chat feed system.
  • flags (vercel): an AI agent skill for adding or changing framework feature flags in Next.js internals.
  • pr-review (pytorch): the PyTorch PR review skill.