run-pipeline — computational-complexity community skill for Claude Code, Cursor, and Windsurf

v1.0.0

About this Skill

Built for Agile development agents that need automated issue tracking and pipeline management. Pick a Ready issue from the GitHub Project board and move it through In Progress -> issue-to-pr -> Review pool.


CodingThrust
Updated: 3/18/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 7/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Locale and body language aligned
Review Score: 7/11
Quality Score: 31
Canonical Locale: en
Detected Body Locale: en


Core Value

Empowers agents to streamline GitHub project workflows using issue-to-pr and review-pipeline protocols, facilitating efficient collaboration and code review with Rust library integrations for computational problem definitions and reductions.

Ideal Agent Persona

Perfect for Agile Development Agents needing automated issue tracking and pipeline management capabilities.

Capabilities Granted for run-pipeline

Automating issue assignment and progression on GitHub project boards
Generating pull requests for specific issue numbers using issue-to-pr
Streamlining code review processes with review-pipeline for structural and quality checks

! Prerequisites & Limits

  • Requires access to GitHub Project board and repository
  • Dependent on issue-to-pr and review-pipeline protocols
  • Limited to Rust library for computational problem definitions and reductions

Why this page is reference-only

  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should also help you decide whether to install this skill, when to cross-check it against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment


Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is run-pipeline?

run-pipeline picks a Ready issue from the GitHub Project board and moves it through In Progress -> issue-to-pr -> Review pool. It is built for Agile development agents that need automated issue tracking and pipeline management.

How do I install run-pipeline?

Run the command: npx killer-skills add CodingThrust/problem-reductions/run-pipeline. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for run-pipeline?

Key use cases include: Automating issue assignment and progression on GitHub project boards, Generating pull requests for specific issue numbers using issue-to-pr, Streamlining code review processes with review-pipeline for structural and quality checks.

Which IDEs are compatible with run-pipeline?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for run-pipeline?

Requires access to GitHub Project board and repository. Dependent on issue-to-pr and review-pipeline protocols. Limited to Rust library for computational problem definitions and reductions.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add CodingThrust/problem-reductions/run-pipeline. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use run-pipeline immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material


Upstream Source

run-pipeline

Install run-pipeline, an AI agent skill for AI agent workflows and automation. Works with Claude Code, Cursor, and Windsurf with one-command setup.

SKILL.md
Readonly
Supporting Evidence

Run Pipeline

Pick a "Ready" issue from the GitHub Project board, claim it into "In Progress", run issue-to-pr, then move it to "Review pool". The separate review-pipeline handles agentic review (structural check, quality check, agentic feature tests).

Invocation

  • /run-pipeline -- pick the highest-ranked Ready issue (ranked by importance, relatedness, pending rules)
  • /run-pipeline 97 -- process a specific issue number from the Ready column

For Codex, open this SKILL.md directly and treat the slash-command forms above as aliases. The Makefile run-pipeline target already does this translation.

Constants

GitHub Project board IDs (for gh project item-edit):

| Constant | Value |
| --- | --- |
| PROJECT_ID | PVT_kwDOBrtarc4BRNVy |
| STATUS_FIELD_ID | PVTSSF_lADOBrtarc4BRNVyzg_GmQc |
| STATUS_READY | f37d0d80 |
| STATUS_IN_PROGRESS | a12cfc9c |
| STATUS_REVIEW_POOL | 7082ed60 |
| STATUS_UNDER_REVIEW | f04790ca |
| STATUS_FINAL_REVIEW | 51a3d8bb |
| STATUS_DONE | 6aca54fa |

Autonomous Mode

This skill runs fully autonomously — no confirmation prompts, no user questions. It picks the next issue and processes it end-to-end. All sub-skills (issue-to-pr, check-issue, add-model, add-rule, etc.) should also auto-approve any confirmation prompts.

Steps

0. Generate the Project-Pipeline Report

Step 0 should be a single report-generation step. Do not manually list Ready items, list In-progress items, grep model declarations, or re-derive blocked rules with separate shell commands. The expensive full-context call here is python3 scripts/pipeline_skill_context.py project-pipeline ... (backed by build_project_pipeline_context()). For a single top-level run-pipeline invocation, call it once and reuse the packet for scoring, ranking, and choosing the issue. Do not rerun it in the single-issue path after the packet exists.

```bash
set -- python3 scripts/pipeline_skill_context.py project-pipeline --repo CodingThrust/problem-reductions --repo-root . --format text

# If a specific issue number was provided, validate it through the same bundle:
# set -- "$@" --issue <number>

REPORT=$("$@")
printf '%s\n' "$REPORT"
```

The report is the Step 0 packet. It should already include:

  • Queue Summary
  • Eligible Ready Issues
  • Blocked Ready Issues
  • In Progress Issues
  • Requested Issue validation when a specific issue was supplied

Branch from the report:

  • Bundle status: empty => STOP with "No Ready issues are currently available."
  • Bundle status: no-eligible-issues => STOP with "Ready issues exist, but all current rule candidates are blocked by missing models on main."
  • Bundle status: requested-missing => STOP with "Issue #N is not currently in the Ready column."
  • Bundle status: requested-blocked => STOP with the blocking reason from the report.
  • Bundle status: ready => continue.
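The branch logic above can be sketched as a small helper. This is illustrative only: `decide_next_action` and the argument shapes are names invented for this example, not part of the scripted bundle.

```python
# Illustrative sketch of the Step 0 branch logic. The function name and
# argument shapes are assumptions for this example, not part of the real
# pipeline scripts.
STOP_MESSAGES = {
    "empty": "No Ready issues are currently available.",
    "no-eligible-issues": ("Ready issues exist, but all current rule "
                           "candidates are blocked by missing models on main."),
    "requested-missing": "Issue #{n} is not currently in the Ready column.",
}

def decide_next_action(status, issue_number=None, blocking_reason=None):
    """Return ('stop', message) or ('continue', None) for a bundle status."""
    if status == "ready":
        return ("continue", None)
    if status == "requested-blocked":
        # The blocking reason comes straight from the report.
        return ("stop", blocking_reason)
    msg = STOP_MESSAGES[status]
    if issue_number is not None:
        msg = msg.format(n=issue_number)
    return ("stop", msg)
```

Only the "ready" status lets the pipeline proceed; every other status is a hard stop, matching the bullets above.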

The report already handled the deterministic setup:

  • it loaded the Ready and In-progress issue sets
  • it scanned existing problems on main
  • it marked blocked [Rule] issues whose source or target model is still missing
  • it computed the pending-rule unblock counts used for C3

0a. Score Eligible Issues

Short-circuit: If there is only 1 eligible issue, skip scoring and pick it directly. Print "Only 1 eligible issue, picking it." and jump to Step 0c.

Score only eligible issues on three criteria. For [Model] issues, extract the problem name. For [Rule] issues, extract both source and target problem names.

  • C1: Industrial/Theoretical Importance (weight 3). Read the report's issue summary for each eligible issue. Score 0-2: 2 = widely used in industry or foundational in complexity theory (e.g., ILP, SAT, MaxFlow, TSP, GraphColoring); 1 = moderately important or well-studied (e.g., SubsetSum, SetCover, Knapsack); 0 = niche or primarily academic.
  • C2: Related to Existing Problems (weight 2). Use the report's Ready/In-progress context plus the pred list if needed. Score 0-2: 2 = directly related (shares input structure or has known reductions to/from ≥2 existing problems, but is NOT a trivial variant of an existing one); 1 = loosely related (same domain, connects to 1 existing problem); 0 = isolated, or essentially a variant/renaming of an existing problem.
  • C3: Unblocks Pending Rules (weight 2). Read the "Pending rules unblocked" count already printed in the report for each eligible issue. Score 0-2: 2 = unblocks ≥2 pending rules; 1 = unblocks 1 pending rule; 0 = does not unblock any pending rule.

Final score = C1 × 3 + C2 × 2 + C3 × 2 (max = 12)

Tie-breaking: Models before Rules, then by lower issue number.

Important for C2: A problem that is merely a weighted/unweighted variant or a graph-subtype specialization of an existing problem scores 0 on C2, not 2. The goal is to add genuinely new problem types that expand the graph's reach.

0b. Print Ranked List

Print all Ready issues with their scores for visibility (no confirmation needed). Blocked rules appear at the bottom with their reason:

Ready issues (ranked):
  Score  Issue  Title                              C1  C2  C3
  ─────────────────────────────────────────────────────────────
    10   #117   [Model] GraphPartitioning           2   2   2
     8   #129   [Model] MultivariateQuadratic       2   1   1
     7   #97    [Rule] BinPacking to ILP            1   2   1
     6   #110   [Rule] LCS to ILP                   1   1   1
     4   #126   [Rule] KSatisfiability to SubsetSum  0   2   0

  Blocked:
     3   #130   [Rule] MultivariateQuadratic to ILP  -- model "MultivariateQuadratic" not yet implemented

0c. Pick Issues

If a specific issue number was provided: validate and claim it through the scripted bundle:

```bash
STATE_FILE=/tmp/problemreductions-ready-selection.json
CLAIM=$(python3 scripts/pipeline_board.py claim-next ready "$STATE_FILE" --number <number> --format json)
```

The report should already have stopped you before this point if the requested issue was missing or blocked.

After successful validation, extract ITEM_ID, ISSUE, and TITLE from CLAIM using the same commands shown below.

Otherwise (no args): score the eligible issues from the report, pick the highest-scored one, and proceed immediately (no confirmation). After picking the issue number, claim it through the scripted bundle:

```bash
STATE_FILE=/tmp/problemreductions-ready-selection.json
CLAIM=$(python3 scripts/pipeline_board.py claim-next ready "$STATE_FILE" --number <chosen-issue-number> --format json)
```

Extract the board item metadata from CLAIM:

```bash
ITEM_ID=$(printf '%s\n' "$CLAIM" | python3 -c "import sys,json; print(json.load(sys.stdin)['item_id'])")
ISSUE=$(printf '%s\n' "$CLAIM" | python3 -c "import sys,json; data=json.load(sys.stdin); print(data['issue_number'] or data['number'])")
TITLE=$(printf '%s\n' "$CLAIM" | python3 -c "import sys,json; print(json.load(sys.stdin)['title'])")
```
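The three extractions can also be done in a single pass. This sketch assumes the claim-next JSON payload exposes the item_id, issue_number/number, and title keys used above; `parse_claim` is a name invented here.

```python
import json

def parse_claim(claim_json):
    """Extract board item metadata from a claim-next JSON payload.

    Assumes the payload exposes 'item_id', 'issue_number' (or 'number'),
    and 'title', as in the shell commands above.
    """
    data = json.loads(claim_json)
    item_id = data["item_id"]
    # Fall back to 'number' when 'issue_number' is absent or null.
    issue = data.get("issue_number") or data["number"]
    title = data["title"]
    return item_id, issue, title
```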

1. Create Worktree

Create an isolated worktree for this issue:

```bash
REPO_ROOT=$(pwd)
WORKTREE_JSON=$(python3 scripts/pipeline_worktree.py enter --name "issue-$ISSUE" --format json)
WORKTREE_DIR=$(printf '%s\n' "$WORKTREE_JSON" | python3 -c "import sys,json; print(json.load(sys.stdin)['worktree_dir'])")
cd "$WORKTREE_DIR"
```

All subsequent steps run inside the worktree. This ensures the user's main checkout is never modified.

issue-to-pr (Step 3) handles all PR detection and branch management — if an existing open PR exists, it checks out that branch and resumes; otherwise it creates a fresh branch from origin/main.

2. Claim Result

claim-next ready has already moved the selected issue from Ready to In progress. Keep using ITEM_ID from the CLAIM JSON payload for later board transitions.

3. Run issue-to-pr

Invoke the issue-to-pr skill (working directory is the worktree):

/issue-to-pr "$ISSUE"

This handles the full pipeline: fetch issue, verify Good label, research, write plan, create PR, implement. If an existing open PR is detected, issue-to-pr will resume it (skip plan creation, jump to execution).

If issue-to-pr fails: move the issue to OnHold with a diagnostic comment (see Step 4).

4. Move to "Review pool"

After issue-to-pr fully succeeds, move the issue to the Review pool column. "Fully succeeds" means the implementation work is committed, the temporary plan file has been deleted, the PR implementation summary comment has been posted, the branch has been pushed, and the working tree is clean aside from ignored/generated files:

```bash
python3 scripts/pipeline_board.py move <ITEM_ID> review-pool
```
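One piece of the "fully succeeds" check, a clean working tree, can be sketched by parsing git status --porcelain output. By default porcelain output omits ignored files, so any remaining line means the tree is dirty; `is_clean_tree` is a name invented for this example.

```python
def is_clean_tree(porcelain_output):
    """True if `git status --porcelain` reported no pending changes.

    Porcelain output omits ignored files by default, so any non-empty
    line indicates an uncommitted or untracked change.
    """
    return not any(line.strip() for line in porcelain_output.splitlines())
```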

If issue-to-pr failed (whether or not a PR was created): move the issue to OnHold with a diagnostic comment explaining what went wrong:

```bash
gh issue comment <ISSUE> --body "run-pipeline: implementation failed. <brief reason>"
python3 scripts/pipeline_board.py move <ITEM_ID> on-hold
```

Forward-only rule: never move items backward (e.g., back to Ready). All failures go to OnHold for human triage.
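The forward-only rule can be sketched as a guard over the board's column order. This is illustrative: the column names follow the statuses in the Constants table, and `is_forward_move` is invented for this example.

```python
# Board columns in pipeline order, following the Constants table above.
COLUMN_ORDER = ["Ready", "In progress", "Review pool", "Under review",
                "Final review", "Done"]

def is_forward_move(src, dst):
    """Allow only forward transitions; OnHold is always a legal escape hatch."""
    if dst == "OnHold":
        return True
    return COLUMN_ORDER.index(dst) > COLUMN_ORDER.index(src)
```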

5. Clean Up Worktree

After the issue is processed (success or failure), clean up the worktree:

```bash
cd "$REPO_ROOT"
python3 scripts/pipeline_worktree.py cleanup --worktree "$WORKTREE_DIR"
```

6. Report

Print a summary:

Pipeline complete:
  Issue:  #97 [Rule] BinPacking to ILP
  PR:     #200
  Status: Awaiting agentic review
  Board:  Moved Ready -> In Progress -> Review pool

Common Mistakes

| Mistake | Fix |
| --- | --- |
| Issue not in Ready column | Verify status before processing; STOP if not Ready |
| Picking a Rule whose model doesn't exist | Hard constraint: both source and target models must exist on main — pending Model issues do NOT count |
| Missing project scopes | Run gh auth refresh -s read:project,project |
| Moving items backward to Ready | Never move backward — all failures go to OnHold with a diagnostic comment |
| Scoring a variant as "related" | Weighted/unweighted variants or graph-subtype specializations of existing problems score 0 on C2 |
| Worktree left behind on failure | Always run pipeline_worktree.py cleanup in Step 5 |
| Working in main checkout | All work happens in the worktree — never modify the main checkout |
| Missing items from project board | gh project item-list defaults to 30 items — always use --limit 500 |
| Inventing pipeline_board.py subcommands | Only next, claim-next, ack, list, move, backlog exist |

Related Skills

Looking for an alternative to run-pipeline or another community skill for your workflow? Explore these related open-source skills.

  • openclaw-release-maintainer (openclaw, 333.8k): Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
  • widget-generator (f, 149.6k): Generate customizable widget plugins for the prompts.chat feed system
  • flags (vercel, 138.4k): The React Framework
  • pr-review (pytorch, 98.6k): Tensors and Dynamic neural networks in Python with strong GPU acceleration