iterate-pr — GitHub CLI automation

Verified
v1.0.0

About this Skill

iterate-pr is a GitHub CLI-based skill that automates the feedback-fix-push cycle for continuous integration checks. It is a strong fit for CI/CD agents that need automated PR iteration and GitHub integration.

Features

Uses GitHub CLI (`gh`) for authentication
Runs scripts from the repository root directory
Fetches CI check status with `scripts/fetch_pr_checks.py`
Extracts failure snippets from logs
Continuously pushes fixes until all checks are green
Addresses review feedback

Core Topics

getsentry
Updated: 3/25/2026

Killer-Skills Review

Review Score: 11/11
Quality Score: 74
Canonical Locale: en
Detected Body Locale: en


Core Value

Empowers agents to automate the feedback-fix-push-wait cycle using the GitHub CLI (`gh`) and `uv` for Python package management, ensuring continuous integration checks pass and review feedback is addressed.

Ideal Agent Persona

Perfect for CI/CD Agents needing automated PR iteration and GitHub integration.

Capabilities Granted for iterate-pr

Automating CI failure resolution
Addressing review feedback iteratively
Continuously pushing fixes until all checks are green

Prerequisites & Limits

  • Requires an authenticated GitHub CLI (`gh`)
  • Requires the `uv` CLI for Python package management
  • Must be run from the repository root directory

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.



FAQ & Installation Steps


Frequently Asked Questions

What is iterate-pr?

iterate-pr is a GitHub CLI-based skill that automates the feedback-fix-push cycle for continuous integration checks. It is aimed at CI/CD agents that need automated PR iteration and GitHub integration.

How do I install iterate-pr?

Run the command: npx killer-skills add getsentry/skills/iterate-pr. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for iterate-pr?

Key use cases include: Automating CI failures resolution, Addressing review feedback iteratively, Continuously pushing fixes until all checks are green.

Which IDEs are compatible with iterate-pr?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for iterate-pr?

It requires an authenticated GitHub CLI (`gh`) and the `uv` CLI for Python package management, and it must be run from the repository root directory.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add getsentry/skills/iterate-pr. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use iterate-pr immediately in the current project.

Upstream Repository Material


Upstream Source

iterate-pr

Install iterate-pr, a skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md

Iterate on PR Until CI Passes

Continuously iterate on the current branch until all CI checks pass and review feedback is addressed.

Requires: GitHub CLI (gh) authenticated.

Requires: The `uv` CLI for Python package management; install guide at https://docs.astral.sh/uv/getting-started/installation/

Important: All scripts must be run from the repository root directory (where .git is located), not from the skill directory. Use the full path to the script via ${CLAUDE_SKILL_ROOT}.

Bundled Scripts

scripts/fetch_pr_checks.py

Fetches CI check status and extracts failure snippets from logs.

```bash
uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py [--pr NUMBER]
```

Returns JSON:

```json
{
  "pr": {"number": 123, "branch": "feat/foo"},
  "summary": {"total": 5, "passed": 3, "failed": 2, "pending": 0},
  "checks": [
    {"name": "tests", "status": "fail", "log_snippet": "...", "run_id": 123},
    {"name": "lint", "status": "pass"}
  ]
}
```
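For orientation, the summary block alone is enough to pick the next move in the loop. A minimal sketch in Python (`decide_next_action` is an illustrative helper, not part of the bundled scripts):

```python
import json

def decide_next_action(report: dict) -> str:
    """Map a fetch_pr_checks.py report to the workflow's next step."""
    summary = report["summary"]
    if summary["pending"] > 0:
        return "wait"   # checks still running: poll again later
    if summary["failed"] > 0:
        return "fix"    # read each failed check's log_snippet
    return "done"       # all green: proceed to exit conditions

report = json.loads("""
{ "pr": {"number": 123, "branch": "feat/foo"},
  "summary": {"total": 5, "passed": 3, "failed": 2, "pending": 0},
  "checks": [
    {"name": "tests", "status": "fail", "log_snippet": "...", "run_id": 123},
    {"name": "lint", "status": "pass"}
  ] }
""")
print(decide_next_action(report))  # fix
```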

scripts/fetch_pr_feedback.py

Fetches and categorizes PR review feedback using the LOGAF scale.

```bash
uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py [--pr NUMBER]
```

Returns JSON with feedback categorized as:

  • high - Must address before merge (h:, blocker, changes requested)
  • medium - Should address (m:, standard feedback)
  • low - Optional (l:, nit, style, suggestion)
  • bot - Informational automated comments (Codecov, Dependabot, etc.)
  • resolved - Already resolved threads

Review bot feedback (from Sentry, Warden, Cursor, Bugbot, CodeQL, etc.) appears in high/medium/low with review_bot: true — it is NOT placed in the bot bucket.

Each feedback item may also include:

  • thread_id - GraphQL node ID for inline review comments (used for replies via reply_to_thread.py)
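The prefix conventions above can be sketched as a rough bucketing rule. This is illustrative only; the real categorization lives in `fetch_pr_feedback.py`:

```python
def logaf_bucket(comment_body: str) -> str:
    """Rough LOGAF bucketing by prefix convention (illustrative sketch)."""
    body = comment_body.strip().lower()
    if body.startswith(("h:", "blocker")):
        return "high"
    if body.startswith(("l:", "nit", "style", "suggestion")):
        return "low"
    return "medium"  # "m:" prefix and standard feedback both land here
```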

scripts/reply_to_thread.py

Replies to PR review threads. Batches multiple replies into a single GraphQL call.

```bash
uv run ${CLAUDE_SKILL_ROOT}/scripts/reply_to_thread.py THREAD_ID "body" [THREAD_ID "body" ...]
```

Arguments are alternating (thread_id, body) pairs. The script automatically appends *— Claude Code* attribution if not already present. Example:

```bash
uv run ${CLAUDE_SKILL_ROOT}/scripts/reply_to_thread.py \
  PRRT_abc "Fixed the null check." \
  PRRT_def "Replaced with path-segment counting."
```
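The alternating-pair convention and the automatic attribution can be sketched as follows; `with_attribution` and `build_argv` are hypothetical helpers mirroring the described behavior, not the script's actual internals:

```python
ATTRIBUTION = "*— Claude Code*"  # marker the script appends, per the docs above

def with_attribution(body: str) -> str:
    """Append attribution unless the body already ends with it (sketch)."""
    if body.rstrip().endswith(ATTRIBUTION):
        return body
    return body.rstrip() + "\n\n" + ATTRIBUTION

def build_argv(replies):
    """Flatten (thread_id, body) pairs into the alternating argv the script expects."""
    argv = []
    for thread_id, body in replies:
        argv += [thread_id, with_attribution(body)]
    return argv
```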

Workflow

1. Identify PR

```bash
gh pr view --json number,url,headRefName
```

Stop if no PR exists for the current branch.
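A sketch of this step, assuming `gh` is installed and authenticated and, as `gh` normally does, exits non-zero when no PR exists for the branch; `parse_pr_view` and `current_pr` are illustrative names:

```python
import json
import subprocess

def parse_pr_view(stdout: str, returncode: int):
    """Interpret `gh pr view --json number,url,headRefName` output.
    A non-zero exit code means no PR was found for the current branch."""
    if returncode != 0:
        return None
    return json.loads(stdout)

def current_pr():
    # Assumes gh is installed and authenticated.
    result = subprocess.run(
        ["gh", "pr", "view", "--json", "number,url,headRefName"],
        capture_output=True, text=True,
    )
    return parse_pr_view(result.stdout, result.returncode)
```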

2. Gather Review Feedback

Run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py to get categorized feedback already posted on the PR.

3. Handle Feedback by LOGAF Priority

Auto-fix (no prompt):

  • high - must address (blockers, security, changes requested)
  • medium - should address (standard feedback)

When fixing feedback:

  • Understand the root cause, not just the surface symptom
  • Check for similar issues in nearby code or related files
  • Fix all instances, not just the one mentioned

This includes review bot feedback (items with review_bot: true). Treat it the same as human feedback:

  • Real issue found → fix it
  • False positive → skip, but explain why in a brief comment
  • Never silently ignore review bot feedback — always verify the finding

Prompt user for selection:

  • low - present numbered list and ask which to address:
Found 3 low-priority suggestions:
1. [l] "Consider renaming this variable" - @reviewer in api.py:42
2. [nit] "Could use a list comprehension" - @reviewer in utils.py:18
3. [style] "Add a docstring" - @reviewer in models.py:55

Which would you like to address? (e.g., "1,3" or "all" or "none")
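Parsing that answer format ("1,3", "all", "none") is straightforward; a sketch with a hypothetical `parse_selection` helper:

```python
def parse_selection(answer: str, n_items: int) -> list[int]:
    """Parse the user's reply to the low-priority prompt into item numbers."""
    answer = answer.strip().lower()
    if answer in ("", "none"):
        return []
    if answer == "all":
        return list(range(1, n_items + 1))
    picks = {int(tok) for tok in answer.replace(" ", "").split(",") if tok}
    return sorted(i for i in picks if 1 <= i <= n_items)  # drop out-of-range picks
```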

Skip silently:

  • resolved threads
  • bot comments (informational only — Codecov, Dependabot, etc.)

Replying to Comments

After processing each inline review comment, reply on the PR thread to acknowledge the action taken. Only reply to items with a thread_id (inline review comments).

When to reply:

  • high and medium items — whether fixed or determined to be false positives
  • low items — whether fixed or declined by the user

How to reply: Use ${CLAUDE_SKILL_ROOT}/scripts/reply_to_thread.py. Batch all replies for a round into a single call:

```bash
uv run ${CLAUDE_SKILL_ROOT}/scripts/reply_to_thread.py \
  PRRT_abc "Fixed — description of change." \
  PRRT_def "Not applicable — reason."
```

Reply format:

  • 1-2 sentences: what was changed, why it's not an issue, or acknowledgment of declined items
  • The script automatically appends *— Claude Code* attribution if not already present
  • Before replying, check if the thread already has a reply ending with *- Claude Code* or *— Claude Code* to avoid duplicates on re-loops
  • If the script fails, log and continue — do not block the workflow

4. Check CI Status

Run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py to get structured failure data.

Wait if pending: If review bot checks (sentry, warden, cursor, bugbot, seer, codeql) are still running, wait before proceeding—they post actionable feedback that must be evaluated. Informational bots (codecov) are not worth waiting for.
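The wait decision reduces to matching pending check names against the review-bot list; a sketch (`should_wait` is an illustrative helper):

```python
REVIEW_BOTS = ("sentry", "warden", "cursor", "bugbot", "seer", "codeql")

def should_wait(pending_check_names: list[str]) -> bool:
    """Wait only if a pending check belongs to a review bot that posts
    actionable feedback; informational bots like codecov are not worth it."""
    return any(
        bot in name.lower() for name in pending_check_names for bot in REVIEW_BOTS
    )
```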

5. Fix CI Failures

For each failure in the script output:

  1. Read the log_snippet and trace backwards from the error to understand WHY it failed — not just what failed
  2. Read the relevant code and check for related issues (e.g., if a type error in one call site, check other call sites)
  3. Fix the root cause with minimal, targeted changes
  4. Find existing tests for the affected code and run them. If the fix introduces behavior not covered by existing tests, extend them to cover it (add a test case, not a whole new test file)

Do NOT assume what failed based on check name alone—always read the logs. Do NOT "quick fix and hope" — understand the failure thoroughly before changing code.

6. Verify Locally, Then Commit and Push

Before committing, verify your fixes locally:

  • If you fixed a test failure: re-run that specific test locally
  • If you fixed a lint/type error: re-run the linter or type checker on affected files
  • For any code fix: run existing tests covering the changed code

If local verification fails, fix before proceeding — do not push known-broken code.

```bash
git add <files>
git commit -m "fix: <descriptive message>"
git push
```

7. Monitor CI and Address Feedback

Poll CI status and review feedback in a loop instead of blocking:

  1. Run uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_checks.py to get current CI status
  2. If all checks passed → proceed to exit conditions
  3. If any checks failed (none pending) → return to step 5
  4. If checks are still pending:
     a. Run uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py for new review feedback
     b. Address any new high/medium feedback immediately (same as step 3)
     c. If changes were needed, commit and push (this restarts CI), then continue polling
     d. Sleep 30 seconds (don't increase on subsequent iterations), then repeat from sub-step 1
  5. After all checks pass, do a final feedback check: sleep 10, then run uv run ${CLAUDE_SKILL_ROOT}/scripts/fetch_pr_feedback.py. Address any new high/medium feedback — if changes are needed, return to step 6.
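The polling loop above can be sketched as a single function. All callables are hypothetical stand-ins: the `fetch_*` arguments wrap the bundled scripts, `fix_*` are the agent's repair steps, and `push` commits and pushes.

```python
import time

def poll_loop(fetch_checks, fetch_feedback, fix_failures, fix_feedback, push,
              interval=30):
    """Sketch of the polling loop. fix_feedback returns True when it changed code."""
    while True:
        summary = fetch_checks()["summary"]
        if summary["pending"] == 0:
            if summary["failed"] == 0:
                return "all checks passed"  # proceed to the final feedback check
            fix_failures()                  # step 5
            push()                          # restarts CI
            continue
        if fix_feedback(fetch_feedback()):  # new high/medium feedback was fixed
            push()
        time.sleep(interval)
```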

8. Repeat

If step 7 required code changes (from new feedback after CI passed), return to step 2 for a fresh cycle. CI failures during monitoring are already handled within step 7's polling loop.

Exit Conditions

Success: All checks pass, post-CI feedback re-check is clean (no new unaddressed high/medium feedback including review bot findings), user has decided on low-priority items.

Ask for help: Same failure after 2 attempts, feedback needs clarification, infrastructure issues.

Stop: No PR exists, branch needs rebase.
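These exit conditions can be collapsed into one decision function (illustrative sketch; the names are assumptions, not part of the skill):

```python
def exit_state(pr_exists, needs_rebase, all_checks_pass, new_high_medium,
               same_failure_attempts):
    """Map the exit conditions above to a single outcome."""
    if not pr_exists or needs_rebase:
        return "stop"
    if same_failure_attempts >= 2:
        return "ask for help"
    if all_checks_pass and not new_high_medium:
        return "success"
    return "continue"
```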

Fallback

If scripts fail, use gh CLI directly:

  • gh pr checks --json name,state,bucket,link
  • gh run view <run-id> --log-failed
  • gh api repos/{owner}/{repo}/pulls/{number}/comments

Related Skills

Looking for an alternative to iterate-pr or another official skill for your workflow? Explore these related open-source skills.

  • flags (facebook): Use when you need to check feature flag states, compare channels, or debug why a feature behaves differently across release channels.
  • extract-errors (facebook): A React error handling skill that automates extracting and assigning error codes, ensuring accurate and up-to-date error messages in React applications.
  • fix (facebook): A code optimization skill that automates formatting and linting using yarn prettier and linc.
  • flow (facebook): Use when you need to run Flow type checking, or when seeing Flow type errors in React code.