fix-ci — a community skill from llamafarm

v1.0.0

About this Skill

Perfect for DevOps Agents needing automated CI troubleshooting and GitHub Actions failure analysis. Fetch GitHub CI failure information, analyze root causes, reproduce locally, and propose a fix plan. Use `/fix-ci` for current branch or `/fix-ci <run-id>` for a specific run.

llama-farm
Updated: 3/12/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Landing Page Review Score: 9/11

Killer-Skills keeps this page indexable because it adds recommendation, limitations, and review signals beyond the upstream repository text.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review
  • Locale and body language aligned

Review Score: 9/11
Quality Score: 60
Canonical Locale: en
Detected Body Locale: en


Core Value

Empowers agents to automate CI troubleshooting by fetching GitHub Actions failures, analyzing logs, reproducing issues locally, and creating a fix plan using GitHub CLI and authentication protocols like OAuth.

Ideal Agent Persona

Perfect for DevOps Agents needing automated CI troubleshooting and GitHub Actions failure analysis.

Capabilities Granted for fix-ci

Automating GitHub Actions failure analysis
Generating fix plans for CI issues
Reproducing CI issues locally for debugging

Prerequisites & Limits

  • Requires GitHub CLI installation and authentication
  • Limited to GitHub Actions and GitHub CLI compatibility

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is fix-ci?

Perfect for DevOps Agents needing automated CI troubleshooting and GitHub Actions failure analysis. Fetch GitHub CI failure information, analyze root causes, reproduce locally, and propose a fix plan. Use `/fix-ci` for current branch or `/fix-ci <run-id>` for a specific run.

How do I install fix-ci?

Run the command: npx killer-skills add llama-farm/llamafarm/fix-ci. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for fix-ci?

Key use cases include: Automating GitHub Actions failure analysis, Generating fix plans for CI issues, Reproducing CI issues locally for debugging.

Which IDEs are compatible with fix-ci?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for fix-ci?

Requires GitHub CLI installation and authentication. Limited to GitHub Actions and GitHub CLI compatibility.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add llama-farm/llamafarm/fix-ci. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use fix-ci immediately in the current project.

Upstream Repository Material


Upstream Source

fix-ci

Install fix-ci, an AI agent skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md

Fix CI Skill

Automates CI troubleshooting by fetching GitHub Actions failures, analyzing logs, reproducing issues locally, and creating a fix plan for user approval.


Execution Workflow

Step 1: Prerequisites Check

Verify the GitHub CLI is installed and authenticated:

```bash
gh --version && gh auth status
```

If gh is not installed:

  • Inform user: "GitHub CLI is required. Install with: brew install gh"
  • Exit gracefully

If not authenticated:

  • Inform user: "Please authenticate with: gh auth login"
  • Exit gracefully
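
The two checks above can be sketched as reusable functions. This is an illustrative sketch; `require` and `check_prereqs` are hypothetical names, not part of the skill itself.

```shell
# Succeeds when the named binary is on PATH (illustrative helper).
require() { command -v "$1" >/dev/null 2>&1; }

# Run both prerequisite checks; return non-zero instead of exiting
# so the caller can decide how to stop (illustrative sketch).
check_prereqs() {
  if ! require gh; then
    echo "GitHub CLI is required. Install with: brew install gh"
    return 1
  fi
  if ! gh auth status >/dev/null 2>&1; then
    echo "Please authenticate with: gh auth login"
    return 1
  fi
}
```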

Step 2: Parse Arguments

Determine the mode based on arguments:

  • No arguments (/fix-ci): Fetch failures for the current branch only
  • With run-id (/fix-ci <run-id>): Fetch specific run (bypasses branch scoping)
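
The dispatch above can be sketched as a tiny helper; `parse_mode` is an illustrative name, not part of the skill:

```shell
# Return the mode implied by the arguments: "branch" when none are
# given, "run:<id>" when a run-id is supplied (illustrative sketch).
parse_mode() {
  if [ -z "$1" ]; then
    echo "branch"
  else
    echo "run:$1"
  fi
}
```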

Step 3: Fetch Failed Run

Default mode (current branch):

```bash
BRANCH=$(git branch --show-current)
gh run list --branch "$BRANCH" --status failure --limit 1 --json databaseId,name,headBranch,workflowName,createdAt
```

Specific run mode:

```bash
gh run view <run-id> --json databaseId,name,headBranch,workflowName,jobs,conclusion
```

If no failures found:

  • Report: "No failed runs found for branch $BRANCH. CI is green!"
  • Optionally show recent successful runs:
    ```bash
    gh run list --branch "$BRANCH" --limit 3 --json databaseId,conclusion,workflowName,createdAt
    ```
  • Exit gracefully

Step 4: Get Failure Details

Once a failed run is identified, gather comprehensive details:

```bash
RUN_ID=<the-run-id>

# Get failed jobs with their steps
gh run view $RUN_ID --json jobs --jq '.jobs[] | select(.conclusion == "failure") | {name, conclusion, steps: [.steps[] | select(.conclusion == "failure")]}'

# Get failed step logs (critical for debugging)
gh run view $RUN_ID --log-failed 2>&1 | head -500

# Get verbose run info
gh run view $RUN_ID --verbose
```

Log handling:

  • Truncate logs to 500 lines to avoid context overflow
  • Note to user: "Showing first 500 lines of failed logs. Full logs available on GitHub."

Step 5: Download Artifacts (if available)

Attempt to download any debug artifacts:

```bash
# Try common artifact names - failures are OK (not all runs have artifacts)
gh run download $RUN_ID -n "coverage" -D /tmp/ci-debug/ 2>/dev/null || true
gh run download $RUN_ID -n "test-results" -D /tmp/ci-debug/ 2>/dev/null || true
gh run download $RUN_ID -n "logs" -D /tmp/ci-debug/ 2>/dev/null || true
```

If artifacts downloaded, read them for additional context.

Step 6: Analyze Failure Type

Categorize the failure based on log patterns:

| Pattern | Failure Type | Root Cause Area |
|---|---|---|
| `FAIL:`, `--- FAIL`, `FAILED` | Test Failure | Specific test case |
| `ruff check`, `ruff format` | Lint Error | Code style/formatting |
| `ModuleNotFoundError`, `ImportError` | Import Error | Missing dependency |
| `TypeError`, `AttributeError` | Runtime Error | Type mismatch |
| `SyntaxError` | Syntax Error | Invalid code |
| `AssertionError` | Assertion Failure | Test expectation mismatch |
| `TimeoutError`, `timed out` | Timeout | Performance/hang |
| `PermissionError`, `EACCES` | Permission Error | File/resource access |
| `ConnectionError`, `ECONNREFUSED` | Network Error | External service |

Extract key information:

  • Failed test name/file (if applicable)
  • Error message
  • Stack trace location (file:line)
  • Environment variables or config issues
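
A minimal classifier over a subset of the patterns in the table could look like this; `classify_failure` and the category labels are illustrative, and the real skill matches the full table:

```shell
# Classify one log line against a subset of the failure patterns
# above; unmatched lines fall through to "unknown" (illustrative).
classify_failure() {
  case "$1" in
    *"--- FAIL"*|*"FAIL:"*|*FAILED*)     echo "test-failure" ;;
    *ModuleNotFoundError*|*ImportError*) echo "import-error" ;;
    *SyntaxError*)                       echo "syntax-error" ;;
    *TimeoutError*|*"timed out"*)        echo "timeout" ;;
    *)                                   echo "unknown" ;;
  esac
}
```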

Step 7: Map to Local Test Commands

Determine the appropriate local command based on the CI job:

| CI Workflow/Job | Local Command |
|---|---|
| test-cli | `cd cli && go test ./...` |
| test-python (server) | `cd server && uv run pytest -v` |
| test-python (rag) | `cd rag && uv run pytest -v` |
| test-python (config) | `cd config && uv run pytest -v` |
| test-python (runtime) | `cd runtimes/universal && uv run pytest -v` |
| lint (python) | `uv run ruff check .` |
| lint (go) | `cd cli && golangci-lint run` |
| type-check | `uv run mypy .` |
| build-cli | `nx build cli` |
| build-designer | `cd designer && npm run build` |

For specific test failures, narrow down the command:

  • Python: cd <dir> && uv run pytest -v <test_file>::<test_name>
  • Go: cd cli && go test -v -run <TestName> ./...
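
The job-to-command mapping above could be expressed as a lookup function; this sketch covers only a subset of the table, and `local_command_for_job` is an illustrative name:

```shell
# Map a CI job name to its local reproduction command
# (subset of the table above; illustrative sketch).
local_command_for_job() {
  case "$1" in
    test-cli)               echo "cd cli && go test ./..." ;;
    "test-python (server)") echo "cd server && uv run pytest -v" ;;
    "lint (python)")        echo "uv run ruff check ." ;;
    type-check)             echo "uv run mypy ." ;;
    *)                      echo "" ;;
  esac
}
```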

Step 8: Reproduce Locally

Run the mapped local command to confirm the failure reproduces:

```bash
# Example for Python test
cd server && uv run pytest -v tests/test_api.py::test_health_check
```

Outcome A - Failure reproduces locally:

  • Good! Continue to fix plan
  • Report: "Successfully reproduced failure locally"

Outcome B - Failure does NOT reproduce locally:

  • Note: "Could not reproduce locally. Possible causes:"
    • Flaky test (timing-dependent)
    • Environment difference (CI has different deps/config)
    • Race condition
  • Suggest: "Consider re-running CI with gh run rerun $RUN_ID"
  • Ask user how to proceed (investigate further or skip)

Step 9: Analyze Root Cause

Based on the failure type and logs, identify:

  1. What failed: Specific test, lint rule, or build step
  2. Why it failed: The actual error condition
  3. Where to fix: File(s) and line(s) that need changes
  4. How to fix: Proposed changes

Use available tools to explore:

  • Read the failing test file
  • Read the code being tested
  • Search for related patterns in the codebase
  • Check recent changes that might have caused the failure

Step 10: Enter Plan Mode

Use EnterPlanMode to create a formal fix plan. The plan should include:

```markdown
# CI Fix Plan

## Problem Statement
[Summary of the CI failure from logs]

## Failure Details
- **Run ID**: <run-id>
- **Workflow**: <workflow-name>
- **Job**: <job-name>
- **Error Type**: <categorized-type>

## Root Cause Analysis
[Explanation of why the failure occurred]

## Affected Files
- `path/to/file1.py` (line X)
- `path/to/file2.py` (line Y)

## Proposed Changes

### Change 1: [Brief description]
[Specific edit to make]

### Change 2: [Brief description]
[Specific edit to make]

## Verification Steps
1. Run: `<local-test-command>`
2. Expected: All tests pass
3. Optional: Run full test suite with `<full-suite-command>`

## Notes
- [Any caveats or considerations]
```

Step 11: User Approval Gate

Present the plan and wait for explicit user approval:

  • User approves: Proceed to execute fixes
  • User modifies: Incorporate feedback, update plan
  • User rejects: Exit gracefully without changes

CRITICAL: Never make code changes without user approval.

Step 12: Execute Fix (after approval only)

  1. Make the proposed code changes using the Edit tool
  2. Run local tests to verify the fix:

    ```bash
    <local-test-command>
    ```

  3. Report results:
    • Success: "Fix verified locally. Tests pass."
    • Failure: "Fix did not resolve the issue. [details]"
IMPORTANT: Do NOT auto-commit changes. Leave committing to the user or /commit-push-pr skill.


Error Handling

| Scenario | Action |
|---|---|
| gh CLI not installed | Direct user to install: `brew install gh` |
| gh not authenticated | Direct user to: `gh auth login` |
| No failures found | Report CI is green, exit gracefully |
| Rate limit exceeded | Suggest waiting or using `gh auth refresh` |
| Run not found | Verify run ID, suggest `gh run list` to find valid IDs |
| Large logs (>500 lines) | Truncate, note full logs on GitHub |
| Local reproduction fails | Note as flaky/env issue, offer re-run option |
| Network errors | Suggest retry, check connection |
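
For the transient network-error case, a retry wrapper is one way to implement the suggested retry; this is a hedged sketch, and `retry` with its attempt count is an illustrative choice, not part of the skill:

```shell
# Retry a command up to 3 times with a short pause between attempts
# (illustrative wrapper for the network-error scenario above).
retry() {
  n=0
  until "$@"; do
    n=$((n + 1))
    [ "$n" -ge 3 ] && return 1
    sleep "$n"
  done
}
```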

Output Format

On finding a failure:

CI Failure Found
Run: #12345 (workflow-name)
Branch: feature-branch
Failed Job: test-python
Error Type: Test Failure

Analyzing logs...
[Summary of failure]

Reproducing locally...
[Result]

Entering plan mode to propose fix...

On success (after fix):

Fix Applied
- Modified: path/to/file.py
- Verification: Tests pass locally

Next steps:
- Review the changes
- Run `/commit-push-pr` to commit and push
- CI will re-run automatically on push

Notes for the Agent

  1. Always scope to current branch by default - Users expect /fix-ci to fix their current work, not random failures
  2. Truncate logs wisely - CI logs can be huge; extract the relevant error sections
  3. Reproduce before fixing - Don't propose fixes for issues that can't be reproduced
  4. Plan mode is mandatory - Always use EnterPlanMode before making changes
  5. Never auto-commit - The user controls when changes are committed
  6. Be specific in analysis - Generic advice isn't helpful; identify exact files and lines
  7. Handle flaky tests - If reproduction fails, acknowledge it might be flaky

Related Skills

Looking for an alternative to fix-ci or another community skill for your workflow? Explore these related open-source skills.

  • openclaw-release-maintainer (openclaw): Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
  • widget-generator (f): Generate customizable widget plugins for the prompts.chat feed system
  • flags (vercel): The React Framework
  • pr-review (pytorch): Tensors and Dynamic neural networks in Python with strong GPU acceleration