fix — ai-agents community, ai-agents, ide skills, claude-agents, claude-code, claude-skills, codex-agents, codex-cli

v1.0.0

About this Skill

Perfect for Debugging Agents needing a reproduce-first workflow for Python and ML open-source development. Bug-fixing workflow — diagnose the problem, reproduce it with a regression test, apply a targeted fix, then verify with linting, quality checks, and optional optimization.

# Core Topics

Borda
Updated: 3/3/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 7/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Locale and body language aligned
Review Score: 7/11
Quality Score: 45
Canonical Locale: en
Detected Body Locale: en


Core Value

Empowers agents to fix software bugs using a disciplined approach with regression tests, minimal fixes, and quality checks, leveraging Python and ML libraries for comprehensive debugging.

Ideal Agent Persona

Perfect for Debugging Agents needing a reproduce-first workflow for Python and ML open-source development.

Capabilities Granted for fix

Debugging TypeError exceptions in Python code
Generating regression tests for ML model bugs
Applying minimal fixes to software bugs with linting and quality checks

! Prerequisites & Limits

  • Requires Python and ML open-source development environment
  • Limited to GitHub issues or plain text bug descriptions as input

Why this page is reference-only

  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Curated Collection Review

Reviewed In Curated Collections

This section shows how Killer-Skills has already collected, reviewed, and maintained this skill inside first-party curated paths. For operators and crawlers alike, this is a stronger signal than treating the upstream README as the primary story.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment

⚡️ Ready to unleash?

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

Boot Container Sandbox

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is fix?

Perfect for Debugging Agents needing a reproduce-first workflow for Python and ML open-source development. Bug-fixing workflow — diagnose the problem, reproduce it with a regression test, apply a targeted fix, then verify with linting, quality checks, and optional optimization.

How do I install fix?

Run the command: npx killer-skills add Borda/.home/fix. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for fix?

Key use cases include: Debugging TypeError exceptions in Python code, Generating regression tests for ML model bugs, Applying minimal fixes to software bugs with linting and quality checks.

Which IDEs are compatible with fix?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for fix?

Requires Python and ML open-source development environment. Limited to GitHub issues or plain text bug descriptions as input.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add Borda/.home/fix. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use fix immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

fix

Install fix, an AI agent skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md
Readonly
Supporting Evidence
<objective>

Fix software bugs with a disciplined reproduce-first workflow. Before touching any code, understand the root cause and capture the bug in a regression test. Then apply the minimal fix, verify all tests pass, and finish with linting and quality checks. The regression test stays in the codebase to prevent re-introduction.

</objective>

<inputs>
  • $ARGUMENTS: required — one of:
    • A bug description in plain text (e.g., "TypeError when passing None to transform()")
    • A GitHub issue number (e.g., 123 — fetched via gh issue view)
    • An error message or traceback snippet
    • A failing test name (e.g., tests/test_transforms.py::test_none_input)
</inputs>

<workflow>

Task tracking: per CLAUDE.md, create tasks (TaskCreate) for each major phase. Mark in_progress/completed throughout. On loop retry or scope change, create a new task.

Step 1: Understand the problem

Gather all available context about the bug:

```bash
# If issue number: fetch the full issue with comments
gh issue view <number> --comments
```

If an error message or pattern was provided: use the Grep tool (pattern <error_pattern>, path .) to search the codebase for the failing code path. Adjust to src/, lib/, or app/ as appropriate for the project layout.

```bash
# If failing test: run it to capture the exact failure
python -m pytest <test_path> -v --tb=long 2>&1 | tail -40
```

Spawn a sw-engineer agent to analyze the failing code path and identify:

  • The root cause (not just the symptom)
  • The minimal code surface that needs to change
  • Any related code that might be affected by the fix

Step 2: Reproduce the bug

Create or identify a test that demonstrates the failure:

```bash
# If a failing test already exists — run it to confirm it fails
python -m pytest <test_file>::<test_name> -v --tb=short

# If no test exists — write a regression test that captures the bug
# Name it: test_<function>_<bug_description> (e.g., test_transform_none_input)
```

Spawn a qa-specialist agent to write the regression test if one doesn't exist:

  • The test must fail against the current code (proving the bug exists)
  • Use pytest.mark.parametrize if the bug affects multiple input patterns
  • Keep the test minimal — exercise exactly the broken behavior
  • Add a brief comment linking to the issue if applicable (e.g., # Regression test for #123)
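As a sketch of these guidelines, a finished regression test for the transform()/None example used earlier might look like the following. Everything here is illustrative: `transform` is a stand-in for the project function, and the fixed behavior (None maps to an empty list) is assumed for the sake of the example.

```python
# Regression test for #123 — illustrative sketch, not part of this skill.
# `transform` stands in for the project function that crashed on None;
# the post-fix behavior shown here (None -> []) is an assumption.
def transform(value):
    if value is None:
        return []
    return [value * 2]


def test_transform_none_input():
    # Exercise exactly the broken behavior: a None input must not raise.
    assert transform(None) == []
```

Before the fix lands, this same test must fail (the call would raise), which is what the gate below checks.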

Gate: the regression test must fail before proceeding. If it passes, the bug isn't properly captured — revisit Step 1.

Step 3: Apply the fix

Make the minimal change to fix the root cause:

  1. Edit only the code necessary to resolve the bug
  2. Run the regression test to confirm it now passes:
    ```bash
    python -m pytest <test_file>::<test_name> -v --tb=short
    ```
  3. Run the full test suite for the affected module to check for regressions:
    ```bash
    # Step 3: regression gate — confirms fix does not break existing tests
    python -m pytest <test_dir> -v --tb=short
    ```
  4. If any existing tests break: the fix has side effects — reconsider the approach
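To make "minimal change" concrete, here is a hypothetical before/after for the transform()/None bug: only the guard is added, and nothing around it is refactored. Both functions are illustrative stand-ins, not code from this skill.

```python
# Hypothetical before/after for the transform()/None bug (illustrative only).

def transform_before(value):
    return [value * 2]          # buggy: None * 2 raises TypeError


def transform_after(value):
    if value is None:           # the entire fix: one guard, no refactoring
        return []
    return [value * 2]
```

Anything beyond the guard (renames, formatting sweeps, drive-by cleanups) belongs in a follow-up /refactor pass, not in the bug fix.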

Step 4: Linting and quality

Spawn a linting-expert agent (or run directly) to ensure the fix meets code quality standards:

```bash
# Run ruff for linting and formatting
ruff check <changed_files> --fix
ruff format <changed_files>

# Run mypy for type checking if configured
mypy <changed_files> --no-error-summary 2>&1 | head -20

# Step 4: final full-suite clean run before commit
python -m pytest <test_dir> -v --tb=short
```

Step 5: Verify and report

Output a structured report:

## Fix Report: <bug summary>

### Root Cause
[1-2 sentence explanation of what was wrong and why]

### Regression Test
- File: <test_file>
- Test: <test_name>
- Confirms: [what behavior the test locks in]

### Changes Made
| File | Change | Lines |
|------|--------|-------|
| path/to/file.py | description of fix | -N/+M |

### Test Results
- Regression test: PASS
- Full suite: PASS (N tests)
- Lint: clean

### Follow-up
- [any related issues or code that should be reviewed]

## Confidence
**Score**: [0.N]
**Gaps**: [e.g., could not reproduce locally, partial traceback only, fix not runtime-tested]

Team Mode (--team)

Use when the bug has competing root-cause hypotheses or spans multiple modules. Skip for single-file bugs — use the default workflow above.

When to trigger: root cause is unclear after Step 1, OR the bug manifests across 3+ modules.

Workflow with --team:

  1. Lead spawns 2–3 sw-engineer teammates, each investigating a distinct hypothesis
  2. Broadcast current evidence to all teammates: broadcast {bug: <description>, traceback: <key lines>}
  3. Each teammate investigates independently — announces with alpha PROTO:v2.0 and claims a hypothesis
  4. Teammates report findings via lead (hub-and-spoke); lead facilitates cross-challenge between competing analyses
  5. Lead synthesizes the consensus root cause, then proceeds with Steps 2–5 above (regression test, fix, lint, report) — all in lead context

Spawn prompt template:

You are a sw-engineer teammate debugging: [bug description].
Read .claude/TEAM_PROTOCOL.md — use AgentSpeak v2 for inter-agent messages.
Your hypothesis: [hypothesis N]. Investigate ONLY this root cause.
Report findings to @lead using deltaT# or epsilonT# codes.
Compact Instructions: preserve file paths, errors, line numbers. Discard verbose tool output.
</workflow>

<notes>
  • Reproduce first: never fix a bug you can't demonstrate with a test — the test is the proof
  • Minimal fix: change only what's necessary to resolve the root cause — avoid incidental refactoring
  • The regression test is a permanent contribution — it prevents the bug from recurring
  • If the bug is in .claude/ config files: run self-mentor audit + /sync after fixing
  • Related agents: sw-engineer (root cause analysis), qa-specialist (regression test), linting-expert (quality)
  • Follow-up chains:
    • Fix involves structural improvements beyond the bug → /refactor for test-first code quality pass
    • Fix touches non-trivial code paths → /review for full multi-agent quality validation
    • Fix required consistent renames or annotation changes across many files → /codex to delegate the mechanical sweep
</notes>

Related Skills

Looking for an alternative to fix or another community skill for your workflow? Explore these related open-source skills.

  • openclaw-release-maintainer (openclaw) — Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞
  • widget-generator (f) — Generate customizable widget plugins for the prompts.chat feed system
  • flags (vercel) — The React Framework
  • pr-review (pytorch) — Tensors and Dynamic neural networks in Python with strong GPU acceleration