Killer-Skills

fix

v1.0.0
GitHub

About this Skill

Perfect for Debugging Agents needing a reproduce-first workflow for Python and ML open-source development. fix is a set of AI agent configurations and automated workflows optimized for Python and ML development, focused on regression test-driven bug fixing.

Features

Applies a minimal fix after capturing the bug in a regression test
Verifies all tests pass before finishing with linting and quality checks
Uses GitHub issues for bug descriptions and tracking
Supports plain-text input for bug descriptions
Creates regression tests to prevent re-introduction of bugs
Works with Codex and Claude agents for AI-powered coding assistance

Borda
Updated: 3/3/2026

Installation
Universal install (auto-detects Cursor, Windsurf, and VS Code):
> npx killer-skills add Borda/.home/fix

Agent Capability Analysis

The fix MCP Server by Borda is an open-source community integration for Claude and other AI agents, enabling seamless task automation and capability expansion.

Ideal Agent Persona

Perfect for Debugging Agents needing a reproduce-first workflow for Python and ML open-source development.

Core Value

Empowers agents to fix software bugs using a disciplined approach with regression tests, minimal fixes, and quality checks, leveraging Python and ML libraries for comprehensive debugging.

Capabilities Granted by the fix MCP Server

Debugging TypeError exceptions in Python code
Generating regression tests for ML model bugs
Applying minimal fixes to software bugs with linting and quality checks

Prerequisites & Limits

  • Requires Python and ML open-source development environment
  • Limited to GitHub issues or plain text bug descriptions as input
Project files: SKILL.md (6.2 KB), .cursorrules (1.2 KB), package.json (240 B)

SKILL.md

<objective>

Fix software bugs with a disciplined reproduce-first workflow. Before touching any code, understand the root cause and capture the bug in a regression test. Then apply the minimal fix, verify all tests pass, and finish with linting and quality checks. The regression test stays in the codebase to prevent re-introduction.

</objective>

<inputs>
  • $ARGUMENTS: required — one of:
    • A bug description in plain text (e.g., "TypeError when passing None to transform()")
    • A GitHub issue number (e.g., 123 — fetched via gh issue view)
    • An error message or traceback snippet
    • A failing test name (e.g., tests/test_transforms.py::test_none_input)
</inputs>

<workflow>

Task tracking: per CLAUDE.md, create tasks (TaskCreate) for each major phase. Mark in_progress/completed throughout. On loop retry or scope change, create a new task.

Step 1: Understand the problem

Gather all available context about the bug:

```bash
# If issue number: fetch the full issue with comments
gh issue view <number> --comments
```

If an error message or pattern was provided: use the Grep tool (pattern <error_pattern>, path .) to search the codebase for the failing code path. Adjust to src/, lib/, or app/ as appropriate for the project layout.

```bash
# If failing test: run it to capture the exact failure
python -m pytest <test_path> -v --tb=long 2>&1 | tail -40
```

Spawn a sw-engineer agent to analyze the failing code path and identify:

  • The root cause (not just the symptom)
  • The minimal code surface that needs to change
  • Any related code that might be affected by the fix

Step 2: Reproduce the bug

Create or identify a test that demonstrates the failure:

```bash
# If a failing test already exists — run it to confirm it fails
python -m pytest <test_file>::<test_name> -v --tb=short

# If no test exists — write a regression test that captures the bug
# Name it: test_<function>_<bug_description> (e.g., test_transform_none_input)
```

Spawn a qa-specialist agent to write the regression test if one doesn't exist (a sketch follows the list below):

  • The test must fail against the current code (proving the bug exists)
  • Use pytest.mark.parametrize if the bug affects multiple input patterns
  • Keep the test minimal — exercise exactly the broken behavior
  • Add a brief comment linking to the issue if applicable (e.g., # Regression test for #123)
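
A minimal sketch of such a test, using the hypothetical transform()/None bug from <inputs> (the import path, issue number, and empty-list contract are illustrative assumptions, not part of this skill):

```python
# Regression test for #123 (issue number and import path are placeholders)
import pytest

from mypackage.transforms import transform  # hypothetical module under test


@pytest.mark.parametrize("bad_input", [None])  # extend if more inputs trigger the bug
def test_transform_none_input(bad_input):
    # Must fail (TypeError) against the unfixed code, proving the bug is captured;
    # after the fix it locks in the contract that None yields an empty result.
    assert transform(bad_input) == []
```

Run it once before fixing to confirm the failure, per the gate below.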

Gate: the regression test must fail before proceeding. If it passes, the bug isn't properly captured — revisit Step 1.

Step 3: Apply the fix

Make the minimal change to fix the root cause:

  1. Edit only the code necessary to resolve the bug
  2. Run the regression test to confirm it now passes:
    ```bash
    python -m pytest <test_file>::<test_name> -v --tb=short
    ```
  3. Run the full test suite for the affected module to check for regressions:
    ```bash
    # Step 3: regression gate — confirms fix does not break existing tests
    python -m pytest <test_dir> -v --tb=short
    ```
  4. If any existing tests break: the fix has side effects — reconsider the approach
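
For illustration only, the minimal change for the hypothetical transform()/None bug is a single guard clause rather than a wider refactor (the function body and its empty-list contract are assumptions):

```python
def transform(data):
    """Normalize a list of strings; tolerate None as empty input."""
    if data is None:  # the entire fix: one explicit guard, nothing else touched
        return []     # assumed contract: None yields an empty result
    return [item.strip().lower() for item in data]
```

Anything beyond the guard (renames, style cleanups) belongs in a follow-up /refactor pass, not in the fix itself.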

Step 4: Linting and quality

Spawn a linting-expert agent (or run directly) to ensure the fix meets code quality standards:

```bash
# Run ruff for linting and formatting
ruff check <changed_files> --fix
ruff format <changed_files>

# Run mypy for type checking if configured
mypy <changed_files> --no-error-summary 2>&1 | head -20

# Step 4: final full-suite clean run before commit
python -m pytest <test_dir> -v --tb=short
```

Step 5: Verify and report

Output a structured report:

## Fix Report: <bug summary>

### Root Cause
[1-2 sentence explanation of what was wrong and why]

### Regression Test
- File: <test_file>
- Test: <test_name>
- Confirms: [what behavior the test locks in]

### Changes Made
| File | Change | Lines |
|------|--------|-------|
| path/to/file.py | description of fix | -N/+M |

### Test Results
- Regression test: PASS
- Full suite: PASS (N tests)
- Lint: clean

### Follow-up
- [any related issues or code that should be reviewed]

## Confidence
**Score**: [0.N]
**Gaps**: [e.g., could not reproduce locally, partial traceback only, fix not runtime-tested]

Team Mode (--team)

Use when the bug has competing root-cause hypotheses or spans multiple modules. Skip for single-file bugs — use the default workflow above.

When to trigger: root cause is unclear after Step 1, OR the bug manifests across 3+ modules.

Workflow with --team:

  1. Lead spawns 2–3 sw-engineer teammates, each investigating a distinct hypothesis
  2. Broadcast current evidence to all teammates: broadcast {bug: <description>, traceback: <key lines>}
  3. Each teammate investigates independently — announces with alpha PROTO:v2.0 and claims a hypothesis
  4. Teammates report findings via lead (hub-and-spoke); lead facilitates cross-challenge between competing analyses
  5. Lead synthesizes the consensus root cause, then proceeds with Steps 2–5 above (regression test, fix, lint, report) — all in lead context

Spawn prompt template:

You are a sw-engineer teammate debugging: [bug description].
Read .claude/TEAM_PROTOCOL.md — use AgentSpeak v2 for inter-agent messages.
Your hypothesis: [hypothesis N]. Investigate ONLY this root cause.
Report findings to @lead using deltaT# or epsilonT# codes.
Compact Instructions: preserve file paths, errors, line numbers. Discard verbose tool output.
</workflow>

<notes>
  • Reproduce first: never fix a bug you can't demonstrate with a test — the test is the proof
  • Minimal fix: change only what's necessary to resolve the root cause — avoid incidental refactoring
  • The regression test is a permanent contribution — it prevents the bug from recurring
  • If the bug is in .claude/ config files: run self-mentor audit + /sync after fixing
  • Related agents: sw-engineer (root cause analysis), qa-specialist (regression test), linting-expert (quality)
  • Follow-up chains:
    • Fix involves structural improvements beyond the bug → /refactor for test-first code quality pass
    • Fix touches non-trivial code paths → /review for full multi-agent quality validation
    • Fix required consistent renames or annotation changes across many files → /codex to delegate the mechanical sweep
</notes>
