aif-fix

v1.0.0
About this Skill

aif-fix is a macOS menu bar application that implements the Pomodoro technique for focused bug fixing, using a flexible timer and a plan-first approach.

Features

Checks for existing fix plans in .ai-factory/FIX_PLAN.md
Executes fixes based on predefined plans, skipping unnecessary steps
Supports two modes: immediate fix or plan-first approach
Investigates the codebase using the plan as a guide
Informs the user of existing fix plans and execution progress

Author: ArtemYurov
Updated: 3/5/2026
Installation
> npx killer-skills add ArtemYurov/TomoBar/aif-fix

Agent Capability Analysis

The aif-fix MCP Server by ArtemYurov is an open-source community integration for Claude and other AI agents, enabling task automation and capability expansion.

Ideal Agent Persona

Perfect for Development Agents needing streamlined bug fixing workflows with Pomodoro focus timer integration.

Core Value

Empowers agents to execute immediate-fix or plan-first approaches to bug fixing, using `.ai-factory/FIX_PLAN.md` files for workflow management and supporting macOS menu bar integration.

Capabilities Granted for aif-fix MCP Server

Automating bug fix workflows with Pomodoro timers
Generating fix plans based on existing `.ai-factory/FIX_PLAN.md` files
Streamlining codebase investigation with planned fix approaches

Prerequisites & Limits

  • Requires a macOS environment
  • Depends on a `.ai-factory/FIX_PLAN.md` file for the planned-fix approach
SKILL.md

Fix - Bug Fix Workflow

Fix a specific bug or problem in the codebase. Supports two modes: immediate fix or plan-first approach.

Workflow

Step 0: Check for Existing Fix Plan

BEFORE anything else, check if .ai-factory/FIX_PLAN.md exists.

If the file EXISTS:

  • Read .ai-factory/FIX_PLAN.md
  • Inform the user: "Found existing fix plan. Executing fix based on the plan."
  • Skip Steps 0.1 through 1 — go directly to Step 2: Investigate the Codebase, using the plan as your guide
  • Follow each step of the plan sequentially
  • After the fix is fully applied and verified, delete .ai-factory/FIX_PLAN.md:
    ```bash
    rm .ai-factory/FIX_PLAN.md
    ```
  • Continue to Step 4 (Verify), Step 5 (Test suggestion), Step 6 (Patch)

If the file DOES NOT exist AND $ARGUMENTS is empty:

  • Tell the user: "No fix plan found and no problem description provided. Please either provide a bug description (/aif-fix <description>) or create a fix plan first."
  • STOP.

If the file DOES NOT exist AND $ARGUMENTS is provided:

  • Continue to Step 0.1 below.
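The Step 0 decision tree can be sketched as a small helper (illustrative names; this sketch assumes Node's `fs` module and treats `$ARGUMENTS` as a plain string):

```typescript
import * as fs from "node:fs";

const PLAN_PATH = ".ai-factory/FIX_PLAN.md";

// Decide which branch of Step 0 applies.
// `args` stands in for $ARGUMENTS; all names here are illustrative.
function chooseFixMode(args: string): "execute-plan" | "stop" | "investigate" {
  if (fs.existsSync(PLAN_PATH)) {
    // Plan exists: read it and execute, skipping Steps 0.1 through 1
    return "execute-plan";
  }
  if (args.trim() === "") {
    // No plan and no problem description: ask the user and stop
    return "stop";
  }
  // No plan, but a problem description was given: continue to Step 0.1
  return "investigate";
}
```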

Step 0.1: Load Project Context & Past Experience

Read .ai-factory/DESCRIPTION.md if it exists to understand:

  • Tech stack (language, framework, database)
  • Project architecture
  • Coding conventions

Read all patches from .ai-factory/patches/ if the directory exists:

  • Use Glob to find all *.md files in .ai-factory/patches/
  • Read each patch file to learn from past fixes
  • Pay attention to recurring patterns, root causes, and solutions
  • If the current problem resembles a past patch — apply the same approach or avoid the same mistakes
  • This is your accumulated experience. Use it.
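A rough script equivalent of this patch-loading step, assuming Node's `fs` module (the agent itself would use the Glob and Read tools instead):

```typescript
import * as fs from "node:fs";
import * as path from "node:path";

// Load all past patches so earlier root causes can inform the current fix.
function loadPatches(dir = ".ai-factory/patches"): { file: string; text: string }[] {
  if (!fs.existsSync(dir)) return []; // the patches directory is optional
  return fs
    .readdirSync(dir)
    .filter((f) => f.endsWith(".md")) // only *.md patch files
    .map((f) => ({ file: f, text: fs.readFileSync(path.join(dir, f), "utf8") }));
}
```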

Step 1: Understand the Problem & Choose Mode

From $ARGUMENTS, identify:

  • Error message or unexpected behavior
  • Where it occurs (file, function, endpoint)
  • Steps to reproduce (if provided)

If unclear, ask:

To fix this effectively, I need more context:

1. What is the expected behavior?
2. What actually happens?
3. Can you share the error message/stack trace?
4. When did this start happening?

After understanding the problem, ask the user to choose a mode using AskUserQuestion:

Question: "How would you like to proceed with the fix?"

Options:

  1. Fix now — Investigate and apply the fix immediately
  2. Plan first — Create a fix plan for review, then fix later

If user chooses "Plan first":

  • Proceed to Step 1.1: Create Fix Plan

If user chooses "Fix now":

  • Skip Step 1.1, proceed directly to Step 2: Investigate the Codebase

Step 1.1: Create Fix Plan

Investigate the codebase enough to understand the problem and create a plan.

Use the same parallel exploration approach as Step 2 — launch Explore agents to investigate the problem area, related code, and past patterns simultaneously.

After agents return, synthesize findings to:

  1. Identify the root cause (or most likely candidates)
  2. Map affected files and functions
  3. Assess impact scope

Then create .ai-factory/FIX_PLAN.md with this structure:

```markdown
# Fix Plan: [Brief title]

**Problem:** [What's broken — from user's description]
**Created:** YYYY-MM-DD HH:mm

## Analysis

What was found during investigation:
- Root cause (or suspected root cause)
- Affected files and functions
- Impact scope

## Fix Steps

Step-by-step plan for implementing the fix:

1. [ ] Step one — what to change and why
2. [ ] Step two — ...
3. [ ] Step three — ...

## Files to Modify

- `path/to/file.ts` — what changes are needed
- `path/to/another.ts` — what changes are needed

## Risks & Considerations

- Potential side effects
- Things to verify after the fix
- Edge cases to watch for

## Test Coverage

- What tests should be added
- What edge cases to cover
```

After creating the plan, output:

## Fix Plan Created ✅

Plan saved to `.ai-factory/FIX_PLAN.md`.

Review the plan and when you're ready to execute, run:

/aif-fix

STOP here. Do NOT apply the fix.

Step 2: Investigate the Codebase

Use Task tool with subagent_type: Explore to investigate the problem in parallel. This keeps the main context clean and allows simultaneous investigation of multiple angles.

Launch 2-3 Explore agents simultaneously:

Agent 1 — Locate the problem area:
Task(subagent_type: Explore, model: sonnet, prompt:
  "Find code related to [error location / affected functionality].
   Read the relevant functions, trace the data flow.
   Thoroughness: medium.")

Agent 2 — Related code & side effects:
Task(subagent_type: Explore, model: sonnet, prompt:
  "Find all callers/consumers of [affected function/module].
   Identify what else might break or be affected.
   Thoroughness: medium.")

Agent 3 — Similar past patterns (if patches exist):
Task(subagent_type: Explore, model: sonnet, prompt:
  "Search for similar error patterns or related fixes in the codebase.
   Check git log for recent changes to [affected files].
   Thoroughness: quick.")

After agents return, synthesize findings to identify:

  • The root cause (not just symptoms)
  • Related code that might be affected
  • Existing error handling

Fallback: If Task tool is unavailable, investigate directly:

  • Find relevant files using Glob/Grep
  • Read the code around the issue
  • Trace the data flow
  • Check for similar patterns elsewhere

Step 3: Implement the Fix

Apply the fix with logging:

```typescript
// ✅ REQUIRED: Add logging around the fix
console.log('[FIX] Processing user input', { userId, input });

try {
  // The actual fix
  const result = fixedLogic(input);
  console.log('[FIX] Success', { userId, result });
  return result;
} catch (error) {
  console.error('[FIX] Error in fixedLogic', {
    userId,
    input,
    error: error.message,
    stack: error.stack
  });
  throw error;
}
```

Logging is MANDATORY because:

  • User needs to verify the fix works
  • If it doesn't work, logs help debug further
  • Feedback loop: user provides logs → we iterate

Step 4: Verify the Fix

  • Check the code compiles/runs
  • Verify the logic is correct
  • Ensure no regressions introduced

Step 5: Suggest Test Coverage

ALWAYS suggest covering this case with a test:

## Fix Applied ✅

The issue was: [brief explanation]
Fixed by: [what was changed]

### Logging Added
The fix includes logging with prefix `[FIX]`.
Please test and share any logs if issues persist.

### Recommended: Add a Test

This bug should be covered by a test to prevent regression:

\`\`\`typescript
describe('functionName', () => {
  it('should handle [the edge case that caused the bug]', () => {
    // Arrange
    const input = /* the problematic input */;

    // Act
    const result = functionName(input);

    // Assert
    expect(result).toBe(/* expected */);
  });
});
\`\`\`

Would you like me to create this test?
- [ ] Yes, create the test
- [ ] No, skip for now

Logging Requirements

All fixes MUST include logging:

  1. Log prefix: Use [FIX] or [FIX:<issue-id>] for easy filtering
  2. Log inputs: What data was being processed
  3. Log success: Confirm the fix worked
  4. Log errors: Full context if something fails
  5. Configurable: Use LOG_LEVEL if available
```typescript
// Pattern for fixes
const LOG_FIX = process.env.LOG_LEVEL === 'debug' || process.env.DEBUG_FIX;

function fixedFunction(input) {
  if (LOG_FIX) console.log('[FIX] Input:', input);

  // ... fix logic ...

  if (LOG_FIX) console.log('[FIX] Output:', result);
  return result;
}
```

Examples

Example 1: Null Reference Error

User: /aif-fix TypeError: Cannot read property 'name' of undefined in UserProfile

Actions:

  1. Search for UserProfile component/function
  2. Find where .name is accessed
  3. Add null check with logging
  4. Suggest test for null user case
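The null-check fix for this example might look like the following sketch (the `User` shape and `displayName` helper are hypothetical, not from the actual codebase):

```typescript
// Hypothetical shape of the data involved in the bug
interface User {
  name?: string;
}

// Before: accessing user.name directly crashed when user was undefined.
// After: guard with optional chaining and log the fallback path.
function displayName(user?: User): string {
  if (user?.name === undefined) {
    console.log("[FIX] displayName: user or user.name missing, using fallback", { user });
    return "Anonymous";
  }
  return user.name;
}
```

The suggested test would then cover the null-user and missing-name cases explicitly.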

Example 2: API Returns Wrong Data

User: /aif-fix /api/orders returns empty array for authenticated users

Actions:

  1. Find orders API endpoint
  2. Trace the query logic
  3. Find the bug (e.g., wrong filter)
  4. Fix with logging
  5. Suggest integration test
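For this example, the bug might be a query filtering on the wrong field; a sketch of the corrected logic (the `Order` type and `ordersForUser` helper are illustrative):

```typescript
interface Order {
  id: number;
  userId: number;
}

// Before (bug): the filter compared against order.id instead of order.userId,
// so authenticated users always got an empty array.
// After: filter on the owning user, with logging for verification.
function ordersForUser(orders: Order[], userId: number): Order[] {
  console.log("[FIX] ordersForUser", { userId, total: orders.length });
  return orders.filter((o) => o.userId === userId); // fixed filter field
}
```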

Example 3: Form Validation Not Working

User: /aif-fix email validation accepts invalid emails

Actions:

  1. Find email validation logic
  2. Check regex or validation library usage
  3. Fix the validation
  4. Add logging for validation failures
  5. Suggest unit test with edge cases
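A sketch of a tightened validator for this example (the regex is illustrative and deliberately simple, not a full RFC 5322 implementation):

```typescript
// Reject obviously invalid addresses: missing @, missing domain dot, spaces.
const EMAIL_RE = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

function isValidEmail(email: string): boolean {
  const ok = EMAIL_RE.test(email);
  if (!ok) console.log("[FIX] isValidEmail rejected:", email); // log validation failures
  return ok;
}
```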

Important Rules

  1. Check FIX_PLAN.md first - Always check for existing plan before anything else
  2. Plan mode = plan only - When user chooses "Plan first", create the plan and STOP. Do NOT fix.
  3. Execute mode = follow the plan - When FIX_PLAN.md exists, follow it step by step, then delete it
  4. NO reports - Don't create summary documents (patches are learning artifacts, not reports)
  5. ALWAYS log - Every fix must have logging for feedback
  6. ALWAYS suggest tests - Help prevent regressions
  7. Root cause - Fix the actual problem, not symptoms
  8. Minimal changes - Don't refactor unrelated code
  9. One fix at a time - Don't scope creep
  10. Clean up - Delete FIX_PLAN.md after successful fix execution

After Fixing

## Fix Applied ✅

**Issue:** [what was broken]
**Cause:** [why it was broken]
**Fix:** [what was changed]

**Files modified:**
- path/to/file.ts (line X)

**Logging added:** Yes, prefix `[FIX]`
**Test suggested:** Yes

Please test the fix and share logs if any issues.

To add the suggested test:
- [ ] Yes, create test
- [ ] No, skip

Step 6: Create Self-Improvement Patch

ALWAYS create a patch after every fix. This builds a knowledge base for future fixes.

Create the patch:

  1. Create directory if it doesn't exist:

    ```bash
    mkdir -p .ai-factory/patches
    ```
  2. Create a patch file with the current timestamp as filename. Format: YYYY-MM-DD-HH.mm.md (e.g., 2026-02-07-14.30.md)

  3. Use this template:

```markdown
# [Brief title describing the fix]

**Date:** YYYY-MM-DD HH:mm
**Files:** list of modified files
**Severity:** low | medium | high | critical

## Problem

What was broken. How it manifested (error message, wrong behavior).
Be specific — include the actual error or symptom.

## Root Cause

WHY the problem occurred. This is the most valuable part.
Not "what was wrong" but "why it was wrong":
- Logic error? Why was the logic incorrect?
- Missing check? Why was it missing?
- Wrong assumption? What was assumed?
- Race condition? What sequence caused it?

## Solution

How the fix was implemented. Key code changes and reasoning.
Include the approach, not just "changed line X".

## Prevention

How to prevent this class of problems in the future:
- What pattern/practice should be followed?
- What should be checked during code review?
- What test would catch this?

## Tags

Space-separated tags for categorization, e.g.:
`#null-check` `#async` `#validation` `#typescript` `#api` `#database`
```

Example patch:

```markdown
# Null reference in UserProfile when user has no avatar

**Date:** 2026-02-07 14:30
**Files:** src/components/UserProfile.tsx
**Severity:** medium

## Problem

TypeError: Cannot read property 'url' of undefined when rendering
UserProfile for users without an uploaded avatar.

## Root Cause

The `user.avatar` field is optional in the database schema but the
component accessed `user.avatar.url` without a null check. This was
introduced in commit abc123 when avatar display was added — the
developer tested only with users that had avatars.

## Solution

Added optional chaining: `user.avatar?.url` with a fallback to a
default avatar URL. Also added a null check in the Avatar sub-component.

## Prevention

- Always check if database fields marked as `nullable` / `optional`
  are handled with null checks in the UI layer
- Add test cases for "empty state" — user with minimal data
- Consider a lint rule for accessing nested optional properties

## Tags

`#null-check` `#react` `#optional-field` `#typescript`
```

This is NOT optional. Every fix generates a patch. The patch is your learning.

Context Cleanup

Context is heavy after investigation, fix, and patch generation. All results are saved — suggest freeing space:

AskUserQuestion: Free up context before continuing?

Options:
1. /clear — Full reset (recommended)
2. /compact — Compress history
3. Continue as is

DO NOT:

  • ❌ Apply a fix when user chose "Plan first" — only create FIX_PLAN.md and stop
  • ❌ Skip the FIX_PLAN.md check at the start
  • ❌ Leave FIX_PLAN.md after successful fix execution — always delete it
  • ❌ Generate reports or summaries (patches are NOT reports — they are learning artifacts)
  • ❌ Refactor unrelated code
  • ❌ Add features while fixing
  • ❌ Skip logging
  • ❌ Skip test suggestion
  • ❌ Skip patch creation
