issue-resolution — Next.js pipeline setup

Tags: issue-resolution, groaly, mthangtr, community, Next.js pipeline setup, ai agent skill, ide skills, agent automation, Supabase database management, Claude AI task analysis, PostgreSQL issue resolution, iterative diagnosis workflow

v1.0.0
GitHub

About this Skill

issue-resolution is a systematic process that resolves issues through iterative diagnosis and verified fixes, built on Next.js and Supabase. It is ideal for debugging agents that need systematic issue-resolution capabilities using Next.js, Supabase, and Claude AI.

Features

Uses Next.js for scalable and performant issue resolution pipelines
Leverages Supabase for database management and PostgreSQL compatibility
Employs Claude AI for intelligent task analysis and prioritization
Supports iterative loops for reproduction and root cause analysis
Generates actionable tasks across multiple life goals

mthangtr
Updated: 2/4/2026

Quality Score

60 (Top 5%, Excellent), based on code quality & docs
Installation

Universal Install (Auto-Detect):

```bash
npx killer-skills add mthangtr/groaly/issue-resolution
```

Supports 19+ Platforms:

Cursor
Windsurf
VS Code
Trae
Claude
OpenClaw
+12 more

Agent Capability Analysis

The issue-resolution skill by mthangtr is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance. It is optimized for Next.js pipeline setup, Supabase database management, and Claude AI task analysis.

Ideal Agent Persona

Perfect for Debugging Agents needing systematic issue resolution capabilities using Next.js, Supabase, and Claude AI.

Core Value

Empowers agents to iteratively diagnose and resolve issues through verified fixes, utilizing Root Cause Analysis and Impact Assessment, while integrating with Supabase for data management and Claude AI for enhanced decision-making.

Capabilities Granted for issue-resolution

Triage and reproducing complex issues
Conducting Root Cause Analysis for efficient problem-solving
Verifying fixes through iterative testing and validation

Prerequisites & Limits

  • Requires integration with Next.js and Supabase
  • Dependent on Claude AI for enhanced analysis
  • Iterative loops may require additional computational resources
Project

SKILL.md (11.9 KB)
.cursorrules (1.2 KB)
package.json (240 B)

SKILL.md

Issue Resolution Pipeline

Systematically resolve issues through iterative diagnosis and verified fixes.

Pipeline Overview

```
INPUT → Triage → Reproduction → Root Cause Analysis → Impact → Fix → Verify
              ◄──────────────►◄────────────────────►
                    (Iterative loops allowed)
```
| Phase | Purpose | Output |
|---|---|---|
| 0. Triage | Normalize input, classify severity | Issue Brief |
| 1. Reproduction | Prove the bug, trace code path | Repro Report + Test |
| 2. Root Cause Analysis | Find WHY, not just WHERE | RCA Report |
| 3. Impact Assessment | Blast radius, regression risk | Impact Report |
| 4. Fix Decomposition | Break into beads | .beads/*.md |
| 5. Verification | Prove fix works, no regressions | Passing tests |

Phase 0: Triage

Normalize different input types to a structured Issue Brief.

Input Types

| Type | Triage Strategy |
|---|---|
| Vague report | Clarify → Explore → Reproduce |
| Error/Stack trace | Parse trace → Locate code → Reproduce |
| Failing test | Run test → Extract assertion → Trace |

Vague Report Triage

User: "Login is broken"
         │
         ▼
Ask clarification questions:
• What error do you see?
• When did it start working / stop working?
• What steps trigger it?
• Specific user/browser/environment?
         │
         ▼ (if user can't clarify)
Explore:
• mcp__gkg__search_codebase_definitions: Find auth/login related code
• git log: Recent changes in area
• Check logs if available

Error/Stack Trace Triage

Parse the stack trace:
• Extract file:line locations
• mcp__gkg__get_definition on functions in trace
• Read surrounding context
         │
         ▼
Identify reproduction conditions:
• What input caused this?
• Can we write a test?

Failing Test Triage

Run test in isolation:
• bun test <file> --filter "<test name>"
• Read test file for setup/assertions
• Check git log: was it passing before?
         │
         ▼
Trace implementation:
• What code does test exercise?
• mcp__gkg__get_references on tested function
• Recent changes to implementation?

Severity Classification

Determines reproduction requirements:

| Severity | Reproduction Required |
|---|---|
| CRITICAL (production, security) | Failing test REQUIRED |
| REGRESSION (was working) | Failing test REQUIRED |
| RACE CONDITION (timing) | Failing test REQUIRED |
| LOGIC BUG | Failing test PREFERRED |
| UI/VISUAL | Manual + screenshot OK |
| PERFORMANCE | Benchmark/profile OK |
| QUICK FIX (obvious cause) | Manual repro OK |

Issue Brief Template

Save to history/issues/<id>/brief.md:

```markdown
# Issue Brief: <Short Title>

**Severity**: CRITICAL / HIGH / MEDIUM / LOW
**Type**: Regression / Edge case / Race condition / UI / Performance / Other
**Repro Required**: Failing test / Manual OK

## Symptom

<What is happening>

## Expected Behavior

<What should happen>

## Reproduction

<Steps, test command, or code path>

## Evidence

<Error message, stack trace, test output>

## Affected Area

<Files, modules, features involved>

## Timeline

<When started, recent changes if known>
```

Phase 1: Reproduction

Prove the bug exists and trace the code path.

If Failing Test Required

```bash
# Create test file
# packages/<area>/src/__tests__/<feature>.regression.test.ts

# Test should:
# 1. Set up conditions that trigger bug
# 2. Assert expected behavior (currently fails)
# 3. Be deterministic
```

Reproduction Checklist

  • Bug is reproducible on demand
  • Exact error/behavior captured
  • Minimal reproduction (simplest case that fails)
  • Code path identified (stack trace or tracing)

Code Path Tracing

mcp__gkg__get_definition     → Find where error originates
mcp__gkg__get_references     → Find callers
git blame <file>       → Who changed it, when
git log -p <file>      → What changed recently

Repro Report Template

Save to history/issues/<id>/repro.md:

```markdown
# Reproduction Report: <Issue Title>

## Reproduction Method

☐ Failing test: `<test file and name>`
☐ Manual: <steps>

## Error/Behavior Captured

<Exact error message, stack trace>

## Code Path

1. Entry: `<file>:<line>` - <function>
2. Calls: `<file>:<line>` - <function>
3. Fails at: `<file>:<line>` - <reason>

## Recent Changes (if relevant)

- <commit>: <summary>
```

Phase 2: Root Cause Analysis

Find WHY the bug happens, not just WHERE.

RCA Framework

STEP 1: Generate hypotheses (3-5)
         │
         ▼
STEP 2: Gather evidence for/against each
         │
         ▼
STEP 3: Eliminate hypotheses
         │
         ▼
STEP 4: Confirm root cause

Bug Type → RCA Strategy

| Bug Type | Strategy | Key Tools |
|---|---|---|
| Regression | Find breaking change | git bisect, git blame |
| Edge case | Analyze boundary inputs | Type inspection, boundary tests |
| Race condition | Trace async flow | Timing logs, async analysis |
| Data corruption | Trace state changes | Data flow analysis |
| External dep | Check version/API changes | Changelogs, API docs |

Oracle for RCA

Hypothesis Generation:

oracle(
  task: "Generate root cause hypotheses",
  context: """
    Symptom: <error>
    Code path: <trace>
    Recent changes: <git log>

    Generate 3-5 hypotheses ranked by likelihood.
    For each, what evidence would support/refute it?
  """,
  files: ["<affected files>"]
)

Hypothesis Validation:

oracle(
  task: "Validate root cause hypothesis",
  context: """
    Hypothesis: <proposed cause>
    Evidence: <gathered evidence>

    1. Does evidence support or refute?
    2. Explain causal chain: cause → symptom
    3. What would confirm this?
  """,
  files: ["<relevant files>"]
)

Iteration: RCA → Reproduction Loop

If hypothesis needs more evidence:

IN RCA: "Need timing logs to confirm race condition"
    │
    ▼
BACK TO REPRO:
• Add instrumentation
• Run with specific conditions
• Capture new evidence
    │
    ▼
RETURN TO RCA with new evidence

RCA Report Template

Save to history/issues/<id>/rca.md:

```markdown
# Root Cause Analysis: <Issue Title>

## Iteration: <N>

## Hypotheses Considered

### Hypothesis A: <Description>

- **Likelihood**: HIGH / MEDIUM / LOW
- **Supporting evidence**: ...
- **Refuting evidence**: ...
- **Verdict**: ✓ CONFIRMED / ✗ ELIMINATED

### Hypothesis B: ...

## Root Cause (Confirmed)

**Cause**: <Clear statement>

**Causal chain**:

1. <Step> leads to
2. <Step> leads to
3. <Symptom>

## Why This Happened

<Underlying reason - missing validation, wrong assumption, etc.>

## Fix Approach

**Immediate**: <What to change>
**Preventive**: <How to prevent similar bugs>
```

Phase 3: Impact Assessment

Before fixing, understand blast radius.

Impact Analysis

mcp__gkg__get_references <affected function>
    → Who else calls this?

Grep for related patterns
    → Similar code that might have same bug?

Review test coverage
    → What tests cover this area?

Regression Risk

| Factor | Risk Level |
|---|---|
| High usage function | HIGH |
| Shared utility | HIGH |
| Public API change | HIGH |
| Internal helper | LOW |
| Isolated module | LOW |

Spike for Complex Fixes

If fix approach is uncertain:

```bash
bd create "Spike: Validate fix approach for <issue>" -t task -p 0
```

Execute via MULTI_AGENT_WORKFLOW, write to .spikes/<issue-id>/.

Impact Report Template

Save to history/issues/<id>/impact.md:

```markdown
# Impact Assessment: <Issue Title>

## Blast Radius

### Direct Impact

- <File/function directly changed>

### Callers Affected

- <List from mcp__gkg__get_references>

### Related Code

- <Similar patterns that may need same fix>

## Regression Risk

**Level**: HIGH / MEDIUM / LOW
**Reason**: <Why this risk level>

## Test Coverage

- Existing tests: <list>
- Tests to add: <list>

## Fix Validation

☐ Spike completed (if needed): `.spikes/<id>/`
☐ Fix approach validated
```

Phase 4: Fix Decomposition

Break fix into beads.

Simple Fix (Single Bead)

```bash
bd create "Fix: <issue title>" -t bug -p <priority>
```

Bead includes:

  • Root cause reference
  • Fix implementation
  • Test (failing → passing)
  • Docs update (if behavior change)

Complex Fix (Multiple Beads)

```bash
bd create "Epic: Fix <issue>" -t epic -p <priority>
bd create "Add regression test for <issue>" -t task --blocks <epic>
bd create "Fix <component A>" -t bug --blocks <epic> --deps <test>
bd create "Fix <component B>" -t bug --blocks <epic> --deps <test>
bd create "Update docs for <behavior change>" -t task --blocks <epic> --deps <fix-a>,<fix-b>
```

Fix Bead Template

```markdown
# Fix: <Issue Title>

**Type**: bug
**Priority**: <0-4>
**Fixes**: <issue reference>

## Root Cause

<From RCA report>

## Fix Implementation

<What to change and why>

## Files to Modify

- `<file>`: <change description>

## Acceptance Criteria

- [ ] Regression test passes
- [ ] Original symptom no longer reproducible
- [ ] No new test failures
- [ ] `bun run check-types` passes
- [ ] `bun run build` passes
```

Phase 5: Verification

Prove fix works and nothing else broke.

Verification Checklist

```bash
# 1. Regression test passes
bun test <regression-test-file>

# 2. Original symptom gone
<manual verification or test>

# 3. No new failures
bun run test

# 4. Types and build
bun run check-types
bun run build
```

Iteration: Verify → RCA Loop

If fix doesn't work:

Test still fails after fix
    │
    ▼
Root cause was wrong or incomplete
    │
    ▼
BACK TO RCA:
• Eliminate current hypothesis
• Generate new hypotheses
• Update RCA Report with iteration

Verify → Impact Loop

If fix causes regressions:

New test failures after fix
    │
    ▼
Fix has unintended side effects
    │
    ▼
BACK TO IMPACT:
• Reassess blast radius
• Consider alternative fix approach

Loop Limits

Prevent infinite iteration:

| Loop | Soft Limit | Hard Limit | At Hard Limit |
|---|---|---|---|
| RCA → Repro | 2 | 4 | Escalate / pair debug |
| RCA → Triage | 1 | 2 | Re-evaluate original report |
| Verify → RCA | 2 | 3 | Oracle deep review |

Quick Reference

Tool Selection

| Need | Tool |
|---|---|
| Parse stack trace | Read + mcp__gkg__get_definition |
| Find callers | mcp__gkg__get_references |
| Recent changes | git log, git blame |
| Binary search commits | git bisect |
| Reasoning about cause | oracle |
| Validate fix approach | Spike via MULTI_AGENT_WORKFLOW |

Common Mistakes

  • Fixing symptom, not cause → Leads to recurrence
  • Skipping reproduction → Can't verify fix
  • No regression test → Bug returns later
  • Ignoring impact → Fix breaks other things
  • Not iterating → Wrong diagnosis persists

FAQ & Installation Steps


Frequently Asked Questions

What is issue-resolution?

issue-resolution is a systematic process that resolves issues through iterative diagnosis and verified fixes, built on Next.js and Supabase. It is designed for debugging agents that need systematic issue-resolution capabilities using Next.js, Supabase, and Claude AI.

How do I install issue-resolution?

Run the command: npx killer-skills add mthangtr/groaly/issue-resolution. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for issue-resolution?

Key use cases include: Triage and reproducing complex issues, Conducting Root Cause Analysis for efficient problem-solving, Verifying fixes through iterative testing and validation.

Which IDEs are compatible with issue-resolution?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for issue-resolution?

Requires integration with Next.js and Supabase. Dependent on Claude AI for enhanced analysis. Iterative loops may require additional computational resources.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add mthangtr/groaly/issue-resolution. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use issue-resolution immediately in the current project.

Related Skills

Looking for an alternative to issue-resolution or another community skill for your workflow? Explore these related open-source skills.

  • widget-generator (by f): Generate customizable widget plugins for the prompts.chat feed system
  • linear (by lobehub): Linear issue management. MUST USE when: (1) user mentions LOBE-xxx issue IDs (e.g. LOBE-4540), (2) user says linear, linear issue, link linear, (3) creating PRs that reference Linear issues. Provides
  • testing (by lobehub): Testing guide using Vitest. Use when writing tests (.test.ts, .test.tsx), fixing failing tests, improving test coverage, or debugging test issues. Triggers on test creation, test debugging, mock setup
  • chat-sdk (by lobehub): chat-sdk is a unified TypeScript SDK for building chat bots across multiple platforms, providing a single interface for deploying bot logic.