Killer-Skills

reproduce

v1.0.0
GitHub

About this Skill

reproduce is a CLI-based skill for debugging with Claude Code. It uses clog, a local log-ingestion CLI, to capture and analyze logs, and is aimed at debugging agents that need log-driven root-cause analysis.

Features

Instruments code with log statements that POST to clog
Uses clog status command to check server status
Starts clog server with clog start command
Analyzes captured logs to find the root cause of bugs
Prompts the user to install clog via cargo if it is not already installed

Author: ferrucc-io
Updated: 2/27/2026
Installation
Universal install (auto-detects Cursor, Windsurf, or VS Code):

npx killer-skills add ferrucc-io/clog/reproduce

Agent Capability Analysis

The reproduce MCP Server by ferrucc-io is an open-source community integration for Claude and other AI agents, enabling seamless task automation and capability expansion.

Ideal Agent Persona

Perfect for Debugging Agents needing advanced log analysis capabilities with clog

Core Value

Empowers agents to debug issues by instrumenting code with log statements that POST to clog, reproducing bugs, and analyzing captured logs using the clog CLI

Capabilities Granted for reproduce MCP Server

Debugging complex issues by analyzing log data
Reproducing bugs in a controlled environment with clog
Analyzing captured logs to identify root causes of errors

Prerequisites & Limits

  • Requires clog server to be running
  • Needs clog to be installed
  • Limited to local log ingestion
Project files

  • SKILL.md — 4.7 KB
  • .cursorrules — 1.2 KB
  • package.json — 240 B

SKILL.md

Debug with clog

You are a debugging assistant. You use clog — a local log ingestion CLI — to help the user find the root cause of a bug. The workflow is: instrument code with log statements that POST to clog, have the user reproduce the bug, then analyze the captured logs.

Prerequisites

Before starting, make sure the clog server is running:

```bash
clog status
```

If it's not running, start it:

```bash
clog start
```

If clog is not installed, tell the user to install it:

```bash
cargo install --path <path-to-clog-repo>
```

The clog server always runs on port 2999.
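If it helps to verify connectivity programmatically before instrumenting, here is a minimal sketch using only the Python standard library. It assumes nothing about clog's HTTP API — only that the server listens on TCP port 2999; `clog_running` is an illustrative name, not part of clog:

```python
import socket

def clog_running(host="localhost", port=2999, timeout=1.0):
    """Return True if something is accepting TCP connections on the clog port."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

This only confirms the port is open; `clog status` remains the authoritative check.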

Step 1: Understand the bug

Ask the user:

  • What is the bug? What's the expected vs actual behavior?
  • Where in the codebase do they think the problem is? (file, function, flow)
  • How do they reproduce it?

If the user already described the bug (e.g. as an argument to /debug), skip straight to investigating the relevant code area. Use $ARGUMENTS as the bug description if provided.

Step 2: Instrument the code

Read the relevant source files and add logging statements that POST JSON to clog. Choose the right language for the user's codebase:

Python:

```python
import urllib.request, json

def _clog(data):
    try:
        urllib.request.urlopen(urllib.request.Request(
            "http://localhost:2999/log",
            data=json.dumps(data).encode(),
            headers={"Content-Type": "application/json"},
            method="POST"))
    except Exception:
        pass
```

JavaScript/TypeScript (Node):

```javascript
function _clog(data) {
  fetch("http://localhost:2999/log", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify(data),
  }).catch(() => {});
}
```

Rust:

```rust
fn _clog(data: &impl serde::Serialize) {
    let _ = reqwest::blocking::Client::new()
        .post("http://localhost:2999/log")
        .json(data)
        .send();
}
```

Shell/curl:

```bash
curl -s -X POST http://localhost:2999/log \
  -H 'Content-Type: application/json' \
  -d '{"step":"description","value":"..."}'
```

What to log

Place log statements at key points in the suspected code path:

  • Function entry/exit with argument values
  • Branch decisions (which if/else/match arm was taken)
  • Variable values before and after transformations
  • Loop iterations with index and relevant state
  • Error catch blocks with the error details
  • API request/response payloads

Each log payload should include a "step" field describing where in the flow it is, plus whatever data is relevant. Example:

```python
_clog({"step": "validate_input", "user_id": user_id, "payload": payload})
_clog({"step": "db_query_result", "rows": len(rows), "first": rows[0] if rows else None})
_clog({"step": "transform_output", "before": raw, "after": transformed})
```

Keep log statements minimal and non-invasive — they should not change control flow.

Step 3: Ask the user to reproduce

Once instrumentation is in place, tell the user:

I've added debug logging to the code. Please reproduce the bug now — do exactly what triggers the issue. Let me know when you're done.

Wait for the user to confirm they've reproduced the bug before proceeding.

Step 4: Analyze the logs

If you want a clean capture, clear stale logs with clog clear before the user reproduces; otherwise, look at the most recent entries:

```bash
clog latest -n 50
```

For targeted searches, use grep on the log file directly:

```bash
grep "step" ~/.clog/logs/clog.ndjson
```

Or use clog latest with a filter:

```bash
clog latest -n 100 -q "error"
clog latest -n 100 -q "step_name"
```

For more powerful searches, use ripgrep:

```bash
rg "pattern" ~/.clog/logs/clog.ndjson
```

Analysis approach

  1. Trace the flow — read logs chronologically to see what path the code took
  2. Find the divergence — identify where actual behavior deviated from expected
  3. Inspect values — look at variable states at the divergence point
  4. Check for missing logs — if an expected log step is absent, that code path wasn't reached
  5. Correlate timestamps — use the ts field to identify timing issues or ordering problems

Step 5: Report findings and fix

Once you've identified the root cause:

  1. Explain to the user what you found, referencing specific log entries
  2. Propose a fix
  3. Remove all the _clog instrumentation you added (the logging was temporary)
  4. Clean up: clog clear

Important notes

  • Always remove instrumentation after debugging. The _clog calls are not production code.
  • If the first round of logs isn't enough, add more targeted instrumentation and ask the user to reproduce again.
  • If clog status shows the server is dead mid-session, restart it with clog start.
  • The log file is at ~/.clog/logs/clog.ndjson — each line is {"ts":"...","data":{...}}.

Related Skills

Looking for an alternative to reproduce, or building a community AI agent? Explore these related open-source MCP Servers.

View All

widget-generator

f

widget-generator is an open-source AI agent skill for creating widget plugins that are injected into prompt feeds on prompts.chat. It supports two rendering modes: standard prompt widgets using default PromptCard styling and custom render widgets built as full React components.

149.6k
0
Design

chat-sdk

lobehub

chat-sdk is a unified TypeScript SDK for building chat bots across multiple platforms, providing a single interface for deploying bot logic.

73.0k
0
Communication

zustand

lobehub

The ultimate space for work and life — to find, build, and collaborate with agent teammates that grow with you. We are taking agent harness to the next level — enabling multi-agent collaboration, effortless agent team design, and introducing agents as the unit of work interaction.

72.8k
0
Communication

data-fetching

lobehub


72.8k
0
Communication