reflection: review session history and code diffs to capture lessons in CLAUDE.md

v1.0.0

About this Skill

Reflection is a process of reviewing the entire conversation history and code diffs to capture key lessons and decisions in CLAUDE.md. It is ideal for code review agents seeking to enhance knowledge retention through systematic reflection and documentation.

Features

Reviews the entire conversation history to capture lessons
Uses code diffs as supplementary context
Integrates with Git to review current branch and changes
Runs commands like `git branch --show-current` and `git diff --stat HEAD`
Captures lessons in CLAUDE.md for future reference
Supports efficient code review and knowledge retention

Author: straubt1
Updated: 3/7/2026
Installation
Universal Install (Auto-Detect)
> npx killer-skills add straubt1/tfx/reflection
Supports 18+ Platforms
Cursor
Windsurf
VS Code
Trae
Claude
OpenClaw
+12 more

Agent Capability Analysis

The reflection MCP Server by straubt1 is an open-source community integration for Claude and other AI agents. It replaces manual note-taking with systematic reflection, capturing code review lessons for reuse across sessions.

Ideal Agent Persona

Ideal for Code Review Agents seeking to enhance knowledge retention through systematic reflection and documentation in CLAUDE.md

Core Value

Empowers agents to systematically review conversation histories and code diffs, capturing key lessons in CLAUDE.md for future reference. Combining Git state analysis with documentation reduces repeated mistakes and improves collaborative development.

Capabilities Granted for reflection MCP Server

Reviewing conversation histories for lesson identification
Capturing code review insights in CLAUDE.md
Analyzing Git state for supplementary context

Prerequisites & Limits

  • Requires access to conversation history and code diffs
  • CLAUDE.md integration necessary
  • Git state analysis capabilities needed
Project
SKILL.md
2.9 KB
.cursorrules
1.2 KB
package.json
240 B

SKILL.md

Reflection

The goal is to identify lessons from this session that should be permanently captured in CLAUDE.md — so future sessions benefit without repeating the same mistakes, clarifications, or decisions.

Review the entire conversation history — every message, correction, and preference — as the primary source. Code diffs are supplementary.

Current Git State (supplementary context)

  • Branch: `!git branch --show-current`
  • Changes since last commit: `!git diff --stat HEAD`
  • Staged changes: `!git diff --stat --cached`
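The `!` prefix is the skill-file convention for dynamic context: the command runs and its output is inlined before the model reads the prompt. Outside that context, the same supplementary state comes from plain read-only Git commands. A minimal sketch, using a throwaway repository so it runs anywhere:

```shell
# Scratch repo for demonstration; the skill runs these in your real working tree.
cd "$(mktemp -d)"
git init -q -b main
git -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial"
echo "draft note" > notes.txt
git add notes.txt                # stage the new file
echo "tweak" >> notes.txt        # plus an unstaged edit on top of it

git branch --show-current        # current branch name, here "main"
git diff --stat HEAD             # all changes (staged + unstaged) since last commit
git diff --stat --cached         # staged changes only
```

All three commands are read-only; `--stat` keeps the output to a per-file summary, which is usually enough context for reflection without flooding the prompt.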

Process

  1. Read the full conversation from top to bottom. Pay attention to:

    • Questions the user had to answer that should have been obvious from CLAUDE.md
    • Corrections the user made to your approach or output
    • Preferences or constraints the user stated (even casually)
    • Things you got wrong on the first attempt and had to revise
    • Decisions made about architecture, naming, tooling, or workflow
    • Anything the user explicitly said to always/never do
  2. Review the diffs (supplementary) — Read modified files for context on what was built and why, but do not let this overshadow lessons from the conversation itself.

  3. Identify lessons in these categories:

    • Patterns & conventions — Things that worked well and should be encoded as rules
    • Gotchas & pitfalls — Things that caused confusion, required retries, or were non-obvious
    • Architecture decisions — Choices made that future sessions should know about
    • Workflow & communication preferences — How the user prefers to work, communicate, or receive output
    • Outdated/wrong memory — Anything in CLAUDE.md or MEMORY.md that turned out to be incorrect or missing
  4. Read the current CLAUDE.md to avoid duplicating what's already there and to find gaps.

  5. Propose edits to CLAUDE.md — For each lesson worth keeping, suggest the specific text to add, change, or remove, and where it belongs.

  6. If no lessons are found, explicitly state: "No CLAUDE.md updates needed from this session." This confirms the session was considered.

  7. Do not apply edits automatically. Present proposals to the user and wait for approval.

  8. After applying approved edits, print a brief summary in chat of what changed.
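Steps 5 through 8 reduce to: propose exact text, wait for approval, then apply it under the right CLAUDE.md section. The apply step can be sketched as a small helper; `add_lesson`, its section layout, and the bullet format are all hypothetical illustrations, not part of the skill itself:

```shell
# add_lesson SECTION TEXT [FILE]
# Append an approved lesson as a bullet under "## SECTION", creating the
# section at the end of the file if it does not exist yet.
# Sketch only: assumes simple lesson text (awk -v interprets backslashes,
# and SECTION is used as a literal grep pattern).
add_lesson() {
  local section="$1" lesson="$2" file="${3:-CLAUDE.md}"
  if grep -q "^## ${section}$" "$file"; then
    # Insert the bullet directly after the matching section heading.
    awk -v sec="## ${section}" -v les="- ${lesson}" \
        '{ print } $0 == sec { print les }' "$file" > "${file}.tmp" \
      && mv "${file}.tmp" "$file"
  else
    printf '\n## %s\n- %s\n' "$section" "$lesson" >> "$file"
  fi
}
```

For example, after the user approves a lesson, `add_lesson "Gotchas" "Run the linter before committing"` lands it under the existing `## Gotchas` heading; an unknown section is created at the end of the file instead.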

Output Format

When lessons are found:

## Reflection

### [Category]
**Lesson**: <what was learned>
**Proposed CLAUDE.md change**: <exact text, with target section>

---
(one block per lesson)

When no lessons are found:

## Reflection

No CLAUDE.md updates needed from this session. The following were considered but already covered or not worth persisting:
- <item> — already in CLAUDE.md / too session-specific / etc.

After applying approved changes:

## CLAUDE.md updated

- Added: "<description>" under ## Section
- Modified: "<what changed>" in ## Section
- Removed: "<what was removed>"

Related Skills

Looking for an alternative to reflection or building a community AI agent? Explore these related open-source MCP servers.

widget-generator (by f): an open-source AI agent skill for creating widget plugins that are injected into prompt feeds on prompts.chat. It supports two rendering modes: standard prompt widgets using default PromptCard styling and custom render widgets built as full React components.

linear (by lobehub): a workflow management system that enables multi-agent collaboration, effortless agent team design, and introduces agents as the unit of work interaction.

testing (by lobehub): a process for verifying AI agent functionality using commands like `bunx vitest run` and optimizing workflows with targeted test runs.

chat-sdk (by lobehub): a unified TypeScript SDK for building chat bots across multiple platforms, providing a single interface for deploying bot logic.