Killer-Skills

learn

v1.0.0
GitHub

About this Skill

learn is a Claude Code skill template for extracting non-obvious discoveries into reusable skills that persist across sessions. It is aimed at research agents that need documentation and workflow-extraction capabilities using LaTeX/Beamer and R.

Features

  • Extracts non-obvious discoveries into reusable skills using LaTeX/Beamer + R
  • Supports multi-agent review and quality gates for collaborative research
  • Uses adversarial QA and replication protocols for rigorous testing
  • Documents lessons from trial-and-error experimentation and undocumented API usage
  • Captures workarounds for creative problem-solving

Author: pedrohcgs
Updated: 2/28/2026
Installation
> npx killer-skills add pedrohcgs/claude-code-my-workflow/learn

Agent Capability Analysis

The learn skill by pedrohcgs is an open-source community integration for Claude and other AI agents, enabling task automation and capability expansion for research workflows.

Ideal Agent Persona

Perfect for Research Agents needing advanced documentation and workflow extraction capabilities using LaTeX/Beamer and R.

Core Value

Empowers agents to extract non-obvious discoveries into reusable skills, using LaTeX/Beamer for presentation and R for data analysis to document complex workflows and tool integrations.

Capabilities Granted

Extracting insights from trial-and-error processes
Documenting workarounds for undocumented API usage
Debugging misleading errors and recording non-obvious investigations

Prerequisites & Limits

  • Requires LaTeX/Beamer and R setup
  • Limited to extracting discoveries into reusable skills
Project files:

  • SKILL.md (4.1 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

SKILL.md

/learn — Skill Extraction Workflow

Extract non-obvious discoveries into reusable skills that persist across sessions.

When to Use This Skill

Invoke /learn when you encounter:

  • Non-obvious debugging — Investigation that took significant effort, not in docs
  • Misleading errors — Error message was wrong, found the real cause
  • Workarounds — Found a limitation with a creative solution
  • Tool integration — Undocumented API usage or configuration
  • Trial-and-error — Multiple attempts before success
  • Repeatable workflows — Multi-step task you'd do again
  • User-facing automation — Reports, checks, or processes users will request

Workflow Phases

PHASE 1: Evaluate (Self-Assessment)

Before creating a skill, answer these questions:

  1. "What did I just learn that wasn't obvious before starting?"
  2. "Would future-me benefit from this being documented?"
  3. "Was the solution non-obvious from documentation alone?"
  4. "Is this a multi-step workflow I'd repeat?"

Continue only if YES to at least one question.

PHASE 2: Check Existing Skills

Search for related skills to avoid duplication:

```bash
# Check project skills
ls .claude/skills/ 2>/dev/null

# Search for keywords
grep -r -i "KEYWORD" .claude/skills/ 2>/dev/null
```

Outcomes:

  • Nothing related → Create new skill (continue to Phase 3)
  • Same trigger & fix → Update existing skill (bump version)
  • Partial overlap → Update with new variant

PHASE 3: Create Skill

Create the skill file at .claude/skills/[skill-name]/SKILL.md:

```markdown
---
name: descriptive-kebab-case-name
description: |
  [CRITICAL: Include specific triggers in the description]
  - What the skill does
  - Specific trigger conditions (exact error messages, symptoms)
  - When to use it (contexts, scenarios)
author: Claude Code Academic Workflow
version: 1.0.0
argument-hint: "[expected arguments]"  # Optional
---

# Skill Name

## Problem
[Clear problem description — what situation triggers this skill]

## Context / Trigger Conditions
[When to use — exact error messages, symptoms, scenarios]
[Be specific enough that you'd recognize it again]

## Solution
[Step-by-step solution]
[Include commands, code snippets, or workflows]

## Verification
[How to verify it worked]
[Expected output or state]

## Example
[Concrete example of the skill in action]

## References
[Documentation links, related files, or prior discussions]
```
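As a quick sketch, the directory and file can be scaffolded from the shell before filling in the template (the skill name below is a placeholder, not part of the workflow itself):

```shell
# Scaffold the skill directory; replace the name with your kebab-case skill name
skill_name="descriptive-kebab-case-name"
mkdir -p ".claude/skills/$skill_name"
touch ".claude/skills/$skill_name/SKILL.md"
```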

PHASE 4: Quality Gates

Before finalizing, verify:

  • Description has specific trigger conditions (not vague)
  • Solution was verified to work (tested)
  • Content is specific enough to be actionable
  • Content is general enough to be reusable
  • No sensitive information (credentials, personal data)
  • Skill name is descriptive and uses kebab-case
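Some of these gates can be checked mechanically before a human (or agent) review. A minimal shell sketch, assuming the frontmatter fields from Phase 3 and a demo file path that is purely illustrative:

```shell
# Write a minimal demo SKILL.md, then run simple pre-flight checks on it
mkdir -p .claude/skills/demo-skill
cat > .claude/skills/demo-skill/SKILL.md <<'EOF'
---
name: demo-skill
description: |
  Demo skill with explicit trigger conditions.
version: 1.0.0
---
EOF

f=.claude/skills/demo-skill/SKILL.md

# Each required frontmatter field must be present
for field in name description version; do
  grep -q "^$field:" "$f" || echo "missing frontmatter field: $field"
done

# Crude scan for credentials that should never be committed
if grep -q -i -E 'password|api[_-]?key|secret|token' "$f"; then
  echo "possible sensitive data in $f"
fi
```

This only catches structural problems; the "specific enough to be actionable, general enough to be reusable" gates still need judgment.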

Output

After creating the skill, report:

✓ Skill created: .claude/skills/[name]/SKILL.md
  Trigger: [when to use]
  Problem: [what it solves]

Example: Creating a Skill

User discovers that a specific R package silently drops observations:

````markdown
---
name: fixest-missing-covariate-handling
description: |
  Handle silent observation dropping in fixest when covariates have missing values.
  Use when: estimates seem wrong, sample size unexpectedly small, or comparing
  results between packages.
author: Claude Code Academic Workflow
version: 1.0.0
---

# fixest Missing Covariate Handling

## Problem
The fixest package silently drops observations when covariates have NA values,
which can produce unexpected results when comparing to other packages.

## Context / Trigger Conditions
- Sample size in fixest is smaller than expected
- Results differ from Stata or other R packages
- Model has covariates with potential missing values

## Solution
1. Check for NA patterns before regression:
   ```r
   summary(complete.cases(data[, covariates]))
   ```
2. Explicitly handle NA values or use the na.action parameter
3. Document the expected sample size in comments

## Verification
Compare nobs(model) with nrow(data) — a difference indicates dropped observations.

## References
- fixest documentation on missing values
- [LEARN:r-code] entry in MEMORY.md
````
