skill-optimizer — a Claude Code skill for auditing and optimizing SKILL.md files

v1.0.0

About This Skill

skill-optimizer audits an agent's skill library using historical session data and static quality checks. If data is insufficient, it reports "N/A — insufficient session data" rather than omitting results. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Capabilities

When to Use This Skill
Use when skills are not triggering as expected or seem broken
Use when you want to audit and improve your skill library's quality
Use when you want to understand which skills are underperforming or wasting context tokens
Read-only: never modify skill files. Only output report.

nguoikhongten02022005-cell
Updated: 4/23/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 10/11

This page remains useful for teams, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review

Review Score: 10/11
Quality Score: 70
Canonical Locale: en
Detected Body Locale: en


Why Use This Skill

Recommendation: skill-optimizer helps agents decide when each skill in a library should fire. If data is insufficient, it reports "N/A — insufficient session data" rather than omitting results. This AI agent skill supports Claude Code, Cursor, and Windsurf.

Best Suited For

Agents and teams that need to audit skill triggering, quality, and token cost across a skill library.

Use Cases for skill-optimizer

  • Diagnosing skills that are not triggering as expected or seem broken
  • Auditing and improving the quality of your skill library
  • Identifying skills that underperform or waste context tokens

! Safety and Limitations

  • Limitation: Read-only — never modifies skill files; only outputs a report.
  • Limitation: Suggests, doesn't prescribe — gives specific wording suggestions for description improvements, but frames them as suggestions.
  • Limitation: If the user specified skill names, analysis is filtered to only those.

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at surfacing repository instructions. It should help you decide whether to install this skill, when to cross-check it against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment


Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is skill-optimizer?

skill-optimizer is an AI agent skill that audits a skill library using historical session data and static quality checks, producing a prioritized diagnostic report. If data is insufficient, it reports "N/A — insufficient session data" rather than omitting results. It supports Claude Code, Cursor, and Windsurf workflows.

How do I install skill-optimizer?

Run the command: npx killer-skills add nguoikhongten02022005-cell/doan3-webquanlynhahang/skill-optimizer. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for skill-optimizer?

Key use cases include: diagnosing skills that are not triggering as expected or seem broken, auditing and improving your skill library's quality, and identifying skills that underperform or waste context tokens.

Which IDEs are compatible with skill-optimizer?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for skill-optimizer?

Yes. It is read-only: it never modifies skill files and only outputs a report. It suggests rather than prescribes, framing description rewrites as suggestions. If the user specifies skill names, analysis is filtered to only those.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add nguoikhongten02022005-cell/doan3-webquanlynhahang/skill-optimizer. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use skill-optimizer immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

skill-optimizer

If data is insufficient, report "N/A — insufficient session data" rather than omitting. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

SKILL.md
Readonly
Supporting Evidence

When to Use This Skill

  • Use when skills are not triggering as expected or seem broken
  • Use when you want to audit and improve your skill library's quality
  • Use when you want to understand which skills are underperforming or wasting context tokens

Rules

  • Read-only: never modify skill files. Only output report.
  • All 8 dimensions: do not skip any. If data is insufficient, report "N/A — insufficient session data" rather than omitting.
  • Quantify: "you had 12 research tasks last week but the skill never triggered" beats "you often do research".
  • Suggest, don't prescribe: give specific wording suggestions for description improvements, but frame as suggestions.
  • Show evidence: for undertrigger claims, quote the actual user message that should have triggered the skill.
  • Evidence-based suggestions: when suggesting description rewrites, cite the specific research finding that motivates the change (e.g., "front-load trigger keywords — MCP study shows 3.6x selection rate improvement").

Overview

Analyze skills using historical session data + static quality checks, output a diagnostic report with P0/P1/P2 prioritized fixes. Scores each skill on a 5-point composite scale across 8 dimensions.

CSO (Claude/Agent Search Optimization) = writing skill descriptions so agents select the right skill at the right time. This skill checks for CSO violations.

Usage

  • /optimize-skill → scan all skills
  • /optimize-skill my-skill → single skill
  • /optimize-skill skill-a skill-b → multiple specified skills

Data Sources

Auto-detect the current agent platform and scan the corresponding paths:

| Source | Claude Code | Codex | Shared |
|--------|-------------|-------|--------|
| Session transcripts | ~/.claude/projects/**/*.jsonl | ~/.codex/sessions/**/*.jsonl | — |
| Skill files | ~/.claude/skills/*/SKILL.md | ~/.codex/skills/*/SKILL.md | ~/.agents/skills/*/SKILL.md |

Platform detection: Check which directories exist. Scan all available sources — a user may have both Claude Code and Codex installed.
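The detection step above can be sketched as a directory scan. This is a minimal illustration (not the skill's actual script); the path mapping mirrors the table, and the structure of the returned dict is an assumption:

```python
from pathlib import Path

# Known per-platform locations, mirroring the data-sources table above.
SOURCES = {
    "claude-code": {"sessions": "~/.claude/projects", "skills": "~/.claude/skills"},
    "codex": {"sessions": "~/.codex/sessions", "skills": "~/.codex/skills"},
    "shared": {"skills": "~/.agents/skills"},
}

def detect_platforms():
    """Return every platform whose directories exist on this machine.

    A user may have several platforms installed, so all matches are kept.
    """
    found = {}
    for platform, paths in SOURCES.items():
        existing = {kind: Path(p).expanduser()
                    for kind, p in paths.items()
                    if Path(p).expanduser().is_dir()}
        if existing:
            found[platform] = existing
    return found
```

Scanning all detected sources (rather than stopping at the first hit) matches the note that a user may have both Claude Code and Codex installed.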

Workflow

Identify target skills
        ↓
Collect session data (python3 scripts scan JSONL transcripts)
        ↓
Run 8 analysis dimensions
        ↓
Compute composite scores
        ↓
Output report with P0/P1/P2

Step 1: Identify Target Skills

Scan skill directories in order: ~/.claude/skills/, ~/.codex/skills/, ~/.agents/skills/. Deduplicate by skill name (same name in multiple locations = same skill). For each, read SKILL.md and extract:

  • name, description (from YAML frontmatter)
  • trigger keywords (from description field)
  • defined workflow steps (Step 1/2/3... or ### sections under Workflow)
  • word count

If user specified skill names, filter to only those.
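The frontmatter extraction in Step 1 could look like the following sketch. It assumes the simple `name` + `description` frontmatter mandated by the static checks later in this document; a real implementation would use a YAML parser:

```python
import re

def parse_skill_md(text):
    """Extract name, description, and body word count from a SKILL.md string."""
    match = re.match(r"^---\n(.*?)\n---\n(.*)$", text, re.DOTALL)
    if not match:
        return None  # no frontmatter block
    frontmatter, body = match.groups()
    meta = {}
    for line in frontmatter.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip().strip('"')
    return {
        "name": meta.get("name"),
        "description": meta.get("description"),
        "word_count": len(body.split()),
    }
```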

Step 2: Collect Session Data

Use python3 scripts via Bash to scan session JSONL files. Extract:

Claude Code sessions (~/.claude/projects/**/*.jsonl):

  • Skill tool_use calls (which skills were invoked)
  • User messages (full text)
  • Assistant messages after skill invocation (for workflow tracking)
  • User messages after skill invocation (for reaction analysis)

Codex sessions (~/.codex/sessions/**/*.jsonl):

  • session_meta events → extract base_instructions for skill loading evidence
  • response_item events → assistant outputs (workflow tracking)
  • event_msg events → tool execution and skill-related events
  • User messages from turn_context events (for reaction analysis)

Note: Codex injects skills via context rather than explicit Skill tool calls. Skill loading (present in base_instructions) does NOT equal active invocation. To detect actual use, search for skill-specific workflow markers (step headers, output formats) in response_item content within that session. A skill is "invoked" only if the agent produced output following the skill's defined workflow.

Aggregated:

  • Per-skill: invocation count, trigger keyword match count
  • Per-skill: user reaction sentiment after invocation
  • Per-skill: workflow step completion markers

Step 3: Run 8 Analysis Dimensions

You MUST run ALL 8 dimensions. The baseline behavior without this skill is to skip dimensions 4.2, 4.3, 4.5b, and 4.8. These are the most valuable dimensions — do not skip them.

4.1 Trigger Rate

Count how many times each skill was actually invoked vs how many times its trigger keywords appeared in user messages.

Claude Code: count Skill tool_use calls in transcripts. Codex: count sessions where the agent produced output following the skill's workflow markers (not merely loaded in context).

Diagnose:

  • Never triggered → skill may be useless or trigger words wrong
  • Keywords match >> actual invocations → undertrigger problem, description needs work
  • High frequency → core skill, worth optimizing
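The diagnosis rules above can be expressed as a small decision function. The `3 *` threshold for "keywords match >> actual invocations" is illustrative, not specified by the skill:

```python
def trigger_diagnosis(keyword_matches, invocations):
    """Map raw trigger counts to one of the diagnoses listed above."""
    if keyword_matches == 0 and invocations == 0:
        return "never relevant — check trigger keywords or consider removal"
    if invocations == 0:
        return "undertrigger — keywords match but skill never fires"
    if keyword_matches > 3 * invocations:  # illustrative threshold
        return "undertrigger — description needs work"
    return "healthy"
```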

4.2 Post-Invocation User Reaction

This dimension is critical and easy to skip. Do not skip it.

After a skill is invoked in a session, read the user's next 3 messages. Classify:

  • Negative: "no", "wrong", "never mind", "not what I wanted", user interrupts
  • Correction: user re-describes their intent, manually overrides skill output
  • Positive: "good", "ok", "continue", "nice", user follows the workflow
  • Silent switch: user changes topic entirely (likely false positive trigger)

Report per-skill satisfaction rate.
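A keyword-based classifier for the categories above might look like this sketch. The phrase lists are taken from the examples given; detecting the "correction" category requires comparing the user's restated intent against the skill output, which a simple substring scan cannot do, so it is omitted here:

```python
NEGATIVE = ("no", "wrong", "never mind", "not what i wanted")
POSITIVE = ("good", "ok", "continue", "nice")

def classify_reaction(next_messages):
    """Classify the user's reaction from up to 3 post-invocation messages."""
    for msg in next_messages[:3]:
        text = msg.lower()
        if any(phrase in text for phrase in NEGATIVE):
            return "negative"
        if any(phrase in text for phrase in POSITIVE):
            return "positive"
    # No sentiment signal: user likely changed topic (possible false trigger).
    return "silent-switch" if next_messages else "no-data"
```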

4.3 Workflow Completion Rate

This dimension is critical and easy to skip. Do not skip it.

For each skill invocation found in session data:

  1. Extract the skill's defined steps from SKILL.md
  2. Search the assistant messages in that session for step markers (Step N, specific output formats defined in the skill)
  3. Calculate: how far did execution get?

Report: {skill-name} (N steps): avg completed Step X/N (Y%)

If a specific step is frequently where execution stops, flag it.
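The completion calculation can be sketched as a search for the highest step marker the assistant emitted. Matching on the literal string `Step N` is an assumption; real skills may define custom output formats that should be matched instead:

```python
def completion_rate(defined_steps, assistant_text):
    """Return (highest step reached, percent complete) for one invocation."""
    reached = 0
    for n in range(1, defined_steps + 1):
        if f"Step {n}" in assistant_text:
            reached = n  # keep the highest marker found
    return reached, round(100 * reached / defined_steps)
```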

4.4 Static Quality Analysis

Check each SKILL.md against these 14 rules:

| Check | Pass Criteria |
|-------|---------------|
| Frontmatter format | Only name + description, total < 1024 chars |
| Name format | Letters, numbers, hyphens only |
| Description trigger | Starts with "Use when..." or has explicit trigger conditions |
| Description workflow leak | Description does NOT summarize the skill's workflow steps (CSO violation) |
| Description pushiness | Description actively claims scenarios where it should be used, not just passive |
| Overview section | Present |
| Rules section | Present |
| MUST/NEVER density | Count ALL-CAPS directive words; >5 per 100 words = flag |
| Word count | < 500 words (flag if over) |
| Narrative anti-pattern | No "In session X, we found..." storytelling |
| YAML quoting safety | description containing : must be wrapped in double quotes |
| Critical info position | Core trigger conditions and primary actions must be in the first 20% of SKILL.md |
| Description 250-char check | Primary trigger keywords must appear within the first 250 characters of description |
| Trigger condition count | ≤ 2 trigger conditions in description is ideal |
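As one worked example from the table, the MUST/NEVER density rule can be implemented as a simple word scan (a sketch; the real check may define "directive word" more precisely):

```python
import re

def directive_density(text):
    """Return (ALL-CAPS directives per 100 words, whether the >5 flag fires)."""
    words = text.split()
    caps = [w for w in words
            if re.fullmatch(r"[A-Z]{3,}", w.strip(".,:;!"))]
    per_100 = 100 * len(caps) / max(len(words), 1)
    return per_100, per_100 > 5
```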

4.5a False Positive Rate (Overtrigger)

Skill was invoked but user immediately rejected or ignored it.

4.5b Undertrigger Detection

This is the highest-value dimension. For each skill, extract its capability keywords (not just trigger keywords — what the skill CAN do). Then scan user messages for tasks that match those capabilities but where the skill was NOT invoked.

Report: which user messages SHOULD have triggered the skill but didn't, and suggest description improvements.

Compounding Risk Assessment: For skills with chronic undertriggering (0 triggers across 5+ sessions where relevant tasks appeared), flag as "compounding risk" — undertriggered skills cannot self-improve through usage feedback, causing the gap to widen over time. Recommend immediate description rewrite as P0.
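The undertrigger scan above could be sketched as follows. The per-session data shapes (a dict of session id to concatenated user text, and a set of sessions where the skill fired) are assumptions for illustration:

```python
def find_missed_triggers(capability_keywords, user_messages, invoked_sessions):
    """Find sessions whose user text matched a capability but never invoked the skill.

    user_messages: {session_id: concatenated user text}
    invoked_sessions: set of session_ids where the skill actually fired
    """
    missed = []
    for session_id, text in user_messages.items():
        if session_id in invoked_sessions:
            continue  # the skill fired here, so nothing was missed
        hits = [kw for kw in capability_keywords if kw.lower() in text.lower()]
        if hits:
            missed.append((session_id, hits))
    return missed
```

Each returned entry is evidence for the report: a session that should have triggered the skill, plus the keywords that matched.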

4.6 Cross-Skill Conflicts

Compare all skill pairs:

  • Trigger keyword overlap (same keywords in two descriptions)
  • Workflow overlap (two skills teach similar processes)
  • Contradictory guidance

4.7 Environment Consistency

For each skill, extract referenced:

  • File paths → check if they exist (test -e)
  • CLI tools → check if installed (which)
  • Directories → check if they exist

Flag any broken references.

4.8 Token Economics

This dimension is critical and easy to skip. Do not skip it.

For each skill:

  • Word count (from Step 1)
  • Trigger frequency (from 4.1)
  • Cost-effectiveness = trigger count / word count
  • Flag: large + never-triggered skills as candidates for removal or compression

Progressive Disclosure Tier Check: Evaluate each skill against the 3-tier loading model:

  • Tier 1 (frontmatter): ~100 tokens. Check: is description ≤ 1024 chars?
  • Tier 2 (SKILL.md body): <500 lines recommended. Check: word count.
  • Tier 3 (reference files): loaded on demand. Check: does skill use reference files for detailed content, or cram everything into SKILL.md?

Flag skills that put 500+ words in SKILL.md without using reference files as "poor progressive disclosure".
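The cost-effectiveness and removal-candidate rules above can be sketched in a few lines. The input tuple shape is an assumption for illustration:

```python
def token_economics(skills):
    """skills: iterable of (name, word_count, trigger_count) tuples.

    Returns (name, cost_effectiveness, removal_candidate) per skill, where
    removal candidates are large skills that never triggered.
    """
    rows = []
    for name, words, triggers in skills:
        cost_effectiveness = triggers / max(words, 1)
        removal_candidate = triggers == 0 and words >= 500
        rows.append((name, round(cost_effectiveness, 4), removal_candidate))
    return rows
```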

Step 4: Composite Score

Rate each skill on a 5-point scale:

| Score | Meaning |
|-------|---------|
| 5 | Healthy: high trigger rate, positive reactions, complete workflows, clean static |
| 4 | Good: minor issues in 1-2 dimensions |
| 3 | Needs attention: significant gap in 1 dimension or minor gaps in 3+ |
| 2 | Problematic: never triggered, or negative user reactions, or major static issues |
| 1 | Broken: doesn't work, references missing, or fundamentally misaligned |

Scored dimensions (weighted average):

  • Trigger rate: 25%
  • User reaction: 20%
  • Workflow completion: 15%
  • Static quality: 15%
  • Undertrigger: 15%
  • Token economics: 10%
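The weighted average over the six scored dimensions is straightforward; assuming each dimension has already been rated 1-5:

```python
WEIGHTS = {
    "trigger_rate": 0.25,
    "user_reaction": 0.20,
    "workflow_completion": 0.15,
    "static_quality": 0.15,
    "undertrigger": 0.15,
    "token_economics": 0.10,
}

def composite_score(dimension_scores):
    """Weighted average of per-dimension 1-5 scores; weights sum to 1.0."""
    total = sum(WEIGHTS[dim] * dimension_scores[dim] for dim in WEIGHTS)
    return round(total, 1)
```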

Qualitative dimensions (reported but not scored):

  • 4.5a Overtrigger: reported as count + examples
  • 4.6 Cross-Skill Conflicts: reported as conflict pairs
  • 4.7 Environment Consistency: reported as pass/fail per reference

Report Format

```markdown
# Skill Optimization Report
**Date**: {date}
**Scope**: {all / specified skills}
**Session data**: {N} sessions, {date range}

## Overview
| Skill | Triggers | Reaction | Completion | Static | Undertrigger | Token | Score |
|-------|----------|----------|------------|--------|--------------|-------|-------|
| example-skill | 2 | 100% | 86% | B+ | 1 miss | 486w | 4/5 |

## P0 Fixes (blocking usage)
1. ...

## P1 Improvements (better experience)
1. ...

## P2 Optional Optimizations
1. ...

## Per-Skill Diagnostics
### {skill-name}
#### 4.1 Trigger Rate
...
#### 4.2 User Reaction
...
(all 8 dimensions)
```

Research Background

The analysis dimensions in this report are grounded in the following research:

  • Undertrigger detection: Memento-Skills (arXiv:2603.18743) — skills as structured files require accurate routing; unrouted skills cannot self-improve via the read-write learning loop
  • Description quality: MCP Description Quality (arXiv:2602.18914) — well-written descriptions achieve 72% tool selection rate vs. 20% random baseline (3.6x improvement)
  • Information position: Lost in the Middle (Liu et al., TACL 2024) — U-shaped LLM attention curve
  • Format impact: He et al. (arXiv:2411.10541) — format changes alone can cause 9-40% performance variance
  • Instruction compliance: IFEval (arXiv:2311.07911) — LLMs struggle with multi-constraint prompts

Limitations

  • Use this skill only when the task clearly matches the scope described above.
  • Do not treat the output as a substitute for environment-specific validation, testing, or expert review.
  • Stop and ask for clarification if required inputs, permissions, safety boundaries, or success criteria are missing.

Related Skills

Looking for an alternative to skill-optimizer or another community skill for your workflow? Explore these related open-source skills.


openclaw-release-maintainer

Logo of openclaw
openclaw

🦞 OpenClaw Release Maintainer: use this skill for release and publish-time workflows. It covers ai, assistant, and crustacean workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

widget-generator

Logo of f
f

Widget Generator: guides creation of customizable widget plugins for the prompts.chat feed system. It covers ai, artificial-intelligence, and awesome-list workflows. This AI agent skill supports Claude Code and Cursor.

flags

Logo of vercel
vercel

Feature Flags (Next.js): use this skill when adding or changing framework feature flags in Next.js internals. It covers blog, browser, and compiler workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.


pr-review

Logo of pytorch
pytorch

PR Review: if the user invokes /pr-review with no arguments, it does not perform a review. It covers autograd, deep-learning, and gpu workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.
