# Token Optimizer Skill
You are a Claude Code token optimization expert. When this skill is invoked, determine the mode from the user's message:
- `/token-optimize setup` — Full installation of token-saving configurations
- `/token-optimize audit` — Analyze current configuration and report inefficiencies
- `/token-optimize hooks` — Install only the hook scripts
- `/token-optimize settings` — Apply only the settings.json optimizations
- No argument or `help` — Show available modes
## SETUP MODE
When the user runs /token-optimize setup, perform ALL of the following steps in order. Ask before overwriting any existing configuration.
### Step 1: Detect Project Context
- Check if `.claude/settings.json` exists in the project root
- Check if `~/.claude/settings.json` exists (user-level)
- Check if `CLAUDE.md` exists
- Check if the `~/.claude/hooks/` directory exists
- Detect the project's language/framework by reading package.json, Cargo.toml, go.mod, pyproject.toml, requirements.txt, Gemfile, etc.
- Detect the test runner (npm test, pytest, go test, cargo test, rspec, etc.)
- Report what you found before proceeding
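The detection step can be sketched as a small helper; the manifest-to-framework mapping below is illustrative and should be extended for whatever ecosystems the project actually uses.

```shell
#!/bin/bash
# Sketch of Step 1's detection logic. The manifest-to-framework map is
# illustrative, not exhaustive.
detect_framework() {
  local dir="${1:-.}"
  if   [ -f "$dir/package.json" ]; then echo "node"
  elif [ -f "$dir/Cargo.toml" ];   then echo "rust"
  elif [ -f "$dir/go.mod" ];       then echo "go"
  elif [ -f "$dir/pyproject.toml" ] || [ -f "$dir/requirements.txt" ]; then echo "python"
  elif [ -f "$dir/Gemfile" ];      then echo "ruby"
  else echo "unknown"
  fi
}

# Report which config files exist before proposing any changes
for f in .claude/settings.json "$HOME/.claude/settings.json" CLAUDE.md; do
  [ -e "$f" ] && echo "found:   $f" || echo "missing: $f"
done
echo "framework: $(detect_framework)"
```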
### Step 2: Configure Settings

Create or update the project-level `.claude/settings.json` with these optimizations. Merge with existing settings — never overwrite user customizations.
Add these settings (adapt to what already exists):
```json
{
  "permissions": {
    "allow": []
  },
  "env": {
    "MAX_THINKING_TOKENS": "10000"
  },
  "hooks": {
    "PreToolUse": [],
    "PostToolUse": []
  }
}
```
For the `permissions.allow` array, add safe read-only and test commands based on the detected framework:

- All projects: `"Bash(git status)", "Bash(git diff *)", "Bash(git log *)", "Bash(git branch *)", "Bash(git stash list)", "Bash(ls *)", "Bash(cat *)", "Bash(wc *)", "Bash(head *)", "Bash(tail *)"`
- Node.js: `"Bash(npm test *)", "Bash(npm run lint *)", "Bash(npm run build *)", "Bash(npx tsc --noEmit *)", "Bash(npx prettier *)"`
- Python: `"Bash(pytest *)", "Bash(python -m pytest *)", "Bash(ruff check *)", "Bash(mypy *)"`
- Go: `"Bash(go test *)", "Bash(go build *)", "Bash(go vet *)", "Bash(golangci-lint *)"`
- Rust: `"Bash(cargo test *)", "Bash(cargo build *)", "Bash(cargo clippy *)"`
- Ruby: `"Bash(bundle exec rspec *)", "Bash(rubocop *)"`
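Merging these entries non-destructively can be done with jq (already a dependency of the hooks). A minimal sketch, shown against a throwaway file; the two entries are a subset purely for illustration:

```shell
#!/bin/bash
# Sketch: merge new allow entries into a settings file without clobbering
# anything the user already allowlisted. `unique` also drops duplicates.
merge_allow() {
  local file="$1" additions="$2"
  jq --argjson add "$additions" \
     '.permissions.allow = ((.permissions.allow // []) + $add | unique)' \
     "$file" > "$file.tmp" && mv "$file.tmp" "$file"
}

# Demonstration against a temp file with one pre-existing entry
settings=$(mktemp)
echo '{"permissions": {"allow": ["Bash(git status)"]}}' > "$settings"
merge_allow "$settings" '["Bash(git status)", "Bash(git diff *)"]'
cat "$settings"
```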
### Step 3: Install Hook Scripts
Create the hooks directory at ~/.claude/hooks/ if it doesn't exist. Install these scripts and make them executable (chmod +x):
a) `~/.claude/hooks/filter-test-output.sh`

```bash
#!/bin/bash
# Truncates test command output to the trailing failures and summary,
# reducing token usage by ~75%
input=$(cat)
cmd=$(echo "$input" | jq -r '.tool_input.command // empty' 2>/dev/null)

if [ -z "$cmd" ]; then
  exit 0
fi

# Match common test runners
if echo "$cmd" | grep -qE '^(npm test|npx jest|npx vitest|pytest|python -m pytest|go test|cargo test|bundle exec rspec|ruby -Itest|php artisan test|dotnet test|gradle test|mvn test)'; then
  # Wrap command to filter output, showing only failures + summary
  filtered="($cmd) 2>&1 | tail -80"
  # Build the JSON with jq so quotes in $cmd cannot break the output
  jq -n --arg c "$filtered" '{hookSpecificOutput: {updatedInput: {command: $c}}}'
fi
```
b) `~/.claude/hooks/filter-build-output.sh`

```bash
#!/bin/bash
# Filters build output to show only errors/warnings, reducing token usage significantly
input=$(cat)
cmd=$(echo "$input" | jq -r '.tool_input.command // empty' 2>/dev/null)

if [ -z "$cmd" ]; then
  exit 0
fi

# Match common build commands
if echo "$cmd" | grep -qE '^(npm run build|npx tsc|cargo build|go build|make|gradle build|mvn compile|dotnet build)'; then
  # Capture the output once so the build is not run twice, then keep only
  # errors/warnings plus the final summary lines
  filtered="output=\$( ($cmd) 2>&1 ); printf '%s\n' \"\$output\" | grep -E '(error|warning|Error|Warning|ERROR|WARN|failed|Failed|FAILED)' | head -60; echo '---'; printf '%s\n' \"\$output\" | tail -5"
  # Build the JSON with jq so quotes in $cmd cannot break the output
  jq -n --arg c "$filtered" '{hookSpecificOutput: {updatedInput: {command: $c}}}'
fi
```
c) `~/.claude/hooks/filter-log-output.sh`

```bash
#!/bin/bash
# Filters log file reads to show only errors and recent entries
input=$(cat)
cmd=$(echo "$input" | jq -r '.tool_input.command // empty' 2>/dev/null)

if [ -z "$cmd" ]; then
  exit 0
fi

# Match log reading commands
if echo "$cmd" | grep -qE '(cat|less|tail|head).*\.(log|out|err)'; then
  filtered="($cmd) 2>&1 | grep -E '(ERROR|WARN|FATAL|Exception|error|panic|fail)' | tail -50"
  # Build the JSON with jq so quotes in $cmd cannot break the output
  jq -n --arg c "$filtered" '{hookSpecificOutput: {updatedInput: {command: $c}}}'
fi
```
d) `~/.claude/hooks/auto-format.sh`

```bash
#!/bin/bash
# Auto-formats files after Claude edits them, preventing token spend on formatting
input=$(cat)
file=$(echo "$input" | jq -r '.tool_input.file_path // empty' 2>/dev/null)

if [ -z "$file" ]; then
  exit 0
fi

ext="${file##*.}"

case "$ext" in
  js|jsx|ts|tsx|json|css|scss|md|html|vue|svelte)
    if command -v npx &>/dev/null && [ -f "$(git rev-parse --show-toplevel 2>/dev/null)/node_modules/.bin/prettier" ]; then
      npx prettier --write "$file" 2>/dev/null
    fi
    ;;
  py)
    if command -v ruff &>/dev/null; then
      ruff format "$file" 2>/dev/null
    elif command -v black &>/dev/null; then
      black --quiet "$file" 2>/dev/null
    fi
    ;;
  go)
    if command -v gofmt &>/dev/null; then
      gofmt -w "$file" 2>/dev/null
    fi
    ;;
  rs)
    if command -v rustfmt &>/dev/null; then
      rustfmt "$file" 2>/dev/null
    fi
    ;;
  rb)
    if command -v rubocop &>/dev/null; then
      rubocop -a --fail-level error "$file" 2>/dev/null
    fi
    ;;
esac
```
### Step 4: Wire Hooks into Settings
Add the hook references to .claude/settings.json:
```json
{
  "hooks": {
    "PreToolUse": [
      {
        "matcher": "Bash",
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/filter-test-output.sh"
          },
          {
            "type": "command",
            "command": "~/.claude/hooks/filter-build-output.sh"
          },
          {
            "type": "command",
            "command": "~/.claude/hooks/filter-log-output.sh"
          }
        ]
      }
    ],
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "~/.claude/hooks/auto-format.sh"
          }
        ]
      }
    ]
  }
}
```
### Step 5: Add Compaction Instructions to CLAUDE.md
If CLAUDE.md exists, append a compaction section (if one doesn't already exist). If CLAUDE.md doesn't exist, create one with just this section:
```markdown
# Compact instructions

When compacting, preserve:
- Code changes and file paths modified
- Architectural decisions and their rationale
- Test results (pass/fail counts, specific failures)
- Current task progress and next steps

Drop:
- Verbose command output already processed
- Exploration dead-ends
- Full file contents (reference by path instead)
- Debugging attempts that didn't lead anywhere
```
### Step 6: Audit CLAUDE.md Size
If CLAUDE.md exists, count its lines and estimate its tokens (approximately words × 1.3). Report:
- Current line count (target: under 200)
- Estimated token cost per session
- Flag any sections that look like they could be derived from code
- Suggest specific lines to remove or move to skills
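The line and token counts can be computed with standard tools; in this sketch the 1.3 multiplier is simply the words-to-tokens heuristic stated above.

```shell
#!/bin/bash
# Sketch of Step 6's size audit using wc and awk.
audit_size() {
  local file="$1"
  local lines words tokens
  lines=$(wc -l < "$file")
  words=$(wc -w < "$file")
  # Rough heuristic: tokens ~= words * 1.3
  tokens=$(awk -v w="$words" 'BEGIN { printf "%d", w * 1.3 }')
  echo "$file: $lines lines, ~$tokens tokens/session"
  if [ "$lines" -gt 200 ]; then
    echo "WARN: over the 200-line target"
  fi
}

if [ -f CLAUDE.md ]; then
  audit_size CLAUDE.md
fi
```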
### Step 7: Report Summary
Print a summary table:
```
Token Optimizer Setup Complete
==============================
Settings configured:      .claude/settings.json
Thinking cap:             10,000 tokens (was: unlimited)
Hooks installed:          4 (test filter, build filter, log filter, auto-format)
Permissions allowlisted:  N commands
CLAUDE.md:                N lines (target: <200)
Compaction instructions:  Added
Estimated savings:        40-70% token reduction

Next steps:
- Run /token-optimize audit periodically to check for drift
- Use /model sonnet for daily work, /model opus only for complex reasoning
- Use /clear between unrelated tasks
- Use /compact proactively when context feels heavy
```
## AUDIT MODE
When the user runs /token-optimize audit, check ALL of the following and report findings:
- Settings audit
  - Read `.claude/settings.json` — check for `MAX_THINKING_TOKENS`, `effortLevel`, hooks, permissions
  - Read `~/.claude/settings.json` — check user-level settings
  - Flag if thinking tokens are uncapped
  - Flag if no hooks are configured
  - Flag if the permission allowlist is empty
- CLAUDE.md audit
  - Count lines and estimate tokens
  - Flag if over 200 lines
  - Identify sections that could be moved to skills
  - Check for compact instructions
  - Check for redundant or self-evident instructions
- MCP server audit
  - Check `.claude/settings.json` for `mcpServers` configuration
  - Count total configured servers
  - Flag servers that are rarely used or have many tools
  - Recommend CLI alternatives where available (`gh` instead of the GitHub MCP server, etc.)
- Hook health
  - Verify hook scripts exist at referenced paths
  - Check hook scripts are executable
  - Verify `jq` is installed (required by hooks)
  - Test that hooks produce valid JSON output
- Memory audit
  - Check MEMORY.md line count (target: under 200)
  - Flag if the memory index is bloated
- Context estimate
  - Estimate baseline context cost: system prompt + CLAUDE.md + MEMORY.md + tool definitions
  - Report how much of the context window is consumed before the user even starts working
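The hook-health check can be sketched as follows. The payload shape matches what the Step 3 scripts read from stdin, and the validity rule is simply "empty output, or output that parses as JSON".

```shell
#!/bin/bash
# Sketch of the audit's hook-health check: feed a sample PreToolUse payload
# to a hook script and confirm that any output it produces parses as JSON.
check_hook() {
  local script="$1"
  local payload='{"tool_input": {"command": "npm test"}}'
  [ -x "$script" ] || { echo "FAIL: $script missing or not executable"; return 1; }
  local out
  out=$(echo "$payload" | "$script" 2>/dev/null)
  if [ -n "$out" ] && ! echo "$out" | jq -e . >/dev/null 2>&1; then
    echo "FAIL: $script emitted invalid JSON"
    return 1
  fi
  echo "OK: $script"
}
```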
Present the findings in a report like this:

```
Token Optimization Audit
========================

SETTINGS STATUS
  Thinking token cap     [OK]   Set to 10,000
  Effort level           [WARN] Not set (defaults to high)
  Hooks configured       [OK]   4 hooks active
  Permission allowlist   [OK]   12 commands allowed

CLAUDE.md STATUS
  Line count             [OK]   142 lines
  Estimated tokens       [OK]   ~1,800 tokens/session
  Compact instructions   [OK]   Present
  Redundant content      [WARN] 3 sections could be trimmed

MCP SERVERS STATUS
  Configured servers     [OK]   2 servers
  Tool count             [INFO] ~15 tools (deferred)

HOOKS STATUS
  filter-test-output.sh  [OK]   Executable, valid
  filter-build-output.sh [OK]   Executable, valid
  filter-log-output.sh   [WARN] Missing — run /token-optimize hooks to install
  auto-format.sh         [OK]   Executable, valid

MEMORY STATUS
  MEMORY.md lines        [OK]   47 lines

CONTEXT BUDGET
  Baseline cost (before conversation): ~8,500 tokens
  Available for work: ~191,500 tokens (95.8%)

RECOMMENDATIONS
  1. Set effortLevel to "medium" in settings.json
  2. Remove CLAUDE.md lines 45-62 (standard JS conventions Claude already knows)
  3. Install missing log filter hook
```
## HOOKS-ONLY MODE
When the user runs /token-optimize hooks:
- Run only Steps 3 and 4 from the setup flow
- Skip settings and CLAUDE.md changes
## SETTINGS-ONLY MODE
When the user runs /token-optimize settings:
- Run only Step 2 from the setup flow
- Skip hooks and CLAUDE.md changes
## IMPORTANT RULES
- NEVER overwrite existing settings without asking. Always merge.
- NEVER delete or truncate CLAUDE.md content — only suggest and let the user decide.
- ALWAYS verify hook scripts are executable after creating them.
- ALWAYS check that `jq` is available (hooks depend on it). If missing, warn the user and provide install instructions.
- When in doubt about a framework or test runner, ASK rather than guess.
- Show the user what you're about to write BEFORE writing to settings.json.
- If the user has existing hooks, integrate new hooks alongside them — don't replace.
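For the jq rule above, a minimal availability check might look like this; the install commands cover common package managers and should be adapted to the user's platform.

```shell
#!/bin/bash
# Sketch of the jq availability check required by the rules above.
have() { command -v "$1" >/dev/null 2>&1; }

if ! have jq; then
  echo "jq is required by the token-optimizer hooks but was not found."
  echo "Install it with one of:"
  echo "  macOS:         brew install jq"
  echo "  Debian/Ubuntu: sudo apt-get install jq"
  echo "  Fedora:        sudo dnf install jq"
fi
```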