titan-gauntlet — code intelligence CLI

v1.0.0

About this Skill

titan-gauntlet is a code analysis skill that generates function-level dependency graphs, computes complexity metrics, and enforces architecture boundaries for AI agents. It is aimed at code-analysis agents that need function-level dependency graph analysis across multiple languages.

Features

Generates function-level dependency graphs across 11 languages
Supports hybrid semantic search for efficient code querying
Enforces architecture boundaries using a 30-tool MCP server
Provides complexity metrics for code quality assessment
Offers CI quality gates for automated testing and validation
Performs git diff impact analysis with co-change analysis for incremental builds

Core Topics

optave
Updated: 3/17/2026

Quality Score

50 (Excellent, top 5%), based on code quality & docs
Installation
Universal Install (Auto-Detect)
> npx killer-skills add optave/codegraph/titan-gauntlet
Supports 19+ Platforms
Cursor
Windsurf
VS Code
Trae
Claude
OpenClaw
+12 more

Agent Capability Analysis

The titan-gauntlet skill by optave is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance. It centers on code-intelligence CLI work: function-level dependency graphs and complexity-metric analysis.

Ideal Agent Persona

Perfect for Code Analysis Agents needing advanced function-level dependency graph analysis across multiple languages.

Core Value

Empowers agents to enforce architecture boundaries, calculate complexity metrics, and implement CI quality gates using function-level dependency graph analysis across 11 languages, including features like work batches and context limits to ensure efficient analysis.

Capabilities Granted for titan-gauntlet

Analyzing high-priority targets against 4 pillars of quality
Enforcing architecture boundaries in large-scale codebases
Calculating complexity metrics for optimized code refactoring

Prerequisites & Limits

  • Requires CLI setup and configuration
  • Limited to 11 supported languages
  • Context capacity limits may require re-invocation
SKILL.md

Titan GAUNTLET — The Perfectionist Manifesto

You are running the GAUNTLET phase of the Titan Paradigm.

Your goal: audit every high-priority target from the RECON phase against 4 pillars of quality, using work batches to stay within context limits. Each batch writes results to disk before starting the next. If context reaches ~80% capacity, stop and tell the user to re-invoke — the state machine ensures no work is lost.

Batch size: $ARGUMENTS (default: 5)

Context budget: Process $ARGUMENTS targets per batch. Write results to NDJSON after each batch. If context grows large, save state and stop — the user re-invokes to continue.


Step 0 — Pre-flight: find or create the Titan worktree

  1. Locate the Titan session. A prior phase (RECON) may have run in a different worktree or branch. Search for it:

    bash
    git worktree list

    For each worktree, check if it contains Titan artifacts:

    bash
    ls <worktree-path>/.codegraph/titan/titan-state.json 2>/dev/null

    Also check branches for titan state:

    bash
    git branch -a --list '*titan*'

    Decision logic:

    • Found exactly one worktree with titan-state.json: Read the state. If currentPhase is "recon" (RECON completed), this is the right one. Switch to it or merge its branch into your worktree.
    • Found a worktree but currentPhase is NOT "recon": This worktree may be mid-phase or from a different pipeline run. Keep searching other worktrees/branches. If no better match, ask the user: "Found Titan state at <path> but phase is <phase>. Is this the session to continue?"
    • Found multiple worktrees with titan-state.json: List them with their currentPhase and lastUpdated. Ask the user: "Multiple Titan sessions found. Which one should GAUNTLET continue?"
    • Found a branch (not worktree) with titan artifacts: Merge it into your current worktree:
      bash
      git merge <titan-branch> --no-edit
    • Found nothing: Warn: "No RECON artifacts found in any worktree or branch. Run /titan-recon first for best results." Fall back: codegraph triage -T --limit 50 --json for a minimal queue.
  2. Ensure worktree isolation:

    bash
    git rev-parse --show-toplevel && git worktree list

    If you are NOT in a worktree, stop: "Run /worktree first."

  3. Sync with main:

    bash
    git fetch origin main && git merge origin/main --no-edit

    If there are merge conflicts, stop: "Merge conflict detected. Resolve conflicts and re-run /titan-gauntlet."

  4. Load state. Read .codegraph/titan/titan-state.json.

  5. Load architecture. Read .codegraph/titan/GLOBAL_ARCH.md for domain context.

  6. Resume logic. If titan-state.json has completed batches, skip them. Start from the first pending batch.

  7. Validate state. If titan-state.json fails to parse, stop: "State file corrupted. Run /titan-reset to start over, or /titan-recon to rebuild."
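The worktree search in step 1 can be sketched as a small helper that walks the paths reported by `git worktree list` and reports which of them hold Titan state. This is an illustrative sketch, not part of codegraph: it assumes `jq` is available and that `titan-state.json` carries a top-level `currentPhase` field, and the demo uses synthetic directories in place of real worktrees.

```shell
# Hypothetical helper (not a codegraph command): scan candidate worktree
# paths for Titan state files and print each hit with its currentPhase.
find_titan_sessions() {
  for wt in "$@"; do
    state="$wt/.codegraph/titan/titan-state.json"
    if [ -f "$state" ]; then
      # Fall back to "unknown" if the field is missing.
      phase=$(jq -r '.currentPhase // "unknown"' "$state")
      printf '%s %s\n' "$wt" "$phase"
    fi
  done
}

# Demo against two synthetic worktrees: only wt-a holds Titan state.
demo=$(mktemp -d)
mkdir -p "$demo/wt-a/.codegraph/titan" "$demo/wt-b"
echo '{"currentPhase": "recon"}' > "$demo/wt-a/.codegraph/titan/titan-state.json"
find_titan_sessions "$demo/wt-a" "$demo/wt-b"
```

Feeding the real worktree paths from `git worktree list --porcelain` into the helper would reproduce the decision logic's first branch (exactly one hit with phase `recon`).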


Step 1 — Drift detection: has main moved since the last phase?

The codebase may have changed significantly since RECON ran. Detect this before auditing stale data.

  1. Compare main SHA:

    bash
    git rev-parse origin/main

    Compare against titan-state.json → mainSHA. If identical, skip to Step 2.

  2. If main has advanced, find what changed:

    bash
    git diff --name-only <mainSHA>..origin/main
  3. Cross-reference with Titan targets. Check which changed files overlap with:

    • Files in the priority queue (titan-state.json → priorityQueue)
    • Files in pending batches (titan-state.json → batches)
    • Hot files (titan-state.json → hotFiles)
    • Core symbols (titan-state.json → roles.coreCount files)
  4. Run structural diff to detect architecture-level changes:

    bash
    codegraph diff-impact <mainSHA>..origin/main -T --json 2>/dev/null || echo "DIFF_UNAVAILABLE"

    If unavailable (e.g., old SHA not in graph), fall back to file-count heuristics.

  5. Classify staleness:

    | Level | Condition | Action |
    | --- | --- | --- |
    | none | main unchanged | Continue normally |
    | low | main changed but <5 files, none in priority queue | Continue — note in issues.ndjson |
    | moderate | Some overlap: changed files appear in priority queue or batches (<20% of batches affected) | Continue but mark affected batches for re-audit |
    | high | >20% of batches affected OR core symbols changed | Warn user: "Significant changes on main since RECON. Recommend re-running affected batches or /titan-recon." |
    | critical | New files in src/, deleted files that were in batches, or >50% of priority queue affected | Stop and recommend: "Main has diverged significantly. Run /titan-recon to rebuild the baseline." |
  6. Append drift report to .codegraph/titan/drift-report.json (the file is a JSON array — read existing entries first, push the new entry, write back):

    json
    {
      "timestamp": "<ISO 8601>",
      "detectedBy": "gauntlet",
      "mainAtLastPhase": "<stored SHA>",
      "mainNow": "<current SHA>",
      "commitsBehind": 0,
      "changedFiles": ["<files changed on main>"],
      "impactedTargets": ["<priority queue targets in changed files>"],
      "impactedBatches": [1, 3],
      "impactedDomains": ["<domains containing changed files>"],
      "staleness": "none|low|moderate|high|critical",
      "recommendation": "continue|reassess-targets|rerun-gauntlet|rerun-recon",
      "reassessmentScope": {
        "targets": ["<specific targets to re-audit>"],
        "batches": [1, 3],
        "files": ["<specific files to re-check>"],
        "fullRecon": false
      }
    }
  7. Update state: Set titan-state.json → mainSHA to current origin/main after merging.

  8. If moderate: Mark impacted batches as "stale" in titan-state.json (change status from "completed" to "stale" or from "pending" to "pending-reassess"). These get re-audited during the batch loop.

  9. If re-running GAUNTLET and a previous drift-report.json exists with reassessmentScope, only re-audit the targets listed — don't repeat the entire pipeline. Clear completed targets from the scope as you go.
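The overlap check in step 3 and the staleness classification in step 5 can be sketched as below. This is a minimal sketch under assumptions: `jq` is available, `priorityQueue` is a flat array of file paths in `titan-state.json` (the real layout may differ), and the threshold function only distinguishes none/moderate/high by batch share.

```shell
# Sketch only: intersect files changed on main with the RECON priority
# queue, then classify staleness from the share of affected batches.
state=$(mktemp)
cat > "$state" <<'EOF'
{"priorityQueue": ["src/a.ts", "src/b.ts", "src/c.ts"]}
EOF

# Stand-in for `git diff --name-only <mainSHA>..origin/main` output.
changed_files="src/b.ts
docs/readme.md"

impacted=$(jq -r --arg changed "$changed_files" '
  ($changed | split("\n")) as $c
  | .priorityQueue[] | select(. as $f | $c | any(. == $f))' "$state")
echo "impacted: $impacted"

# affected batches / total batches -> staleness level (simplified)
classify_staleness() {
  awk -v a="$1" -v t="$2" 'BEGIN {
    if (a == 0)            print "none"
    else if (a / t <= 0.2) print "moderate"
    else                   print "high"
  }'
}
classify_staleness 1 10
```

The critical level needs extra signals (new/deleted files, core-symbol changes), which this sketch does not model.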


Step 2 — The Four Pillars

Every file must be checked against all four pillars. A file FAILS if it has any fail-level violation.

Pillar I: Structural Purity & Logic

Rule 1 — Complexity (multi-metric)

bash
codegraph complexity --file <f> --health -T --json

This returns ALL metrics in one call — use them all:

| Metric | Warn | Fail | Why it matters |
| --- | --- | --- | --- |
| cognitive | > 15 | > 30 | How hard to understand |
| cyclomatic | > 10 | > 20 | How many paths to test |
| maxNesting | > 3 | > 5 | Flatten with guards/extraction |
| halstead.effort | > 5000 | > 15000 | Information-theoretic complexity |
| halstead.bugs | > 0.5 | > 1.0 | Estimated defect count |
| mi (Maintainability Index) | < 50 | < 20 | Composite health score |
| loc.sloc | > 50 | > 100 | Function too long — split it |
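Once the metrics are parsed, the thresholds can be applied mechanically. The JSON object below is an illustrative shape, not the exact codegraph output, and `mi` is omitted because its thresholds are inverted (lower is worse); `jq` is assumed available.

```shell
# Sketch: map metric values to pass/warn/fail using the threshold table.
metrics='{"cognitive": 35, "cyclomatic": 8, "maxNesting": 4, "sloc": 60}'

violations=$(echo "$metrics" | jq -r '
  [ {metric: "cognitive",  warn: 15, fail: 30,  value: .cognitive},
    {metric: "cyclomatic", warn: 10, fail: 20,  value: .cyclomatic},
    {metric: "maxNesting", warn: 3,  fail: 5,   value: .maxNesting},
    {metric: "sloc",       warn: 50, fail: 100, value: .sloc} ]
  | map(. + {level: (if .value > .fail then "fail"
                     elif .value > .warn then "warn"
                     else "pass" end)})
  | .[] | select(.level != "pass")
  | "\(.metric): \(.value) (\(.level))"')
echo "$violations"
```

The same table-driven shape extends to the halstead.* and mi metrics by adding rows (with the comparison flipped for mi).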

Rule 2 — Async hygiene (every Promise caught)

bash
codegraph ast --kind await --file <f> -T --json
codegraph ast --kind call --file <f> -T --json

Cross-reference: .then() calls without .catch() on the same chain; async functions without try/catch wrapping await calls. Also grep:

bash
grep -n "\.then(" <f>
grep -n "async " <f>

Fail: uncaught promise chains or async functions without error handling.
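As a cheap textual first pass (the AST cross-reference is more reliable), lines opening a `.then(` chain can be checked for a `.catch(` on the same line. Both the sample file and the heuristic are illustrative; chains that attach `.catch(` on a later line will show up as false positives.

```shell
# Rough heuristic: flag .then( lines with no .catch( on the same line.
src=$(mktemp)
cat > "$src" <<'EOF'
fetchUser(id).then(render);
loadConfig().then(apply).catch(logError);
EOF

flagged=$(grep -n '\.then(' "$src" | grep -v '\.catch(' || true)
echo "$flagged"
```

Anything this pass flags should be confirmed against the AST output before recording a Rule 2 failure.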

Rule 3 — Dependency direction (no upward imports)

bash
codegraph check --boundaries -T --json
codegraph deps <f> --json

Cross-reference with GLOBAL_ARCH.md layer rules. Fail: import from a higher layer.

Rule 4 — Dead code (no unused exports)

bash
codegraph roles --role dead --file <f> -T --json
codegraph exports <f> -T --json

Fail: dead exports or unreferenced symbols.

Rule 5 — Resource hygiene

bash
codegraph ast --kind call --file <f> -T --json

Find addEventListener, setInterval, setTimeout, createReadStream, .on( — verify matching cleanup. Fail: resource acquired without cleanup.

Rule 6 — Immutability

bash
codegraph dataflow <f> -T --json

Also grep for mutation patterns:

bash
grep -n "\.push(\|\.splice(\|\.sort(\|\.reverse(\|delete " <f>

Fail: direct mutation of function arguments or external state.

Pillar II: Data & Type Sovereignty

Rule 7 — Magic values

bash
codegraph ast --kind string --file <f> -T --json

Also grep for numeric literals in logic branches:

bash
grep -nE "[^a-zA-Z_][0-9]{2,}[^a-zA-Z_]" <f>

Filter out imports, log format strings, test assertions. Warn: present. Fail: in if/switch conditions.

Rule 8 — Boundary validation

bash
codegraph roles --role entry --file <f> -T --json
codegraph where --file <f> -T --json

For entry-point functions, verify schema validation before processing. Fail: missing validation at system boundaries.

Rule 9 — Secret hygiene

bash
grep -niE "api.?key|secret|password|token|credential" <f>

Verify values come from config/env, not literals. Fail: hardcoded secret values.

Rule 10 — Error integrity (no empty catches)

bash
grep -nA2 "catch" <f>

Fail: empty catch block or catch with only // ignore or // TODO.
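A textual sketch of the empty-catch check follows; the pattern only catches single-line empty blocks, so treat it as a supplement to the `grep -nA2` pass above rather than a replacement. The sample file is synthetic.

```shell
# Flag catch blocks whose body is empty on one line, e.g. `catch (e) {}`.
src=$(mktemp)
cat > "$src" <<'EOF'
try { work(); } catch (e) {}
try { work(); } catch (e) { handle(e); }
EOF

flagged=$(grep -n 'catch *([^)]*) *{ *}' "$src" || true)
echo "$flagged"
```

Multi-line empty blocks and catches containing only `// ignore` still need the surrounding-context check from the rule.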

Pillar III: Ecosystem Synergy

Rule 11 — DRY (no duplicated logic)

bash
codegraph search "<function purpose>" -T --json
codegraph co-change <f> -T --json

Find semantically similar functions. If codegraph search fails (no embeddings), use grep for function signature patterns. Warn: similar patterns. Fail: near-verbatim copy.

Note: requires embeddings from /titan-recon. If titan-state.json → embeddingsAvailable is false, skip semantic search and note it.

Rule 12 — Naming symmetry

bash
codegraph where --file <f> -T --json

Scan function names in the domain. Flag mixed get/fetch/retrieve or create/make/build for the same concept. Warn: inconsistent. Advisory — not a fail condition.

Rule 13 — Config over code

bash
codegraph deps <f> --json

Also grep:

bash
grep -n "process.env\|NODE_ENV\|production\|development" <f>

Verify env-specific behavior driven by config, not inline branches. Warn: inline env branch.

Pillar IV: The Quality Vigil

Rule 14 — Naming quality

bash
codegraph where --file <f> -T --json

Flag vague names: data, obj, temp, res, val, item, result, single-letter vars (except i/j/k). Warn: present. Advisory.

Rule 15 — Structured logging

bash
codegraph ast --kind call --file <f> -T --json

Also grep:

bash
grep -n "console\.\(log\|warn\|error\|info\)" <f>

Warn: console.log in source files. Fail: in production code paths (non-debug, non-test).

Rule 16 — Testability

bash
codegraph fn-impact <fn> -T --json
codegraph query <fn> -T --json

High fan-out correlates with many mocks needed. Also read corresponding test file and count mock/stub/spy calls. Warn: > 10 mocks. Fail: > 15 mocks.
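The mock count can be approximated with a textual scan; the thresholds mirror Rule 16, and the test file here is synthetic. Counting raw `mock|stub|spy` occurrences overcounts slightly (variable names match too), which is acceptable for a warn-level heuristic.

```shell
# Count mock/stub/spy occurrences in a test file and map to a verdict.
testfile=$(mktemp)
cat > "$testfile" <<'EOF'
jest.mock('./db');
const spy = jest.spyOn(api, 'get');
EOF

mocks=$(grep -oE 'mock|stub|spy' "$testfile" | wc -l | tr -d ' ')
if   [ "$mocks" -gt 15 ]; then verdict="fail"
elif [ "$mocks" -gt 10 ]; then verdict="warn"
else                           verdict="pass"
fi
echo "$mocks mocks -> $verdict"
```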

Rule 17 — Critical path coverage

bash
codegraph roles --role core --file <f> -T --json

If file contains core symbols (high fan-in), note whether test files exist for it. Warn: core symbol with no test file. Advisory.

Audit trail (per file)

For every file, the NDJSON record MUST include:

  • Verdict and pillar verdicts (pass/warn/fail per pillar)
  • All metrics from codegraph complexity --health (cognitive, cyclomatic, nesting, MI, halstead.bugs, halstead.effort, loc.sloc)
  • Violation list with rule number, detail, and level
  • Recommendation for FAIL/DECOMPOSE targets

Codegraph provides all the data needed for a verifiable audit — no need to manually traverse files for line counts or nesting proof.


Step 3 — Batch audit loop

For each pending batch (from titan-state.json):

3a. Save pre-batch snapshot

bash
codegraph snapshot save titan-batch-<N>

Delete the previous batch snapshot if it exists:

bash
codegraph snapshot delete titan-batch-<N-1>

3b. Collect all metrics in one call

bash
codegraph batch complexity <target1> <target2> ... -T --json

This returns complexity + health metrics for all targets in one call. Parse the results.

For deeper context on high-risk targets:

bash
codegraph batch context <target1> <target2> ... -T --json

3c. Run Pillar I checks

For each file in the batch:

  • Parse complexity metrics from batch output (Rule 1 — all 7 metric thresholds)
  • Run AST queries for async hygiene (Rule 2), resource cleanup (Rule 5)
  • Check boundary violations (Rule 3): codegraph check --boundaries -T --json
  • Check dead code (Rule 4): codegraph roles --role dead --file <f> -T --json
  • Check immutability (Rule 6): codegraph dataflow + grep

3d. Run Pillar II checks

For each file:

  • Magic values (Rule 7): codegraph ast --kind string + grep
  • Boundary validation (Rule 8): check entry points
  • Secret hygiene (Rule 9): grep
  • Empty catches (Rule 10): grep

3e. Run Pillar III checks

  • DRY (Rule 11): codegraph search (if embeddings available) + co-change
  • Naming symmetry (Rule 12): codegraph where --file
  • Config over code (Rule 13): codegraph deps + grep

3f. Run Pillar IV checks

  • Naming quality (Rule 14): codegraph where --file
  • Structured logging (Rule 15): codegraph ast --kind call + grep
  • Testability (Rule 16): codegraph fn-impact + test file mock count
  • Critical path coverage (Rule 17): codegraph roles --role core

3g. Score each target

| Verdict | Condition |
| --- | --- |
| PASS | No fail-level violations |
| WARN | Warn-level violations only |
| FAIL | One or more fail-level violations |
| DECOMPOSE | Complexity fail + halstead.bugs > 1.0 + high fan-out (needs splitting) |

For FAIL/DECOMPOSE targets, capture blast radius:

bash
codegraph fn-impact <target> -T --json

3h. Write batch results

Append to .codegraph/titan/gauntlet.ndjson (one line per target):

json
{"target": "<name>", "file": "<path>", "verdict": "FAIL", "pillarVerdicts": {"I": "fail", "II": "warn", "III": "pass", "IV": "pass"}, "metrics": {"cognitive": 35, "cyclomatic": 15, "maxNesting": 4, "mi": 32, "halsteadEffort": 12000, "halsteadBugs": 1.2, "sloc": 85}, "violations": [{"rule": 1, "pillar": "I", "metric": "cognitive", "detail": "35 > 30 threshold", "level": "fail"}], "blastRadius": {"direct": 5, "transitive": 18}, "recommendation": "Split: halstead.bugs 1.2 suggests ~1 defect. Separate validation from I/O."}
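Appending one record per target can be done with `jq -nc` (compact output keeps one record per line, which is what NDJSON requires). The field values below are illustrative, and `jq` is an assumed dependency.

```shell
# Build a compact audit record and append it to the phase NDJSON file.
out=$(mktemp)
jq -nc \
  --arg target "parseConfig" \
  --arg file "src/config.ts" \
  --arg verdict "WARN" \
  --argjson metrics '{"cognitive": 18, "mi": 55}' \
  '{target: $target, file: $file, verdict: $verdict, metrics: $metrics}' \
  >> "$out"

jq -r '.verdict' "$out"
```

Because each record is written with `>>` immediately after its batch, partial results survive a crash or context stop, as the Rules section requires.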

3i. Update state machine

Update titan-state.json:

  • Set batch status to "completed"
  • Increment progress.audited, .passed, .warned, .failed
  • Add entries to fileAudits map
  • Update snapshots.lastBatch
  • Update lastUpdated

3j. Progress check

Print: Batch N/M: X pass, Y warn, Z fail

Context budget: If context is growing large:

  1. Write all state to disk
  2. Print: Context budget reached after Batch N. Run /titan-gauntlet to continue.
  3. Stop.

Step 4 — Clean up batch snapshots

After all batches complete, delete the last batch snapshot:

bash
codegraph snapshot delete titan-batch-<N>

Keep titan-baseline — GATE may need it.

If stopping early for context, keep the last batch snapshot for safety.


Step 5 — Aggregate and report

Compute from gauntlet.ndjson:

  • Pass / Warn / Fail / Decompose counts
  • Top 10 worst offenders (by violation count or halstead.bugs)
  • Most common violations by pillar
  • Files with the most failing functions

Write .codegraph/titan/gauntlet-summary.json:

json
{
  "phase": "gauntlet",
  "timestamp": "<ISO 8601>",
  "complete": true,
  "summary": {"totalAudited": 0, "pass": 0, "warn": 0, "fail": 0, "decompose": 0},
  "worstOffenders": [],
  "commonViolations": {"I": [], "II": [], "III": [], "IV": []}
}

Set "complete": false if stopping early.
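The summary counts can be derived from gauntlet.ndjson with `jq -s`, which slurps the NDJSON lines into one array before grouping. The sample records below stand in for real audit output; `jq` is an assumed dependency.

```shell
# Aggregate verdict counts from an NDJSON audit log.
nd=$(mktemp)
cat > "$nd" <<'EOF'
{"target": "a", "verdict": "PASS"}
{"target": "b", "verdict": "FAIL"}
{"target": "c", "verdict": "PASS"}
EOF

summary=$(jq -s '{
  totalAudited: length,
  pass:      (map(select(.verdict == "PASS"))      | length),
  warn:      (map(select(.verdict == "WARN"))      | length),
  fail:      (map(select(.verdict == "FAIL"))      | length),
  decompose: (map(select(.verdict == "DECOMPOSE")) | length)
}' "$nd")
echo "$summary"
```

Worst offenders follow the same pattern, e.g. `jq -s 'sort_by(-(.metrics.halsteadBugs // 0)) | .[:10]'` over the same file.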

Print summary to user:

  • Pass/Warn/Fail/Decompose counts
  • Top 5 worst (with their halstead.bugs and mi scores)
  • Most common violation per pillar
  • Next step: /titan-gauntlet to continue (if incomplete) or /titan-sync

Issue Tracking

Throughout this phase, if you encounter any of the following, append a JSON line to .codegraph/titan/issues.ndjson:

  • Codegraph bugs: wrong metrics, incorrect role classification, missing symbols, parse failures
  • Tooling issues: batch command failures, AST query errors, embedding unavailability
  • Process suggestions: threshold adjustments, missing rules, pillar improvements
  • Codebase observations: patterns not covered by the manifesto, architectural smells

Format (one JSON object per line, append-only):

json
{"phase": "gauntlet", "timestamp": "<ISO 8601>", "severity": "bug|limitation|suggestion", "category": "codegraph|tooling|process|codebase", "description": "<what happened>", "context": "<command, target, or file involved>"}

Log issues as they happen — don't batch them. The /titan-close phase compiles these into the final report.
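A single issue line can be appended the moment a problem appears; `date -u` supplies the ISO 8601 timestamp and `>>` keeps the file append-only. The description and context below are illustrative.

```shell
# Log one issue line to the append-only NDJSON file.
issues=$(mktemp)
jq -nc \
  --arg ts "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
  '{phase: "gauntlet", timestamp: $ts, severity: "bug",
    category: "codegraph",
    description: "batch complexity returned no metrics",
    context: "codegraph batch complexity"}' \
  >> "$issues"

jq -r '.severity' "$issues"
```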


Rules

  • Batch processing is mandatory. Never audit more than $ARGUMENTS targets at once.
  • Write NDJSON incrementally. Partial results survive crashes.
  • Always use --json and -T on codegraph commands.
  • Use codegraph batch <command> <targets> for multi-target queries — not separate calls.
  • Leverage --health and --above-threshold — they give you all metrics in one call.
  • Context budget: Stop at ~80%, save state, tell user to re-invoke.
  • Lint runs once in GATE, not per-batch here. Don't run npm run lint.
  • Advisory rules (12, 14, 17) produce warnings, never failures.
  • Dead symbols from RECON should be flagged for removal, not skipped.
  • If any command fails or produces unexpected output, log it to issues.ndjson before continuing.

Self-Improvement

This skill lives at .claude/skills/titan-gauntlet/SKILL.md. Adjust thresholds or rules after dogfooding.

FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently Asked Questions

What is titan-gauntlet?

titan-gauntlet is a code analysis skill that generates function-level dependency graphs, computes complexity metrics, and enforces architecture boundaries for AI agents. It is aimed at code-analysis agents that need function-level dependency graph analysis across multiple languages.

How do I install titan-gauntlet?

Run the command: npx killer-skills add optave/codegraph/titan-gauntlet. It works with 19+ platforms, including Cursor, Windsurf, VS Code, and Claude Code.

What are the use cases for titan-gauntlet?

Key use cases include analyzing high-priority targets against the four pillars of quality, enforcing architecture boundaries in large-scale codebases, and calculating complexity metrics to guide refactoring.

Which IDEs are compatible with titan-gauntlet?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for titan-gauntlet?

Requires CLI setup and configuration. Limited to 11 supported languages. Context capacity limits may require re-invocation.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add optave/codegraph/titan-gauntlet. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use titan-gauntlet immediately in the current project.

Related Skills

Looking for an alternative to titan-gauntlet or another community skill for your workflow? Explore these related open-source skills.

widget-generator (f): Generate customizable widget plugins for the prompts.chat feed system

linear (lobehub): Linear issue management. MUST USE when: (1) user mentions LOBE-xxx issue IDs (e.g. LOBE-4540), (2) user says linear, linear issue, link linear, (3) creating PRs that reference Linear issues. Provides

testing (lobehub): Testing guide using Vitest. Use when writing tests (.test.ts, .test.tsx), fixing failing tests, improving test coverage, or debugging test issues. Triggers on test creation, test debugging, mock setup

zustand (lobehub): Zustand state management guide. Use when working with store code (src/store/**), implementing actions, managing state, or creating slices. Triggers on Zustand store development, state management questions, or action implementation.