Killer-Skills

context-hunter

v1.0.0

About this Skill

context-hunter facilitates project collaboration by running a focused discovery loop and classifying task complexity into L0, L1, and L2 levels. It is well suited to AI agents that need efficient project collaboration and task-complexity management in git ecosystems.

Features

Classifies task complexity into L0 (trivial), L1 (moderate), and L2 (high-risk) levels
Runs a focused discovery loop to find the right files
Scales output by level: no context brief for L0, a micro-brief for L1, a full brief for L2
Lets agents proceed directly on L0 tasks without discovery overhead
Facilitates collaboration between humans and AI agents in a git ecosystem

Author: MrLesk
Updated: 3/7/2026

Quality Score

29 (Excellent, Top 5%), based on code quality & docs.
Installation

Universal install (auto-detect; works with Cursor, Windsurf, and VS Code):

> npx killer-skills add MrLesk/Backlog.md/context-hunter

Agent Capability Analysis

The context-hunter MCP Server by MrLesk is an open-source community integration for Claude and other AI agents, enabling task automation and capability expansion.

Ideal Agent Persona

Perfect for AI Agents needing efficient project collaboration and task complexity management in git ecosystems.

Core Value

Empowers agents to classify task complexity using levels such as L0, L1, and L2, and run focused discovery loops, streamlining workflow and enhancing collaboration with humans through git ecosystems and complexity gates.

Capabilities Granted for context-hunter MCP Server

Classifying task complexity for efficient workflow management
Running focused discovery loops for targeted file analysis
Streamlining collaboration between humans and AI agents in git ecosystems

Prerequisites & Limits

  • Requires git ecosystem integration
  • Limited to task complexity classification and discovery loops
Project

  • SKILL.md (5.1 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

SKILL.md

Context Hunter

Before writing code, run a focused discovery loop. Do not load everything. Find the right files.

Complexity Gate

Classify task complexity first:

  • L0 (trivial): typos, renames, copy-only edits, obvious single-line fixes with no behavior change.
  • L1 (moderate): behavior changes in one bounded area.
  • L2 (high-risk): cross-module changes, data semantics, refactors, architecture-impacting work.

Output by level:

  • L0: no context brief, proceed directly.
  • L1: write a micro-brief.
  • L2: write a full context brief.

Re-evaluate level during discovery and implementation. If new evidence shows higher complexity than initially classified, upgrade the level and apply the stricter workflow.
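As a rough sketch, the gate and the upgrade rule could look like the following (the boolean evidence flags are invented placeholders, not part of context-hunter):

```python
# Hypothetical sketch of the complexity gate. The evidence flags are
# placeholders for whatever the agent has observed about the task so far.
def classify(behavior_change: bool, crosses_modules: bool,
             touches_data_semantics: bool) -> str:
    """Map task evidence to a complexity level."""
    if crosses_modules or touches_data_semantics:
        return "L2"  # high-risk: full context brief
    if behavior_change:
        return "L1"  # moderate: micro-brief
    return "L0"      # trivial: proceed directly, no brief

def upgrade(current: str, reassessed: str) -> str:
    """Levels only ratchet upward when new evidence appears."""
    order = ["L0", "L1", "L2"]
    return max(current, reassessed, key=order.index)
```

For example, a typo fix starts at L0, but if discovery reveals the string is compared in tests, `upgrade("L0", "L1")` moves the workflow onto the stricter L1 path.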

Core Behavior

Act like a senior engineer who asks the next useful question:

  1. Assess completeness: Check whether the request omits expected concerns seen in analogous code.
  2. Discover selectively: Read the minimum set of relevant files.
  3. Validate assumptions: Confirm with tests/config/history.
  4. Synthesize: Capture findings before coding for L1/L2.

Discovery Workflow (Before Coding)

1) Assess Request Completeness

Ask: "What is likely missing?"

Examples:

  • Similar endpoints include auth/validation. Is that expected here?
  • This area uses soft-delete semantics. Should this operation follow that?
  • Similar flows emit telemetry/error states. Should this change do the same?
  • Existing module boundaries suggest a different placement. Is the current request still correct?

2) Run Targeted Discovery

Prioritize these in order:

  1. Find analogous implementations and copy their structure.
  2. Trace data flow for similar features end-to-end.
  3. Identify reusable utilities before creating new helpers.
  4. Inspect nearby tests to infer team priorities and edge cases.
  5. Read recent commits in the same area for current direction.

Portable discovery actions:

  • Search for feature/domain terms in relevant directories.
  • Enumerate nearby files in the affected area.
  • Inspect recent change history for touched paths.
  • Run targeted validation first, then broader project checks as needed.
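A minimal sketch of the first three actions, assuming a POSIX-style checkout; the helper names are invented for illustration:

```python
from pathlib import Path
import subprocess

# Sketch of portable discovery actions. Function names are illustrative,
# not part of context-hunter's interface.
def search_term(root: str, term: str) -> list:
    """Action 1: search for a feature/domain term in a relevant directory."""
    hits = []
    for path in Path(root).rglob("*"):
        if path.is_file() and term in path.read_text(errors="ignore"):
            hits.append(str(path))
    return sorted(hits)

def nearby_files(area: str) -> list:
    """Action 2: enumerate files in the affected area."""
    return sorted(str(p) for p in Path(area).iterdir() if p.is_file())

def recent_history(area: str, n: int = 10) -> str:
    """Action 3: inspect recent change history for the touched paths."""
    out = subprocess.run(
        ["git", "log", "--oneline", f"-{n}", "--", area],
        capture_output=True, text=True,
    )
    return out.stdout
```

Targeted validation (action 4) is project-specific, e.g. running only the test file next to the touched module before the full suite.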

3) Probe for Silent Knowledge

Look for implicit rules encoded in code:

  • Soft-delete, audit, or historical retention patterns (for data-touching changes).
  • Naming conventions (userId vs user_id) and file placement norms.
  • Existing design system choices (for this repo: Nuxt + Vue + Tailwind 4.1 patterns).
  • “Dead but dangerous” APIs/functions that exist but are no longer preferred.

4) Confidence-Based Stop Rule

Stop discovery when confidence is high enough to predict likely review feedback. If you cannot anticipate reviewer concerns yet, keep looking.

5) Produce Scaled Discovery Output

For L1, write a micro-brief:

  • Closest analog and chosen pattern.
  • Main risk or ambiguity.

For L2, write a full context brief:

  • Analogous files reviewed (with paths).
  • Patterns to follow (state/data/error handling/naming).
  • Reusable utilities/components/composables identified.
  • Risks and unknowns.

For L1/L2, keep an internal discovery log:

  • Files checked.
  • Patterns inferred.
  • Decisions made from evidence.
  • Naming evidence for new identifiers (new name -> analog paths -> extracted pattern).
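One possible shape for that internal log, with field names invented here for illustration:

```python
from dataclasses import dataclass, field

# Hypothetical structure for the internal L1/L2 discovery log.
# Field names mirror the bullets above; they are not a real API.
@dataclass
class NamingEvidence:
    new_name: str
    analog_paths: list          # where the pattern was observed
    extracted_pattern: str      # e.g. "camelCase ids on route handlers"

@dataclass
class DiscoveryLog:
    files_checked: list = field(default_factory=list)
    patterns_inferred: list = field(default_factory=list)
    decisions: list = field(default_factory=list)
    naming: list = field(default_factory=list)
```

An agent would append to this as discovery proceeds, then distill it into the micro-brief or full brief for the chosen level.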

Clarification Policy

  • Prefer fewer questions.
  • Ask only when the answer would change implementation approach.
  • If convention is clear, proceed silently.
  • Escalate only genuine ambiguity/conflict or product-level tradeoffs.

During Implementation

Changes should look native to the codebase:

  • Reuse existing abstractions first.
  • Match existing module boundaries and naming.
  • Follow established error-handling and testing style.
  • Prefer consistency over novelty.

Naming derivation rule:

  • Do not invent names from general priors.
  • For each new identifier family (file/function/variable/class/route), derive naming from closest local analogs.
  • Use at least 2 analogous examples when available before finalizing a new naming pattern.
  • If no analog exists, introduce the new term explicitly and record it as a no-analog exception.
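The two-analog requirement can be sketched as a small check; the casing detector below is an invented illustration, not context-hunter's actual logic:

```python
import re
from typing import Optional

# Hypothetical helper: infer a casing convention from local analog names.
# Requires at least two agreeing analogs, mirroring the rule above.
def derive_convention(analogs: list) -> Optional[str]:
    def casing(name: str) -> str:
        if "_" in name:
            return "snake_case"
        if re.search(r"[a-z][A-Z]", name):
            return "camelCase"
        return "flat"
    observed = {casing(a) for a in analogs}
    if len(analogs) >= 2 and len(observed) == 1:
        return observed.pop()
    return None  # no clear pattern: record a no-analog exception instead
```

Returning `None` here corresponds to the no-analog exception: the new term is introduced explicitly rather than silently guessed.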

If requirements conflict with discovered conventions:

  • Flag the conflict explicitly.
  • Propose 1-2 alternatives aligned with existing patterns.
  • Ask for decision when tradeoffs are product/architecture-level.

Verification

After coding:

  • Run targeted validation for changed area first.
  • Run broader checks appropriate for risk (typecheck, lint/check, tests as needed).
  • Confirm no new pattern drift was introduced.
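As a sketch, the ordered verification could be a simple ladder; the stage commands in the note below are placeholders for the repo's real tooling:

```python
import subprocess
from typing import Optional

# Sketch of a risk-scaled verification ladder: targeted checks first,
# broader checks after. Stage commands are supplied by the caller.
def verify(stages: list) -> Optional[str]:
    """Run (name, command) stages in order; return the first failing
    stage name, or None if everything passed."""
    for name, cmd in stages:
        if subprocess.run(cmd).returncode != 0:
            return name  # stop at the first failure
    return None
```

For instance, `stages = [("targeted tests", ["npx", "vitest", "run", "src/models"]), ("typecheck", ["npx", "tsc", "--noEmit"])]` runs the narrow check before the broad one; the commands here are illustrative only.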

Checklist

L0:

  • Confirmed change is truly trivial and safe to execute without full discovery.

L1/L2:

  • Classified complexity (L0/L1/L2) before discovery
  • Studied analogous features in the codebase
  • Checked for reusable utilities
  • Reviewed test patterns for similar functionality
  • Assessed request completeness before implementation
  • Identified at least one silent convention/risk
  • Produced the required artifact for the chosen level
  • Kept an internal discovery log for L1/L2
  • All new names were derived from codebase analogs, or marked as intentional no-analog exceptions
  • Verified final approach matches existing patterns

Related Skills

Looking for an alternative to context-hunter, or building a community AI agent? Explore these related open-source MCP servers.


widget-generator

f

widget-generator is an open-source AI agent skill for creating widget plugins that are injected into prompt feeds on prompts.chat. It supports two rendering modes: standard prompt widgets using default PromptCard styling and custom render widgets built as full React components.

149.6k
0
Design

chat-sdk

lobehub

chat-sdk is a unified TypeScript SDK for building chat bots across multiple platforms, providing a single interface for deploying bot logic.

73.0k
0
Communication

zustand

lobehub

The ultimate space for work and life — to find, build, and collaborate with agent teammates that grow with you. We are taking agent harness to the next level — enabling multi-agent collaboration, effortless agent team design, and introducing agents as the unit of work interaction.

72.8k
0
Communication

data-fetching

lobehub


72.8k
0
Communication