ground: an opinionated AI agent development stack

v1.0.0
GitHub

About this Skill

Ground is an opinionated AI agent development stack: tools and guides for streamlined development and workflow management. Ideal for AI agents that need lightweight, verified access to external documentation.

Features

Executes directly without spawning subagents for simple queries
Verifies external dependencies against current documentation
Follows an explicit design rationale for efficient, low-overhead execution
Applies Lita research findings: simple agents reach ~97% of complex-system performance with 15x less code
Runs query + verification sequences in a single direct pass (~300 tokens)

# Core Topics

Author: Mburdo
Updated: 3/8/2026

Quality Score: 42 (Top 5%, Excellent; based on code quality & docs)
Installation
Universal install (auto-detects Cursor, Windsurf, and VS Code):

```bash
npx killer-skills add Mburdo/knowledge_and_vibes/ground
```

Agent Capability Analysis

The ground MCP server by Mburdo is an open-source community integration for Claude and other AI agents, enabling seamless task automation and capability expansion.

Ideal Agent Persona

Perfect for AI Agents needing streamlined development and workflow management through opinionated development stacks.

Core Value

Empowers agents to execute direct queries and verify external dependencies against current documentation using tools, guides, templates, and workflows, streamlining development with ~300-token query sequences.

Capabilities Granted for ground MCP Server

Streamlining AI agent development workflows
Verifying external dependencies against documentation
Executing direct queries for efficient development

Prerequisites & Limits

  • Direct execution only, no subagent spawning
  • Simple query + verification sequence, not suitable for substantial analytical work
Project files:

  • SKILL.md (8.0 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

# SKILL.md

Ground — External Documentation

Verify external dependencies against current documentation before implementation. Direct execution.

Design rationale: This skill executes directly rather than spawning subagents because grounding is a simple query + verification sequence (~300 tokens), not substantial analytical work. Per Lita research: "Simple agents achieve 97% of complex system performance with 15x less code."

When This Applies

| Signal | Action |
|--------|--------|
| About to write import for external lib | Ground first |
| Using API/SDK methods | Verify current syntax |
| Framework-specific patterns | Check version compatibility |
| Auth/security code | Always verify current best practices |
| User says "ground" or "verify" | Run full grounding check |
| New library or major version | Deep grounding |

Default: When uncertain about external APIs, ground.


Tool Reference

Exa MCP Tools

| Tool | Purpose |
|------|---------|
| `web_search_exa(query)` | General documentation search |
| `get_code_context_exa(query)` | Code examples from GitHub/tutorials |
| `crawling(url)` | Extract content from specific URLs |

Decision Tree

Where does truth live?

EXTERNAL DOCS ──► /ground (this skill)
                 "What's the current API for X?"

CODEBASE ───────► /explore (warp-grep)
                 "How does X work in our code?"

HISTORY ────────► /recall (cm + cass)
                 "How did we do this before?"

TASKS ──────────► bv --robot-*
                 "What should I work on?"
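
The decision tree can be mirrored in a tiny lookup. This is an illustrative sketch: the dictionary keys and the fallback behavior are my labels, not part of the skill itself.

```python
# Map "where the truth lives" to the command/skill that handles it.
ROUTES = {
    "external_docs": "/ground",   # current APIs of external libraries
    "codebase": "/explore",       # how X works in our own code
    "history": "/recall",         # how we did this before
    "tasks": "bv --robot-*",      # what to work on next
}

def route(truth_source: str) -> str:
    """Return the command for a given source of truth.

    Per the skill's default, fall back to grounding when uncertain.
    """
    return ROUTES.get(truth_source, "/ground")
```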

Execution Flow

Execute these steps directly. No subagents needed.

Step 1: Identify What Needs Grounding

Categories that need verification:

| Category | Why | Risk |
|----------|-----|------|
| Imports/Initialization | Syntax changes between versions | High |
| API Methods | Methods get renamed/deprecated | High |
| Configuration | Options/flags evolve | Medium |
| Async Patterns | Async APIs vary significantly | Medium |
| Auth/Security | Security best practices change | Critical |
| Data Validation | Validators/schemas evolve | Medium |

Step 2: Construct Query

Formula:

{library_name} {specific_feature} {version_if_known} 2024 2025

Examples:

FastAPI Pydantic v2 model_validator 2024 2025
Next.js 14 app router server components
React useOptimistic hook 2024
Prisma findMany where clause 2024

Strengthening queries:

  • Add version number if known
  • Add year for recency (2024, 2025)
  • Add "official" or "docs" for authoritative sources
  • Add "migration" if moving between versions
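
The query formula above can be sketched as a small helper. The function name and defaults are illustrative assumptions, not part of the skill:

```python
def build_query(library: str, feature: str, version: str = "",
                years: str = "2024 2025") -> str:
    """Assemble a grounding query from the formula:
    {library} {feature} {version_if_known} {years}.
    Empty parts are skipped."""
    parts = [library, feature, version, years]
    return " ".join(p for p in parts if p)

# build_query("FastAPI", "Pydantic v2 model_validator")
#   -> "FastAPI Pydantic v2 model_validator 2024 2025"
```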

Step 3: Execute Search

For documentation:

```python
web_search_exa("{library} {feature} {version} 2024 2025")
```

For code examples:

```python
get_code_context_exa("{library} {pattern} implementation example")
```

For specific page:

```python
crawling("{url}")
```

Step 4: Verify Results

| Criterion | Pass If |
|-----------|---------|
| Source | Official docs or reputable repo |
| Freshness | Updated within 12 months |
| Version | Matches your dependency |
| Completeness | Full import + usage pattern |
| Status | Not deprecated |
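
The pass criteria can be checked mechanically. A minimal sketch, assuming a result dict with hypothetical keys (`source_official`, `last_updated`, `version`, `has_full_example`, `deprecated`) that are not part of any real search API:

```python
from datetime import date, timedelta

def passes_verification(result: dict, required_version: str) -> bool:
    """Apply the five pass criteria to one search result.

    Freshness is interpreted as "updated within the last 365 days".
    """
    fresh = (date.today() - result["last_updated"]) <= timedelta(days=365)
    return (result["source_official"]          # Source: official or reputable
            and fresh                          # Freshness: within 12 months
            and result["version"] == required_version  # Version match
            and result["has_full_example"]     # Completeness
            and not result["deprecated"])      # Status: not deprecated
```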

Step 5: Record Grounding Status

Track in your work:

```markdown
## Grounding Status

| Pattern | Query | Source | Status |
|---------|-------|--------|--------|
| `@model_validator` | "Pydantic v2 2024" | docs.pydantic.dev | ✅ Verified |
| `useOptimistic` | "React 19 2024" | react.dev | ✅ Verified |
```

Status values:

  • ✅ Verified — Matches current docs
  • ⚠️ Changed — API changed, updated approach
  • ❌ Deprecated — Found alternative
  • ❓ Unverified — Couldn't confirm, flagged
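
Rendering the grounding-status table can be automated. A sketch under the assumption that status rows are tracked as plain tuples (the function name is mine, not the skill's):

```python
def grounding_table(rows) -> str:
    """Render (pattern, query, source, status_key) tuples as the
    markdown grounding-status table shown above."""
    status = {"verified": "✅ Verified", "changed": "⚠️ Changed",
              "deprecated": "❌ Deprecated", "unverified": "❓ Unverified"}
    lines = ["| Pattern | Query | Source | Status |",
             "|---------|-------|--------|--------|"]
    for pattern, query, source, key in rows:
        lines.append(f'| `{pattern}` | "{query}" | {source} | {status[key]} |')
    return "\n".join(lines)
```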

Query Patterns

Current API Documentation

"{library} {method} documentation 2024"
"{library} {feature} API reference"
"{library} official docs {feature}"

Migration Between Versions

"{library} v{old} to v{new} migration"
"{library} {version} breaking changes"
"{library} upgrade guide {version}"

Code Examples

```python
get_code_context_exa("{library} {pattern} implementation example")
get_code_context_exa("{library} {use_case} tutorial")
```

Security/Auth Patterns

"{auth_method} best practices 2024"
"{library} authentication {pattern} security"
"OAuth PKCE {language} 2024"

Error Resolution

"{library} {error_message} fix"
"{library} {error_type} troubleshooting"

Grounding Depth Levels

| Depth | When | What to Check |
|-------|------|---------------|
| Quick | Familiar pattern, just confirming | One query, verify method exists |
| Standard | Normal implementation | Query + check for deprecation warnings |
| Deep | Security/auth, new library, major version | Multiple queries, read changelog, check issues |
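
The depth choice reduces to a short conditional. An illustrative heuristic, with flag names of my own choosing:

```python
def grounding_depth(security: bool = False, new_library: bool = False,
                    major_version: bool = False, familiar: bool = False) -> str:
    """Pick a grounding depth per the table above.

    Deep triggers win over familiarity: a familiar library that just
    crossed a major version still warrants deep grounding.
    """
    if security or new_library or major_version:
        return "deep"      # multiple queries, changelog, issues
    if familiar:
        return "quick"     # one query, confirm the method exists
    return "standard"      # query + check for deprecation warnings
```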

Version Sensitivity Signals

Ground more carefully when you see:

| Signal | Risk |
|--------|------|
| Major version in deps (v1 → v2) | Breaking changes likely |
| Library < 2 years old | API still evolving |
| "experimental" or "beta" in docs | May change without notice |
| Security-related code | Best practices evolve |
| AI training data gap | Libs released after training cutoff |

Failure Handling

| Issue | Response |
|-------|----------|
| No results | Broaden query, try alternate terms |
| Conflicting info | Official docs > GitHub > tutorials |
| Only outdated info | Mark ❓, proceed with caution, add TODO |
| Can't verify | Flag for human review |

Query Anti-Patterns

| Bad Query | Problem | Better Query |
|-----------|---------|--------------|
| "how to use {library}" | Too vague | "{library} {specific_feature} 2024" |
| "{library} tutorial" | May be outdated | "{library} {feature} official docs" |
| "best {library}" | Opinion, not docs | "{library} {pattern} documentation" |
| "{library}" alone | No specificity | Add feature + version + year |
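
The anti-patterns above are regular enough to lint for. A rough heuristic sketch (the patterns are mine and deliberately narrow; they will not catch every vague query):

```python
import re

def too_vague(query: str) -> bool:
    """Flag queries matching the anti-pattern shapes above:
    "how to use X", "X tutorial", "best X", or a bare library name."""
    q = query.lower().strip()
    anti_patterns = [
        r"how to use \S+",   # too vague
        r"\S+ tutorial",     # may be outdated
        r"best \S+",         # opinion, not docs
        r"\S+",              # bare library name, no specificity
    ]
    return any(re.fullmatch(p, q) for p in anti_patterns)
```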

Query Strengthening

If initial query returns poor results:

  1. Add version: "React 19 useOptimistic" vs "React useOptimistic"
  2. Add year: "FastAPI middleware 2024" vs "FastAPI middleware"
  3. Add "official": "Next.js official docs app router"
  4. Be more specific: "Prisma findMany where clause" vs "Prisma queries"
  5. Try alternate terms: "authentication" vs "auth" vs "login"
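
The escalation steps can be generated up front and tried in order. A minimal sketch; the function name and return shape are assumptions:

```python
def strengthen(query: str, version: str = "") -> list:
    """Return progressively stronger variants of a weak query,
    following steps 1-3 above (version, year, "official docs")."""
    variants = []
    if version:
        variants.append(f"{query} {version}")        # 1. add version
    variants.append(f"{query} 2024")                 # 2. add year for recency
    variants.append(f"{query} official docs")        # 3. prefer authoritative
    return variants
```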

Progressive Grounding

For large implementations:

  1. Start: Ground the core imports/setup
  2. As you go: Ground each new external method before using
  3. Before commit: Review grounding table, verify nothing ❓

Don't try to ground everything upfront — ground just-in-time as you encounter external deps.
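
The just-in-time discipline amounts to grounding each external dependency once, on first use. A sketch with an injected search callable (the factory name is hypothetical):

```python
def make_grounder(search):
    """Return a function that grounds each dependency exactly once.

    `search` is any callable taking a query string, e.g. a wrapper
    around web_search_exa. Repeat calls for a dependency are no-ops.
    """
    grounded = set()

    def ground_if_needed(dependency: str) -> bool:
        if dependency not in grounded:
            search(f"{dependency} 2024 2025")
            grounded.add(dependency)
        return True  # dependency is grounded after this call

    return ground_if_needed
```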


Integration with Workflow

Before implementing (via /advance)

```python
# Check external dependencies
web_search_exa("{library} {feature} 2024")
```

During implementation

```python
# Just-in-time verification
get_code_context_exa("{specific_method} example")
```

Before commit

Review grounding table, ensure all ❓ are resolved or documented.


Requirements

Requires Exa API key configured:

```bash
claude mcp add exa -s user \
  -e EXA_API_KEY=your-key \
  -- npx -y @anthropic-labs/exa-mcp-server
```

Quick Reference

```python
# Documentation search
web_search_exa("{library} {feature} {version} 2024 2025")

# Code examples
get_code_context_exa("{library} {pattern} implementation example")

# Specific page
crawling("{url}")
```

Query formula:

{library} {feature} {version} 2024 2025

Anti-Patterns

| Don't | Why | Do Instead |
|-------|-----|------------|
| Skip grounding for external APIs | Training data may be stale | Ground before using |
| Use tutorials over docs | Tutorials get outdated | Prefer official docs |
| Ignore version mismatches | Breaking changes exist | Verify version compatibility |
| Ground everything upfront | Wastes time | Ground just-in-time |
| Skip grounding for "familiar" libs | APIs change | Quick verify is still worth it |

See Also

  • /recall — Past session patterns
  • /explore — Codebase search
  • /advance — Bead workflow (includes grounding step)
