telemetry

v1.0.0

About this Skill

Telemetry is a skill that provides schema inference, frontmatter validation, and semantic search capabilities for markdown directories. It is ideal for AI agents that need advanced telemetry for markdown content, such as those leveraging Rust and Word2Vec.

Features

Performs schema inference for markdown directories
Validates frontmatter using dedicated conventions
Supports semantic search with Word2Vec embeddings
Utilizes Rust for efficient processing
Exports data to Parquet format for analysis
Integrates with CLI for streamlined workflows

edochi
Updated: 3/17/2026

Quality Score

Score 33: Excellent (Top 5%), based on code quality & docs.
Installation
Universal install (auto-detect):

> npx killer-skills add edochi/mdvs/telemetry

Supports 19+ platforms, including Cursor, Windsurf, VS Code, Trae, Claude, OpenClaw, and 12 more.

Agent Capability Analysis

The telemetry skill by edochi is an open-source community AI agent skill for Claude Code and other IDE workflows, helping agents execute tasks with better context, repeatability, and domain-specific guidance. It is optimized for markdown schema inference, frontmatter validation, and semantic search.

Ideal Agent Persona

Perfect for AI Agents needing advanced telemetry capabilities for markdown directories, such as those leveraging Rust and Word2Vec.

Core Value

Empowers agents to perform schema inference, frontmatter validation, and semantic search on markdown directories, using Rust and Word2Vec for content analysis. It also establishes log-level conventions (error, warn, info, and below) for robust telemetry and observability.

Capabilities Granted for telemetry

Performing schema inference on markdown directories
Validating frontmatter for consistency and accuracy
Conducting semantic search for efficient content retrieval

Prerequisites & Limits

  • Requires dedicated telemetry/observability spec or conventions document
  • Limited to markdown directories
  • Dependent on technologies like Rust and Word2Vec
SKILL.md

Telemetry & Observability Conventions

If the project has a dedicated telemetry/observability spec or conventions document, read it first — it takes precedence over the generic guidance below.

Log Levels

| Level    | When to Use                                         | Example                                 |
|----------|-----------------------------------------------------|-----------------------------------------|
| `error!` | Unrecoverable failures requiring operator attention | External service call failed permanently |
| `warn!`  | Recoverable issues, degraded operation              | Retry triggered, fallback activated     |
| `info!`  | Lifecycle events, operational milestones            | Server started, connection established  |
| `debug!` | Internal state useful during development            | State machine transition details        |
| `trace!` | Per-frame, hot-path data (high volume)              | Frame encoded, bytes serialized         |

Instrumentation Depth

Not all code deserves the same instrumentation level. Choose based on the crate's role:

  • Hot-path code (serialization, framing, tight loops): trace! only, no spans. Spans add overhead that matters here.
  • Type definition crates (pure data types, no logic): trace! only if anything at all.
  • Core business logic (servers, handlers, state machines): Full instrumentation — #[instrument] + spans.
  • Client-facing code (SDKs, CLI flows): Full instrumentation for debuggability.

#[instrument] Rules

  • Use on async functions that represent logical operations or flow steps
  • Skip on hot-path synchronous functions (serialization, encoding, tight loops)
  • Always skip sensitive fields: tokens, keys, passwords, raw payloads
  • Include identifying fields that aid correlation (IDs, resource names)
  • Set appropriate level — default is INFO, use level = "debug" or level = "trace" for noisy functions

Structured Logging

Always use named fields, never string interpolation:

```rust
// Good: named fields — searchable, parseable
trace!(payload_bytes = payload.len(), frame_bytes = buf.len(), "frame encoded");

// Bad: string interpolation — opaque to log aggregators
trace!("frame encoded, payload={}, frame={}", payload.len(), buf.len());
```

Span Design

For request/message processing pipelines, use a three-tier span pattern:

  • Inbound span: one per received message/request (message type, size, sender ID)
  • Process span: business logic processing (operation type, affected resources)
  • Outbound span: one per response/forwarded message (recipient, payload size)

This gives visibility into where time is spent and enables per-hop latency analysis.

General Principles

  • Prefer tracing over log — structured spans enable distributed tracing
  • Log at the point of decision, not at every intermediate step
  • Include enough context to diagnose without reproducing: IDs, sizes, error details
  • Never log secrets, tokens, keys, or raw user data — even at trace! level
  • Use Display for user-facing context, Debug for developer diagnostics

FAQ & Installation Steps


Frequently Asked Questions

What is telemetry?

Telemetry is a skill that provides schema inference, frontmatter validation, and semantic search capabilities for markdown directories. It is ideal for AI agents that need advanced telemetry for markdown content, such as those leveraging Rust and Word2Vec.

How do I install telemetry?

Run the command: npx killer-skills add edochi/mdvs/telemetry. It works with Cursor, Windsurf, VS Code, Claude Code, and more than 19 IDEs and agent platforms in total.

What are the use cases for telemetry?

Key use cases include: Performing schema inference on markdown directories, Validating frontmatter for consistency and accuracy, Conducting semantic search for efficient content retrieval.

Which IDEs are compatible with telemetry?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for telemetry?

Requires dedicated telemetry/observability spec or conventions document. Limited to markdown directories. Dependent on technologies like Rust and Word2Vec.

How To Install

  1. Open your terminal

     Open the terminal or command line in your project directory.

  2. Run the install command

     Run: npx killer-skills add edochi/mdvs/telemetry. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

     The skill is now active. Your AI agent can use telemetry immediately in the current project.

Related Skills

Looking for an alternative to telemetry or another community skill for your workflow? Explore these related open-source skills.

  • widget-generator (by f): Generate customizable widget plugins for the prompts.chat feed system.

  • linear (by lobehub): Linear issue management. MUST USE when: (1) the user mentions LOBE-xxx issue IDs (e.g. LOBE-4540), (2) the user says linear, linear issue, or link linear, (3) creating PRs that reference Linear issues.

  • testing (by lobehub): Testing guide using Vitest. Use when writing tests (.test.ts, .test.tsx), fixing failing tests, improving test coverage, or debugging test issues. Triggers on test creation, test debugging, and mock setup.

  • zustand (by lobehub): Zustand state management guide. Use when working with store code (src/store/**), implementing actions, managing state, or creating slices. Triggers on Zustand store development, state management questions, or action implementation.