observability-guidelines

v1.0.0

About this Skill

Observability-guidelines is a set of principles for ensuring comprehensive visibility into distributed systems and microservices, promoting modular design and test-driven development. It is aimed at distributed-system agents that need advanced observability and monitoring capabilities with OpenTelemetry integration.

Features

Applies core observability principles for idiomatic, maintainable, and high-performance code
Enforces modular design and separation of concerns through Clean Architecture
Promotes test-driven development and robust observability from the start
Integrates OpenTelemetry for distributed tracing, metrics, and logging
Guides the design of built-in observability for comprehensive system insight

mvappshub
Updated: 3/9/2026
Installation
Universal install (auto-detect) for Cursor, Windsurf, and VS Code:
> npx killer-skills add mvappshub/chalange/observability-guidelines

Agent Capability Analysis

The observability-guidelines MCP Server by mvappshub is an open-source community integration for Claude and other AI agents, enabling task automation and capability expansion.

Ideal Agent Persona

Perfect for Distributed System Agents needing advanced observability and monitoring capabilities with OpenTelemetry integration.

Core Value

Empowers agents to ensure comprehensive visibility into distributed systems and microservices by applying core observability principles and integrating OpenTelemetry, promoting modular design and separation of concerns through Clean Architecture.

Capabilities Granted for observability-guidelines MCP Server

Implementing test-driven development with robust observability
Enforcing modular design for complex systems
Integrating OpenTelemetry for distributed tracing

Prerequisites & Limits

  • Requires OpenTelemetry compatibility
  • Clean Architecture implementation necessary
Project

  • SKILL.md (3.4 KB)
  • .cursorrules (1.2 KB)
  • package.json (240 B)

SKILL.md

Observability Guidelines

Apply these observability principles to ensure comprehensive visibility into distributed systems and microservices.

Core Observability Principles

  • Guide the development of idiomatic, maintainable, and high-performance code with built-in observability
  • Enforce modular design and separation of concerns through Clean Architecture
  • Promote test-driven development and robust observability from the start

OpenTelemetry Integration

  • Use OpenTelemetry for distributed tracing, metrics, and structured logging
  • Start and propagate tracing spans across all service boundaries
  • Use otel.Tracer for creating spans and otel.Meter for collecting metrics
  • Export data to OpenTelemetry Collector, Jaeger, or Prometheus
  • Configure appropriate sampling rates for production environments
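As a sketch of the export and sampling points above, an OpenTelemetry Collector pipeline might be configured as follows; the endpoints and the 10% sampling rate are illustrative placeholders, not recommendations:

```yaml
receivers:
  otlp:
    protocols:
      grpc:                       # services export OTLP over gRPC

processors:
  probabilistic_sampler:
    sampling_percentage: 10       # placeholder production sampling rate
  batch:                          # batch telemetry before export

exporters:
  otlp/jaeger:
    endpoint: jaeger:4317         # placeholder Jaeger OTLP endpoint
  prometheus:
    endpoint: "0.0.0.0:8889"      # scrape target for Prometheus

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [probabilistic_sampler, batch]
      exporters: [otlp/jaeger]
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [prometheus]
```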

Distributed Tracing

  • Trace all incoming requests and propagate context through internal calls
  • Use middleware to instrument HTTP and gRPC endpoints automatically
  • Include trace context in all downstream service calls
  • Create child spans for significant operations within a service
  • Add relevant attributes to spans for debugging and analysis
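To make context propagation concrete, here is a minimal stdlib-only Python sketch of the W3C traceparent header that OpenTelemetry propagators carry across service boundaries; the function names are illustrative, and a real service would use the SDK's propagators rather than hand-rolling this:

```python
import re
import secrets

# version-traceid-spanid-flags, all lowercase hex (W3C Trace Context format)
TRACEPARENT_RE = re.compile(r"^00-[0-9a-f]{32}-[0-9a-f]{16}-[0-9a-f]{2}$")

def make_traceparent() -> str:
    """Start a new trace at the system edge: random trace and span ids."""
    return f"00-{secrets.token_hex(16)}-{secrets.token_hex(8)}-01"

def propagate(incoming: str) -> str:
    """Cross a service boundary: keep the trace id, mint a child span id."""
    version, trace_id, _parent_span, flags = incoming.split("-")
    return f"{version}-{trace_id}-{secrets.token_hex(8)}-{flags}"
```

Every downstream call carries the same trace id with a fresh span id, which is what lets a tracing backend stitch the request back together.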

Metrics Collection

Monitor these key metrics across all services:

  • Request latency: Track p50, p90, p95, and p99 percentiles
  • Throughput: Measure requests per second by endpoint
  • Error rate: Track 4xx and 5xx responses separately
  • Resource usage: Monitor CPU, memory, disk, and network utilization
  • Custom business metrics: Track domain-specific KPIs
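The percentile targets above can be computed directly from raw latency samples; this stdlib sketch uses Python's statistics.quantiles (in production a metrics backend such as Prometheus would aggregate histograms instead):

```python
from statistics import quantiles

def latency_percentiles(samples: list[float]) -> dict[str, float]:
    """Compute p50/p90/p95/p99 from raw latency samples (e.g. milliseconds)."""
    cuts = quantiles(samples, n=100, method="inclusive")  # 99 cut points
    return {"p50": cuts[49], "p90": cuts[89], "p95": cuts[94], "p99": cuts[98]}
```

Tracking the tail (p95/p99) alongside the median matters because a healthy p50 can hide severe latency for a minority of requests.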

Structured Logging

  • Include unique request IDs and trace context in all logs for correlation
  • Use structured logging formats (JSON) for machine parseability
  • Include relevant context: timestamp, service name, trace ID, span ID
  • Log at appropriate levels: DEBUG, INFO, WARN, ERROR
  • Avoid logging sensitive information (PII, credentials)
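A minimal sketch of such a structured logger using only the Python standard library; the service name is a hypothetical placeholder, and the trace/span fields are supplied per call via the logging `extra` mechanism:

```python
import json
import logging

class JsonFormatter(logging.Formatter):
    """Render each log record as one JSON object for machine parseability."""

    SERVICE = "checkout"  # hypothetical service name

    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "timestamp": self.formatTime(record),
            "level": record.levelname,
            "service": self.SERVICE,
            "message": record.getMessage(),
            # Correlation fields, populated via extra={...} at the call site
            "trace_id": getattr(record, "trace_id", None),
            "span_id": getattr(record, "span_id", None),
        }
        return json.dumps(entry)
```

Attach it with handler.setFormatter(JsonFormatter()) and log with extra={"trace_id": ..., "span_id": ...} so every line carries its correlation context.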

Architecture Patterns

  • Apply Clean Architecture with handlers, services, repositories, and domain models
  • Use domain-driven design principles for clear boundaries
  • Prioritize interface-driven development with explicit dependency injection
  • Prefer composition over inheritance; favor small, purpose-specific interfaces
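A compact sketch of interface-driven dependency injection in the Clean Architecture spirit; the domain names (orders, tax rate) are invented for illustration:

```python
from typing import Protocol

class OrderRepository(Protocol):
    """Small, purpose-specific interface the service layer depends on."""
    def get_total(self, order_id: str) -> float: ...

class OrderService:
    """Domain logic depends on the interface, never a concrete database."""
    def __init__(self, repo: OrderRepository) -> None:
        self.repo = repo

    def total_with_tax(self, order_id: str, rate: float = 0.2) -> float:
        return round(self.repo.get_total(order_id) * (1 + rate), 2)

class InMemoryOrderRepository:
    """Concrete adapter; swapped for a real database in production."""
    def __init__(self, totals: dict[str, float]) -> None:
        self.totals = totals

    def get_total(self, order_id: str) -> float:
        return self.totals[order_id]
```

Because the service only sees the interface, tests inject the in-memory adapter while production wires in a database-backed one, without touching domain code.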

Correlation and Context

  • Propagate context through the entire request lifecycle
  • Use correlation IDs for request tracking across services
  • Include service version and deployment information in telemetry
  • Tag traces with relevant business context for filtering
  • Enable trace-to-log and log-to-trace correlation

Alerting and Dashboards

  • Create dashboards for service health and business metrics
  • Set up alerts based on SLOs and error budgets
  • Use anomaly detection for proactive issue identification
  • Document runbooks for common alert scenarios
  • Review and tune alerts regularly to reduce noise
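The SLO-based alerting above reduces to simple error-budget arithmetic; this sketch assumes a request-count SLO over a fixed window and an SLO strictly below 100%:

```python
def error_budget_remaining(slo: float, total_requests: int, errors: int) -> float:
    """Fraction of the window's error budget still unspent (negative = blown)."""
    allowed_failures = (1 - slo) * total_requests
    return 1 - errors / allowed_failures

def should_alert(slo: float, total_requests: int, errors: int,
                 burn_threshold: float = 1.0) -> bool:
    """Alert when errors consume budget faster than the sustainable rate."""
    burn_rate = (errors / total_requests) / (1 - slo)
    return burn_rate > burn_threshold
```

Alerting on burn rate rather than raw error counts ties pages to the user-facing objective: a brief error spike that spends little budget stays quiet, while sustained burn pages early.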

Instrumentation Best Practices

  • Instrument at service boundaries (entry/exit points)
  • Add custom spans for database operations and external calls
  • Include relevant attributes (user ID, request type, etc.)
  • Avoid over-instrumentation that creates noise
  • Use semantic conventions for consistent attribute naming
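As a stdlib-only illustration of boundary instrumentation, this decorator records a span-like event per call; the attribute key follows the OpenTelemetry semantic convention http.request.method, while the in-memory SPANS list stands in for a real exporter:

```python
import functools
import time

def instrument(span_name: str, **attributes):
    """Record timing and attributes for a function at a service boundary."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                SPANS.append({
                    "name": span_name,
                    "duration_ms": (time.perf_counter() - start) * 1000,
                    **attributes,  # semantic-convention attribute names
                })
        return wrapper
    return decorator

SPANS: list[dict] = []  # stand-in for a span exporter

@instrument("GET /orders", **{"http.request.method": "GET"})
def list_orders():
    return ["o1", "o2"]
```

Instrumenting only at entry/exit points like this keeps the signal high; per-loop-iteration spans are the kind of over-instrumentation the guideline warns against.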

Production Considerations

  • Configure appropriate sampling rates to balance visibility and cost
  • Use head-based sampling for consistent trace capture
  • Implement tail-based sampling for capturing errors
  • Set retention policies based on debugging needs
  • Monitor observability infrastructure health
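A sketch of the tail-based sampling decision described above, assuming the full trace is buffered before the keep/drop choice; hashing the trace id instead of drawing a random number keeps the decision consistent for every span of a trace:

```python
import hashlib

def keep_trace(trace_id: str, has_error: bool, sample_percent: int = 10) -> bool:
    """Tail decision: always keep error traces, hash-sample the rest."""
    if has_error:
        return True  # errors are captured regardless of the sampling rate
    # Deterministic bucket in [0, 100) derived from the trace id.
    bucket = int(hashlib.sha256(trace_id.encode()).hexdigest(), 16) % 100
    return bucket < sample_percent
```

The same hashing trick implements head-based sampling when applied at trace start, which is what makes the two strategies composable: head sampling caps volume, tail sampling rescues the interesting traces.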

Related Skills

Looking for an alternative to observability-guidelines, or building a community AI agent? Explore these related open-source MCP Servers.


widget-generator

by f

widget-generator is an open-source AI agent skill for creating widget plugins that are injected into prompt feeds on prompts.chat. It supports two rendering modes: standard prompt widgets using default PromptCard styling and custom render widgets built as full React components.

149.6k · Design

testing

by lobehub

The testing skill verifies AI agent functionality with commands like bunx vitest run and speeds up workflows with targeted test runs.

73.3k · Communication

chat-sdk

by lobehub

chat-sdk is a unified TypeScript SDK for building chat bots across multiple platforms, providing a single interface for deploying bot logic.

73.0k · Communication

zustand

by lobehub

zustand is a small, fast state-management library for React applications.

72.8k · Communication