usage-report: a Claude Code skill from the MCP Gateway Registry (agentic-community/mcp-gateway-registry)

v1.1

About This Skill

Best for: AI agents that need a usage-report skill. Summary: Enterprise-ready MCP Gateway & Registry that centralizes AI development tools with secure OAuth authentication, dynamic tool discovery, and unified access for both autonomous AI agents and AI coding assistants. It covers a2a, agentic-ai, and agents workflows.

Capabilities

Export telemetry data from the MCP Gateway Registry's DocumentDB telemetry collector
Generate a usage report showing deployment patterns, version adoption, and feature usage in the wild
Transform scattered MCP server chaos into governed, auditable tool access
Use SSH key at `~/.ssh/id_ed25519` with access to the bastion host
Use Terraform state available in `terraform/telemetry-collector/` (to read bastion IP)

# Core Topics

agentic-community
Updated: 4/28/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 10/11

This page remains useful for teams, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
  • Quality floor passed for review

Review Score: 10/11
Quality Score: 82
Canonical Locale: en
Detected Body Locale: en


Why Use This Skill

Recommended description: usage-report helps agents export telemetry and generate usage reports. It is a useful tool for AI developers and administrators, and supports Claude Code, Cursor, and Windsurf workflows.

Best For

Best for: AI agents that need a usage-report skill.

Actionable Use Cases for usage-report

Use case: applying the Usage Report skill
Use case: using an SSH key at `~/.ssh/id_ed25519` with access to the bastion host
Use case: using Terraform state available in `terraform/telemetry-collector/` (to read the bastion IP)

Security and Limitations

  • Limitation: requires repository-specific context from the skill documentation
  • Limitation: works best when the underlying tools and dependencies are already configured

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.


FAQ & Installation Steps


Frequently Asked Questions

What is usage-report?

Best for: AI agents that need a usage-report skill. Summary: Enterprise-ready MCP Gateway & Registry that centralizes AI development tools with secure OAuth authentication, dynamic tool discovery, and unified access for both autonomous AI agents and AI coding assistants. It covers a2a, agentic-ai, and agents workflows.

How do I install usage-report?

Run the command: npx killer-skills add agentic-community/mcp-gateway-registry/usage-report. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for usage-report?

Key use cases include: applying the Usage Report skill, using an SSH key at `~/.ssh/id_ed25519` with access to the bastion host, and using Terraform state available in `terraform/telemetry-collector/` (to read the bastion IP).

Which IDEs are compatible with usage-report?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for usage-report?

Limitations: requires repository-specific context from the skill documentation, and works best when the underlying tools and dependencies are already configured.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add agentic-community/mcp-gateway-registry/usage-report. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use usage-report immediately in the current project.

Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material


Upstream Source

usage-report

Install usage-report, an AI agent skill for agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md

Usage Report Skill

Export telemetry data from the MCP Gateway Registry's DocumentDB telemetry collector and generate a usage report showing deployment patterns, version adoption, and feature usage in the wild.

Prerequisites

  1. SSH key at ~/.ssh/id_ed25519 with access to the bastion host
  2. Terraform state available in terraform/telemetry-collector/ (to read bastion IP)
  3. Bastion host enabled (bastion_enabled = true in terraform/telemetry-collector/terraform.tfvars)
  4. AWS credentials configured on the bastion host (for Secrets Manager access)
  5. GitHub CLI (gh) authenticated with read access to the upstream repo (agentic-community/mcp-gateway-registry) for collecting stars, forks, and contributor counts

Input

The skill accepts optional parameters:

/usage-report [OUTPUT_DIR]
  • OUTPUT_DIR - Base directory for reports (default: .scratchpad/usage-reports/)

If OUTPUT_DIR is not provided, save to .scratchpad/usage-reports/.

All artifacts for a given run are placed in a dated subfolder: OUTPUT_DIR/YYYY-MM-DD/. This keeps each report self-contained and avoids a flat directory of hundreds of files. Previous metrics and CSV files are discovered by scanning both the base directory and all dated subdirectories.
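The discovery logic described above can be sketched as follows. This is a hypothetical helper, not one of the skill's shipped scripts; the `metrics-YYYY-MM-DD.json` filename pattern comes from Step 6:

```python
import re
from pathlib import Path

DATED = re.compile(r"\d{4}-\d{2}-\d{2}")

def find_previous_metrics(base_dir, today):
    """Scan the base directory and all dated subfolders for metrics-*.json
    files and return the most recent one dated before `today`, or None."""
    candidates = []
    for path in Path(base_dir).rglob("metrics-*.json"):
        m = DATED.search(path.name)
        if m and m.group(0) < today:  # ISO dates sort lexicographically
            candidates.append((m.group(0), path))
    return max(candidates)[1] if candidates else None
```

Because ISO dates sort lexicographically, a plain string comparison is enough to order the runs.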

Workflow

Step 1: Get Bastion IP

```bash
cd terraform/telemetry-collector && terraform output -raw bastion_public_ip
```

If the output is "Bastion not enabled", tell the user to set bastion_enabled = true in terraform/telemetry-collector/terraform.tfvars and run terraform apply.

Step 2: Copy Export Script to Bastion

```bash
scp -o StrictHostKeyChecking=no -i ~/.ssh/id_ed25519 \
  terraform/telemetry-collector/bastion-scripts/telemetry_db.py \
  ec2-user@$BASTION_IP:~/telemetry_db.py
```

Step 3: Run Export on Bastion

```bash
ssh -o StrictHostKeyChecking=no -i ~/.ssh/id_ed25519 \
  ec2-user@$BASTION_IP \
  'python3 telemetry_db.py export --output /tmp/registry_metrics.csv 2>&1'
```

Capture the full output -- it contains the summary statistics printed by telemetry_db.py.

Step 4: Create Dated Subfolder and Download the CSV

Create a dated subfolder for this run's artifacts, then download the CSV into it:

```bash
DATE_DIR=OUTPUT_DIR/YYYY-MM-DD
mkdir -p $DATE_DIR

scp -o StrictHostKeyChecking=no -i ~/.ssh/id_ed25519 \
  ec2-user@$BASTION_IP:/tmp/registry_metrics.csv \
  $DATE_DIR/registry_metrics.csv
```

Step 5: Install Python Dependencies and Generate Charts

First, ensure matplotlib and seaborn are available on the system Python:

```bash
/usr/bin/python3 -c "import matplotlib, seaborn" 2>/dev/null || pip install --break-system-packages matplotlib seaborn
```

Then generate the instance-based deployment distribution chart (counts unique registry instances, not events):

```bash
/usr/bin/python3 .claude/skills/usage-report/generate_instance_distribution_chart.py \
  --csv $DATE_DIR/registry_metrics.csv \
  --output $DATE_DIR/instance-distribution-YYYY-MM-DD.png
```

This produces a single faceted PNG with 6 subplots: Cloud Provider, Compute Platform, Storage Backend, Auth Provider, Architecture, and Deployment Mode. Each subplot shows unique instance counts and percentages.
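The instance-based counting the chart relies on can be sketched as below. This is a simplified stand-in for the real chart script; the `registry_id` and dimension column names are assumptions about the CSV schema:

```python
import csv
from collections import defaultdict

def unique_instance_counts(csv_path, dimension):
    """Count unique registry instances (not events) per value of one
    deployment dimension, e.g. 'cloud_provider'."""
    instances = defaultdict(set)
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            rid = row.get("registry_id")
            if rid:  # skip events with a null/empty registry_id
                instances[row.get(dimension, "unknown")].add(rid)
    return {value: len(ids) for value, ids in instances.items()}
```

Counting unique `registry_id`s rather than rows is what keeps chatty instances from dominating the distribution.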

Step 5b: Generate Timeseries Chart

Generate a timeseries chart showing unique registry installs per cloud provider over time. This reads ALL CSV files in the base output directory and dated subdirectories to build a complete historical view:

```bash
/usr/bin/python3 .claude/skills/usage-report/generate_timeseries_chart.py \
  --csv-dir OUTPUT_DIR \
  --output $DATE_DIR/registry-installs-timeseries-YYYY-MM-DD.png
```

This produces a PNG with two subplots:

  • Cumulative Unique Registry Installs -- running total of unique registry_ids per cloud provider
  • Daily Active Registry Installs -- unique registry_ids seen each day per cloud provider
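The cumulative series can be sketched like this. It is a simplified stand-in for `generate_timeseries_chart.py`, working from already-parsed `(date, cloud, registry_id)` tuples rather than raw CSV rows:

```python
from collections import defaultdict

def cumulative_unique_installs(events):
    """Given (date, cloud_provider, registry_id) tuples, return
    {cloud: [(date, cumulative_unique_count), ...]} sorted by date."""
    by_day = defaultdict(lambda: defaultdict(set))
    for date, cloud, rid in events:
        by_day[cloud][date].add(rid)
    out = {}
    for cloud, days in by_day.items():
        seen, series = set(), []
        for date in sorted(days):
            seen |= days[date]          # running union of registry_ids
            series.append((date, len(seen)))
        out[cloud] = series
    return out
```

The daily-active subplot is the same loop without the running union: just `len(days[date])` per day.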

Step 5c: Generate Instance Lifetime Chart

Generate a density plot showing the distribution of instance lifetimes (age in days). This reads the metrics JSON produced by the analysis step, so it must run after Step 6. However, the SKILL.md lists it here for logical grouping with other charts:

```bash
/usr/bin/python3 .claude/skills/usage-report/generate_lifetime_chart.py \
  --metrics $DATE_DIR/metrics-YYYY-MM-DD.json \
  --output $DATE_DIR/instance-lifetime-YYYY-MM-DD.png
```

This produces a PNG with two panels:

  • Age Distribution -- histogram with KDE density overlay showing instance ages in days, with stats annotation (mean, max, multi-day vs single-day counts)
  • Age Buckets -- horizontal bar chart grouping instances into age ranges (0 days, 1-2 days, 3-5 days, etc.) with counts and percentages

Note: Run this after Step 6 (telemetry analysis) since it reads the metrics JSON.
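The age-bucket grouping can be sketched as below; the exact bucket edges used by `generate_lifetime_chart.py` are an assumption here:

```python
def bucket_ages(age_days_list):
    """Group instance ages (in days) into lifetime-chart-style buckets,
    returning {bucket: (count, percentage)}."""
    buckets = {"0 days": 0, "1-2 days": 0, "3-5 days": 0, "6+ days": 0}
    for age in age_days_list:
        if age == 0:
            buckets["0 days"] += 1
        elif age <= 2:
            buckets["1-2 days"] += 1
        elif age <= 5:
            buckets["3-5 days"] += 1
        else:
            buckets["6+ days"] += 1
    total = len(age_days_list)
    return {k: (v, round(100 * v / total, 1) if total else 0.0)
            for k, v in buckets.items()}
```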

Step 5d: Fetch GitHub Repository Stats

Collect community-growth signals for the upstream repo (agentic-community/mcp-gateway-registry) using the authenticated gh CLI. These numbers complement telemetry by showing project interest outside of deployed instances.

```bash
# Star, fork, watcher, open-issue counts (single API call)
gh api repos/agentic-community/mcp-gateway-registry \
  --jq '{stars: .stargazers_count, forks: .forks_count, watchers: .subscribers_count, open_issues: .open_issues_count}' \
  > $DATE_DIR/github_stats.json

# Unique contributors (paginate through all pages, count unique logins)
gh api --paginate repos/agentic-community/mcp-gateway-registry/contributors \
  --jq '.[].login' | sort -u | wc -l > $DATE_DIR/github_contributors_count.txt
```

Record these numbers in the report and compare them against the previous report's github_stats.json (if present in the previous dated subfolder). Compute deltas for stars, forks, and contributors the same way telemetry metrics are compared.

Note: If gh is not authenticated or the API call fails, skip the GitHub section in the report and log a short note instead of failing the entire run.
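The delta computation against the previous report's `github_stats.json` can be sketched as follows. This is a hypothetical helper, not one of the skill's shipped scripts; the JSON keys match the `--jq` filter above:

```python
import json
from pathlib import Path

def github_deltas(current_path, previous_path):
    """Compare two github_stats.json snapshots.

    Returns {metric: (previous, current, delta)}. When there is no
    previous snapshot (first run), previous and delta are None.
    """
    current = json.loads(Path(current_path).read_text())
    if previous_path is None or not Path(previous_path).exists():
        return {k: (None, v, None) for k, v in current.items()}
    previous = json.loads(Path(previous_path).read_text())
    return {
        k: (previous.get(k), v, v - previous[k] if k in previous else None)
        for k, v in current.items()
    }
```

The same (previous, current, delta) triples can feed the GitHub Repository table in the report directly.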

Step 6: Run Telemetry Analysis

Run the analysis script to compute all distributions, instance timelines, and metrics. This produces two files:

  • tables-YYYY-MM-DD.md -- pre-formatted markdown tables ready to embed in the report (with executive summary comparison at the top)
  • metrics-YYYY-MM-DD.json -- raw computed metrics as JSON (includes per_cloud_unique_installs)

The script automatically finds the most recent previous metrics-*.json file. Since output files are written to the dated subfolder ($DATE_DIR) but previous metrics live in sibling dated subfolders, you must pass --search-dir OUTPUT_DIR so the script searches the parent directory containing all dated subfolders:

```bash
INTERNAL_INSTANCES_FILE=".claude/skills/usage-report/known-internal-instances.md"
INTERNAL_FLAG=""
if [ -f "$INTERNAL_INSTANCES_FILE" ]; then
  INTERNAL_FLAG="--internal-instances $INTERNAL_INSTANCES_FILE"
fi

/usr/bin/python3 .claude/skills/usage-report/analyze_telemetry.py \
  --csv $DATE_DIR/registry_metrics.csv \
  --output-dir $DATE_DIR \
  --search-dir OUTPUT_DIR \
  --date YYYY-MM-DD \
  $INTERNAL_FLAG
```
  • --output-dir $DATE_DIR -- where to write tables-*.md and metrics-*.json
  • --search-dir OUTPUT_DIR -- where to search for previous metrics-*.json files (scans this directory and all subdirectories). If omitted, defaults to the parent of --output-dir.
  • --internal-instances -- path to known-internal-instances.md listing known internal registry instance IDs. When provided, internal instances are labeled "(internal)" in the Instance Lifetime and Identified Instances tables, a Most Active Instances table is generated with an Internal column, and stickiness metrics (3+ day non-internal count, longest-running non-internal instance) are computed and included in the JSON output.

Or with an explicit previous metrics file (skips auto-detection):

```bash
/usr/bin/python3 .claude/skills/usage-report/analyze_telemetry.py \
  --csv $DATE_DIR/registry_metrics.csv \
  --output-dir $DATE_DIR \
  --date YYYY-MM-DD \
  --previous-metrics OUTPUT_DIR/PREVIOUS-DATE/metrics-PREVIOUS-DATE.json \
  $INTERNAL_FLAG
```

Step 6b: Identify Internal vs Customer Instances

The --internal-instances flag passed in Step 6 handles internal instance identification automatically. The analysis script reads .claude/skills/usage-report/known-internal-instances.md (if it exists, since it is gitignored and may not be present on all machines) and:

  1. Labels internal instances with "(internal)" in the Instance Lifetime and Identified Instances tables
  2. Generates a "Most Active Instances" table ranked by activity score (max servers + agents + skills + search), with an Internal column and a Version column
  3. Computes stickiness metrics (3+ day non-internal count, longest-running non-internal) and writes them to the JSON output under the stickiness key
  4. Writes the list of internal instance IDs to the JSON output under internal_instance_ids

If the file does not exist, the script treats all instances as external (no internal labeling, stickiness counts all instances).

When writing the report:

  1. Clearly label known internal instances in the Instance Lifetime table and Registry Instances table (e.g., add "(internal)" suffix or a dedicated column)
  2. Separate metrics: Report total fleet numbers AND customer-only numbers (excluding internal instances). For example: "97 total instances (3 known internal + possibly more, ~94 potential customer instances)"
  3. Flag unusual activity from internal instances: If internal instances show disproportionate activity (e.g., many registered servers/agents/skills, heavy search usage, frequent restarts/heartbeats), explicitly note this is internal testing activity and NOT indicative of customer usage patterns
  4. Note that additional internal instances may exist beyond the known list -- short-lived CI/CD runs, developer local setups, etc. may not be in the known list

The known internal instances are typically the longest-running, highest-activity instances since they are always-on development environments.
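The total/internal/customer split described above can be sketched as follows. This is a hypothetical helper; the real analysis script reads the IDs from `known-internal-instances.md`:

```python
def split_fleet(instance_ids, known_internal):
    """Split the fleet into known-internal and potential-customer counts.

    Instances not on the known-internal list may still be internal
    (CI runs, developer laptops), so the customer figure is an upper bound.
    """
    all_ids = set(instance_ids)
    internal = all_ids & set(known_internal)
    return {
        "total": len(all_ids),
        "known_internal": len(internal),
        "potential_customer": len(all_ids - internal),
    }
```

This matches the reporting convention above, e.g. "97 total instances (3 known internal + possibly more, ~94 potential customer instances)".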

Step 7: Generate the Usage Report

Read the generated tables-YYYY-MM-DD.md and include its tables directly in the report. Add narrative sections (Executive Summary, Architecture Patterns, Recommendations) around the data tables. The tables file contains:

  • Key Metrics table
  • Registry Instance Lifetime table (age in days, sorted descending, internal instances labeled)
  • Identified and Unidentified instance tables (internal instances labeled)
  • Cloud, Compute, Architecture, Storage, Auth distribution tables
  • Version Adoption table
  • Feature Adoption table
  • Search Usage table
  • Sticky Instance Breakdown table (one row per cloud/compute/storage/auth profile, with count, percentage, and change vs previous)
  • Most Active Instances table (top 10 by activity score, with Version and Internal columns)
  • Per-instance daily timelines (with servers, agents, skills, search queries)

Report Structure

The main body focuses on insights and charts. Detailed event-count distribution tables are moved to an appendix.

```markdown
# AI Registry -- Usage Report

*Report Date: YYYY-MM-DD*
*Data Source: Telemetry Collector (DocumentDB)*
*Collection Period: [earliest ts] to [latest ts]*

---

## Executive Summary
Lead with new installs since last report, total unique installs, dominant cloud/compute/IdP, growth trends. Also include the current GitHub star count (with delta vs previous report) as a top-line community signal. Include timeseries chart.

Include an **instance stickiness** line: "N instances have been running for 3+ days (up/down from M in the previous report). The longest-running non-internal instance is `REGISTRY_ID` at D days (previously P days)." This signals real adoption beyond one-time trials.

These numbers are pre-computed by `analyze_telemetry.py` (when `--internal-instances` is provided) and available in `metrics-YYYY-MM-DD.json` under the `stickiness` key:
- `stickiness.sticky_3plus_days`: count of non-internal instances where age_days >= 3
- `stickiness.longest_non_internal_id`: registry_id with max age_days (filtered)
- `stickiness.longest_non_internal_days`: max age_days value

Compare both numbers against the same `stickiness` values from the previous report's `metrics-*.json`.

![Registry Installs Timeseries](registry-installs-timeseries-YYYY-MM-DD.png)

### Comparison with Previous Report
- Deltas for total events, unique instances, heartbeat events, null registry_id count
- Per-cloud-provider unique registry installs comparison table
- GitHub stars delta (and forks/contributors if notable)
- Customer instances running 3+ days: current vs previous count
- Longest-running non-internal instance: current age vs previous age

## Deployment Distribution (by Unique Instances)
![Instance Distribution](instance-distribution-YYYY-MM-DD.png)

## Key Metrics
| Metric | Value |
|--------|-------|
| Total Events | N |
| Unique Registry Instances | N |
| Known Internal Instances | 3 (+ possibly more) |
| Potential Customer Instances | N - internal |
| ... | ... |

## Internal Instances (Development/Testing)
List the known internal instances from known-internal-instances.md.
Note their disproportionate activity (high search, many servers/agents/skills, long uptime).
Clearly state: "Activity from these instances reflects internal testing and should not be interpreted as customer usage patterns."
Flag any unusual spikes (e.g., restart storms, heavy search bursts) with context.

## Registry Instance Lifetime
Commentary on average/max lifetime, multi-day vs single-day.
Density chart and top-10 table by age. Mark internal instances.

## Version Adoption
Table of version strings with event counts. Notes on release vs dev versions.

## Feature Adoption
Federation, gateway mode, heartbeat rates.

## Search Usage
Total queries (deduplicated), average per instance, max from single instance.

## Heartbeat Metrics
Server/agent/skill counts, uptime, search backend, embeddings provider.

## Sticky Instance Breakdown (3+ Days)
This section is pre-generated by `analyze_telemetry.py` in the `tables-YYYY-MM-DD.md` file. It shows a single table of unique non-internal registry instances with age >= 3 days, grouped by deployment profile (cloud, compute, storage, auth combination). Each row shows the instance count, percentage, and change vs the previous report. The profile counts are also saved in the metrics JSON as `sticky_profiles` so future reports can compute deltas.

Add a short narrative highlighting the top deployment profiles and any notable shifts in the mix.

## Most Active Instances (by Feature Usage)
This table is pre-generated by `analyze_telemetry.py` in the `tables-YYYY-MM-DD.md` file. It shows the top 10 instances ranked by activity score (max servers + agents + skills + lifetime search queries), with columns: Rank, Registry ID, Cloud/Compute/Auth, Version, Servers, Agents, Skills, Search, Total, Internal.

Add a short narrative below the table highlighting distinct usage patterns among the top customer instances (e.g., full-featured vs search-only vs skills-catalog).

## GitHub Repository
Community-growth signals for `agentic-community/mcp-gateway-registry` pulled via the `gh` CLI in Step 5d. Include a table with current values and deltas vs the previous report:

| Metric | Previous | Current | Change |
|--------|----------|---------|--------|
| Stars | N | N | +N |
| Forks | N | N | +N |
| Contributors | N | N | +N |

Add a short narrative: direction of growth (stars/week trend), any notable jumps (e.g., post-launch spike, blog-post-driven traffic), and whether contributor count is broadening (new external contributors) or concentrating.

## Architecture Patterns Observed
3-5 distinct deployment patterns from the data.

## Recommendations
3-5 actionable insights based on the data.

## Appendix: Raw Distribution Tables
Event-count-based distribution tables for cloud, compute, architecture, storage, and auth.
These provide the raw numbers behind the instance-based chart above.
```

Save the report to $DATE_DIR/ai-registry-usage-report-YYYY-MM-DD.md.

Step 8: Generate Self-Contained HTML

Convert the markdown report to a single self-contained HTML file using pandoc. The chart PNG is base64-embedded so the HTML works standalone. Run from the DATE_DIR so relative image paths resolve:

```bash
cd $DATE_DIR && pandoc ai-registry-usage-report-YYYY-MM-DD.md \
  -o ai-registry-usage-report-YYYY-MM-DD.html \
  --embed-resources --standalone \
  --css=.claude/skills/usage-report/report-style.css \
  --metadata title="AI Registry - Usage Report YYYY-MM-DD"
```

The report-style.css file in the skill directory provides a clean, professional layout. Pandoc must be installed:

```bash
which pandoc >/dev/null || sudo apt-get install -y pandoc
```

Step 9: Present Results

After generating the report:

  1. Display the Executive Summary (with comparison deltas, including GitHub stars delta) and Key Metrics directly in the conversation
  2. Tell the user the full report path, HTML path, CSV path, and chart paths
  3. Highlight the most interesting findings and notable changes from the previous report (telemetry + GitHub)

Error Handling

  • SSH connection fails: Check that the bastion IP is correct and security group allows your IP. The allowed CIDRs are in terraform/telemetry-collector/terraform.tfvars under bastion_allowed_cidrs.
  • Export returns 0 documents: The telemetry collector may not have received any events yet. Check that telemetry_enabled is true in registry settings and the collector endpoint is reachable.
  • Terraform output fails: Make sure you're in the right directory and have run terraform init.

Example Usage

User: /usage-report

Output:

Executive Summary: 1074 events from 97 unique registry instances over 21 days...
Compared to previous report (2026-04-16): +327 events (+44%), +5 new instances (+5%)

Full report: .scratchpad/usage-reports/2026-04-18/ai-registry-usage-report-2026-04-18.md
HTML report: .scratchpad/usage-reports/2026-04-18/ai-registry-usage-report-2026-04-18.html
Charts:
  - .scratchpad/usage-reports/2026-04-18/instance-distribution-2026-04-18.png
  - .scratchpad/usage-reports/2026-04-18/registry-installs-timeseries-2026-04-18.png
  - .scratchpad/usage-reports/2026-04-18/instance-lifetime-2026-04-18.png
CSV data: .scratchpad/usage-reports/2026-04-18/registry_metrics.csv

User: /usage-report /tmp/reports

Output saved to /tmp/reports/2026-04-18/.

Output Directory Structure

.scratchpad/usage-reports/
  2026-04-16/
    ai-registry-usage-report-2026-04-16.md
    ai-registry-usage-report-2026-04-16.html
    instance-distribution-2026-04-16.png
    registry-installs-timeseries-2026-04-16.png
    instance-lifetime-2026-04-16.png
    tables-2026-04-16.md
    metrics-2026-04-16.json
    registry_metrics.csv
  2026-04-18/
    ai-registry-usage-report-2026-04-18.md
    ai-registry-usage-report-2026-04-18.html
    github_stats.json
    github_contributors_count.txt
    ...

Related Skills

Looking for an alternative to usage-report or another community skill for your workflow? Explore these related open-source skills.

View All

openclaw-release-maintainer

openclaw

Summary: 🦞 OpenClaw Release Maintainer. Use this skill for release and publish-time workflows. It covers ai, assistant, and crustacean workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

333.8k
Artificial Intelligence

widget-generator

f

Summary: Generate customizable widget plugins for the prompts.chat feed system. This skill guides creation of widget plugins for prompts.chat. It covers ai, artificial-intelligence, and awesome-list workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

149.6k
Artificial Intelligence

flags

vercel

Summary: Feature Flags (from Next.js, the React Framework). Use this skill when adding or changing framework feature flags in Next.js internals. It covers blog, browser, and compiler workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

138.4k
Browser

pr-review

pytorch

Summary: Usage modes: if the user invokes /pr-review with no arguments, do not perform a review. It covers autograd, deep-learning, and gpu workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

98.6k
Developer