Killer-Skills Review
Decision support comes first. Repository text comes second.
This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.
Ideal for AI agents that need advanced performance monitoring and LLM token usage analysis. View LLM token usage, API call timing, and runtime metrics. Use when the user asks about token consumption, API costs, or performance statistics.
Why use this skill
Enables agents to track key performance indicators such as LLM token usage and API call timing, providing valuable insight into agent performance from those metrics.
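As an illustration of the kind of tracking described above, here is a minimal sketch of a per-call token and timing recorder. The class and method names are hypothetical and are not part of this skill's actual API.

```python
class AgentMetrics:
    """Hypothetical sketch: record token counts and call timing, then summarize."""

    def __init__(self):
        self.calls = []

    def record(self, prompt_tokens, completion_tokens, duration_s):
        # Store one API call's token counts and wall-clock duration.
        self.calls.append({
            "prompt_tokens": prompt_tokens,
            "completion_tokens": completion_tokens,
            "duration_s": duration_s,
        })

    def summary(self):
        # Aggregate totals across all recorded calls.
        total_tokens = sum(
            c["prompt_tokens"] + c["completion_tokens"] for c in self.calls
        )
        total_time = sum(c["duration_s"] for c in self.calls)
        return {
            "calls": len(self.calls),
            "total_tokens": total_tokens,
            "total_time_s": total_time,
        }


metrics = AgentMetrics()
metrics.record(prompt_tokens=120, completion_tokens=45, duration_s=1.8)
metrics.record(prompt_tokens=300, completion_tokens=90, duration_s=2.4)
print(metrics.summary())
```

A real implementation would hook into the agent's API client rather than taking manual `record` calls, but the aggregation step is the same idea.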
Recommendation
Ideal for AI agents that need advanced performance monitoring and LLM token usage analysis.
↓ Practical use cases for metrics
! Security and Limitations
- Requires access to LLM token usage data
- Tracks only API call timing and token usage
- Requires a command-line interface to run
Why this page is reference-only
- Current locale does not satisfy the locale-governance contract.
- The underlying skill quality score is below the review floor.
Source Boundary
The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.
Decide The Next Action Before You Keep Reading Repository Material
Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.
Start With Installation And Validation
If this skill is worth continuing with, the next step is to confirm the install command, CLI write path, and environment validation.
Cross-Check Against Trusted Picks
If you are still comparing multiple skills or vendors, go back to the trusted collection before amplifying repository noise.
Move To Workflow Collections For Team Rollout
When the goal shifts from a single skill to team handoff, approvals, and repeatable execution, move into workflow collections.
Browser Sandbox Environment
⚡️ Ready to unleash?
Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.
FAQ & Installation Steps
These questions and steps mirror the structured data on this page for better search understanding.
? Frequently Asked Questions
What is metrics?
Ideal for AI agents that need advanced performance monitoring and LLM token usage analysis. View LLM token usage, API call timing, and runtime metrics. Use when the user asks about token consumption, API costs, or performance statistics.
How do I install metrics?
Run the command: npx killer-skills add ZimoLiao/scholaraio. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.
What are the use cases for metrics?
Key use cases include: analyzing LLM token usage for cost optimization, debugging API call timing to resolve performance issues, and generating summary statistics for agent performance evaluation.
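To make the cost-optimization and summary-statistics use cases concrete, here is a small sketch that turns collected token counts and latencies into an estimated cost and latency statistics. The token counts, latencies, and per-token price are illustrative placeholders, not data produced by this skill.

```python
from statistics import mean, median

# Hypothetical per-call token counts and API latencies collected from an agent.
token_counts = [550, 1200, 430, 2100, 760]
latencies_s = [1.2, 3.4, 0.9, 5.1, 1.8]

# Illustrative price only; real per-token pricing varies by provider and model.
PRICE_PER_1K_TOKENS = 0.002

total_tokens = sum(token_counts)
estimated_cost = total_tokens / 1000 * PRICE_PER_1K_TOKENS

print(f"calls: {len(token_counts)}")
print(f"total tokens: {total_tokens}")
print(f"estimated cost: ${estimated_cost:.4f}")
print(f"latency mean/median: {mean(latencies_s):.2f}s / {median(latencies_s):.2f}s")
```

Summary statistics like these are typically enough to spot which calls dominate spend or latency before digging into individual traces.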
Which IDEs are compatible with metrics?
This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.
Are there any limitations for metrics?
Requires access to LLM token usage data. Tracks only API call timing and token usage. Requires a command-line interface to run.
↓ How To Install
1. Open your terminal
Open the terminal or command line in your project directory.
2. Run the install command
Run: npx killer-skills add ZimoLiao/scholaraio. The CLI will automatically detect your IDE or AI agent and configure the skill.
3. Start using the skill
The skill is now active. Your AI agent can use metrics immediately in the current project.
! Reference-Only Mode
This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.
Upstream Repository Material
metrics
Install metrics, an AI agent skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.