Killer-Skills Review
Decision support comes first. Repository text comes second.
This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.
Suited for AI agents that need advanced performance monitoring and LLM token usage analysis. View LLM token usage, API call timing, and runtime metrics. Use when the user asks about token consumption, API costs, or performance statistics.
Why use this skill
Lets an agent track key performance indicators such as LLM token usage and API call timing, and turns those metrics into useful insight into the agent's performance.
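To make the tracking described above concrete, here is a minimal sketch of recording per-call timing and token counts. The wrapper, the `CallMetric` fields, and the `fake_llm` stand-in are illustrative assumptions, not the skill's actual interface; real providers expose token counts in their response usage fields.

```python
import time
from dataclasses import dataclass

@dataclass
class CallMetric:
    """One recorded API call: timing plus token counts."""
    name: str
    duration_s: float
    prompt_tokens: int
    completion_tokens: int

def timed_call(name, fn, *args, **kwargs):
    """Run a call and record its duration and token usage.

    Assumes the callable returns (result, prompt_tokens, completion_tokens).
    """
    start = time.perf_counter()
    result, prompt_tokens, completion_tokens = fn(*args, **kwargs)
    metric = CallMetric(name, time.perf_counter() - start,
                        prompt_tokens, completion_tokens)
    return result, metric

# Hypothetical stand-in for a provider call: echoes the prompt and
# reports one "token" per whitespace-separated word.
def fake_llm(prompt):
    return f"echo: {prompt}", len(prompt.split()), 3

answer, metric = timed_call("fake_llm", fake_llm, "hello token counter")
print(metric.prompt_tokens + metric.completion_tokens)  # total tokens for the call
```

A collection of such records is exactly what the use cases below (cost analysis, latency debugging, summary statistics) operate on.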
Best for
Suited for AI agents that need advanced performance monitoring and LLM token usage analysis.
↓ Actionable use cases for metrics
! Security & Limitations
- Requires access to LLM token usage data
- Tracks only API call timing and token usage
- Requires a command-line interface to run
Why this page is reference-only
- Current locale does not satisfy the locale-governance contract.
- The underlying skill quality score is below the review floor.
Source Boundary
The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.
Decide The Next Action Before You Keep Reading Repository Material
Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.
Start With Installation And Validation
If this skill is worth continuing with, the next step is to confirm the install command, CLI write path, and environment validation.
Cross-Check Against Trusted Picks
If you are still comparing multiple skills or vendors, go back to the trusted collection before amplifying repository noise.
Move To Workflow Collections For Team Rollout
When the goal shifts from a single skill to team handoff, approvals, and repeatable execution, move into workflow collections.
Browser Sandbox Environment
⚡️ Ready to unleash?
Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.
FAQ & Installation Steps
These questions and steps mirror the structured data on this page for better search understanding.
? Frequently Asked Questions
What is metrics?
Suited for AI agents that need advanced performance monitoring and LLM token usage analysis. View LLM token usage, API call timing, and runtime metrics. Use when the user asks about token consumption, API costs, or performance statistics.
How do I install metrics?
Run the command: npx killer-skills add ZimoLiao/scholaraio. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.
What are the use cases for metrics?
Key use cases include: analyzing LLM token usage for cost optimization, debugging API call timing to troubleshoot performance issues, and generating summary statistics for agent performance evaluation.
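The use cases above amount to a small aggregation pass over per-call records. The sketch below assumes a simple log format (dicts with `duration_s`, `prompt_tokens`, `completion_tokens`) and a placeholder per-1k-token price; neither is part of the skill's actual output.

```python
def summarize(calls, usd_per_1k_tokens=0.002):
    """Aggregate per-call records into summary statistics.

    `calls` is a list of dicts with `duration_s`, `prompt_tokens`,
    and `completion_tokens` keys (an assumed log format).
    """
    total_tokens = sum(c["prompt_tokens"] + c["completion_tokens"] for c in calls)
    total_time = sum(c["duration_s"] for c in calls)
    return {
        "calls": len(calls),
        "total_tokens": total_tokens,
        "avg_latency_s": total_time / len(calls) if calls else 0.0,
        "est_cost_usd": total_tokens / 1000 * usd_per_1k_tokens,
    }

log = [
    {"duration_s": 0.8, "prompt_tokens": 120, "completion_tokens": 80},
    {"duration_s": 1.2, "prompt_tokens": 300, "completion_tokens": 100},
]
stats = summarize(log)
print(stats["total_tokens"], round(stats["est_cost_usd"], 4))  # 600 0.0012
```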
Which IDEs are compatible with metrics?
This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.
Are there any limitations for metrics?
Requires access to LLM token usage data. Tracks only API call timing and token usage. Requires a command-line interface to run.
↓ How To Install
1. Open your terminal
Open the terminal or command line in your project directory.
2. Run the install command
Run: npx killer-skills add ZimoLiao/scholaraio. The CLI will automatically detect your IDE or AI agent and configure the skill.
3. Start using the skill
The skill is now active. Your AI agent can use metrics immediately in the current project.
Upstream Repository Material
metrics
Install metrics, an AI agent skill for agent workflows and automation. Review the use cases, limitations, and setup path before rollout.