xrk-llm — for Claude Code xrk-llm, XRK-AGT, community, for Claude Code, ide skills, *_llm.yaml, *_compat_llm.yaml, commonconfig, openai, azure_openai

v1.0.0

About this Skill

Recommended scenario: Ideal for AI agents that need the factory at src/factory/llm/LLMFactory.js. Localized summary: xrk-llm helps AI agents handle repository-specific developer workflows with documented implementation details.

Features

Factory: src/factory/llm/LLMFactory.js
OpenAI Chat protocol utilities: src/utils/llm/openai-chat-utils.js
v3 gateway: core/system-Core/http/ai.js (POST /api/v3/chat/completions)
Provider selection rules (external contract)
The model field in a v3 request body takes a provider key (e.g. openai, azure_openai, ollama-local).

Core Topics

sunflowermm
Updated: 4/29/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 8/11

This page remains useful for teams, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution
Review Score: 8/11
Quality Score: 49
Canonical Locale: zh
Detected Body Locale: zh


Why use this skill?

Recommendation: xrk-llm helps agents apply the factory at src/factory/llm/LLMFactory.js. xrk-llm helps AI agents handle repository-specific developer workflows with documented implementation details.

Best for

Recommended scenario: Ideal for AI agents that need the factory at src/factory/llm/LLMFactory.js.

Actionable use cases for xrk-llm

Use case: Applying the factory at src/factory/llm/LLMFactory.js
Use case: Applying the OpenAI Chat protocol utilities at src/utils/llm/openai-chat-utils.js
Use case: Applying the v3 gateway at core/system-Core/http/ai.js (POST /api/v3/chat/completions)

! Security and limitations

  • Limitation: Requires repository-specific context from the skill documentation
  • Limitation: Works best when the underlying tools and dependencies are already configured

Why this page is reference-only

  • The current locale does not satisfy the locale-governance contract.
  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

Killer-Skills should not stop at opening repository instructions. It should help you decide whether to install this skill, when to cross-check against trusted collections, and when to move into workflow rollout.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is xrk-llm?

Recommended scenario: Ideal for AI agents that need the factory at src/factory/llm/LLMFactory.js. Localized summary: xrk-llm helps AI agents handle repository-specific developer workflows with documented implementation details.

How do I install xrk-llm?

Run the command: npx killer-skills add sunflowermm/XRK-AGT/xrk-llm. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for xrk-llm?

Key use cases include: applying the factory at src/factory/llm/LLMFactory.js, applying the OpenAI Chat protocol utilities at src/utils/llm/openai-chat-utils.js, and applying the v3 gateway at core/system-Core/http/ai.js (POST /api/v3/chat/completions).

Which IDEs are compatible with xrk-llm?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for xrk-llm?

Limitation: Requires repository-specific context from the skill documentation. Limitation: Works best when the underlying tools and dependencies are already configured.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add sunflowermm/XRK-AGT/xrk-llm. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use xrk-llm immediately in the current project.

! Reference-Only Mode

This page remains useful for installation and reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review above before relying on the upstream repository instructions.

Upstream Repository Material


Upstream Source

xrk-llm

Install xrk-llm, an AI agent skill for AI agent workflows and automation. Review the use cases, limitations, and setup path before rollout.

SKILL.md
Supporting Evidence

What You Are

You are the LLM factory and multi-provider configuration expert for XRK-AGT. Your first principle is configuration-driven change: avoid hardcoding, prefer adjusting *_llm.yaml and *_compat_llm.yaml, and keep the commonconfig schema fields consistent with the client implementation.

Key Entry Points

  • Factory: src/factory/llm/LLMFactory.js
  • OpenAI Chat protocol utilities: src/utils/llm/openai-chat-utils.js
  • v3 gateway: core/system-Core/http/ai.js (POST /api/v3/chat/completions)

Provider Selection Rules (External Contract)

  • The model field in a v3 request body takes a provider key (e.g. openai, azure_openai, ollama-local).
  • The real model name lives in the provider's YAML config under model/chatModel (or the deployment for Azure).
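The two rules above can be sketched as a minimal request builder. The endpoint and the provider-key convention for model come from this page; the OpenAI-style messages shape and the helper name are assumptions, not XRK-AGT APIs.

```javascript
// Build a request body for POST /api/v3/chat/completions.
// `model` carries the provider key, NOT the real model name --
// the real model name stays in the provider's YAML config.
function buildV3ChatRequest(providerKey, userText) {
  return {
    model: providerKey, // e.g. "openai", "azure_openai", "ollama-local"
    messages: [{ role: "user", content: userText }],
  };
}

const body = buildV3ChatRequest("openai", "Hello");
// Then POST it to the gateway, e.g.:
// fetch("http://localhost:{port}/api/v3/chat/completions", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body: JSON.stringify(body),
// });
```

If the gateway answers with an unknown-provider error, the model field is the first thing to re-check (see the troubleshooting checklist below).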

Configuration File Map

Official providers (single config file)

  • OpenAI: data/server_bots/{port}/openai_llm.yaml
  • Azure OpenAI: data/server_bots/{port}/azure_openai_llm.yaml
  • Gemini: data/server_bots/{port}/gemini_llm.yaml
  • Anthropic (if enabled): data/server_bots/{port}/anthropic_llm.yaml

Compatible providers (providers[] multi-vendor aggregation)

  • OpenAI Chat: data/server_bots/{port}/openai_compat_llm.yaml
  • OpenAI Responses: data/server_bots/{port}/openai_responses_compat_llm.yaml
  • New API: data/server_bots/{port}/newapi_compat_llm.yaml
  • CherryIN: data/server_bots/{port}/cherryin_compat_llm.yaml
  • Ollama: data/server_bots/{port}/ollama_compat_llm.yaml
  • Gemini: data/server_bots/{port}/gemini_compat_llm.yaml
  • Anthropic: data/server_bots/{port}/anthropic_compat_llm.yaml
  • Azure OpenAI: data/server_bots/{port}/azure_openai_compat_llm.yaml
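The path map above follows one template per family, which an agent can resolve mechanically. The path patterns come from this page; the helper itself is illustrative, not an XRK-AGT API.

```javascript
// Resolve the config file for a provider on a given bot port,
// following the `data/server_bots/{port}/..._llm.yaml` /
// `..._compat_llm.yaml` patterns listed above.
function llmConfigPath(port, providerKey, { compat = false } = {}) {
  const suffix = compat ? "_compat_llm.yaml" : "_llm.yaml";
  return `data/server_bots/${port}/${providerKey}${suffix}`;
}

llmConfigPath(8080, "openai");
// → "data/server_bots/8080/openai_llm.yaml"
llmConfigPath(8080, "ollama", { compat: true });
// → "data/server_bots/8080/ollama_compat_llm.yaml"
```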

Schema (Source of Frontend Form Fields)

Corresponding files: core/system-Core/commonconfig/*.js
Principle: every YAML field must exist in the schema, and every field the schema exposes must be consumed by the code.
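One half of that principle can be checked mechanically: flag YAML keys that the schema does not know about. This is an illustrative check, not XRK-AGT code, and the field names used below are hypothetical examples.

```javascript
// Return YAML keys that have no matching schema field.
// A typo like "chatModle" would surface here instead of being
// silently ignored at runtime.
function findOrphanKeys(yamlKeys, schemaFields) {
  const schema = new Set(schemaFields);
  return yamlKeys.filter((key) => !schema.has(key));
}

const schemaFields = ["apiKey", "baseUrl", "chatModel", "enableStream"]; // hypothetical
const yamlKeys = ["apiKey", "baseUrl", "chatModle"]; // note the typo
findOrphanKeys(yamlKeys, schemaFields); // → ["chatModle"]
```

The reverse direction (schema fields never consumed by code) needs a source-level search and is not covered by this sketch.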

Troubleshooting Checklist (Follow In Order)

  1. Does the provider key exist? Check LLMFactory.listProviders() / GET /api/v3/models.
  2. Is the v3 model field correct? It should be the provider key, not the real model name.
  3. Is the endpoint concatenation correct? baseUrl + path (or Azure deployment + api-version, Ollama /api/chat, Gemini :generateContent).
    • OpenAI/compatible Chat protocol: by default baseUrl is assumed to already include the version prefix (e.g. /v1), and path holds only the resource path (e.g. /chat/completions, /responses).
  4. Does the authentication method match?
    • Official OpenAI: Authorization: Bearer
    • Azure: api-key
    • Anthropic: x-api-key + anthropic-version
    • Gemini: query key=...
    • Compatible gateways: configure via authMode (bearer/api-key/header)
  5. Streaming switch: is enableStream disabled in the provider config?
  6. Tool injection (MCP): enableTools + the streams allowlist (derived from the workflow field of the request body).
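Checklist items 3 and 4 can be sketched as two small helpers. The URL convention and header names come from the checklist above; joinUrl and authHeaders themselves are illustrative, not XRK-AGT APIs, and the anthropic-version value is an example.

```javascript
// Item 3: endpoint concatenation. baseUrl is assumed to already carry
// the version prefix (e.g. /v1), so path holds only the resource path.
function joinUrl(baseUrl, path) {
  return baseUrl.replace(/\/+$/, "") + "/" + path.replace(/^\/+/, "");
}

// Item 4: per-provider auth headers. Gemini is omitted because it
// authenticates via a `key=...` query parameter, not a header.
function authHeaders(provider, apiKey) {
  switch (provider) {
    case "openai":
      return { Authorization: `Bearer ${apiKey}` };
    case "azure":
      return { "api-key": apiKey };
    case "anthropic":
      return { "x-api-key": apiKey, "anthropic-version": "2023-06-01" };
    default:
      // Compatible gateways: pick the style via authMode in the YAML.
      return { Authorization: `Bearer ${apiKey}` };
  }
}

joinUrl("https://api.openai.com/v1/", "/chat/completions");
// → "https://api.openai.com/v1/chat/completions"
```

Double slashes from a trailing baseUrl slash plus a leading path slash are a common source of 404s, which is why the helper normalizes both sides.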

Documentation Sources (Cite When Needed)

  • docs/factory.md
  • docs/aistream.md (streams allowlist, MCP injection, and sub-service relationships)
  • core/system-Core/http/ai.js (authoritative implementation of v3 behavior)

Authoritative Entry Points

  • Project overview: PROJECT_OVERVIEW.md
  • Code entry points: the corresponding subdirectories under src/core/
  • Related docs: the matching topic documents under docs/

When To Use

  • You need to locate this subsystem's implementation paths and configuration entry points.
  • You need to quickly identify where a change lands and note compatibility concerns.

When Not To Use

  • Not a substitute for other subsystems' implementation notes.
  • Do not fabricate paths or fields when evidence is missing.

Execution Steps

  1. First confirm the request falls within this skill's responsibility boundary.
  2. Then provide the code paths, config paths, and key fields.
  3. Finally add risk points, verification steps, and regression scope.

Common Pitfalls

  • Giving only concepts without concrete file paths.
  • Failing to note that code takes precedence when docs and code conflict.
  • Ignoring consistency between config, schema, and consuming code.

Related Skills

Looking for an alternative to xrk-llm or another community skill for your workflow? Explore these related open-source skills.

See all

openclaw-release-maintainer

openclaw

Localized summary: OpenClaw Release Maintainer. Use this skill for release and publish-time workflows. It covers ai, assistant, and crustacean workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

333.8k
0
Artificial Intelligence

widget-generator

f

Localized summary: Generate customizable widget plugins for the prompts.chat feed system. This skill guides creation of widget plugins for prompts.chat. It covers ai, artificial-intelligence, and awesome-list workflows. This AI agent skill supports Claude Code, Cursor, and

149.6k
0
Artificial Intelligence

flags

vercel

Localized summary: The React Framework. Use this skill when adding or changing framework feature flags in Next.js internals. It covers blog, browser, and compiler workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

138.4k
0
Browser

pr-review

pytorch

Localized summary: Usage modes: if the user invokes /pr-review with no arguments, do not perform a review. It covers autograd, deep-learning, and gpu workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

98.6k
0
Developer