codex — AI Agent Proxy Server Skill

v1.0.0

About This Skill

The Codex AI agent proxy server skill provides multi-tenancy and automatic failover for AI coding assistants. It is ideal for coding agents that need Claude Code and the gpt-5.1-codex-max model for advanced automation and workflow optimization.

Features

Multi-tenant support
Automatic failover
Web management interface
Uses the gpt-5.1-codex-max model
High reasoning effort
Docker and Golang integration

# Core Topics

yxhpy
Updated: 3/16/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 8/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and caution

Review Score: 8/11
Quality Score: 34
Canonical Locale: en
Detected Body Locale: en

Core Value

Gives agents the ability to automate coding tasks with the gpt-5.1-codex-max model at high reasoning effort, providing multi-tenant support, automatic failover, and a web management interface through Codex (a Claude Code CLI proxy server), using the CLI protocol and advanced reasoning for optimized workflow automation.

Suitable Agent Types

Ideal for coding agents that need Claude Code and the gpt-5.1-codex-max model for advanced automation and workflow optimization.

Primary Capabilities · codex

Automate coding tasks with the gpt-5.1-codex-max model
Optimize workflow automation for developers
Enable multi-tenant support and automatic failover for coding tasks

! Usage Limitations & Requirements

  • Requires the gpt-5.1-codex-max model with high reasoning effort
  • Must use the specified model and reasoning effort
  • Requires compatibility with the Claude Code CLI proxy server

Why this page is reference-only

  • Current locale does not satisfy the locale-governance contract.
  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is supporting source material from the upstream repository. Use the Killer-Skills review above as the primary decision layer.

FAQ & Installation Steps

? FAQ

What is codex?

The Codex AI agent proxy server skill provides multi-tenancy and automatic failover for AI coding assistants. It is ideal for coding agents that need Claude Code and the gpt-5.1-codex-max model for advanced automation and workflow optimization.

How do I install codex?

Run: npx killer-skills add yxhpy/qcc_plus/codex. It supports 19+ IDEs/agents, including Cursor, Windsurf, VS Code, and Claude Code.

What scenarios is codex suited for?

Typical scenarios include automating coding tasks with the gpt-5.1-codex-max model, optimizing workflow automation for developers, and enabling multi-tenant support and automatic failover for coding tasks.

Which IDEs or agents does codex support?

The skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer, and can be installed universally with a single Killer-Skills CLI command.

What are codex's limitations?

It requires the gpt-5.1-codex-max model with high reasoning effort, must use that specific model and effort, and requires compatibility with the Claude Code CLI proxy server.

Installation Steps

  1. Open a terminal

    Open a terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add yxhpy/qcc_plus/codex. The CLI automatically detects your IDE or AI agent and completes the configuration.

  3. Start using the skill

    codex is now enabled and can be invoked immediately in the current project.
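
The install steps reduce to a single command. A minimal, non-authoritative shell sketch that only builds and prints it (the skill path yxhpy/qcc_plus/codex is taken from this page; run the printed command yourself from your project directory):

```shell
# Build the install command from the steps above without executing it.
SKILL="yxhpy/qcc_plus/codex"
INSTALL_CMD="npx killer-skills add $SKILL"
echo "$INSTALL_CMD"
```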

! Reference-Page Mode

This page can still serve as an installation and lookup reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review conclusions above first, then decide whether to continue to the upstream repository instructions.

Imported Repository Instructions

Supporting Evidence

codex

Use the Codex AI agent proxy server skill for multi-tenancy, automatic failover, and a web management interface to enhance your Claude Code development experience.

SKILL.md

Codex Skill Guide

Running a Task

  1. Always use model gpt-5.1-codex-max with reasoning effort high for all Codex runs:
    • Model: gpt-5.1-codex-max (mandatory, do not change)
    • Reasoning effort: high (mandatory, do not change)
    • If the user mentions other model names (gpt-5, gpt-5-codex-max, gpt-5-codex), always use gpt-5.1-codex-max instead.
    • Clearly state in your summary that model gpt-5.1-codex-max with high reasoning effort was used.
  2. Select the sandbox mode required for the task; default to --sandbox read-only for analysis-only tasks, and use --sandbox workspace-write when edits are needed. Only consider --sandbox danger-full-access when the user’s request clearly requires network or broad system access.
  3. Assemble the command with the appropriate options:
    • -m, --model <MODEL>
    • --config model_reasoning_effort="<high|medium|low>"
    • --sandbox <read-only|workspace-write|danger-full-access>
    • --full-auto
    • -C, --cd <DIR>
    • --skip-git-repo-check
  4. Always use --skip-git-repo-check.
  5. When continuing a previous session, use codex exec --skip-git-repo-check resume --last via stdin. When resuming, don't use any configuration flags unless explicitly requested by the user, e.g. if they specify the model or the reasoning effort when asking to resume a session. Resume syntax: echo "your prompt here" | codex exec --skip-git-repo-check resume --last 2>/dev/null. All flags have to be inserted between exec and resume.
  6. IMPORTANT: By default, append 2>/dev/null to all codex exec commands to suppress thinking tokens (stderr). Only show stderr if the user explicitly requests to see thinking tokens or if debugging is needed.
  7. Run the command, capture stdout/stderr (filtered as appropriate), and summarize the outcome for the user.
  8. After Codex completes, inform the user: "You can resume this Codex session at any time by saying 'codex resume' or asking me to continue with additional analysis or changes."
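
The steps above can be sketched as a command assembly. This is a hedged example that only builds and prints the invocation, without triggering a codex run:

```shell
# Mandatory model and effort per step 1; sandbox mode chosen per step 2.
MODEL="gpt-5.1-codex-max"
EFFORT="high"
SANDBOX="workspace-write"   # use read-only for analysis-only tasks

# Steps 3, 4, and 6: assemble the flags, always include --skip-git-repo-check,
# and append 2>/dev/null to suppress thinking tokens on stderr.
CMD="codex exec -m $MODEL --config model_reasoning_effort=\"$EFFORT\" --sandbox $SANDBOX --full-auto --skip-git-repo-check"
echo "$CMD 2>/dev/null"
```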

Quick Reference

Use case                         Sandbox mode              Key flags
Read-only review or analysis     read-only                 --sandbox read-only 2>/dev/null
Apply local edits                workspace-write           --sandbox workspace-write --full-auto 2>/dev/null
Permit network or broad access   danger-full-access        --sandbox danger-full-access --full-auto 2>/dev/null
Resume recent session            Inherited from original   echo "prompt" | codex exec --skip-git-repo-check resume --last 2>/dev/null (no flags allowed)
Run from another directory       Match task needs          -C <DIR> plus other flags 2>/dev/null
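
The quick reference above can be expressed as a small helper. A sketch (the function name pick_flags is my own) mapping a use case to its key sandbox flags:

```shell
# Map a use case from the quick reference to its sandbox flags.
pick_flags() {
  case "$1" in
    review)  printf '%s\n' "--sandbox read-only" ;;                      # read-only analysis
    edit)    printf '%s\n' "--sandbox workspace-write --full-auto" ;;    # apply local edits
    network) printf '%s\n' "--sandbox danger-full-access --full-auto" ;; # broad access
    *)       printf '%s\n' "--sandbox read-only" ;;                      # default: least privilege
  esac
}
pick_flags edit
```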

Following Up

  • After every codex command, summarize what was done, propose concrete next steps, and, if needed, ask focused follow-up questions only about the task itself (not about model or reasoning-effort choices).
  • When resuming, pipe the new prompt via stdin: echo "new prompt" | codex exec --skip-git-repo-check resume --last 2>/dev/null. The resumed session automatically uses the same model, reasoning effort, and sandbox mode from the original session.
  • Restate that gpt-5.1-codex-max with high reasoning effort was used, along with the sandbox mode, when proposing follow-up actions.

Error Handling

  • Stop and report failures whenever codex --version or a codex exec command exits non-zero; request direction before retrying.
  • Before you use high-impact flags like --full-auto or --sandbox danger-full-access, make sure the user’s request clearly implies this level of automation or access; you do not need to ask them to choose model or reasoning-effort.
  • When output includes warnings or partial results, summarize them and ask how to adjust using AskUserQuestion.
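
A minimal guard for the first bullet, assuming only that the codex CLI exposes --version; it stops before any exec run when the binary is missing or exits non-zero:

```shell
# Stop and report when codex is unavailable or codex --version fails.
codex_preflight() {
  if ! command -v codex >/dev/null 2>&1; then
    echo "codex CLI not found; stopping before any exec run" >&2
    return 1
  fi
  codex --version >/dev/null 2>&1 || {
    echo "codex --version exited non-zero; request direction before retrying" >&2
    return 1
  }
}
```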

Related Skills

Looking for alternatives to codex, or complementary community skills to pair with it? Explore the related open-source skills below.

View all

openclaw-release-maintainer (openclaw) — Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞 (333.8k · AI)

widget-generator (f) — Generates customizable feedback-widget plugins for the prompts.chat feedback system (149.6k · AI)

flags (vercel) — The React framework (138.4k · Browser)

pr-review (pytorch) — Tensors and dynamic neural networks in Python with strong GPU acceleration (98.6k · Developer Tools)