gsd-eval-review — Claude Code skill (ndn2k5/TotNghiepProject, community)

v1.0.0

About this Skill

Recommended scenario: ideal for AI agents that need the <codex_skill_adapter> workflow. Localized summary: this is a final project built around the <codex_skill_adapter> (section A). This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Features

<codex_skill_adapter>
A. Skill Invocation
This skill is invoked by mentioning $gsd-eval-review.
Treat all user text after $gsd-eval-review as {{GSD_ARGS}}.
If no arguments are present, treat {{GSD_ARGS}} as empty.

Key topics

ndn2k5
Updated: 5/4/2026

Skill Overview

Start with fit, limitations, and setup before diving into the repository.


Why use this skill

Recommendation: gsd-eval-review helps agents apply the <codex_skill_adapter> mappings. This is a final project built around the <codex_skill_adapter>. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

Best for

Recommended scenario: ideal for AI agents that need the <codex_skill_adapter> workflow.

Actionable use cases for gsd-eval-review

Use case: applying the <codex_skill_adapter> mappings
Use case: skill invocation (section A)
Use case: invoking the skill by mentioning $gsd-eval-review

Security and Limitations

  • Limitation: you may only proceed without a user answer under the conditions listed in the execute-mode fallback rules.
  • Limitation: do NOT pick a default and continue (#3018).
  • Limitation: requires repository-specific context from the skill documentation.

About The Source

The section below comes from the upstream repository. Use it as supporting material alongside the fit, use-case, and installation summary on this page.


FAQ and installation steps

These questions and steps mirror the structured data on this page for better search understanding.

Frequently asked questions

What is gsd-eval-review?

Recommended scenario: ideal for AI agents that need the <codex_skill_adapter> workflow. Localized summary: this is a final project built around the <codex_skill_adapter> (section A). This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

How do I install gsd-eval-review?

Run the command: npx killer-skills add ndn2k5/TotNghiepProject/gsd-eval-review. It works with Cursor, Windsurf, VS Code, Claude Code, and more than 19 other IDEs.

What are the use cases for gsd-eval-review?

The main use cases include: applying the <codex_skill_adapter> mappings, skill invocation (section A), and invoking the skill by mentioning $gsd-eval-review.

Which IDEs are compatible with gsd-eval-review?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for a unified installation.

Are there any limitations for gsd-eval-review?

Limitations: you may only proceed without a user answer under the conditions listed in the execute-mode fallback rules; do NOT pick a default and continue (#3018); the skill requires repository-specific context from the skill documentation.

How to install this skill

  1. Open the terminal

    Open the terminal or command line in the project folder.

  2. Run the installation command

    Run: npx killer-skills add ndn2k5/TotNghiepProject/gsd-eval-review. The CLI will automatically detect your IDE or agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use gsd-eval-review immediately in the project.

Source Notes

This page is still useful for installation and source reference. Before using it, compare the fit, limitations, and upstream repository notes above.

Upstream Repository Material


Upstream Source

gsd-eval-review

This is a final project built around the <codex_skill_adapter> (section A). This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

SKILL.md

<codex_skill_adapter>

A. Skill Invocation

  • This skill is invoked by mentioning $gsd-eval-review.
  • Treat all user text after $gsd-eval-review as {{GSD_ARGS}}.
  • If no arguments are present, treat {{GSD_ARGS}} as empty.

B. AskUserQuestion → request_user_input Mapping

GSD workflows use AskUserQuestion (Claude Code syntax). Translate to Codex request_user_input:

Parameter mapping:

  • header → header
  • question → question
  • Options formatted as "Label" — description → {label: "Label", description: "description"}
  • Generate id from header: lowercase, replace spaces with underscores
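
The parameter mapping and batching rules above can be sketched as a small translation function. This is a minimal sketch, not the adapter's actual implementation; the wire-format field names of both tools are assumptions for illustration, and options are modeled as (label, description) pairs.

```python
def ask_to_request(questions):
    """Translate a batched AskUserQuestion payload (Claude Code syntax)
    into a single request_user_input payload (Codex syntax).
    Sketch only: exact payload shapes are assumed for illustration."""
    entries = []
    for q in questions:
        entries.append({
            # header -> header, question -> question (verbatim copy)
            "header": q["header"],
            "question": q["question"],
            # id derived from header: lowercase, spaces -> underscores
            "id": q["header"].lower().replace(" ", "_"),
            # '"Label" - description' options become structured objects
            "options": [
                {"label": label, "description": desc}
                for label, desc in q["options"]
            ],
        })
    # A batched AskUserQuestion([q1, q2]) becomes ONE request_user_input
    # call with multiple entries in questions[].
    return {"questions": entries}
```

For example, a question with header "Review Depth" would produce the id "review_depth", and two batched questions produce a single payload with two entries in questions[].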

Batched calls:

  • AskUserQuestion([q1, q2]) → single request_user_input with multiple entries in questions[]

Multi-select workaround:

  • Codex has no multiSelect. Use sequential single-selects, or present a numbered freeform list asking the user to enter comma-separated numbers.
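
The numbered-freeform workaround above can be sketched as a render/parse pair. This is an illustrative sketch under the assumption that options are (label, description) pairs; the helper names are hypothetical, not part of the skill.

```python
def multiselect_as_freeform(question, options):
    """Render a multi-select question as a numbered freeform list,
    since Codex request_user_input has no multiSelect."""
    lines = [question]
    for i, (label, desc) in enumerate(options, start=1):
        lines.append(f"{i}. {label} - {desc}")
    lines.append("Reply with comma-separated numbers (e.g. 1,3).")
    return "\n".join(lines)

def parse_freeform_reply(reply, options):
    """Map a reply like '1,3' back to the selected option labels."""
    picked = [int(tok) for tok in reply.split(",") if tok.strip()]
    return [options[i - 1][0] for i in picked]
```

A reply of "1, 3" against three options selects the first and third labels.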

Execute mode fallback:

  • When request_user_input is rejected or unavailable, you MUST stop and present the questions as a plain-text numbered list, then wait for the user's reply. Do NOT pick a default and continue (#3018).
  • You may only proceed without a user answer when one of these is true: (a) the invocation included an explicit non-interactive flag (--auto or --all), (b) the user has explicitly approved a specific default for this question, or (c) the workflow's documented contract says defaults are safe (e.g. autonomous lifecycle paths).
  • Do NOT write workflow artifacts (CONTEXT.md, DISCUSSION-LOG.md, PLAN.md, checkpoint files) until the user has answered the plain-text questions or one of (a)-(c) above applies. Surfacing the questions and waiting is the correct response — silently defaulting and writing artifacts is the #3018 failure mode.
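
The fallback gate above can be expressed as a small predicate plus a plain-text renderer. This is a sketch of the documented rules, not adapter code; the function names and the flags list parameter are hypothetical.

```python
def may_proceed_without_answer(flags, user_approved_default, contract_says_safe):
    """Gate from the fallback rules: proceed without a user answer only
    when (a) an explicit non-interactive flag was passed, (b) the user
    explicitly approved a specific default for this question, or (c) the
    workflow's documented contract says defaults are safe. Otherwise the
    correct behavior is to surface the questions and wait (#3018)."""
    non_interactive = "--auto" in flags or "--all" in flags
    return non_interactive or user_approved_default or contract_says_safe

def render_plain_text_questions(questions):
    """Fallback rendering when request_user_input is rejected or
    unavailable: a numbered plain-text list the user answers by reply."""
    return "\n".join(f"{i}. {q}" for i, q in enumerate(questions, start=1))
```

Only when the predicate returns True may the agent write workflow artifacts such as CONTEXT.md or PLAN.md without a user answer.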

C. Task() → spawn_agent Mapping

GSD workflows use Task(...) (Claude Code syntax). Translate to Codex collaboration tools:

Direct mapping:

  • Task(subagent_type="X", prompt="Y") → spawn_agent(agent_type="X", message="Y")
  • Task(model="...") → omit. spawn_agent has no inline model parameter; GSD embeds the resolved per-agent model directly into each agent's .toml at install time so model_overrides from .planning/config.json and ~/.gsd/defaults.json are honored automatically by Codex's agent router.
  • fork_context: false by default — GSD agents load their own context via <files_to_read> blocks
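
The direct mapping above can be sketched as a keyword-argument translation. This is illustrative only: the exact spawn_agent payload shape is an assumption, and the function name is hypothetical.

```python
def task_to_spawn(task_kwargs):
    """Translate Task(...) keyword arguments (Claude Code syntax) into
    spawn_agent(...) keyword arguments (Codex syntax), per the rules above."""
    return {
        "agent_type": task_kwargs["subagent_type"],  # subagent_type -> agent_type
        "message": task_kwargs["prompt"],            # prompt -> message
        # GSD agents load their own context via <files_to_read> blocks
        "fork_context": False,
        # model= is intentionally NOT forwarded: spawn_agent has no inline
        # model parameter; per-agent models are baked into each agent's
        # .toml at install time, so Codex's agent router honors overrides.
    }
```

Note that any model= argument on the Task call is simply dropped by this translation.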

Spawn restriction:

  • Codex restricts spawn_agent to cases where the user has explicitly requested sub-agents. When automatic spawning is not permitted, do the work inline in the current agent rather than attempting to force a spawn.

Parallel fan-out:

  • Spawn multiple agents → collect agent IDs → wait(ids) for all to complete

Result parsing:

  • Look for structured markers in agent output: CHECKPOINT, PLAN COMPLETE, SUMMARY, etc.
  • close_agent(id) after collecting results from each agent

</codex_skill_adapter>
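
The fan-out, marker-parsing, and close steps above can be sketched as one loop. This is a sketch only: spawn_agent, wait, and close_agent are the Codex collaboration tools named above, injected here as callables so the example stays self-contained, and the result shapes are assumptions.

```python
import re

# Structured markers the workflow looks for in agent output
MARKERS = ("CHECKPOINT", "PLAN COMPLETE", "SUMMARY")

def extract_markers(agent_output):
    """Return the structured markers found in agent output, in order."""
    pattern = "|".join(re.escape(m) for m in MARKERS)
    return re.findall(pattern, agent_output)

def fan_out(spawn_agent, wait, close_agent, jobs):
    """Parallel fan-out: spawn all agents, wait on the collected ids,
    parse each result for markers, then close each agent."""
    ids = [spawn_agent(agent_type=t, message=m) for t, m in jobs]
    results = wait(ids)  # block until ALL agents complete; assumed id->output dict
    parsed = {aid: extract_markers(out) for aid, out in results.items()}
    for aid in ids:
        close_agent(aid)  # release each agent after collecting its results
    return parsed
```

The key ordering is spawn all first, then a single wait over the collected ids, then close each agent only after its results are parsed.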
<objective> Conduct a retroactive evaluation coverage audit of a completed AI phase. Checks whether the evaluation strategy from AI-SPEC.md was implemented. Produces EVAL-REVIEW.md with score, verdict, gaps, and remediation plan. </objective>

<execution_context> @C:/Users/wikiepeidia/OneDrive - caugiay.edu.vn/bài tập/usth/GEN14/INTERNSHIP/example internship/TotNghiepProject/.codex/get-shit-done/workflows/eval-review.md @C:/Users/wikiepeidia/OneDrive - caugiay.edu.vn/bài tập/usth/GEN14/INTERNSHIP/example internship/TotNghiepProject/.codex/get-shit-done/references/ai-evals.md </execution_context>

<context> Phase: {{GSD_ARGS}} — optional, defaults to last completed phase. </context> <process> Execute @C:/Users/wikiepeidia/OneDrive - caugiay.edu.vn/bài tập/usth/GEN14/INTERNSHIP/example internship/TotNghiepProject/.codex/get-shit-done/workflows/eval-review.md end-to-end. Preserve all workflow gates. </process>

Related skills

Looking for an alternative to gsd-eval-review or another community skill for your workflow? Explore these related open-source skills.

View all

openclaw-release-maintainer

openclaw

Localized summary: 🦞 OpenClaw Release Maintainer. Use this skill for release and publish-time workflows. It covers ai, assistant, and crustacean workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

widget-generator

f

Localized summary: generate customizable widget plugins for the prompts.chat feed system. This skill guides creation of widget plugins for prompts.chat. It covers ai, artificial-intelligence, and awesome-list workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

flags

vercel

Localized summary: the React Framework. Feature Flags: use this skill when adding or changing framework feature flags in Next.js internals. It covers blog, browser, and compiler workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

138.4k · 0 · Browser

pr-review

pytorch

Localized summary: usage modes. If the user invokes /pr-review with no arguments, do not perform a review. It covers autograd, deep-learning, and gpu workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

98.6k · 0 · Developer