explore — AI Agent Task Clarification | explore, 2026-jan-pu-opus-clone-01, community, AI agent task clarification, IDE skills, Socratic questioning, task understanding, problem solving, solution refinement, AI tool task management, Claude Code

v1.0.0

About This Skill

For conversational agents that need systematic task clarification through Socratic questioning. Explore is an AI agent skill for task clarification and understanding through systematic Socratic questioning.

Features

Systematic Socratic questioning
Task clarification and understanding
Ensures the real problem is understood
Avoids solving the wrong problem
Improves solution effectiveness

# Core Topics

dzhechko

Updated: 2/27/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Reference-Only Page Review Score: 5/11

This page remains useful for operators, but Killer-Skills treats it as reference material instead of a primary organic landing page.

  • Concrete use-case guidance
  • Explicit limitations and caution
Review Score: 5/11
Quality Score: 39
Canonical Locale: en
Detected Body Locale: en

For conversational agents that need systematic task clarification through Socratic questioning. Explore is an AI agent skill for task clarification and understanding through systematic Socratic questioning.

Core Value

Gives agents the ability to transform vague requests into clear, actionable task specifications. Using adaptive questioning, it unlocks understanding and surfaces constraints through systematic inquiry, ensuring the real problem is understood before any solution is proposed.

Suitable Agent Types

For conversational agents that need systematic task clarification through Socratic questioning.

Key Capabilities · explore

Clarify vague project requirements
Conduct systematic interviews to uncover hidden constraints
Transform vague user requests into actionable task specifications

! Limitations & Requirements

  • Requires the ability to conduct systematic questioning
  • May not suit very urgent or simple tasks

Why this page is reference-only

  • The current locale does not satisfy the locale-governance contract.
  • The page lacks a strong recommendation layer.
  • The underlying skill quality score is below the review floor.

Source Boundary

The section below is supporting source material from the upstream repository. Use the Killer-Skills review above as the primary decision layer.

Lab Demo

Browser Sandbox Environment

⚡️ Ready to unleash?

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.

Boot Container Sandbox

FAQ & Installation Steps

The questions and steps below match the page's structured data so search engines can understand the page content.

? FAQ

What is explore?

For conversational agents that need systematic task clarification through Socratic questioning. Explore is an AI agent skill for task clarification and understanding through systematic Socratic questioning.

How do I install explore?

Run: npx killer-skills add dzhechko/2026-jan-pu-opus-clone-01/explore. Supports 19+ IDEs/agents including Cursor, Windsurf, VS Code, and Claude Code.

What scenarios is explore suited for?

Typical scenarios include: clarifying vague project requirements, conducting systematic interviews to uncover hidden constraints, and transforming vague user requests into actionable task specifications.

Which IDEs or Agents does explore support?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. It can be installed universally with a single Killer-Skills CLI command.

What are explore's limitations?

It requires the ability to conduct systematic questioning and may not suit very urgent or simple tasks.

Installation Steps

  1. Open a terminal

    Open a terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add dzhechko/2026-jan-pu-opus-clone-01/explore. The CLI automatically detects your IDE or AI agent and completes the configuration.

  3. Start using the skill

    explore is now enabled and can be invoked immediately in the current project.

! Reference-Page Mode

This page remains available as an installation and lookup reference, but Killer-Skills no longer treats it as a primary indexable landing page. Read the review conclusions above before deciding whether to continue with the upstream repository documentation.

Imported Repository Instructions

The section below is supporting source material from the upstream repository. Use the Killer-Skills review above as the primary decision layer.

Supporting Evidence

explore

Install explore, an AI Agent Skill for AI agent workflows and automation. One-click installation with support for Claude Code, Cursor, and Windsurf.

SKILL.md
Readonly

Explore: Adaptive Task Clarification

Transform vague requests into crystal-clear, actionable task specifications through systematic Socratic questioning.

Core Philosophy

Never solve before you understand. Most failed solutions solve the wrong problem. This skill ensures the real problem is understood before any solution is proposed.

Key principles:

  • Questions unlock understanding; answers often hide assumptions
  • The stated goal is rarely the real goal
  • Constraints reveal opportunities
  • Success criteria prevent scope creep

Task Classification

Before asking questions, classify the task type to select appropriate exploration dimensions:

| Task Type | Indicators | Primary Dimensions |
| --- | --- | --- |
| Product/Feature | "build", "create app", "develop" | Outcome, Users, Constraints, Success |
| Problem Solving | "fix", "solve", "issue with" | Root Cause, Constraints, Attempted Solutions |
| Decision Making | "should I", "choose between", "evaluate" | Criteria, Tradeoffs, Timeline, Reversibility |
| Creative | "write", "design", "make content" | Audience, Tone, Format, Examples |
| Research | "find out", "analyze", "understand" | Scope, Depth, Sources, Deliverable |
| Process/Workflow | "how to", "improve process" | Current State, Desired State, Blockers |
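
The table above can be read as a simple keyword-to-type mapping. A minimal sketch of such a classifier in Python (the `TASK_TYPES` structure and `classify_task` helper are illustrative, not part of the skill; a real agent would classify with more context than substring matching):

```python
# Keyword-based task classifier mirroring the Indicators / Primary Dimensions
# columns of the classification table. Substring matching is deliberately naive.
TASK_TYPES = {
    "Product/Feature": (["build", "create app", "develop"],
                        ["Outcome", "Users", "Constraints", "Success"]),
    "Problem Solving": (["fix", "solve", "issue with"],
                        ["Root Cause", "Constraints", "Attempted Solutions"]),
    "Decision Making": (["should i", "choose between", "evaluate"],
                        ["Criteria", "Tradeoffs", "Timeline", "Reversibility"]),
    "Creative": (["write", "design", "make content"],
                 ["Audience", "Tone", "Format", "Examples"]),
    "Research": (["find out", "analyze", "understand"],
                 ["Scope", "Depth", "Sources", "Deliverable"]),
    "Process/Workflow": (["how to", "improve process"],
                         ["Current State", "Desired State", "Blockers"]),
}

def classify_task(request: str) -> tuple[str, list[str]]:
    """Return (task_type, primary_dimensions) for a user request."""
    text = request.lower()
    for task_type, (indicators, dimensions) in TASK_TYPES.items():
        if any(keyword in text for keyword in indicators):
            return task_type, dimensions
    # Fall back to the three highest-leverage dimensions when nothing matches.
    return "Unclassified", ["Real Objective", "Constraints", "Success Criteria"]
```

For example, "I want to create a dashboard for my team" contains no indicator from the table verbatim, while "build a dashboard" matches Product/Feature; ambiguity like this is exactly why classification is a starting point for questioning, not a final verdict.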

Exploration Dimensions

Select 3-5 dimensions based on task type. Each dimension has multiple question variants—choose the most natural for context.

1. The Real Objective

Uncover what success truly looks like, beyond the stated request.

Questions:

  • "If this worked perfectly, what would be different in your [work/life/business]?"
  • "What outcome would make you say 'this was absolutely worth it'?"
  • "Is this goal a means to something else, or the end itself?"
  • "If you could wave a magic wand and have any result, what would you choose?"

Red flags to probe: Generic goals ("make it better"), proxy metrics, solutions presented as requirements.

2. Constraints & Boundaries

Identify hard limits that shape the solution space.

Questions:

  • "What's absolutely off the table—budget, time, technology, or approach-wise?"
  • "What existing systems, processes, or decisions must this work with?"
  • "Who needs to approve this, and what are their non-negotiables?"
  • "What would disqualify a solution, even if it technically works?"

Red flags to probe: No constraints mentioned (usually means hidden ones), unrealistic expectations.

3. Available Resources

Understand leverage points and existing assets.

Questions:

  • "What do you already have that we could build on—data, tools, people, prior work?"
  • "Who else is involved, and what can they contribute?"
  • "What similar problems have you solved before, and what worked?"
  • "What's your actual capacity to implement this?"

Red flags to probe: Overestimated capabilities, unacknowledged dependencies.

4. Timeline & Urgency

Distinguish real deadlines from arbitrary ones.

Questions:

  • "What happens if this takes 2x longer than expected?"
  • "Is there a hard deadline, and what's driving it?"
  • "Would you prefer a quick 80% solution or a slower 100% solution?"
  • "What's the cost of delay vs. the cost of getting it wrong?"

Red flags to probe: Artificial urgency, no clear driver for deadline.

5. Success Criteria

Define what "done" actually means.

Questions:

  • "How will you know this is successful? What will you measure?"
  • "Who decides if this is good enough, and what will they look for?"
  • "What's the minimum viable outcome that would still be valuable?"
  • "In 6 months, what would make you regret the approach we took?"

Red flags to probe: Vague criteria ("stakeholders will be happy"), moving targets.

6. Attempted Solutions (for problems)

Learn from what hasn't worked.

Questions:

  • "What have you already tried, and why didn't it work?"
  • "What solutions have you considered but rejected?"
  • "What would the obvious solution be, and why isn't that good enough?"

7. Audience & Stakeholders (for products/content)

Understand who this serves.

Questions:

  • "Who specifically will use this, and what's their context when they do?"
  • "What does your audience already know or believe about this?"
  • "Who might be negatively affected, and does that matter?"

Execution Protocol

Phase 1: Initial Assessment (1 turn)

  1. Parse the user's request
  2. Identify what's already clear from context
  3. Classify task type
  4. Select 3-5 most critical dimensions
  5. Note any immediate red flags or assumptions

Phase 2: Adaptive Questioning (3-7 turns)

Rules:

  • Ask ONE question at a time
  • Make questions specific and decision-shaping, not generic
  • Challenge vague answers: "Can you be more specific about...?"
  • Acknowledge answers before next question
  • Skip dimensions already clarified
  • Stop when you have enough to create a clear brief

Question Sequencing:

  1. Start with Real Objective (reveals the most)
  2. Follow with Constraints (narrows solution space)
  3. Then Success Criteria (defines done)
  4. Fill gaps with other dimensions as needed

Adaptive behavior:

  • If user gives detailed answer → compress follow-ups
  • If user seems frustrated → summarize and ask if they want to continue
  • If contradiction detected → gently probe: "Earlier you mentioned X, but now Y—help me understand?"
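
The sequencing rules above amount to a stable sort over the selected dimensions. A minimal sketch (the `PRIORITY` list and `sequence_dimensions` helper are illustrative names, not part of the skill):

```python
# Order selected dimensions per the Phase 2 sequencing rules:
# Real Objective first, then Constraints, then Success Criteria,
# with all remaining dimensions keeping their original relative order.
PRIORITY = ["The Real Objective", "Constraints & Boundaries", "Success Criteria"]

def sequence_dimensions(selected: list[str]) -> list[str]:
    def rank(dimension: str) -> int:
        # Dimensions outside the priority list all share the lowest rank;
        # sorted() is stable, so they retain their input order.
        return PRIORITY.index(dimension) if dimension in PRIORITY else len(PRIORITY)
    return sorted(selected, key=rank)
```

Because the sort is stable, "fill gaps with other dimensions as needed" falls out for free: anything not in the priority list simply trails in the order it was selected.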

Phase 3: Task Brief Synthesis (1 turn)

After sufficient exploration, synthesize into a Task Brief:

## Task Brief

**Objective:** [Clear statement of what we're actually solving]

**Context:** [Relevant background and constraints]

**Success Criteria:**
- [Measurable criterion 1]
- [Measurable criterion 2]

**Constraints:**
- [Hard constraint 1]
- [Hard constraint 2]

**Resources Available:** [What we can leverage]

**Timeline:** [Deadline and urgency level]

**Key Assumptions:** [Things we're assuming that could change the approach]

**Out of Scope:** [Explicitly excluded items]

Ask user: "Does this capture what you need? Anything to add or correct?"
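
The Task Brief template maps naturally onto a small data structure whose renderer emits the same markdown. A minimal sketch, assuming the field names follow the template (the `TaskBrief` class itself is illustrative, not part of the skill):

```python
from dataclasses import dataclass

@dataclass
class TaskBrief:
    """Holds a Phase 3 synthesis; render() emits the markdown template."""
    objective: str
    context: str
    success_criteria: list[str]
    constraints: list[str]
    resources: str
    timeline: str
    assumptions: str
    out_of_scope: str

    def render(self) -> str:
        def bullets(items: list[str]) -> str:
            return "\n".join(f"- {item}" for item in items)
        return (
            "## Task Brief\n\n"
            f"**Objective:** {self.objective}\n\n"
            f"**Context:** {self.context}\n\n"
            f"**Success Criteria:**\n{bullets(self.success_criteria)}\n\n"
            f"**Constraints:**\n{bullets(self.constraints)}\n\n"
            f"**Resources Available:** {self.resources}\n\n"
            f"**Timeline:** {self.timeline}\n\n"
            f"**Key Assumptions:** {self.assumptions}\n\n"
            f"**Out of Scope:** {self.out_of_scope}"
        )
```

Keeping the brief structured rather than free-form makes the Phase 4 handoff mechanical: downstream skills receive the same fields every time.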

Phase 4: Handoff

Once validated, either:

  • Proceed to solution (if user wants immediate help)
  • Export brief for later use
  • Suggest appropriate next steps/skills

Anti-Patterns to Avoid

  • ❌ Interrogation mode: Don't fire questions robotically
  • ❌ Assuming context: Don't skip clarification because you "think" you understand
  • ❌ Premature solutions: Don't hint at solutions before exploration is complete
  • ❌ Over-questioning: Stop when you have enough clarity
  • ❌ Generic questions: Each question should be tailored to this specific task
  • ❌ Ignoring signals: If user provides info proactively, don't re-ask

Example Flow

User: "I want to create a dashboard for my team"

Claude (Phase 1 assessment):

  • Task type: Product/Feature
  • Unclear: Who uses it, what data, what decisions it enables, timeline
  • Dimensions needed: Real Objective, Audience, Constraints, Success Criteria

Claude: "Before we dive in—what decisions will your team make differently once they have this dashboard? What's the main insight they're missing today?"

[User answers about tracking project delays]

Claude: "Got it—so the core need is visibility into project health to catch delays early. How do you know a project is delayed today? What's the current process for catching these issues?"

[Continues adaptively based on answers...]

Integration Notes

After exploration, this skill can hand off to:

  • problem-solver-enhanced (for complex problems)
  • goap-research (for research tasks)
  • frontend-design (for UI/product tasks)
  • Any implementation skill with the structured Task Brief

Related Skills

Looking for alternatives to explore, or similar community skills to pair with it? Browse the related open-source skills below.

View All

openclaw-release-maintainer

openclaw

Your own personal AI assistant. Any OS. Any Platform. The lobster way. 🦞

333.8k
0
AI

widget-generator

f

Generates customizable plugin widgets for the prompts.chat feedback system

149.6k
0
AI

flags

vercel

React framework

138.4k
0
Browser

pr-review

pytorch

Tensors and dynamic neural networks in Python with strong GPU acceleration

98.6k
0
Developer Tools