product — AI Assets skill for Claude Code (avav25/ai-assets)

v1.0.0

About This Skill

Recommended scenario: ideal for AI agents that need to apply the Agent(product-manager) role throughout the feature-design workflow. Summary: AI assets for Claude Code, Codex, and Windsurf. Feature Design transforms raw feature inputs into structured product documentation.

Features

Apply the Agent(product-manager) role for all steps below.
Read CLAUDE.md (or AGENTS.md) at the project root to identify:

  • Product description and domain
  • Existing features and modules (to avoid duplication and understand scope)
  • Tech stack constraints (affects feasibility assessment)

Author: avav25. Updated: 4/30/2026

Skill Overview

Start with fit, limitations, and setup before diving into the repository.


Why Use This Skill

Recommendation: product helps agents apply the Agent(product-manager) role to every step of feature design, turning raw feature inputs into structured product documentation.

Best For

Ideal for AI agents that need to apply the Agent(product-manager) role for all of the steps below.

Actionable Use Cases for product

Use case: applying the Agent(product-manager) role to every step of feature design
Use case: reading CLAUDE.md (or AGENTS.md) at the project root to gather product context
Use case: structuring raw feature inputs into a PRD or Feature Brief

! Security and Limitations

  • Limitation: the skill checks existing features and modules to avoid duplication, so an accurate CLAUDE.md/AGENTS.md is expected
  • Limitation: if the user provides only a verbal description, the skill must first structure it through discovery (Step 2)
  • Limitation: user segments need ICP/persona indicators; the skill asks rather than inventing user needs

About The Source

The section below comes from the upstream repository. Use it as supporting material alongside the fit, use-case, and installation summary on this page.

Demo Labs

Browser Sandbox Environment

Run this agent in a zero-setup browser environment powered by WebContainers. No installation required.

FAQ and Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is product?

product is a skill that applies the Agent(product-manager) role to a feature-design workflow: it transforms raw feature inputs into structured product documentation, and ships as part of the AI Assets collection for Claude Code, Codex, and Windsurf.

How do I install product?

Run the command npx killer-skills add avav25/ai-assets. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for product?

The main use cases include: applying the Agent(product-manager) role to feature design, reading CLAUDE.md (or AGENTS.md) at the project root to gather product context, and structuring raw feature inputs into a PRD or Feature Brief.

Which IDEs are compatible with product?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for unified installation.

Are there limits to product?

It checks existing features and modules to avoid duplication, so an accurate CLAUDE.md/AGENTS.md is expected. If the user provides only a verbal description, the skill must first structure it through discovery (Step 2). User segments need ICP/persona indicators; the skill asks rather than inventing user needs.

How to Install This Skill

  1. Open the terminal

    Open the terminal or command line in the project folder.

  2. Run the install command

    Run: npx killer-skills add avav25/ai-assets. The CLI will automatically detect your IDE or agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use product immediately in the project.

! Source Notes

Use this page for installation and source reference, but check the fit, limitations, and upstream repository notes above first.

Upstream Repository Material

The section below comes from the upstream repository. Use it as supporting material alongside the fit, use-case, and installation summary on this page.

Upstream Source

product

AI Assets for Claude Code, Codex, and Windsurf. Feature Design: transform raw feature inputs into structured product documentation by applying the Agent(product-manager) role.

SKILL.md
Readonly

Feature Design

Transform raw feature inputs into structured product documentation. This is the product design phase — no code, architecture, or engineering decisions are made here. Output feeds into /architecture for technical design and /plan for implementation planning.

Apply Agent(product-manager) role for all steps below.

0. Gather Context

Read CLAUDE.md (or AGENTS.md) at the project root to identify:

  • Product description and domain
  • Existing features and modules (to avoid duplication and understand scope)
  • Tech stack constraints (affects feasibility assessment)
  • Team structure (affects user story assignment)

If the project has a FEATURES.md, read it for the current feature registry.

1. Receive Feature Inputs

Gather all available resources describing the feature:

  • Accepted inputs: idea brief, user research, customer feedback, support tickets, competitive analysis, stakeholder request, verbal description, market data, analytics report
  • Read every provided document thoroughly
  • If the user provides a verbal description only — proceed to Step 2 to structure it through discovery

Extract Raw Signals

From the provided inputs, extract and organize:

| Signal | Source | Notes |
| --- | --- | --- |
| Problem | [what pain exists, for whom] | [evidence or assumption] |
| Opportunity | [what business value] | [data points] |
| User segments | [who needs this] | [ICP/persona indicators] |
| Competitive context | [how others solve it] | [differentiation angle] |
| Constraints | [timeline, budget, technical, regulatory] | [hard vs soft] |
| Existing assets | [related features, prior art, dependencies] | [links/refs] |

If critical signals are missing (no clear problem statement, no target user) — ask before proceeding. Do not invent user needs.
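The "ask before proceeding" rule can be made mechanical. A small sketch, assuming signals are collected into a plain dict keyed by the table rows above (the function name and key names are hypothetical):

```python
# Critical signals per the rule above: no clear problem statement or
# no target user means the agent must ask instead of inventing needs.
CRITICAL_SIGNALS = ("problem", "user_segments")

def missing_critical_signals(signals: dict) -> list:
    """Return the critical signals that are absent or empty."""
    return [s for s in CRITICAL_SIGNALS if not signals.get(s)]
```

A non-empty return value means the agent should stop and ask the user rather than proceed to discovery.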

2. Discovery and Framing

2a. Problem Validation

Define the problem with precision:

  1. Problem statement: One paragraph — who has the problem, what the problem is, why it matters now
  2. Evidence: Data points supporting the problem (usage metrics, support volume, churn correlation, user quotes, competitive pressure)
  3. Impact of inaction: What happens if we do nothing

If evidence is weak — flag this as a risk. Do not fabricate data.

2b. Target Users

Apply JTBD and ICP frameworks from Agent(product-manager):

  • Job-to-be-Done: When [situation], I want to [motivation], so I can [outcome]
  • ICP segments: Which customer profiles benefit most. Include triggers, buying signals, objections
  • Personas (if useful): Name, role, goal, frustration. Keep lightweight — max 2 personas

2c. Scope Decision

Based on complexity, decide the deliverable type:

| Complexity | Signals | Deliverable |
| --- | --- | --- |
| Small | Single service, < 1 week effort, clear solution | Feature Brief (abbreviated PRD) |
| Medium | Multi-component, 1–4 weeks, some unknowns | Full PRD |
| Large | Multi-service, 4+ weeks, significant unknowns, new capabilities | Full PRD + Spike/Discovery tasks |
| AI/Agent | LLM-powered, autonomous behavior, trust/safety concerns | Full PRD + Agent Contract + Eval Strategy |

Present the scope decision to the user for confirmation before proceeding.
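The complexity-to-deliverable mapping above is a pure lookup, which a sketch makes explicit (the `choose_deliverable` helper is hypothetical; the values come from the table in this step):

```python
def choose_deliverable(complexity: str) -> str:
    """Map the Step 2c complexity call to its deliverable type."""
    table = {
        "small": "Feature Brief (abbreviated PRD)",
        "medium": "Full PRD",
        "large": "Full PRD + Spike/Discovery tasks",
        "ai/agent": "Full PRD + Agent Contract + Eval Strategy",
    }
    return table[complexity.lower()]
```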

3. Requirements Definition

3a. Functional Requirements

List requirements prioritized using MoSCoW:

  • Must have: Core functionality — feature is useless without these
  • Should have: Important but not blocking launch
  • Could have: Nice-to-have, defer if constrained
  • Won't have (this iteration): Explicitly excluded — prevents scope creep

Each requirement: one sentence, testable, outcome-focused. Never specify implementation — describe what, not how.
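MoSCoW buckets imply a strict ordering, which a short sketch can encode. Requirements here are hypothetical (priority, text) pairs; this is an illustration of the prioritization, not part of the skill:

```python
# Lower index = higher priority; unknown bucket names raise ValueError.
MOSCOW = ("must", "should", "could", "wont")

def sort_requirements(reqs):
    """Order (priority, text) pairs by MoSCoW rank."""
    return sorted(reqs, key=lambda r: MOSCOW.index(r[0]))
```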

3b. Non-Functional Requirements

Identify relevant NFRs:

| Category | Requirement | Target |
| --- | --- | --- |
| Performance | [e.g., response time] | [e.g., < 200ms p95] |
| Security | [e.g., auth, data protection] | [e.g., RBAC, PII encrypted] |
| Scalability | [e.g., concurrent users] | [e.g., 10K concurrent] |
| Accessibility | [e.g., WCAG level] | [e.g., WCAG 2.2 AA] |
| Compliance | [e.g., GDPR, SOC2] | [specific requirements] |

Only include categories relevant to this feature.

3c. Acceptance Criteria

Write acceptance criteria for every Must-have and Should-have requirement:

  • Use Given/When/Then format for behavioral criteria
  • Include happy path, edge cases, error states
  • Include operational criteria: "no new errors in logs", "latency < Xms p95"
  • Include security criteria: "unauthorized users receive 403", "PII not logged"
  • Each criterion is independently testable
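The Given/When/Then shape is regular enough to template. A minimal sketch with a hypothetical `gwt` helper; the example criterion echoes the security criterion mentioned above:

```python
def gwt(given: str, when: str, then: str) -> str:
    """Render one behavioral acceptance criterion in Given/When/Then form."""
    return f"Given {given}, when {when}, then {then}."
```

For example, `gwt("an anonymous user", "they request /admin", "they receive 403")` yields a single, independently testable criterion.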

4. Success Metrics

Define how to measure feature success:

4a. Metric Framework

| Metric Type | Metric | Target | Measurement Method |
| --- | --- | --- | --- |
| Primary (North Star) | [e.g., activation rate] | [target value] | [how to measure] |
| Leading | [e.g., feature adoption D7] | [target value] | [how to measure] |
| Guardrail | [e.g., error rate, latency] | [must not exceed] | [how to measure] |

4b. Instrumentation Plan

  • Events to track (name, properties, trigger)
  • Dashboards to create or update
  • Alerts to configure (guardrail breaches)
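The guardrail-alert idea can be sketched as a comparison of observed metrics against their "must not exceed" limits. The helper name and metric keys are hypothetical illustrations:

```python
def breached_guardrails(observed: dict, limits: dict) -> list:
    """Return the guardrail metrics whose observed value exceeds its limit."""
    return [m for m, v in observed.items() if m in limits and v > limits[m]]
```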

5. Risk Assessment

| Risk | Type | Impact | Likelihood | Mitigation |
| --- | --- | --- | --- | --- |
| [description] | Technical / Business / Security / Compliance | High/Med/Low | High/Med/Low | [strategy] |

Mandatory risk categories to evaluate:

  • Security threats (apply OWASP Top 10; add LLM Top 10 for AI features)
  • Data handling risks (PII, secrets, retention)
  • Backward compatibility and migration
  • Dependency on external systems or teams
  • User abuse scenarios
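One common way to order the risk table is a simple impact-times-likelihood score. This scoring scheme is an assumption, not prescribed by the skill:

```python
# High/Med/Low mapped to 3/2/1; score = impact * likelihood, range 1..9.
LEVEL = {"low": 1, "med": 2, "high": 3}

def risk_score(impact: str, likelihood: str) -> int:
    """Crude ordering key for the risk table."""
    return LEVEL[impact.lower()] * LEVEL[likelihood.lower()]
```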

6. AI/Agent Addendum

Include this section only for AI-powered or agent features. Skip for standard features.

<agent_contract>

  • Autonomy level: Assist (human decides) → Semi-auto (agent proposes, human approves) → Auto (agent acts independently)
  • Allowed tools and permissions: Least privilege. List every tool/API the agent can access
  • Confirmation gates: Which actions require human approval
  • Failure modes: What happens when the agent fails, hallucinates, or exceeds scope
  • Cost budget: Token/compute/API limits per operation
  • Context pipeline: Consult context-engineering skill for stack design, RAG, memory, agent harness
  • Eval strategy: Offline eval set (fixtures, synthetic + real cases), online monitoring (drift, failure rates)

</agent_contract>
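The agent contract fields above can be captured as a typed structure. A minimal sketch with a hypothetical `AgentContract` dataclass; the field names mirror the bullets in this step, and only a subset is modeled:

```python
from dataclasses import dataclass, field

AUTONOMY = ("assist", "semi-auto", "auto")

@dataclass
class AgentContract:
    autonomy: str                                        # one of AUTONOMY
    allowed_tools: list = field(default_factory=list)    # least privilege
    confirmation_gates: list = field(default_factory=list)
    cost_budget_tokens: int = 0                          # per-operation budget

    def __post_init__(self):
        if self.autonomy not in AUTONOMY:
            raise ValueError(f"unknown autonomy level: {self.autonomy}")
```

Validating the autonomy level up front keeps the Assist → Semi-auto → Auto ladder explicit in the artifact.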

7. Compile Deliverable

Full PRD

Compile all sections into a structured PRD:

```markdown
# PRD: [Feature Name]

## Problem Statement
[From Step 2a]

## Target Users
[From Step 2b — JTBD, ICP, personas]

## Scope and Non-Goals
[From Step 2c — what's in, what's explicitly out]

## Requirements
[From Step 3a — MoSCoW prioritized]

## Non-Functional Requirements
[From Step 3b]

## Acceptance Criteria
[From Step 3c]

## Success Metrics
[From Step 4]

## Risks and Mitigations
[From Step 5]

## Agent Contract
[From Step 6 — only for AI features]

## Rollout Strategy
- Phase 1: Internal / dogfood
- Phase 2: Beta (limited users, feature flag)
- Phase 3: GA (all users)
- Backward compatibility notes
- Feature flag name and config
```

Feature Brief (for small scope)

Abbreviated format — single document:

```markdown
# Feature Brief: [Feature Name]

**Problem**: [1–2 sentences]
**Users**: [target segment]
**JTBD**: When [situation], I want to [motivation], so I can [outcome]

## Requirements
[Must-have list only — max 5 items]

## Acceptance Criteria
[Given/When/Then for each requirement]

## Success Metric
[Single primary metric + target]

## Risks
[Top 1–3 risks with mitigations]
```

Present the compiled deliverable to the user for review.

8. Multi-Reviewer Feedback Loop

The PRD or Feature Brief MUST pass a mandatory multi-reviewer cycle before handoff. Do not update FEATURES.md or hand off to /architecture or /plan until every reviewer returns approved.

Reviewer Panel

Spawn each role as an independent named subagent per @team-protocols Spawn Primitives. If Agent is unavailable, apply the four roles sequentially in the main thread and note the degraded fan-out in the Review History. Each reviewer runs with the current deliverable as input:

  • Agent(product-manager) as reviewer-product — problem framing, scope, requirements clarity, acceptance criteria testability, success metrics, risk coverage
  • Agent(marketing-strategist) as reviewer-marketing — positioning, target audience alignment, messaging, GTM fit, competitive differentiation
  • Agent(content-writer) as reviewer-content — structure, terminology consistency, clarity, readability
  • Agent(seo-engineer) as reviewer-seo — feature naming searchability, keyword alignment, AI citability (GEO/AEO), discoverability hooks for public-facing surfaces

Cycle

  1. Spawn reviewers in parallel. Each produces a findings report: Critical (must fix), Major (should fix, justify if waived), Minor (optional), plus an explicit verdict: approved / approved-with-changes / rejected
  2. Collect all reports before editing
  3. Apply all actionable findings. Resolve conflicts with priority Critical > Major > Minor; on ties, product-manager > content-writer > marketing-strategist > seo-engineer. Record waivers with a one-line rationale
  4. Re-spawn the same four reviewers against the updated deliverable
  5. Loop until every reviewer returns approved with zero remaining critical/major findings

Termination: pass when all four are approved. On divergence (findings not shrinking, mutually exclusive asks) — pause and ask the user to arbitrate. Max 5 cycles before escalation.

Record the review history at the bottom of the deliverable as a ## Review History section listing each cycle's reviewer verdicts and open issue counts.
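The cycle above can be sketched as a loop over reviewer callables. This is a hypothetical harness, not the skill's actual spawning mechanism: each reviewer returns a report with a `verdict` key, `apply_findings` folds the reports back into the deliverable, and the max-cycle rule triggers escalation:

```python
def review_cycle(deliverable, reviewers, apply_findings, max_cycles=5):
    """Loop the reviewer panel until all verdicts are 'approved';
    escalate to the user after max_cycles, per the termination rule."""
    history = []
    for cycle in range(1, max_cycles + 1):
        reports = [review(deliverable) for review in reviewers]  # fan-out
        history.append([r["verdict"] for r in reports])
        if all(r["verdict"] == "approved" for r in reports):
            return deliverable, history
        deliverable = apply_findings(deliverable, reports)
    raise RuntimeError("max review cycles reached; ask the user to arbitrate")
```

The returned `history` is exactly what the ## Review History section records: one list of verdicts per cycle.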

9. Update FEATURES.md

After the PRD/brief is approved:

  1. If FEATURES.md does not exist — create it at the project root
  2. Add or update the feature entry:
    • Name, one-line description
    • Status: planned
    • Link to PRD/brief in features/ directory
  3. Save the PRD/brief to features/[feature-name].md (kebab-case)

10. Handoff

Guide the next step based on feature complexity:

| Complexity | Next Step |
| --- | --- |
| Small (brief) | Run /plan directly — architecture is implicit |
| Medium (PRD) | Run /architecture for technical design, then /plan |
| Large (PRD + spikes) | Execute spikes first, then /architecture → /plan |
| AI/Agent (PRD + contract) | Run /architecture with agent contract as input → /plan |
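The handoff routing is another pure lookup, sketched here with a hypothetical `handoff` helper whose values paraphrase the table in this step:

```python
NEXT_STEP = {
    "small": ["/plan"],
    "medium": ["/architecture", "/plan"],
    "large": ["spikes", "/architecture", "/plan"],
    "ai/agent": ["/architecture (with agent contract)", "/plan"],
}

def handoff(complexity: str) -> list:
    """Return the ordered next steps for the given complexity."""
    return NEXT_STEP[complexity.lower()]
```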

Integration

  • Input: Raw feature resources (ideas, research, feedback, stakeholder requests)
  • Followed by: /architecture (technical design), /plan (work decomposition)
  • Roles: Agent(product-manager) (primary — owns PRD), Agent(marketing-strategist) + Agent(content-writer) + Agent(seo-engineer) (Step 8 reviewers), Agent(solution-architect) (consulted for feasibility)
  • Skills: context-engineering (for AI/Agent features, Step 6), @team-protocols (reviewer spawning, Step 8)
  • Updates: FEATURES.md, features/ directory
  • Enables: Full planning chain: /product → /architecture → /plan → /feature-dev

Related Skills

Looking for an alternative to product or another community skill for your workflow? Explore these related open-source skills.

openclaw-release-maintainer (openclaw)

Summary: 🦞 OpenClaw Release Maintainer. Use this skill for release and publish-time workflows. It covers ai, assistant, and crustacean workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

widget-generator (f)

Summary: generate customizable widget plugins for the prompts.chat feed system. This skill guides creation of widget plugins for prompts.chat. It covers ai, artificial-intelligence, and awesome-list workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

flags (vercel)

Summary: the React Framework. Feature Flags: use this skill when adding or changing framework feature flags in Next.js internals. It covers blog, browser, and compiler workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.

pr-review (pytorch)

Summary: usage modes. If the user invokes /pr-review with no arguments, do not perform a review. It covers autograd, deep-learning, and gpu workflows. This AI agent skill supports Claude Code, Cursor, and Windsurf workflows.