
v1.0.0

About this Skill

product is an AI agent skill that applies the Agent(product-manager) role throughout a structured feature-design workflow, turning raw feature inputs into product documentation such as PRDs and feature briefs.

Features

  • Applies the Agent(product-manager) role for all workflow steps
  • Reads CLAUDE.md (or AGENTS.md) at the project root to identify:
    • Product description and domain
    • Existing features and modules (to avoid duplication and understand scope)
    • Tech stack constraints (affects feasibility assessment)

avav25
Updated: 4/30/2026

Killer-Skills Review

Decision support comes first. Repository text comes second.

Landing Page Review Score: 10/11

Killer-Skills keeps this page indexable because it adds recommendation, limitations, and review signals beyond the upstream repository text.

  • Original recommendation layer
  • Concrete use-case guidance
  • Explicit limitations and cautions
  • Quality floor passed for review
  • Locale and body language aligned

Review Score: 10/11
Quality Score: 55
Canonical Locale: en
Detected Body Locale: en


Core Value

product helps agents act as a product manager: it transforms raw feature inputs (ideas, research, feedback, stakeholder requests) into structured product documentation such as PRDs and feature briefs.

Ideal Agent Persona

Ideal for AI agents that need to take on the product-manager role: framing problems, defining requirements, and producing PRDs or feature briefs before technical design begins.

Capabilities Granted for product

  • Apply the Agent(product-manager) role for all workflow steps
  • Read CLAUDE.md (or AGENTS.md) at the project root to gather project context
  • Extract the product description, domain, and existing feature scope before designing

! Prerequisites & Limits

  • Works best when CLAUDE.md (or AGENTS.md) exists at the project root to supply product context
  • Product design only — no code, architecture, or engineering decisions are made by this skill
  • If critical signals are missing (no clear problem statement, no target user), the skill asks before proceeding rather than inventing user needs

Source Boundary

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

After The Review

Decide The Next Action Before You Keep Reading Repository Material

A review should do more than surface repository instructions: it should help you decide whether to install this skill, when to cross-check it against trusted collections, and when to move into workflow rollout.

Labs Demo

Browser Sandbox Environment

⚡️ Ready to unleash?

Experience this Agent in a zero-setup browser environment powered by WebContainers. No installation required.


FAQ & Installation Steps

These questions and steps mirror the structured data on this page for better search understanding.

? Frequently Asked Questions

What is product?

product is an AI agent skill that applies the Agent(product-manager) role to transform raw feature inputs into structured product documentation (PRDs and feature briefs).

How do I install product?

Run the command: npx killer-skills add avav25/ai-assets/product. It works with Cursor, Windsurf, VS Code, Claude Code, and 19+ other IDEs.

What are the use cases for product?

Key use cases include: acting as a product manager during feature design, gathering project context from CLAUDE.md (or AGENTS.md), and turning raw feature inputs into prioritized requirements, acceptance criteria, and success metrics.

Which IDEs are compatible with product?

This skill is compatible with Cursor, Windsurf, VS Code, Trae, Claude Code, OpenClaw, Aider, Codex, OpenCode, Goose, Cline, Roo Code, Kiro, Augment Code, Continue, GitHub Copilot, Sourcegraph Cody, and Amazon Q Developer. Use the Killer-Skills CLI for universal one-command installation.

Are there any limitations for product?

The skill makes product-design decisions only — it does not produce code or architecture. It relies on project context files (CLAUDE.md or AGENTS.md) for scope, and when the problem statement or target user is unclear it asks for clarification rather than inventing user needs.

How To Install

  1. Open your terminal

    Open the terminal or command line in your project directory.

  2. Run the install command

    Run: npx killer-skills add avav25/ai-assets/product. The CLI will automatically detect your IDE or AI agent and configure the skill.

  3. Start using the skill

    The skill is now active. Your AI agent can use product immediately in the current project.

Upstream Repository Material

The section below is imported from the upstream repository and should be treated as secondary evidence. Use the Killer-Skills review above as the primary layer for fit, risk, and installation decisions.

Upstream Source

product

AI Assets for Claude Code, Codex, and Windsurf. Feature Design: transform raw feature inputs into structured product documentation, applying the Agent(product-manager) role.

SKILL.md
Supporting Evidence

Feature Design

Transform raw feature inputs into structured product documentation. This is the product design phase — no code, architecture, or engineering decisions are made here. Output feeds into /architecture for technical design and /plan for implementation planning.

Apply Agent(product-manager) role for all steps below.

0. Gather Context

Read CLAUDE.md (or AGENTS.md) at the project root to identify:

  • Product description and domain
  • Existing features and modules (to avoid duplication and understand scope)
  • Tech stack constraints (affects feasibility assessment)
  • Team structure (affects user story assignment)

If the project has a FEATURES.md, read it for the current feature registry.

1. Receive Feature Inputs

Gather all available resources describing the feature:

  • Accepted inputs: idea brief, user research, customer feedback, support tickets, competitive analysis, stakeholder request, verbal description, market data, analytics report
  • Read every provided document thoroughly
  • If the user provides a verbal description only — proceed to Step 2 to structure it through discovery

Extract Raw Signals

From the provided inputs, extract and organize:

| Signal | Source | Notes |
| --- | --- | --- |
| Problem | [what pain exists, for whom] | [evidence or assumption] |
| Opportunity | [what business value] | [data points] |
| User segments | [who needs this] | [ICP/persona indicators] |
| Competitive context | [how others solve it] | [differentiation angle] |
| Constraints | [timeline, budget, technical, regulatory] | [hard vs soft] |
| Existing assets | [related features, prior art, dependencies] | [links/refs] |

If critical signals are missing (no clear problem statement, no target user) — ask before proceeding. Do not invent user needs.
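As an illustration, a filled-in signal table for a hypothetical "saved search alerts" feature (all values invented for the example, not taken from the skill) might look like:

```markdown
| Signal | Source | Notes |
| --- | --- | --- |
| Problem | Power users re-run the same search daily (support tickets, session replays) | Evidence: repeat searches within 24h are common among top accounts |
| Opportunity | Higher retention via proactive notifications | Assumption — no direct revenue data yet |
| User segments | Power users on the Pro plan | ICP: recruiters, analysts |
| Competitive context | Competitor offers email alerts only | Differentiation: in-app + webhook delivery |
| Constraints | Must ship this quarter; no new infrastructure | Hard: timeline; soft: infra |
| Existing assets | Existing search service, notification module | Reuse notification templates |
```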

2. Discovery and Framing

2a. Problem Validation

Define the problem with precision:

  1. Problem statement: One paragraph — who has the problem, what the problem is, why it matters now
  2. Evidence: Data points supporting the problem (usage metrics, support volume, churn correlation, user quotes, competitive pressure)
  3. Impact of inaction: What happens if we do nothing

If evidence is weak — flag this as a risk. Do not fabricate data.

2b. Target Users

Apply JTBD and ICP frameworks from Agent(product-manager):

  • Job-to-be-Done: When [situation], I want to [motivation], so I can [outcome]
  • ICP segments: Which customer profiles benefit most. Include triggers, buying signals, objections
  • Personas (if useful): Name, role, goal, frustration. Keep lightweight — max 2 personas
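Applied to the same hypothetical saved-search-alerts example, the JTBD and ICP outputs might read (illustrative only):

```markdown
**JTBD**: When I run the same search every morning, I want to be notified of new matching results, so I can act on them before my competitors do.

**ICP segment**: Recruiters on the Pro plan; trigger: 3+ identical searches per week; likely objection: notification fatigue.
```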

2c. Scope Decision

Based on complexity, decide the deliverable type:

| Complexity | Signals | Deliverable |
| --- | --- | --- |
| Small | Single service, < 1 week effort, clear solution | Feature Brief (abbreviated PRD) |
| Medium | Multi-component, 1–4 weeks, some unknowns | Full PRD |
| Large | Multi-service, 4+ weeks, significant unknowns, new capabilities | Full PRD + Spike/Discovery tasks |
| AI/Agent | LLM-powered, autonomous behavior, trust/safety concerns | Full PRD + Agent Contract + Eval Strategy |

Present the scope decision to the user for confirmation before proceeding.

3. Requirements Definition

3a. Functional Requirements

List requirements prioritized using MoSCoW:

  • Must have: Core functionality — feature is useless without these
  • Should have: Important but not blocking launch
  • Could have: Nice-to-have, defer if constrained
  • Won't have (this iteration): Explicitly excluded — prevents scope creep

Each requirement: one sentence, testable, outcome-focused. Never specify implementation — describe what, not how.
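A sketch of a MoSCoW-prioritized requirement list for the hypothetical saved-search-alerts feature (illustrative, not part of the skill):

```markdown
## Requirements

**Must have**
- Users can save a search and enable alerts for it
- Users receive an in-app notification when new results match a saved search

**Should have**
- Users can choose alert frequency (instant or daily digest)

**Could have**
- Webhook delivery for saved-search alerts

**Won't have (this iteration)**
- Email or SMS delivery channels
```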

3b. Non-Functional Requirements

Identify relevant NFRs:

| Category | Requirement | Target |
| --- | --- | --- |
| Performance | [e.g., response time] | [e.g., < 200ms p95] |
| Security | [e.g., auth, data protection] | [e.g., RBAC, PII encrypted] |
| Scalability | [e.g., concurrent users] | [e.g., 10K concurrent] |
| Accessibility | [e.g., WCAG level] | [e.g., WCAG 2.2 AA] |
| Compliance | [e.g., GDPR, SOC2] | [specific requirements] |

Only include categories relevant to this feature.

3c. Acceptance Criteria

Write acceptance criteria for every Must-have and Should-have requirement:

  • Use Given/When/Then format for behavioral criteria
  • Include happy path, edge cases, error states
  • Include operational criteria: "no new errors in logs", "latency < Xms p95"
  • Include security criteria: "unauthorized users receive 403", "PII not logged"
  • Each criterion is independently testable
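Illustrative Given/When/Then criteria for one hypothetical Must-have requirement (all details invented for the example):

```markdown
- Given a user with a saved search and alerts enabled, when a new item matches the search, then an in-app notification is created within 60 seconds
- Given a user without permission to the underlying data, when a matching item appears, then no notification is created and no PII is logged
- Given the notification service is unavailable, when a match occurs, then the event is queued and retried, and no new errors appear in logs beyond the known retry warning
```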

4. Success Metrics

Define how to measure feature success:

4a. Metric Framework

| Metric Type | Metric | Target | Measurement Method |
| --- | --- | --- | --- |
| Primary (North Star) | [e.g., activation rate] | [target value] | [how to measure] |
| Leading | [e.g., feature adoption D7] | [target value] | [how to measure] |
| Guardrail | [e.g., error rate, latency] | [must not exceed] | [how to measure] |

4b. Instrumentation Plan

  • Events to track (name, properties, trigger)
  • Dashboards to create or update
  • Alerts to configure (guardrail breaches)
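A minimal sketch of an event list under this plan (event names and properties are invented for the example):

```markdown
| Event | Properties | Trigger |
| --- | --- | --- |
| saved_search_created | search_id, user_id, filters_count | User saves a search with alerts on |
| alert_delivered | search_id, latency_ms, channel | Notification rendered in-app |
| alert_opened | search_id, time_to_open_ms | User clicks the notification |
```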

5. Risk Assessment

| Risk | Type | Impact | Likelihood | Mitigation |
| --- | --- | --- | --- | --- |
| [description] | Technical / Business / Security / Compliance | High/Med/Low | High/Med/Low | [strategy] |

Mandatory risk categories to evaluate:

  • Security threats (apply OWASP Top 10; add LLM Top 10 for AI features)
  • Data handling risks (PII, secrets, retention)
  • Backward compatibility and migration
  • Dependency on external systems or teams
  • User abuse scenarios

6. AI/Agent Addendum

Include this section only for AI-powered or agent features. Skip for standard features.

<agent_contract>

  • Autonomy level: Assist (human decides) → Semi-auto (agent proposes, human approves) → Auto (agent acts independently)
  • Allowed tools and permissions: Least privilege. List every tool/API the agent can access
  • Confirmation gates: Which actions require human approval
  • Failure modes: What happens when the agent fails, hallucinates, or exceeds scope
  • Cost budget: Token/compute/API limits per operation
  • Context pipeline: Consult context-engineering skill for stack design, RAG, memory, agent harness
  • Eval strategy: Offline eval set (fixtures, synthetic + real cases), online monitoring (drift, failure rates)

</agent_contract>
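For an AI-powered feature, a filled-in contract might look like the following hypothetical sketch (tool names and limits are invented for illustration):

```markdown
<agent_contract>
- Autonomy level: Semi-auto — agent drafts alert summaries, human approves before send
- Allowed tools and permissions: search_api (read-only), notification_api (create only)
- Confirmation gates: any message sent to more than 100 users
- Failure modes: on low-confidence summary, fall back to the raw result list; never fabricate results
- Cost budget: max 2K tokens per summary, 10K summaries/day
- Eval strategy: 200-case offline fixture set; online monitoring of open-rate drift
</agent_contract>
```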

7. Compile Deliverable

Full PRD

Compile all sections into a structured PRD:

```markdown
# PRD: [Feature Name]

## Problem Statement
[From Step 2a]

## Target Users
[From Step 2b — JTBD, ICP, personas]

## Scope and Non-Goals
[From Step 2c — what's in, what's explicitly out]

## Requirements
[From Step 3a — MoSCoW prioritized]

## Non-Functional Requirements
[From Step 3b]

## Acceptance Criteria
[From Step 3c]

## Success Metrics
[From Step 4]

## Risks and Mitigations
[From Step 5]

## Agent Contract
[From Step 6 — only for AI features]

## Rollout Strategy
- Phase 1: Internal / dogfood
- Phase 2: Beta (limited users, feature flag)
- Phase 3: GA (all users)
- Backward compatibility notes
- Feature flag name and config
```

Feature Brief (for small scope)

Abbreviated format — single document:

```markdown
# Feature Brief: [Feature Name]

**Problem**: [1–2 sentences]
**Users**: [target segment]
**JTBD**: When [situation], I want to [motivation], so I can [outcome]

## Requirements
[Must-have list only — max 5 items]

## Acceptance Criteria
[Given/When/Then for each requirement]

## Success Metric
[Single primary metric + target]

## Risks
[Top 1–3 risks with mitigations]
```

Present the compiled deliverable to the user for review.

8. Multi-Reviewer Feedback Loop

The PRD or Feature Brief MUST pass a mandatory multi-reviewer cycle before handoff. Do not update FEATURES.md or hand off to /architecture / /plan until every reviewer returns approved.

Reviewer Panel

Spawn each role as an independent named subagent per @team-protocols Spawn Primitives. If Agent is unavailable, apply the four roles sequentially in the main thread and note the degraded fan-out in the Review History. Each reviewer runs with the current deliverable as input:

  • Agent(product-manager) as reviewer-product — problem framing, scope, requirements clarity, acceptance criteria testability, success metrics, risk coverage
  • Agent(marketing-strategist) as reviewer-marketing — positioning, target audience alignment, messaging, GTM fit, competitive differentiation
  • Agent(content-writer) as reviewer-content — structure, terminology consistency, clarity, readability
  • Agent(seo-engineer) as reviewer-seo — feature naming searchability, keyword alignment, AI citability (GEO/AEO), discoverability hooks for public-facing surfaces

Cycle

  1. Spawn reviewers in parallel. Each produces a findings report: Critical (must fix), Major (should fix, justify if waived), Minor (optional), plus an explicit verdict: approved / approved-with-changes / rejected
  2. Collect all reports before editing
  3. Apply all actionable findings. Resolve conflicts with priority Critical > Major > Minor; on ties, product-manager > content-writer > marketing-strategist > seo-engineer. Record waivers with a one-line rationale
  4. Re-spawn the same four reviewers against the updated deliverable
  5. Loop until every reviewer returns approved with zero remaining critical/major findings

Termination: pass when all four are approved. On divergence (findings not shrinking, mutually exclusive asks) — pause and ask the user to arbitrate. Max 5 cycles before escalation.

Record the review history at the bottom of the deliverable as a ## Review History section listing each cycle's reviewer verdicts and open issue counts.
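A hypothetical Review History section after two cycles might look like (verdicts and counts invented for the example):

```markdown
## Review History

| Cycle | reviewer-product | reviewer-marketing | reviewer-content | reviewer-seo | Open Critical/Major |
| --- | --- | --- | --- | --- | --- |
| 1 | approved-with-changes | rejected | approved | approved-with-changes | 2 / 3 |
| 2 | approved | approved | approved | approved | 0 / 0 |
```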

9. Update FEATURES.md

After the PRD/brief is approved:

  1. If FEATURES.md does not exist — create it at the project root
  2. Add or update the feature entry:
    • Name, one-line description
    • Status: planned
    • Link to PRD/brief in features/ directory
  3. Save the PRD/brief to features/[feature-name].md (kebab-case)
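An illustrative FEATURES.md entry for a hypothetical feature (names invented):

```markdown
## saved-search-alerts
Notify users when new results match a saved search.
- Status: planned
- PRD: features/saved-search-alerts.md
```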

10. Handoff

Guide the next step based on feature complexity:

| Complexity | Next Step |
| --- | --- |
| Small (brief) | Run /plan directly — architecture is implicit |
| Medium (PRD) | Run /architecture for technical design, then /plan |
| Large (PRD + spikes) | Execute spikes first, then /architecture → /plan |
| AI/Agent (PRD + contract) | Run /architecture with agent contract as input → /plan |

Integration

  • Input: Raw feature resources (ideas, research, feedback, stakeholder requests)
  • Followed by: /architecture (technical design), /plan (work decomposition)
  • Roles: Agent(product-manager) (primary — owns PRD), Agent(marketing-strategist) + Agent(content-writer) + Agent(seo-engineer) (Step 8 reviewers), Agent(solution-architect) (consulted for feasibility)
  • Skills: context-engineering (for AI/Agent features, Step 6), @team-protocols (reviewer spawning, Step 8)
  • Updates: FEATURES.md, features/ directory
  • Enables: Full planning chain: /product → /architecture → /plan → /feature-dev

Related Skills

Looking for an alternative to product or another community skill for your workflow? Explore these related open-source skills.

  • openclaw-release-maintainer (openclaw) — an AI agent skill for the openclaw release maintainer
  • widget-generator (f) — generate customizable widget plugins for the prompts.chat feed system
  • flags (vercel) — use this skill when adding or changing framework feature flags in Next.js internals
  • pr-review (pytorch) — an AI agent skill for PyTorch PR review