Killer-Skills

team-implement — parallel implementation with Agent Teams

v1.0.0
GitHub

About this Skill

team-implement enables parallel implementation using Agent Teams, executing plans approved in /startproject against the architecture documented in .claude/docs/DESIGN.md. Ideal for Project Management Agents requiring parallel task execution and team composition analysis.

Features

Analyzes task dependencies from the plan approved in /startproject
Determines team composition based on those dependencies
Spawns an Agent Team with one Teammate per module/layer
Executes the implementation in parallel based on the approved plan

Author: DeL-TaiseiOzaki
Updated: 2/25/2026

Installation

> npx killer-skills add DeL-TaiseiOzaki/claude-code-orchestra/team-implement

Agent Capability Analysis

The team-implement MCP Server by DeL-TaiseiOzaki is an open-source community integration for Claude and other AI agents, enabling parallel implementation with Agent Teams and seamless task automation.

Ideal Agent Persona

Ideal for Project Management Agents requiring parallel task execution and team composition analysis.

Core Value

Empowers agents to execute parallel implementation using Agent Teams based on plans approved in /startproject: it analyzes task dependencies, determines team composition, and draws on the architecture documented in .claude/docs/DESIGN.md and the task list.

Capabilities Granted for team-implement MCP Server

Automating project plan execution
Analyzing task dependencies for team composition
Optimizing parallel implementation workflows

Prerequisites & Limits

  • /startproject must be complete with an approved plan
  • Architecture must be documented in .claude/docs/DESIGN.md
  • Task list must be created

SKILL.md

Team Implement

Parallel implementation using Agent Teams. Executes based on the plan approved in /startproject.

Prerequisites

  • /startproject is complete and the plan has been approved by the user
  • Architecture is documented in .claude/docs/DESIGN.md
  • Task list has been created

Workflow

Step 1: Analyze Plan & Design Team
  Analyze task dependencies from the plan and determine team composition
    ↓
Step 2: Spawn Agent Team
  Launch Teammates per module/layer
    ↓
Step 3: Monitor & Coordinate
  Lead monitors, integrates, and manages quality
    ↓
Step 4: Integration & Verification
  After all tasks complete, run integration tests

Step 1: Analyze Plan & Design Team

Identify parallelizable workstreams from the task list.

Team Design Principles

  1. File ownership separation: Each Teammate owns a different set of files
  2. Respect dependencies: Dependent tasks go to the same Teammate or execute in dependency order
  3. Appropriate granularity: Target 5-6 tasks per Teammate

Common Team Patterns

Pattern A: Module-Based (Recommended)

Teammate 1: Module A (models, core logic)
Teammate 2: Module B (API, endpoints)
Teammate 3: Tests (unit + integration)

Pattern B: Layer-Based

Teammate 1: Data layer (models, DB)
Teammate 2: Business logic (services)
Teammate 3: Interface layer (API/CLI)

Pattern C: Feature-Based

Teammate 1: Feature X (all layers)
Teammate 2: Feature Y (all layers)
Teammate 3: Shared infrastructure

Anti-patterns

  • Two Teammates editing the same file → overwrite risk
  • Too many tasks per Teammate → one long-running Teammate while the rest sit idle
  • Overly complex dependencies → coordination costs outweigh benefits
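
The principles and anti-patterns above can be sketched as a small helper. The following Python sketch is illustrative only: the task names, file paths, and grouping heuristic are hypothetical, and a real plan would come from the /startproject task list.

```python
from collections import defaultdict

# Hypothetical task records; names, files, and deps are illustrative only.
TASKS = [
    {"name": "models", "files": ["src/models.py"], "deps": []},
    {"name": "db",     "files": ["src/db.py"],     "deps": ["models"]},
    {"name": "api",    "files": ["src/api.py"],    "deps": ["models"]},
    {"name": "cli",    "files": ["src/cli.py"],    "deps": []},
]

def design_team(tasks, max_tasks=6):
    """Group tasks into teammates: a task with exactly one already-assigned
    dependency joins that owner (principle 2) unless the owner is full
    (principle 3); otherwise it starts a new teammate."""
    owner, teams = {}, defaultdict(list)
    for task in tasks:  # assumes tasks are listed in dependency order
        dep_owners = {owner[d] for d in task["deps"] if d in owner}
        if len(dep_owners) == 1 and len(teams[next(iter(dep_owners))]) < max_tasks:
            tid = dep_owners.pop()
        else:
            tid = len(teams)
        owner[task["name"]] = tid
        teams[tid].append(task["name"])
    return dict(teams)

def file_conflicts(tasks, teams):
    """Flag files owned by more than one teammate (anti-pattern 1)."""
    tid_of = {name: tid for tid, names in teams.items() for name in names}
    by_file = defaultdict(set)
    for task in tasks:
        for path in task["files"]:
            by_file[path].add(tid_of[task["name"]])
    return sorted(f for f, owners in by_file.items() if len(owners) > 1)

teams = design_team(TASKS)
print(teams)                         # {0: ['models', 'db', 'api'], 1: ['cli']}
print(file_conflicts(TASKS, teams))  # [] -> ownership sets are disjoint
```

The conflict check makes anti-pattern 1 mechanically verifiable before any Teammate is spawned.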

Step 2: Spawn Agent Team

Launch the team based on the plan.

Create an agent team for implementing: {feature}

Each teammate receives:
- Project Brief from CLAUDE.md
- Architecture from .claude/docs/DESIGN.md
- Library constraints from .claude/docs/libraries/
- Their specific task assignments

Spawn teammates:

1. **Implementer-{module}** for each module/workstream
   Prompt: "You are implementing {module} for project: {feature}.

   Read these files for context:
   - CLAUDE.md (project context)
   - .claude/docs/DESIGN.md (architecture)
   - .claude/docs/libraries/ (library constraints)

   Your assigned tasks:
   {task list for this teammate}

   Your file ownership:
   {list of files this teammate owns}

   Rules:
   - ONLY edit files in your ownership set
   - Follow existing codebase patterns
   - Write type hints on all functions
   - Run ruff check after each file change
   - Communicate with other teammates if you need interface changes

   When done with each task, mark it completed in the task list.

   IMPORTANT — Work Log:
   When ALL your assigned tasks are complete, write a work log file to:
     .claude/logs/agent-teams/{team-name}/{your-teammate-name}.md

   Use this format:
   # Work Log: {your-teammate-name}
   ## Summary
   (1-2 sentence summary of what you accomplished)
   ## Tasks Completed
   - [x] {task}: {brief description of what was done}
   ## Files Modified
   - `{file path}`: {what was changed and why}
   ## Key Decisions
   - {decision made during implementation and rationale}
   ## Communication with Teammates
   - → {recipient}: {summary of message sent}
   - ← {sender}: {summary of message received}
   ## Issues Encountered
   - {issue}: {how it was resolved}
   (If none, write 'None')
   "

2. **Tester** (optional but recommended)
   Prompt: "You are the Tester for project: {feature}.

   Read:
   - CLAUDE.md, .claude/docs/DESIGN.md
   - Existing test patterns in tests/

   Your tasks:
   - Write tests for each module as implementers complete them
   - Follow TDD where possible (write test stubs first)
   - Run uv run pytest after each test file
   - Report failing tests to the relevant implementer

   Test coverage target: 80%+

   IMPORTANT — Work Log:
   When ALL your assigned tasks are complete, write a work log file to:
     .claude/logs/agent-teams/{team-name}/{your-teammate-name}.md

   Use this format:
   # Work Log: {your-teammate-name}
   ## Summary
   (1-2 sentence summary of what you accomplished)
   ## Tasks Completed
   - [x] {task}: {brief description of what was done}
   ## Files Modified
   - `{file path}`: {what was changed and why}
   ## Key Decisions
   - {decision made during implementation and rationale}
   ## Communication with Teammates
   - → {recipient}: {summary of message sent}
   - ← {sender}: {summary of message received}
   ## Issues Encountered
   - {issue}: {how it was resolved}
   (If none, write 'None')
   "

Use delegate mode (Shift+Tab) to prevent Lead from implementing directly.
Wait for all teammates to complete their tasks.

Step 3: Monitor & Coordinate

Lead focuses on monitoring and integration, not implementing.

Monitoring Checklist

  • Check task list progress (Ctrl+T)
  • Review each Teammate's output (Shift+Up/Down)
  • Verify no file conflicts
  • Check if any Teammate is stuck

Intervention Triggers

| Situation | Response |
| --- | --- |
| Teammate not making progress for a long time | Send a message to check; re-instruct if needed |
| File conflict detected | Reassign file ownership |
| Tests keep failing | Send a message to the relevant Implementer |
| Unexpected technical issue | Consult Codex (via subagent) |

Quality Gates (via Hooks)

The TeammateIdle and TaskCompleted hooks automatically run quality checks:

  • Lint check (ruff)
  • Test execution (pytest)
  • Type check (ty)
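
As an illustrative sketch (not the actual hook implementation, which is environment-specific), the same three checks can be driven by a small runner that reports which gates failed:

```python
import subprocess

# The commands mirror the quality gates above (ruff, pytest, ty).
CHECKS = [
    ("lint",  ["uv", "run", "ruff", "check", "."]),
    ("tests", ["uv", "run", "pytest", "-q"]),
    ("types", ["uv", "run", "ty", "check", "src/"]),
]

def run_gates(runner=subprocess.run):
    """Run each check and return the names of the gates that failed.
    `runner` is injectable so the logic can be exercised without a real project."""
    failed = []
    for name, cmd in CHECKS:
        result = runner(cmd, capture_output=True, text=True)
        status = "PASS" if result.returncode == 0 else "FAIL"
        print(f"{name}: {status}")
        if status == "FAIL":
            failed.append(name)
    return failed
```

A non-empty result is the Lead's cue to message the relevant Implementer rather than fix the code directly.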

Step 4: Integration & Verification

After all tasks are complete, run integration verification.

```bash
# All quality checks
uv run ruff check .
uv run ruff format --check .
uv run ty check src/
uv run pytest -v

# Or via poe
poe all
```

Integration Report

```markdown
## Implementation Complete: {feature}

### Completed Tasks
- [x] {task 1}
- [x] {task 2}
...

### Quality Checks
- ruff: PASS / FAIL
- ty: PASS / FAIL
- pytest: PASS ({N} tests passed)
- coverage: {N}%

### Next Steps
Run `/team-review` for parallel review
```
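
The report can also be assembled programmatically. A hypothetical Python sketch (the feature name, task list, and check results below are example inputs; the layout follows the template above):

```python
def integration_report(feature, tasks_done, checks, n_tests, coverage):
    """Render the 'Implementation Complete' report from quality-check results.
    `checks` maps tool name -> bool (True means PASS)."""
    lines = [f"## Implementation Complete: {feature}", "", "### Completed Tasks"]
    lines += [f"- [x] {task}" for task in tasks_done]
    lines += [
        "",
        "### Quality Checks",
        f"- ruff: {'PASS' if checks['ruff'] else 'FAIL'}",
        f"- ty: {'PASS' if checks['ty'] else 'FAIL'}",
        f"- pytest: {'PASS' if checks['pytest'] else 'FAIL'} ({n_tests} tests passed)",
        f"- coverage: {coverage}%",
        "",
        "### Next Steps",
        "Run `/team-review` for parallel review",
    ]
    return "\n".join(lines)

# Example with illustrative inputs:
report = integration_report(
    "auth", ["add login endpoint", "add session model"],
    {"ruff": True, "ty": True, "pytest": True}, n_tests=42, coverage=87,
)
print(report)
```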

Cleanup

Clean up the team

Tips

  • Delegate mode: Use Shift+Tab to prevent Lead from implementing directly
  • Task granularity: 5-6 tasks per Teammate is optimal
  • File conflict prevention: Module-level ownership separation is the most important factor
  • Separate Tester: Having a dedicated Tester separate from Implementers enables a TDD-like workflow
  • Cost awareness: Each Teammate is an independent Claude instance (high token consumption)
