start — Killer-Skills, v1.0.0
About this Skill

Ideal for AI Development Agents needing streamlined workflow management and initialization. Start is an AI development session initializer that provides a workflow guide and supports various operation types, including Bash scripts and tool calls executed by AI.

Features

  • Initializes AI development sessions using a workflow guide
  • Supports multiple operation types, including Bash scripts and tool calls executed by the AI
  • Marks each step's executor with [AI] and [USER] markers
  • Provides a workflow guide in Markdown format, accessible via `cat .trellis/workflow.md`
  • Supports AI-agent, AI-coding, and workflow-management use cases
  • Built with TypeScript and used from the CLI

Publisher: mindfold-ai
Updated: 3/1/2026
Installation
> npx killer-skills add mindfold-ai/Trellis/start

Agent Capability Analysis

The start MCP Server by mindfold-ai is an open-source community integration for Claude and other AI agents, enabling task automation and capability expansion.

Ideal Agent Persona

Ideal for AI Development Agents needing streamlined workflow management and initialization.

Core Value

Empowers agents to manage AI development sessions, execute Bash scripts, and interact with workflow guides using Markdown files like 'workflow.md'. It provides a comprehensive framework for task initialization and workflow understanding.

Capabilities Granted for start MCP Server

Initializing AI development sessions
Executing Bash scripts for workflow automation
Reading workflow guides for development process understanding

Prerequisites & Limits

  • Requires access to '.trellis/workflow.md' file
  • Limited to Bash script execution

Start Session

Initialize your AI development session and begin working on tasks.


Operation Types

| Marker | Meaning | Executor |
|--------|---------|----------|
| [AI] | Bash scripts or tool calls executed by AI | You (AI) |
| [USER] | Skills executed by user | User |
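
The marker-to-executor routing above can be modeled as a small lookup. This is an illustrative sketch only; the function and return strings are hypothetical, not part of the skill:

```python
# Illustrative sketch: route a workflow step to its executor based on the
# [AI]/[USER] marker in its title. Markers come from the table above.
EXECUTORS = {
    "[AI]": "AI (Bash scripts or tool calls)",
    "[USER]": "User (manually executed skills)",
}

def executor_for(step_title: str) -> str:
    """Return who executes a step like 'Initialization [AI]'."""
    for marker, executor in EXECUTORS.items():
        if step_title.endswith(marker):
            return executor
    raise ValueError(f"No [AI]/[USER] marker in: {step_title!r}")
```

For example, `executor_for("Initialization [AI]")` resolves to the AI executor.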

Initialization [AI]

Step 1: Understand Development Workflow

First, read the workflow guide to understand the development process:

```bash
cat .trellis/workflow.md
```

Follow the instructions in workflow.md - it contains:

  • Core principles (Read Before Write, Follow Standards, etc.)
  • File system structure
  • Development process
  • Best practices

Step 2: Get Current Context

```bash
python3 ./.trellis/scripts/get_context.py
```

This shows: developer identity, git status, current task (if any), active tasks.

Step 3: Read Guidelines Index

```bash
cat .trellis/spec/frontend/index.md  # Frontend guidelines
cat .trellis/spec/backend/index.md   # Backend guidelines
cat .trellis/spec/guides/index.md    # Thinking guides
```

Step 4: Report and Ask

Report what you learned and ask: "What would you like to work on?"


Task Classification

When user describes a task, classify it:

| Type | Criteria | Workflow |
|------|----------|----------|
| Question | User asks about code, architecture, or how something works | Answer directly |
| Trivial Fix | Typo fix, comment update, single-line change, < 5 minutes | Direct Edit |
| Simple Task | Clear goal, 1-2 files, well-defined scope | Quick confirm → Task Workflow |
| Complex Task | Vague goal, multiple files, architectural decisions | Brainstorm → Task Workflow |
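
As a rough illustration, the classification criteria could be sketched as a heuristic. Real classification is a judgment call by the AI; the function below is hypothetical and only mirrors the criteria listed above:

```python
# Hypothetical heuristic mirroring the classification criteria above.
def classify_task(description: str, files_touched: int, goal_is_clear: bool) -> str:
    text = description.lower()
    if text.rstrip().endswith("?") or text.startswith(("how", "why", "what")):
        return "Question"        # answer directly
    if files_touched <= 1 and any(w in text for w in ("typo", "comment", "rename")):
        return "Trivial Fix"     # direct edit
    if goal_is_clear and files_touched <= 2:
        return "Simple Task"     # quick confirm -> Task Workflow
    return "Complex Task"        # brainstorm -> Task Workflow (default when in doubt)
```

Note the default branch: anything vague or multi-file falls through to Complex Task, matching the decision rule below the table.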

Decision Rule

If in doubt, use Brainstorm + Task Workflow.

Task Workflow ensures code-specs are injected into the right context, resulting in higher-quality code. The overhead is minimal, but the benefit is significant.

Subtask Decomposition: If brainstorm reveals multiple independent work items, consider creating subtasks using the --parent flag or the add-subtask command. See the brainstorm skill's Step 8 for details.


Question / Trivial Fix

For questions or trivial fixes, work directly:

  1. Answer question or make the fix
  2. If code was changed, remind user to run $finish-work

Simple Task

For simple, well-defined tasks:

  1. Quick confirm: "I understand you want to [goal]. Ready to proceed?"
  2. If yes, proceed to Task Workflow Phase 1 Path B (create task, write PRD, then research)
  3. If no, clarify and confirm again

Complex Task - Brainstorm First

For complex or vague tasks, use the brainstorm process to clarify requirements.

See $brainstorm for the full process. Summary:

  1. Acknowledge and classify - State your understanding
  2. Create task directory - Track evolving requirements in prd.md
  3. Ask questions one at a time - Update PRD after each answer
  4. Propose approaches - For architectural decisions
  5. Confirm final requirements - Get explicit approval
  6. Proceed to Task Workflow - With clear requirements in PRD

Task Workflow (Development Tasks)

Why this workflow?

  • Run a dedicated research pass before coding
  • Configure specs in jsonl context files
  • Implement using injected context
  • Verify with a separate check pass
  • Result: Code that follows project conventions automatically

Overview: Two Entry Points

From Brainstorm (Complex Task):
  PRD confirmed → Research → Configure Context → Activate → Implement → Check → Complete

From Simple Task:
  Confirm → Create Task → Write PRD → Research → Configure Context → Activate → Implement → Check → Complete

Key principle: Research happens AFTER requirements are clear (PRD exists).
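
The two entry points above can be expressed as phase sequences that share a common tail. The lists below are illustrative (phase names taken from the overview, not an API):

```python
# Sketch of the two entry points as phase sequences. Everything after
# requirements are clear (PRD exists) is identical for both paths.
SHARED_TAIL = ["Research", "Configure Context", "Activate",
               "Implement", "Check", "Complete"]

from_brainstorm = ["PRD confirmed"] + SHARED_TAIL
from_simple_task = ["Confirm", "Create Task", "Write PRD"] + SHARED_TAIL

# Both paths converge on the same six phases.
assert from_brainstorm[-6:] == from_simple_task[-6:] == SHARED_TAIL
```

This makes the key principle concrete: research is part of the shared tail, so it always happens after the PRD exists.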


Phase 1: Establish Requirements

Path A: From Brainstorm (skip to Phase 2)

PRD and task directory already exist from brainstorm. Skip directly to Phase 2.

Path B: From Simple Task

Step 1: Confirm Understanding [AI]

Quick confirm:

  • What is the goal?
  • What type of development? (frontend / backend / fullstack)
  • Any specific requirements or constraints?

If unclear, ask clarifying questions.

Step 2: Create Task Directory [AI]

```bash
TASK_DIR=$(python3 ./.trellis/scripts/task.py create "<title>" --slug <name>)
```

Step 3: Write PRD [AI]

Create prd.md in the task directory with:

```markdown
# <Task Title>

## Goal
<What we're trying to achieve>

## Requirements
- <Requirement 1>
- <Requirement 2>

## Acceptance Criteria
- [ ] <Criterion 1>
- [ ] <Criterion 2>

## Technical Notes
<Any technical decisions or constraints>
```
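
Generating prd.md from the template can be sketched as follows. The `write_prd` helper is hypothetical; only the section layout comes from the template above:

```python
# Minimal sketch: write a prd.md following the template above.
# The function name and signature are illustrative, not part of Trellis.
from pathlib import Path

def write_prd(task_dir: str, title: str, goal: str,
              requirements: list[str], criteria: list[str],
              notes: str = "") -> Path:
    body = "\n".join([
        f"# {title}", "",
        "## Goal", goal, "",
        "## Requirements",
        *[f"- {r}" for r in requirements], "",
        "## Acceptance Criteria",
        *[f"- [ ] {c}" for c in criteria], "",
        "## Technical Notes", notes,
    ])
    path = Path(task_dir) / "prd.md"
    path.write_text(body + "\n", encoding="utf-8")
    return path
```

Acceptance criteria are written as unchecked `- [ ]` boxes so the check phase can tick them off.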

Phase 2: Prepare for Implementation (shared)

Both paths converge here. PRD and task directory must exist before proceeding.

Step 4: Code-Spec Depth Check [AI]

If the task touches infra or cross-layer contracts, do not start implementation until code-spec depth is defined.

Trigger this requirement when the change includes any of:

  • New or changed command/API signatures
  • Database schema or migration changes
  • Infra integrations (storage, queue, cache, secrets, env contracts)
  • Cross-layer payload transformations

Must-have before proceeding:

  • Target code-spec files to update are identified
  • Concrete contract is defined (signature, fields, env keys)
  • Validation and error matrix is defined
  • At least one Good/Base/Bad case is defined
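
The trigger list and must-have checklist above amount to a simple gate. This is an illustrative paraphrase of this section, not a real Trellis API; the trigger names and checklist fields are assumptions:

```python
# Illustrative depth-check gate. Trigger names paraphrase the bullet list above.
TRIGGERS = {"api_signature", "db_schema", "infra_integration", "cross_layer_payload"}

def needs_depth_check(change_touches: set[str]) -> bool:
    """A change touching any trigger area requires code-spec depth first."""
    return bool(change_touches & TRIGGERS)

def depth_check_passed(specs_identified: bool, contract_defined: bool,
                       error_matrix_defined: bool, cases_defined: bool) -> bool:
    """Implementation may start only when every must-have item is satisfied."""
    return all([specs_identified, contract_defined,
                error_matrix_defined, cases_defined])
```

In other words: one trigger is enough to require the check, and one missing must-have is enough to block implementation.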

Step 5: Research the Codebase [AI]

Based on the confirmed PRD, run a focused research pass and produce:

  1. Relevant spec files in .trellis/spec/
  2. Existing code patterns to follow (2-3 examples)
  3. Files that will likely need modification

Use this output format:

```markdown
## Relevant Specs
- <path>: <why it's relevant>

## Code Patterns Found
- <pattern>: <example file path>

## Files to Modify
- <path>: <what change>
```

Step 6: Configure Context [AI]

Initialize default context:

```bash
python3 ./.trellis/scripts/task.py init-context "$TASK_DIR" <type>
# type: backend | frontend | fullstack
```

Add specs found in your research pass:

```bash
# For each relevant spec and code pattern:
python3 ./.trellis/scripts/task.py add-context "$TASK_DIR" implement "<path>" "<reason>"
python3 ./.trellis/scripts/task.py add-context "$TASK_DIR" check "<path>" "<reason>"
```
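
Conceptually, add-context appends one JSON line per spec to a phase file such as implement.jsonl. The exact schema is internal to Trellis; a plausible sketch, assuming each record holds a path and a reason:

```python
# Assumed record shape — the real schema is defined by Trellis's task.py.
import json

def append_context(jsonl_path: str, spec_path: str, reason: str) -> None:
    """Append one context entry as a JSON line."""
    record = {"path": spec_path, "reason": reason}
    with open(jsonl_path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")
```

One line per entry keeps the file append-only, so repeated add-context calls never clobber earlier entries.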

Step 7: Activate Task [AI]

```bash
python3 ./.trellis/scripts/task.py start "$TASK_DIR"
```

This sets .current-task so hooks can inject context.
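
A rough sketch of what the hook side might do with .current-task: read the active task directory, then load that phase's context entries. The file location (`.trellis/.current-task`) and the per-phase `.jsonl` naming are assumptions, not documented Trellis internals:

```python
# Hypothetical hook-side reader. Assumes .trellis/.current-task holds the
# active task directory, and each phase has a <phase>.jsonl context file.
import json
from pathlib import Path

def load_injected_context(repo_root: str, phase: str) -> list[dict]:
    """Return the context entries to inject for the given phase."""
    task_dir = Path(repo_root, ".trellis", ".current-task").read_text().strip()
    jsonl = Path(task_dir) / f"{phase}.jsonl"
    if not jsonl.exists():
        return []
    return [json.loads(line) for line in jsonl.read_text().splitlines() if line.strip()]
```

Whatever the real mechanism looks like, the point stands: once .current-task is set, hooks can resolve the task directory without the AI having to remember it.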


Phase 3: Execute (shared)

Step 8: Implement [AI]

Implement the task described in prd.md.

  • Follow all specs injected into implement context
  • Keep changes scoped to requirements
  • Run lint and typecheck before finishing

Step 9: Check Quality [AI]

Run a quality pass against check context:

  • Review all code changes against the specs
  • Fix issues directly
  • Ensure lint and typecheck pass

Step 10: Complete [AI]

  1. Verify lint and typecheck pass
  2. Report what was implemented
  3. Remind user to:
    • Test the changes
    • Commit when ready
    • Run $record-session to record this session

Continuing Existing Task

If get_context.py shows a current task:

  1. Read the task's prd.md to understand the goal
  2. Check task.json for current status and phase
  3. Ask user: "Continue working on <task-name>?"

If yes, resume from the appropriate step (usually Step 7 or 8).
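
Step 2's task.json check could look like the sketch below. The `phase` field and its values are assumptions about Trellis's internal format, used only to illustrate the resume decision:

```python
# Hypothetical resume helper. Assumes task.json has a "phase" field.
import json
from pathlib import Path

def resume_point(task_dir: str) -> str:
    """Suggest where to resume, based on task.json (assumed fields)."""
    meta = json.loads((Path(task_dir) / "task.json").read_text())
    phase = meta.get("phase", "")
    if phase in ("", "created", "research"):
        return "Step 7: Activate Task"
    return "Step 8: Implement"
```

The split matches the note above: tasks that never reached activation resume at Step 7, everything further along resumes at Step 8.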


Skills Reference

User Skills [USER]

| Skill | When to Use |
|-------|-------------|
| $start | Begin a session (this skill) |
| $finish-work | Before committing changes |
| $record-session | After completing a task |

AI Scripts [AI]

| Script | Purpose |
|--------|---------|
| `python3 ./.trellis/scripts/get_context.py` | Get session context |
| `python3 ./.trellis/scripts/task.py create` | Create task directory |
| `python3 ./.trellis/scripts/task.py init-context` | Initialize jsonl files |
| `python3 ./.trellis/scripts/task.py add-context` | Add spec to jsonl |
| `python3 ./.trellis/scripts/task.py start` | Set current task |
| `python3 ./.trellis/scripts/task.py finish` | Clear current task |
| `python3 ./.trellis/scripts/task.py archive` | Archive completed task |

Workflow Phases [AI]

| Phase | Purpose | Context Source |
|-------|---------|----------------|
| research | Analyze codebase | direct repo inspection |
| implement | Write code | implement.jsonl |
| check | Review & fix | check.jsonl |
| debug | Fix specific issues | debug.jsonl |

Key Principle

Code-spec context is injected, not remembered.

The Task Workflow ensures agents receive relevant code-spec context automatically. This is more reliable than hoping the AI "remembers" conventions.
