# Start Session

Initialize your AI development session and begin working on tasks.

## Operation Types
| Marker | Meaning | Executor |
|---|---|---|
| [AI] | Bash scripts or tool calls executed by AI | You (AI) |
| [USER] | Skills executed by user | User |
## Initialization [AI]

### Step 1: Understand Development Workflow

First, read the workflow guide to understand the development process:

```bash
cat .trellis/workflow.md
```
Follow the instructions in `workflow.md` - it contains:
- Core principles (Read Before Write, Follow Standards, etc.)
- File system structure
- Development process
- Best practices
### Step 2: Get Current Context

```bash
python3 ./.trellis/scripts/get_context.py
```
This shows the developer identity, git status, the current task (if any), and active tasks.
### Step 3: Read Guidelines Index

```bash
cat .trellis/spec/frontend/index.md   # Frontend guidelines
cat .trellis/spec/backend/index.md    # Backend guidelines
cat .trellis/spec/guides/index.md     # Thinking guides
```
### Step 4: Report and Ask
Report what you learned and ask: "What would you like to work on?"
## Task Classification
When user describes a task, classify it:
| Type | Criteria | Workflow |
|---|---|---|
| Question | User asks about code, architecture, or how something works | Answer directly |
| Trivial Fix | Typo fix, comment update, single-line change, < 5 minutes | Direct Edit |
| Simple Task | Clear goal, 1-2 files, well-defined scope | Quick confirm → Task Workflow |
| Complex Task | Vague goal, multiple files, architectural decisions | Brainstorm → Task Workflow |
### Decision Rule
If in doubt, use Brainstorm + Task Workflow.
The Task Workflow ensures code-specs are injected into the right context, resulting in higher-quality code. The overhead is minimal; the benefit is significant.
Subtask Decomposition: If brainstorm reveals multiple independent work items, consider creating subtasks using the `--parent` flag or the `add-subtask` command. See the brainstorm skill's Step 8 for details.
## Question / Trivial Fix
For questions or trivial fixes, work directly:
- Answer the question or make the fix
- If code was changed, remind the user to run `$finish-work`
## Simple Task
For simple, well-defined tasks:
- Quick confirm: "I understand you want to [goal]. Ready to proceed?"
- If yes, proceed to Task Workflow Phase 1 Path B (create task, write PRD, then research)
- If no, clarify and confirm again
## Complex Task - Brainstorm First
For complex or vague tasks, use the brainstorm process to clarify requirements.
See $brainstorm for the full process. Summary:
- Acknowledge and classify - State your understanding
- Create task directory - Track evolving requirements in `prd.md`
- Ask questions one at a time - Update PRD after each answer
- Propose approaches - For architectural decisions
- Confirm final requirements - Get explicit approval
- Proceed to Task Workflow - With clear requirements in PRD
## Task Workflow (Development Tasks)

### Why this workflow?
- Run a dedicated research pass before coding
- Configure specs in jsonl context files
- Implement using injected context
- Verify with a separate check pass
- Result: Code that follows project conventions automatically
### Overview: Two Entry Points
From Brainstorm (Complex Task):
PRD confirmed → Research → Configure Context → Activate → Implement → Check → Complete
From Simple Task:
Confirm → Create Task → Write PRD → Research → Configure Context → Activate → Implement → Check → Complete
Key principle: Research happens AFTER requirements are clear (PRD exists).
## Phase 1: Establish Requirements

### Path A: From Brainstorm (skip to Phase 2)

PRD and task directory already exist from brainstorm. Skip directly to Phase 2.

### Path B: From Simple Task

#### Step 1: Confirm Understanding [AI]
Quick confirm:
- What is the goal?
- What type of development? (frontend / backend / fullstack)
- Any specific requirements or constraints?
If unclear, ask clarifying questions.
#### Step 2: Create Task Directory [AI]

```bash
TASK_DIR=$(python3 ./.trellis/scripts/task.py create "<title>" --slug <name>)
```
#### Step 3: Write PRD [AI]

Create `prd.md` in the task directory with:

```markdown
# <Task Title>

## Goal
<What we're trying to achieve>

## Requirements
- <Requirement 1>
- <Requirement 2>

## Acceptance Criteria
- [ ] <Criterion 1>
- [ ] <Criterion 2>

## Technical Notes
<Any technical decisions or constraints>
```
## Phase 2: Prepare for Implementation (shared)
Both paths converge here. PRD and task directory must exist before proceeding.
### Step 4: Code-Spec Depth Check [AI]
If the task touches infra or cross-layer contracts, do not start implementation until code-spec depth is defined.
Trigger this requirement when the change includes any of:
- New or changed command/API signatures
- Database schema or migration changes
- Infra integrations (storage, queue, cache, secrets, env contracts)
- Cross-layer payload transformations
Must-haves before proceeding:
- Target code-spec files to update are identified
- Concrete contract is defined (signature, fields, env keys)
- Validation and error matrix is defined
- At least one Good/Base/Bad case is defined
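As a sketch of what a "concrete contract" can mean for env keys, the required/optional split can be pinned down in shell before any code is written. All key names below are hypothetical, not from this project:

```shell
# Hypothetical env contract for an imagined queue integration; the key
# names are illustrative only.
export QUEUE_URL="https://example.invalid/q"   # stand-in value for this sketch
# Required key: fail fast with a clear message if it is ever missing.
: "${QUEUE_URL:?QUEUE_URL must be set}"
# Optional key: the contract documents the default inline.
: "${QUEUE_TIMEOUT_MS:=5000}"
echo "timeout=${QUEUE_TIMEOUT_MS}"
```

Writing the contract down this concretely is what makes the later validation and error matrix checkable.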
### Step 5: Research the Codebase [AI]

Based on the confirmed PRD, run a focused research pass and produce:

- Relevant spec files in `.trellis/spec/`
- Existing code patterns to follow (2-3 examples)
- Files that will likely need modification

Use this output format:

```markdown
## Relevant Specs
- <path>: <why it's relevant>

## Code Patterns Found
- <pattern>: <example file path>

## Files to Modify
- <path>: <what change>
```
### Step 6: Configure Context [AI]

Initialize default context:

```bash
python3 ./.trellis/scripts/task.py init-context "$TASK_DIR" <type>
# type: backend | frontend | fullstack
```

Add specs found in your research pass:

```bash
# For each relevant spec and code pattern:
python3 ./.trellis/scripts/task.py add-context "$TASK_DIR" implement "<path>" "<reason>"
python3 ./.trellis/scripts/task.py add-context "$TASK_DIR" check "<path>" "<reason>"
```
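The jsonl context files these commands maintain hold one entry per spec. As a rough sketch (the real schema is whatever `task.py` writes; the field names here are a guess), an `implement.jsonl` might look like:

```shell
# Assumed shape of implement.jsonl: one JSON object per line, pairing a
# spec path with the reason it was added (field names are an assumption).
ctx=$(mktemp -d)
printf '%s\n' \
  '{"path": ".trellis/spec/backend/index.md", "reason": "backend conventions"}' \
  '{"path": ".trellis/spec/guides/index.md", "reason": "thinking guides"}' \
  > "$ctx/implement.jsonl"
grep -c '' "$ctx/implement.jsonl"   # one line per injected spec
```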
### Step 7: Activate Task [AI]

```bash
python3 ./.trellis/scripts/task.py start "$TASK_DIR"
```

This sets `.current-task` so hooks can inject context.
## Phase 3: Execute (shared)

### Step 8: Implement [AI]

Implement the task described in `prd.md`.
- Follow all specs injected into implement context
- Keep changes scoped to requirements
- Run lint and typecheck before finishing
### Step 9: Check Quality [AI]

Run a quality pass against the check context:
- Review all code changes against the specs
- Fix issues directly
- Ensure lint and typecheck pass
### Step 10: Complete [AI]
- Verify lint and typecheck pass
- Report what was implemented
- Remind the user to:
  - Test the changes
  - Commit when ready
  - Run `$record-session` to record this session
## Continuing Existing Task

If `get_context.py` shows a current task:

- Read the task's `prd.md` to understand the goal
- Check `task.json` for current status and phase
- Ask the user: "Continue working on <task-name>?"
If yes, resume from the appropriate step (usually Step 7 or 8).
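A quick way to pull status out of `task.json` is a `python3` one-liner, sketched here against a throwaway file. The field names are assumptions, not verified against the real `task.py` output:

```shell
# Sketch: read status and phase from a task.json with an assumed schema.
tmp=$(mktemp -d)
cat > "$tmp/task.json" <<'EOF'
{"title": "Add auth", "status": "in_progress", "phase": "implement"}
EOF
python3 -c 'import json, sys; t = json.load(open(sys.argv[1])); print(t["status"], t["phase"])' "$tmp/task.json"
# → in_progress implement
```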
## Skills Reference

### User Skills [USER]
| Skill | When to Use |
|---|---|
| `$start` | Begin a session (this skill) |
| `$finish-work` | Before committing changes |
| `$record-session` | After completing a task |
### AI Scripts [AI]
| Script | Purpose |
|---|---|
| `python3 ./.trellis/scripts/get_context.py` | Get session context |
| `python3 ./.trellis/scripts/task.py create` | Create task directory |
| `python3 ./.trellis/scripts/task.py init-context` | Initialize jsonl files |
| `python3 ./.trellis/scripts/task.py add-context` | Add spec to jsonl |
| `python3 ./.trellis/scripts/task.py start` | Set current task |
| `python3 ./.trellis/scripts/task.py finish` | Clear current task |
| `python3 ./.trellis/scripts/task.py archive` | Archive completed task |
### Workflow Phases [AI]
| Phase | Purpose | Context Source |
|---|---|---|
| research | Analyze codebase | direct repo inspection |
| implement | Write code | `implement.jsonl` |
| check | Review & fix | `check.jsonl` |
| debug | Fix specific issues | `debug.jsonl` |
## Key Principle
Code-spec context is injected, not remembered.
The Task Workflow ensures agents receive relevant code-spec context automatically. This is more reliable than hoping the AI "remembers" conventions.
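A minimal sketch of what "injected, not remembered" could look like mechanically, assuming `.current-task` holds the task directory path and `implement.jsonl` uses a `path` field (both assumptions about files this document names):

```shell
# Simulated repo with an active task and one configured spec.
root=$(mktemp -d)
mkdir -p "$root/tasks/add-auth"
echo "$root/tasks/add-auth" > "$root/.current-task"
echo '{"path": ".trellis/spec/backend/index.md", "reason": "conventions"}' \
  > "$root/tasks/add-auth/implement.jsonl"
# A hook resolves the active task, then lists the specs to inject:
task_dir=$(cat "$root/.current-task")
python3 -c 'import json, sys; [print(json.loads(line)["path"]) for line in open(sys.argv[1])]' "$task_dir/implement.jsonl"
# → .trellis/spec/backend/index.md
```

Because the specs are resolved from files on every run, they reach the agent even across fresh sessions with no memory of earlier conversations.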