Initial commit
---
name: trellis-before-dev
description: "Discovers and injects project-specific coding guidelines from .trellis/spec/ before implementation begins. Reads spec indexes, pre-development checklists, and shared thinking guides for the target package. Use when starting a new coding task, before writing any code, switching to a different package, or needing to refresh project conventions and standards."
---

Read the relevant development guidelines before starting your task.

Execute these steps:

1. **Discover packages and their spec layers**:

   ```bash
   python3 ./.trellis/scripts/get_context.py --mode packages
   ```

2. **Identify which specs apply** to your task based on:

   - Which package you're modifying (e.g., `cli/`, `docs-site/`)
   - What type of work (backend, frontend, unit-test, docs, etc.)

3. **Read the spec index** for each relevant module:

   ```bash
   cat .trellis/spec/<package>/<layer>/index.md
   ```

   Follow the **"Pre-Development Checklist"** section in the index.

4. **Read the specific guideline files** listed in the Pre-Development Checklist that are relevant to your task. The index is NOT the goal — it points you to the actual guideline files (e.g., `error-handling.md`, `conventions.md`, `mock-strategies.md`). Read those files to understand the coding standards and patterns.

5. **Always read shared guides**:

   ```bash
   cat .trellis/spec/guides/index.md
   ```

6. Understand the coding standards and patterns you need to follow, then proceed with your development plan.

This step is **mandatory** before writing any code.
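
The discovery pass above can be wrapped in one small helper. A minimal sketch, assuming the `.trellis` layout described in this document; the package/layer arguments are illustrative placeholders, not required names:

```shell
# Sketch of the discovery pass above; adjust package/layer to your task.
read_guidelines() {
  pkg=$1 layer=$2
  python3 ./.trellis/scripts/get_context.py --mode packages
  cat ".trellis/spec/$pkg/$layer/index.md"  # then read each file its checklist lists
  cat .trellis/spec/guides/index.md         # shared guides, always
}
```

For example, `read_guidelines cli backend` before touching backend code in `cli/`.
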
---
name: trellis-brainstorm
description: "Guides collaborative requirements discovery before implementation. Creates task directory, seeds PRD, asks high-value questions one at a time, researches technical choices, and converges on MVP scope. Use when requirements are unclear, there are multiple valid approaches, or the user describes a new feature or complex task."
---

# Brainstorm - Requirements Discovery (AI Coding Enhanced)

Guide AI through collaborative requirements discovery **before implementation**, optimized for AI coding workflows:

* **Task-first** (capture ideas immediately)
* **Action-before-asking** (reduce low-value questions)
* **Research-first** for technical choices (avoid asking users to invent options)
* **Diverge → Converge** (expand thinking, then lock MVP)

---

## When to Use

Triggered from `$start` when the user describes a development task, especially when:

* requirements are unclear or evolving
* there are multiple valid implementation paths
* trade-offs matter (UX, reliability, maintainability, cost, performance)
* the user might not know the best options up front

---

## Core Principles (Non-negotiable)

1. **Task-first (capture early)**
   Always ensure a task exists at the start so the user's ideas are recorded immediately.

2. **Action before asking**
   If you can derive the answer from repo code, docs, configs, conventions, or quick research — do that first.

3. **One question per message**
   Never overwhelm the user with a list of questions. Ask one, update the PRD, repeat.

4. **Prefer concrete options**
   For preference/decision questions, present 2–3 feasible, specific approaches with trade-offs.

5. **Research-first for technical choices**
   If the decision depends on industry conventions, similar tools, or established patterns, do the research first, then propose options.

6. **Diverge → Converge**
   After initial understanding, proactively consider future evolution, related scenarios, and failure/edge cases — then converge to an MVP with explicit out-of-scope.

7. **No meta questions**
   Do not ask "should I search?" or "can you paste the code so I can continue?"
   If you need information: search/inspect. If blocked: ask the minimal blocking question.

---

## Step 0: Ensure Task Exists (ALWAYS)

Before any Q&A, ensure a task exists. If none exists, create one immediately.

* Use a **temporary working title** derived from the user's message.
* It's OK if the title is imperfect — refine it later in the PRD.

```bash
TASK_DIR=$(python3 ./.trellis/scripts/task.py create "brainstorm: <short goal>" --slug <auto>)
```

Create/seed `prd.md` immediately with what you know:

```markdown
# brainstorm: <short goal>

## Goal

<one paragraph: what + why>

## What I already know

* <facts from user message>
* <facts discovered from repo/docs>

## Assumptions (temporary)

* <assumptions to validate>

## Open Questions

* <ONLY Blocking / Preference questions; keep list short>

## Requirements (evolving)

* <start with what is known>

## Acceptance Criteria (evolving)

* [ ] <testable criterion>

## Definition of Done (team quality bar)

* Tests added/updated (unit/integration where appropriate)
* Lint / typecheck / CI green
* Docs/notes updated if behavior changes
* Rollout/rollback considered if risky

## Out of Scope (explicit)

* <what we will not do in this task>

## Technical Notes

* <files inspected, constraints, links, references>
* <research notes summary if applicable>
```

---

## Step 1: Auto-Context (DO THIS BEFORE ASKING QUESTIONS)

Before asking questions like "what does the code look like?", gather context yourself:

### Repo inspection checklist

* Identify likely modules/files impacted
* Locate existing patterns (similar features, conventions, error handling style)
* Check configs, scripts, existing command definitions
* Note any constraints (runtime, dependency policy, build tooling)

### Documentation checklist

* Look for existing PRDs/specs/templates
* Look for command usage examples, README, ADRs if any

Write findings into the PRD:

* Add to `What I already know`
* Add constraints/links to `Technical Notes`

---

## Step 2: Classify Complexity (still useful, not gating task creation)

| Complexity | Criteria | Action |
| ------------ | ------------------------------------------------------ | ------------------------------------------- |
| **Trivial** | Single-line fix, typo, obvious change | Skip brainstorm, implement directly |
| **Simple** | Clear goal, 1–2 files, scope well-defined | Ask 1 confirming question, then implement |
| **Moderate** | Multiple files, some ambiguity | Light brainstorm (2–3 high-value questions) |
| **Complex** | Vague goal, architectural choices, multiple approaches | Full brainstorm |

> Note: The task already exists from Step 0. Classification only affects the depth of brainstorming.

---

## Step 3: Question Gate (Ask ONLY high-value questions)

Before asking ANY question, run the following gate:

### Gate A — Can I derive this without the user?

If the answer is available via:

* repo inspection (code/config)
* docs/specs/conventions
* quick market/OSS research

→ **Do not ask.** Fetch it, summarize, update the PRD.

### Gate B — Is this a meta/lazy question?

Examples:

* "Should I search?"
* "Can you paste the code so I can proceed?"
* "What does the code look like?" (when the repo is available)

→ **Do not ask.** Take action.

### Gate C — What type of question is it?

* **Blocking**: cannot proceed without user input
* **Preference**: multiple valid choices, depends on product/UX/risk preference
* **Derivable**: should be answered by inspection/research

→ Only ask **Blocking** or **Preference**.

---

## Step 4: Research-first Mode (Mandatory for technical choices)

### Trigger conditions (any → research-first)

* The task involves selecting an approach, library, protocol, framework, template system, plugin mechanism, or CLI UX convention
* The user asks for "best practice", "how others do it", or a "recommendation"
* The user can't reasonably enumerate options

### Delegate to `trellis-research` sub-agent (don't research inline)

For each research topic, **spawn a `trellis-research` sub-agent via the Task tool** — don't do WebFetch / WebSearch / `gh api` inline in the main conversation.

Why:

- The sub-agent has its own context window → doesn't pollute brainstorm context with raw tool output
- It persists findings to `{TASK_DIR}/research/<topic>.md` (the contract — see `workflow.md` Phase 1.2)
- It returns only `{file path, one-line summary}` to the main agent
- Independent topics can be **parallelized** — spawn multiple sub-agents in one tool call

Agent type: `trellis-research`
Task description template: "Research <specific question>; persist findings to `{TASK_DIR}/research/<topic-slug>.md`."

❌ Bad (what you must NOT do):

```
Main agent: WebFetch(url-A) → WebFetch(url-B) → Bash(gh api ...)
            → WebSearch(q1) → WebSearch(q2) → ... (10+ inline calls)
            → Write(research/topic.md)
```

→ Pollutes the main context with raw HTML/JSON, burns tokens.

✅ Good:

```
Main agent: Task(subagent_type="trellis-research",
                 prompt="Research topic A; persist to research/topic-a.md")
          + Task(subagent_type="trellis-research",
                 prompt="Research topic B; persist to research/topic-b.md")
          + Task(subagent_type="trellis-research",
                 prompt="Research topic C; persist to research/topic-c.md")
→ Reads research/topic-{a,b,c}.md after they finish.
```

### Research steps (to pass into each sub-agent prompt)

Each `trellis-research` sub-agent should:

1. Identify 2–4 comparable tools/patterns for its topic
2. Summarize common conventions and why they exist
3. Map conventions onto our repo constraints
4. Write findings to `{TASK_DIR}/research/<topic>.md`

The main agent then reads the persisted files and produces **2–3 feasible approaches** in the PRD.

### Research output format (PRD)

The PRD itself should only reference the persisted research files, not duplicate their content. Add a `## Research References` section pointing at `research/*.md`.

Optionally, add a convergence section with feasible approaches derived from the research:

```markdown
## Research References

* [`research/<topic-a>.md`](research/<topic-a>.md) — <one-line takeaway>
* [`research/<topic-b>.md`](research/<topic-b>.md) — <one-line takeaway>

## Research Notes

### What similar tools do

* ...
* ...

### Constraints from our repo/project

* ...

### Feasible approaches here

**Approach A: <name>** (Recommended)

* How it works:
* Pros:
* Cons:

**Approach B: <name>**

* How it works:
* Pros:
* Cons:

**Approach C: <name>** (optional)

* ...
```

Then ask **one** preference question:

* "Which approach do you prefer: A / B / C (or other)?"

---

## Step 5: Expansion Sweep (DIVERGE) — Required after initial understanding

After you can summarize the goal, proactively broaden thinking before converging.

### Expansion categories (keep to 1–2 bullets each)

1. **Future evolution**

   * What might this feature become in 1–3 months?
   * What extension points are worth preserving now?

2. **Related scenarios**

   * What adjacent commands/flows should remain consistent with this?
   * Are there parity expectations (create vs update, import vs export, etc.)?

3. **Failure & edge cases**

   * Conflicts, offline/network failure, retries, idempotency, compatibility, rollback
   * Input validation, security boundaries, permission checks

### Expansion message template (to user)

```markdown
I understand you want to implement: <current goal>.

Before diving into design, let me quickly diverge to consider three categories (to avoid rework later):

1. Future evolution: <1–2 bullets>
2. Related scenarios: <1–2 bullets>
3. Failure/edge cases: <1–2 bullets>

For this MVP, which would you like to include (or none)?

1. Current requirement only (minimal viable)
2. Add <X> (reserve for future extension)
3. Add <Y> (improve robustness/consistency)
4. Other: describe your preference
```

Then update the PRD:

* What's in the MVP → `Requirements`
* What's excluded → `Out of Scope`

---

## Step 6: Q&A Loop (CONVERGE)

### Rules

* One question per message
* Prefer multiple-choice when possible
* After each user answer:

  * Update the PRD immediately
  * Move answered items from `Open Questions` → `Requirements`
  * Update `Acceptance Criteria` with testable checkboxes
  * Clarify `Out of Scope`

### Question priority (recommended)

1. **MVP scope boundary** (what is included/excluded)
2. **Preference decisions** (after presenting concrete options)
3. **Failure/edge behavior** (only for MVP-critical paths)
4. **Success metrics & Acceptance Criteria** (what proves it works)

### Preferred question format (multiple choice)

```markdown
For <topic>, which approach do you prefer?

1. **Option A** — <what it means + trade-off>
2. **Option B** — <what it means + trade-off>
3. **Option C** — <what it means + trade-off>
4. **Other** — describe your preference
```

---

## Step 7: Propose Approaches + Record Decisions (Complex tasks)

After requirements are clear enough, propose 2–3 approaches (if not already done via research-first):

```markdown
Based on current information, here are 2–3 feasible approaches:

**Approach A: <name>** (Recommended)

* How:
* Pros:
* Cons:

**Approach B: <name>**

* How:
* Pros:
* Cons:

Which direction do you prefer?
```

Record the outcome in the PRD as an ADR-lite section:

```markdown
## Decision (ADR-lite)

**Context**: Why this decision was needed
**Decision**: Which approach was chosen
**Consequences**: Trade-offs, risks, potential future improvements
```

---

## Step 8: Final Confirmation + Implementation Plan

When open questions are resolved, confirm the complete requirements with a structured summary:

### Final confirmation format

```markdown
Here's my understanding of the complete requirements:

**Goal**: <one sentence>

**Requirements**:

* ...
* ...

**Acceptance Criteria**:

* [ ] ...
* [ ] ...

**Definition of Done**:

* ...

**Out of Scope**:

* ...

**Technical Approach**:
<brief summary + key decisions>

**Implementation Plan (small PRs)**:

* PR1: <scaffolding + tests + minimal plumbing>
* PR2: <core behavior>
* PR3: <edge cases + docs + cleanup>

Does this look correct? If yes, I'll proceed with implementation.
```

### Subtask Decomposition (Complex Tasks)

For complex tasks with multiple independent work items, create subtasks:

```bash
# Create child tasks
CHILD1=$(python3 ./.trellis/scripts/task.py create "Child task 1" --slug child1 --parent "$TASK_DIR")
CHILD2=$(python3 ./.trellis/scripts/task.py create "Child task 2" --slug child2 --parent "$TASK_DIR")

# Or link existing tasks
python3 ./.trellis/scripts/task.py add-subtask "$TASK_DIR" "$CHILD_DIR"
```

---

## PRD Target Structure (final)

`prd.md` should converge to:

```markdown
# <Task Title>

## Goal

<why + what>

## Requirements

* ...

## Acceptance Criteria

* [ ] ...

## Definition of Done

* ...

## Technical Approach

<key design + decisions>

## Decision (ADR-lite)

Context / Decision / Consequences

## Out of Scope

* ...

## Technical Notes

<constraints, references, files, research notes>
```

---

## Anti-Patterns (Hard Avoid)

* Asking the user for code/context that can be derived from the repo
* Asking the user to choose an approach before presenting concrete options
* Meta questions about whether to research
* Staying narrowly on the initial request without considering evolution/edges
* Letting brainstorming drift without updating the PRD

---

## Integration with Start Workflow

After brainstorm completes (Step 8 confirmation approved), the flow continues to the Task Workflow's **Phase 2: Prepare for Implementation**:

```text
Brainstorm
  Step 0: Create task directory + seed PRD
  Step 1–7: Discover requirements, research, converge
  Step 8: Final confirmation → user approves
    ↓
Task Workflow Phase 2 (Prepare for Implementation)
  Code-Spec Depth Check (if applicable)
  → Research codebase (based on confirmed PRD)
  → Configure code-spec context (jsonl files)
  → Activate task
    ↓
Task Workflow Phase 3 (Execute)
  Implement → Check → Complete
```

The task directory and PRD already exist from brainstorm, so Phase 1 of the Task Workflow is skipped entirely.

---

## Related Commands

| Command | When to Use |
|---------|-------------|
| `$start` | Entry point that triggers brainstorm |
| `$finish-work` | After implementation is complete |
| `$update-spec` | If new patterns emerge during work |
---
name: trellis-break-loop
description: "Deep bug analysis to break the fix-forget-repeat cycle. Analyzes root cause category, why fixes failed, prevention mechanisms, and captures knowledge into specs. Use after fixing a bug to prevent the same class of bugs."
---

# Break the Loop - Deep Bug Analysis

When debugging is complete, use this for deep analysis to break the "fix bug → forget → repeat" cycle.

---

## Analysis Framework

Analyze the bug you just fixed along these 5 dimensions:

### 1. Root Cause Category

Which category does this bug belong to?

| Category | Characteristics | Example |
|----------|-----------------|---------|
| **A. Missing Spec** | No documentation on how to do it | New feature without a checklist |
| **B. Cross-Layer Contract** | Interface between layers unclear | API returns a different format than expected |
| **C. Change Propagation Failure** | Changed one place, missed others | Changed a function signature, missed call sites |
| **D. Test Coverage Gap** | Unit test passes, integration fails | Works alone, breaks when combined |
| **E. Implicit Assumption** | Code relies on an undocumented assumption | Timestamp seconds vs milliseconds |

### 2. Why Fixes Failed (if applicable)

If you tried multiple fixes before succeeding, analyze each failure:

- **Surface Fix**: Fixed the symptom, not the root cause
- **Incomplete Scope**: Found the root cause, didn't cover all cases
- **Tool Limitation**: grep missed it, type checking wasn't strict enough
- **Mental Model**: Kept looking in the same layer, didn't think cross-layer

### 3. Prevention Mechanisms

What mechanisms would prevent this from happening again?

| Type | Description | Example |
|------|-------------|---------|
| **Documentation** | Write it down so people know | Update the thinking guide |
| **Architecture** | Make the error structurally impossible | Type-safe wrappers |
| **Compile-time** | Strict type checking, no escape hatches | Signature change causes a compile error |
| **Runtime** | Monitoring, alerts, scans | Detect orphan entities |
| **Test Coverage** | E2E tests, integration tests | Verify the full flow |
| **Code Review** | Checklist, PR template | "Did you check X?" |

### 4. Systematic Expansion

What broader problems does this bug reveal?

- **Similar Issues**: Where else might this problem exist?
- **Design Flaw**: Is there a fundamental architecture issue?
- **Process Flaw**: Is there a development process improvement?
- **Knowledge Gap**: Is the team missing some understanding?

### 5. Knowledge Capture

Solidify insights into the system:

- [ ] Update `.trellis/spec/guides/` thinking guides
- [ ] Update relevant `.trellis/spec/` docs
- [ ] Create an issue record (if applicable)
- [ ] Create a feature ticket for the root fix
- [ ] Update check guidelines if needed

---

## Output Format

Output the analysis in this format:

```markdown
## Bug Analysis: [Short Description]

### 1. Root Cause Category
- **Category**: [A/B/C/D/E] - [Category Name]
- **Specific Cause**: [Detailed description]

### 2. Why Fixes Failed (if applicable)
1. [First attempt]: [Why it failed]
2. [Second attempt]: [Why it failed]
...

### 3. Prevention Mechanisms
| Priority | Mechanism | Specific Action | Status |
|----------|-----------|-----------------|--------|
| P0 | ... | ... | TODO/DONE |

### 4. Systematic Expansion
- **Similar Issues**: [List places with similar problems]
- **Design Improvement**: [Architecture-level suggestions]
- **Process Improvement**: [Development process suggestions]

### 5. Knowledge Capture
- [ ] [Documents to update / tickets to create]
```

---

## Core Philosophy

> **The value of debugging is not in fixing the bug, but in making this class of bugs never happen again.**

Three levels of insight:

1. **Tactical**: How to fix THIS bug
2. **Strategic**: How to prevent THIS CLASS of bugs
3. **Philosophical**: How to expand thinking patterns

30 minutes of analysis saves 30 hours of future debugging.

---

## After Analysis: Immediate Actions

**IMPORTANT**: After completing the analysis above, you MUST immediately:

1. **Update spec/guides** - Don't just list TODOs; actually update the relevant files:

   - If it's a cross-platform issue → update `cross-platform-thinking-guide.md`
   - If it's a cross-layer issue → update `cross-layer-thinking-guide.md`
   - If it's a code reuse issue → update `code-reuse-thinking-guide.md`
   - If it's domain-specific → update `backend/*.md` or `frontend/*.md`

2. **Sync templates** - After updating `.trellis/spec/`, sync to `src/templates/markdown/spec/`

3. **Commit the spec updates** - This is the primary output, not just the analysis text

> **The analysis is worthless if it stays in chat. The value is in the updated specs.**
---
name: trellis-check
description: "Comprehensive quality verification: spec compliance, lint, type-check, tests, cross-layer data flow, code reuse, and consistency checks. Use when code is written and needs quality verification, before committing changes, or to catch context drift during long sessions."
---

# Code Quality Check

Comprehensive quality verification for recently written code. Combines spec compliance, cross-layer safety, and pre-commit checks.

---

## Step 1: Identify What Changed

```bash
git diff --name-only HEAD
git status
```

## Step 2: Read Applicable Specs

```bash
python3 ./.trellis/scripts/get_context.py --mode packages
```

For each changed package/layer, read the spec index and follow its **Quality Check** section:

```bash
cat .trellis/spec/<package>/<layer>/index.md
```

Read the specific guideline files referenced — the index is a pointer, not the goal.

## Step 3: Run Project Checks

Run the project's lint, type-check, and test commands. Fix any failures before proceeding.
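
A minimal sketch of this gate, assuming npm-style scripts; the three command names below are placeholders for whatever your project actually defines:

```shell
# Run each check in order and stop at the first failure.
# The npm script names are assumptions, not this project's verified commands.
run_checks() {
  for cmd in "npm run lint" "npm run typecheck" "npm test"; do
    echo "→ $cmd"
    $cmd || { echo "FAILED: $cmd" >&2; return 1; }
  done
  echo "All checks passed"
}
```

Only move on to Step 4 once `run_checks` succeeds.
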

## Step 4: Review Against Checklist

### Code Quality

- [ ] Linter passes?
- [ ] Type checker passes (if applicable)?
- [ ] Tests pass?
- [ ] No debug logging left in?
- [ ] No suppressed warnings or type-safety bypasses?

### Test Coverage

- [ ] New function → unit test added?
- [ ] Bug fix → regression test added?
- [ ] Changed behavior → existing tests updated?

### Spec Sync

- [ ] Does `.trellis/spec/` need updates? (new patterns, conventions, lessons learned)

> "If I fixed a bug or discovered something non-obvious, should I document it so future me won't hit the same issue?" → If YES, update the relevant spec doc.

## Step 5: Cross-Layer Dimensions (if applicable)

Skip this step if your change is confined to a single layer.

### A. Data Flow (changes touch 3+ layers)

- [ ] Read flow traces correctly: Storage → Service → API → UI
- [ ] Write flow traces correctly: UI → API → Service → Storage
- [ ] Types/schemas correctly passed between layers?
- [ ] Errors properly propagated to the caller?

### B. Code Reuse (modifying constants, creating utilities)

- [ ] Searched for existing similar code before creating new?

  ```bash
  grep -r "pattern" src/
  ```

- [ ] If 2+ places define the same value → extracted to a shared constant?
- [ ] After a batch modification, all occurrences updated?

### C. Import/Dependency (creating new files)

- [ ] Correct import paths (relative vs absolute)?
- [ ] No circular dependencies?

### D. Same-Layer Consistency

- [ ] Other places using the same concept are consistent?

---

## Step 6: Report and Fix

Report any violations found and fix them directly. Re-run the project checks after fixes.
---
name: trellis-continue
description: "Resume work on the current task. Loads the workflow Phase Index, figures out which phase/step to pick up at, then pulls the step-level detail via get_context.py --mode phase. Use when coming back to an in-progress task and you need to know what to do next."
---

# Continue Current Task

Resume work on the current task — pick up at the right phase/step in `.trellis/workflow.md`.

---

## Step 1: Load Current Context

```bash
python3 ./.trellis/scripts/get_context.py
```

Confirms: current task, git state, recent commits.

## Step 2: Load the Phase Index

```bash
python3 ./.trellis/scripts/get_context.py --mode phase
```

Shows the Phase Index (Plan / Execute / Finish) with routing + skill mapping.

## Step 3: Decide Where You Are

Compare the task's `prd.md` + recent activity against the Phase Index:

- No `prd.md` yet, or requirements unclear → **Phase 1: Plan** (start at step 1.0/1.1)
- `prd.md` exists + context configured, but code not written → **Phase 2: Execute** (step 2.1)
- Code written, pending final quality gate → **Phase 3: Finish** (step 3.1)

Phase rules (full detail in `.trellis/workflow.md`):

1. Run steps **in order** within a phase — `[required]` steps must not be skipped
2. `[once]` steps are already done if the output exists (e.g., `prd.md` for 1.1; `implement.jsonl` with curated entries for 1.3) — skip them
3. You may go back to an earlier phase if discoveries require it
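
The routing above can be sketched as a small heuristic helper. The file names (`prd.md`, `implement.jsonl`) follow the conventions described in this document; telling Phase 2 from Phase 3 still requires inspecting recent commits:

```shell
# Heuristic sketch: artifacts on disk decide Plan vs the later phases.
phase_for() {
  task_dir=$1
  if [ ! -f "$task_dir/prd.md" ]; then
    echo "Phase 1: Plan"
  elif [ ! -s "$task_dir/implement.jsonl" ]; then
    echo "Phase 1: Plan (resume at context configuration)"
  else
    echo "Phase 2/3: Execute or Finish (inspect recent commits to pick)"
  fi
}
```

For example, `phase_for "$TASK_DIR"` after Step 1 confirms which task is active.
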

## Step 4: Load the Specific Step

Once you know which step to resume at:

```bash
python3 ./.trellis/scripts/get_context.py --mode phase --step <X.X> --platform codex
```

Follow the loaded instructions. After each `[required]` step completes, move to the next.

---

## Reference

Full workflow, skill routing table, and the DO-NOT-skip table live in `.trellis/workflow.md`. This command is only an entry point — the canonical guidance is there.
---
|
||||
name: trellis-finish-work
|
||||
description: "Wrap up the current session: verify quality gate passed, remind user to commit, archive completed tasks, and record session progress to the developer journal. Use when done coding and ready to end the session."
|
||||
---
|
||||
|
||||
# Finish Work
|
||||
|
||||
Wrap up the current session.
|
||||
|
||||
## Step 1: Quality Gate
|
||||
|
||||
`trellis-check` should have already run in Phase 3. If not, trigger it now and do not proceed until lint, type-check, tests, and spec compliance pass.

## Step 2: Remind User to Commit

If there are uncommitted changes:

> "Please review the changes and commit when ready."

Do NOT run `git commit` — the human commits after testing.

## Step 3: Record Session (after commit)

Archive finished tasks (judge by work status, not the `status` field):

```bash
python3 ./.trellis/scripts/task.py archive <task-name>
```

Append a session entry (auto-handles journal rotation, line count, index update):

```bash
python3 ./.trellis/scripts/add_session.py \
  --title "Session Title" \
  --commit "hash1,hash2" \
  --summary "Brief summary"
```
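
The rotation such a script might perform can be sketched as follows (illustrative only; the real logic lives in `add_session.py`, and the file-naming scheme and line cap here are assumptions):

```python
from pathlib import Path

MAX_LINES = 500  # assumed rotation threshold, not the script's actual value

def append_session(journal_dir: Path, entry: str, max_lines: int = MAX_LINES) -> Path:
    """Append an entry to the newest journal file, rotating when it grows too long."""
    journal_dir.mkdir(parents=True, exist_ok=True)
    journals = sorted(journal_dir.glob("journal-*.md"))
    current = journals[-1] if journals else journal_dir / "journal-001.md"
    lines = current.read_text().splitlines() if current.exists() else []
    if len(lines) + entry.count("\n") + 1 > max_lines:
        # Rotate: start a new numbered journal file.
        next_n = int(current.stem.split("-")[1]) + 1
        current = journal_dir / f"journal-{next_n:03d}.md"
        lines = []
    lines.append(entry)
    current.write_text("\n".join(lines) + "\n")
    return current
```

The point of the rotation is that no single journal file grows unbounded, which keeps each file cheap for an agent to read in full.
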

@@ -0,0 +1,356 @@
---
name: trellis-update-spec
description: "Captures executable contracts and coding conventions into .trellis/spec/ documents. Use when learning something valuable from debugging, implementing, or discussion that should be preserved for future sessions."
---

# Update Code-Spec - Capture Executable Contracts

When you learn something valuable (from debugging, implementing, or discussion), use this to update the relevant code-spec documents.

**Timing**: After completing a task, fixing a bug, or discovering a new pattern

---

## Code-Spec First Rule (CRITICAL)

In this project, "spec" for implementation work means **code-spec**:

- Executable contracts (not principle-only text)
- Concrete signatures, payload fields, env keys, and boundary behavior
- Testable validation/error behavior

If the change touches infra or cross-layer contracts, code-spec depth is mandatory.

### Mandatory Triggers

Apply code-spec depth when the change includes any of:

- New/changed command or API signature
- Cross-layer request/response contract change
- Database schema/migration change
- Infra integration (storage, queue, cache, secrets, env wiring)

### Mandatory Output (7 Sections)

For triggered tasks, include all sections below:

1. Scope / Trigger
2. Signatures (command/API/DB)
3. Contracts (request/response/env)
4. Validation & Error Matrix
5. Good/Base/Bad Cases
6. Tests Required (with assertion points)
7. Wrong vs Correct (at least one pair)
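
One lightweight way to keep those seven sections honest is a completeness check over the spec text (an illustrative sketch, not part of the trellis scripts):

```python
# Section titles from the "Mandatory Output (7 Sections)" list above.
REQUIRED_SECTIONS = [
    "Scope / Trigger",
    "Signatures",
    "Contracts",
    "Validation & Error Matrix",
    "Good/Base/Bad Cases",
    "Tests Required",
    "Wrong vs Correct",
]

def missing_sections(spec_text: str) -> list[str]:
    """Return the required section titles absent from a code-spec document."""
    return [s for s in REQUIRED_SECTIONS if s not in spec_text]
```

An empty return value means the draft at least names every mandatory section; it says nothing about the quality of what is inside each one.
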

---

## When to Update Code-Specs

| Trigger | Example | Target Spec |
|---------|---------|-------------|
| **Implemented a feature** | Added a new integration or module | Relevant spec file |
| **Made a design decision** | Chose extensibility pattern over simplicity | Relevant spec + "Design Decisions" section |
| **Fixed a bug** | Found a subtle issue with error handling | Relevant spec (e.g., error-handling docs) |
| **Discovered a pattern** | Found a better way to structure code | Relevant spec file |
| **Hit a gotcha** | Learned that X must be done before Y | Relevant spec + "Common Mistakes" section |
| **Established a convention** | Team agreed on naming pattern | Quality guidelines |
| **New thinking trigger** | "Don't forget to check X before doing Y" | `guides/*.md` (as a checklist item) |

**Key Insight**: Code-spec updates are NOT just for problems. Every feature implementation contains design decisions and contracts that future AI/developers need to execute safely.

---

## Spec Structure Overview

```
.trellis/spec/
├── <layer>/              # Per-layer coding standards (e.g., backend/, frontend/, api/)
│   ├── index.md          # Overview and links
│   └── *.md              # Topic-specific guidelines
└── guides/               # Thinking checklists (NOT coding specs!)
    ├── index.md          # Guide index
    └── *.md              # Topic-specific guides
```
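
The layout above can be enumerated with a few lines of `pathlib`, e.g. to see which layers exist in a checkout (a sketch; the layer names are whatever directories the project actually has):

```python
from pathlib import Path

def list_spec_layers(spec_root: Path) -> dict[str, list[str]]:
    """Map each spec layer (and guides/) to its markdown guideline files."""
    layers: dict[str, list[str]] = {}
    # Every layer directory is identified by the index.md it must contain.
    for index in sorted(spec_root.glob("*/index.md")):
        layer = index.parent.name
        docs = sorted(p.name for p in index.parent.glob("*.md") if p.name != "index.md")
        layers[layer] = docs
    return layers
```

Keying on `index.md` mirrors the convention above: a directory without an index is not a spec layer.
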

### CRITICAL: Code-Spec vs Guide - Know the Difference

| Type | Location | Purpose | Content Style |
|------|----------|---------|---------------|
| **Code-Spec** | `<layer>/*.md` | Tell AI "how to implement safely" | Signatures, contracts, matrices, cases, test points |
| **Guide** | `guides/*.md` | Help AI "what to think about" | Checklists, questions, pointers to specs |

**Decision Rule**: Ask yourself:

- "This is **how to write** the code" → Put in a spec layer directory
- "This is **what to consider** before writing" → Put in `guides/`

**Example**:

| Learning | Wrong Location | Correct Location |
|----------|----------------|------------------|
| "Use API X not API Y for this task" | ❌ `guides/` (too specific for a thinking guide) | ✅ Relevant spec file (concrete convention) |
| "Remember to check X when doing Y" | ❌ Spec file (too abstract for a spec) | ✅ `guides/` (thinking checklist) |

**Guides should be short checklists that point to specs**, not duplicate the detailed rules.
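
The decision rule can be expressed as a tiny routing helper (purely illustrative; the target paths and the `conventions.md` filename are assumptions, and the real judgment call is yours):

```python
def route_learning(kind: str, layer: str = "backend") -> str:
    """Route a captured learning per the decision rule above.

    kind: "how-to-write" for concrete conventions/contracts,
          "what-to-consider" for thinking checklist items.
    """
    if kind == "how-to-write":
        # Concrete rule: lands in a layer spec file (filename is illustrative).
        return f".trellis/spec/{layer}/conventions.md"
    if kind == "what-to-consider":
        # Checklist item: lands in the shared thinking guides.
        return ".trellis/spec/guides/index.md"
    raise ValueError(f"unknown learning kind: {kind}")
```
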

---

## Update Process

### Step 1: Identify What You Learned

Answer these questions:

1. **What did you learn?** (Be specific)
2. **Why is it important?** (What problem does it prevent?)
3. **Where does it belong?** (Which spec file?)

### Step 2: Classify the Update Type

| Type | Description | Action |
|------|-------------|--------|
| **Design Decision** | Why we chose approach X over Y | Add to "Design Decisions" section |
| **Project Convention** | How we do X in this project | Add to relevant section with examples |
| **New Pattern** | A reusable approach discovered | Add to "Patterns" section |
| **Forbidden Pattern** | Something that causes problems | Add to "Anti-patterns" or "Don't" section |
| **Common Mistake** | Easy-to-make error | Add to "Common Mistakes" section |
| **Convention** | Agreed-upon standard | Add to relevant section |
| **Gotcha** | Non-obvious behavior | Add warning callout |

### Step 3: Read the Target Code-Spec

Before editing, read the current code-spec to:

- Understand existing structure
- Avoid duplicating content
- Find the right section for your update

```bash
cat .trellis/spec/<category>/<file>.md
```

### Step 4: Make the Update

Follow these principles:

1. **Be Specific**: Include concrete examples, not just abstract rules
2. **Explain Why**: State the problem this prevents
3. **Show Contracts**: Add signatures, payload fields, and error behavior
4. **Show Code**: Add code snippets for key patterns
5. **Keep it Short**: One concept per section

### Step 5: Update the Index (if needed)

If you added a new section or the code-spec status changed, update the category's `index.md`.

---

## Update Templates

### Mandatory Template for Infra/Cross-Layer Work

```markdown
## Scenario: <name>

### 1. Scope / Trigger
- Trigger: <why this requires code-spec depth>

### 2. Signatures
- Backend command/API/DB signature(s)

### 3. Contracts
- Request fields (name, type, constraints)
- Response fields (name, type, constraints)
- Environment keys (required/optional)

### 4. Validation & Error Matrix
- <condition> -> <error>

### 5. Good/Base/Bad Cases
- Good: ...
- Base: ...
- Bad: ...

### 6. Tests Required
- Unit/Integration/E2E with assertion points

### 7. Wrong vs Correct
#### Wrong
...
#### Correct
...
```

### Adding a Design Decision

```markdown
### Design Decision: [Decision Name]

**Context**: What problem were we solving?

**Options Considered**:
1. Option A - brief description
2. Option B - brief description

**Decision**: We chose Option X because...

**Example**:
\`\`\`typescript
// How it's implemented
code example
\`\`\`

**Extensibility**: How to extend this in the future...
```

### Adding a Project Convention

```markdown
### Convention: [Convention Name]

**What**: Brief description of the convention.

**Why**: Why we do it this way in this project.

**Example**:
\`\`\`typescript
// How to follow this convention
code example
\`\`\`

**Related**: Links to related conventions or specs.
```

### Adding a New Pattern

```markdown
### Pattern Name

**Problem**: What problem does this solve?

**Solution**: Brief description of the approach.

**Example**:
\`\`\`
// Good
code example

// Bad
code example
\`\`\`

**Why**: Explanation of why this works better.
```

### Adding a Forbidden Pattern

```markdown
### Don't: Pattern Name

**Problem**:
\`\`\`
// Don't do this
bad code example
\`\`\`

**Why it's bad**: Explanation of the issue.

**Instead**:
\`\`\`
// Do this instead
good code example
\`\`\`
```

### Adding a Common Mistake

```markdown
### Common Mistake: Description

**Symptom**: What goes wrong

**Cause**: Why this happens

**Fix**: How to correct it

**Prevention**: How to avoid it in the future
```

### Adding a Gotcha

```markdown
> **Warning**: Brief description of the non-obvious behavior.
>
> Details about when this happens and how to handle it.
```

---

## Interactive Mode

If you're unsure what to update, answer these prompts:

1. **What did you just finish?**
   - [ ] Fixed a bug
   - [ ] Implemented a feature
   - [ ] Refactored code
   - [ ] Had a discussion about approach

2. **What did you learn or decide?**
   - Design decision (why X over Y)
   - Project convention (how we do X)
   - Non-obvious behavior (gotcha)
   - Better approach (pattern)

3. **Would future AI/developers need to know this?**
   - To understand how the code works → Yes, update spec
   - To maintain or extend the feature → Yes, update spec
   - To avoid repeating mistakes → Yes, update spec
   - Purely one-off implementation detail → Maybe skip

4. **Which area does it relate to?**
   - [ ] Backend code
   - [ ] Frontend code
   - [ ] Cross-layer data flow
   - [ ] Code organization/reuse
   - [ ] Quality/testing

---

## Quality Checklist

Before finishing your code-spec update:

- [ ] Is the content specific and actionable?
- [ ] Did you include a code example?
- [ ] Did you explain WHY, not just WHAT?
- [ ] Did you include executable signatures/contracts?
- [ ] Did you include a validation and error matrix?
- [ ] Did you include Good/Base/Bad cases?
- [ ] Did you include required tests with assertion points?
- [ ] Is it in the right code-spec file?
- [ ] Does it avoid duplicating existing content?
- [ ] Would a new team member understand it?

---

## Relationship to Other Commands

```
Development Flow:
  Learn something → $update-spec → Knowledge captured
        ↑                               ↓
  $break-loop ←──────────────────── Future sessions benefit
  (deep bug analysis)
```

- `$break-loop` - Analyzes bugs deeply, often reveals spec updates needed
- `$update-spec` - Actually makes the updates
- `$finish-work` - Reminds you to check if specs need updates

---

## Core Philosophy

> **Code-specs are living documents. Every debugging session, every "aha moment" is an opportunity to make the implementation contract clearer.**

The goal is **institutional memory**:

- What one person learns, everyone benefits from
- What AI learns in one session, persists to future sessions
- Mistakes become documented guardrails