Compare commits

31 Commits: 8b1b87e864 ... 0.2.0

5f50ea991b, fd9175925e, 63108f4eb5, cd71110514, 76b48d8b61, e5d0dd5f8f, e77e479e2a, 80d79c3596, 7efe932621, a56a26b1f0, 906ba99b75, da08752642, 014b3b0171, 33aff5bff5, 6de0769d70, 6a16255984, 2ce3ce0d05, ca651d4c05, 1e065bef18, 6e655597d7, e10b88ee5f, 465fc2178f, 9e48debca7, fc344d3ca0, e04a86399c, 0069747e68, 1e7b0cb56c, d248bcd944, 074f4ef1ee, 1afdd0a462, 6aeb4b8bca
.claude/commands/speckit.analyze.md (new file, 184 lines)

@@ -0,0 +1,184 @@
---
description: Perform a non-destructive cross-artifact consistency and quality analysis across spec.md, plan.md, and tasks.md after task generation.
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Goal

Identify inconsistencies, duplications, ambiguities, and underspecified items across the three core artifacts (`spec.md`, `plan.md`, `tasks.md`) before implementation. This command MUST run only after `/speckit.tasks` has successfully produced a complete `tasks.md`.

## Operating Constraints

**STRICTLY READ-ONLY**: Do **not** modify any files. Output a structured analysis report. Offer an optional remediation plan (the user must explicitly approve it before any follow-up editing commands are invoked manually).

**Constitution Authority**: The project constitution (`.specify/memory/constitution.md`) is **non-negotiable** within this analysis scope. Constitution conflicts are automatically CRITICAL and require adjustment of the spec, plan, or tasks, not dilution, reinterpretation, or silent ignoring of the principle. If a principle itself needs to change, that must occur in a separate, explicit constitution update outside `/speckit.analyze`.

## Execution Steps

### 1. Initialize Analysis Context

Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` once from the repo root and parse its JSON output for FEATURE_DIR and AVAILABLE_DOCS. Derive absolute paths:

- SPEC = FEATURE_DIR/spec.md
- PLAN = FEATURE_DIR/plan.md
- TASKS = FEATURE_DIR/tasks.md

Abort with an error message if any required file is missing (instruct the user to run the missing prerequisite command).
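The JSON handshake in step 1 can be sketched in shell with `jq`. This is a minimal sketch, not the script's actual implementation: the inline payload stands in for the real script output, and the field names are the ones this document lists.

```shell
# Stand-in payload for: .specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks
json='{"FEATURE_DIR":"/repo/specs/001-demo","AVAILABLE_DOCS":["spec.md","plan.md","tasks.md"]}'

# Parse the directory, aborting if the payload is malformed
FEATURE_DIR=$(jq -er '.FEATURE_DIR' <<<"$json") || { echo "ERROR: bad prerequisites JSON" >&2; exit 1; }

SPEC="$FEATURE_DIR/spec.md"
PLAN="$FEATURE_DIR/plan.md"
TASKS="$FEATURE_DIR/tasks.md"

# Report (rather than silently skip) any missing artifact
for f in "$SPEC" "$PLAN" "$TASKS"; do
  [ -f "$f" ] || echo "MISSING: $f" >&2
done
```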
For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

### 2. Load Artifacts (Progressive Disclosure)

Load only the minimal necessary context from each artifact:

**From spec.md:**

- Overview/Context
- Functional Requirements
- Non-Functional Requirements
- User Stories
- Edge Cases (if present)

**From plan.md:**

- Architecture/stack choices
- Data Model references
- Phases
- Technical constraints

**From tasks.md:**

- Task IDs
- Descriptions
- Phase grouping
- Parallel markers [P]
- Referenced file paths

**From constitution:**

- Load `.specify/memory/constitution.md` for principle validation

### 3. Build Semantic Models

Create internal representations (do not include raw artifacts in output):

- **Requirements inventory**: Each functional + non-functional requirement with a stable key (derive a slug from the imperative phrase; e.g., "User can upload file" → `user-can-upload-file`)
- **User story/action inventory**: Discrete user actions with acceptance criteria
- **Task coverage mapping**: Map each task to one or more requirements or stories (inference by keyword or explicit reference patterns such as IDs or key phrases)
- **Constitution rule set**: Extract principle names and MUST/SHOULD normative statements
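The slug derivation above can be sketched as a small shell helper. The exact normalization is an assumption (lowercase, runs of non-alphanumerics collapsed to single hyphens, edge hyphens trimmed); the document only fixes the input/output example.

```shell
slug() {
  # Lowercase, then collapse every run of non-alphanumerics to "-" and trim edge hyphens
  printf '%s' "$1" \
    | tr '[:upper:]' '[:lower:]' \
    | sed -E 's/[^a-z0-9]+/-/g; s/^-+//; s/-+$//'
}

slug "User can upload file"   # → user-can-upload-file
```

Because the key is derived deterministically from the phrasing, rerunning the analysis on an unchanged spec yields the same keys, which supports the deterministic-results principle stated later in this document.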
### 4. Detection Passes (Token-Efficient Analysis)

Focus on high-signal findings. Limit to 50 findings total; aggregate the remainder in an overflow summary.

#### A. Duplication Detection

- Identify near-duplicate requirements
- Mark lower-quality phrasing for consolidation

#### B. Ambiguity Detection

- Flag vague adjectives (fast, scalable, secure, intuitive, robust) lacking measurable criteria
- Flag unresolved placeholders (TODO, TKTK, ???, `<placeholder>`, etc.)

#### C. Underspecification

- Requirements with verbs but missing an object or measurable outcome
- User stories missing acceptance criteria alignment
- Tasks referencing files or components not defined in the spec/plan

#### D. Constitution Alignment

- Any requirement or plan element conflicting with a MUST principle
- Missing mandated sections or quality gates from the constitution

#### E. Coverage Gaps

- Requirements with zero associated tasks
- Tasks with no mapped requirement/story
- Non-functional requirements not reflected in tasks (e.g., performance, security)

#### F. Inconsistency

- Terminology drift (same concept named differently across files)
- Data entities referenced in the plan but absent in the spec (or vice versa)
- Task ordering contradictions (e.g., integration tasks before foundational setup tasks without a dependency note)
- Conflicting requirements (e.g., one requires Next.js while another specifies Vue)

### 5. Severity Assignment

Use this heuristic to prioritize findings:

- **CRITICAL**: Violates a constitution MUST, missing core spec artifact, or a requirement with zero coverage that blocks baseline functionality
- **HIGH**: Duplicate or conflicting requirement, ambiguous security/performance attribute, untestable acceptance criterion
- **MEDIUM**: Terminology drift, missing non-functional task coverage, underspecified edge case
- **LOW**: Style/wording improvements, minor redundancy not affecting execution order

### 6. Produce Compact Analysis Report

Output a Markdown report (no file writes) with the following structure:

## Specification Analysis Report

| ID | Category | Severity | Location(s) | Summary | Recommendation |
|----|----------|----------|-------------|---------|----------------|
| A1 | Duplication | HIGH | spec.md:L120-134 | Two similar requirements ... | Merge phrasing; keep clearer version |

(Add one row per finding; generate stable IDs prefixed by the category initial.)

**Coverage Summary Table:**

| Requirement Key | Has Task? | Task IDs | Notes |
|-----------------|-----------|----------|-------|

**Constitution Alignment Issues:** (if any)

**Unmapped Tasks:** (if any)

**Metrics:**

- Total Requirements
- Total Tasks
- Coverage % (requirements with >=1 task)
- Ambiguity Count
- Duplication Count
- Critical Issues Count
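The Coverage % metric reduces to a one-pass count once the step 3 task coverage mapping is flattened. The tab-separated intermediate below is a hypothetical representation, not a format this document prescribes: one `requirement-key<TAB>task IDs` line per requirement, with an empty second field meaning uncovered.

```shell
# Hypothetical flattened mapping: requirement-key <TAB> task IDs (empty = uncovered)
printf 'user-can-upload-file\tT001 T002\nperformance-metrics\t\n' > coverage.tsv

awk -F'\t' '
  { total++; if ($2 != "") covered++ }
  END { printf "Coverage: %d/%d (%.0f%%)\n", covered, total, 100 * covered / total }
' coverage.tsv
# → Coverage: 1/2 (50%)
```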
### 7. Provide Next Actions

At the end of the report, output a concise Next Actions block:

- If CRITICAL issues exist: recommend resolving them before `/speckit.implement`
- If only LOW/MEDIUM: the user may proceed, but provide improvement suggestions
- Provide explicit command suggestions: e.g., "Run /speckit.specify with refinement", "Run /speckit.plan to adjust architecture", "Manually edit tasks.md to add coverage for 'performance-metrics'"

### 8. Offer Remediation

Ask the user: "Would you like me to suggest concrete remediation edits for the top N issues?" (Do NOT apply them automatically.)

## Operating Principles

### Context Efficiency

- **Minimal high-signal tokens**: Focus on actionable findings, not exhaustive documentation
- **Progressive disclosure**: Load artifacts incrementally; don't dump all content into the analysis
- **Token-efficient output**: Limit the findings table to 50 rows; summarize overflow
- **Deterministic results**: Rerunning without changes should produce consistent IDs and counts

### Analysis Guidelines

- **NEVER modify files** (this is read-only analysis)
- **NEVER hallucinate missing sections** (if absent, report them accurately)
- **Prioritize constitution violations** (these are always CRITICAL)
- **Use examples over exhaustive rules** (cite specific instances, not generic patterns)
- **Report zero issues gracefully** (emit a success report with coverage statistics)

## Context

$ARGUMENTS
.claude/commands/speckit.checklist.md (new file, 295 lines)

@@ -0,0 +1,295 @@
---
description: Generate a custom checklist for the current feature based on user requirements.
---

## Checklist Purpose: "Unit Tests for English"

**CRITICAL CONCEPT**: Checklists are **UNIT TESTS FOR REQUIREMENTS WRITING** - they validate the quality, clarity, and completeness of requirements in a given domain.

**NOT for verification/testing**:

- ❌ NOT "Verify the button clicks correctly"
- ❌ NOT "Test error handling works"
- ❌ NOT "Confirm the API returns 200"
- ❌ NOT checking if code/implementation matches the spec

**FOR requirements quality validation**:

- ✅ "Are visual hierarchy requirements defined for all card types?" (completeness)
- ✅ "Is 'prominent display' quantified with specific sizing/positioning?" (clarity)
- ✅ "Are hover state requirements consistent across all interactive elements?" (consistency)
- ✅ "Are accessibility requirements defined for keyboard navigation?" (coverage)
- ✅ "Does the spec define what happens when the logo image fails to load?" (edge cases)

**Metaphor**: If your spec is code written in English, the checklist is its unit test suite. You're testing whether the requirements are well-written, complete, unambiguous, and ready for implementation - NOT whether the implementation works.

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Execution Steps

1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from the repo root and parse its JSON output for FEATURE_DIR and the AVAILABLE_DOCS list.
   - All file paths must be absolute.
   - For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Clarify intent (dynamic)**: Derive up to THREE initial contextual clarifying questions (no pre-baked catalog). They MUST:
   - Be generated from the user's phrasing + extracted signals from spec/plan/tasks
   - Only ask about information that materially changes checklist content
   - Be skipped individually if already unambiguous in `$ARGUMENTS`
   - Prefer precision over breadth

   Generation algorithm:

   1. Extract signals: feature domain keywords (e.g., auth, latency, UX, API), risk indicators ("critical", "must", "compliance"), stakeholder hints ("QA", "review", "security team"), and explicit deliverables ("a11y", "rollback", "contracts").
   2. Cluster signals into candidate focus areas (max 4) ranked by relevance.
   3. Identify the probable audience & timing (author, reviewer, QA, release) if not explicit.
   4. Detect missing dimensions: scope breadth, depth/rigor, risk emphasis, exclusion boundaries, measurable acceptance criteria.
   5. Formulate questions chosen from these archetypes:
      - Scope refinement (e.g., "Should this include integration touchpoints with X and Y or stay limited to local module correctness?")
      - Risk prioritization (e.g., "Which of these potential risk areas should receive mandatory gating checks?")
      - Depth calibration (e.g., "Is this a lightweight pre-commit sanity list or a formal release gate?")
      - Audience framing (e.g., "Will this be used by the author only or peers during PR review?")
      - Boundary exclusion (e.g., "Should we explicitly exclude performance tuning items this round?")
      - Scenario class gap (e.g., "No recovery flows detected - are rollback / partial failure paths in scope?")

   Question formatting rules:

   - If presenting options, generate a compact table with columns: Option | Candidate | Why It Matters
   - Limit to A–E options maximum; omit the table if a free-form answer is clearer
   - Never ask the user to restate what they already said
   - Avoid speculative categories (no hallucination). If uncertain, ask explicitly: "Confirm whether X belongs in scope."

   Defaults when interaction is impossible:

   - Depth: Standard
   - Audience: Reviewer (PR) if code-related; Author otherwise
   - Focus: Top 2 relevance clusters

   Output the questions (label Q1/Q2/Q3). After answers: if ≥2 scenario classes (Alternate / Exception / Recovery / Non-Functional domain) remain unclear, you MAY ask up to TWO more targeted follow-ups (Q4/Q5) with a one-line justification each (e.g., "Unresolved recovery path risk"). Do not exceed five total questions. Skip escalation if the user explicitly declines more.

3. **Understand user request**: Combine `$ARGUMENTS` + clarifying answers:
   - Derive the checklist theme (e.g., security, review, deploy, ux)
   - Consolidate explicit must-have items mentioned by the user
   - Map focus selections to category scaffolding
   - Infer any missing context from spec/plan/tasks (do NOT hallucinate)

4. **Load feature context**: Read from FEATURE_DIR:
   - spec.md: Feature requirements and scope
   - plan.md (if it exists): Technical details, dependencies
   - tasks.md (if it exists): Implementation tasks

   **Context Loading Strategy**:

   - Load only the portions relevant to active focus areas (avoid full-file dumping)
   - Prefer summarizing long sections into concise scenario/requirement bullets
   - Use progressive disclosure: add follow-on retrieval only if gaps are detected
   - If source docs are large, generate interim summary items instead of embedding raw text

5. **Generate checklist** - Create "Unit Tests for Requirements":
   - Create the `FEATURE_DIR/checklists/` directory if it doesn't exist
   - Generate a unique checklist filename:
     - Use a short, descriptive name based on the domain (e.g., `ux.md`, `api.md`, `security.md`)
     - Format: `[domain].md`
   - File handling behavior:
     - If the file does NOT exist: create a new file and number items starting from CHK001
     - If the file exists: append new items, continuing from the last CHK ID (e.g., if the last item is CHK015, start new items at CHK016)
     - Never delete or replace existing checklist content - always preserve and append
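The append-mode ID continuation can be sketched in bash. The checklist path and its contents are hypothetical stand-ins, and `10#` forces base-10 arithmetic so that a zero-padded ID like `015` is not read as octal:

```shell
file=$(mktemp)   # stands in for FEATURE_DIR/checklists/ux.md
printf -- '- [ ] CHK001 - Are error formats specified? [Gap]\n- [ ] CHK015 - Is "fast" quantified? [Clarity]\n' > "$file"

last=$(grep -oE 'CHK[0-9]{3}' "$file" | sort | tail -n1)   # highest existing ID: CHK015
next=$(printf 'CHK%03d' $(( 10#${last#CHK} + 1 )))
echo "$next"   # → CHK016
```

When the file does not exist yet, the create branch simply starts at CHK001 instead.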
**CORE PRINCIPLE - Test the Requirements, Not the Implementation**:

Every checklist item MUST evaluate the REQUIREMENTS THEMSELVES for:

- **Completeness**: Are all necessary requirements present?
- **Clarity**: Are requirements unambiguous and specific?
- **Consistency**: Do requirements align with each other?
- **Measurability**: Can requirements be objectively verified?
- **Coverage**: Are all scenarios/edge cases addressed?

**Category Structure** - Group items by requirement quality dimensions:

- **Requirement Completeness** (Are all necessary requirements documented?)
- **Requirement Clarity** (Are requirements specific and unambiguous?)
- **Requirement Consistency** (Do requirements align without conflicts?)
- **Acceptance Criteria Quality** (Are success criteria measurable?)
- **Scenario Coverage** (Are all flows/cases addressed?)
- **Edge Case Coverage** (Are boundary conditions defined?)
- **Non-Functional Requirements** (Performance, security, accessibility, etc. - are they specified?)
- **Dependencies & Assumptions** (Are they documented and validated?)
- **Ambiguities & Conflicts** (What needs clarification?)

**HOW TO WRITE CHECKLIST ITEMS - "Unit Tests for English"**:

❌ **WRONG** (testing implementation):

- "Verify landing page displays 3 episode cards"
- "Test hover states work on desktop"
- "Confirm logo click navigates home"

✅ **CORRECT** (testing requirements quality):

- "Are the exact number and layout of featured episodes specified?" [Completeness]
- "Is 'prominent display' quantified with specific sizing/positioning?" [Clarity]
- "Are hover state requirements consistent across all interactive elements?" [Consistency]
- "Are keyboard navigation requirements defined for all interactive UI?" [Coverage]
- "Is the fallback behavior specified when the logo image fails to load?" [Edge Cases]
- "Are loading states defined for asynchronous episode data?" [Completeness]
- "Does the spec define visual hierarchy for competing UI elements?" [Clarity]

**ITEM STRUCTURE**:

Each item should follow this pattern:

- Question format asking about requirement quality
- Focus on what's WRITTEN (or not written) in the spec/plan
- Include the quality dimension in brackets [Completeness/Clarity/Consistency/etc.]
- Reference the spec section `[Spec §X.Y]` when checking existing requirements
- Use the `[Gap]` marker when checking for missing requirements

**EXAMPLES BY QUALITY DIMENSION**:

Completeness:

- "Are error handling requirements defined for all API failure modes? [Gap]"
- "Are accessibility requirements specified for all interactive elements? [Completeness]"
- "Are mobile breakpoint requirements defined for responsive layouts? [Gap]"

Clarity:

- "Is 'fast loading' quantified with specific timing thresholds? [Clarity, Spec §NFR-2]"
- "Are 'related episodes' selection criteria explicitly defined? [Clarity, Spec §FR-5]"
- "Is 'prominent' defined with measurable visual properties? [Ambiguity, Spec §FR-4]"

Consistency:

- "Do navigation requirements align across all pages? [Consistency, Spec §FR-10]"
- "Are card component requirements consistent between landing and detail pages? [Consistency]"

Coverage:

- "Are requirements defined for zero-state scenarios (no episodes)? [Coverage, Edge Case]"
- "Are concurrent user interaction scenarios addressed? [Coverage, Gap]"
- "Are requirements specified for partial data loading failures? [Coverage, Exception Flow]"

Measurability:

- "Are visual hierarchy requirements measurable/testable? [Acceptance Criteria, Spec §FR-1]"
- "Can 'balanced visual weight' be objectively verified? [Measurability, Spec §FR-2]"

**Scenario Classification & Coverage** (requirements quality focus):

- Check whether requirements exist for: Primary, Alternate, Exception/Error, Recovery, and Non-Functional scenarios
- For each scenario class, ask: "Are [scenario type] requirements complete, clear, and consistent?"
- If a scenario class is missing: "Are [scenario type] requirements intentionally excluded or missing? [Gap]"
- Include resilience/rollback when state mutation occurs: "Are rollback requirements defined for migration failures? [Gap]"

**Traceability Requirements**:

- MINIMUM: ≥80% of items MUST include at least one traceability reference
- Each item should reference a spec section `[Spec §X.Y]`, or use the markers `[Gap]`, `[Ambiguity]`, `[Conflict]`, `[Assumption]`
- If no ID system exists: "Is a requirement & acceptance criteria ID scheme established? [Traceability]"
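The ≥80% floor lends itself to a mechanical check. A sketch, assuming items follow the `- [ ] CHK###` shape and the marker vocabulary listed above; the sample checklist content is hypothetical:

```shell
file=$(mktemp)   # stands in for a generated checklist such as checklists/ux.md
printf -- '%s\n' \
  '- [ ] CHK001 - Are error formats defined? [Gap]' \
  '- [ ] CHK002 - Is "fast" quantified? [Clarity, Spec §NFR-1]' \
  '- [ ] CHK003 - Are flows covered?' \
  '- [ ] CHK004 - Do sections conflict? [Conflict]' \
  '- [ ] CHK005 - Is the API assumption validated? [Assumption]' > "$file"

# Count all items, then the subset carrying at least one traceability reference
total=$(grep -cE '^- \[ \] CHK[0-9]+' "$file")
traced=$(grep -cE '^- \[ \] CHK[0-9]+.*(Spec §|\[Gap|\[Ambiguity|\[Conflict|\[Assumption)' "$file")
echo "Traceability: $traced/$total ($(( 100 * traced / total ))%)"   # → Traceability: 4/5 (80%)
```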
**Surface & Resolve Issues** (requirements quality problems):

Ask questions about the requirements themselves:

- Ambiguities: "Is the term 'fast' quantified with specific metrics? [Ambiguity, Spec §NFR-1]"
- Conflicts: "Do navigation requirements conflict between §FR-10 and §FR-10a? [Conflict]"
- Assumptions: "Is the assumption of an 'always available podcast API' validated? [Assumption]"
- Dependencies: "Are external podcast API requirements documented? [Dependency, Gap]"
- Missing definitions: "Is 'visual hierarchy' defined with measurable criteria? [Gap]"

**Content Consolidation**:

- Soft cap: if raw candidate items exceed 40, prioritize by risk/impact
- Merge near-duplicates that check the same requirement aspect
- If there are more than 5 low-impact edge cases, create one item: "Are edge cases X, Y, Z addressed in requirements? [Coverage]"

**🚫 ABSOLUTELY PROHIBITED** - these make it an implementation test, not a requirements test:

- ❌ Any item starting with "Verify", "Test", "Confirm", "Check" + implementation behavior
- ❌ References to code execution, user actions, system behavior
- ❌ "Displays correctly", "works properly", "functions as expected"
- ❌ "Click", "navigate", "render", "load", "execute"
- ❌ Test cases, test plans, QA procedures
- ❌ Implementation details (frameworks, APIs, algorithms)

**✅ REQUIRED PATTERNS** - these test requirements quality:

- ✅ "Are [requirement type] defined/specified/documented for [scenario]?"
- ✅ "Is [vague term] quantified/clarified with specific criteria?"
- ✅ "Are requirements consistent between [section A] and [section B]?"
- ✅ "Can [requirement] be objectively measured/verified?"
- ✅ "Are [edge cases/scenarios] addressed in requirements?"
- ✅ "Does the spec define [missing aspect]?"

6. **Structure Reference**: Generate the checklist following the canonical template in `.specify/templates/checklist-template.md` for the title, meta section, category headings, and ID formatting. If the template is unavailable, use: an H1 title, purpose/created meta lines, and `##` category sections containing `- [ ] CHK### <requirement item>` lines with globally incrementing IDs starting at CHK001.

7. **Report**: Output the full path to the checklist file, the item count, and whether the run created a new file or appended to an existing one. Summarize:
   - Focus areas selected
   - Depth level
   - Actor/timing
   - Any explicit user-specified must-have items incorporated

**Important**: Each `/speckit.checklist` command invocation uses a short, descriptive checklist filename and either creates a new file or appends to an existing one. This allows:

- Multiple checklists of different types (e.g., `ux.md`, `test.md`, `security.md`)
- Simple, memorable filenames that indicate checklist purpose
- Easy identification and navigation in the `checklists/` folder

To avoid clutter, use descriptive types and clean up obsolete checklists when done.

## Example Checklist Types & Sample Items

**UX Requirements Quality:** `ux.md`

Sample items (testing the requirements, NOT the implementation):

- "Are visual hierarchy requirements defined with measurable criteria? [Clarity, Spec §FR-1]"
- "Is the number and positioning of UI elements explicitly specified? [Completeness, Spec §FR-1]"
- "Are interaction state requirements (hover, focus, active) consistently defined? [Consistency]"
- "Are accessibility requirements specified for all interactive elements? [Coverage, Gap]"
- "Is fallback behavior defined when images fail to load? [Edge Case, Gap]"
- "Can 'prominent display' be objectively measured? [Measurability, Spec §FR-4]"

**API Requirements Quality:** `api.md`

Sample items:

- "Are error response formats specified for all failure scenarios? [Completeness]"
- "Are rate limiting requirements quantified with specific thresholds? [Clarity]"
- "Are authentication requirements consistent across all endpoints? [Consistency]"
- "Are retry/timeout requirements defined for external dependencies? [Coverage, Gap]"
- "Is the versioning strategy documented in requirements? [Gap]"

**Performance Requirements Quality:** `performance.md`

Sample items:

- "Are performance requirements quantified with specific metrics? [Clarity]"
- "Are performance targets defined for all critical user journeys? [Coverage]"
- "Are performance requirements under different load conditions specified? [Completeness]"
- "Can performance requirements be objectively measured? [Measurability]"
- "Are degradation requirements defined for high-load scenarios? [Edge Case, Gap]"

**Security Requirements Quality:** `security.md`

Sample items:

- "Are authentication requirements specified for all protected resources? [Coverage]"
- "Are data protection requirements defined for sensitive information? [Completeness]"
- "Is the threat model documented and are requirements aligned to it? [Traceability]"
- "Are security requirements consistent with compliance obligations? [Consistency]"
- "Are security failure/breach response requirements defined? [Gap, Exception Flow]"

## Anti-Examples: What NOT To Do

**❌ WRONG - these test implementation, not requirements:**

```markdown
- [ ] CHK001 - Verify landing page displays 3 episode cards [Spec §FR-001]
- [ ] CHK002 - Test hover states work correctly on desktop [Spec §FR-003]
- [ ] CHK003 - Confirm logo click navigates to home page [Spec §FR-010]
- [ ] CHK004 - Check that related episodes section shows 3-5 items [Spec §FR-005]
```

**✅ CORRECT - these test requirements quality:**

```markdown
- [ ] CHK001 - Are the number and layout of featured episodes explicitly specified? [Completeness, Spec §FR-001]
- [ ] CHK002 - Are hover state requirements consistently defined for all interactive elements? [Consistency, Spec §FR-003]
- [ ] CHK003 - Are navigation requirements clear for all clickable brand elements? [Clarity, Spec §FR-010]
- [ ] CHK004 - Are the selection criteria for related episodes documented? [Gap, Spec §FR-005]
- [ ] CHK005 - Are loading state requirements defined for asynchronous episode data? [Gap]
- [ ] CHK006 - Can "visual hierarchy" requirements be objectively measured? [Measurability, Spec §FR-001]
```

**Key Differences:**

- Wrong: tests whether the system works correctly
- Correct: tests whether the requirements are written correctly
- Wrong: verification of behavior
- Correct: validation of requirement quality
- Wrong: "Does it do X?"
- Correct: "Is X clearly specified?"
.claude/commands/speckit.clarify.md (new file, 181 lines)

@@ -0,0 +1,181 @@
---
description: Identify underspecified areas in the current feature spec by asking up to 5 highly targeted clarification questions and encoding answers back into the spec.
handoffs:
  - label: Build Technical Plan
    agent: speckit.plan
    prompt: Create a plan for the spec. I am building with...
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

Goal: Detect and reduce ambiguity or missing decision points in the active feature specification and record the clarifications directly in the spec file.

Note: This clarification workflow is expected to run (and be completed) BEFORE invoking `/speckit.plan`. If the user explicitly states they are skipping clarification (e.g., an exploratory spike), you may proceed, but must warn that downstream rework risk increases.

Execution steps:

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --paths-only` from the repo root **once** (combined `--json --paths-only` mode / `-Json -PathsOnly`). Parse the minimal JSON payload fields:
   - `FEATURE_DIR`
   - `FEATURE_SPEC`
   - (Optionally capture `IMPL_PLAN` and `TASKS` for future chained flows.)
   - If JSON parsing fails, abort and instruct the user to re-run `/speckit.specify` or verify the feature branch environment.
   - For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
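Step 1's parse-or-abort behavior can be sketched as follows. As before, this is a hedged sketch: the inline payload stands in for the script's real output, and the field names are the ones listed above.

```shell
# Stand-in payload for: .specify/scripts/bash/check-prerequisites.sh --json --paths-only
json='{"FEATURE_DIR":"/repo/specs/001-demo","FEATURE_SPEC":"/repo/specs/001-demo/spec.md"}'

# Abort with guidance rather than continuing on a malformed payload
if ! FEATURE_SPEC=$(jq -er '.FEATURE_SPEC' <<<"$json"); then
  echo "ERROR: could not parse prerequisites JSON; re-run /speckit.specify or check the feature branch" >&2
  exit 1
fi
echo "$FEATURE_SPEC"   # → /repo/specs/001-demo/spec.md
```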
|
||||||
|
|
||||||
|
2. Load the current spec file. Perform a structured ambiguity & coverage scan using this taxonomy. For each category, mark status: Clear / Partial / Missing. Produce an internal coverage map used for prioritization (do not output raw map unless no questions will be asked).

   Functional Scope & Behavior:
   - Core user goals & success criteria
   - Explicit out-of-scope declarations
   - User roles / personas differentiation

   Domain & Data Model:
   - Entities, attributes, relationships
   - Identity & uniqueness rules
   - Lifecycle/state transitions
   - Data volume / scale assumptions

   Interaction & UX Flow:
   - Critical user journeys / sequences
   - Error/empty/loading states
   - Accessibility or localization notes

   Non-Functional Quality Attributes:
   - Performance (latency, throughput targets)
   - Scalability (horizontal/vertical, limits)
   - Reliability & availability (uptime, recovery expectations)
   - Observability (logging, metrics, tracing signals)
   - Security & privacy (authN/Z, data protection, threat assumptions)
   - Compliance / regulatory constraints (if any)

   Integration & External Dependencies:
   - External services/APIs and failure modes
   - Data import/export formats
   - Protocol/versioning assumptions

   Edge Cases & Failure Handling:
   - Negative scenarios
   - Rate limiting / throttling
   - Conflict resolution (e.g., concurrent edits)

   Constraints & Tradeoffs:
   - Technical constraints (language, storage, hosting)
   - Explicit tradeoffs or rejected alternatives

   Terminology & Consistency:
   - Canonical glossary terms
   - Avoided synonyms / deprecated terms

   Completion Signals:
   - Acceptance criteria testability
   - Measurable Definition of Done style indicators

   Misc / Placeholders:
   - TODO markers / unresolved decisions
   - Ambiguous adjectives ("robust", "intuitive") lacking quantification

   For each category with Partial or Missing status, add a candidate question opportunity unless:
   - Clarification would not materially change implementation or validation strategy
   - Information is better deferred to planning phase (note internally)

3. Generate (internally) a prioritized queue of candidate clarification questions (maximum 5). Do NOT output them all at once. Apply these constraints:
   - Maximum of 5 total questions across the whole session.
   - Each question must be answerable with EITHER:
     - A short multiple‑choice selection (2–5 distinct, mutually exclusive options), OR
     - A one-word / short‑phrase answer (explicitly constrain: "Answer in <=5 words").
   - Only include questions whose answers materially impact architecture, data modeling, task decomposition, test design, UX behavior, operational readiness, or compliance validation.
   - Ensure category coverage balance: attempt to cover the highest impact unresolved categories first; avoid asking two low-impact questions when a single high-impact area (e.g., security posture) is unresolved.
   - Exclude questions already answered, trivial stylistic preferences, or plan-level execution details (unless blocking correctness).
   - Favor clarifications that reduce downstream rework risk or prevent misaligned acceptance tests.
   - If more than 5 categories remain unresolved, select the top 5 by (Impact * Uncertainty) heuristic.
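The (Impact * Uncertainty) selection in step 3 can be sketched as a simple scoring pass; the category names and 1–5 scores below are illustrative assumptions, not prescribed values.

```python
def prioritize(candidates, limit=5):
    """Rank candidate questions by Impact * Uncertainty, highest first, capped at `limit`."""
    ranked = sorted(candidates, key=lambda q: q["impact"] * q["uncertainty"], reverse=True)
    return ranked[:limit]

queue = prioritize([
    {"category": "Terminology", "impact": 1, "uncertainty": 2},
    {"category": "Security & privacy", "impact": 5, "uncertainty": 3},
    {"category": "Domain & Data Model", "impact": 4, "uncertainty": 5},
])
```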
4. Sequential questioning loop (interactive):
   - Present EXACTLY ONE question at a time.
   - For multiple‑choice questions:
     - **Analyze all options** and determine the **most suitable option** based on:
       - Best practices for the project type
       - Common patterns in similar implementations
       - Risk reduction (security, performance, maintainability)
       - Alignment with any explicit project goals or constraints visible in the spec
     - Present your **recommended option prominently** at the top with clear reasoning (1-2 sentences explaining why this is the best choice).
     - Format as: `**Recommended:** Option [X] - <reasoning>`
     - Then render all options as a Markdown table:

       | Option | Description |
       |--------|-------------|
       | A | <Option A description> |
       | B | <Option B description> |
       | C | <Option C description> (add D/E as needed up to 5) |
       | Short | Provide a different short answer (<=5 words) (Include only if a free-form alternative is appropriate) |

     - After the table, add: `You can reply with the option letter (e.g., "A"), accept the recommendation by saying "yes" or "recommended", or provide your own short answer.`
   - For short‑answer style (no meaningful discrete options):
     - Provide your **suggested answer** based on best practices and context.
     - Format as: `**Suggested:** <your proposed answer> - <brief reasoning>`
     - Then output: `Format: Short answer (<=5 words). You can accept the suggestion by saying "yes" or "suggested", or provide your own answer.`
   - After the user answers:
     - If the user replies with "yes", "recommended", or "suggested", use your previously stated recommendation/suggestion as the answer.
     - Otherwise, validate that the answer maps to one option or fits the <=5 word constraint.
     - If ambiguous, ask for a quick disambiguation (the retry still belongs to the same question; do not advance).
     - Once satisfactory, record it in working memory (do not yet write to disk) and move to the next queued question.
   - Stop asking further questions when:
     - All critical ambiguities are resolved early (remaining queued items become unnecessary), OR
     - The user signals completion ("done", "good", "no more"), OR
     - You reach 5 asked questions.
   - Never reveal future queued questions in advance.
   - If no valid questions exist at the start, immediately report no critical ambiguities.
5. Integration after EACH accepted answer (incremental update approach):
   - Maintain an in-memory representation of the spec (loaded once at start) plus the raw file contents.
   - For the first integrated answer in this session:
     - Ensure a `## Clarifications` section exists (create it just after the highest-level contextual/overview section per the spec template if missing).
     - Under it, create (if not present) a `### Session YYYY-MM-DD` subheading for today.
   - Append a bullet line immediately after acceptance: `- Q: <question> → A: <final answer>`.
   - Then immediately apply the clarification to the most appropriate section(s):
     - Functional ambiguity → Update or add a bullet in Functional Requirements.
     - User interaction / actor distinction → Update User Stories or Actors subsection (if present) with the clarified role, constraint, or scenario.
     - Data shape / entities → Update Data Model (add fields, types, relationships) preserving ordering; note added constraints succinctly.
     - Non-functional constraint → Add/modify measurable criteria in the Non-Functional / Quality Attributes section (convert a vague adjective to a metric or explicit target).
     - Edge case / negative flow → Add a new bullet under Edge Cases / Error Handling (or create that subsection if the template provides a placeholder for it).
     - Terminology conflict → Normalize the term across the spec; retain the original only if necessary by adding `(formerly referred to as "X")` once.
   - If the clarification invalidates an earlier ambiguous statement, replace that statement instead of duplicating; leave no obsolete contradictory text.
   - Save the spec file AFTER each integration to minimize risk of context loss (atomic overwrite).
   - Preserve formatting: do not reorder unrelated sections; keep the heading hierarchy intact.
   - Keep each inserted clarification minimal and testable (avoid narrative drift).
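The heading/bullet bookkeeping of step 5 can be sketched as below. Placement is simplified here (headings are appended at the end rather than after the overview section), so treat this as an illustration of the idempotent heading checks, not the full integration logic; the sample spec text and Q/A pairs are invented.

```python
def record_clarification(spec, question, answer, today):
    """Add `- Q: ... → A: ...` under `## Clarifications` / `### Session <today>`,
    creating either heading only if it is missing."""
    if "## Clarifications" not in spec:
        spec = spec.rstrip() + "\n\n## Clarifications\n"
    session = f"### Session {today}"
    if session not in spec:
        spec = spec.rstrip() + f"\n\n{session}\n"
    return spec.rstrip() + f"\n- Q: {question} → A: {answer}\n"

spec = "# Feature\n\nOverview text.\n"
spec = record_clarification(spec, "Max upload size?", "10 MB", "2025-01-15")
spec = record_clarification(spec, "Auth method?", "OAuth2", "2025-01-15")
```

Because both heading checks are membership tests, repeated calls in the same session add bullets without duplicating the section or session headings.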
6. Validation (performed after EACH write plus a final pass):
   - Clarifications session contains exactly one bullet per accepted answer (no duplicates).
   - Total asked (accepted) questions ≤ 5.
   - Updated sections contain no lingering vague placeholders the new answer was meant to resolve.
   - No contradictory earlier statement remains (scan for now-invalid alternative choices and remove them).
   - Markdown structure valid; only allowed new headings: `## Clarifications`, `### Session YYYY-MM-DD`.
   - Terminology consistency: same canonical term used across all updated sections.

7. Write the updated spec back to `FEATURE_SPEC`.

8. Report completion (after the questioning loop ends or early termination):
   - Number of questions asked & answered.
   - Path to updated spec.
   - Sections touched (list names).
   - Coverage summary table listing each taxonomy category with Status: Resolved (was Partial/Missing and addressed), Deferred (exceeds question quota or better suited for planning), Clear (already sufficient), Outstanding (still Partial/Missing but low impact).
   - If any Outstanding or Deferred items remain, recommend whether to proceed to `/speckit.plan` or run `/speckit.clarify` again later post-plan.
   - Suggested next command.

Behavior rules:

- If no meaningful ambiguities are found (or all potential questions would be low-impact), respond: "No critical ambiguities detected worth formal clarification." and suggest proceeding.
- If the spec file is missing, instruct the user to run `/speckit.specify` first (do not create a new spec here).
- Never exceed 5 total asked questions (clarification retries for a single question do not count as new questions).
- Avoid speculative tech stack questions unless the absence blocks functional clarity.
- Respect user early termination signals ("stop", "done", "proceed").
- If no questions were asked due to full coverage, output a compact coverage summary (all categories Clear) then suggest advancing.
- If the quota is reached with unresolved high-impact categories remaining, explicitly flag them under Deferred with rationale.

Context for prioritization: $ARGUMENTS
84
.claude/commands/speckit.constitution.md
Normal file
@@ -0,0 +1,84 @@
---
description: Create or update the project constitution from interactive or provided principle inputs, ensuring all dependent templates stay in sync.
handoffs:
- label: Build Specification
  agent: speckit.specify
  prompt: Implement the feature specification based on the updated constitution. I want to build...
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

You are updating the project constitution at `.specify/memory/constitution.md`. This file is a TEMPLATE containing placeholder tokens in square brackets (e.g. `[PROJECT_NAME]`, `[PRINCIPLE_1_NAME]`). Your job is to (a) collect/derive concrete values, (b) fill the template precisely, and (c) propagate any amendments across dependent artifacts.

**Note**: If `.specify/memory/constitution.md` does not exist yet, it should have been initialized from `.specify/templates/constitution-template.md` during project setup. If it's missing, copy the template first.

Follow this execution flow:

1. Load the existing constitution at `.specify/memory/constitution.md`.
   - Identify every placeholder token of the form `[ALL_CAPS_IDENTIFIER]`.

   **IMPORTANT**: The user might require fewer or more principles than the template uses. If a number is specified, respect it and follow the general template. You will update the doc accordingly.
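A sketch of the placeholder scan in step 1 — a single regex covers tokens of the form `[ALL_CAPS_IDENTIFIER]` (here also allowing digits and underscores after the first letter); the template snippet is invented for illustration:

```python
import re

PLACEHOLDER = re.compile(r"\[([A-Z][A-Z0-9_]*)\]")

def find_placeholders(text):
    """Return unique placeholder identifiers in the order they first appear."""
    seen = []
    for token in PLACEHOLDER.findall(text):
        if token not in seen:
            seen.append(token)
    return seen

template = ("# [PROJECT_NAME] Constitution\n\n"
            "## [PRINCIPLE_1_NAME]\n"
            "Ratified: [RATIFICATION_DATE] | Amended: [LAST_AMENDED_DATE]\n")
tokens = find_placeholders(template)
```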
2. Collect/derive values for placeholders:
   - If user input (conversation) supplies a value, use it.
   - Otherwise infer from existing repo context (README, docs, prior constitution versions if embedded).
   - For governance dates: `RATIFICATION_DATE` is the original adoption date (if unknown, ask or mark TODO); `LAST_AMENDED_DATE` is today if changes are made, otherwise keep the previous value.
   - `CONSTITUTION_VERSION` must increment according to semantic versioning rules:
     - MAJOR: Backward-incompatible governance/principle removals or redefinitions.
     - MINOR: New principle/section added or materially expanded guidance.
     - PATCH: Clarifications, wording, typo fixes, non-semantic refinements.
   - If the version bump type is ambiguous, propose reasoning before finalizing.
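The versioning rules above map directly onto a small helper; a sketch:

```python
def bump_version(version, change):
    """Increment CONSTITUTION_VERSION per the semantic-versioning rules above."""
    major, minor, patch = (int(part) for part in version.split("."))
    if change == "MAJOR":   # incompatible removals or redefinitions
        return f"{major + 1}.0.0"
    if change == "MINOR":   # new/expanded principle or section
        return f"{major}.{minor + 1}.0"
    if change == "PATCH":   # clarifications and wording fixes
        return f"{major}.{minor}.{patch + 1}"
    raise ValueError(f"unknown change type: {change}")
```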
3. Draft the updated constitution content:
   - Replace every placeholder with concrete text (no bracketed tokens left except intentionally retained template slots that the project has chosen not to define yet; explicitly justify any left).
   - Preserve the heading hierarchy; comments can be removed once replaced unless they still add clarifying guidance.
   - Ensure each Principle section has: a succinct name line, a paragraph (or bullet list) capturing non‑negotiable rules, and an explicit rationale if not obvious.
   - Ensure the Governance section lists the amendment procedure, versioning policy, and compliance review expectations.

4. Consistency propagation checklist (convert the prior checklist into active validations):
   - Read `.specify/templates/plan-template.md` and ensure any "Constitution Check" or rules align with updated principles.
   - Read `.specify/templates/spec-template.md` for scope/requirements alignment; update if the constitution adds/removes mandatory sections or constraints.
   - Read `.specify/templates/tasks-template.md` and ensure task categorization reflects new or removed principle-driven task types (e.g., observability, versioning, testing discipline).
   - Read each command file in `.specify/templates/commands/*.md` (including this one) to verify no outdated references (agent-specific names like CLAUDE only) remain when generic guidance is required.
   - Read any runtime guidance docs (e.g., `README.md`, `docs/quickstart.md`, or agent-specific guidance files if present). Update references to changed principles.

5. Produce a Sync Impact Report (prepend as an HTML comment at the top of the constitution file after the update):
   - Version change: old → new
   - List of modified principles (old title → new title if renamed)
   - Added sections
   - Removed sections
   - Templates requiring updates (✅ updated / ⚠ pending) with file paths
   - Follow-up TODOs if any placeholders were intentionally deferred.

6. Validation before final output:
   - No remaining unexplained bracket tokens.
   - Version line matches the report.
   - Dates in ISO format YYYY-MM-DD.
   - Principles are declarative, testable, and free of vague language ("should" → replace with MUST/SHOULD plus rationale where appropriate).

7. Write the completed constitution back to `.specify/memory/constitution.md` (overwrite).

8. Output a final summary to the user with:
   - New version and bump rationale.
   - Any files flagged for manual follow-up.
   - Suggested commit message (e.g., `docs: amend constitution to vX.Y.Z (principle additions + governance update)`).

Formatting & Style Requirements:

- Use Markdown headings exactly as in the template (do not demote/promote levels).
- Wrap long rationale lines to keep readability (<100 chars ideally) but do not hard-enforce with awkward breaks.
- Keep a single blank line between sections.
- Avoid trailing whitespace.

If the user supplies partial updates (e.g., only one principle revision), still perform the validation and version decision steps.

If critical info is missing (e.g., the ratification date is truly unknown), insert `TODO(<FIELD_NAME>): explanation` and include it in the Sync Impact Report under deferred items.

Do not create a new template; always operate on the existing `.specify/memory/constitution.md` file.
135
.claude/commands/speckit.implement.md
Normal file
@@ -0,0 +1,135 @@
---
description: Execute the implementation plan by processing and executing all tasks defined in tasks.md
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from repo root and parse FEATURE_DIR and the AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. **Check checklists status** (if FEATURE_DIR/checklists/ exists):
   - Scan all checklist files in the checklists/ directory
   - For each checklist, count:
     - Total items: All lines matching `- [ ]` or `- [X]` or `- [x]`
     - Completed items: Lines matching `- [X]` or `- [x]`
     - Incomplete items: Lines matching `- [ ]`
   - Create a status table:

     ```text
     | Checklist   | Total | Completed | Incomplete | Status |
     |-------------|-------|-----------|------------|--------|
     | ux.md       | 12    | 12        | 0          | ✓ PASS |
     | test.md     | 8     | 5         | 3          | ✗ FAIL |
     | security.md | 6     | 6         | 0          | ✓ PASS |
     ```

   - Calculate overall status:
     - **PASS**: All checklists have 0 incomplete items
     - **FAIL**: One or more checklists have incomplete items

   - **If any checklist is incomplete**:
     - Display the table with incomplete item counts
     - **STOP** and ask: "Some checklists are incomplete. Do you want to proceed with implementation anyway? (yes/no)"
     - Wait for the user's response before continuing
     - If the user says "no" or "wait" or "stop", halt execution
     - If the user says "yes" or "proceed" or "continue", proceed to step 3

   - **If all checklists are complete**:
     - Display the table showing all checklists passed
     - Automatically proceed to step 3
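The per-checklist counting in step 2 reduces to three line patterns; a sketch (the `ux.md` content is invented for illustration):

```python
import re

def checklist_status(markdown):
    """Count total/completed/incomplete `- [ ]` / `- [x]` / `- [X]` items."""
    total = len(re.findall(r"^\s*- \[[ xX]\]", markdown, flags=re.M))
    done = len(re.findall(r"^\s*- \[[xX]\]", markdown, flags=re.M))
    status = "PASS" if done == total else "FAIL"
    return {"total": total, "completed": done, "incomplete": total - done, "status": status}

ux = "- [x] Empty states designed\n- [X] Error copy reviewed\n- [ ] Focus order audited\n"
result = checklist_status(ux)
```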
3. Load and analyze the implementation context:
   - **REQUIRED**: Read tasks.md for the complete task list and execution plan
   - **REQUIRED**: Read plan.md for tech stack, architecture, and file structure
   - **IF EXISTS**: Read data-model.md for entities and relationships
   - **IF EXISTS**: Read contracts/ for API specifications and test requirements
   - **IF EXISTS**: Read research.md for technical decisions and constraints
   - **IF EXISTS**: Read quickstart.md for integration scenarios
4. **Project Setup Verification**:
   - **REQUIRED**: Create/verify ignore files based on the actual project setup:

   **Detection & Creation Logic**:
   - Check if the following command succeeds to determine whether the repository is a git repo (create/verify .gitignore if so):

     ```sh
     git rev-parse --git-dir 2>/dev/null
     ```

   - Check if Dockerfile* exists or Docker is in plan.md → create/verify .dockerignore
   - Check if .eslintrc* exists → create/verify .eslintignore
   - Check if eslint.config.* exists → ensure the config's `ignores` entries cover required patterns
   - Check if .prettierrc* exists → create/verify .prettierignore
   - Check if .npmrc or package.json exists → create/verify .npmignore (if publishing)
   - Check if terraform files (*.tf) exist → create/verify .terraformignore
   - Check if .helmignore is needed (helm charts present) → create/verify .helmignore

   **If the ignore file already exists**: Verify it contains essential patterns; append missing critical patterns only
   **If the ignore file is missing**: Create it with the full pattern set for the detected technology

   **Common Patterns by Technology** (from plan.md tech stack):
   - **Node.js/JavaScript/TypeScript**: `node_modules/`, `dist/`, `build/`, `*.log`, `.env*`
   - **Python**: `__pycache__/`, `*.pyc`, `.venv/`, `venv/`, `dist/`, `*.egg-info/`
   - **Java**: `target/`, `*.class`, `*.jar`, `.gradle/`, `build/`
   - **C#/.NET**: `bin/`, `obj/`, `*.user`, `*.suo`, `packages/`
   - **Go**: `*.exe`, `*.test`, `vendor/`, `*.out`
   - **Ruby**: `.bundle/`, `log/`, `tmp/`, `*.gem`, `vendor/bundle/`
   - **PHP**: `vendor/`, `*.log`, `*.cache`, `*.env`
   - **Rust**: `target/`, `debug/`, `release/`, `*.rs.bk`, `*.rlib`, `*.prof*`, `.idea/`, `*.log`, `.env*`
   - **Kotlin**: `build/`, `out/`, `.gradle/`, `.idea/`, `*.class`, `*.jar`, `*.iml`, `*.log`, `.env*`
   - **C++**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.so`, `*.a`, `*.exe`, `*.dll`, `.idea/`, `*.log`, `.env*`
   - **C**: `build/`, `bin/`, `obj/`, `out/`, `*.o`, `*.a`, `*.so`, `*.exe`, `autom4te.cache/`, `config.status`, `config.log`, `.idea/`, `*.log`, `.env*`
   - **Swift**: `.build/`, `DerivedData/`, `*.swiftpm/`, `Packages/`
   - **R**: `.Rproj.user/`, `.Rhistory`, `.RData`, `.Ruserdata`, `*.Rproj`, `packrat/`, `renv/`
   - **Universal**: `.DS_Store`, `Thumbs.db`, `*.tmp`, `*.swp`, `.vscode/`, `.idea/`

   **Tool-Specific Patterns**:
   - **Docker**: `node_modules/`, `.git/`, `Dockerfile*`, `.dockerignore`, `*.log*`, `.env*`, `coverage/`
   - **ESLint**: `node_modules/`, `dist/`, `build/`, `coverage/`, `*.min.js`
   - **Prettier**: `node_modules/`, `dist/`, `build/`, `coverage/`, `package-lock.json`, `yarn.lock`, `pnpm-lock.yaml`
   - **Terraform**: `.terraform/`, `*.tfstate*`, `*.tfvars`, `.terraform.lock.hcl`
   - **Kubernetes/k8s**: `*.secret.yaml`, `secrets/`, `.kube/`, `kubeconfig*`, `*.key`, `*.crt`
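The "append missing critical patterns only" rule can be sketched as a set-difference merge; the Node.js pattern list mirrors the table above, and the sample .gitignore content is invented:

```python
NODE_PATTERNS = ["node_modules/", "dist/", "build/", "*.log", ".env*"]

def merge_ignore(existing, required):
    """Append only required patterns not already present; leave existing lines untouched."""
    present = {line.strip() for line in existing.splitlines()}
    additions = [p for p in required if p not in present]
    if not additions:
        return existing
    return existing.rstrip("\n") + "\n" + "\n".join(additions) + "\n"

gitignore = "node_modules/\n*.log\n"
updated = merge_ignore(gitignore, NODE_PATTERNS)
```

Running the merge twice is a no-op, which matches the verify-then-append behavior described above.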
5. Parse tasks.md structure and extract:
   - **Task phases**: Setup, Tests, Core, Integration, Polish
   - **Task dependencies**: Sequential vs parallel execution rules
   - **Task details**: ID, description, file paths, parallel markers [P]
   - **Execution flow**: Order and dependency requirements

6. Execute implementation following the task plan:
   - **Phase-by-phase execution**: Complete each phase before moving to the next
   - **Respect dependencies**: Run sequential tasks in order; parallel tasks [P] can run together
   - **Follow TDD approach**: Execute test tasks before their corresponding implementation tasks
   - **File-based coordination**: Tasks affecting the same files must run sequentially
   - **Validation checkpoints**: Verify each phase's completion before proceeding

7. Implementation execution rules:
   - **Setup first**: Initialize project structure, dependencies, configuration
   - **Tests before code**: If tests are required, write tests for contracts, entities, and integration scenarios before implementing them
   - **Core development**: Implement models, services, CLI commands, endpoints
   - **Integration work**: Database connections, middleware, logging, external services
   - **Polish and validation**: Unit tests, performance optimization, documentation

8. Progress tracking and error handling:
   - Report progress after each completed task
   - Halt execution if any non-parallel task fails
   - For parallel tasks [P], continue with successful tasks and report failed ones
   - Provide clear error messages with context for debugging
   - Suggest next steps if implementation cannot proceed
   - **IMPORTANT**: For completed tasks, make sure to mark the task off as [X] in the tasks file.

9. Completion validation:
   - Verify all required tasks are completed
   - Check that implemented features match the original specification
   - Validate that tests pass and coverage meets requirements
   - Confirm the implementation follows the technical plan
   - Report final status with a summary of completed work

Note: This command assumes a complete task breakdown exists in tasks.md. If tasks are incomplete or missing, suggest running `/speckit.tasks` first to regenerate the task list.
90
.claude/commands/speckit.plan.md
Normal file
@@ -0,0 +1,90 @@
---
description: Execute the implementation planning workflow using the plan template to generate design artifacts.
handoffs:
- label: Create Tasks
  agent: speckit.tasks
  prompt: Break the plan into tasks
  send: true
- label: Create Checklist
  agent: speckit.checklist
  prompt: Create a checklist for the following domain...
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. **Setup**: Run `.specify/scripts/bash/setup-plan.sh --json` from repo root and parse the JSON for FEATURE_SPEC, IMPL_PLAN, SPECS_DIR, BRANCH. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Load context**: Read FEATURE_SPEC and `.specify/memory/constitution.md`. Load the IMPL_PLAN template (already copied).

3. **Execute plan workflow**: Follow the structure in the IMPL_PLAN template to:
   - Fill Technical Context (mark unknowns as "NEEDS CLARIFICATION")
   - Fill the Constitution Check section from the constitution
   - Evaluate gates (ERROR if violations are unjustified)
   - Phase 0: Generate research.md (resolve all NEEDS CLARIFICATION)
   - Phase 1: Generate data-model.md, contracts/, quickstart.md
   - Phase 1: Update agent context by running the agent script
   - Re-evaluate the Constitution Check post-design

4. **Stop and report**: The command ends after Phase 2 planning. Report branch, IMPL_PLAN path, and generated artifacts.

## Phases

### Phase 0: Outline & Research

1. **Extract unknowns from Technical Context** above:
   - For each NEEDS CLARIFICATION → research task
   - For each dependency → best practices task
   - For each integration → patterns task

2. **Generate and dispatch research agents**:

   ```text
   For each unknown in Technical Context:
     Task: "Research {unknown} for {feature context}"
   For each technology choice:
     Task: "Find best practices for {tech} in {domain}"
   ```

3. **Consolidate findings** in `research.md` using the format:
   - Decision: [what was chosen]
   - Rationale: [why chosen]
   - Alternatives considered: [what else evaluated]

**Output**: research.md with all NEEDS CLARIFICATION resolved

### Phase 1: Design & Contracts

**Prerequisites:** `research.md` complete

1. **Extract entities from feature spec** → `data-model.md`:
   - Entity name, fields, relationships
   - Validation rules from requirements
   - State transitions if applicable

2. **Define interface contracts** (if the project has external interfaces) → `/contracts/`:
   - Identify what interfaces the project exposes to users or other systems
   - Document the contract format appropriate for the project type
   - Examples: public APIs for libraries, command schemas for CLI tools, endpoints for web services, grammars for parsers, UI contracts for applications
   - Skip if the project is purely internal (build scripts, one-off tools, etc.)

3. **Agent context update**:
   - Run `.specify/scripts/bash/update-agent-context.sh claude`
   - These scripts detect which AI agent is in use
   - Update the appropriate agent-specific context file
   - Add only new technology from the current plan
   - Preserve manual additions between markers

**Output**: data-model.md, /contracts/*, quickstart.md, agent-specific file

## Key rules

- Use absolute paths
- ERROR on gate failures or unresolved clarifications
258
.claude/commands/speckit.specify.md
Normal file
@@ -0,0 +1,258 @@
|
|||||||
|
---
|
||||||
|
description: Create or update the feature specification from a natural language feature description.
|
||||||
|
handoffs:
|
||||||
|
- label: Build Technical Plan
|
||||||
|
agent: speckit.plan
|
||||||
|
prompt: Create a plan for the spec. I am building with...
|
||||||
|
- label: Clarify Spec Requirements
|
||||||
|
agent: speckit.clarify
|
||||||
|
prompt: Clarify specification requirements
|
||||||
|
send: true
|
||||||
|
---
|
||||||
|
|
||||||
|
## User Input
|
||||||
|
|
||||||
|
```text
|
||||||
|
$ARGUMENTS
|
||||||
|
```
|
||||||
|
|
||||||
|
You **MUST** consider the user input before proceeding (if not empty).
|
||||||
|
|
||||||
|
## Outline
|
||||||
|
|
||||||
|
The text the user typed after `/speckit.specify` in the triggering message **is** the feature description. Assume you always have it available in this conversation even if `$ARGUMENTS` appears literally below. Do not ask the user to repeat it unless they provided an empty command.
|
||||||
|
|
||||||
|
Given that feature description, do this:
|
||||||
|
|
||||||
|
1. **Generate a concise short name** (2-4 words) for the branch:
|
||||||
|
- Analyze the feature description and extract the most meaningful keywords
|
||||||
|
- Create a 2-4 word short name that captures the essence of the feature
|
||||||
|
- Use action-noun format when possible (e.g., "add-user-auth", "fix-payment-bug")
|
||||||
|
- Preserve technical terms and acronyms (OAuth2, API, JWT, etc.)
|
||||||
|
- Keep it concise but descriptive enough to understand the feature at a glance
|
||||||
|
- Examples:
|
||||||
|
- "I want to add user authentication" → "user-auth"
|
||||||
|
- "Implement OAuth2 integration for the API" → "oauth2-api-integration"
|
||||||
|
- "Create a dashboard for analytics" → "analytics-dashboard"
|
||||||
|
- "Fix payment processing timeout bug" → "fix-payment-timeout"
2. **Check for existing branches before creating new one**:

   a. First, fetch all remote branches to ensure we have the latest information:

      ```bash
      git fetch --all --prune
      ```

   b. Find the highest feature number across all sources for the short-name:
      - Remote branches: `git ls-remote --heads origin | grep -E 'refs/heads/[0-9]+-<short-name>$'`
      - Local branches: `git branch | grep -E '^[* ]*[0-9]+-<short-name>$'`
      - Specs directories: Check for directories matching `specs/[0-9]+-<short-name>`

   c. Determine the next available number:
      - Extract all numbers from all three sources
      - Find the highest number N
      - Use N+1 for the new branch number
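   The numbering in steps b and c can be sketched as one shell helper (an illustrative sketch only, not part of the command; the function name and exact parsing are assumptions):

   ```shell
   # Illustrative helper: compute the next feature number for a given short
   # name by scanning remote branches, local branches, and specs/ directories.
   next_feature_number() {
     short="$1"
     highest=$(
       {
         git ls-remote --heads origin 2>/dev/null | grep -oE "refs/heads/[0-9]+-$short\$" | grep -oE '[0-9]+'
         git branch --format='%(refname:short)' 2>/dev/null | grep -E "^[0-9]+-$short\$" | grep -oE '^[0-9]+'
         ls -d specs/[0-9]*-"$short" 2>/dev/null | grep -oE '[0-9]+'
       } | sort -n | tail -n 1
     )
     # Empty result means no existing branch or directory: start at 1.
     echo $(( ${highest:-0} + 1 ))
   }
   ```

   With no matching branches or directories, the helper returns 1, matching the "start with number 1" rule below.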
   d. Run the script `.specify/scripts/bash/create-new-feature.sh` with the feature description and the calculated number and short-name:
      - Pass `--number N+1` and `--short-name "your-short-name"` along with the feature description
      - Bash example: `.specify/scripts/bash/create-new-feature.sh --json --number 5 --short-name "user-auth" "Add user authentication"`
      - PowerShell example: `.specify/scripts/powershell/create-new-feature.ps1 -Json -Number 5 -ShortName "user-auth" "Add user authentication"`

   **IMPORTANT**:
   - Check all three sources (remote branches, local branches, specs directories) to find the highest number
   - Only match branches/directories with the exact short-name pattern
   - If no existing branches/directories are found with this short-name, start with number 1
   - You must only ever run this script once per feature
   - The JSON is provided in the terminal as output - always refer to it to get the actual content you're looking for
   - The JSON output will contain BRANCH_NAME and SPEC_FILE paths
   - For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot")
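   The escape syntax above can be checked directly in a shell; both spellings produce the same argument:

   ```shell
   # Close the single quote, insert an escaped quote, reopen: 'I'\''m Groot'.
   desc='I'\''m Groot'
   # Or simply use double quotes around the whole argument.
   alt="I'm Groot"
   [ "$desc" = "$alt" ] && echo "both spellings produce: $desc"
   ```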
3. Load `.specify/templates/spec-template.md` to understand required sections.

4. Follow this execution flow:

   1. Parse user description from Input
      If empty: ERROR "No feature description provided"
   2. Extract key concepts from description
      Identify: actors, actions, data, constraints
   3. For unclear aspects:
      - Make informed guesses based on context and industry standards
      - Only mark with [NEEDS CLARIFICATION: specific question] if:
        - The choice significantly impacts feature scope or user experience
        - Multiple reasonable interpretations exist with different implications
        - No reasonable default exists
      - **LIMIT: Maximum 3 [NEEDS CLARIFICATION] markers total**
      - Prioritize clarifications by impact: scope > security/privacy > user experience > technical details
   4. Fill User Scenarios & Testing section
      If no clear user flow: ERROR "Cannot determine user scenarios"
   5. Generate Functional Requirements
      Each requirement must be testable
      Use reasonable defaults for unspecified details (document assumptions in Assumptions section)
   6. Define Success Criteria
      Create measurable, technology-agnostic outcomes
      Include both quantitative metrics (time, performance, volume) and qualitative measures (user satisfaction, task completion)
      Each criterion must be verifiable without implementation details
   7. Identify Key Entities (if data involved)
   8. Return: SUCCESS (spec ready for planning)

5. Write the specification to SPEC_FILE using the template structure, replacing placeholders with concrete details derived from the feature description (arguments) while preserving section order and headings.
6. **Specification Quality Validation**: After writing the initial spec, validate it against quality criteria:

   a. **Create Spec Quality Checklist**: Generate a checklist file at `FEATURE_DIR/checklists/requirements.md` using the checklist template structure with these validation items:

      ```markdown
      # Specification Quality Checklist: [FEATURE NAME]

      **Purpose**: Validate specification completeness and quality before proceeding to planning
      **Created**: [DATE]
      **Feature**: [Link to spec.md]

      ## Content Quality

      - [ ] No implementation details (languages, frameworks, APIs)
      - [ ] Focused on user value and business needs
      - [ ] Written for non-technical stakeholders
      - [ ] All mandatory sections completed

      ## Requirement Completeness

      - [ ] No [NEEDS CLARIFICATION] markers remain
      - [ ] Requirements are testable and unambiguous
      - [ ] Success criteria are measurable
      - [ ] Success criteria are technology-agnostic (no implementation details)
      - [ ] All acceptance scenarios are defined
      - [ ] Edge cases are identified
      - [ ] Scope is clearly bounded
      - [ ] Dependencies and assumptions identified

      ## Feature Readiness

      - [ ] All functional requirements have clear acceptance criteria
      - [ ] User scenarios cover primary flows
      - [ ] Feature meets measurable outcomes defined in Success Criteria
      - [ ] No implementation details leak into specification

      ## Notes

      - Items marked incomplete require spec updates before `/speckit.clarify` or `/speckit.plan`
      ```
   b. **Run Validation Check**: Review the spec against each checklist item:
      - For each item, determine if it passes or fails
      - Document specific issues found (quote relevant spec sections)

   c. **Handle Validation Results**:

      - **If all items pass**: Mark the checklist complete and proceed to step 7

      - **If items fail (excluding [NEEDS CLARIFICATION])**:
        1. List the failing items and specific issues
        2. Update the spec to address each issue
        3. Re-run validation until all items pass (max 3 iterations)
        4. If still failing after 3 iterations, document remaining issues in checklist notes and warn the user

      - **If [NEEDS CLARIFICATION] markers remain**:
        1. Extract all [NEEDS CLARIFICATION: ...] markers from the spec
        2. **LIMIT CHECK**: If more than 3 markers exist, keep only the 3 most critical (by scope/security/UX impact) and make informed guesses for the rest
        3. For each clarification needed (max 3), present options to the user in this format:

           ```markdown
           ## Question [N]: [Topic]

           **Context**: [Quote relevant spec section]

           **What we need to know**: [Specific question from NEEDS CLARIFICATION marker]

           **Suggested Answers**:

           | Option | Answer | Implications |
           |--------|--------|--------------|
           | A | [First suggested answer] | [What this means for the feature] |
           | B | [Second suggested answer] | [What this means for the feature] |
           | C | [Third suggested answer] | [What this means for the feature] |
           | Custom | Provide your own answer | [Explain how to provide custom input] |

           **Your choice**: _[Wait for user response]_
           ```

        4. **CRITICAL - Table Formatting**: Ensure markdown tables are properly formatted:
           - Use consistent spacing with pipes aligned
           - Each cell should have spaces around content: `| Content |` not `|Content|`
           - Header separator must have at least 3 dashes: `|--------|`
           - Test that the table renders correctly in markdown preview
        5. Number questions sequentially (Q1, Q2, Q3 - max 3 total)
        6. Present all questions together before waiting for responses
        7. Wait for the user to respond with their choices for all questions (e.g., "Q1: A, Q2: Custom - [details], Q3: B")
        8. Update the spec by replacing each [NEEDS CLARIFICATION] marker with the user's selected or provided answer
        9. Re-run validation after all clarifications are resolved

   d. **Update Checklist**: After each validation iteration, update the checklist file with the current pass/fail status
7. Report completion with branch name, spec file path, checklist results, and readiness for the next phase (`/speckit.clarify` or `/speckit.plan`).

**NOTE:** The script creates and checks out the new branch and initializes the spec file before writing.

## General Guidelines

- Focus on **WHAT** users need and **WHY**.
- Avoid HOW to implement (no tech stack, APIs, code structure).
- Write for business stakeholders, not developers.
- DO NOT create any checklists that are embedded in the spec. That will be a separate command.

### Section Requirements

- **Mandatory sections**: Must be completed for every feature
- **Optional sections**: Include only when relevant to the feature
- When a section doesn't apply, remove it entirely (don't leave as "N/A")
### For AI Generation

When creating this spec from a user prompt:

1. **Make informed guesses**: Use context, industry standards, and common patterns to fill gaps
2. **Document assumptions**: Record reasonable defaults in the Assumptions section
3. **Limit clarifications**: Maximum 3 [NEEDS CLARIFICATION] markers - use only for critical decisions that:
   - Significantly impact feature scope or user experience
   - Have multiple reasonable interpretations with different implications
   - Lack any reasonable default
4. **Prioritize clarifications**: scope > security/privacy > user experience > technical details
5. **Think like a tester**: Every vague requirement should fail the "testable and unambiguous" checklist item
6. **Common areas needing clarification** (only if no reasonable default exists):
   - Feature scope and boundaries (include/exclude specific use cases)
   - User types and permissions (if multiple conflicting interpretations are possible)
   - Security/compliance requirements (when legally/financially significant)

**Examples of reasonable defaults** (don't ask about these):

- Data retention: Industry-standard practices for the domain
- Performance targets: Standard web/mobile app expectations unless specified
- Error handling: User-friendly messages with appropriate fallbacks
- Authentication method: Standard session-based or OAuth2 for web apps
- Integration patterns: Use project-appropriate patterns (REST/GraphQL for web services, function calls for libraries, CLI args for tools, etc.)

### Success Criteria Guidelines

Success criteria must be:

1. **Measurable**: Include specific metrics (time, percentage, count, rate)
2. **Technology-agnostic**: No mention of frameworks, languages, databases, or tools
3. **User-focused**: Describe outcomes from the user/business perspective, not system internals
4. **Verifiable**: Can be tested/validated without knowing implementation details

**Good examples**:

- "Users can complete checkout in under 3 minutes"
- "System supports 10,000 concurrent users"
- "95% of searches return results in under 1 second"
- "Task completion rate improves by 40%"

**Bad examples** (implementation-focused):

- "API response time is under 200ms" (too technical; use "Users see results instantly")
- "Database can handle 1000 TPS" (implementation detail; use a user-facing metric)
- "React components render efficiently" (framework-specific)
- "Redis cache hit rate above 80%" (technology-specific)
137
.claude/commands/speckit.tasks.md
Normal file
@@ -0,0 +1,137 @@
---
description: Generate an actionable, dependency-ordered tasks.md for the feature based on available design artifacts.
handoffs:
  - label: Analyze For Consistency
    agent: speckit.analyze
    prompt: Run a project analysis for consistency
    send: true
  - label: Implement Project
    agent: speckit.implement
    prompt: Start the implementation in phases
    send: true
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. **Setup**: Run `.specify/scripts/bash/check-prerequisites.sh --json` from the repo root and parse FEATURE_DIR and the AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").

2. **Load design documents**: Read from FEATURE_DIR:
   - **Required**: plan.md (tech stack, libraries, structure), spec.md (user stories with priorities)
   - **Optional**: data-model.md (entities), contracts/ (interface contracts), research.md (decisions), quickstart.md (test scenarios)
   - Note: Not all projects have all documents. Generate tasks based on what's available.

3. **Execute task generation workflow**:
   - Load plan.md and extract tech stack, libraries, project structure
   - Load spec.md and extract user stories with their priorities (P1, P2, P3, etc.)
   - If data-model.md exists: Extract entities and map them to user stories
   - If contracts/ exists: Map interface contracts to user stories
   - If research.md exists: Extract decisions for setup tasks
   - Generate tasks organized by user story (see Task Generation Rules below)
   - Generate a dependency graph showing user story completion order
   - Create parallel execution examples per user story
   - Validate task completeness (each user story has all needed tasks, independently testable)

4. **Generate tasks.md**: Use `.specify/templates/tasks-template.md` as the structure, filled with:
   - Correct feature name from plan.md
   - Phase 1: Setup tasks (project initialization)
   - Phase 2: Foundational tasks (blocking prerequisites for all user stories)
   - Phase 3+: One phase per user story (in priority order from spec.md)
   - Each phase includes: story goal, independent test criteria, tests (if requested), implementation tasks
   - Final Phase: Polish & cross-cutting concerns
   - All tasks must follow the strict checklist format (see Task Generation Rules below)
   - Clear file paths for each task
   - Dependencies section showing story completion order
   - Parallel execution examples per story
   - Implementation strategy section (MVP first, incremental delivery)

5. **Report**: Output the path to the generated tasks.md and a summary:
   - Total task count
   - Task count per user story
   - Parallel opportunities identified
   - Independent test criteria for each story
   - Suggested MVP scope (typically just User Story 1)
   - Format validation: Confirm ALL tasks follow the checklist format (checkbox, ID, labels, file paths)

Context for task generation: $ARGUMENTS

The tasks.md should be immediately executable - each task must be specific enough that an LLM can complete it without additional context.
## Task Generation Rules

**CRITICAL**: Tasks MUST be organized by user story to enable independent implementation and testing.

**Tests are OPTIONAL**: Only generate test tasks if explicitly requested in the feature specification or if the user requests a TDD approach.

### Checklist Format (REQUIRED)

Every task MUST strictly follow this format:

```text
- [ ] [TaskID] [P?] [Story?] Description with file path
```

**Format Components**:

1. **Checkbox**: ALWAYS start with `- [ ]` (markdown checkbox)
2. **Task ID**: Sequential number (T001, T002, T003...) in execution order
3. **[P] marker**: Include ONLY if the task is parallelizable (different files, no dependencies on incomplete tasks)
4. **[Story] label**: REQUIRED for user story phase tasks only
   - Format: [US1], [US2], [US3], etc. (maps to user stories from spec.md)
   - Setup phase: NO story label
   - Foundational phase: NO story label
   - User Story phases: MUST have story label
   - Polish phase: NO story label
5. **Description**: Clear action with exact file path

**Examples**:

- ✅ CORRECT: `- [ ] T001 Create project structure per implementation plan`
- ✅ CORRECT: `- [ ] T005 [P] Implement authentication middleware in src/middleware/auth.py`
- ✅ CORRECT: `- [ ] T012 [P] [US1] Create User model in src/models/user.py`
- ✅ CORRECT: `- [ ] T014 [US1] Implement UserService in src/services/user_service.py`
- ❌ WRONG: `- [ ] Create User model` (missing ID and Story label)
- ❌ WRONG: `T001 [US1] Create model` (missing checkbox)
- ❌ WRONG: `- [ ] [US1] Create User model` (missing Task ID)
- ❌ WRONG: `- [ ] T001 [US1] Create model` (missing file path)
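The format components above can be condensed into a single check (an illustrative sketch; the function name and regex are assumptions derived from the rules, not part of the command):

```shell
# Sketch: check one tasks.md line against the required checklist format:
# checkbox, three-digit T-ID, optional [P], optional [USn], then a description.
is_valid_task() {
  echo "$1" | grep -qE '^- \[ \] T[0-9]{3}( \[P\])?( \[US[0-9]+\])? .+'
}
```

For example, `is_valid_task '- [ ] T012 [P] [US1] Create User model in src/models/user.py'` succeeds, while the WRONG examples above fail the check (note: the regex cannot verify that a file path is present, only that a description follows the labels).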
### Task Organization

1. **From User Stories (spec.md)** - PRIMARY ORGANIZATION:
   - Each user story (P1, P2, P3...) gets its own phase
   - Map all related components to their story:
     - Models needed for that story
     - Services needed for that story
     - Interfaces/UI needed for that story
     - If tests requested: Tests specific to that story
   - Mark story dependencies (most stories should be independent)

2. **From Contracts**:
   - Map each interface contract to the user story it serves
   - If tests requested: Each interface contract → contract test task [P] before implementation in that story's phase

3. **From Data Model**:
   - Map each entity to the user story(ies) that need it
   - If an entity serves multiple stories: Put it in the earliest story or the Setup phase
   - Relationships → service layer tasks in the appropriate story phase

4. **From Setup/Infrastructure**:
   - Shared infrastructure → Setup phase (Phase 1)
   - Foundational/blocking tasks → Foundational phase (Phase 2)
   - Story-specific setup → within that story's phase

### Phase Structure

- **Phase 1**: Setup (project initialization)
- **Phase 2**: Foundational (blocking prerequisites - MUST complete before user stories)
- **Phase 3+**: User Stories in priority order (P1, P2, P3...)
  - Within each story: Tests (if requested) → Models → Services → Endpoints → Integration
  - Each phase should be a complete, independently testable increment
- **Final Phase**: Polish & Cross-Cutting Concerns
30
.claude/commands/speckit.taskstoissues.md
Normal file
@@ -0,0 +1,30 @@
---
description: Convert existing tasks into actionable, dependency-ordered GitHub issues for the feature based on available design artifacts.
tools: ['github/github-mcp-server/issue_write']
---

## User Input

```text
$ARGUMENTS
```

You **MUST** consider the user input before proceeding (if not empty).

## Outline

1. Run `.specify/scripts/bash/check-prerequisites.sh --json --require-tasks --include-tasks` from the repo root and parse FEATURE_DIR and the AVAILABLE_DOCS list. All paths must be absolute. For single quotes in args like "I'm Groot", use escape syntax: e.g. 'I'\''m Groot' (or double-quote if possible: "I'm Groot").
2. From the executed script, extract the path to **tasks**.
3. Get the Git remote by running:

   ```bash
   git config --get remote.origin.url
   ```

   > [!CAUTION]
   > ONLY PROCEED TO NEXT STEPS IF THE REMOTE IS A GITHUB URL

4. For each task in the list, use the GitHub MCP server to create a new issue in the repository that the Git remote represents.

   > [!CAUTION]
   > UNDER NO CIRCUMSTANCES EVER CREATE ISSUES IN REPOSITORIES THAT DO NOT MATCH THE REMOTE URL
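The GitHub-remote guard above can be sketched as a small predicate (illustrative only; the function name is an assumption):

```shell
# Sketch: accept only remotes that point at github.com, in either SSH
# (git@github.com:org/repo.git) or HTTPS (https://github.com/org/repo) form.
is_github_remote() {
  case "$1" in
    *github.com[:/]*) return 0 ;;
    *) return 1 ;;
  esac
}
```

A wrapper would call it as `is_github_remote "$(git config --get remote.origin.url)" || exit 1` before creating any issues.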
@@ -16,7 +16,7 @@ cd "$CLAUDE_PROJECT_DIR/frontend"
 ERRORS=""
 
 # Type-check
-if OUTPUT=$(npx vue-tsc --noEmit 2>&1); then
+if OUTPUT=$(npm run type-check 2>&1); then
   :
 else
   ERRORS+="Type-check failed:\n$OUTPUT\n\n"
@@ -7,10 +7,10 @@ jobs:
   backend-test:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6
 
       - name: Set up JDK 25
-        uses: actions/setup-java@v4
+        uses: actions/setup-java@v5
         with:
           distribution: temurin
           java-version: 25
@@ -21,10 +21,10 @@ jobs:
   frontend-test:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6
 
       - name: Set up Node 24
-        uses: actions/setup-node@v4
+        uses: actions/setup-node@v6
         with:
           node-version: 24
 
@@ -49,10 +49,10 @@ jobs:
   frontend-e2e:
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6
 
       - name: Set up Node 24
-        uses: actions/setup-node@v4
+        uses: actions/setup-node@v6
         with:
           node-version: 24
 
@@ -66,7 +66,7 @@ jobs:
         run: cd frontend && npm run test:e2e
 
       - name: Upload Playwright report
-        uses: actions/upload-artifact@v4
+        uses: actions/upload-artifact@v7
         if: ${{ !cancelled() }}
         with:
           name: playwright-report
@@ -78,7 +78,7 @@ jobs:
     if: startsWith(github.ref, 'refs/tags/') && contains(github.ref_name, '.')
     runs-on: ubuntu-latest
     steps:
-      - uses: actions/checkout@v4
+      - uses: actions/checkout@v6
 
       - name: Parse SemVer tag
         id: semver
4
.gitignore
vendored
@@ -12,6 +12,10 @@ Thumbs.db
 .mcp.json
 .rodney/
 .agent-tests/
+.ralph/*/iteration-*.jsonl
+
+# Test results (Playwright artifacts)
+test-results/
 
 # Java/Maven
 *.class
3
.ralph/speckit-migration/answers.md
Normal file
@@ -0,0 +1,3 @@
# Answers

<!-- Human answers to open questions. Ralph processes these one per iteration. -->
11
.ralph/speckit-migration/chief-wiggum.md
Normal file
@@ -0,0 +1,11 @@
# Chief Wiggum's Notes

<!-- This file is written by the Chief Wiggum session. Ralph reads it but never modifies it. -->

## Action Required

(No action items.)

## Observations

(No observations.)
298
.ralph/speckit-migration/instructions.md
Normal file
@@ -0,0 +1,298 @@
# Ralph Loop — Migrate project artifacts to spec-kit format

Ralph migrates existing project documentation from `spec/`, `docs/agents/`, and `CLAUDE.md` into the spec-kit directory structure. Cross-cutting project knowledge goes to `.specify/memory/`. Feature-specific docs go to `specs/[feature-name]/` with `spec.md`, `research.md`, and `plan.md` files following the spec-kit templates.

Each iteration migrates exactly ONE file. No batching.

## CRITICAL RULE: One Task Per Iteration

You MUST perform exactly ONE task per iteration. Not two, not "a few small ones", not "all remaining items". ONE.

After completing your single task:

1. Append a short summary of what you did to `{{RUN_DIR}}/progress.txt`.
2. Stop. Do not look for more work. Do not "while I'm at it" anything.

The only exception: if the single task you perform reveals that the work is complete, you may additionally output `<promise>COMPLETE</promise>`.

## Startup: Read Project State

At the start of every iteration, read these files in order:

1. `{{RUN_DIR}}/progress.txt` — what previous iterations did (your memory across iterations).
2. `{{RUN_DIR}}/chief-wiggum.md` — notes from Chief Wiggum. Items under `## Action Required` have highest priority.
3. `{{RUN_DIR}}/answers.md` — check if the human answered any open questions.
4. `{{RUN_DIR}}/questions.md` — open and resolved questions.
5. `CLAUDE.md` — project statutes and principles.
6. `.specify/memory/constitution.md` — the project constitution (already migrated).
7. `.specify/templates/spec-template.md` — target format for feature specs.

Do NOT read all source files upfront. Only read the specific source file needed for the current task.

## Task Selection (Priority Order)

Pick the FIRST applicable task from this list. Do that ONE task, then stop.

### Priority 1: Chief Wiggum action items

If `{{RUN_DIR}}/chief-wiggum.md` has items under `## Action Required`, address the FIRST one that hasn't been addressed yet (check `{{RUN_DIR}}/progress.txt`). Do NOT modify `{{RUN_DIR}}/chief-wiggum.md`.

### Priority 2: Process answers

If `{{RUN_DIR}}/answers.md` contains an answer, process it. Remove the processed entry from `{{RUN_DIR}}/answers.md`.

### Priority 3: Execute next migration task

Check `{{RUN_DIR}}/progress.txt` to see which tasks are already done. Find the first task from the Migration Task List (below) that has NOT been logged as completed in progress.txt. Execute that one task, then stop.
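Priority 3's "first unlogged task" selection can be sketched as follows (illustrative only; the function name is an assumption, and the IDs mirror the Migration Task List):

```shell
# Sketch: return the first migration task ID that progress.txt does not
# mention yet; return nonzero when every task has been logged.
next_task() {
  progress="$1"
  for id in A0 A1 A2 A3 B1 B2 B3 B4 B5 B6 B7 C1 C2 D1; do
    grep -qw "$id" "$progress" 2>/dev/null || { echo "$id"; return 0; }
  done
  return 1
}
```

If `progress.txt` logs `A0` and `A1`, the sketch selects `A2`; a missing or empty file selects `A0`.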
### Priority 4: Update CLAUDE.md references

After all migration tasks are complete, update path references in `CLAUDE.md` to point to the new locations. Specifically:

- Change `spec/design-system.md` references to `.specify/memory/design-system.md`
- Change `spec/` references to `specs/` or `.specify/memory/` as appropriate
- Change `docs/agents/research/` and `docs/agents/plan/` references to `specs/[feature]/`
- Update the "Agent Documentation" section to reflect the new structure
- Do NOT change project statutes, build commands, or anything not related to file paths

### Priority 5: Cleanup

After CLAUDE.md is updated, delete the now-empty old directories and migrated files:

- Remove `Ideen.md` (migrated to `.specify/memory/ideen.md`)
- Remove the `spec/` directory (all content has been migrated)
- Remove the `docs/agents/` directory (all content has been migrated)
- Do NOT remove `docs/` itself if other content exists there

### Priority 6: Complete

If all migration tasks, the CLAUDE.md update, and cleanup are done:
Output `<promise>COMPLETE</promise>` and stop.

---
## Migration Task List
|
||||||
|
|
||||||
|
Execute these in order, one per iteration. For each task, read the source file, transform it to the target format, and write the result.
|
||||||
|
|
||||||
|
### Phase A: Cross-cutting project docs to `.specify/memory/`
|
||||||
|
|
||||||
|
These files contain project-level knowledge that isn't specific to one feature. Copy them to `.specify/memory/` with minimal reformatting. Preserve all content — do not summarize or cut.
|
||||||
|
|
||||||
|
**A0** — `Ideen.md` -> `.specify/memory/ideen.md`
|
||||||
|
Copy as-is. This is the original brainstorming/founding document of the project (in German). Preserve completely.
|
||||||
|
|
||||||
|
**A1** — `spec/personas.md` -> `.specify/memory/personas.md`
|
||||||
|
Copy as-is. No format changes needed.
|
||||||
|
|
||||||
|
**A2** — `spec/design-system.md` -> `.specify/memory/design-system.md`
|
||||||
|
Copy as-is. No format changes needed.
|
||||||
|
|
||||||
|
**A3** — `spec/implementation-phases.md` -> `.specify/memory/implementation-phases.md`
|
||||||
|
Copy as-is. No format changes needed.
|
||||||
|
|
||||||
|
### Phase B: Cross-cutting research to `.specify/memory/research/`
|
||||||
|
|
||||||
|
These research docs cover topics that apply across multiple features. Move them with date prefix removed from filename.
|
||||||
|
|
||||||
|
**B1** — `docs/agents/research/2026-03-04-backpressure-agentic-coding.md` -> `.specify/memory/research/backpressure-agentic-coding.md`

Copy as-is.

**B2** — `docs/agents/research/2026-03-04-api-first-approach.md` -> `.specify/memory/research/api-first-approach.md`

Copy as-is.

**B3** — `docs/agents/research/2026-03-04-datetime-best-practices.md` -> `.specify/memory/research/datetime-best-practices.md`

Copy as-is.

**B4** — `docs/agents/research/2026-03-04-rfc9457-problem-details.md` -> `.specify/memory/research/rfc9457-problem-details.md`

Copy as-is.

**B5** — `docs/agents/research/2026-03-04-sans-serif-fonts.md` -> `.specify/memory/research/sans-serif-fonts.md`

Copy as-is.

**B6** — `docs/agents/research/2026-03-04-openapi-validation-pipeline.md` -> `.specify/memory/research/openapi-validation-pipeline.md`

Copy as-is.

**B7** — `docs/agents/research/2026-03-05-e2e-testing-playwright-vue3.md` -> `.specify/memory/research/e2e-testing-playwright.md`

Copy as-is.

### Phase C: Cross-cutting plans to `.specify/memory/plans/`

**C1** — `docs/agents/plan/2026-03-04-backpressure-agentic-coding.md` -> `.specify/memory/plans/backpressure-agentic-coding.md`

Copy as-is.

**C2** — `docs/agents/plan/2026-03-05-e2e-testing-playwright-setup.md` -> `.specify/memory/plans/e2e-testing-playwright-setup.md`

Copy as-is.

### Phase D: Setup tasks — extract specs + relocate research/plans

Each setup task from `spec/setup-tasks.md` becomes its own feature directory under `specs/`. Read the setup-tasks.md file, extract the section for the specific task, and reformat it into a `spec.md` following the spec-kit template structure. Then relocate the corresponding research and plan docs.

For the spec.md conversion: map the existing acceptance criteria to the spec-kit "User Scenarios & Testing" section as acceptance scenarios. Map the task description to "Requirements". Keep all checkmark states (completed criteria). Add a note at the top indicating this is a setup task (infrastructure), not a user-facing feature.
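
One way to pull a single task's section out of the source file is to scan for its heading and stop at the next heading of the same or higher level. A minimal sketch, assuming each task sits under its own `##` heading whose text starts with the task ID (the real file's heading levels may differ):

```python
import re

def extract_section(markdown: str, heading_prefix: str) -> str:
    """Return the block starting at the heading whose text begins with
    heading_prefix, up to (excluding) the next same-or-higher heading."""
    out: list[str] = []
    level = 0
    capturing = False
    for line in markdown.splitlines():
        m = re.match(r"^(#+)\s+(.*)", line)
        if m:
            if capturing and len(m.group(1)) <= level:
                break  # reached the next task's heading
            if not capturing and m.group(2).startswith(heading_prefix):
                capturing, level = True, len(m.group(1))
        if capturing:
            out.append(line)
    return "\n".join(out)
```

The extracted text is then reformatted by hand into the spec-kit template; the helper only isolates the right slice of the source.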
**D1** — Extract T-1 from `spec/setup-tasks.md` -> `specs/t-01-monorepo-setup/spec.md`

Read `spec/setup-tasks.md`, extract the T-1 section, reformat to spec-kit spec template.

**D2** — `docs/agents/research/2026-03-04-t1-monorepo-setup.md` -> `specs/t-01-monorepo-setup/research.md`

Copy as-is.

**D3** — `docs/agents/plan/2026-03-04-t1-monorepo-setup.md` -> `specs/t-01-monorepo-setup/plan.md`

Copy as-is.

**D4** — Extract T-2 from `spec/setup-tasks.md` -> `specs/t-02-docker-deployment/spec.md`

**D5** — `docs/agents/research/2026-03-04-spa-springboot-docker-patterns.md` -> `specs/t-02-docker-deployment/research.md`

Copy as-is.

**D6** — `docs/agents/plan/2026-03-04-t2-docker-deployment.md` -> `specs/t-02-docker-deployment/plan.md`

Copy as-is.

**D7** — Extract T-3 from `spec/setup-tasks.md` -> `specs/t-03-cicd-pipeline/spec.md`

**D8** — `docs/agents/research/2026-03-04-t3-cicd-pipeline.md` -> `specs/t-03-cicd-pipeline/research.md`

Copy as-is.

**D9** — `docs/agents/plan/2026-03-04-t3-cicd-pipeline.md` -> `specs/t-03-cicd-pipeline/plan.md`

Copy as-is.

**D10** — Extract T-4 from `spec/setup-tasks.md` -> `specs/t-04-dev-infrastructure/spec.md`

**D11** — `docs/agents/research/2026-03-04-t4-development-infrastructure.md` -> `specs/t-04-dev-infrastructure/research.md`

Copy as-is.

**D12** — `docs/agents/plan/2026-03-04-t4-development-infrastructure.md` -> `specs/t-04-dev-infrastructure/plan.md`

Copy as-is.

**D13** — Extract T-5 from `spec/setup-tasks.md` -> `specs/t-05-jpa-database/spec.md`

T-5 may not have research/plan docs yet. Only create the spec.md.
### Phase E: User stories — extract specs

Each user story from `spec/userstories.md` becomes its own feature directory. Read the userstories.md file, extract the section for the specific user story, and reformat it into a `spec.md` following the spec-kit template structure. Map acceptance criteria to acceptance scenarios. Map the story description to requirements. Preserve all checkmark states.

**E1** — Extract US-1 from `spec/userstories.md` -> `specs/us-01-create-event/spec.md`

**E2** — `docs/agents/research/2026-03-04-us1-create-event.md` -> `specs/us-01-create-event/research.md`

Copy as-is.

**E3** — `docs/agents/plan/2026-03-04-us1-create-event.md` -> `specs/us-01-create-event/plan.md`

Copy as-is.

**E4** — `docs/agents/plan/2026-03-05-us1-review-fixes.md` -> `specs/us-01-create-event/plan-review-fixes.md`

Copy as-is. This is a supplementary plan doc for US-1.

**E5** — `docs/agents/plan/2026-03-05-us1-post-review-fixes.md` -> `specs/us-01-create-event/plan-post-review-fixes.md`

Copy as-is. This is a supplementary plan doc for US-1.

**E6** — Extract US-2 from `spec/userstories.md` -> `specs/us-02-view-event/spec.md`

**E7** — Extract US-3 from `spec/userstories.md` -> `specs/us-03-rsvp/spec.md`

**E8** — Extract US-4 from `spec/userstories.md` -> `specs/us-04-guest-list/spec.md`

**E9** — Extract US-5 from `spec/userstories.md` -> `specs/us-05-edit-event/spec.md`

**E10** — Extract US-6 from `spec/userstories.md` -> `specs/us-06-calendar-export/spec.md`

**E11** — Extract US-7 from `spec/userstories.md` -> `specs/us-07-local-event-overview/spec.md`

**E12** — Extract US-8 from `spec/userstories.md` -> `specs/us-08-comments/spec.md`

**E13** — Extract US-9 from `spec/userstories.md` -> `specs/us-09-reminders/spec.md`

**E14** — Extract US-10a from `spec/userstories.md` -> `specs/us-10a-organizer-updates/spec.md`

**E15** — Extract US-10b from `spec/userstories.md` -> `specs/us-10b-guest-notifications/spec.md`

**E16** — Extract US-11 from `spec/userstories.md` -> `specs/us-11-qr-code/spec.md`

**E17** — Extract US-12 from `spec/userstories.md` -> `specs/us-12-recurring-events/spec.md`

**E18** — Extract US-13 from `spec/userstories.md` -> `specs/us-13-plus-one/spec.md`

**E19** — Extract US-14 from `spec/userstories.md` -> `specs/us-14-waitlist/spec.md`

**E20** — Extract US-15 from `spec/userstories.md` -> `specs/us-15-color-themes/spec.md`

**E21** — Extract US-16 from `spec/userstories.md` -> `specs/us-16-header-image/spec.md`

**E22** — Extract US-17 from `spec/userstories.md` -> `specs/us-17-dark-mode/spec.md`

**E23** — Extract US-18 from `spec/userstories.md` -> `specs/us-18-cancel-event/spec.md`

**E24** — Extract US-19 from `spec/userstories.md` -> `specs/us-19-delete-event/spec.md`
---

## Spec Conversion Guidelines

When converting a user story or setup task to a spec-kit `spec.md`, use this structure:

```markdown
# Feature Specification: [Title]

**Feature**: `[id-kebab-case]`
**Created**: 2026-03-06
**Status**: [Draft | Approved | Implemented]
**Source**: Migrated from spec/userstories.md (or spec/setup-tasks.md)

## User Scenarios & Testing

### User Story 1 - [Title] (Priority: P1)

[Story description from the original]

**Acceptance Scenarios**:

1. **Given** ..., **When** ..., **Then** ...
[Map each acceptance criterion to a Given/When/Then scenario]

### Edge Cases

[Extract from original if present, otherwise mark as [NEEDS EXPANSION]]

## Requirements

### Functional Requirements

[Map the story's requirements and acceptance criteria to FR-XXX items]

## Success Criteria

[Derive from acceptance criteria. Mark as [NEEDS EXPANSION] if not obvious]
```

For setup tasks that are already completed (all criteria checked off), set Status to `Implemented`.

For user stories not yet started, set Status to `Draft`.

For US-1 (in progress), set Status to `Approved`.
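
The status rules reduce to a checkmark count, with one manual exception: US-1 is `Approved` even though none of its criteria are checked yet, so the in-progress case cannot be derived from checkmarks alone. A sketch of the mechanical part, assuming criteria are standard markdown task-list lines:

```python
def spec_status(criteria: list[str]) -> str:
    """Derive the spec-kit Status from checkmark states.
    In-progress stories (like US-1) must be overridden by hand."""
    checked = sum(1 for c in criteria if c.lstrip().startswith("- [x]"))
    if criteria and checked == len(criteria):
        return "Implemented"  # completed setup tasks: everything checked
    return "Draft"            # not yet started (or needs manual override)
```

Applying this to the five setup tasks yields `Implemented` across the board, matching the guideline above.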
## File Ownership

Respect these boundaries strictly:

| File | Owner | You may... |
|------|-------|------------|
| `{{RUN_DIR}}/progress.txt` | Ralph | Read and append |
| `{{RUN_DIR}}/questions.md` | Ralph | Read and write |
| `{{RUN_DIR}}/answers.md` | Human | **Read only.** Only remove entries you have already processed. |
| `{{RUN_DIR}}/chief-wiggum.md` | Chief Wiggum | **Read only.** Never modify. |
| `CLAUDE.md` | Ralph (Priority 4 only) | Read always. Write ONLY during Priority 4 (update references). |
| `Ideen.md` | Source | **Read only.** Never modify original. |
| `spec/*` | Source | **Read only.** Never modify originals. |
| `docs/agents/**` | Source | **Read only.** Never modify originals. |
| `.specify/memory/**` | Ralph | Read and write (create new files). |
| `.specify/templates/**` | System | **Read only.** |
| `specs/**` | Ralph | Read and write (create new files). |
## Handling Uncertainty

If you encounter ambiguity (e.g. a user story doesn't map cleanly to the spec-kit template, or you're unsure where a cross-cutting research doc belongs), add a question to `{{RUN_DIR}}/questions.md` in this format:

```markdown
## Open

### Q[N]: [Short question]
**Context**: [What you were trying to do]
**Options**: [A] ... [B] ...
```

Then skip that task and move to the next one. Come back to skipped tasks after the question is answered.

## Rules

- NEVER delete or modify source files (`spec/`, `docs/agents/`). Only create new files. Cleanup happens in Priority 5 after everything is verified.
- NEVER invent content. If the original doesn't have enough detail for a section, mark it `[NEEDS EXPANSION]`.
- ALWAYS preserve acceptance criteria checkmark states (`- [x]` vs `- [ ]`).
- Keep file content in English (project statute).
- Do not add emojis.
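
The checkmark-preservation rule is easy to verify mechanically: collect the ordered `[x]`/`[ ]` states from the source section and from the migrated spec, then compare. A minimal helper, assuming both files use standard markdown task-list syntax:

```python
def checkmark_states(markdown: str) -> list[bool]:
    """Ordered checked/unchecked state of every task-list item."""
    return [line.lstrip().startswith("- [x]")
            for line in markdown.splitlines()
            if line.lstrip().startswith(("- [x]", "- [ ]"))]

def states_preserved(source: str, target: str) -> bool:
    """A migration preserves checkmarks when both lists match."""
    return checkmark_states(source) == checkmark_states(target)
```

Running this over each source/target pair before cleanup gives a cheap sanity check that no completed criterion was silently unchecked (or vice versa).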
6 .ralph/speckit-migration/meta.md Normal file
@@ -0,0 +1,6 @@

# Run: speckit-migration

- **Created**: 2026-03-06
- **Description**: (add description here)
- **Model**: sonnet
- **Max iterations**: 55

146 .ralph/speckit-migration/progress.txt Normal file
@@ -0,0 +1,146 @@

# Ralph Loop Progress Log
# Each iteration appends its findings and decisions here.

## Iteration 1 — Task A0
Migrated `Ideen.md` -> `.specify/memory/ideen.md`. Copied content as-is (German brainstorming/founding document). No modifications.

## Iteration 2 — Task A1
Migrated `spec/personas.md` -> `.specify/memory/personas.md`. Copied content as-is. No modifications.

## Iteration 3 — Task A2
Migrated `spec/design-system.md` -> `.specify/memory/design-system.md`. Copied content as-is. No modifications.

## Iteration 4 — Task A3
Migrated `spec/implementation-phases.md` -> `.specify/memory/implementation-phases.md`. Copied content as-is. No modifications.

## Iteration 5 — Task B1
Migrated `docs/agents/research/2026-03-04-backpressure-agentic-coding.md` -> `.specify/memory/research/backpressure-agentic-coding.md`. Copied content as-is. No modifications.

## Chief Wiggum — Bulk copy (B2-B7, C1-C2, D2-D3, D5-D6, D8-D9, D11-D12, E2-E5)
All remaining copy-as-is tasks completed manually. Files copied:
- B2-B7: 6 cross-cutting research docs -> `.specify/memory/research/`
- C1-C2: 2 cross-cutting plan docs -> `.specify/memory/plans/`
- D2, D3: T-1 research+plan -> `specs/t-01-monorepo-setup/`
- D5, D6: T-2 research+plan -> `specs/t-02-docker-deployment/`
- D8, D9: T-3 research+plan -> `specs/t-03-cicd-pipeline/`
- D11, D12: T-4 research+plan -> `specs/t-04-dev-infrastructure/`
- E2: US-1 research -> `specs/us-01-create-event/research.md`
- E3: US-1 plan -> `specs/us-01-create-event/plan.md`
- E4: US-1 review fixes plan -> `specs/us-01-create-event/plan-review-fixes.md`
- E5: US-1 post-review fixes plan -> `specs/us-01-create-event/plan-post-review-fixes.md`

All target directories created.

NEXT TASK: D4, D7, D10, D13 (remaining setup task spec extractions)
Then: E1, E6-E24 (user story spec extractions)
Then: Priority 4 (CLAUDE.md update)
Then: Priority 5 (cleanup)

## Iteration 6 — Task D1
Extracted T-1 from `spec/setup-tasks.md` -> `specs/t-01-monorepo-setup/spec.md`. All 6 acceptance criteria checked (Implemented). Reformatted to spec-kit template with acceptance scenarios, functional requirements, and success criteria.

## Iteration 7 — Task D4
Extracted T-2 from `spec/setup-tasks.md` -> `specs/t-02-docker-deployment/spec.md`. All 5 acceptance criteria checked (Implemented). Included scope note from the 2026-03-04 addendum (database config deferred to T-4). Reformatted to spec-kit template.

## Iteration 8 — Task D7
Extracted T-3 from `spec/setup-tasks.md` -> `specs/t-03-cicd-pipeline/spec.md`. All 6 acceptance criteria checked (Implemented). Included Q-5 resolution note about Gitea Actions. Reformatted to spec-kit template.

## Iteration 9 — Task D10
Extracted T-4 from `spec/setup-tasks.md` -> `specs/t-04-dev-infrastructure/spec.md`. All 9 acceptance criteria checked (Implemented). Included addendum about absorption of T-2 scope. Reformatted to spec-kit template with 5 setup task stories covering: database connectivity, env var config, backend test infra, frontend test infra, SPA router.

## Iteration 10 — Task D13
Extracted T-5 from `spec/setup-tasks.md` -> `specs/t-05-jpa-database/spec.md`. All 8 acceptance criteria checked (Implemented). T-5 covers API-first tooling setup (OpenAPI code generation for backend + frontend, minimal spec, API client). No research or plan docs exist for this task. Added note about directory name vs. task content mismatch.

## Iteration 11 — Task E1
Extracted US-1 from `spec/userstories.md` -> `specs/us-01-create-event/spec.md`. Status: Approved (in progress). All 8 acceptance criteria unchecked. Reformatted to spec-kit template with 3 user stories (create with required fields P1, optional fields P2, expiry validation P2), acceptance scenarios, 12 functional requirements, key entities, and 5 success criteria.

## Iteration 12 — Task E6
Extracted US-2 from `spec/userstories.md` -> `specs/us-02-view-event/spec.md`. Status: Draft. All 8 acceptance criteria unchecked. Reformatted to spec-kit template with 4 user stories (view event details P1, expired event P2, cancelled event P2 deferred, event not found P2), acceptance scenarios, 8 functional requirements, key entities, and 5 success criteria.

## Iteration 13 — Task E7
Extracted US-3 from `spec/userstories.md` -> `specs/us-03-rsvp/spec.md`. Status: Draft. All 11 acceptance criteria unchecked. Reformatted to spec-kit template with 3 user stories (submit RSVP P1, re-RSVP from same device P2, RSVP blocked on expired/cancelled events P2), 12 functional requirements, 1 key entity, and 5 success criteria. Noted deduplication mechanism as NEEDS EXPANSION (implementation detail).

## Iteration 14 — Task E8
Extracted US-4 from `spec/userstories.md` -> `specs/us-04-guest-list/spec.md`. Status: Draft. All 7 acceptance criteria unchecked. Reformatted to spec-kit template with 1 user story (view and manage RSVPs P1), 6 acceptance scenarios, 8 functional requirements, 2 key entities, and 5 success criteria.

## Iteration 15 — Task E9
Extracted US-5 from `spec/userstories.md` -> `specs/us-05-edit-event/spec.md`. Status: Draft. All 8 acceptance criteria unchecked. Reformatted to spec-kit template with 3 user stories (edit event details P1, expiry date future-date validation P2, organizer token authentication P2), 4 edge cases, 8 functional requirements, 2 key entities, and 5 success criteria. Noted that visual highlighting of changes is deferred to US-9.

## Iteration 16 — Task E10
Extracted US-6 from `spec/userstories.md` -> `specs/us-06-bookmark-event/spec.md`. Status: Draft. All 7 acceptance criteria unchecked. Reformatted to spec-kit template with 3 user stories (bookmark without RSVP P1, bookmark independent of RSVP state P2, bookmark on expired events P2), 3 edge cases, 8 functional requirements, 1 key entity, and 5 success criteria.
NOTE: Migration task list labeled this directory "us-06-calendar-export" but US-6 in userstories.md is "Bookmark an event". Calendar export is US-8. Directory was created as "us-06-bookmark-event" to reflect actual content. The instruction label appears to be an error from an earlier version of the story numbering.

## Iteration 17 — Task E11
Extracted US-7 from `spec/userstories.md` -> `specs/us-07-local-event-overview/spec.md`. Status: Draft. All 11 acceptance criteria unchecked. Reformatted to spec-kit template with 4 user stories (view tracked events P1, distinguish past events P2, remove entry P2, handle deleted event P2), 11 functional requirements, 1 key entity (LocalEventEntry), and 5 success criteria. Noted edge cases for localStorage unavailability, duplicate entries, stale cached data, and large list handling.

## Iteration 18 — Task E12
Extracted US-8 from `spec/userstories.md` -> `specs/us-08-calendar-export/spec.md`. Status: Draft. All 9 acceptance criteria unchecked. Reformatted to spec-kit template with 3 user stories (.ics download P1, webcal:// subscription P2, cancelled event STATUS:CANCELLED P3 deferred until US-18), 3 edge cases, 9 functional requirements, 1 key entity (CalendarFeed — virtual, no independent storage), and 5 success criteria.
NOTE: Migration task list labeled this directory "us-08-comments" but US-8 in userstories.md is "Add event to calendar". Directory created as "us-08-calendar-export" to reflect actual content. The label mismatch is consistent with the earlier E10/E16 issue (story numbering shifted).

## Iteration 20 — Task E14
Extracted US-10a from `spec/userstories.md` -> `specs/us-10a-organizer-updates/spec.md`. Status: Draft. All 8 acceptance criteria unchecked. Reformatted to spec-kit template with 2 user stories (post and manage update messages P1, block posting after expiry P2), 4 edge cases, 9 functional requirements, 1 key entity (UpdateMessage), and 5 success criteria. Noted that cancellation does not block posting (only expiry does), which is intentional per story notes.

## Iteration 19 — Task E13
Extracted US-9 from `spec/userstories.md` -> `specs/us-09-highlight-changes/spec.md`. Status: Draft. All 8 acceptance criteria unchecked. Reformatted to spec-kit template with 4 user stories (guest sees highlight for changed fields P1, no highlight on first visit P2, highlight clears after viewing P2, only most recent edit tracked P3), 3 edge cases, 10 functional requirements, 2 key entities (EditMetadata server-side, last_seen_at client-side localStorage), and 5 success criteria.
NOTE: Migration task list specified directory "us-09-reminders" but US-9 in userstories.md is "Highlight changed event details". Directory created as "us-09-highlight-changes" to match actual content. Consistent with corrections in iterations 16 and 18.

## Iteration 22 — Task E16
Extracted US-11 from `spec/userstories.md` -> `specs/us-11-qr-code/spec.md`. Status: Draft. All 7 acceptance criteria unchecked. Reformatted to spec-kit template with 1 user story (display and download QR code P1), 7 acceptance scenarios, 8 functional requirements, 1 key entity (QRCode — virtual, generated on demand, not persisted), and 5 success criteria. Confirmed server-side generation requirement (no external QR service) is the key constraint, consistent with privacy statutes.

## Iteration 21 — Task E15
Extracted US-10b from `spec/userstories.md` -> `specs/us-10b-guest-notifications/spec.md`. Status: Draft. All 5 acceptance criteria unchecked. Reformatted to spec-kit template with 3 user stories (unread update indicator P1, first visit no indicator P2, no server read-tracking P1), 3 edge cases, 7 functional requirements, 1 key entity (UpdateReadState — client-side only, localStorage), and 5 success criteria. Noted that the `updates_last_seen_at` key is distinct from the `last_seen_at` key used in US-9.

## Iteration 24 — Task E18
Extracted US-13 from `spec/userstories.md` -> `specs/us-13-instance-limit/spec.md`. Status: Draft. All 8 acceptance criteria unchecked. Reformatted to spec-kit template with 3 user stories (enforce cap on creation P1, no limit when unset P2, server-side enforcement only P2), 4 edge cases (MAX_ACTIVE_EVENTS=0, non-integer value, race condition, expired events excluded), 8 functional requirements, 1 key entity (active event count), and 5 success criteria.
NOTE: Migration task list labeled this directory "us-13-plus-one" but US-13 in userstories.md is "Limit the number of active events per instance". Directory created as "us-13-instance-limit" to match actual content. Consistent with corrections in iterations 16, 18, 19, and 23.

## Iteration 25 — Task E19
Extracted US-14 from `spec/userstories.md` -> `specs/us-14-pwa/spec.md`. Status: Draft. All 7 acceptance criteria unchecked. Reformatted to spec-kit template with 2 user stories (install app on device P1, serve valid manifest P1), 3 edge cases, 9 functional requirements, 2 key entities (Web App Manifest, Service Worker — both purely frontend), and 5 success criteria.
NOTE: Migration task list labeled this directory "us-14-waitlist" but US-14 in userstories.md is "Install as Progressive Web App". Directory created as "us-14-pwa" to match actual content. Consistent with corrections in iterations 16, 18, 19, 23, and 24.

## Iteration 26 — Task E20
Extracted US-15 from `spec/userstories.md` -> `specs/us-15-color-themes/spec.md`. Status: Draft. All 7 acceptance criteria unchecked. Reformatted to spec-kit template with 2 user stories (select theme during creation P1, update theme during editing P2), 3 edge cases (removed theme fallback, dark/light mode interaction, legacy events), 8 functional requirements, 1 key entity (ColorTheme — value type on Event), and 5 success criteria.

## Iteration 27 — Task E21
Extracted US-16 from `spec/userstories.md` -> `specs/us-16-header-image/spec.md`. Status: Draft. All 11 acceptance criteria unchecked. Reformatted to spec-kit template with 4 user stories (select/search image P1, event page renders image P1, graceful degradation without API key P2, image deleted with event P2), 4 edge cases, 10 functional requirements, 1 key entity (HeaderImage — stored on disk, deleted with event), and 5 success criteria. Noted server-proxy requirement as the key privacy constraint (guests never contact Unsplash directly).

## Iteration 23 — Task E17
Extracted US-12 from `spec/userstories.md` -> `specs/us-12-data-deletion/spec.md`. Status: Draft. All 7 acceptance criteria unchecked. Reformatted to spec-kit template with 3 user stories (automatic cleanup P1, expiry date extension delays deletion P2, no early deletion P2), 4 edge cases, 8 functional requirements, 2 key entities (Event expiry_date, Cleanup Job), and 5 success criteria.
NOTE: Migration task list labeled this directory "us-12-recurring-events" but US-12 in userstories.md is "Automatic data deletion after expiry date". Directory created as "us-12-data-deletion" to match actual content. Consistent with corrections in iterations 16, 18, and 19.

## Iteration 28 — Task E22
Extracted US-17 from `spec/userstories.md` -> `specs/us-17-dark-mode/spec.md`. Status: Draft. All 8 acceptance criteria unchecked. Reformatted to spec-kit template with 5 user stories (system preference on first visit P1, manual toggle with localStorage persistence P1, toggle accessible from any page P2, dark/light mode does not affect event-level themes P2, WCAG AA contrast for both modes P1), 3 edge cases, 8 functional requirements, 1 key entity (DarkLightPreference — client-side only, localStorage), and 5 success criteria.

## Iteration 29 — Task E23
Extracted US-18 from `spec/userstories.md` -> `specs/us-18-cancel-event/spec.md`. Status: Draft. All 12 acceptance criteria unchecked. Reformatted to spec-kit template with 3 user stories (cancel with optional message P1, adjust expiry date during cancellation P2, edit cancellation message after cancellation P3), 5 edge cases (includes one NEEDS EXPANSION for cancellation on already-expired events), 11 functional requirements, 1 key entity (CancellationState — value type on Event, not a separate entity), and 5 success criteria.

## Iteration 30 — Task E24
Extracted US-19 from `spec/userstories.md` -> `specs/us-19-delete-event/spec.md`. Status: Draft. All 10 acceptance criteria unchecked. Reformatted to spec-kit template with 3 user stories (immediately delete an active event P1, delete a cancelled or expired event P2, reject deletion without valid organizer token P2), 4 edge cases, 9 functional requirements, 2 key entities (Event, OrganizerToken), and 5 success criteria.
NOTE: spec/userstories.md also contains US-20 (404 page) which is not listed in the migration task list. This story was added post-US-1 review and is not included in E1-E24. No action taken on US-20 — it is out of scope for the task list as defined.

NEXT TASK: Priority 4 — Update CLAUDE.md references (all E tasks now complete).

## Iteration 31 — Priority 4: Update CLAUDE.md references
Updated 4 path references in `CLAUDE.md`:
1. Methodology section: `spec/setup-tasks.md` or `spec/userstories.md` -> `specs/[feature]/spec.md`
2. Design section: `spec/design-system.md` -> `.specify/memory/design-system.md`
3. Agent Documentation section: replaced `docs/agents/research/` and `docs/agents/plan/` with the new structure (`specs/[feature]/` for feature-specific docs, `.specify/memory/research/` and `.specify/memory/plans/` for cross-cutting docs)
4. Ralph Loops section: `spec/` -> `specs/` and `.specify/memory/` (cross-cutting docs)

NEXT TASK: Priority 5 — Cleanup (delete migrated source files and empty old directories).

## Iteration 32 — Priority 5: Cleanup (blocked by sandbox)

Attempted to delete migrated source files. All `rm` commands were blocked by the sandbox permission mode (even though the files are within the allowed working directory).

All migration content is complete. The following files/directories still need to be manually deleted by the user:

```bash
rm /home/nitrix/Development/fete/Ideen.md
rm -rf /home/nitrix/Development/fete/spec/
rm -rf /home/nitrix/Development/fete/docs/agents/
rmdir /home/nitrix/Development/fete/docs/ 2>/dev/null || true
```

`docs/` only contains `docs/agents/`, so it will be empty after removal and can be deleted.

All migration tasks (A0-A3, B1-B7, C1-C2, D1-D13, E1-E24) are COMPLETE. CLAUDE.md references have been updated (Priority 4). Only the cleanup of source files remains, requiring manual execution.
9 .ralph/speckit-migration/questions.md Normal file
@@ -0,0 +1,9 @@

# Questions

## Open

(No open questions.)

## Resolved

(No resolved questions.)

136 .specify/memory/constitution.md Normal file
@@ -0,0 +1,136 @@

<!--
|
||||||
|
Sync Impact Report
|
||||||
|
==================
|
||||||
|
Version change: 0.0.0 (template) -> 1.0.0
|
||||||
|
Modified principles: N/A (initial adoption)
|
||||||
|
Added sections:
|
||||||
|
- 6 Core Principles (Privacy, Methodology, API-First, Quality, Dependencies, Accessibility)
|
||||||
|
- Tech Stack & Constraints
|
||||||
|
- Development Workflow
|
||||||
|
- Governance
|
||||||
|
Removed sections: N/A
|
||||||
|
Templates requiring updates:
|
||||||
|
- .specify/templates/plan-template.md: OK (Constitution Check section already generic)
|
||||||
|
- .specify/templates/spec-template.md: OK (no constitution-specific references)
|
||||||
|
- .specify/templates/tasks-template.md: OK (no constitution-specific references)
|
||||||
|
- .specify/templates/checklist-template.md: OK (no constitution-specific references)
|
||||||
|
- .specify/templates/commands/*.md: N/A (no command files exist)
|
||||||
|
Follow-up TODOs: none
|
||||||
|
-->
|
||||||
|
|
||||||
|
# fete Constitution

## Core Principles

### I. Privacy by Design

Privacy is a design constraint, not a feature. It shapes every decision from
the start.

- The system MUST NOT include analytics, telemetry, or tracking of any kind.
- The server MUST NOT log PII or IP addresses.
- Every feature MUST critically evaluate what data is necessary; only data
  absolutely required for functionality may be stored.
- External dependencies that phone home (CDNs, Google Fonts, tracking-capable
  libraries) MUST NOT be used.

### II. Test-Driven Methodology

Development follows a strict Research -> Spec -> Test -> Implement -> Review
sequence. No shortcuts.

- Tests MUST be written before implementation (TDD). The Red -> Green ->
  Refactor cycle is strictly enforced.
- No implementation code may be written without a specification.
- E2E tests are mandatory for every frontend user story.
- When a setup task or user story is completed, its acceptance criteria MUST
  be checked off in the corresponding spec file before committing.

### III. API-First Development

The OpenAPI spec (`backend/src/main/resources/openapi/api.yaml`) is the
single source of truth for the REST API contract.

- Endpoints and schemas MUST be defined in the spec first.
- Backend interfaces and frontend types MUST be generated from the spec
  before writing implementation code.
- Response schemas MUST include `example:` fields for mock generation and
  documentation.

### IV. Simplicity & Quality

KISS and grugbrain. Engineer it properly, but do not over-engineer.

- No workarounds. Always fix the root cause, even if it takes longer.
- Technical debt MUST be addressed immediately; it MUST NOT accumulate.
- Refactoring is permitted freely as long as it does not alter the
  fundamental architecture.
- Every line of code MUST be intentional and traceable to a requirement.
  No vibe coding.

### V. Dependency Discipline

Every dependency is a deliberate, justified decision.

- A dependency MUST provide substantial value, and a significant portion of
  its features MUST actually be used.
- Dependencies MUST be actively maintained and open source (copyleft, e.g.
  the GPL, is acceptable).
- Dependencies that phone home or compromise user privacy MUST NOT be
  introduced.

### VI. Accessibility

Accessibility is a baseline requirement, not an afterthought.

- All frontend components MUST meet WCAG AA contrast requirements.
- Semantic HTML and ARIA attributes MUST be used where appropriate.
- The UI MUST be operable via keyboard navigation.

## Tech Stack & Constraints

- **Backend:** Java 25 (LTS, SDKMAN), Spring Boot 3.5.x, Maven with wrapper
- **Frontend:** Vue 3, TypeScript, Vue Router, Vite, Vitest, ESLint, Prettier
- **Testing:** Playwright + MSW for E2E, Vitest for unit tests, JUnit for
  the backend
- **Architecture:** Hexagonal (single Maven module, package-level separation),
  base package `de.fete`
- **State management:** Composition API (`ref`/`reactive`) + localStorage;
  no Pinia
- **Database:** No JPA until setup task T-4 is reached
- **Design system:** Electric Dusk + Sora (see `.specify/memory/design-system.md`)
- **Deployment:** Dockerfile provided; docker-compose example in the README

## Development Workflow

- Document integrity: when a decision is revised, add an addendum with the
  rationale. Never rewrite or delete the original decision.
- The visual design system in `.specify/memory/design-system.md` is
  authoritative. All frontend implementation MUST follow it.
- Feature specs, research, and plans live in `specs/NNN-feature-name/`
  (spec-kit format). Cross-cutting research goes to
  `.specify/memory/research/`, cross-cutting plans to
  `.specify/memory/plans/`.
- Conversation and brainstorming happen in German; code, comments, commits,
  and documentation are in English.
- Documentation lives in the README. No wiki, no elaborate docs site.

## Governance

This constitution supersedes all ad-hoc practices. It is the authoritative
reference for project principles and constraints.

- **Amendment procedure:** Amendments require documentation of the change,
  the rationale, and an updated version number. The original text MUST be
  preserved via addendum, not overwritten.
- **Versioning:** The constitution follows semantic versioning: MAJOR for
  principle removals or redefinitions, MINOR for additions or material
  expansions, PATCH for clarifications and wording fixes.
- **Compliance review:** All code changes and architectural decisions MUST
  be verified against these principles. The plan template's "Constitution
  Check" gate enforces this before implementation begins.
- **Agent governance:** The agent works autonomously on implementation tasks.
  Architectural decisions, fundamental design questions, tech stack choices,
  and dependency selections MUST be proposed and approved before proceeding.

**Version**: 1.0.0 | **Ratified**: 2026-03-06 | **Last Amended**: 2026-03-06
81
.specify/memory/ideen.md
Normal file
@@ -0,0 +1,81 @@
# fete

## Principles

* Should run as a PWA in the browser
  * So it feels like a regular app
* Should be a small helper you can use quickly
* I want to be able to self-host it
  * That effectively rules out AI features, and that is okay
* Privacy as a first-class citizen
  * Even the product design must be conceived with privacy in mind
  * No registration, no login required (only e.g. a code to find the "room" or similar)
  * Alternatively, any data that accrues could be stored in local storage

## The Idea

An alternative to Facebook event groups or Telegram groups, in which an event is announced and attendance is confirmed.

### Target Picture

A person creates an event via the app and sends their friends an invitation, somehow via a link. Friends can reply whether they are coming or not.

## Collected Thoughts

* A kind of landing page for every event
  * One link per event that you can drop into e.g. a WhatsApp group
  * What, how, when, where?
  * Somehow customizable in design, if desired
* RSVP: "I'm coming" (with name) / "I'm not coming" (optionally with name)
  * Stored server-side + remembered in localStorage
  * Duplicate protection: no perfect protection without accounts, but device binding via localStorage is enough against accidental double entries
  * Little can be done against malicious actors (spamming fake names etc.) without accounts; acceptable risk (cf. spliit)
* "Save/follow event": purely local, no server contact, no name required
  * Solves the multi-device problem: RSVP'd on the phone, just press "Follow" on the laptop
  * Also useful for the undecided who want to bookmark the event for now
* View for the organizer:
  * Updating the event
  * Viewing registered guests; can remove entries if needed
* Feature ideas:
  * Link previews (OpenGraph meta tags): generic OG tags with app branding (e.g. "fete: you have been invited") so shared links look nice in WhatsApp/Signal/Telegram. No event data for crawlers, for privacy reasons. -> Own user story.
  * Calendar integration: .ics download + optionally webcal:// for live updates on changes
  * Changes to the original content (e.g. changed date/location) are highlighted somehow
  * The organizer can post update messages in the event; per device, localStorage remembers what has already been seen (badge/highlight for new updates)
  * Generate a QR code (e.g. for posters/flyers)
  * Expiry date as a mandatory field, after which all stored data is deleted
  * Overview list in localStorage: all events one has RSVP'd to or saved (cf. spliit)
* Security/abuse protection:
  * Non-guessable event tokens (e.g. UUIDs)
  * Event creation is open; no login/password/invite code required
  * Max active events as a server-side configuration (env variable)
  * Honeypot fields in forms (a hidden field that only bots fill in -> ignore the request)
* Scope boundaries
  * NO chat
  * NO discovery feature within the app: without an access link, nothing works
  * NO planning of the event! So no "who does/brings what?", "what are we even doing?"

## Design Decisions Made

The following points have already been settled in discussions and are binding.

* RSVP system:
  * One link per event (NOT individual invitation links per person; too cumbersome for the organizer)
  * Guests provide a name when RSVPing; that is enough
  * Duplicates from accidental multiple sign-ups: localStorage device binding suffices as protection
  * Deliberate double sign-ups/spam: acceptable risk; the organizer can delete entries manually
  * Device sync without an account cannot be solved cleanly, and that is okay
* Abuse protection:
  * Rate limiting: deliberately left out; too much infra overhead for the scope
  * Captcha: deliberately left out; either a privacy problem (Google) or ugly
  * Admin password/invite code for event creation: deliberately left out; the app should be passable along organically
  * Experience shows: a Spliit instance also runs completely open without notable problems
  * Instead, pragmatic measures: non-guessable tokens, a mandatory expiry date, max events via configuration, honeypot fields
* Target audience:
  * Primarily circles of friends, not the broad public
  * Still: the app is exposed to the internet, so basic hardening is required
* Architecture (already decided):
  * SPA + RESTful API backend, no SSR
  * Database: PostgreSQL, hosted separately (not in the app container; the host runs their own Postgres)
  * Organizer authentication: two separate UUIDs per event, a public event token (in the URL, for guests) and a secret organizer token (in localStorage, for management). The internal DB ID is an implementation detail.
  * The app ships as a single Docker container and connects to the external Postgres instance via configuration (env variable)
* Tech stack:
  * Backend: Java (latest LTS version), Spring Boot, Maven, hexagonal/onion architecture
  * Frontend: Vue 3 (with Vite as bundler, TypeScript, Vue Router)
* Architecture decisions NOT yet made (nothing here may be decided unilaterally!):
  * (currently no open architecture decisions)
166
.specify/scripts/bash/check-prerequisites.sh
Executable file
@@ -0,0 +1,166 @@
#!/usr/bin/env bash

# Consolidated prerequisite checking script
#
# This script provides unified prerequisite checking for the Spec-Driven Development workflow.
# It replaces the functionality previously spread across multiple scripts.
#
# Usage: ./check-prerequisites.sh [OPTIONS]
#
# OPTIONS:
#   --json              Output in JSON format
#   --require-tasks     Require tasks.md to exist (for implementation phase)
#   --include-tasks     Include tasks.md in AVAILABLE_DOCS list
#   --paths-only        Only output path variables (no validation)
#   --help, -h          Show help message
#
# OUTPUTS:
#   JSON mode: {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]}
#   Text mode: FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md
#   Paths only: REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc.

set -e

# Parse command line arguments
JSON_MODE=false
REQUIRE_TASKS=false
INCLUDE_TASKS=false
PATHS_ONLY=false

for arg in "$@"; do
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --require-tasks)
            REQUIRE_TASKS=true
            ;;
        --include-tasks)
            INCLUDE_TASKS=true
            ;;
        --paths-only)
            PATHS_ONLY=true
            ;;
        --help|-h)
            cat << 'EOF'
Usage: check-prerequisites.sh [OPTIONS]

Consolidated prerequisite checking for the Spec-Driven Development workflow.

OPTIONS:
  --json              Output in JSON format
  --require-tasks     Require tasks.md to exist (for implementation phase)
  --include-tasks     Include tasks.md in AVAILABLE_DOCS list
  --paths-only        Only output path variables (no prerequisite validation)
  --help, -h          Show this help message

EXAMPLES:
  # Check task prerequisites (plan.md required)
  ./check-prerequisites.sh --json

  # Check implementation prerequisites (plan.md + tasks.md required)
  ./check-prerequisites.sh --json --require-tasks --include-tasks

  # Get feature paths only (no validation)
  ./check-prerequisites.sh --paths-only

EOF
            exit 0
            ;;
        *)
            echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2
            exit 1
            ;;
    esac
done

# Source common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Get feature paths and validate branch
eval $(get_feature_paths)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1

# If paths-only mode, output paths and exit (support JSON + paths-only combined)
if $PATHS_ONLY; then
    if $JSON_MODE; then
        # Minimal JSON paths payload (no validation performed)
        printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
            "$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS"
    else
        echo "REPO_ROOT: $REPO_ROOT"
        echo "BRANCH: $CURRENT_BRANCH"
        echo "FEATURE_DIR: $FEATURE_DIR"
        echo "FEATURE_SPEC: $FEATURE_SPEC"
        echo "IMPL_PLAN: $IMPL_PLAN"
        echo "TASKS: $TASKS"
    fi
    exit 0
fi

# Validate required directories and files
if [[ ! -d "$FEATURE_DIR" ]]; then
    echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2
    echo "Run /speckit.specify first to create the feature structure." >&2
    exit 1
fi

if [[ ! -f "$IMPL_PLAN" ]]; then
    echo "ERROR: plan.md not found in $FEATURE_DIR" >&2
    echo "Run /speckit.plan first to create the implementation plan." >&2
    exit 1
fi

# Check for tasks.md if required
if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
    echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
    echo "Run /speckit.tasks first to create the task list." >&2
    exit 1
fi

# Build list of available documents
docs=()

# Always check these optional docs
[[ -f "$RESEARCH" ]] && docs+=("research.md")
[[ -f "$DATA_MODEL" ]] && docs+=("data-model.md")

# Check contracts directory (only if it exists and has files)
if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then
    docs+=("contracts/")
fi

[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")

# Include tasks.md if requested and it exists
if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
    docs+=("tasks.md")
fi

# Output results
if $JSON_MODE; then
    # Build JSON array of documents
    if [[ ${#docs[@]} -eq 0 ]]; then
        json_docs="[]"
    else
        json_docs=$(printf '"%s",' "${docs[@]}")
        json_docs="[${json_docs%,}]"
    fi

    printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$FEATURE_DIR" "$json_docs"
else
    # Text output
    echo "FEATURE_DIR:$FEATURE_DIR"
    echo "AVAILABLE_DOCS:"

    # Show status of each potential document
    check_file "$RESEARCH" "research.md"
    check_file "$DATA_MODEL" "data-model.md"
    check_dir "$CONTRACTS_DIR" "contracts/"
    check_file "$QUICKSTART" "quickstart.md"

    if $INCLUDE_TASKS; then
        check_file "$TASKS" "tasks.md"
    fi
fi
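The JSON output path above builds its array with `printf` and a trailing-comma strip rather than depending on an external tool like `jq`. The trick in isolation:

```shell
#!/usr/bin/env bash
# Same array-to-JSON technique as in check-prerequisites.sh: print each
# element quoted with a trailing comma, then strip the final comma and
# wrap the result in brackets.
docs=("research.md" "data-model.md" "contracts/")
json_docs=$(printf '"%s",' "${docs[@]}")
json_docs="[${json_docs%,}]"
echo "$json_docs"   # -> ["research.md","data-model.md","contracts/"]
```

This only yields valid JSON because the document names contain no characters that would need escaping, which holds for the fixed filenames the script emits.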
156
.specify/scripts/bash/common.sh
Executable file
@@ -0,0 +1,156 @@
#!/usr/bin/env bash
# Common functions and variables for all scripts

# Get repository root, with fallback for non-git repositories
get_repo_root() {
    if git rev-parse --show-toplevel >/dev/null 2>&1; then
        git rev-parse --show-toplevel
    else
        # Fall back to script location for non-git repos
        local script_dir="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
        (cd "$script_dir/../../.." && pwd)
    fi
}

# Get current branch, with fallback for non-git repositories
get_current_branch() {
    # First check if the SPECIFY_FEATURE environment variable is set
    if [[ -n "${SPECIFY_FEATURE:-}" ]]; then
        echo "$SPECIFY_FEATURE"
        return
    fi

    # Then check git if available
    if git rev-parse --abbrev-ref HEAD >/dev/null 2>&1; then
        git rev-parse --abbrev-ref HEAD
        return
    fi

    # For non-git repos, try to find the latest feature directory
    local repo_root=$(get_repo_root)
    local specs_dir="$repo_root/specs"

    if [[ -d "$specs_dir" ]]; then
        local latest_feature=""
        local highest=0

        for dir in "$specs_dir"/*; do
            if [[ -d "$dir" ]]; then
                local dirname=$(basename "$dir")
                if [[ "$dirname" =~ ^([0-9]{3})- ]]; then
                    local number=${BASH_REMATCH[1]}
                    number=$((10#$number))
                    if [[ "$number" -gt "$highest" ]]; then
                        highest=$number
                        latest_feature=$dirname
                    fi
                fi
            fi
        done

        if [[ -n "$latest_feature" ]]; then
            echo "$latest_feature"
            return
        fi
    fi

    echo "main"  # Final fallback
}

# Check if we have git available
has_git() {
    git rev-parse --show-toplevel >/dev/null 2>&1
}

check_feature_branch() {
    local branch="$1"
    local has_git_repo="$2"

    # For non-git repos, we can't enforce branch naming but still provide output
    if [[ "$has_git_repo" != "true" ]]; then
        echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2
        return 0
    fi

    if [[ ! "$branch" =~ ^[0-9]{3}- ]]; then
        echo "ERROR: Not on a feature branch. Current branch: $branch" >&2
        echo "Feature branches should be named like: 001-feature-name" >&2
        return 1
    fi

    return 0
}

get_feature_dir() { echo "$1/specs/$2"; }

# Find the feature directory by numeric prefix instead of exact branch match.
# This allows multiple branches to work on the same spec (e.g., 004-fix-bug, 004-add-feature).
find_feature_dir_by_prefix() {
    local repo_root="$1"
    local branch_name="$2"
    local specs_dir="$repo_root/specs"

    # Extract the numeric prefix from the branch (e.g., "004" from "004-whatever")
    if [[ ! "$branch_name" =~ ^([0-9]{3})- ]]; then
        # If the branch doesn't have a numeric prefix, fall back to exact match
        echo "$specs_dir/$branch_name"
        return
    fi

    local prefix="${BASH_REMATCH[1]}"

    # Search for directories in specs/ that start with this prefix
    local matches=()
    if [[ -d "$specs_dir" ]]; then
        for dir in "$specs_dir"/"$prefix"-*; do
            if [[ -d "$dir" ]]; then
                matches+=("$(basename "$dir")")
            fi
        done
    fi

    # Handle results
    if [[ ${#matches[@]} -eq 0 ]]; then
        # No match found; return the branch name path (will fail later with a clear error)
        echo "$specs_dir/$branch_name"
    elif [[ ${#matches[@]} -eq 1 ]]; then
        # Exactly one match
        echo "$specs_dir/${matches[0]}"
    else
        # Multiple matches; this shouldn't happen with the proper naming convention
        echo "ERROR: Multiple spec directories found with prefix '$prefix': ${matches[*]}" >&2
        echo "Please ensure only one spec directory exists per numeric prefix." >&2
        echo "$specs_dir/$branch_name"  # Return something to avoid breaking the script
    fi
}

get_feature_paths() {
    local repo_root=$(get_repo_root)
    local current_branch=$(get_current_branch)
    local has_git_repo="false"

    if has_git; then
        has_git_repo="true"
    fi

    # Use prefix-based lookup to support multiple branches per spec
    local feature_dir=$(find_feature_dir_by_prefix "$repo_root" "$current_branch")

    cat <<EOF
REPO_ROOT='$repo_root'
CURRENT_BRANCH='$current_branch'
HAS_GIT='$has_git_repo'
FEATURE_DIR='$feature_dir'
FEATURE_SPEC='$feature_dir/spec.md'
IMPL_PLAN='$feature_dir/plan.md'
TASKS='$feature_dir/tasks.md'
RESEARCH='$feature_dir/research.md'
DATA_MODEL='$feature_dir/data-model.md'
QUICKSTART='$feature_dir/quickstart.md'
CONTRACTS_DIR='$feature_dir/contracts'
EOF
}

check_file() { [[ -f "$1" ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
check_dir() { [[ -d "$1" && -n $(ls -A "$1" 2>/dev/null) ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
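`get_feature_paths` emits single-quoted `KEY='value'` assignments so callers can load every path with one `eval`, which is exactly what `check-prerequisites.sh` does. A minimal self-contained sketch of the pattern, with made-up paths:

```shell
#!/usr/bin/env bash
# Emit variables as shell assignments, then eval them in the caller.
emit_paths() {
    cat <<EOF
REPO_ROOT='/tmp/demo-repo'
FEATURE_DIR='/tmp/demo-repo/specs/001-demo'
EOF
}
eval "$(emit_paths)"
echo "$FEATURE_DIR"   # -> /tmp/demo-repo/specs/001-demo
```

The single quotes in the emitted assignments keep paths containing spaces from word-splitting at eval time; the pattern assumes the paths themselves contain no single quotes.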
313
.specify/scripts/bash/create-new-feature.sh
Executable file
@@ -0,0 +1,313 @@
#!/usr/bin/env bash

set -e

JSON_MODE=false
SHORT_NAME=""
BRANCH_NUMBER=""
ARGS=()
i=1
while [ $i -le $# ]; do
    arg="${!i}"
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --short-name)
            if [ $((i + 1)) -gt $# ]; then
                echo 'Error: --short-name requires a value' >&2
                exit 1
            fi
            i=$((i + 1))
            next_arg="${!i}"
            # Check if the next argument is another option (starts with --)
            if [[ "$next_arg" == --* ]]; then
                echo 'Error: --short-name requires a value' >&2
                exit 1
            fi
            SHORT_NAME="$next_arg"
            ;;
        --number)
            if [ $((i + 1)) -gt $# ]; then
                echo 'Error: --number requires a value' >&2
                exit 1
            fi
            i=$((i + 1))
            next_arg="${!i}"
            if [[ "$next_arg" == --* ]]; then
                echo 'Error: --number requires a value' >&2
                exit 1
            fi
            BRANCH_NUMBER="$next_arg"
            ;;
        --help|-h)
            echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>"
            echo ""
            echo "Options:"
            echo "  --json               Output in JSON format"
            echo "  --short-name <name>  Provide a custom short name (2-4 words) for the branch"
            echo "  --number N           Specify the branch number manually (overrides auto-detection)"
            echo "  --help, -h           Show this help message"
            echo ""
            echo "Examples:"
            echo "  $0 'Add user authentication system' --short-name 'user-auth'"
            echo "  $0 'Implement OAuth2 integration for API' --number 5"
            exit 0
            ;;
        *)
            ARGS+=("$arg")
            ;;
    esac
    i=$((i + 1))
done

FEATURE_DESCRIPTION="${ARGS[*]}"
if [ -z "$FEATURE_DESCRIPTION" ]; then
    echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>" >&2
    exit 1
fi

# Trim whitespace and validate that the description is not empty (e.g., user passed only whitespace)
FEATURE_DESCRIPTION=$(echo "$FEATURE_DESCRIPTION" | xargs)
if [ -z "$FEATURE_DESCRIPTION" ]; then
    echo "Error: Feature description cannot be empty or contain only whitespace" >&2
    exit 1
fi

# Find the repository root by searching for existing project markers
find_repo_root() {
    local dir="$1"
    while [ "$dir" != "/" ]; do
        if [ -d "$dir/.git" ] || [ -d "$dir/.specify" ]; then
            echo "$dir"
            return 0
        fi
        dir="$(dirname "$dir")"
    done
    return 1
}

# Get the highest feature number from the specs directory
get_highest_from_specs() {
    local specs_dir="$1"
    local highest=0

    if [ -d "$specs_dir" ]; then
        for dir in "$specs_dir"/*; do
            [ -d "$dir" ] || continue
            dirname=$(basename "$dir")
            number=$(echo "$dirname" | grep -o '^[0-9]\+' || echo "0")
            number=$((10#$number))
            if [ "$number" -gt "$highest" ]; then
                highest=$number
            fi
        done
    fi

    echo "$highest"
}

# Get the highest feature number from git branches
get_highest_from_branches() {
    local highest=0

    # Get all branches (local and remote)
    branches=$(git branch -a 2>/dev/null || echo "")

    if [ -n "$branches" ]; then
        while IFS= read -r branch; do
            # Clean the branch name: remove leading markers and remote prefixes
            clean_branch=$(echo "$branch" | sed 's/^[* ]*//; s|^remotes/[^/]*/||')

            # Extract the feature number if the branch matches the pattern ###-*
            if echo "$clean_branch" | grep -q '^[0-9]\{3\}-'; then
                number=$(echo "$clean_branch" | grep -o '^[0-9]\{3\}' || echo "0")
                number=$((10#$number))
                if [ "$number" -gt "$highest" ]; then
                    highest=$number
                fi
            fi
        done <<< "$branches"
    fi

    echo "$highest"
}

# Check existing branches (local and remote) and return the next available number
check_existing_branches() {
    local specs_dir="$1"

    # Fetch all remotes to get the latest branch info (suppress errors if no remotes)
    git fetch --all --prune 2>/dev/null || true

    # Get the highest number from ALL branches (not just those matching the short name)
    local highest_branch=$(get_highest_from_branches)

    # Get the highest number from ALL specs (not just those matching the short name)
    local highest_spec=$(get_highest_from_specs "$specs_dir")

    # Take the maximum of both
    local max_num=$highest_branch
    if [ "$highest_spec" -gt "$max_num" ]; then
        max_num=$highest_spec
    fi

    # Return the next number
    echo $((max_num + 1))
}

# Clean and format a branch name
clean_branch_name() {
    local name="$1"
    echo "$name" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//'
}

# Resolve the repository root. Prefer git information when available, but fall back
# to searching for repository markers so the workflow still functions in repositories
# that were initialised with --no-git.
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

if git rev-parse --show-toplevel >/dev/null 2>&1; then
    REPO_ROOT=$(git rev-parse --show-toplevel)
    HAS_GIT=true
else
    REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")"
    if [ -z "$REPO_ROOT" ]; then
        echo "Error: Could not determine repository root. Please run this script from within the repository." >&2
        exit 1
    fi
    HAS_GIT=false
fi

cd "$REPO_ROOT"

SPECS_DIR="$REPO_ROOT/specs"
mkdir -p "$SPECS_DIR"

# Generate a branch name with stop-word filtering and length filtering
generate_branch_name() {
    local description="$1"

    # Common stop words to filter out
    local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$"

    # Convert to lowercase and split into words
    local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g')

    # Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in the original)
    local meaningful_words=()
    for word in $clean_name; do
        # Skip empty words
        [ -z "$word" ] && continue

        # Keep words that are NOT stop words AND (length >= 3 OR are potential acronyms)
        if ! echo "$word" | grep -qiE "$stop_words"; then
            if [ ${#word} -ge 3 ]; then
                meaningful_words+=("$word")
            elif echo "$description" | grep -q "\b${word^^}\b"; then
                # Keep short words if they appear as uppercase in the original (likely acronyms)
                meaningful_words+=("$word")
            fi
        fi
    done

    # If we have meaningful words, use the first 3-4 of them
    if [ ${#meaningful_words[@]} -gt 0 ]; then
        local max_words=3
        if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi

        local result=""
        local count=0
        for word in "${meaningful_words[@]}"; do
            if [ $count -ge $max_words ]; then break; fi
            if [ -n "$result" ]; then result="$result-"; fi
            result="$result$word"
            count=$((count + 1))
        done
        echo "$result"
    else
        # Fall back to the original logic if no meaningful words were found
        local cleaned=$(clean_branch_name "$description")
        echo "$cleaned" | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//'
    fi
}

# Generate the branch name
if [ -n "$SHORT_NAME" ]; then
    # Use the provided short name, just clean it up
    BRANCH_SUFFIX=$(clean_branch_name "$SHORT_NAME")
else
    # Generate from the description with smart filtering
    BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION")
fi

# Determine the branch number
if [ -z "$BRANCH_NUMBER" ]; then
    if [ "$HAS_GIT" = true ]; then
        # Check existing branches on remotes
        BRANCH_NUMBER=$(check_existing_branches "$SPECS_DIR")
    else
        # Fall back to a local directory check
        HIGHEST=$(get_highest_from_specs "$SPECS_DIR")
        BRANCH_NUMBER=$((HIGHEST + 1))
    fi
fi

# Force base-10 interpretation to prevent octal conversion (e.g., 010 is 8 in octal, but should be 10 in decimal)
FEATURE_NUM=$(printf "%03d" "$((10#$BRANCH_NUMBER))")
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"

# GitHub enforces a 244-byte limit on branch names.
# Validate and truncate if necessary.
MAX_BRANCH_LENGTH=244
if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then
    # Calculate how much we need to trim from the suffix
    # Account for: feature number (3) + hyphen (1) = 4 chars
    MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - 4))

    # Truncate the suffix at a word boundary if possible
    TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-$MAX_SUFFIX_LENGTH)
|
||||||
|
# Remove trailing hyphen if truncation created one
|
||||||
|
TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//')
|
||||||
|
|
||||||
|
ORIGINAL_BRANCH_NAME="$BRANCH_NAME"
|
||||||
|
BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}"
|
||||||
|
|
||||||
|
>&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
|
||||||
|
>&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME (${#ORIGINAL_BRANCH_NAME} bytes)"
|
||||||
|
>&2 echo "[specify] Truncated to: $BRANCH_NAME (${#BRANCH_NAME} bytes)"
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [ "$HAS_GIT" = true ]; then
|
||||||
|
if ! git checkout -b "$BRANCH_NAME" 2>/dev/null; then
|
||||||
|
# Check if branch already exists
|
||||||
|
if git branch --list "$BRANCH_NAME" | grep -q .; then
|
||||||
|
>&2 echo "Error: Branch '$BRANCH_NAME' already exists. Please use a different feature name or specify a different number with --number."
|
||||||
|
exit 1
|
||||||
|
else
|
||||||
|
>&2 echo "Error: Failed to create git branch '$BRANCH_NAME'. Please check your git configuration and try again."
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
>&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
|
||||||
|
fi
|
||||||
|
|
||||||
|
FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME"
|
||||||
|
mkdir -p "$FEATURE_DIR"
|
||||||
|
|
||||||
|
TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md"
|
||||||
|
SPEC_FILE="$FEATURE_DIR/spec.md"
|
||||||
|
if [ -f "$TEMPLATE" ]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi
|
||||||
|
|
||||||
|
# Set the SPECIFY_FEATURE environment variable for the current session
|
||||||
|
export SPECIFY_FEATURE="$BRANCH_NAME"
|
||||||
|
|
||||||
|
if $JSON_MODE; then
|
||||||
|
printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM"
|
||||||
|
else
|
||||||
|
echo "BRANCH_NAME: $BRANCH_NAME"
|
||||||
|
echo "SPEC_FILE: $SPEC_FILE"
|
||||||
|
echo "FEATURE_NUM: $FEATURE_NUM"
|
||||||
|
echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME"
|
||||||
|
fi
|
||||||
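The octal hazard the script guards against with `10#` can be demonstrated in isolation. This is a minimal sketch; the sample value "010" is made up for illustration.

```shell
# Without the 10# prefix, bash arithmetic reads a zero-padded number as octal.
BRANCH_NUMBER="010"
plain=$((BRANCH_NUMBER))        # leading zero -> octal interpretation: 8
forced=$((10#$BRANCH_NUMBER))   # 10# forces decimal: 10
FEATURE_NUM=$(printf "%03d" "$forced")
echo "$plain $forced $FEATURE_NUM"   # prints: 8 10 010
```

Note that numbers with the digits 8 or 9 (e.g. "018") would make the plain form an outright arithmetic error, not just a wrong value, which is why the script always forces base 10.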
61
.specify/scripts/bash/setup-plan.sh
Executable file
@@ -0,0 +1,61 @@
#!/usr/bin/env bash

set -e

# Parse command line arguments
JSON_MODE=false
ARGS=()

for arg in "$@"; do
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --help|-h)
            echo "Usage: $0 [--json]"
            echo "  --json    Output results in JSON format"
            echo "  --help    Show this help message"
            exit 0
            ;;
        *)
            ARGS+=("$arg")
            ;;
    esac
done

# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Get all paths and variables from common functions
eval $(get_feature_paths)

# Check if we're on a proper feature branch (only for git repos)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1

# Ensure the feature directory exists
mkdir -p "$FEATURE_DIR"

# Copy plan template if it exists
TEMPLATE="$REPO_ROOT/.specify/templates/plan-template.md"
if [[ -f "$TEMPLATE" ]]; then
    cp "$TEMPLATE" "$IMPL_PLAN"
    echo "Copied plan template to $IMPL_PLAN"
else
    echo "Warning: Plan template not found at $TEMPLATE"
    # Create a basic plan file if template doesn't exist
    touch "$IMPL_PLAN"
fi

# Output results
if $JSON_MODE; then
    printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
        "$FEATURE_SPEC" "$IMPL_PLAN" "$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT"
else
    echo "FEATURE_SPEC: $FEATURE_SPEC"
    echo "IMPL_PLAN: $IMPL_PLAN"
    echo "SPECS_DIR: $FEATURE_DIR"
    echo "BRANCH: $CURRENT_BRANCH"
    echo "HAS_GIT: $HAS_GIT"
fi
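The `--json` flag handling that both scripts share reduces to a small, self-contained loop. This is a sketch; the positional arguments ("feature-x", "extra") are invented sample input.

```shell
# Standalone sketch of the argument loop: --json flips a flag, everything
# else accumulates into ARGS for later use.
JSON_MODE=false
ARGS=()
set -- --json feature-x extra   # simulate "$@" with sample arguments
for arg in "$@"; do
    case "$arg" in
        --json) JSON_MODE=true ;;
        *) ARGS+=("$arg") ;;
    esac
done
echo "JSON_MODE=$JSON_MODE ARGS=${ARGS[*]}"
```

Because unknown arguments fall through to `ARGS`, flags and positional arguments may be freely interleaved on the command line.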
829
.specify/scripts/bash/update-agent-context.sh
Executable file
@@ -0,0 +1,829 @@
#!/usr/bin/env bash

# Update agent context files with information from plan.md
#
# This script maintains AI agent context files by parsing feature specifications
# and updating agent-specific configuration files with project information.
#
# MAIN FUNCTIONS:
# 1. Environment Validation
#    - Verifies git repository structure and branch information
#    - Checks for required plan.md files and templates
#    - Validates file permissions and accessibility
#
# 2. Plan Data Extraction
#    - Parses plan.md files to extract project metadata
#    - Identifies language/version, frameworks, databases, and project types
#    - Handles missing or incomplete specification data gracefully
#
# 3. Agent File Management
#    - Creates new agent context files from templates when needed
#    - Updates existing agent files with new project information
#    - Preserves manual additions and custom configurations
#    - Supports multiple AI agent formats and directory structures
#
# 4. Content Generation
#    - Generates language-specific build/test commands
#    - Creates appropriate project directory structures
#    - Updates technology stacks and recent changes sections
#    - Maintains consistent formatting and timestamps
#
# 5. Multi-Agent Support
#    - Handles agent-specific file paths and naming conventions
#    - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, Kiro CLI, or Antigravity
#    - Can update single agents or all existing agent files
#    - Creates default Claude file if no agent files exist
#
# Usage: ./update-agent-context.sh [agent_type]
# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|kiro-cli|agy|bob|qodercli
# Leave empty to update all existing agent files

set -e

# Enable strict error handling
set -u
set -o pipefail

#==============================================================================
# Configuration and Global Variables
#==============================================================================

# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Get all paths and variables from common functions
eval $(get_feature_paths)

NEW_PLAN="$IMPL_PLAN"  # Alias for compatibility with existing code
AGENT_TYPE="${1:-}"

# Agent-specific file paths
CLAUDE_FILE="$REPO_ROOT/CLAUDE.md"
GEMINI_FILE="$REPO_ROOT/GEMINI.md"
COPILOT_FILE="$REPO_ROOT/.github/agents/copilot-instructions.md"
CURSOR_FILE="$REPO_ROOT/.cursor/rules/specify-rules.mdc"
QWEN_FILE="$REPO_ROOT/QWEN.md"
AGENTS_FILE="$REPO_ROOT/AGENTS.md"
WINDSURF_FILE="$REPO_ROOT/.windsurf/rules/specify-rules.md"
KILOCODE_FILE="$REPO_ROOT/.kilocode/rules/specify-rules.md"
AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md"
ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md"
CODEBUDDY_FILE="$REPO_ROOT/CODEBUDDY.md"
QODER_FILE="$REPO_ROOT/QODER.md"
AMP_FILE="$REPO_ROOT/AGENTS.md"
SHAI_FILE="$REPO_ROOT/SHAI.md"
KIRO_FILE="$REPO_ROOT/AGENTS.md"
AGY_FILE="$REPO_ROOT/.agent/rules/specify-rules.md"
BOB_FILE="$REPO_ROOT/AGENTS.md"

# Template file
TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md"

# Global variables for parsed plan data
NEW_LANG=""
NEW_FRAMEWORK=""
NEW_DB=""
NEW_PROJECT_TYPE=""

#==============================================================================
# Utility Functions
#==============================================================================

log_info() {
    echo "INFO: $1"
}

log_success() {
    echo "✓ $1"
}

log_error() {
    echo "ERROR: $1" >&2
}

log_warning() {
    echo "WARNING: $1" >&2
}

# Cleanup function for temporary files
cleanup() {
    local exit_code=$?
    rm -f /tmp/agent_update_*_$$
    rm -f /tmp/manual_additions_$$
    exit $exit_code
}

# Set up cleanup trap
trap cleanup EXIT INT TERM

#==============================================================================
# Validation Functions
#==============================================================================

validate_environment() {
    # Check if we have a current branch/feature (git or non-git)
    if [[ -z "$CURRENT_BRANCH" ]]; then
        log_error "Unable to determine current feature"
        if [[ "$HAS_GIT" == "true" ]]; then
            log_info "Make sure you're on a feature branch"
        else
            log_info "Set SPECIFY_FEATURE environment variable or create a feature first"
        fi
        exit 1
    fi

    # Check if plan.md exists
    if [[ ! -f "$NEW_PLAN" ]]; then
        log_error "No plan.md found at $NEW_PLAN"
        log_info "Make sure you're working on a feature with a corresponding spec directory"
        if [[ "$HAS_GIT" != "true" ]]; then
            log_info "Use: export SPECIFY_FEATURE=your-feature-name or create a new feature first"
        fi
        exit 1
    fi

    # Check if template exists (needed for new files)
    if [[ ! -f "$TEMPLATE_FILE" ]]; then
        log_warning "Template file not found at $TEMPLATE_FILE"
        log_warning "Creating new agent files will fail"
    fi
}

#==============================================================================
# Plan Parsing Functions
#==============================================================================

extract_plan_field() {
    local field_pattern="$1"
    local plan_file="$2"

    grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
        head -1 | \
        sed "s|^\*\*${field_pattern}\*\*: ||" | \
        sed 's/^[ \t]*//;s/[ \t]*$//' | \
        grep -v "NEEDS CLARIFICATION" | \
        grep -v "^N/A$" || echo ""
}

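The `extract_plan_field` pipeline above can be exercised against a throwaway plan.md; the field values here ("Python 3.12", "N/A") are made-up sample data, and the function body is copied verbatim from the script.

```shell
# Exercise extract_plan_field against a temporary plan.md.
extract_plan_field() {
    local field_pattern="$1"
    local plan_file="$2"

    grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
        head -1 | \
        sed "s|^\*\*${field_pattern}\*\*: ||" | \
        sed 's/^[ \t]*//;s/[ \t]*$//' | \
        grep -v "NEEDS CLARIFICATION" | \
        grep -v "^N/A$" || echo ""
}

plan=$(mktemp)
printf '%s\n' '**Language/Version**: Python 3.12' '**Storage**: N/A' > "$plan"
lang=$(extract_plan_field "Language/Version" "$plan")
db=$(extract_plan_field "Storage" "$plan")
rm -f "$plan"
echo "lang=[$lang] db=[$db]"
```

The placeholder values "N/A" and "NEEDS CLARIFICATION" are filtered out, so `db` comes back empty: callers can treat an empty string uniformly as "not specified".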
parse_plan_data() {
    local plan_file="$1"

    if [[ ! -f "$plan_file" ]]; then
        log_error "Plan file not found: $plan_file"
        return 1
    fi

    if [[ ! -r "$plan_file" ]]; then
        log_error "Plan file is not readable: $plan_file"
        return 1
    fi

    log_info "Parsing plan data from $plan_file"

    NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
    NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
    NEW_DB=$(extract_plan_field "Storage" "$plan_file")
    NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")

    # Log what we found
    if [[ -n "$NEW_LANG" ]]; then
        log_info "Found language: $NEW_LANG"
    else
        log_warning "No language information found in plan"
    fi

    if [[ -n "$NEW_FRAMEWORK" ]]; then
        log_info "Found framework: $NEW_FRAMEWORK"
    fi

    if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
        log_info "Found database: $NEW_DB"
    fi

    if [[ -n "$NEW_PROJECT_TYPE" ]]; then
        log_info "Found project type: $NEW_PROJECT_TYPE"
    fi
}

format_technology_stack() {
    local lang="$1"
    local framework="$2"
    local parts=()

    # Add non-empty parts
    [[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
    [[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")

    # Join with proper formatting
    if [[ ${#parts[@]} -eq 0 ]]; then
        echo ""
    elif [[ ${#parts[@]} -eq 1 ]]; then
        echo "${parts[0]}"
    else
        # Join multiple parts with " + "
        local result="${parts[0]}"
        for ((i=1; i<${#parts[@]}; i++)); do
            result="$result + ${parts[i]}"
        done
        echo "$result"
    fi
}

#==============================================================================
# Template and Content Generation Functions
#==============================================================================

get_project_structure() {
    local project_type="$1"

    if [[ "$project_type" == *"web"* ]]; then
        echo "backend/\\nfrontend/\\ntests/"
    else
        echo "src/\\ntests/"
    fi
}

get_commands_for_language() {
    local lang="$1"

    case "$lang" in
        *"Python"*)
            echo "cd src && pytest && ruff check ."
            ;;
        *"Rust"*)
            echo "cargo test && cargo clippy"
            ;;
        *"JavaScript"*|*"TypeScript"*)
            echo "npm test \\&\\& npm run lint"
            ;;
        *)
            echo "# Add commands for $lang"
            ;;
    esac
}

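The joining behavior of `format_technology_stack` is easy to check in isolation; the function body is copied from the script and the inputs ("Python 3.12", "FastAPI", "Rust 1.79") are invented sample values.

```shell
# format_technology_stack joins non-placeholder parts with " + ", dropping
# empty, "NEEDS CLARIFICATION", and (for the framework) "N/A" values.
format_technology_stack() {
    local lang="$1"
    local framework="$2"
    local parts=()

    [[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
    [[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")

    if [[ ${#parts[@]} -eq 0 ]]; then
        echo ""
    elif [[ ${#parts[@]} -eq 1 ]]; then
        echo "${parts[0]}"
    else
        local result="${parts[0]}"
        for ((i=1; i<${#parts[@]}; i++)); do
            result="$result + ${parts[i]}"
        done
        echo "$result"
    fi
}

both=$(format_technology_stack "Python 3.12" "FastAPI")
lang_only=$(format_technology_stack "Rust 1.79" "N/A")
echo "both=[$both] lang_only=[$lang_only]"
```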
get_language_conventions() {
    local lang="$1"
    echo "$lang: Follow standard conventions"
}

create_new_agent_file() {
    local target_file="$1"
    local temp_file="$2"
    local project_name="$3"
    local current_date="$4"

    if [[ ! -f "$TEMPLATE_FILE" ]]; then
        log_error "Template not found at $TEMPLATE_FILE"
        return 1
    fi

    if [[ ! -r "$TEMPLATE_FILE" ]]; then
        log_error "Template file is not readable: $TEMPLATE_FILE"
        return 1
    fi

    log_info "Creating new agent context file from template..."

    if ! cp "$TEMPLATE_FILE" "$temp_file"; then
        log_error "Failed to copy template file"
        return 1
    fi

    # Replace template placeholders
    local project_structure
    project_structure=$(get_project_structure "$NEW_PROJECT_TYPE")

    local commands
    commands=$(get_commands_for_language "$NEW_LANG")

    local language_conventions
    language_conventions=$(get_language_conventions "$NEW_LANG")

    # Perform substitutions with error checking using safer approach
    # Escape special characters for sed by using a different delimiter or escaping
    local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|]/\\&/g')
    local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|]/\\&/g')
    local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|]/\\&/g')

    # Build technology stack and recent change strings conditionally
    local tech_stack
    if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
        tech_stack="- $escaped_lang + $escaped_framework ($escaped_branch)"
    elif [[ -n "$escaped_lang" ]]; then
        tech_stack="- $escaped_lang ($escaped_branch)"
    elif [[ -n "$escaped_framework" ]]; then
        tech_stack="- $escaped_framework ($escaped_branch)"
    else
        tech_stack="- ($escaped_branch)"
    fi

    local recent_change
    if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
        recent_change="- $escaped_branch: Added $escaped_lang + $escaped_framework"
    elif [[ -n "$escaped_lang" ]]; then
        recent_change="- $escaped_branch: Added $escaped_lang"
    elif [[ -n "$escaped_framework" ]]; then
        recent_change="- $escaped_branch: Added $escaped_framework"
    else
        recent_change="- $escaped_branch: Added"
    fi

    local substitutions=(
        "s|\[PROJECT NAME\]|$project_name|"
        "s|\[DATE\]|$current_date|"
        "s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
        "s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
        "s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
        "s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
        "s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
    )

    for substitution in "${substitutions[@]}"; do
        if ! sed -i.bak -e "$substitution" "$temp_file"; then
            log_error "Failed to perform substitution: $substitution"
            rm -f "$temp_file" "$temp_file.bak"
            return 1
        fi
    done

    # Convert \n sequences to actual newlines
    newline=$(printf '\n')
    sed -i.bak2 "s/\\\\n/${newline}/g" "$temp_file"

    # Clean up backup files
    rm -f "$temp_file.bak" "$temp_file.bak2"

    # Prepend Cursor frontmatter for .mdc files so rules are auto-included
    if [[ "$target_file" == *.mdc ]]; then
        local frontmatter_file
        frontmatter_file=$(mktemp) || return 1
        printf '%s\n' "---" "description: Project Development Guidelines" "globs: [\"**/*\"]" "alwaysApply: true" "---" "" > "$frontmatter_file"
        cat "$temp_file" >> "$frontmatter_file"
        mv "$frontmatter_file" "$temp_file"
    fi

    return 0
}

update_existing_agent_file() {
    local target_file="$1"
    local current_date="$2"

    log_info "Updating existing agent context file..."

    # Use a single temporary file for atomic update
    local temp_file
    temp_file=$(mktemp) || {
        log_error "Failed to create temporary file"
        return 1
    }

    # Process the file in one pass
    local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK")
    local new_tech_entries=()
    local new_change_entry=""

    # Prepare new technology entries
    if [[ -n "$tech_stack" ]] && ! grep -q "$tech_stack" "$target_file"; then
        new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)")
    fi

    if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! grep -q "$NEW_DB" "$target_file"; then
        new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)")
    fi

    # Prepare new change entry
    if [[ -n "$tech_stack" ]]; then
        new_change_entry="- $CURRENT_BRANCH: Added $tech_stack"
    elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then
        new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB"
    fi

    # Check if sections exist in the file
    local has_active_technologies=0
    local has_recent_changes=0

    if grep -q "^## Active Technologies" "$target_file" 2>/dev/null; then
        has_active_technologies=1
    fi

    if grep -q "^## Recent Changes" "$target_file" 2>/dev/null; then
        has_recent_changes=1
    fi

    # Process file line by line
    local in_tech_section=false
    local in_changes_section=false
    local tech_entries_added=false
    local changes_entries_added=false
    local existing_changes_count=0
    local file_ended=false

    while IFS= read -r line || [[ -n "$line" ]]; do
        # Handle Active Technologies section
        if [[ "$line" == "## Active Technologies" ]]; then
            echo "$line" >> "$temp_file"
            in_tech_section=true
            continue
        elif [[ $in_tech_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
            # Add new tech entries before closing the section
            if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
                printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
                tech_entries_added=true
            fi
            echo "$line" >> "$temp_file"
            in_tech_section=false
            continue
        elif [[ $in_tech_section == true ]] && [[ -z "$line" ]]; then
            # Add new tech entries before empty line in tech section
            if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
                printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
                tech_entries_added=true
            fi
            echo "$line" >> "$temp_file"
            continue
        fi

        # Handle Recent Changes section
        if [[ "$line" == "## Recent Changes" ]]; then
            echo "$line" >> "$temp_file"
            # Add new change entry right after the heading
            if [[ -n "$new_change_entry" ]]; then
                echo "$new_change_entry" >> "$temp_file"
            fi
            in_changes_section=true
            changes_entries_added=true
            continue
        elif [[ $in_changes_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
            echo "$line" >> "$temp_file"
            in_changes_section=false
            continue
        elif [[ $in_changes_section == true ]] && [[ "$line" == "- "* ]]; then
            # Keep only first 2 existing changes
            if [[ $existing_changes_count -lt 2 ]]; then
                echo "$line" >> "$temp_file"
                ((existing_changes_count++))
            fi
            continue
        fi

        # Update timestamp
        if [[ "$line" =~ \*\*Last\ updated\*\*:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then
            echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file"
        else
            echo "$line" >> "$temp_file"
        fi
    done < "$target_file"

    # Post-loop check: if we're still in the Active Technologies section and haven't added new entries
    if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
        printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
        tech_entries_added=true
    fi

    # If sections don't exist, add them at the end of the file
    if [[ $has_active_technologies -eq 0 ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
        echo "" >> "$temp_file"
        echo "## Active Technologies" >> "$temp_file"
        printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
        tech_entries_added=true
    fi

    if [[ $has_recent_changes -eq 0 ]] && [[ -n "$new_change_entry" ]]; then
        echo "" >> "$temp_file"
        echo "## Recent Changes" >> "$temp_file"
        echo "$new_change_entry" >> "$temp_file"
        changes_entries_added=true
    fi

    # Ensure Cursor .mdc files have YAML frontmatter for auto-inclusion
    if [[ "$target_file" == *.mdc ]]; then
        if ! head -1 "$temp_file" | grep -q '^---'; then
            local frontmatter_file
            frontmatter_file=$(mktemp) || { rm -f "$temp_file"; return 1; }
            printf '%s\n' "---" "description: Project Development Guidelines" "globs: [\"**/*\"]" "alwaysApply: true" "---" "" > "$frontmatter_file"
            cat "$temp_file" >> "$frontmatter_file"
            mv "$frontmatter_file" "$temp_file"
        fi
    fi

    # Move temp file to target atomically
    if ! mv "$temp_file" "$target_file"; then
        log_error "Failed to update target file"
        rm -f "$temp_file"
        return 1
    fi

    return 0
}

#==============================================================================
# Main Agent File Update Function
#==============================================================================

update_agent_file() {
    local target_file="$1"
    local agent_name="$2"

    if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then
        log_error "update_agent_file requires target_file and agent_name parameters"
        return 1
    fi

    log_info "Updating $agent_name context file: $target_file"

    local project_name
    project_name=$(basename "$REPO_ROOT")
    local current_date
    current_date=$(date +%Y-%m-%d)

    # Create directory if it doesn't exist
    local target_dir
    target_dir=$(dirname "$target_file")
    if [[ ! -d "$target_dir" ]]; then
        if ! mkdir -p "$target_dir"; then
            log_error "Failed to create directory: $target_dir"
            return 1
        fi
    fi

    if [[ ! -f "$target_file" ]]; then
        # Create new file from template
        local temp_file
        temp_file=$(mktemp) || {
            log_error "Failed to create temporary file"
            return 1
        }

        if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then
            if mv "$temp_file" "$target_file"; then
                log_success "Created new $agent_name context file"
            else
                log_error "Failed to move temporary file to $target_file"
                rm -f "$temp_file"
                return 1
            fi
        else
            log_error "Failed to create new agent file"
            rm -f "$temp_file"
            return 1
        fi
    else
        # Update existing file
        if [[ ! -r "$target_file" ]]; then
            log_error "Cannot read existing file: $target_file"
            return 1
        fi

        if [[ ! -w "$target_file" ]]; then
            log_error "Cannot write to existing file: $target_file"
            return 1
        fi

        if update_existing_agent_file "$target_file" "$current_date"; then
            log_success "Updated existing $agent_name context file"
        else
            log_error "Failed to update existing agent file"
            return 1
        fi
    fi

    return 0
}

#==============================================================================
# Agent Selection and Processing
#==============================================================================

update_specific_agent() {
    local agent_type="$1"

    case "$agent_type" in
        claude)
            update_agent_file "$CLAUDE_FILE" "Claude Code"
            ;;
        gemini)
            update_agent_file "$GEMINI_FILE" "Gemini CLI"
            ;;
        copilot)
            update_agent_file "$COPILOT_FILE" "GitHub Copilot"
            ;;
        cursor-agent)
            update_agent_file "$CURSOR_FILE" "Cursor IDE"
            ;;
        qwen)
            update_agent_file "$QWEN_FILE" "Qwen Code"
            ;;
        opencode)
            update_agent_file "$AGENTS_FILE" "opencode"
            ;;
        codex)
            update_agent_file "$AGENTS_FILE" "Codex CLI"
            ;;
        windsurf)
            update_agent_file "$WINDSURF_FILE" "Windsurf"
            ;;
        kilocode)
            update_agent_file "$KILOCODE_FILE" "Kilo Code"
            ;;
        auggie)
            update_agent_file "$AUGGIE_FILE" "Auggie CLI"
            ;;
        roo)
            update_agent_file "$ROO_FILE" "Roo Code"
            ;;
        codebuddy)
            update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
            ;;
        qodercli)
            update_agent_file "$QODER_FILE" "Qoder CLI"
            ;;
        amp)
            update_agent_file "$AMP_FILE" "Amp"
            ;;
        shai)
            update_agent_file "$SHAI_FILE" "SHAI"
            ;;
        kiro-cli)
            update_agent_file "$KIRO_FILE" "Kiro CLI"
            ;;
        agy)
            update_agent_file "$AGY_FILE" "Antigravity"
            ;;
        bob)
            update_agent_file "$BOB_FILE" "IBM Bob"
            ;;
        generic)
            log_info "Generic agent: no predefined context file. Use the agent-specific update script for your agent."
            ;;
        *)
            log_error "Unknown agent type '$agent_type'"
            log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|kiro-cli|agy|bob|qodercli|generic"
            exit 1
            ;;
    esac
}

update_all_existing_agents() {
    local found_agent=false

    # Check each possible agent file and update if it exists
    if [[ -f "$CLAUDE_FILE" ]]; then
        update_agent_file "$CLAUDE_FILE" "Claude Code"
        found_agent=true
    fi

    if [[ -f "$GEMINI_FILE" ]]; then
        update_agent_file "$GEMINI_FILE" "Gemini CLI"
        found_agent=true
    fi

    if [[ -f "$COPILOT_FILE" ]]; then
        update_agent_file "$COPILOT_FILE" "GitHub Copilot"
        found_agent=true
    fi

    if [[ -f "$CURSOR_FILE" ]]; then
        update_agent_file "$CURSOR_FILE" "Cursor IDE"
        found_agent=true
    fi

    if [[ -f "$QWEN_FILE" ]]; then
        update_agent_file "$QWEN_FILE" "Qwen Code"
        found_agent=true
    fi

    if [[ -f "$AGENTS_FILE" ]]; then
        update_agent_file "$AGENTS_FILE" "Codex/opencode"
        found_agent=true
    fi

    if [[ -f "$WINDSURF_FILE" ]]; then
        update_agent_file "$WINDSURF_FILE" "Windsurf"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$KILOCODE_FILE" ]]; then
|
||||||
|
update_agent_file "$KILOCODE_FILE" "Kilo Code"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$AUGGIE_FILE" ]]; then
|
||||||
|
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$ROO_FILE" ]]; then
|
||||||
|
update_agent_file "$ROO_FILE" "Roo Code"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$CODEBUDDY_FILE" ]]; then
|
||||||
|
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$SHAI_FILE" ]]; then
|
||||||
|
update_agent_file "$SHAI_FILE" "SHAI"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$QODER_FILE" ]]; then
|
||||||
|
update_agent_file "$QODER_FILE" "Qoder CLI"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$KIRO_FILE" ]]; then
|
||||||
|
update_agent_file "$KIRO_FILE" "Kiro CLI"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -f "$AGY_FILE" ]]; then
|
||||||
|
update_agent_file "$AGY_FILE" "Antigravity"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
if [[ -f "$BOB_FILE" ]]; then
|
||||||
|
update_agent_file "$BOB_FILE" "IBM Bob"
|
||||||
|
found_agent=true
|
||||||
|
fi
|
||||||
|
|
||||||
|
# If no agent files exist, create a default Claude file
|
||||||
|
if [[ "$found_agent" == false ]]; then
|
||||||
|
log_info "No existing agent files found, creating default Claude file..."
|
||||||
|
update_agent_file "$CLAUDE_FILE" "Claude Code"
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
print_summary() {
|
||||||
|
echo
|
||||||
|
log_info "Summary of changes:"
|
||||||
|
|
||||||
|
if [[ -n "$NEW_LANG" ]]; then
|
||||||
|
echo " - Added language: $NEW_LANG"
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -n "$NEW_FRAMEWORK" ]]; then
|
||||||
|
echo " - Added framework: $NEW_FRAMEWORK"
|
||||||
|
fi
|
||||||
|
|
||||||
|
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
|
||||||
|
echo " - Added database: $NEW_DB"
|
||||||
|
fi
|
||||||
|
|
||||||
|
echo
|
||||||
|
|
||||||
|
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|kiro-cli|agy|bob|qodercli]"
|
||||||
|
}
|
||||||
|
|
||||||
|
#==============================================================================
|
||||||
|
# Main Execution
|
||||||
|
#==============================================================================
|
||||||
|
|
||||||
|
main() {
|
||||||
|
# Validate environment before proceeding
|
||||||
|
validate_environment
|
||||||
|
|
||||||
|
log_info "=== Updating agent context files for feature $CURRENT_BRANCH ==="
|
||||||
|
|
||||||
|
# Parse the plan file to extract project information
|
||||||
|
if ! parse_plan_data "$NEW_PLAN"; then
|
||||||
|
log_error "Failed to parse plan data"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Process based on agent type argument
|
||||||
|
local success=true
|
||||||
|
|
||||||
|
if [[ -z "$AGENT_TYPE" ]]; then
|
||||||
|
# No specific agent provided - update all existing agent files
|
||||||
|
log_info "No agent specified, updating all existing agent files..."
|
||||||
|
if ! update_all_existing_agents; then
|
||||||
|
success=false
|
||||||
|
fi
|
||||||
|
else
|
||||||
|
# Specific agent provided - update only that agent
|
||||||
|
log_info "Updating specific agent: $AGENT_TYPE"
|
||||||
|
if ! update_specific_agent "$AGENT_TYPE"; then
|
||||||
|
success=false
|
||||||
|
fi
|
||||||
|
fi
|
||||||
|
|
||||||
|
# Print summary
|
||||||
|
print_summary
|
||||||
|
|
||||||
|
if [[ "$success" == true ]]; then
|
||||||
|
log_success "Agent context update completed successfully"
|
||||||
|
exit 0
|
||||||
|
else
|
||||||
|
log_error "Agent context update completed with errors"
|
||||||
|
exit 1
|
||||||
|
fi
|
||||||
|
}
|
||||||
|
|
||||||
|
# Execute main function if script is run directly
|
||||||
|
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
|
||||||
|
main "$@"
|
||||||
|
fi
|
||||||
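The dispatch logic above reduces to a small, reusable bash pattern: a `case` over the agent argument, with an empty argument meaning "update everything" and an unknown value failing loudly. The sketch below is a minimal, self-contained illustration of that pattern — `dispatch_agent` and its echoed messages are stand-ins for the script's real `update_specific_agent` / `update_agent_file` helpers, not part of the script itself.

```shell
#!/usr/bin/env bash
# Minimal sketch of the case-based agent dispatch used in the script above.
# Function and message names here are illustrative, not the script's real API.
dispatch_agent() {
    case "$1" in
        claude) echo "updating CLAUDE.md" ;;
        gemini) echo "updating GEMINI.md" ;;
        "")     echo "updating all existing agent files" ;;
        *)      echo "unknown agent type '$1'" >&2; return 1 ;;
    esac
}

dispatch_agent claude   # prints: updating CLAUDE.md
dispatch_agent          # prints: updating all existing agent files
```

As in the real script, omitting the argument falls through to the update-everything path, while an unrecognized agent type produces an error on stderr and a nonzero status.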
28
.specify/templates/agent-file-template.md
Normal file
@@ -0,0 +1,28 @@
# [PROJECT NAME] Development Guidelines

Auto-generated from all feature plans. Last updated: [DATE]

## Active Technologies

[EXTRACTED FROM ALL PLAN.MD FILES]

## Project Structure

```text
[ACTUAL STRUCTURE FROM PLANS]
```

## Commands

[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES]

## Code Style

[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]

## Recent Changes

[LAST 3 FEATURES AND WHAT THEY ADDED]

<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->
40
.specify/templates/checklist-template.md
Normal file
@@ -0,0 +1,40 @@
# [CHECKLIST TYPE] Checklist: [FEATURE NAME]

**Purpose**: [Brief description of what this checklist covers]
**Created**: [DATE]
**Feature**: [Link to spec.md or relevant documentation]

**Note**: This checklist is generated by the `/speckit.checklist` command based on feature context and requirements.

<!--
============================================================================
IMPORTANT: The checklist items below are SAMPLE ITEMS for illustration only.

The /speckit.checklist command MUST replace these with actual items based on:
- User's specific checklist request
- Feature requirements from spec.md
- Technical context from plan.md
- Implementation details from tasks.md

DO NOT keep these sample items in the generated checklist file.
============================================================================
-->

## [Category 1]

- [ ] CHK001 First checklist item with clear action
- [ ] CHK002 Second checklist item
- [ ] CHK003 Third checklist item

## [Category 2]

- [ ] CHK004 Another category item
- [ ] CHK005 Item with specific criteria
- [ ] CHK006 Final item in this category

## Notes

- Check items off as completed: `[x]`
- Add comments or findings inline
- Link to relevant resources or documentation
- Items are numbered sequentially for easy reference
50
.specify/templates/constitution-template.md
Normal file
@@ -0,0 +1,50 @@
# [PROJECT_NAME] Constitution
<!-- Example: Spec Constitution, TaskFlow Constitution, etc. -->

## Core Principles

### [PRINCIPLE_1_NAME]
<!-- Example: I. Library-First -->
[PRINCIPLE_1_DESCRIPTION]
<!-- Example: Every feature starts as a standalone library; Libraries must be self-contained, independently testable, documented; Clear purpose required - no organizational-only libraries -->

### [PRINCIPLE_2_NAME]
<!-- Example: II. CLI Interface -->
[PRINCIPLE_2_DESCRIPTION]
<!-- Example: Every library exposes functionality via CLI; Text in/out protocol: stdin/args → stdout, errors → stderr; Support JSON + human-readable formats -->

### [PRINCIPLE_3_NAME]
<!-- Example: III. Test-First (NON-NEGOTIABLE) -->
[PRINCIPLE_3_DESCRIPTION]
<!-- Example: TDD mandatory: Tests written → User approved → Tests fail → Then implement; Red-Green-Refactor cycle strictly enforced -->

### [PRINCIPLE_4_NAME]
<!-- Example: IV. Integration Testing -->
[PRINCIPLE_4_DESCRIPTION]
<!-- Example: Focus areas requiring integration tests: New library contract tests, Contract changes, Inter-service communication, Shared schemas -->

### [PRINCIPLE_5_NAME]
<!-- Example: V. Observability, VI. Versioning & Breaking Changes, VII. Simplicity -->
[PRINCIPLE_5_DESCRIPTION]
<!-- Example: Text I/O ensures debuggability; Structured logging required; Or: MAJOR.MINOR.BUILD format; Or: Start simple, YAGNI principles -->

## [SECTION_2_NAME]
<!-- Example: Additional Constraints, Security Requirements, Performance Standards, etc. -->

[SECTION_2_CONTENT]
<!-- Example: Technology stack requirements, compliance standards, deployment policies, etc. -->

## [SECTION_3_NAME]
<!-- Example: Development Workflow, Review Process, Quality Gates, etc. -->

[SECTION_3_CONTENT]
<!-- Example: Code review requirements, testing gates, deployment approval process, etc. -->

## Governance
<!-- Example: Constitution supersedes all other practices; Amendments require documentation, approval, migration plan -->

[GOVERNANCE_RULES]
<!-- Example: All PRs/reviews must verify compliance; Complexity must be justified; Use [GUIDANCE_FILE] for runtime development guidance -->

**Version**: [CONSTITUTION_VERSION] | **Ratified**: [RATIFICATION_DATE] | **Last Amended**: [LAST_AMENDED_DATE]
<!-- Example: Version: 2.1.1 | Ratified: 2025-06-13 | Last Amended: 2025-07-16 -->
104
.specify/templates/plan-template.md
Normal file
@@ -0,0 +1,104 @@
# Implementation Plan: [FEATURE]

**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`

**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/plan-template.md` for the execution workflow.

## Summary

[Extract from feature spec: primary requirement + technical approach from research]

## Technical Context

<!--
ACTION REQUIRED: Replace the content in this section with the technical details
for the project. The structure here is presented in advisory capacity to guide
the iteration process.
-->

**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
**Project Type**: [e.g., library/cli/web-service/mobile-app/compiler/desktop-app or NEEDS CLARIFICATION]
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]

## Constitution Check

*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*

[Gates determined based on constitution file]

## Project Structure

### Documentation (this feature)

```text
specs/[###-feature]/
├── plan.md          # This file (/speckit.plan command output)
├── research.md      # Phase 0 output (/speckit.plan command)
├── data-model.md    # Phase 1 output (/speckit.plan command)
├── quickstart.md    # Phase 1 output (/speckit.plan command)
├── contracts/       # Phase 1 output (/speckit.plan command)
└── tasks.md         # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
```

### Source Code (repository root)
<!--
ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
for this feature. Delete unused options and expand the chosen structure with
real paths (e.g., apps/admin, packages/something). The delivered plan must
not include Option labels.
-->

```text
# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
src/
├── models/
├── services/
├── cli/
└── lib/

tests/
├── contract/
├── integration/
└── unit/

# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
backend/
├── src/
│   ├── models/
│   ├── services/
│   └── api/
└── tests/

frontend/
├── src/
│   ├── components/
│   ├── pages/
│   └── services/
└── tests/

# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
api/
└── [same as backend above]

ios/ or android/
└── [platform-specific structure: feature modules, UI flows, platform tests]
```

**Structure Decision**: [Document the selected structure and reference the real directories captured above]

## Complexity Tracking

> **Fill ONLY if Constitution Check has violations that must be justified**

| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
115
.specify/templates/spec-template.md
Normal file
@@ -0,0 +1,115 @@
# Feature Specification: [FEATURE NAME]

**Feature Branch**: `[###-feature-name]`
**Created**: [DATE]
**Status**: Draft
**Input**: User description: "$ARGUMENTS"

## User Scenarios & Testing *(mandatory)*

<!--
IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
Each user story/journey must be INDEPENDENTLY TESTABLE - meaning if you implement just ONE of them,
you should still have a viable MVP (Minimum Viable Product) that delivers value.

Assign priorities (P1, P2, P3, etc.) to each story, where P1 is the most critical.
Think of each story as a standalone slice of functionality that can be:
- Developed independently
- Tested independently
- Deployed independently
- Demonstrated to users independently
-->

### User Story 1 - [Brief Title] (Priority: P1)

[Describe this user journey in plain language]

**Why this priority**: [Explain the value and why it has this priority level]

**Independent Test**: [Describe how this can be tested independently - e.g., "Can be fully tested by [specific action] and delivers [specific value]"]

**Acceptance Scenarios**:

1. **Given** [initial state], **When** [action], **Then** [expected outcome]
2. **Given** [initial state], **When** [action], **Then** [expected outcome]

---

### User Story 2 - [Brief Title] (Priority: P2)

[Describe this user journey in plain language]

**Why this priority**: [Explain the value and why it has this priority level]

**Independent Test**: [Describe how this can be tested independently]

**Acceptance Scenarios**:

1. **Given** [initial state], **When** [action], **Then** [expected outcome]

---

### User Story 3 - [Brief Title] (Priority: P3)

[Describe this user journey in plain language]

**Why this priority**: [Explain the value and why it has this priority level]

**Independent Test**: [Describe how this can be tested independently]

**Acceptance Scenarios**:

1. **Given** [initial state], **When** [action], **Then** [expected outcome]

---

[Add more user stories as needed, each with an assigned priority]

### Edge Cases

<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right edge cases.
-->

- What happens when [boundary condition]?
- How does system handle [error scenario]?

## Requirements *(mandatory)*

<!--
ACTION REQUIRED: The content in this section represents placeholders.
Fill them out with the right functional requirements.
-->

### Functional Requirements

- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- **FR-005**: System MUST [behavior, e.g., "log all security events"]

*Example of marking unclear requirements:*

- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]

### Key Entities *(include if feature involves data)*

- **[Entity 1]**: [What it represents, key attributes without implementation]
- **[Entity 2]**: [What it represents, relationships to other entities]

## Success Criteria *(mandatory)*

<!--
ACTION REQUIRED: Define measurable success criteria.
These must be technology-agnostic and measurable.
-->

### Measurable Outcomes

- **SC-001**: [Measurable metric, e.g., "Users can complete account creation in under 2 minutes"]
- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]
251
.specify/templates/tasks-template.md
Normal file
@@ -0,0 +1,251 @@
---
description: "Task list template for feature implementation"
---

# Tasks: [FEATURE NAME]

**Input**: Design documents from `/specs/[###-feature-name]/`
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/

**Tests**: The examples below include test tasks. Tests are OPTIONAL - only include them if explicitly requested in the feature specification.

**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.

## Format: `[ID] [P?] [Story] Description`

- **[P]**: Can run in parallel (different files, no dependencies)
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
- Include exact file paths in descriptions

## Path Conventions

- **Single project**: `src/`, `tests/` at repository root
- **Web app**: `backend/src/`, `frontend/src/`
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- Paths shown below assume single project - adjust based on plan.md structure

<!--
============================================================================
IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.

The /speckit.tasks command MUST replace these with actual tasks based on:
- User stories from spec.md (with their priorities P1, P2, P3...)
- Feature requirements from plan.md
- Entities from data-model.md
- Endpoints from contracts/

Tasks MUST be organized by user story so each story can be:
- Implemented independently
- Tested independently
- Delivered as an MVP increment

DO NOT keep these sample tasks in the generated tasks.md file.
============================================================================
-->

## Phase 1: Setup (Shared Infrastructure)

**Purpose**: Project initialization and basic structure

- [ ] T001 Create project structure per implementation plan
- [ ] T002 Initialize [language] project with [framework] dependencies
- [ ] T003 [P] Configure linting and formatting tools

---

## Phase 2: Foundational (Blocking Prerequisites)

**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented

**⚠️ CRITICAL**: No user story work can begin until this phase is complete

Examples of foundational tasks (adjust based on your project):

- [ ] T004 Setup database schema and migrations framework
- [ ] T005 [P] Implement authentication/authorization framework
- [ ] T006 [P] Setup API routing and middleware structure
- [ ] T007 Create base models/entities that all stories depend on
- [ ] T008 Configure error handling and logging infrastructure
- [ ] T009 Setup environment configuration management

**Checkpoint**: Foundation ready - user story implementation can now begin in parallel

---

## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP

**Goal**: [Brief description of what this story delivers]

**Independent Test**: [How to verify this story works on its own]

### Tests for User Story 1 (OPTIONAL - only if tests requested) ⚠️

> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**

- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py

### Implementation for User Story 1

- [ ] T012 [P] [US1] Create [Entity1] model in src/models/[entity1].py
- [ ] T013 [P] [US1] Create [Entity2] model in src/models/[entity2].py
- [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013)
- [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T016 [US1] Add validation and error handling
- [ ] T017 [US1] Add logging for user story 1 operations

**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently

---

## Phase 4: User Story 2 - [Title] (Priority: P2)

**Goal**: [Brief description of what this story delivers]

**Independent Test**: [How to verify this story works on its own]

### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️

- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py

### Implementation for User Story 2

- [ ] T020 [P] [US2] Create [Entity] model in src/models/[entity].py
- [ ] T021 [US2] Implement [Service] in src/services/[service].py
- [ ] T022 [US2] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T023 [US2] Integrate with User Story 1 components (if needed)

**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently

---

## Phase 5: User Story 3 - [Title] (Priority: P3)

**Goal**: [Brief description of what this story delivers]

**Independent Test**: [How to verify this story works on its own]

### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️

- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py

### Implementation for User Story 3

- [ ] T026 [P] [US3] Create [Entity] model in src/models/[entity].py
- [ ] T027 [US3] Implement [Service] in src/services/[service].py
- [ ] T028 [US3] Implement [endpoint/feature] in src/[location]/[file].py

**Checkpoint**: All user stories should now be independently functional

---

[Add more user story phases as needed, following the same pattern]

---

## Phase N: Polish & Cross-Cutting Concerns

**Purpose**: Improvements that affect multiple user stories

- [ ] TXXX [P] Documentation updates in docs/
- [ ] TXXX Code cleanup and refactoring
- [ ] TXXX Performance optimization across all stories
- [ ] TXXX [P] Additional unit tests (if requested) in tests/unit/
- [ ] TXXX Security hardening
- [ ] TXXX Run quickstart.md validation

---

## Dependencies & Execution Order

### Phase Dependencies

- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
  - User stories can then proceed in parallel (if staffed)
  - Or sequentially in priority order (P1 → P2 → P3)
- **Polish (Final Phase)**: Depends on all desired user stories being complete

### User Story Dependencies

- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable
- **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable

### Within Each User Story

- Tests (if included) MUST be written and FAIL before implementation
- Models before services
- Services before endpoints
- Core implementation before integration
- Story complete before moving to next priority

### Parallel Opportunities

- All Setup tasks marked [P] can run in parallel
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
- Once Foundational phase completes, all user stories can start in parallel (if team capacity allows)
- All tests for a user story marked [P] can run in parallel
- Models within a story marked [P] can run in parallel
- Different user stories can be worked on in parallel by different team members

---

## Parallel Example: User Story 1

```bash
# Launch all tests for User Story 1 together (if tests requested):
Task: "Contract test for [endpoint] in tests/contract/test_[name].py"
Task: "Integration test for [user journey] in tests/integration/test_[name].py"

# Launch all models for User Story 1 together:
Task: "Create [Entity1] model in src/models/[entity1].py"
Task: "Create [Entity2] model in src/models/[entity2].py"
```

---

## Implementation Strategy

### MVP First (User Story 1 Only)

1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
3. Complete Phase 3: User Story 1
4. **STOP and VALIDATE**: Test User Story 1 independently
5. Deploy/demo if ready

### Incremental Delivery

1. Complete Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
3. Add User Story 2 → Test independently → Deploy/Demo
4. Add User Story 3 → Test independently → Deploy/Demo
5. Each story adds value without breaking previous stories

### Parallel Team Strategy

With multiple developers:

1. Team completes Setup + Foundational together
2. Once Foundational is done:
   - Developer A: User Story 1
   - Developer B: User Story 2
   - Developer C: User Story 3
3. Stories complete and integrate independently

---

## Notes

- [P] tasks = different files, no dependencies
- [Story] label maps task to specific user story for traceability
- Each user story should be independently completable and testable
- Verify tests fail before implementing
- Commit after each task or logical group
- Stop at any checkpoint to validate story independently
- Avoid: vague tasks, same file conflicts, cross-story dependencies that break independence
CLAUDE.md (88 lines changed)
@@ -2,76 +2,16 @@
 A privacy-focused, self-hostable PWA for event announcements and RSVPs. Alternative to Facebook Events or Telegram groups — reduced to the essentials.

-## Project Statutes
+## Constitution

-These are the non-negotiable principles of this project. Every decision — architectural, technical, or design-related — must be consistent with them.
+Project principles, constraints, tech stack, and governance are defined in `.specify/memory/constitution.md`. That file is the single source of truth — read and follow it.

-### Governance
+## Language

-- The agent works autonomously on implementation tasks.
-- When facing architectural decisions, fundamental design questions, tech stack choices, or dependency selections: advise, propose options, and ask for approval before proceeding.
-- Actively challenge decisions — including the developer's — when there are good reasons to. Don't be a yes-machine.
-- When encountering problems, attempt to solve them independently first. Only escalate when stuck.
-
-### Methodology
-
-- Follow Research → Spec → Test → Implement → Review. No shortcuts.
-- API-first development: the OpenAPI spec (`backend/src/main/resources/openapi/api.yaml`) is the single source of truth for the REST API contract. Define endpoints and schemas in the spec first, then generate backend interfaces and frontend types before writing any implementation code.
-- Never write implementation code without a specification.
-- Always write tests before implementation (TDD). Red → Green → Refactor.
-- Refactoring is permitted freely as long as it does not alter the fundamental architecture.
-- No vibe coding. Every line of code must be intentional and traceable to a requirement.
-- Document integrity: when a decision is revised (pivot), add an addendum with rationale — never rewrite or delete the original decision. Traceability over tidiness.
-- When a setup task or user story is completed, check off its acceptance criteria in the corresponding spec file (`spec/setup-tasks.md` or `spec/userstories.md`) before committing. Progress must be tracked — no silent completions.
-
-### Privacy
-
-- Privacy is a design constraint, not a feature. It shapes every decision from the start.
-- No analytics, no telemetry — not even self-hosted.
-- Never log PII or IP addresses on the server.
-- For every feature, critically evaluate what data is necessary. Only store what is absolutely required for functionality.
-- Never include external dependencies that phone home: no CDNs, no Google Fonts, no tracking-capable libraries.
-
-### Design
-
-- The visual design system is defined in `spec/design-system.md`. All frontend implementation must follow it.
-- Color palette, typography, component patterns, and layout rules are specified there — do not deviate without explicit approval.
-
-### Quality
-
-- KISS and grugbrain. Engineer it properly, but don't over-engineer.
-- No workarounds. Always fix the root cause, even if it takes longer.
-- Address technical debt immediately. Don't let it accumulate.
-- Accessibility is a baseline requirement, not an afterthought.
-
-### Dependencies
-
-- Every dependency is a deliberate, justified decision.
-- A dependency must provide substantial value and a significant portion of its features must actually be used.
-- Dependencies must be actively maintained and open source. Copyleft is fine — the project is GPL-licensed.
-- Never introduce a dependency that phones home or compromises user privacy.
-
-### Language
-
 - Conversation and brainstorming: German.
 - Code, comments, commits, documentation: English — no exceptions.

-### Deployment
+## Build Commands

-- The project provides a Dockerfile. How and where it is deployed is the hoster's responsibility.
-- A docker-compose example in the README is sufficient.
-- Documentation lives in the README. No wiki, no elaborate docs site.
-
-### Tech Stack
-
-- **Backend:** Java 25 (LTS, installed via SDKMAN), Spring Boot 3.5.x, Maven with wrapper (`./mvnw`)
-- **Frontend:** Vue 3, TypeScript, Vue Router, Vite, Vitest, ESLint, Prettier
-- **Node.js:** 24 LTS (for Docker/CI; development tolerates newer versions)
-- **Base package:** `de.fete`, hexagonal architecture (single Maven module, package-level separation)
-- **No Pinia** — Composition API (`ref`/`reactive`) + localStorage is sufficient
-- **No JPA until T-4** — added when database infrastructure is ready
-
-### Build Commands
-
 | What | Command |
 |------------------|----------------------------------------|
@@ -84,14 +24,15 @@ These are the non-negotiable principles of this project. Every decision — arch
 | Backend checkstyle | `cd backend && ./mvnw checkstyle:check` |
 | Backend full verify | `cd backend && ./mvnw verify` |

-### Agent Documentation
+## Agent Documentation

-- Research reports: `docs/agents/research/`
-- Implementation plans: `docs/agents/plan/`
+- Feature specs, research, and plans: `specs/[NNN-feature-name]/` (e.g. `specs/006-create-event/spec.md`, `research.md`, `plan.md`)
+- Cross-cutting research: `.specify/memory/research/`
+- Cross-cutting plans: `.specify/memory/plans/`
 - Agent test reports (browser verification): `.agent-tests/` (gitignored)
 - Use the `browser-interactive-testing` skill (rodney/showboat) for visual verification — this is an agent tool, not manual work.

-### Skills
+## Skills

 The following skills are available and should be used for their respective purposes:
@@ -102,9 +43,16 @@ The following skills are available and should be used for their respective purpo
 | `rpi-implement` | Approved plan ready for execution | Executes approved implementation plans phase by phase with automated and manual verification. |
 | `browser-interactive-testing` | Visual verification of web pages | Headless Chrome testing via rodney/showboat. Use for screenshots, browser automation, and visual test reports. |

-### Ralph Loops
+## Ralph Loops

 - Autonomous work is done via Ralph Loops. See [.claude/rules/ralph-loops.md](.claude/rules/ralph-loops.md) for documentation.
 - The loop runner is `ralph.sh`. Each run lives in its own directory under `.ralph/`.
 - Run directories contain: `instructions.md` (prompt), `chief-wiggum.md` (directives), `answers.md` (human answers), `questions.md` (Ralph's questions), `progress.txt` (iteration log), `meta.md` (metadata), `run.log` (execution log).
-- Project specifications (user stories, setup tasks, personas, etc.) live in `spec/`.
+- Project specifications (user stories, setup tasks, personas, etc.) live in `specs/` (feature dirs) and `.specify/memory/` (cross-cutting docs).
+
+## Active Technologies
+
+- Java 25 (backend), TypeScript 5.9 (frontend) + Spring Boot 3.5.x, Vue 3, Vue Router 5, openapi-fetch, openapi-typescript (007-view-event)
+- PostgreSQL (JPA via Spring Data, Liquibase migrations) (007-view-event)
+
+## Recent Changes
+
+- 007-view-event: Added Java 25 (backend), TypeScript 5.9 (frontend) + Spring Boot 3.5.x, Vue 3, Vue Router 5, openapi-fetch, openapi-typescript
Dockerfile
@@ -10,14 +10,14 @@ COPY backend/src/main/resources/openapi/api.yaml \
 RUN npm run build

 # Stage 2: Build backend with frontend assets baked in
-FROM eclipse-temurin:25-jdk-alpine AS backend-build
+FROM eclipse-temurin:25.0.2_10-jdk-alpine AS backend-build
 WORKDIR /app/backend
 COPY backend/ ./
 COPY --from=frontend-build /app/frontend/dist src/main/resources/static/
 RUN ./mvnw -B -DskipTests -Dcheckstyle.skip -Dspotbugs.skip package

 # Stage 3: Runtime
-FROM eclipse-temurin:25-jre-alpine
+FROM eclipse-temurin:25.0.2_10-jre-alpine
 WORKDIR /app
 COPY --from=backend-build /app/backend/target/*.jar app.jar
 EXPOSE 8080
WORKFLOW.md (new file, 94 lines)
@@ -0,0 +1,94 @@
# Spec-Kit Workflow

How to take a feature from spec to working code.

## The Loop

```
clarify → plan → tasks → analyze → implement
```

Every step produces files in `specs/NNN-feature-name/`. Review after each step before moving on.

## Steps

### 1. Clarify the spec (optional)

```
/speckit.clarify
```

Asks up to 5 targeted questions about gaps or ambiguities in the spec. Answers get folded back into `spec.md`. Skip this if the spec is already solid.

### 2. Create the implementation plan

```
/speckit.plan
```

Reads `spec.md`, checks it against the constitution, and produces:

| File | Content |
|------|---------|
| `plan.md` | Technical design, architecture decisions, contracts |
| `research.md` | Background research (if needed) |
| `data-model.md` | Entity definitions (if new entities are involved) |
| `contracts/` | API contracts (if applicable) |

**Review this carefully.** The plan shapes everything downstream.

### 3. Generate the task list

```
/speckit.tasks
```

Reads the plan and generates `tasks.md` — an ordered, dependency-aware task list. Tasks marked `[P]` can run in parallel.

### 4. Check consistency (optional)

```
/speckit.analyze
```

Cross-checks spec, plan, and tasks for contradictions, gaps, or drift. Non-destructive — only reports, doesn't change files.

### 5. Implement

```
/speckit.implement
```

Executes the tasks from `tasks.md` phase by phase. Follows TDD: writes tests first, then implementation. Stops at checkpoints so you can verify.

## File structure

Each feature lives in its own directory:

```
specs/
  007-view-event/
    spec.md          # What and why (from /speckit.specify or migration)
    plan.md          # How (from /speckit.plan)
    research.md      # Background research (from /speckit.plan)
    data-model.md    # Entity definitions (from /speckit.plan)
    contracts/       # API contracts (from /speckit.plan)
    tasks.md         # Ordered task list (from /speckit.tasks)
```

## Starting a brand new feature

If the feature doesn't have a spec yet:

```
/speckit.specify
```

Describe what you want in plain language. This creates the spec directory and `spec.md` from the template. Then continue with the loop above.

## Tips

- **Don't skip the review.** Each step builds on the previous one. Garbage in, garbage out.
- **The spec is the source of truth.** If something in the plan contradicts the spec, fix the spec first.
- **You can re-run steps.** Changed the spec after planning? Run `/speckit.plan` again.
- **Constitution governs everything.** Principles in `.specify/memory/constitution.md` override ad-hoc decisions.
@@ -1,3 +1,3 @@
 wrapperVersion=3.3.4
 distributionType=only-script
-distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.9.12/apache-maven-3.9.12-bin.zip
+distributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.9.13/apache-maven-3.9.13-bin.zip
@@ -179,7 +179,7 @@
       <plugin>
         <groupId>org.codehaus.mojo</groupId>
         <artifactId>build-helper-maven-plugin</artifactId>
-        <version>3.6.0</version>
+        <version>3.6.1</version>
         <executions>
           <execution>
             <id>add-openapi-sources</id>
@@ -3,9 +3,18 @@ package de.fete.adapter.in.web;
 import de.fete.adapter.in.web.api.EventsApi;
 import de.fete.adapter.in.web.model.CreateEventRequest;
 import de.fete.adapter.in.web.model.CreateEventResponse;
+import de.fete.adapter.in.web.model.GetEventResponse;
+import de.fete.application.service.EventNotFoundException;
+import de.fete.application.service.InvalidTimezoneException;
 import de.fete.domain.model.CreateEventCommand;
 import de.fete.domain.model.Event;
 import de.fete.domain.port.in.CreateEventUseCase;
+import de.fete.domain.port.in.GetEventUseCase;
+import java.time.Clock;
+import java.time.DateTimeException;
+import java.time.LocalDate;
+import java.time.ZoneId;
+import java.util.UUID;
 import org.springframework.http.HttpStatus;
 import org.springframework.http.ResponseEntity;
 import org.springframework.web.bind.annotation.RestController;
@@ -15,19 +24,29 @@ import org.springframework.web.bind.annotation.RestController;
 public class EventController implements EventsApi {

     private final CreateEventUseCase createEventUseCase;
+    private final GetEventUseCase getEventUseCase;
+    private final Clock clock;

-    /** Creates a new controller with the given use case. */
-    public EventController(CreateEventUseCase createEventUseCase) {
+    /** Creates a new controller with the given use cases and clock. */
+    public EventController(
+            CreateEventUseCase createEventUseCase,
+            GetEventUseCase getEventUseCase,
+            Clock clock) {
         this.createEventUseCase = createEventUseCase;
+        this.getEventUseCase = getEventUseCase;
+        this.clock = clock;
     }

     @Override
     public ResponseEntity<CreateEventResponse> createEvent(
             CreateEventRequest request) {
+        ZoneId zoneId = parseTimezone(request.getTimezone());
+
         var command = new CreateEventCommand(
                 request.getTitle(),
                 request.getDescription(),
                 request.getDateTime(),
+                zoneId,
                 request.getLocation(),
                 request.getExpiryDate()
         );
@@ -39,8 +58,36 @@ public class EventController implements EventsApi {
         response.setOrganizerToken(event.getOrganizerToken());
         response.setTitle(event.getTitle());
         response.setDateTime(event.getDateTime());
+        response.setTimezone(event.getTimezone().getId());
         response.setExpiryDate(event.getExpiryDate());

         return ResponseEntity.status(HttpStatus.CREATED).body(response);
     }
+
+    @Override
+    public ResponseEntity<GetEventResponse> getEvent(UUID token) {
+        Event event = getEventUseCase.getByEventToken(token)
+                .orElseThrow(() -> new EventNotFoundException(token));
+
+        var response = new GetEventResponse();
+        response.setEventToken(event.getEventToken());
+        response.setTitle(event.getTitle());
+        response.setDescription(event.getDescription());
+        response.setDateTime(event.getDateTime());
+        response.setTimezone(event.getTimezone().getId());
+        response.setLocation(event.getLocation());
+        response.setAttendeeCount(0);
+        response.setExpired(
+                event.getExpiryDate().isBefore(LocalDate.now(clock)));
+
+        return ResponseEntity.ok(response);
+    }
+
+    private static ZoneId parseTimezone(String timezone) {
+        try {
+            return ZoneId.of(timezone);
+        } catch (DateTimeException e) {
+            throw new InvalidTimezoneException(timezone);
+        }
+    }
 }
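The `parseTimezone` helper in the controller above relies on a `java.time` contract: `ZoneId.of` resolves any valid IANA identifier and throws a `DateTimeException` subtype for anything else. A minimal standalone sketch of that contract (class name and sample inputs are illustrative, not project code):

```java
import java.time.DateTimeException;
import java.time.ZoneId;

// Illustrative sketch of the validation contract behind parseTimezone:
// ZoneId.of() resolves valid IANA ids and throws DateTimeException otherwise.
public class TimezoneParseSketch {
    static String classify(String tz) {
        try {
            return "ok:" + ZoneId.of(tz).getId();
        } catch (DateTimeException e) {
            // In the controller this path becomes InvalidTimezoneException (HTTP 400).
            return "invalid";
        }
    }

    public static void main(String[] args) {
        System.out.println(classify("Europe/Berlin")); // ok:Europe/Berlin
        System.out.println(classify("Not/AZone"));     // invalid
    }
}
```

Catching `DateTimeException` also covers `ZoneRulesException`, its subclass thrown for unknown region ids, so one catch clause handles both malformed and unknown zones.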
@@ -1,6 +1,8 @@
 package de.fete.adapter.in.web;

+import de.fete.application.service.EventNotFoundException;
 import de.fete.application.service.ExpiryDateInPastException;
+import de.fete.application.service.InvalidTimezoneException;
 import java.net.URI;
 import java.util.List;
 import java.util.Map;
@@ -57,6 +59,32 @@ public class GlobalExceptionHandler extends ResponseEntityExceptionHandler {
                 .body(problemDetail);
     }

+    /** Handles event not found. */
+    @ExceptionHandler(EventNotFoundException.class)
+    public ResponseEntity<ProblemDetail> handleEventNotFound(
+            EventNotFoundException ex) {
+        ProblemDetail problemDetail = ProblemDetail.forStatusAndDetail(
+                HttpStatus.NOT_FOUND, ex.getMessage());
+        problemDetail.setTitle("Event Not Found");
+        problemDetail.setType(URI.create("urn:problem-type:event-not-found"));
+        return ResponseEntity.status(HttpStatus.NOT_FOUND)
+                .contentType(MediaType.APPLICATION_PROBLEM_JSON)
+                .body(problemDetail);
+    }
+
+    /** Handles invalid timezone. */
+    @ExceptionHandler(InvalidTimezoneException.class)
+    public ResponseEntity<ProblemDetail> handleInvalidTimezone(
+            InvalidTimezoneException ex) {
+        ProblemDetail problemDetail = ProblemDetail.forStatusAndDetail(
+                HttpStatus.BAD_REQUEST, ex.getMessage());
+        problemDetail.setTitle("Invalid Timezone");
+        problemDetail.setType(URI.create("urn:problem-type:invalid-timezone"));
+        return ResponseEntity.badRequest()
+                .contentType(MediaType.APPLICATION_PROBLEM_JSON)
+                .body(problemDetail);
+    }
+
     /** Catches all unhandled exceptions. */
     @ExceptionHandler(Exception.class)
     public ResponseEntity<ProblemDetail> handleAll(Exception ex) {
@@ -34,6 +34,9 @@ public class EventJpaEntity {
     @Column(name = "date_time", nullable = false)
     private OffsetDateTime dateTime;

+    @Column(nullable = false, length = 64)
+    private String timezone;
+
     @Column(length = 500)
     private String location;
@@ -103,6 +106,16 @@ public class EventJpaEntity {
         this.dateTime = dateTime;
     }

+    /** Returns the IANA timezone name. */
+    public String getTimezone() {
+        return timezone;
+    }
+
+    /** Sets the IANA timezone name. */
+    public void setTimezone(String timezone) {
+        this.timezone = timezone;
+    }
+
     /** Returns the event location. */
     public String getLocation() {
         return location;
@@ -2,6 +2,7 @@ package de.fete.adapter.out.persistence;

 import de.fete.domain.model.Event;
 import de.fete.domain.port.out.EventRepository;
+import java.time.ZoneId;
 import java.util.Optional;
 import java.util.UUID;
 import org.springframework.stereotype.Repository;
@@ -37,6 +38,7 @@ public class EventPersistenceAdapter implements EventRepository {
         entity.setTitle(event.getTitle());
         entity.setDescription(event.getDescription());
         entity.setDateTime(event.getDateTime());
+        entity.setTimezone(event.getTimezone().getId());
         entity.setLocation(event.getLocation());
         entity.setExpiryDate(event.getExpiryDate());
         entity.setCreatedAt(event.getCreatedAt());
@@ -51,6 +53,7 @@ public class EventPersistenceAdapter implements EventRepository {
         event.setTitle(entity.getTitle());
         event.setDescription(entity.getDescription());
         event.setDateTime(entity.getDateTime());
+        event.setTimezone(ZoneId.of(entity.getTimezone()));
         event.setLocation(entity.getLocation());
         event.setExpiryDate(entity.getExpiryDate());
         event.setCreatedAt(entity.getCreatedAt());
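The adapter persists the timezone as its IANA string (`getId()`) and rehydrates it with `ZoneId.of`, so the database only ever sees a short text column. A quick sketch showing the round trip is lossless for IANA ids (the sample zone is illustrative):

```java
import java.time.ZoneId;

// Sketch: ZoneId -> String -> ZoneId round trip, mirroring the persistence
// adapter (store getId() in the column, rehydrate with ZoneId.of()).
public class ZoneRoundTrip {
    public static void main(String[] args) {
        ZoneId original = ZoneId.of("Europe/Berlin");
        String stored = original.getId();              // what goes into the column
        ZoneId restored = ZoneId.of(stored);           // mapping back to the domain
        System.out.println(original.equals(restored)); // true
        System.out.println(stored.length() <= 64);     // fits @Column(length = 64)
    }
}
```

This is also why the JPA column is `length = 64`: IANA identifiers are short region names, comfortably within that bound.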
@@ -0,0 +1,12 @@
package de.fete.application.service;

import java.util.UUID;

/** Thrown when an event cannot be found by its token. */
public class EventNotFoundException extends RuntimeException {

    /** Creates a new exception for the given event token. */
    public EventNotFoundException(UUID eventToken) {
        super("Event not found: " + eventToken);
    }
}
@@ -3,16 +3,18 @@ package de.fete.application.service;
 import de.fete.domain.model.CreateEventCommand;
 import de.fete.domain.model.Event;
 import de.fete.domain.port.in.CreateEventUseCase;
+import de.fete.domain.port.in.GetEventUseCase;
 import de.fete.domain.port.out.EventRepository;
 import java.time.Clock;
 import java.time.LocalDate;
 import java.time.OffsetDateTime;
+import java.util.Optional;
 import java.util.UUID;
 import org.springframework.stereotype.Service;

-/** Application service implementing event creation. */
+/** Application service implementing event creation and retrieval. */
 @Service
-public class EventService implements CreateEventUseCase {
+public class EventService implements CreateEventUseCase, GetEventUseCase {

     private final EventRepository eventRepository;
     private final Clock clock;
@@ -35,10 +37,16 @@ public class EventService implements CreateEventUseCase {
         event.setTitle(command.title());
         event.setDescription(command.description());
         event.setDateTime(command.dateTime());
+        event.setTimezone(command.timezone());
         event.setLocation(command.location());
         event.setExpiryDate(command.expiryDate());
         event.setCreatedAt(OffsetDateTime.now(clock));

         return eventRepository.save(event);
     }
+
+    @Override
+    public Optional<Event> getByEventToken(UUID eventToken) {
+        return eventRepository.findByEventToken(eventToken);
+    }
 }
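`EventService` (and now `EventController`) receive a `Clock` through the constructor instead of calling `now()` statically, so time-dependent logic such as `createdAt` stamping and the expiry check stays deterministic under test. A hedged sketch of the pattern (class name and fixed instant are illustrative):

```java
import java.time.Clock;
import java.time.Instant;
import java.time.LocalDate;
import java.time.ZoneId;

// Sketch: injecting a fixed Clock makes "now"-based logic deterministic,
// which is why the service and controller take Clock as a dependency.
public class ClockSketch {
    static boolean isExpired(LocalDate expiryDate, Clock clock) {
        // Same comparison the controller uses for the "expired" flag.
        return expiryDate.isBefore(LocalDate.now(clock));
    }

    public static void main(String[] args) {
        // Pin "now" to 2025-06-15T12:00Z; every call sees the same date.
        Clock fixed = Clock.fixed(
                Instant.parse("2025-06-15T12:00:00Z"), ZoneId.of("UTC"));
        System.out.println(isExpired(LocalDate.parse("2025-06-14"), fixed)); // true
        System.out.println(isExpired(LocalDate.parse("2025-06-15"), fixed)); // false
    }
}
```

In production Spring would supply `Clock.systemDefaultZone()` (or UTC) as a bean; tests swap in `Clock.fixed(...)` without touching the service code.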
@@ -0,0 +1,10 @@
package de.fete.application.service;

/** Thrown when an invalid IANA timezone ID is provided. */
public class InvalidTimezoneException extends RuntimeException {

    /** Creates a new exception for the given invalid timezone. */
    public InvalidTimezoneException(String timezone) {
        super("Invalid IANA timezone: " + timezone);
    }
}
@@ -2,12 +2,14 @@ package de.fete.domain.model;

 import java.time.LocalDate;
 import java.time.OffsetDateTime;
+import java.time.ZoneId;

 /** Command carrying the data needed to create an event. */
 public record CreateEventCommand(
         String title,
         String description,
         OffsetDateTime dateTime,
+        ZoneId timezone,
         String location,
         LocalDate expiryDate
 ) {}
|||||||
@@ -2,6 +2,7 @@ package de.fete.domain.model;
|
|||||||
|
|
||||||
import java.time.LocalDate;
|
import java.time.LocalDate;
|
||||||
import java.time.OffsetDateTime;
|
import java.time.OffsetDateTime;
|
||||||
|
import java.time.ZoneId;
|
||||||
import java.util.UUID;
|
import java.util.UUID;
|
||||||
|
|
||||||
/** Domain entity representing an event. */
|
/** Domain entity representing an event. */
|
||||||
@@ -13,6 +14,7 @@ public class Event {
|
|||||||
private String title;
|
private String title;
|
||||||
private String description;
|
private String description;
|
||||||
private OffsetDateTime dateTime;
|
private OffsetDateTime dateTime;
|
||||||
|
private ZoneId timezone;
|
||||||
private String location;
|
private String location;
|
||||||
private LocalDate expiryDate;
|
private LocalDate expiryDate;
|
||||||
private OffsetDateTime createdAt;
|
private OffsetDateTime createdAt;
|
||||||
@@ -77,6 +79,16 @@ public class Event {
|
|||||||
this.dateTime = dateTime;
|
this.dateTime = dateTime;
|
||||||
}
|
}
|
||||||
|
|
||||||
|
/** Returns the IANA timezone. */
|
||||||
|
public ZoneId getTimezone() {
|
||||||
|
return timezone;
|
||||||
|
}
|
||||||
|
|
||||||
|
/** Sets the IANA timezone. */
|
||||||
|
public void setTimezone(ZoneId timezone) {
|
||||||
|
this.timezone = timezone;
|
||||||
|
}
|
||||||
|
|
||||||
/** Returns the event location. */
|
/** Returns the event location. */
|
||||||
public String getLocation() {
|
public String getLocation() {
|
||||||
return location;
|
return location;
|
||||||
|
|||||||
@@ -0,0 +1,12 @@
+package de.fete.domain.port.in;
+
+import de.fete.domain.model.Event;
+import java.util.Optional;
+import java.util.UUID;
+
+/** Inbound port for retrieving a public event by its token. */
+public interface GetEventUseCase {
+
+    /** Finds an event by its public event token. */
+    Optional<Event> getByEventToken(UUID eventToken);
+}
@@ -0,0 +1,16 @@
+<?xml version="1.0" encoding="UTF-8"?>
+<databaseChangeLog
+    xmlns="http://www.liquibase.org/xml/ns/dbchangelog"
+    xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
+    xsi:schemaLocation="http://www.liquibase.org/xml/ns/dbchangelog
+        http://www.liquibase.org/xml/ns/dbchangelog/dbchangelog-latest.xsd">
+
+    <changeSet id="002-add-timezone-column" author="fete">
+        <addColumn tableName="events">
+            <column name="timezone" type="varchar(64)" defaultValue="UTC">
+                <constraints nullable="false"/>
+            </column>
+        </addColumn>
+    </changeSet>
+
+</databaseChangeLog>
@@ -7,5 +7,6 @@

     <include file="db/changelog/000-baseline.xml"/>
     <include file="db/changelog/001-create-events-table.xml"/>
+    <include file="db/changelog/002-add-timezone-column.xml"/>

 </databaseChangeLog>
@@ -37,6 +37,34 @@ paths:
               schema:
                 $ref: "#/components/schemas/ValidationProblemDetail"

+  /events/{token}:
+    get:
+      operationId: getEvent
+      summary: Get public event details by token
+      tags:
+        - events
+      parameters:
+        - name: token
+          in: path
+          required: true
+          schema:
+            type: string
+            format: uuid
+          description: Public event token
+      responses:
+        "200":
+          description: Event found
+          content:
+            application/json:
+              schema:
+                $ref: "#/components/schemas/GetEventResponse"
+        "404":
+          description: Event not found
+          content:
+            application/problem+json:
+              schema:
+                $ref: "#/components/schemas/ProblemDetail"
+
 components:
   schemas:
     CreateEventRequest:
@@ -44,6 +72,7 @@ components:
       required:
         - title
        - dateTime
+        - timezone
         - expiryDate
       properties:
         title:
@@ -58,6 +87,10 @@ components:
          format: date-time
          description: Event date and time with UTC offset (ISO 8601)
          example: "2026-03-15T20:00:00+01:00"
+        timezone:
+          type: string
+          description: IANA timezone of the organizer
+          example: "Europe/Berlin"
         location:
           type: string
           maxLength: 500
@@ -74,6 +107,7 @@ components:
         - organizerToken
         - title
         - dateTime
+        - timezone
         - expiryDate
       properties:
         eventToken:
@@ -93,11 +127,61 @@ components:
          type: string
          format: date-time
          example: "2026-03-15T20:00:00+01:00"
+        timezone:
+          type: string
+          description: IANA timezone of the organizer
+          example: "Europe/Berlin"
         expiryDate:
           type: string
           format: date
           example: "2026-06-15"
+
+    GetEventResponse:
+      type: object
+      required:
+        - eventToken
+        - title
+        - dateTime
+        - timezone
+        - attendeeCount
+        - expired
+      properties:
+        eventToken:
+          type: string
+          format: uuid
+          description: Public event token
+          example: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
+        title:
+          type: string
+          description: Event title
+          example: "Summer BBQ"
+        description:
+          type: string
+          description: Event description (absent if not set)
+          example: "Bring your own drinks!"
+        dateTime:
+          type: string
+          format: date-time
+          description: Event date/time with organizer's UTC offset
+          example: "2026-03-15T20:00:00+01:00"
+        timezone:
+          type: string
+          description: IANA timezone name of the organizer
+          example: "Europe/Berlin"
+        location:
+          type: string
+          description: Event location (absent if not set)
+          example: "Central Park, NYC"
+        attendeeCount:
+          type: integer
+          minimum: 0
+          description: Number of confirmed attendees (attending=true)
+          example: 12
+        expired:
+          type: boolean
+          description: Whether the event's expiry date has passed
+          example: false

     ProblemDetail:
       type: object
       properties:
@@ -1,12 +1,22 @@
 package de.fete.adapter.in.web;

+import static org.assertj.core.api.Assertions.assertThat;
+import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.get;
 import static org.springframework.test.web.servlet.request.MockMvcRequestBuilders.post;
 import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.content;
 import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.jsonPath;
 import static org.springframework.test.web.servlet.result.MockMvcResultMatchers.status;

+import com.fasterxml.jackson.databind.ObjectMapper;
 import de.fete.TestcontainersConfig;
+import de.fete.adapter.in.web.model.CreateEventRequest;
+import de.fete.adapter.in.web.model.CreateEventResponse;
+import de.fete.adapter.out.persistence.EventJpaEntity;
+import de.fete.adapter.out.persistence.EventJpaRepository;
 import java.time.LocalDate;
+import java.time.OffsetDateTime;
+import java.time.ZoneOffset;
+import java.util.UUID;
 import org.junit.jupiter.api.Test;
 import org.springframework.beans.factory.annotation.Autowired;
 import org.springframework.boot.test.autoconfigure.web.servlet.AutoConfigureMockMvc;
@@ -23,63 +33,89 @@ class EventControllerIntegrationTest {
     @Autowired
     private MockMvc mockMvc;

+    @Autowired
+    private ObjectMapper objectMapper;
+
+    @Autowired
+    private EventJpaRepository jpaRepository;
+
+    // --- Create Event tests ---
+
     @Test
     void createEventWithValidBody() throws Exception {
-        String body =
-            """
-            {
-                "title": "Birthday Party",
-                "description": "Come celebrate!",
-                "dateTime": "2026-06-15T20:00:00+02:00",
-                "location": "Berlin",
-                "expiryDate": "%s"
-            }
-            """.formatted(LocalDate.now().plusDays(30));
+        var request = new CreateEventRequest()
+            .title("Birthday Party")
+            .description("Come celebrate!")
+            .dateTime(OffsetDateTime.of(2026, 6, 15, 20, 0, 0, 0, ZoneOffset.ofHours(2)))
+            .timezone("Europe/Berlin")
+            .location("Berlin")
+            .expiryDate(LocalDate.now().plusDays(30));

-        mockMvc.perform(post("/api/events")
+        var result = mockMvc.perform(post("/api/events")
                 .contentType(MediaType.APPLICATION_JSON)
-                .content(body))
+                .content(objectMapper.writeValueAsString(request)))
             .andExpect(status().isCreated())
             .andExpect(jsonPath("$.eventToken").isNotEmpty())
             .andExpect(jsonPath("$.organizerToken").isNotEmpty())
             .andExpect(jsonPath("$.title").value("Birthday Party"))
+            .andExpect(jsonPath("$.timezone").value("Europe/Berlin"))
             .andExpect(jsonPath("$.dateTime").isNotEmpty())
-            .andExpect(jsonPath("$.expiryDate").isNotEmpty());
+            .andExpect(jsonPath("$.expiryDate").isNotEmpty())
+            .andReturn();
+
+        var response = objectMapper.readValue(
+            result.getResponse().getContentAsString(), CreateEventResponse.class);
+
+        EventJpaEntity persisted = jpaRepository
+            .findByEventToken(response.getEventToken()).orElseThrow();
+        assertThat(persisted.getTitle()).isEqualTo("Birthday Party");
+        assertThat(persisted.getDescription()).isEqualTo("Come celebrate!");
+        assertThat(persisted.getTimezone()).isEqualTo("Europe/Berlin");
+        assertThat(persisted.getLocation()).isEqualTo("Berlin");
+        assertThat(persisted.getExpiryDate()).isEqualTo(request.getExpiryDate());
+        assertThat(persisted.getDateTime().toInstant())
+            .isEqualTo(request.getDateTime().toInstant());
+        assertThat(persisted.getOrganizerToken()).isNotNull();
+        assertThat(persisted.getCreatedAt()).isNotNull();
     }

     @Test
     void createEventWithOptionalFieldsNull() throws Exception {
-        String body =
-            """
-            {
-                "title": "Minimal Event",
-                "dateTime": "2026-06-15T20:00:00+02:00",
-                "expiryDate": "%s"
-            }
-            """.formatted(LocalDate.now().plusDays(30));
+        var request = new CreateEventRequest()
+            .title("Minimal Event")
+            .dateTime(OffsetDateTime.of(2026, 6, 15, 20, 0, 0, 0, ZoneOffset.ofHours(2)))
+            .timezone("UTC")
+            .expiryDate(LocalDate.now().plusDays(30));

-        mockMvc.perform(post("/api/events")
+        var result = mockMvc.perform(post("/api/events")
                 .contentType(MediaType.APPLICATION_JSON)
-                .content(body))
+                .content(objectMapper.writeValueAsString(request)))
             .andExpect(status().isCreated())
             .andExpect(jsonPath("$.eventToken").isNotEmpty())
             .andExpect(jsonPath("$.organizerToken").isNotEmpty())
-            .andExpect(jsonPath("$.title").value("Minimal Event"));
+            .andExpect(jsonPath("$.title").value("Minimal Event"))
+            .andReturn();
+
+        var response = objectMapper.readValue(
+            result.getResponse().getContentAsString(), CreateEventResponse.class);
+
+        EventJpaEntity persisted = jpaRepository
+            .findByEventToken(response.getEventToken()).orElseThrow();
+        assertThat(persisted.getTitle()).isEqualTo("Minimal Event");
+        assertThat(persisted.getDescription()).isNull();
+        assertThat(persisted.getLocation()).isNull();
     }

     @Test
     void createEventMissingTitleReturns400() throws Exception {
-        String body =
-            """
-            {
-                "dateTime": "2026-06-15T20:00:00+02:00",
-                "expiryDate": "%s"
-            }
-            """.formatted(LocalDate.now().plusDays(30));
+        var request = new CreateEventRequest()
+            .dateTime(OffsetDateTime.of(2026, 6, 15, 20, 0, 0, 0, ZoneOffset.ofHours(2)))
+            .timezone("Europe/Berlin")
+            .expiryDate(LocalDate.now().plusDays(30));

         mockMvc.perform(post("/api/events")
                 .contentType(MediaType.APPLICATION_JSON)
-                .content(body))
+                .content(objectMapper.writeValueAsString(request)))
             .andExpect(status().isBadRequest())
             .andExpect(content().contentTypeCompatibleWith("application/problem+json"))
             .andExpect(jsonPath("$.title").value("Validation Failed"))
@@ -88,17 +124,14 @@ class EventControllerIntegrationTest {

     @Test
     void createEventMissingDateTimeReturns400() throws Exception {
-        String body =
-            """
-            {
-                "title": "No Date",
-                "expiryDate": "%s"
-            }
-            """.formatted(LocalDate.now().plusDays(30));
+        var request = new CreateEventRequest()
+            .title("No Date")
+            .timezone("Europe/Berlin")
+            .expiryDate(LocalDate.now().plusDays(30));

         mockMvc.perform(post("/api/events")
                 .contentType(MediaType.APPLICATION_JSON)
-                .content(body))
+                .content(objectMapper.writeValueAsString(request)))
             .andExpect(status().isBadRequest())
             .andExpect(content().contentTypeCompatibleWith("application/problem+json"))
             .andExpect(jsonPath("$.fieldErrors").isArray());
@@ -106,17 +139,14 @@ class EventControllerIntegrationTest {

     @Test
     void createEventMissingExpiryDateReturns400() throws Exception {
-        String body =
-            """
-            {
-                "title": "No Expiry",
-                "dateTime": "2026-06-15T20:00:00+02:00"
-            }
-            """;
+        var request = new CreateEventRequest()
+            .title("No Expiry")
+            .dateTime(OffsetDateTime.of(2026, 6, 15, 20, 0, 0, 0, ZoneOffset.ofHours(2)))
+            .timezone("Europe/Berlin");

         mockMvc.perform(post("/api/events")
                 .contentType(MediaType.APPLICATION_JSON)
-                .content(body))
+                .content(objectMapper.writeValueAsString(request)))
             .andExpect(status().isBadRequest())
             .andExpect(content().contentTypeCompatibleWith("application/problem+json"))
             .andExpect(jsonPath("$.fieldErrors").isArray());
@@ -124,18 +154,15 @@ class EventControllerIntegrationTest {

     @Test
     void createEventExpiryDateInPastReturns400() throws Exception {
-        String body =
-            """
-            {
-                "title": "Past Expiry",
-                "dateTime": "2026-06-15T20:00:00+02:00",
-                "expiryDate": "2025-01-01"
-            }
-            """;
+        var request = new CreateEventRequest()
+            .title("Past Expiry")
+            .dateTime(OffsetDateTime.of(2026, 6, 15, 20, 0, 0, 0, ZoneOffset.ofHours(2)))
+            .timezone("Europe/Berlin")
+            .expiryDate(LocalDate.of(2025, 1, 1));

         mockMvc.perform(post("/api/events")
                 .contentType(MediaType.APPLICATION_JSON)
-                .content(body))
+                .content(objectMapper.writeValueAsString(request)))
             .andExpect(status().isBadRequest())
             .andExpect(content().contentTypeCompatibleWith("application/problem+json"))
             .andExpect(jsonPath("$.type").value("urn:problem-type:expiry-date-in-past"));
@@ -143,18 +170,15 @@ class EventControllerIntegrationTest {

     @Test
     void createEventExpiryDateTodayReturns400() throws Exception {
-        String body =
-            """
-            {
-                "title": "Today Expiry",
-                "dateTime": "2026-06-15T20:00:00+02:00",
-                "expiryDate": "%s"
-            }
-            """.formatted(LocalDate.now());
+        var request = new CreateEventRequest()
+            .title("Today Expiry")
+            .dateTime(OffsetDateTime.of(2026, 6, 15, 20, 0, 0, 0, ZoneOffset.ofHours(2)))
+            .timezone("Europe/Berlin")
+            .expiryDate(LocalDate.now());

         mockMvc.perform(post("/api/events")
                 .contentType(MediaType.APPLICATION_JSON)
-                .content(body))
+                .content(objectMapper.writeValueAsString(request)))
             .andExpect(status().isBadRequest())
             .andExpect(content().contentTypeCompatibleWith("application/problem+json"))
             .andExpect(jsonPath("$.type").value("urn:problem-type:expiry-date-in-past"));
@@ -162,19 +186,101 @@ class EventControllerIntegrationTest {

     @Test
     void errorResponseContentTypeIsProblemJson() throws Exception {
-        String body =
-            """
-            {
-                "title": "",
-                "dateTime": "2026-06-15T20:00:00+02:00",
-                "expiryDate": "%s"
-            }
-            """.formatted(LocalDate.now().plusDays(30));
+        var request = new CreateEventRequest()
+            .title("")
+            .dateTime(OffsetDateTime.of(2026, 6, 15, 20, 0, 0, 0, ZoneOffset.ofHours(2)))
+            .timezone("Europe/Berlin")
+            .expiryDate(LocalDate.now().plusDays(30));

         mockMvc.perform(post("/api/events")
                 .contentType(MediaType.APPLICATION_JSON)
-                .content(body))
+                .content(objectMapper.writeValueAsString(request)))
             .andExpect(status().isBadRequest())
             .andExpect(content().contentTypeCompatibleWith("application/problem+json"));
     }
+
+    @Test
+    void createEventWithInvalidTimezoneReturns400() throws Exception {
+        var request = new CreateEventRequest()
+            .title("Bad TZ")
+            .dateTime(OffsetDateTime.of(2026, 6, 15, 20, 0, 0, 0, ZoneOffset.ofHours(2)))
+            .timezone("Not/A/Zone")
+            .expiryDate(LocalDate.now().plusDays(30));
+
+        mockMvc.perform(post("/api/events")
+                .contentType(MediaType.APPLICATION_JSON)
+                .content(objectMapper.writeValueAsString(request)))
+            .andExpect(status().isBadRequest())
+            .andExpect(content().contentTypeCompatibleWith("application/problem+json"))
+            .andExpect(jsonPath("$.type").value("urn:problem-type:invalid-timezone"));
+    }
+
+    // --- GET /events/{token} tests ---
+
+    @Test
+    void getEventReturnsFullResponse() throws Exception {
+        EventJpaEntity entity = seedEvent(
+            "Summer BBQ", "Bring drinks!", "Europe/Berlin",
+            "Central Park", LocalDate.now().plusDays(30));
+
+        mockMvc.perform(get("/api/events/" + entity.getEventToken()))
+            .andExpect(status().isOk())
+            .andExpect(jsonPath("$.eventToken").value(entity.getEventToken().toString()))
+            .andExpect(jsonPath("$.title").value("Summer BBQ"))
+            .andExpect(jsonPath("$.description").value("Bring drinks!"))
+            .andExpect(jsonPath("$.timezone").value("Europe/Berlin"))
+            .andExpect(jsonPath("$.location").value("Central Park"))
+            .andExpect(jsonPath("$.attendeeCount").value(0))
+            .andExpect(jsonPath("$.expired").value(false))
+            .andExpect(jsonPath("$.dateTime").isNotEmpty());
+    }
+
+    @Test
+    void getEventWithOptionalFieldsAbsent() throws Exception {
+        EventJpaEntity entity = seedEvent(
+            "Minimal", null, "UTC", null, LocalDate.now().plusDays(30));
+
+        mockMvc.perform(get("/api/events/" + entity.getEventToken()))
+            .andExpect(status().isOk())
+            .andExpect(jsonPath("$.title").value("Minimal"))
+            .andExpect(jsonPath("$.description").doesNotExist())
+            .andExpect(jsonPath("$.location").doesNotExist())
+            .andExpect(jsonPath("$.attendeeCount").value(0));
+    }
+
+    @Test
+    void getEventNotFoundReturns404() throws Exception {
+        mockMvc.perform(get("/api/events/" + UUID.randomUUID()))
+            .andExpect(status().isNotFound())
+            .andExpect(content().contentTypeCompatibleWith("application/problem+json"))
+            .andExpect(jsonPath("$.type").value("urn:problem-type:event-not-found"));
+    }
+
+    @Test
+    void getExpiredEventReturnsExpiredTrue() throws Exception {
+        EventJpaEntity entity = seedEvent(
+            "Past Event", "It happened", "Europe/Berlin",
+            "Old Venue", LocalDate.now().minusDays(1));
+
+        mockMvc.perform(get("/api/events/" + entity.getEventToken()))
+            .andExpect(status().isOk())
+            .andExpect(jsonPath("$.title").value("Past Event"))
+            .andExpect(jsonPath("$.expired").value(true));
+    }
+
+    private EventJpaEntity seedEvent(
+            String title, String description, String timezone,
+            String location, LocalDate expiryDate) {
+        var entity = new EventJpaEntity();
+        entity.setEventToken(UUID.randomUUID());
+        entity.setOrganizerToken(UUID.randomUUID());
+        entity.setTitle(title);
+        entity.setDescription(description);
+        entity.setDateTime(OffsetDateTime.of(2026, 6, 15, 20, 0, 0, 0, ZoneOffset.ofHours(2)));
+        entity.setTimezone(timezone);
+        entity.setLocation(location);
+        entity.setExpiryDate(expiryDate);
+        entity.setCreatedAt(OffsetDateTime.now());
+        return jpaRepository.save(entity);
+    }
 }
@@ -7,6 +7,7 @@ import de.fete.domain.model.Event;
 import de.fete.domain.port.out.EventRepository;
 import java.time.LocalDate;
 import java.time.OffsetDateTime;
+import java.time.ZoneId;
 import java.time.ZoneOffset;
 import java.util.Optional;
 import java.util.UUID;
@@ -65,6 +66,7 @@ class EventPersistenceAdapterTest {
         event.setTitle("Full Event");
         event.setDescription("A detailed description");
         event.setDateTime(dateTime);
+        event.setTimezone(ZoneId.of("Europe/Berlin"));
         event.setLocation("Berlin, Germany");
         event.setExpiryDate(expiryDate);
         event.setCreatedAt(createdAt);
@@ -77,6 +79,7 @@ class EventPersistenceAdapterTest {
         assertThat(found.getTitle()).isEqualTo("Full Event");
         assertThat(found.getDescription()).isEqualTo("A detailed description");
         assertThat(found.getDateTime().toInstant()).isEqualTo(dateTime.toInstant());
+        assertThat(found.getTimezone()).isEqualTo(ZoneId.of("Europe/Berlin"));
         assertThat(found.getLocation()).isEqualTo("Berlin, Germany");
         assertThat(found.getExpiryDate()).isEqualTo(expiryDate);
         assertThat(found.getCreatedAt().toInstant()).isEqualTo(createdAt.toInstant());
@@ -89,6 +92,7 @@ class EventPersistenceAdapterTest {
         event.setTitle("Test Event");
         event.setDescription("Test description");
         event.setDateTime(OffsetDateTime.now().plusDays(7));
+        event.setTimezone(ZoneId.of("Europe/Berlin"));
         event.setLocation("Somewhere");
         event.setExpiryDate(LocalDate.now().plusDays(30));
         event.setCreatedAt(OffsetDateTime.now());
@@ -16,6 +16,8 @@ import java.time.LocalDate;
 import java.time.OffsetDateTime;
 import java.time.ZoneId;
 import java.time.ZoneOffset;
+import java.util.Optional;
+import java.util.UUID;
 import org.junit.jupiter.api.BeforeEach;
 import org.junit.jupiter.api.Test;
 import org.junit.jupiter.api.extension.ExtendWith;
@@ -50,6 +52,7 @@ class EventServiceTest {
             "Birthday Party",
             "Come celebrate!",
             OffsetDateTime.of(2026, 6, 15, 20, 0, 0, 0, ZoneOffset.ofHours(2)),
+            ZoneId.of("Europe/Berlin"),
             "Berlin",
             LocalDate.of(2026, 7, 15)
         );
@@ -58,28 +61,13 @@ class EventServiceTest {

         assertThat(result.getTitle()).isEqualTo("Birthday Party");
         assertThat(result.getDescription()).isEqualTo("Come celebrate!");
+        assertThat(result.getTimezone()).isEqualTo(ZoneId.of("Europe/Berlin"));
         assertThat(result.getLocation()).isEqualTo("Berlin");
         assertThat(result.getEventToken()).isNotNull();
         assertThat(result.getOrganizerToken()).isNotNull();
         assertThat(result.getCreatedAt()).isEqualTo(OffsetDateTime.now(FIXED_CLOCK));
     }

-    @Test
-    void eventTokenAndOrganizerTokenAreDifferent() {
-        when(eventRepository.save(any(Event.class)))
-            .thenAnswer(invocation -> invocation.getArgument(0));
-
-        var command = new CreateEventCommand(
-            "Test", null,
-            OffsetDateTime.now(FIXED_CLOCK).plusDays(1), null,
-            LocalDate.now(FIXED_CLOCK).plusDays(30)
-        );
-
-        Event result = eventService.createEvent(command);
-
-        assertThat(result.getEventToken()).isNotEqualTo(result.getOrganizerToken());
-    }
-
     @Test
     void repositorySaveCalledExactlyOnce() {
         when(eventRepository.save(any(Event.class)))
@@ -87,7 +75,7 @@ class EventServiceTest {

         var command = new CreateEventCommand(
             "Test", null,
-            OffsetDateTime.now(FIXED_CLOCK).plusDays(1), null,
+            OffsetDateTime.now(FIXED_CLOCK).plusDays(1), ZONE, null,
             LocalDate.now(FIXED_CLOCK).plusDays(30)
         );

@@ -102,7 +90,7 @@ class EventServiceTest {
     void expiryDateTodayThrowsException() {
         var command = new CreateEventCommand(
             "Test", null,
-            OffsetDateTime.now(FIXED_CLOCK).plusDays(1), null,
+            OffsetDateTime.now(FIXED_CLOCK).plusDays(1), ZONE, null,
             LocalDate.now(FIXED_CLOCK)
         );

@@ -114,7 +102,7 @@ class EventServiceTest {
     void expiryDateInPastThrowsException() {
         var command = new CreateEventCommand(
             "Test", null,
-            OffsetDateTime.now(FIXED_CLOCK).plusDays(1), null,
+            OffsetDateTime.now(FIXED_CLOCK).plusDays(1), ZONE, null,
             LocalDate.now(FIXED_CLOCK).minusDays(5)
         );

@@ -129,7 +117,7 @@ class EventServiceTest {

         var command = new CreateEventCommand(
             "Test", null,
-            OffsetDateTime.now(FIXED_CLOCK).plusDays(1), null,
+            OffsetDateTime.now(FIXED_CLOCK).plusDays(1), ZONE, null,
             LocalDate.now(FIXED_CLOCK).plusDays(1)
         );

@@ -137,4 +125,51 @@ class EventServiceTest {

         assertThat(result.getExpiryDate()).isEqualTo(LocalDate.of(2026, 3, 6));
     }
+
+    // --- GetEventUseCase tests (T004) ---
+
+    @Test
+    void getByEventTokenReturnsEvent() {
+        UUID token = UUID.randomUUID();
+        var event = new Event();
+        event.setEventToken(token);
+        event.setTitle("Found Event");
+        when(eventRepository.findByEventToken(token))
+            .thenReturn(Optional.of(event));
+
+        Optional<Event> result = eventService.getByEventToken(token);
+
+        assertThat(result).isPresent();
+        assertThat(result.get().getTitle()).isEqualTo("Found Event");
|
||||||
|
}
|
||||||
|
|
||||||
|
@Test
|
||||||
|
void getByEventTokenReturnsEmptyForUnknownToken() {
|
||||||
|
UUID token = UUID.randomUUID();
|
||||||
|
when(eventRepository.findByEventToken(token))
|
||||||
|
.thenReturn(Optional.empty());
|
||||||
|
|
||||||
|
Optional<Event> result = eventService.getByEventToken(token);
|
||||||
|
|
||||||
|
assertThat(result).isEmpty();
|
||||||
|
}
|
||||||
|
|
||||||
|
// --- Timezone validation tests (T006) ---
|
||||||
|
|
||||||
|
@Test
|
||||||
|
void createEventWithValidTimezoneSucceeds() {
|
||||||
|
when(eventRepository.save(any(Event.class)))
|
||||||
|
.thenAnswer(invocation -> invocation.getArgument(0));
|
||||||
|
|
||||||
|
var command = new CreateEventCommand(
|
||||||
|
"Test", null,
|
||||||
|
OffsetDateTime.now(FIXED_CLOCK).plusDays(1),
|
||||||
|
ZoneId.of("America/New_York"), null,
|
||||||
|
LocalDate.now(FIXED_CLOCK).plusDays(30)
|
||||||
|
);
|
||||||
|
|
||||||
|
Event result = eventService.createEvent(command);
|
||||||
|
|
||||||
|
assertThat(result.getTimezone()).isEqualTo(ZoneId.of("America/New_York"));
|
||||||
|
}
|
||||||
}
|
}
|
||||||
|
|||||||
@@ -12,7 +12,7 @@ test.describe('US-1: Create an event', () => {
     await expect(page.getByText('Expiry date is required.')).toBeVisible()
   })
 
-  test('creates an event and redirects to stub page', async ({ page }) => {
+  test('creates an event and redirects to event detail page', async ({ page }) => {
    await page.goto('/create')
 
     await page.getByLabel(/title/i).fill('Summer BBQ')
@@ -24,7 +24,6 @@ test.describe('US-1: Create an event', () => {
     await page.getByRole('button', { name: /create event/i }).click()
 
     await expect(page).toHaveURL(/\/events\/.+/)
-    await expect(page.getByText('Event created!')).toBeVisible()
   })
 
   test('stores event data in localStorage after creation', async ({ page }) => {
frontend/e2e/event-view.spec.ts (new file, 127 lines)
@@ -0,0 +1,127 @@
+import { http, HttpResponse } from 'msw'
+import { test, expect } from './msw-setup'
+
+const fullEvent = {
+  eventToken: 'a1b2c3d4-e5f6-7890-abcd-ef1234567890',
+  title: 'Summer BBQ',
+  description: 'Bring your own drinks!',
+  dateTime: '2026-03-15T20:00:00+01:00',
+  timezone: 'Europe/Berlin',
+  location: 'Central Park, NYC',
+  attendeeCount: 12,
+  expired: false,
+}
+
+test.describe('US-1: View event details', () => {
+  test('displays all event fields for a valid event', async ({ page, network }) => {
+    network.use(
+      http.get('*/api/events/:token', () => {
+        return HttpResponse.json(fullEvent)
+      }),
+    )
+
+    await page.goto(`/events/${fullEvent.eventToken}`)
+
+    await expect(page.getByRole('heading', { name: 'Summer BBQ' })).toBeVisible()
+    await expect(page.getByText('Bring your own drinks!')).toBeVisible()
+    await expect(page.getByText('Central Park, NYC')).toBeVisible()
+    await expect(page.getByText('12')).toBeVisible()
+    await expect(page.getByText('Europe/Berlin')).toBeVisible()
+    await expect(page.getByText('2026')).toBeVisible()
+  })
+
+  test('does not load external resources', async ({ page, network }) => {
+    const externalRequests: string[] = []
+    page.on('request', (req) => {
+      const url = new URL(req.url())
+      if (!['localhost', '127.0.0.1'].includes(url.hostname)) {
+        externalRequests.push(req.url())
+      }
+    })
+
+    network.use(
+      http.get('*/api/events/:token', () => {
+        return HttpResponse.json(fullEvent)
+      }),
+    )
+
+    await page.goto(`/events/${fullEvent.eventToken}`)
+    await expect(page.getByRole('heading', { name: 'Summer BBQ' })).toBeVisible()
+
+    expect(externalRequests).toEqual([])
+  })
+})
+
+test.describe('US-2: View expired event', () => {
+  test('shows "event has ended" banner for expired event', async ({ page, network }) => {
+    network.use(
+      http.get('*/api/events/:token', () => {
+        return HttpResponse.json({ ...fullEvent, expired: true })
+      }),
+    )
+
+    await page.goto(`/events/${fullEvent.eventToken}`)
+
+    await expect(page.getByText('This event has ended.')).toBeVisible()
+  })
+})
+
+test.describe('US-4: Event not found', () => {
+  test('shows "event not found" for unknown token', async ({ page, network }) => {
+    network.use(
+      http.get('*/api/events/:token', () => {
+        return HttpResponse.json(
+          { type: 'urn:problem-type:event-not-found', title: 'Event Not Found', status: 404, detail: 'Event not found.' },
+          { status: 404, headers: { 'Content-Type': 'application/problem+json' } },
+        )
+      }),
+    )
+
+    await page.goto('/events/00000000-0000-0000-0000-000000000000')
+
+    await expect(page.getByText('Event not found.')).toBeVisible()
+    // No event data visible
+    await expect(page.locator('.detail__title')).not.toBeVisible()
+  })
+})
+
+test.describe('Server error', () => {
+  test('shows error message and retry button on 500', async ({ page, network }) => {
+    network.use(
+      http.get('*/api/events/:token', () => {
+        return HttpResponse.json(
+          { type: 'about:blank', title: 'Internal Server Error', status: 500, detail: 'An unexpected error occurred.' },
+          { status: 500, headers: { 'Content-Type': 'application/problem+json' } },
+        )
+      }),
+    )
+
+    await page.goto(`/events/${fullEvent.eventToken}`)
+
+    await expect(page.getByText('Something went wrong.')).toBeVisible()
+    await expect(page.getByRole('button', { name: 'Retry' })).toBeVisible()
+  })
+
+  test('retry button re-fetches the event', async ({ page, network }) => {
+    let callCount = 0
+    network.use(
+      http.get('*/api/events/:token', () => {
+        callCount++
+        if (callCount === 1) {
+          return HttpResponse.json(
+            { type: 'about:blank', title: 'Error', status: 500 },
+            { status: 500, headers: { 'Content-Type': 'application/problem+json' } },
+          )
+        }
+        return HttpResponse.json(fullEvent)
+      }),
+    )
+
+    await page.goto(`/events/${fullEvent.eventToken}`)
+    await expect(page.getByText('Something went wrong.')).toBeVisible()
+
+    await page.getByRole('button', { name: 'Retry' }).click()
+
+    await expect(page.getByRole('heading', { name: 'Summer BBQ' })).toBeVisible()
+  })
+})
frontend/package-lock.json (generated, 184 lines changed)
@@ -23,17 +23,17 @@
         "@vitest/eslint-plugin": "^1.6.9",
         "@vue/eslint-config-typescript": "^14.7.0",
         "@vue/test-utils": "^2.4.6",
-        "@vue/tsconfig": "^0.8.1",
+        "@vue/tsconfig": "^0.9.0",
         "eslint": "^10.0.2",
         "eslint-config-prettier": "^10.1.8",
-        "eslint-plugin-oxlint": "~1.50.0",
+        "eslint-plugin-oxlint": "~1.51.0",
         "eslint-plugin-vue": "~10.8.0",
         "jiti": "^2.6.1",
         "jsdom": "^28.1.0",
         "msw": "^2.12.10",
         "npm-run-all2": "^8.0.4",
         "openapi-typescript": "^7.13.0",
-        "oxlint": "~1.50.0",
+        "oxlint": "~1.51.0",
         "prettier": "3.8.1",
         "typescript": "~5.9.3",
         "vite": "^7.3.1",
@@ -1727,9 +1727,9 @@
       "license": "MIT"
     },
     "node_modules/@oxlint/binding-android-arm-eabi": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-android-arm-eabi/-/binding-android-arm-eabi-1.50.0.tgz",
-      "integrity": "sha512-G7MRGk/6NCe+L8ntonRdZP7IkBfEpiZ/he3buLK6JkLgMHgJShXZ+BeOwADmspXez7U7F7L1Anf4xLSkLHiGTg==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-android-arm-eabi/-/binding-android-arm-eabi-1.51.0.tgz",
+      "integrity": "sha512-jJYIqbx4sX+suIxWstc4P7SzhEwb4ArWA2KVrmEuu9vH2i0qM6QIHz/ehmbGE4/2fZbpuMuBzTl7UkfNoqiSgw==",
       "cpu": [
         "arm"
       ],
@@ -1744,9 +1744,9 @@
       }
     },
     "node_modules/@oxlint/binding-android-arm64": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-android-arm64/-/binding-android-arm64-1.50.0.tgz",
-      "integrity": "sha512-GeSuMoJWCVpovJi/e3xDSNgjeR8WEZ6MCXL6EtPiCIM2NTzv7LbflARINTXTJy2oFBYyvdf/l2PwHzYo6EdXvg==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-android-arm64/-/binding-android-arm64-1.51.0.tgz",
+      "integrity": "sha512-GtXyBCcH4ti98YdiMNCrpBNGitx87EjEWxevnyhcBK12k/Vu4EzSB45rzSC4fGFUD6sQgeaxItRCEEWeVwPafw==",
       "cpu": [
         "arm64"
       ],
@@ -1761,9 +1761,9 @@
      }
     },
     "node_modules/@oxlint/binding-darwin-arm64": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-darwin-arm64/-/binding-darwin-arm64-1.50.0.tgz",
-      "integrity": "sha512-w3SY5YtxGnxCHPJ8Twl3KmS9oja1gERYk3AMoZ7Hv8P43ZtB6HVfs02TxvarxfL214Tm3uzvc2vn+DhtUNeKnw==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-darwin-arm64/-/binding-darwin-arm64-1.51.0.tgz",
+      "integrity": "sha512-3QJbeYaMHn6Bh2XeBXuITSsbnIctyTjvHf5nRjKYrT9pPeErNIpp5VDEeAXC0CZSwSVTsc8WOSDwgrAI24JolQ==",
       "cpu": [
         "arm64"
       ],
@@ -1778,9 +1778,9 @@
       }
     },
     "node_modules/@oxlint/binding-darwin-x64": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-darwin-x64/-/binding-darwin-x64-1.50.0.tgz",
-      "integrity": "sha512-hNfogDqy7tvmllXKBSlHo6k5x7dhTUVOHbMSE15CCAcXzmqf5883aPvBYPOq9AE7DpDUQUZ1kVE22YbiGW+tuw==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-darwin-x64/-/binding-darwin-x64-1.51.0.tgz",
+      "integrity": "sha512-NzErhMaTEN1cY0E8C5APy74lw5VwsNfJfVPBMWPVQLqAbO0k4FFLjvHURvkUL+Y18Wu+8Vs1kbqPh2hjXYA4pg==",
       "cpu": [
         "x64"
       ],
@@ -1795,9 +1795,9 @@
       }
     },
     "node_modules/@oxlint/binding-freebsd-x64": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-freebsd-x64/-/binding-freebsd-x64-1.50.0.tgz",
-      "integrity": "sha512-ykZevOWEyu0nsxolA911ucxpEv0ahw8jfEeGWOwwb/VPoE4xoexuTOAiPNlWZNJqANlJl7yp8OyzCtXTUAxotw==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-freebsd-x64/-/binding-freebsd-x64-1.51.0.tgz",
+      "integrity": "sha512-msAIh3vPAoKoHlOE/oe6Q5C/n9umypv/k81lED82ibrJotn+3YG2Qp1kiR8o/Dg5iOEU97c6tl0utxcyFenpFw==",
       "cpu": [
         "x64"
       ],
@@ -1812,9 +1812,9 @@
       }
     },
     "node_modules/@oxlint/binding-linux-arm-gnueabihf": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm-gnueabihf/-/binding-linux-arm-gnueabihf-1.50.0.tgz",
-      "integrity": "sha512-hif3iDk7vo5GGJ4OLCCZAf2vjnU9FztGw4L0MbQL0M2iY9LKFtDMMiQAHmkF0PQGQMVbTYtPdXCLKVgdkiqWXQ==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm-gnueabihf/-/binding-linux-arm-gnueabihf-1.51.0.tgz",
+      "integrity": "sha512-CqQPcvqYyMe9ZBot2stjGogEzk1z8gGAngIX7srSzrzexmXixwVxBdFZyxTVM0CjGfDeV+Ru0w25/WNjlMM2Hw==",
       "cpu": [
         "arm"
       ],
@@ -1829,9 +1829,9 @@
       }
     },
     "node_modules/@oxlint/binding-linux-arm-musleabihf": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm-musleabihf/-/binding-linux-arm-musleabihf-1.50.0.tgz",
-      "integrity": "sha512-dVp9iSssiGAnTNey2Ruf6xUaQhdnvcFOJyRWd/mu5o2jVbFK15E5fbWGeFRfmuobu5QXuROtFga44+7DOS3PLg==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm-musleabihf/-/binding-linux-arm-musleabihf-1.51.0.tgz",
+      "integrity": "sha512-dstrlYQgZMnyOssxSbolGCge/sDbko12N/35RBNuqLpoPbft2aeBidBAb0dvQlyBd9RJ6u8D4o4Eh8Un6iTgyQ==",
       "cpu": [
         "arm"
       ],
@@ -1846,9 +1846,9 @@
       }
     },
     "node_modules/@oxlint/binding-linux-arm64-gnu": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm64-gnu/-/binding-linux-arm64-gnu-1.50.0.tgz",
-      "integrity": "sha512-1cT7yz2HA910CKA9NkH1ZJo50vTtmND2fkoW1oyiSb0j6WvNtJ0Wx2zoySfXWc/c+7HFoqRK5AbEoL41LOn9oA==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm64-gnu/-/binding-linux-arm64-gnu-1.51.0.tgz",
+      "integrity": "sha512-QEjUpXO7d35rP1/raLGGbAsBLLGZIzV3ZbeSjqWlD3oRnxpRIZ6iL4o51XQHkconn3uKssc+1VKdtHJ81BBhDA==",
       "cpu": [
         "arm64"
       ],
@@ -1863,9 +1863,9 @@
       }
     },
     "node_modules/@oxlint/binding-linux-arm64-musl": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm64-musl/-/binding-linux-arm64-musl-1.50.0.tgz",
-      "integrity": "sha512-++B3k/HEPFVlj89cOz8kWfQccMZB/aWL9AhsW7jPIkG++63Mpwb2cE9XOEsd0PATbIan78k2Gky+09uWM1d/gQ==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-arm64-musl/-/binding-linux-arm64-musl-1.51.0.tgz",
+      "integrity": "sha512-YSJua5irtG4DoMAjUapDTPhkQLHhBIY0G9JqlZS6/SZPzqDkPku/1GdWs0D6h/wyx0Iz31lNCfIaWKBQhzP0wQ==",
       "cpu": [
         "arm64"
       ],
@@ -1880,9 +1880,9 @@
       }
     },
     "node_modules/@oxlint/binding-linux-ppc64-gnu": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-ppc64-gnu/-/binding-linux-ppc64-gnu-1.50.0.tgz",
-      "integrity": "sha512-Z9b/KpFMkx66w3gVBqjIC1AJBTZAGoI9+U+K5L4QM0CB/G0JSNC1es9b3Y0Vcrlvtdn8A+IQTkYjd/Q0uCSaZw==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-ppc64-gnu/-/binding-linux-ppc64-gnu-1.51.0.tgz",
+      "integrity": "sha512-7L4Wj2IEUNDETKssB9IDYt16T6WlF+X2jgC/hBq3diGHda9vJLpAgb09+D3quFq7TdkFtI7hwz/jmuQmQFPc1Q==",
       "cpu": [
         "ppc64"
       ],
@@ -1897,9 +1897,9 @@
       }
     },
     "node_modules/@oxlint/binding-linux-riscv64-gnu": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-riscv64-gnu/-/binding-linux-riscv64-gnu-1.50.0.tgz",
-      "integrity": "sha512-jvmuIw8wRSohsQlFNIST5uUwkEtEJmOQYr33bf/K2FrFPXHhM4KqGekI3ShYJemFS/gARVacQFgBzzJKCAyJjg==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-riscv64-gnu/-/binding-linux-riscv64-gnu-1.51.0.tgz",
+      "integrity": "sha512-cBUHqtOXy76G41lOB401qpFoKx1xq17qYkhWrLSM7eEjiHM9sOtYqpr6ZdqCnN9s6ZpzudX4EkeHOFH2E9q0vA==",
       "cpu": [
         "riscv64"
       ],
@@ -1914,9 +1914,9 @@
       }
     },
     "node_modules/@oxlint/binding-linux-riscv64-musl": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-riscv64-musl/-/binding-linux-riscv64-musl-1.50.0.tgz",
-      "integrity": "sha512-x+UrN47oYNh90nmAAyql8eQaaRpHbDPu5guasDg10+OpszUQ3/1+1J6zFMmV4xfIEgTcUXG/oI5fxJhF4eWCNA==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-riscv64-musl/-/binding-linux-riscv64-musl-1.51.0.tgz",
+      "integrity": "sha512-WKbg8CysgZcHfZX0ixQFBRSBvFZUHa3SBnEjHY2FVYt2nbNJEjzTxA3ZR5wMU0NOCNKIAFUFvAh5/XJKPRJuJg==",
       "cpu": [
         "riscv64"
       ],
@@ -1931,9 +1931,9 @@
       }
     },
     "node_modules/@oxlint/binding-linux-s390x-gnu": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-s390x-gnu/-/binding-linux-s390x-gnu-1.50.0.tgz",
-      "integrity": "sha512-i/JLi2ljLUIVfekMj4ISmdt+Hn11wzYUdRRrkVUYsCWw7zAy5xV7X9iA+KMyM156LTFympa7s3oKBjuCLoTAUQ==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-s390x-gnu/-/binding-linux-s390x-gnu-1.51.0.tgz",
+      "integrity": "sha512-N1QRUvJTxqXNSu35YOufdjsAVmKVx5bkrggOWAhTWBc3J4qjcBwr1IfyLh/6YCg8sYRSR1GraldS9jUgJL/U4A==",
       "cpu": [
         "s390x"
       ],
@@ -1948,9 +1948,9 @@
       }
     },
     "node_modules/@oxlint/binding-linux-x64-gnu": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-x64-gnu/-/binding-linux-x64-gnu-1.50.0.tgz",
-      "integrity": "sha512-/C7brhn6c6UUPccgSPCcpLQXcp+xKIW/3sji/5VZ8/OItL3tQ2U7KalHz887UxxSQeEOmd1kY6lrpuwFnmNqOA==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-x64-gnu/-/binding-linux-x64-gnu-1.51.0.tgz",
+      "integrity": "sha512-e0Mz0DizsCoqNIjeOg6OUKe8JKJWZ5zZlwsd05Bmr51Jo3AOL4UJnPvwKumr4BBtBrDZkCmOLhCvDGm95nJM2g==",
       "cpu": [
         "x64"
       ],
@@ -1965,9 +1965,9 @@
       }
     },
     "node_modules/@oxlint/binding-linux-x64-musl": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-x64-musl/-/binding-linux-x64-musl-1.50.0.tgz",
-      "integrity": "sha512-oDR1f+bGOYU8LfgtEW8XtotWGB63ghtcxk5Jm6IDTCk++rTA/IRMsjOid2iMd+1bW+nP9Mdsmcdc7VbPD3+iyQ==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-linux-x64-musl/-/binding-linux-x64-musl-1.51.0.tgz",
+      "integrity": "sha512-wD8HGTWhYBKXvRDvoBVB1y+fEYV01samhWQSy1Zkxq2vpezvMnjaFKRuiP6tBNITLGuffbNDEXOwcAhJ3gI5Ug==",
       "cpu": [
         "x64"
       ],
@@ -1982,9 +1982,9 @@
       }
     },
     "node_modules/@oxlint/binding-openharmony-arm64": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-openharmony-arm64/-/binding-openharmony-arm64-1.50.0.tgz",
-      "integrity": "sha512-4CmRGPp5UpvXyu4jjP9Tey/SrXDQLRvZXm4pb4vdZBxAzbFZkCyh0KyRy4txld/kZKTJlW4TO8N1JKrNEk+mWw==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-openharmony-arm64/-/binding-openharmony-arm64-1.51.0.tgz",
+      "integrity": "sha512-5NSwQ2hDEJ0GPXqikjWtwzgAQCsS7P9aLMNenjjKa+gknN3lTCwwwERsT6lKXSirfU3jLjexA2XQvQALh5h27w==",
       "cpu": [
         "arm64"
       ],
@@ -1999,9 +1999,9 @@
       }
     },
     "node_modules/@oxlint/binding-win32-arm64-msvc": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-win32-arm64-msvc/-/binding-win32-arm64-msvc-1.50.0.tgz",
-      "integrity": "sha512-Fq0M6vsGcFsSfeuWAACDhd5KJrO85ckbEfe1EGuBj+KPyJz7KeWte2fSFrFGmNKNXyhEMyx4tbgxiWRujBM2KQ==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-win32-arm64-msvc/-/binding-win32-arm64-msvc-1.51.0.tgz",
+      "integrity": "sha512-JEZyah1M0RHMw8d+jjSSJmSmO8sABA1J1RtrHYujGPeCkYg1NeH0TGuClpe2h5QtioRTaF57y/TZfn/2IFV6fA==",
       "cpu": [
         "arm64"
       ],
@@ -2016,9 +2016,9 @@
       }
     },
     "node_modules/@oxlint/binding-win32-ia32-msvc": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-win32-ia32-msvc/-/binding-win32-ia32-msvc-1.50.0.tgz",
-      "integrity": "sha512-qTdWR9KwY/vxJGhHVIZG2eBOhidOQvOwzDxnX+jhW/zIVacal1nAhR8GLkiywW8BIFDkQKXo/zOfT+/DY+ns/w==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-win32-ia32-msvc/-/binding-win32-ia32-msvc-1.51.0.tgz",
+      "integrity": "sha512-q3cEoKH6kwjz/WRyHwSf0nlD2F5Qw536kCXvmlSu+kaShzgrA0ojmh45CA81qL+7udfCaZL2SdKCZlLiGBVFlg==",
       "cpu": [
         "ia32"
       ],
@@ -2033,9 +2033,9 @@
       }
     },
     "node_modules/@oxlint/binding-win32-x64-msvc": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/@oxlint/binding-win32-x64-msvc/-/binding-win32-x64-msvc-1.50.0.tgz",
-      "integrity": "sha512-682t7npLC4G2Ca+iNlI9fhAKTcFPYYXJjwoa88H4q+u5HHHlsnL/gHULapX3iqp+A8FIJbgdylL5KMYo2LaluQ==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/@oxlint/binding-win32-x64-msvc/-/binding-win32-x64-msvc-1.51.0.tgz",
+      "integrity": "sha512-Q14+fOGb9T28nWF/0EUsYqERiRA7cl1oy4TJrGmLaqhm+aO2cV+JttboHI3CbdeMCAyDI1+NoSlrM7Melhp/cw==",
       "cpu": [
         "x64"
       ],
@@ -2656,9 +2656,9 @@
       "license": "MIT"
     },
     "node_modules/@types/node": {
-      "version": "24.11.0",
-      "resolved": "https://registry.npmjs.org/@types/node/-/node-24.11.0.tgz",
-      "integrity": "sha512-fPxQqz4VTgPI/IQ+lj9r0h+fDR66bzoeMGHp8ASee+32OSGIkeASsoZuJixsQoVef1QJbeubcPBxKk22QVoWdw==",
+      "version": "24.12.0",
+      "resolved": "https://registry.npmjs.org/@types/node/-/node-24.12.0.tgz",
+      "integrity": "sha512-GYDxsZi3ChgmckRT9HPU0WEhKLP08ev/Yfcq2AstjrDASOYCSXeyjDsHg4v5t4jOj7cyDX3vmprafKlWIG9MXQ==",
       "dev": true,
       "license": "MIT",
       "dependencies": {
@@ -3423,9 +3423,9 @@
       }
     },
     "node_modules/@vue/tsconfig": {
-      "version": "0.8.1",
-      "resolved": "https://registry.npmjs.org/@vue/tsconfig/-/tsconfig-0.8.1.tgz",
-      "integrity": "sha512-aK7feIWPXFSUhsCP9PFqPyFOcz4ENkb8hZ2pneL6m2UjCkccvaOhC/5KCKluuBufvp2KzkbdA2W2pk20vLzu3g==",
+      "version": "0.9.0",
+      "resolved": "https://registry.npmjs.org/@vue/tsconfig/-/tsconfig-0.9.0.tgz",
+      "integrity": "sha512-RP+v9Cpbsk1ZVXltCHHkYBr7+624x6gcijJXVjIcsYk7JXqvIpRtMwU2ARLvWDhmy9ffdFYxhsfJnPztADBohQ==",
       "dev": true,
       "license": "MIT",
       "peerDependencies": {
@@ -4376,9 +4376,9 @@
       }
     },
     "node_modules/eslint-plugin-oxlint": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/eslint-plugin-oxlint/-/eslint-plugin-oxlint-1.50.0.tgz",
-      "integrity": "sha512-QAxeFeUHuekmLkuRLdzHH8Z0JvC7482OaQ3jlUMdEd0gcS6m+MYHei3Favoew9DdvTQT7yHxrm7BL0iXoenb6w==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/eslint-plugin-oxlint/-/eslint-plugin-oxlint-1.51.0.tgz",
+      "integrity": "sha512-lct8LD1AxfHF1PcsuK6mFYals+zX0mx/WP2G4i16h0iR8jpT3xCfGTmTNwXiImcevzGIiJ/VDBgQ7t0B9z2Jeg==",
       "dev": true,
       "license": "MIT",
       "dependencies": {
@@ -5776,9 +5776,9 @@
       "license": "MIT"
     },
     "node_modules/oxlint": {
-      "version": "1.50.0",
-      "resolved": "https://registry.npmjs.org/oxlint/-/oxlint-1.50.0.tgz",
-      "integrity": "sha512-iSJ4IZEICBma8cZX7kxIIz9PzsYLF2FaLAYN6RKu7VwRVKdu7RIgpP99bTZaGl//Yao7fsaGZLSEo5xBrI5ReQ==",
+      "version": "1.51.0",
+      "resolved": "https://registry.npmjs.org/oxlint/-/oxlint-1.51.0.tgz",
+      "integrity": "sha512-g6DNPaV9/WI9MoX2XllafxQuxwY1TV++j7hP8fTJByVBuCoVtm3dy9f/2vtH/HU40JztcgWF4G7ua+gkainklQ==",
       "dev": true,
       "license": "MIT",
       "bin": {
@@ -5791,28 +5791,28 @@
         "url": "https://github.com/sponsors/Boshen"
       },
       "optionalDependencies": {
-        "@oxlint/binding-android-arm-eabi": "1.50.0",
-        "@oxlint/binding-android-arm64": "1.50.0",
-        "@oxlint/binding-darwin-arm64": "1.50.0",
-        "@oxlint/binding-darwin-x64": "1.50.0",
-        "@oxlint/binding-freebsd-x64": "1.50.0",
-        "@oxlint/binding-linux-arm-gnueabihf": "1.50.0",
-        "@oxlint/binding-linux-arm-musleabihf": "1.50.0",
-        "@oxlint/binding-linux-arm64-gnu": "1.50.0",
-        "@oxlint/binding-linux-arm64-musl": "1.50.0",
-        "@oxlint/binding-linux-ppc64-gnu": "1.50.0",
-        "@oxlint/binding-linux-riscv64-gnu": "1.50.0",
-        "@oxlint/binding-linux-riscv64-musl": "1.50.0",
-        "@oxlint/binding-linux-s390x-gnu": "1.50.0",
-        "@oxlint/binding-linux-x64-gnu": "1.50.0",
-        "@oxlint/binding-linux-x64-musl": "1.50.0",
-        "@oxlint/binding-openharmony-arm64": "1.50.0",
-        "@oxlint/binding-win32-arm64-msvc": "1.50.0",
-        "@oxlint/binding-win32-ia32-msvc": "1.50.0",
-        "@oxlint/binding-win32-x64-msvc": "1.50.0"
+        "@oxlint/binding-android-arm-eabi": "1.51.0",
+        "@oxlint/binding-android-arm64": "1.51.0",
+        "@oxlint/binding-darwin-arm64": "1.51.0",
+        "@oxlint/binding-darwin-x64": "1.51.0",
+        "@oxlint/binding-freebsd-x64": "1.51.0",
+        "@oxlint/binding-linux-arm-gnueabihf": "1.51.0",
+        "@oxlint/binding-linux-arm-musleabihf": "1.51.0",
+        "@oxlint/binding-linux-arm64-gnu": "1.51.0",
+        "@oxlint/binding-linux-arm64-musl": "1.51.0",
+        "@oxlint/binding-linux-ppc64-gnu": "1.51.0",
+        "@oxlint/binding-linux-riscv64-gnu": "1.51.0",
+        "@oxlint/binding-linux-riscv64-musl": "1.51.0",
+        "@oxlint/binding-linux-s390x-gnu": "1.51.0",
+        "@oxlint/binding-linux-x64-gnu": "1.51.0",
+        "@oxlint/binding-linux-x64-musl": "1.51.0",
+        "@oxlint/binding-openharmony-arm64": "1.51.0",
+        "@oxlint/binding-win32-arm64-msvc": "1.51.0",
+        "@oxlint/binding-win32-ia32-msvc": "1.51.0",
+        "@oxlint/binding-win32-x64-msvc": "1.51.0"
       },
       "peerDependencies": {
-        "oxlint-tsgolint": ">=0.14.1"
+        "oxlint-tsgolint": ">=0.15.0"
       },
       "peerDependenciesMeta": {
        "oxlint-tsgolint": {
@@ -35,17 +35,17 @@
     "@vitest/eslint-plugin": "^1.6.9",
     "@vue/eslint-config-typescript": "^14.7.0",
     "@vue/test-utils": "^2.4.6",
-    "@vue/tsconfig": "^0.8.1",
+    "@vue/tsconfig": "^0.9.0",
     "eslint": "^10.0.2",
     "eslint-config-prettier": "^10.1.8",
-    "eslint-plugin-oxlint": "~1.50.0",
+    "eslint-plugin-oxlint": "~1.51.0",
     "eslint-plugin-vue": "~10.8.0",
     "jiti": "^2.6.1",
     "jsdom": "^28.1.0",
     "msw": "^2.12.10",
     "npm-run-all2": "^8.0.4",
     "openapi-typescript": "^7.13.0",
-    "oxlint": "~1.50.0",
+    "oxlint": "~1.51.0",
     "prettier": "3.8.1",
     "typescript": "~5.9.3",
     "vite": "^7.3.1",
@@ -1,15 +1,5 @@
 import { defineConfig, devices } from '@playwright/test'
 
-// Suppress Node 25 warning from MSW's cookieStore accessing native localStorage
-// without --localstorage-file being set. Harmless — MSW doesn't need file-backed storage.
-const originalEmit = process.emit.bind(process)
-process.emit = function (event: string, ...args: unknown[]) {
-  if (event === 'warning' && args[0] instanceof Error && args[0].message.includes('--localstorage-file')) {
-    return false
-  }
-  return originalEmit(event, ...args)
-} as typeof process.emit
-
 export default defineConfig({
   testDir: './e2e',
   fullyParallel: true,
@@ -163,6 +163,19 @@ textarea.form-field {
   padding-left: 0.25rem;
 }
 
+/* Skeleton shimmer loading state */
+.skeleton {
+  background: linear-gradient(90deg, var(--color-card) 25%, #e0e0e0 50%, var(--color-card) 75%);
+  background-size: 200% 100%;
+  animation: shimmer 1.5s infinite;
+  border-radius: var(--radius-card);
+}
+
+@keyframes shimmer {
+  0% { background-position: 200% 0; }
+  100% { background-position: -200% 0; }
+}
+
 /* Utility */
 .text-center {
   text-align: center;
@@ -17,7 +17,7 @@ const router = createRouter({
     {
       path: '/events/:token',
       name: 'event',
-      component: () => import('../views/EventStubView.vue'),
+      component: () => import('../views/EventDetailView.vue'),
     },
   ],
 })
@@ -184,6 +184,7 @@ async function handleSubmit() {
       title: form.title.trim(),
       description: form.description.trim() || undefined,
       dateTime: dateTimeWithOffset,
+      timezone: Intl.DateTimeFormat().resolvedOptions().timeZone,
       location: form.location.trim() || undefined,
       expiryDate: form.expiryDate,
     },
214
frontend/src/views/EventDetailView.vue
Normal file
@@ -0,0 +1,214 @@
+<template>
+  <main class="detail">
+    <header class="detail__header">
+      <RouterLink to="/" class="detail__back" aria-label="Back to home">←</RouterLink>
+      <span class="detail__brand">fete</span>
+    </header>
+
+    <!-- Loading state -->
+    <div v-if="state === 'loading'" class="detail__card" aria-busy="true" aria-label="Loading event details">
+      <div class="skeleton skeleton--title" />
+      <div class="skeleton skeleton--line" />
+      <div class="skeleton skeleton--line skeleton--short" />
+      <div class="skeleton skeleton--line" />
+    </div>
+
+    <!-- Loaded state -->
+    <div v-else-if="state === 'loaded' && event" class="detail__card">
+      <h1 class="detail__title">{{ event.title }}</h1>
+
+      <dl class="detail__fields">
+        <div class="detail__field">
+          <dt class="detail__label">Date & Time</dt>
+          <dd class="detail__value">{{ formattedDateTime }}</dd>
+        </div>
+
+        <div v-if="event.description" class="detail__field">
+          <dt class="detail__label">Description</dt>
+          <dd class="detail__value">{{ event.description }}</dd>
+        </div>
+
+        <div v-if="event.location" class="detail__field">
+          <dt class="detail__label">Location</dt>
+          <dd class="detail__value">{{ event.location }}</dd>
+        </div>
+
+        <div class="detail__field">
+          <dt class="detail__label">Attendees</dt>
+          <dd class="detail__value">{{ event.attendeeCount }}</dd>
+        </div>
+      </dl>
+
+      <div v-if="event.expired" class="detail__banner detail__banner--expired" role="status">
+        This event has ended.
+      </div>
+    </div>
+
+    <!-- Not found state -->
+    <div v-else-if="state === 'not-found'" class="detail__card detail__card--center" role="status">
+      <p class="detail__message">Event not found.</p>
+    </div>
+
+    <!-- Error state -->
+    <div v-else-if="state === 'error'" class="detail__card detail__card--center" role="alert">
+      <p class="detail__message">Something went wrong.</p>
+      <button class="btn-primary" type="button" @click="fetchEvent">Retry</button>
+    </div>
+  </main>
+</template>
+
+<script setup lang="ts">
+import { ref, computed, onMounted } from 'vue'
+import { RouterLink, useRoute } from 'vue-router'
+import { api } from '@/api/client'
+import type { components } from '@/api/schema'
+
+type GetEventResponse = components['schemas']['GetEventResponse']
+type State = 'loading' | 'loaded' | 'not-found' | 'error'
+
+const route = useRoute()
+const state = ref<State>('loading')
+const event = ref<GetEventResponse | null>(null)
+
+const formattedDateTime = computed(() => {
+  if (!event.value) return ''
+  const formatted = new Intl.DateTimeFormat(undefined, {
+    dateStyle: 'long',
+    timeStyle: 'short',
+  }).format(new Date(event.value.dateTime))
+  return `${formatted} (${event.value.timezone})`
+})
+
+async function fetchEvent() {
+  state.value = 'loading'
+  event.value = null
+
+  try {
+    const { data, error, response } = await api.GET('/events/{token}', {
+      params: { path: { token: route.params.token as string } },
+    })
+
+    if (error) {
+      state.value = response.status === 404 ? 'not-found' : 'error'
+      return
+    }
+
+    event.value = data!
+    state.value = 'loaded'
+  } catch {
+    state.value = 'error'
+  }
+}
+
+onMounted(fetchEvent)
+</script>
+
+<style scoped>
+.detail {
+  display: flex;
+  flex-direction: column;
+  gap: var(--spacing-2xl);
+  padding-top: var(--spacing-lg);
+}
+
+.detail__header {
+  display: flex;
+  align-items: center;
+  gap: var(--spacing-sm);
+}
+
+.detail__back {
+  color: var(--color-text-on-gradient);
+  font-size: 1.5rem;
+  text-decoration: none;
+  line-height: 1;
+}
+
+.detail__brand {
+  font-size: 1.3rem;
+  font-weight: 700;
+  color: var(--color-text-on-gradient);
+}
+
+.detail__card {
+  background: var(--color-card);
+  border-radius: var(--radius-card);
+  padding: var(--spacing-xl);
+  box-shadow: var(--shadow-card);
+  display: flex;
+  flex-direction: column;
+  gap: var(--spacing-lg);
+}
+
+.detail__card--center {
+  align-items: center;
+  text-align: center;
+}
+
+.detail__title {
+  font-size: 1.4rem;
+  font-weight: 700;
+  color: var(--color-text);
+  word-break: break-word;
+}
+
+.detail__fields {
+  display: flex;
+  flex-direction: column;
+  gap: var(--spacing-md);
+}
+
+.detail__field {
+  display: flex;
+  flex-direction: column;
+  gap: 0.15rem;
+}
+
+.detail__label {
+  font-size: 0.8rem;
+  font-weight: 700;
+  color: #888;
+  text-transform: uppercase;
+  letter-spacing: 0.04em;
+}
+
+.detail__value {
+  font-size: 0.95rem;
+  color: var(--color-text);
+  word-break: break-word;
+}
+
+.detail__banner {
+  padding: var(--spacing-sm) var(--spacing-md);
+  border-radius: var(--radius-card);
+  font-weight: 600;
+  font-size: 0.9rem;
+  text-align: center;
+}
+
+.detail__banner--expired {
+  background: #fff3e0;
+  color: #e65100;
+}
+
+.detail__message {
+  font-size: 1rem;
+  font-weight: 600;
+  color: var(--color-text);
+}
+
+/* Skeleton sizes */
+.skeleton--title {
+  height: 1.6rem;
+  width: 60%;
+}
+
+.skeleton--line {
+  height: 1rem;
+  width: 80%;
+}
+
+.skeleton--short {
+  width: 40%;
+}
+</style>
@@ -173,6 +173,7 @@ describe('EventCreateView', () => {
       organizerToken: 'org-456',
       title: 'Birthday Party',
       dateTime: '2026-12-25T18:00:00+01:00',
+      timezone: 'Europe/Berlin',
       expiryDate: '2026-12-24',
     },
     error: undefined,
198
frontend/src/views/__tests__/EventDetailView.spec.ts
Normal file
@@ -0,0 +1,198 @@
+import { describe, it, expect, vi, beforeEach } from 'vitest'
+import { mount, flushPromises } from '@vue/test-utils'
+import { createRouter, createMemoryHistory } from 'vue-router'
+import EventDetailView from '../EventDetailView.vue'
+import { api } from '@/api/client'
+
+vi.mock('@/api/client', () => ({
+  api: {
+    GET: vi.fn(),
+  },
+}))
+
+function createTestRouter(_token?: string) {
+  return createRouter({
+    history: createMemoryHistory(),
+    routes: [
+      { path: '/', name: 'home', component: { template: '<div />' } },
+      { path: '/events/:token', name: 'event', component: EventDetailView },
+    ],
+  })
+}
+
+async function mountWithToken(token = 'test-token') {
+  const router = createTestRouter(token)
+  await router.push(`/events/${token}`)
+  await router.isReady()
+  return mount(EventDetailView, {
+    global: { plugins: [router] },
+  })
+}
+
+const fullEvent = {
+  eventToken: 'abc-123',
+  title: 'Summer BBQ',
+  description: 'Bring your own drinks!',
+  dateTime: '2026-03-15T20:00:00+01:00',
+  timezone: 'Europe/Berlin',
+  location: 'Central Park, NYC',
+  attendeeCount: 12,
+  expired: false,
+}
+
+beforeEach(() => {
+  vi.restoreAllMocks()
+})
+
+describe('EventDetailView', () => {
+  // T014: Loading state
+  it('renders skeleton shimmer placeholders while loading', async () => {
+    vi.mocked(api.GET).mockReturnValue(new Promise(() => {}))
+
+    const wrapper = await mountWithToken()
+
+    expect(wrapper.find('[aria-busy="true"]').exists()).toBe(true)
+    expect(wrapper.findAll('.skeleton').length).toBeGreaterThanOrEqual(3)
+  })
+
+  // T013: Loaded state — all fields
+  it('renders all event fields when loaded', async () => {
+    vi.mocked(api.GET).mockResolvedValue({
+      data: fullEvent,
+      error: undefined,
+      response: new Response(null, { status: 200 }),
+    } as never)
+
+    const wrapper = await mountWithToken()
+    await flushPromises()
+
+    expect(wrapper.find('.detail__title').text()).toBe('Summer BBQ')
+    expect(wrapper.text()).toContain('Bring your own drinks!')
+    expect(wrapper.text()).toContain('Central Park, NYC')
+    expect(wrapper.text()).toContain('12')
+    expect(wrapper.text()).toContain('Europe/Berlin')
+  })
+
+  // T013: Loaded state — locale-formatted date/time
+  it('formats date/time with Intl.DateTimeFormat and timezone', async () => {
+    vi.mocked(api.GET).mockResolvedValue({
+      data: fullEvent,
+      error: undefined,
+      response: new Response(null, { status: 200 }),
+    } as never)
+
+    const wrapper = await mountWithToken()
+    await flushPromises()
+
+    const dateField = wrapper.findAll('.detail__value')[0]!
+    expect(dateField.text()).toContain('(Europe/Berlin)')
+    // The formatted date part is locale-dependent but should contain the year
+    expect(dateField.text()).toContain('2026')
+  })
+
+  // T013: Loaded state — optional fields absent
+  it('does not render description and location when absent', async () => {
+    vi.mocked(api.GET).mockResolvedValue({
+      data: {
+        ...fullEvent,
+        description: undefined,
+        location: undefined,
+        attendeeCount: 0,
+      },
+      error: undefined,
+      response: new Response(null, { status: 200 }),
+    } as never)
+
+    const wrapper = await mountWithToken()
+    await flushPromises()
+
+    expect(wrapper.text()).not.toContain('Description')
+    expect(wrapper.text()).not.toContain('Location')
+    expect(wrapper.text()).toContain('0')
+  })
+
+  // T020 (US2): Expired state
+  it('renders "event has ended" banner when expired', async () => {
+    vi.mocked(api.GET).mockResolvedValue({
+      data: { ...fullEvent, expired: true },
+      error: undefined,
+      response: new Response(null, { status: 200 }),
+    } as never)
+
+    const wrapper = await mountWithToken()
+    await flushPromises()
+
+    expect(wrapper.text()).toContain('This event has ended.')
+    expect(wrapper.find('.detail__banner--expired').exists()).toBe(true)
+  })
+
+  // T020 (US2): No expired banner when not expired
+  it('does not render expired banner when event is active', async () => {
+    vi.mocked(api.GET).mockResolvedValue({
+      data: fullEvent,
+      error: undefined,
+      response: new Response(null, { status: 200 }),
+    } as never)
+
+    const wrapper = await mountWithToken()
+    await flushPromises()
+
+    expect(wrapper.find('.detail__banner--expired').exists()).toBe(false)
+  })
+
+  // T023 (US4): Not found state
+  it('renders "event not found" when API returns 404', async () => {
+    vi.mocked(api.GET).mockResolvedValue({
+      data: undefined,
+      error: { type: 'about:blank', title: 'Not Found', status: 404 },
+      response: new Response(null, { status: 404 }),
+    } as never)
+
+    const wrapper = await mountWithToken()
+    await flushPromises()
+
+    expect(wrapper.text()).toContain('Event not found.')
+    // No event data in DOM
+    expect(wrapper.find('.detail__title').exists()).toBe(false)
+  })
+
+  // T027: Server error + retry
+  it('renders error state with retry button on server error', async () => {
+    vi.mocked(api.GET).mockResolvedValue({
+      data: undefined,
+      error: { type: 'about:blank', title: 'Internal Server Error', status: 500 },
+      response: new Response(null, { status: 500 }),
+    } as never)
+
+    const wrapper = await mountWithToken()
+    await flushPromises()
+
+    expect(wrapper.text()).toContain('Something went wrong.')
+    expect(wrapper.find('button').text()).toBe('Retry')
+  })
+
+  // T027: Retry button re-fetches
+  it('retry button triggers a new fetch', async () => {
+    vi.mocked(api.GET)
+      .mockResolvedValueOnce({
+        data: undefined,
+        error: { type: 'about:blank', title: 'Error', status: 500 },
+        response: new Response(null, { status: 500 }),
+      } as never)
+      .mockResolvedValueOnce({
+        data: fullEvent,
+        error: undefined,
+        response: new Response(null, { status: 200 }),
+      } as never)
+
+    const wrapper = await mountWithToken()
+    await flushPromises()
+
+    expect(wrapper.text()).toContain('Something went wrong.')
+
+    await wrapper.find('button').trigger('click')
+    await flushPromises()
+
+    expect(wrapper.find('.detail__title').text()).toBe('Summer BBQ')
+  })
+})
42
ralph.sh
@@ -140,21 +140,51 @@ echo "Tools: $TOOLS"
 echo ""
 
 for ((i = 1; i <= MAX_ITERATIONS; i++)); do
-  echo "--- Iteration $i/$MAX_ITERATIONS ---"
+  echo ""
+  echo "━━━ Iteration $i/$MAX_ITERATIONS ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━"
+  ITER_START=$(date +%H:%M:%S)
+  echo "Started: $ITER_START"
+  echo ""
 
-  OUTPUT=$(echo "$PROMPT" | claude --print --model "$MODEL" --allowedTools "$TOOLS" 2>&1)
+  ITER_LOG="$RUN_DIR/iteration-${i}.jsonl"
+
+  echo "$PROMPT" | claude --print --model "$MODEL" --allowedTools "$TOOLS" --verbose --output-format stream-json > "$ITER_LOG" 2>&1
+
   ITER_TIME=$(date +%H:%M:%S)
 
-  if [[ "$OUTPUT" == *"$COMPLETION_SIGNAL"* ]]; then
+  # Extract tool uses for a compact summary
+  TOOL_SUMMARY=$(jq -r 'select(.type == "assistant") | .message.content[]? | select(.type == "tool_use") | " -> \(.name): \(.input.file_path // .input.pattern // .input.command // .input.content[0:80] // "" | tostring | .[0:100])"' "$ITER_LOG" 2>/dev/null) || true
+
+  if [[ -n "$TOOL_SUMMARY" ]]; then
+    echo "Tools used:"
+    echo "$TOOL_SUMMARY"
+    echo ""
+  fi
+
+  # Extract assistant text messages
+  ASSISTANT_TEXT=$(jq -r 'select(.type == "assistant") | .message.content[]? | select(.type == "text") | .text' "$ITER_LOG" 2>/dev/null) || true
+
+  if [[ -n "$ASSISTANT_TEXT" ]]; then
+    echo "Ralph says:"
+    echo "$ASSISTANT_TEXT" | sed 's/^/ /'
+    echo ""
+  fi
+
+  # Check for errors
+  ERROR_TEXT=$(jq -r 'select(.type == "result") | select(.subtype == "error") | .result' "$ITER_LOG" 2>/dev/null) || true
+  if [[ -n "$ERROR_TEXT" ]]; then
+    echo "[ERROR] $ERROR_TEXT"
+  fi
+
+  # Check for completion signal
+  if grep -q "$COMPLETION_SIGNAL" "$ITER_LOG" 2>/dev/null; then
     echo "[$ITER_TIME] COMPLETE after $i iteration(s)" >> "$RUN_DIR/run.log"
-    echo "Loop complete after $i iteration(s)."
+    echo "━━━ Loop complete after $i iteration(s). ━━━"
     exit 0
   fi
 
   echo "[$ITER_TIME] Iteration $i done" >> "$RUN_DIR/run.log"
-  echo "Iteration $i done. Continuing..."
-  echo ""
+  echo "--- Done at $ITER_TIME ---"
   sleep 2
 done
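The new logging in ralph.sh parses Claude Code's `--output-format stream-json` log with jq: one JSON object per line, where assistant events carry `tool_use` and `text` content blocks. As a sketch of the same extraction re-expressed in Python purely for illustration (the script itself shells out to jq; `summarize` and the sample event are constructed here, not taken from a real log):

```python
import json

# Illustrative re-implementation of the jq filters in ralph.sh.
# Event shape follows Claude Code's stream-json output: one JSON object
# per line; assistant events hold a list of content blocks.
def summarize(jsonl_lines):
    tools, texts = [], []
    for line in jsonl_lines:
        event = json.loads(line)
        if event.get("type") != "assistant":
            continue
        for block in event.get("message", {}).get("content", []):
            if block.get("type") == "tool_use":
                inp = block.get("input", {})
                # Mirrors jq's `//` fallback chain: file_path // pattern // command
                detail = inp.get("file_path") or inp.get("pattern") or inp.get("command") or ""
                tools.append(f' -> {block["name"]}: {str(detail)[:100]}')
            elif block.get("type") == "text":
                texts.append(block["text"])
    return tools, texts

# Hypothetical sample event, shaped like one stream-json line
sample = json.dumps({
    "type": "assistant",
    "message": {"content": [
        {"type": "tool_use", "name": "Read", "input": {"file_path": "ralph.sh"}},
        {"type": "text", "text": "Done."},
    ]},
})
tools, texts = summarize([sample])
print(tools[0])  # -> Read: ralph.sh
print(texts[0])  # Done.
```

The `|| true` guards in the script serve the same purpose as the `.get(...)` defaults here: a malformed or empty log line must not abort the iteration loop.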
@@ -1,98 +0,0 @@
-# Setup Tasks
-
-<!-- Technical setup tasks that are prerequisites for user story implementation. -->
-<!-- These are not user stories — they describe infrastructure and project scaffolding work. -->
-
-## Tasks
-
-### T-1: Initialize monorepo structure
-
-**Description:** Set up the repository structure with separate directories for backend and frontend, scaffolded with the chosen tech stack.
-
-**Acceptance Criteria:**
-- [x] Single repository with `backend/` and `frontend/` directories
-- [x] Backend: Java (latest LTS), Spring Boot, Maven, hexagonal/onion architecture scaffold
-- [x] Frontend: Vue 3 with Vite as bundler, TypeScript, Vue Router
-- [x] Shared top-level files: README, Dockerfile, CLAUDE.md, LICENSE (GPL), .gitignore
-- [x] Both projects build successfully with no source code (empty scaffold)
-- [x] .gitignore covers build artifacts, IDE files, and dependency directories for both Java/Maven and Node/Vue
-
-**Dependencies:** None
-
----
-
-### T-2: Docker deployment setup
-
-**Description:** Create a multi-stage Dockerfile that builds backend and frontend and produces a single runnable container. This task focuses exclusively on the Docker build — database wiring, environment variable configuration, and docker-compose documentation are deferred to T-4 (where JPA and migration tooling are introduced).
-
-**Acceptance Criteria:**
-- [x] Single multi-stage Dockerfile at repo root that builds backend and frontend and produces one container
-- [x] `.dockerignore` excludes build artifacts, IDE files, and unnecessary files from the build context
-- [x] Health-check endpoint so Docker/orchestrators can verify the app is alive
-- [x] `docker build .` succeeds and produces a working image
-- [x] Container starts and the health-check endpoint responds
-
-**Dependencies:** T-1, T-5
-
-**Addendum (2026-03-04):** Scope reduced from original "Dockerfile + configuration" to Docker-only. Database connectivity (`DATABASE_URL`), runtime environment variable configuration (Unsplash API key, max active events), and README docker-compose documentation are deferred to T-4, where JPA and Flyway are introduced and the configuration can be tested end-to-end. Rationale: without JPA and migrations, database wiring cannot be meaningfully verified.
-
----
-
-### T-3: CI/CD pipeline
-
-**Description:** Set up a Gitea Actions CI/CD pipeline that runs on every push, ensuring code quality before deployment.
-
-**Acceptance Criteria:**
-- [x] Gitea Actions workflow file in `.gitea/workflows/` runs on push: test, build, publish Docker image
-- [x] Backend tests run via Maven
-- [x] Frontend tests run via Vitest
-- [x] Docker image is published to the Gitea container registry on the same instance
-- [x] Pipeline fails visibly if any test fails or the build breaks
-- [x] Docker image is only published if all tests pass and the build succeeds
-
-**Dependencies:** T-1, T-2
-
-**Notes:** Per Q-5 resolution: the project uses Gitea as its hosting and CI/CD platform. The pipeline uses Gitea Actions (`.gitea/workflows/`) and publishes Docker images to the Gitea container registry. T-3 depends on T-1 (repository structure with both projects to test and build) and T-2 (Dockerfile used by the pipeline to build and publish the container image).
-
----
-
-### T-5: API-first tooling setup
-
-**Description:** Set up the API-first development workflow. The OpenAPI spec is the single source of truth for the REST API contract. Backend server interfaces and frontend TypeScript types are generated from it. This task scaffolds the tooling and creates a minimal initial spec — the spec itself is a living document that grows with each user story.
-
-**Acceptance Criteria:**
-- [x] `openapi-generator-maven-plugin` (v7.20.x, `spring` generator, `interfaceOnly: true`) is configured in `backend/pom.xml`
-- [x] A minimal OpenAPI 3.1 spec exists at `backend/src/main/resources/openapi/api.yaml` (info block, placeholder path, or health-only — enough for the generator to run)
-- [x] `mvnw compile` generates Java interfaces and model classes into `target/generated-sources/openapi/` with packages `de.fete.adapter.in.web.api` and `de.fete.adapter.in.web.model`
-- [x] `openapi-typescript` (devDependency) and `openapi-fetch` (dependency) are installed in the frontend
-- [x] `npm run generate:api` generates TypeScript types from the spec into `frontend/src/api/schema.d.ts`
-- [x] Frontend `dev` and `build` scripts include type generation as a pre-step
-- [x] A minimal API client (`frontend/src/api/client.ts`) using `openapi-fetch` with `createClient<paths>()` exists
-- [x] Both generation steps succeed and the project compiles cleanly (backend + frontend)
-
-**Dependencies:** T-1
-
-**Notes:** The OpenAPI spec is intentionally minimal at this stage — just enough to prove the tooling works end-to-end. Each user story will extend the spec with its endpoints and schemas. Research basis: `docs/agents/research/2026-03-04-api-first-approach.md`.
-
----
-
-### T-4: Development infrastructure setup
-
-**Description:** Set up the development foundation needed before the first user story can be implemented with TDD (as required by CLAUDE.md). This bridges the gap between project scaffolds and actual feature development. Also includes the database and environment variable configuration deferred from T-2.
-
-**Acceptance Criteria:**
-- [x] Database migration framework (Flyway or Liquibase) is configured in the backend with a first empty migration that runs successfully against a PostgreSQL instance
-- [x] App connects to external PostgreSQL via environment variable (e.g. `DATABASE_URL` or Spring-native `SPRING_DATASOURCE_*`)
-- [x] All runtime configuration via environment variables: database connection, optional Unsplash API key, optional max active events
-- [x] SPA router is configured in the Vue frontend (Vue Router) so pages can be navigated by URL path
-- [x] Backend test infrastructure is set up: JUnit 5 with Spring Boot Test, plus integration test support using Testcontainers (PostgreSQL) so tests can run against a real database without external setup
-- [x] Frontend test infrastructure is set up: Vitest with @vue/test-utils configured and a sample test runs successfully
-- [x] Both test suites (backend and frontend) can be executed via their respective build tools (`mvn test` and `npm test` / `npx vitest`)
-- [x] README documents deployment setup with a docker-compose example (app + postgres)
-- [x] Container starts and responds to health checks with a running PostgreSQL (migrations run on startup)
-
-**Dependencies:** T-2, T-5
-
-**Notes:** T-4 is the prerequisite for all user story implementation. Without migration tooling, router, and test infrastructure, TDD (the mandated methodology per CLAUDE.md) cannot begin. The API client layer is provided by T-5 (openapi-fetch + generated types). All user stories that previously depended on T-1 and/or T-2 now depend on T-4 instead, since T-4 transitively includes T-1, T-2, and T-5.
-
-**Addendum (2026-03-04):** Absorbed database connectivity, environment variable configuration, and docker-compose documentation from T-2 (see T-2 addendum). These criteria require JPA and Flyway to be testable, so they belong here.
@@ -1,486 +0,0 @@
|
|||||||
# User Stories
|
|
||||||
|
|
||||||
<!-- This file is managed by the Ralph Loop. Each iteration refines and adds user stories based on Ideen.md. -->
|
|
||||||
|
|
||||||
## Status
|
|
||||||
|
|
||||||
- Total stories: 21
|
|
||||||
- Complete: 0
|
|
||||||
- Remaining: 21
|
|
||||||
|
|
||||||
## Token Model
|
|
||||||
|
|
||||||
The following terms are used consistently across all stories:
|
|
||||||
|
|
||||||
- **Event token**: A public UUID embedded in the event URL. Used by guests to access the event page.
|
|
||||||
- **Organizer token**: A separate secret UUID stored in localStorage on the device where the event was created. Used to authenticate organizer actions.
|
|
||||||
- **Internal DB ID**: An implementation detail. Never exposed in stories, URLs, or to users.
|
|
||||||
|
|
||||||
## Stories
|
|
||||||
|
|
||||||
### US-1: Create an event
|
|
||||||
|
|
||||||
**As an** event organizer,
|
|
||||||
**I want to** create a new event with a title, description, date, time, location, and mandatory expiry date,
|
|
||||||
**so that** I can share it with others as a dedicated event page.
|
|
||||||
|
|
||||||
**Acceptance Criteria:**
|
|
||||||
- [ ] The organizer can fill in: title (required), description (optional), date and time (required), location (optional), expiry date (required)
|
|
||||||
- [ ] On submission, the server stores the event and returns both a unique, non-guessable event token (UUID) and a separate organizer token (UUID) in the creation response
|
|
||||||
- [ ] The organizer is redirected to the event page after creation
|
|
||||||
- [ ] The organizer token from the creation response is stored in localStorage to grant organizer access on this device
|
|
||||||
- [ ] The event token, title, and date are also stored in localStorage alongside the organizer token, so the local event overview (US-7) can display the event without additional server contact
|
|
||||||
- [ ] No account, login, or personal data is required to create an event
|
|
||||||
- [ ] The expiry date field is mandatory and cannot be left blank
|
|
||||||
- [ ] The event is not discoverable except via its direct link
|
|
||||||
|
|
||||||
**Dependencies:** T-4
|
|
||||||
|
|
||||||
**Notes:** Non-guessable tokens (UUIDs) are specified in Ideen.md under security. Expiry date is mandatory per Ideen.md. No registration required per core principles. Per Q-4 resolution: organizer authentication uses the organizer token stored in localStorage on the device where the event was created. The organizer token is separate from the event token — since the event link is designed to be shared in group chats, using the same token for both public access and organizer auth would allow any guest to manage the event.
|
|
||||||
|
|
||||||
**Addendum (2026-03-04):** Honeypot field removed — overengineered for this project's scope. Expiry date must be in the future at creation time — an event should never exist in an invalid state (resolved during US-1 research).

---

### US-2: View event landing page

**As a** guest,
**I want to** open a shared event link and see all event details,
**so that** I know what the event is, when and where it takes place, and who else is attending.

**Acceptance Criteria:**

- [ ] The event page displays: title, description (if provided), date and time, location (if provided)
- [ ] The page lists the names of all confirmed attendees (those who RSVPed "attending")
- [ ] The page shows a count of attendees
- [ ] If the event has expired (past its expiry date), the page renders a clear "this event has ended" state and no RSVP actions are shown
- [ ] If the event has been cancelled (US-18), the page displays a clear "cancelled" state with the cancellation message (if provided by the organizer), and no RSVP actions are shown [deferred until US-18 is implemented]
- [ ] If the event token does not match any event on the server (e.g. because it was deleted after expiry per US-12 [deferred until US-12 is implemented], or deleted by the organizer per US-19 [deferred until US-19 is implemented]), the page displays a clear "event not found" message — no partial data or error traces are shown
- [ ] The page is accessible without any login, account, or access code — only the event link is required
- [ ] No external resources (CDNs, fonts, tracking scripts) are loaded

**Dependencies:** US-1, T-4

**Notes:** Ideen.md describes "a kind of landing page for each event — what, when, where." The attendee list is needed here so guests can see who else confirmed. The expired-state requirement follows from the mandatory expiry date in US-1. The cancelled-state requirement follows from US-18: a cancelled event remains visible until its expiry date but clearly communicates its cancelled status. Per Q-3 resolution: the app is a SPA with a RESTful API backend; JavaScript-dependent rendering is acceptable. The "no external resources" criterion derives from the privacy statutes. The "event not found" criterion covers the edge case where a guest navigates to a previously valid event link after the server has deleted the event data (US-12).

---

### US-3: RSVP to an event

**As a** guest,
**I want to** indicate whether I will attend an event,
**so that** the organizer and other guests can see who is coming.

**Acceptance Criteria:**

- [ ] The guest can choose "I'm attending" or "I'm not attending"
- [ ] When selecting "I'm attending", a name is required
- [ ] When selecting "I'm not attending", providing a name is optional
- [ ] The RSVP is submitted to the server and persisted server-side
- [ ] The guest's RSVP choice and name are stored in localStorage to prevent accidental duplicate submissions from the same device
- [ ] The event token, title, and date are also stored in localStorage alongside the RSVP data, so the local event overview (US-7) can display the event and link to it without server contact
- [ ] If a prior RSVP exists in localStorage for this event, the form pre-fills with the previous choice and name
- [ ] Re-submitting from the same device updates the existing RSVP entry rather than creating a duplicate
- [ ] RSVP submission is not possible after the event's expiry date
- [ ] RSVP submission is not possible if the event has been cancelled (US-18) [deferred until US-18 is implemented]
- [ ] No account, login, or data beyond the optionally entered name is required

**Dependencies:** US-2, T-4

**Notes:** RSVP flow specified in Ideen.md: "Ich komme" ("I'm attending", with name) / "Ich komme nicht" ("I'm not attending", name optional). LocalStorage device binding is the explicit duplicate-prevention mechanism — not a hard guarantee, but sufficient against accidental duplicates. Ideen.md acknowledges that malicious spam without accounts is an acceptable risk.

**Addendum (2026-03-04):** Honeypot field removed — overengineered for this project's scope.
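The pre-fill and update-instead-of-duplicate criteria can be sketched as a pair of pure functions over the local record. The field names and the idea of a server-issued `rsvpId` are assumptions; the spec only requires that re-submission from the same device updates rather than duplicates:

```typescript
// Device-bound duplicate prevention for RSVPs (US-3) — a sketch, not the
// implementation. `rsvpId` is a hypothetical identifier returned by the
// server on first submission.
interface LocalRsvp {
  rsvpId: string | null; // issued by the server on first submission
  attending: boolean;
  name: string;          // may be empty when not attending
}

type Store = Record<string, string>;

// Used to pre-fill the form with the previous choice and name, if any.
function prefill(store: Store, eventToken: string): LocalRsvp | null {
  const raw = store[`event:${eventToken}:rsvp`];
  return raw ? (JSON.parse(raw) as LocalRsvp) : null;
}

// Decides whether the client should create a new RSVP or update the
// existing one server-side.
function submitMethod(store: Store, eventToken: string): "create" | "update" {
  const prior = prefill(store, eventToken);
  return prior && prior.rsvpId ? "update" : "create";
}

function rememberRsvp(store: Store, eventToken: string, rsvp: LocalRsvp): void {
  store[`event:${eventToken}:rsvp`] = JSON.stringify(rsvp);
}
```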

---

### US-4: Manage guest list as organizer

**As an** event organizer,
**I want to** view all RSVPs for my event and remove individual entries if needed,
**so that** I have an accurate overview of attendance and can moderate erroneous or spam entries.

**Acceptance Criteria:**

- [ ] An organizer view is accessible from the event page when a valid organizer token for that event is present in localStorage
- [ ] When no organizer token is present, no organizer-specific UI (link, button, or view) is shown to the visitor
- [ ] The organizer view lists all RSVPs, showing each entry's name and attending status
- [ ] The organizer can permanently delete any individual RSVP entry
- [ ] After deletion, the attendee list on the public event page updates immediately to reflect the removal
- [ ] The organizer view is not accessible via a guessable URL — it requires the organizer token stored in localStorage during event creation (US-1)
- [ ] No additional authentication step is required beyond the presence of the organizer token in localStorage

**Dependencies:** US-1, T-4

**Notes:** The organizer token is established in localStorage during event creation (US-1). Removal capability is specified in Ideen.md: "Einsicht angemeldete Gäste, kann bei Bedarf Einträge entfernen" ("view registered guests, remove entries if needed"). The organizer view for *editing* event details is a separate story. Per Q-4 resolution: organizer access is confirmed as localStorage-based, using an organizer token separate from the event token. Organizer access is therefore device-bound.

---

### US-5: Edit event details as organizer

**As an** event organizer,
**I want to** update the details of an event I created,
**so that** guests always see accurate and up-to-date information if something changes.

**Acceptance Criteria:**

- [ ] The organizer can edit: title (required), description (optional), date and time (required), location (optional), expiry date (required)
- [ ] The expiry date can only be set to a date in the future — setting it to today or a past date is rejected with a clear validation message directing the organizer to use the delete feature (US-19) instead
- [ ] Editing is only accessible when a valid organizer token for the event is present in localStorage
- [ ] The edit form is pre-filled with the current event values
- [ ] Changes are persisted server-side upon submission
- [ ] After saving, the organizer is returned to the event page which reflects the updated details
- [ ] If the organizer token is absent or invalid, the edit UI is not shown and the server rejects the update request
- [ ] No account or additional authentication step is required beyond the organizer token

**Dependencies:** US-1, T-4

**Notes:** Ideen.md specifies "Updaten der Veranstaltung" ("updating the event") as an organizer capability. Editing the expiry date is an ordinary edit operation, not a deletion mechanism. The expiry date must always be in the future — if the organizer wants an event gone immediately, they use the explicit delete feature (US-19) instead of manipulating the expiry date. Explicit event cancellation is a separate, dedicated action covered by US-18. Visual highlighting of changes on the public event page (Ideen.md: "Änderungen zum ursprünglichen Inhalt werden hervorgehoben", i.e. "changes to the original content are highlighted") is a separate concern and is covered in US-9. Per Q-4 resolution: organizer authentication confirmed as localStorage-based organizer token.

---

### US-6: Bookmark an event

**As a** guest,
**I want to** bookmark an event on my current device without submitting an RSVP,
**so that** I can easily return to the event page later and stay aware of it without committing to attendance.

**Acceptance Criteria:**

- [ ] The event page shows a "Remember" / "Follow" action that requires no name or personal data
- [ ] Activating the action stores the event token, event title, and event date in localStorage — no server request is made
- [ ] The bookmark persists across browser sessions on the same device
- [ ] A second activation of the same action removes the bookmark ("unfollow"), again without any server contact
- [ ] The bookmark state is independent of the RSVP state: a guest who has already RSVPed on this device can still explicitly bookmark or un-bookmark the event
- [ ] If the event is expired, the bookmark action is still available (so the guest can still see it in their local overview)
- [ ] No personal data, IP address, or identifier is transmitted to the server when bookmarking or un-bookmarking

**Dependencies:** US-2, T-4

**Notes:** Ideen.md describes this as "Veranstaltung merken/folgen — rein lokal, kein Serverkontakt, kein Name nötig" ("remember/follow an event: purely local, no server contact, no name required"). Explicitly designed for two scenarios: (1) a guest who RSVPed on their phone and wants access on their laptop; (2) an undecided guest who wants to remember the event without committing. The bookmark is the prerequisite for the local event overview list (separate story). Because it is entirely client-side, it cannot be abused to fingerprint users. The event date is stored alongside the token and title so that US-7 (local event overview) can display it without making a server request. Locally cached title and date may become stale if the organizer edits the event — this is an acceptable trade-off for the fully-offline local overview; cached values are refreshed when the guest next visits the event page.
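The follow/unfollow toggle is a single client-side operation. A minimal sketch, assuming an illustrative key layout; no request leaves the device:

```typescript
// Purely client-side bookmark toggle (US-6) — a sketch. The key scheme is
// an assumption, not prescribed by the spec.
interface Bookmark { eventToken: string; title: string; date: string; }

type Store = Record<string, string>;

// Returns true when the event is bookmarked after the call (follow),
// false when the second activation removed it (unfollow).
function toggleBookmark(store: Store, b: Bookmark): boolean {
  const key = `event:${b.eventToken}:bookmark`;
  if (key in store) {
    delete store[key];
    return false;
  }
  store[key] = JSON.stringify(b);
  return true;
}
```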

---

### US-7: Local event overview list

**As a** user,
**I want to** see a list of all events I have created, bookmarked, or RSVPed to on this device,
**so that** I can quickly navigate back to any event without having to find the original link again.

**Acceptance Criteria:**

- [ ] The root page (`/`) lists all events tracked locally on this device, below a project header/branding section
- [ ] An event appears in the list if it was created from this device (US-1, detected via organizer token in localStorage), bookmarked (US-6), or RSVPed from this device (US-3)
- [ ] Each entry shows at minimum: event title, date, and the user's relationship to the event (organizer / attending / not attending / bookmarked only)
- [ ] Each entry is a link that navigates directly to the event page
- [ ] The list is populated entirely from localStorage — no server request is made to render it
- [ ] Events whose date has passed are still shown in the list but visually distinguished (e.g. marked as "ended")
- [ ] If a user navigates to an event from the local overview and the server responds that the event no longer exists (deleted per US-12 or US-19), the app displays an "event no longer exists" message and offers to remove the entry from the local list
- [ ] If no events are tracked locally, an empty state is shown (not an error)
- [ ] An individual entry can be removed from the list: for bookmarked-only events this removes the bookmark; for RSVPed events it removes the local record (the server-side RSVP is unaffected); for organizer-created events it removes the local organizer token and event data
- [ ] When removing an organizer-created event entry, a confirmation warning is shown explaining that this will revoke organizer access on this device
- [ ] No personal data or event data is transmitted to the server when viewing or interacting with the overview

**Dependencies:** None

**Notes:** Ideen.md explicitly mentions "Übersichtsliste im LocalStorage: Alle Events die man zugesagt oder gemerkt hat (vgl. spliit)" ("overview list in localStorage: all events one has RSVPed to or bookmarked; cf. spliit"). Per Q-2 resolution: the overview lives at the root page `/` with a project header/branding above the event list. This feature is entirely client-side. Entries are populated from three localStorage sources: the organizer tokens (US-1), the bookmark set (US-6), and the RSVP records (US-3). All upstream stories store the event title and date in localStorage alongside their primary data, enabling this overview to render without any server contact. Locally cached values (title, date) may become stale if the organizer edits the event via US-5; stale data is refreshed when the user next visits the event page. Removing an entry from the local overview does not delete the server-side RSVP — that is intentional and consistent with the no-account design (the RSVP belongs to the organizer's data, not solely the guest's). Removing an organizer-created event entry removes the organizer token from localStorage, meaning the user loses organizer access on this device — the confirmation warning protects against accidental loss. Note: This story has no structural dependencies but requires the frontend scaffold from T-4 (which includes T-1) to be practically implementable. It is only meaningfully testable after US-1, US-3, or US-6 populate localStorage with event data — without those stories, the overview has nothing to display beyond the empty state.
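Merging the three localStorage sources into one list can be sketched as a pure function. The record shapes mirror the upstream stories but are illustrative, and the precedence when an event appears in several sources (organizer over RSVP over bookmark) is an assumption, not a spec requirement:

```typescript
// Sketch of the local overview merge (US-7) over the three localStorage
// sources: created events (US-1), RSVPs (US-3), and bookmarks (US-6).
interface EventMeta { eventToken: string; title: string; date: string; }

interface OverviewEntry {
  eventToken: string;
  title: string;
  date: string;
  relation: "organizer" | "attending" | "not attending" | "bookmarked";
}

function buildOverview(
  created: EventMeta[],
  rsvps: (EventMeta & { attending: boolean })[],
  bookmarks: EventMeta[],
): OverviewEntry[] {
  const byToken = new Map<string, OverviewEntry>();
  // Lowest precedence first, so later writes win when an event is tracked
  // by more than one source.
  for (const b of bookmarks)
    byToken.set(b.eventToken, { ...b, relation: "bookmarked" });
  for (const r of rsvps)
    byToken.set(r.eventToken, {
      eventToken: r.eventToken,
      title: r.title,
      date: r.date,
      relation: r.attending ? "attending" : "not attending",
    });
  for (const c of created)
    byToken.set(c.eventToken, { ...c, relation: "organizer" });
  // Soonest event first; past events stay listed (the UI marks them "ended").
  return [...byToken.values()].sort((a, b) => a.date.localeCompare(b.date));
}
```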

---

### US-8: Add event to calendar

**As a** guest,
**I want to** add the event to my personal calendar,
**so that** I am reminded of it and always have the current date, time, and location at hand.

**Acceptance Criteria:**

- [ ] The event page provides a `.ics` file download link that generates a standards-compliant iCalendar (RFC 5545) file
- [ ] The `.ics` file includes: event title, description (if present), start date and time, location (if present), the public event URL, and a unique UID derived from the event token
- [ ] The `.ics` file is generated and served server-side; downloading it does not require JavaScript
- [ ] The event page also provides a `webcal://` subscription link so that calendar applications can subscribe and receive automatic updates when the event is edited (US-5)
- [ ] The `webcal://` endpoint serves the identical iCalendar content as the `.ics` download, using the same event token in the URL
- [ ] Both links are available to any visitor holding the event link — no RSVP, login, or personal data required
- [ ] No personal data, name, or IP address is logged when either link is accessed
- [ ] If the event has expired, both links remain available so the guest can still obtain the calendar record
- [ ] If the event has been cancelled (US-18), the `.ics` file and `webcal://` feed include `STATUS:CANCELLED` so that subscribed calendar applications reflect the cancellation on their next sync [deferred until US-18 is implemented]

**Dependencies:** US-2, T-4

**Notes:** Ideen.md specifies "Kalender-Integration: .ics-Download + optional webcal:// für Live-Updates bei Änderungen" ("calendar integration: .ics download + optional webcal:// for live updates on changes"). The `webcal://` subscription is especially valuable alongside US-5 (Edit event details): when the organizer updates the date or location, subscribed guests see the change reflected in their calendar on the next sync without having to revisit the event page. The UID in the `.ics` file must be stable across regenerations (derived from the event token) so that calendar applications update the existing entry rather than creating duplicates.
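The stable-UID requirement can be illustrated with a stripped-down payload builder. This is a sketch, not a full RFC 5545 implementation: text escaping and 75-octet line folding are omitted, the `PRODID` and the UID's domain suffix are placeholders, and a real file would also carry `DTSTAMP`:

```typescript
// Minimal iCalendar payload sketch for US-8. The UID is derived from the
// event token so calendar apps update the existing entry on re-import
// instead of duplicating it.
interface IcsEvent {
  token: string;
  title: string;
  description?: string;
  startUtc: string; // e.g. "20260601T180000Z"
  location?: string;
  url: string;
  cancelled?: boolean; // US-18 integration
}

function buildIcs(e: IcsEvent): string {
  const lines = [
    "BEGIN:VCALENDAR",
    "VERSION:2.0",
    "PRODID:-//example//event//EN", // placeholder product identifier
    "BEGIN:VEVENT",
    `UID:${e.token}@example.invalid`, // stable across regenerations
    `DTSTART:${e.startUtc}`,
    `SUMMARY:${e.title}`,
  ];
  if (e.description) lines.push(`DESCRIPTION:${e.description}`);
  if (e.location) lines.push(`LOCATION:${e.location}`);
  lines.push(`URL:${e.url}`);
  if (e.cancelled) lines.push("STATUS:CANCELLED");
  lines.push("END:VEVENT", "END:VCALENDAR");
  return lines.join("\r\n") + "\r\n"; // RFC 5545 requires CRLF line endings
}
```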

---

### US-9: Highlight changed event details

**As a** guest,
**I want to** see which event details have changed since I last visited the event page,
**so that** I immediately notice important updates like a rescheduled date, a new time, or a changed location.

**Acceptance Criteria:**

- [ ] When the organizer saves an edit (US-5), the server records which fields changed (title, description, date/time, location) and stores the timestamp of that edit alongside the event
- [ ] When a guest opens the event page, any field that was modified in the most recent organizer edit is visually highlighted (e.g. a "recently changed" indicator next to the field)
- [ ] The highlight is only shown to guests who have not visited the event page since the most recent edit — determined by comparing the event's `last_edited_at` timestamp against a `last_seen_at` value stored in localStorage per event token
- [ ] On first visit (no `last_seen_at` in localStorage), no highlight is shown — the event is new to the guest, so highlighting individual fields would be misleading
- [ ] After the event page is rendered, the guest's `last_seen_at` in localStorage is updated to match the current `last_edited_at`, so the highlight disappears on the next visit
- [ ] The highlight mechanism is entirely client-side: the `last_seen_at` timestamp is stored and read locally; no visit data is transmitted to the server
- [ ] If the organizer makes multiple successive edits, only the fields changed in the most recent edit are highlighted; earlier intermediate changes are not tracked
- [ ] If the event has not been edited since creation, no highlights are shown

**Dependencies:** US-2, US-5, T-4

**Notes:** Ideen.md specifies "Änderungen zum ursprünglichen Inhalt (z.b. geändertes datum/ort) werden iwi hervorgehoben" ("changes to the original content, e.g. a changed date/location, are highlighted somehow"). The comparison is against the most recent edit (not the original creation values) — simpler and more actionable for guests. Storing the set of changed field names server-side (alongside `last_edited_at`) is necessary because the client cannot reconstruct which fields changed from timestamps alone. The highlight logic runs client-side using only locally stored state; no server round-trip is required beyond the normal event page load.
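The decision rule in the criteria reduces to a small pure function. Names mirror the spec's `last_edited_at` / `last_seen_at`, but representing timestamps as epoch milliseconds is an implementation assumption:

```typescript
// Highlight decision for US-9 — a sketch of the client-side rule.
interface EditState {
  lastEditedAt: number | null; // null: never edited since creation
  changedFields: string[];     // fields changed in the most recent edit (from the server)
}

// Returns the field names to highlight for this guest.
// lastSeenAt is the per-event value from localStorage (null on first visit).
function fieldsToHighlight(edit: EditState, lastSeenAt: number | null): string[] {
  if (edit.lastEditedAt === null) return []; // never edited: nothing to show
  if (lastSeenAt === null) return [];        // first visit: highlighting would mislead
  return lastSeenAt < edit.lastEditedAt ? edit.changedFields : [];
}
```

After rendering, the client writes the current `last_edited_at` back to localStorage as the new `last_seen_at`, which is why the highlight clears on the next visit.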

---

### US-10a: Post update messages as organizer

**As an** event organizer,
**I want to** post short update messages on the event page and manage them,
**so that** guests are informed of announcements or notes without requiring a separate communication channel.

**Acceptance Criteria:**

- [ ] From the organizer view, the organizer can compose and submit a plain-text update message
- [ ] Each submitted message is stored server-side, associated with the event, and timestamped at the time of posting
- [ ] All update messages for an event are displayed on the public event page in reverse chronological order (newest first), each with a human-readable timestamp
- [ ] Update messages cannot be posted after the event's expiry date
- [ ] The organizer can delete any previously posted update message from the organizer view; deletion is permanent and the message is immediately removed from the public event page
- [ ] If the organizer token is absent or invalid, the compose and delete UI is not shown and the server rejects any attempt to post or delete update messages
- [ ] No account or additional authentication step is required beyond the organizer token
- [ ] No personal data or IP address is logged when update messages are fetched or posted

**Dependencies:** US-1, US-2, T-4

**Notes:** Ideen.md specifies "Veranstalter kann Updatenachrichten im Event posten" ("the organizer can post update messages in the event"). This story covers the server-side feature: posting, displaying, and deleting update messages. The client-side read-state tracking (badge/indicator for new updates) is a separate concern covered in US-10b. Per Q-3 resolution: the app is a SPA; JavaScript-dependent rendering is acceptable for update messages. Cancelled events (US-18): posting update messages is not blocked by cancellation, only by expiry (AC 4). This is intentional — the organizer may want to post post-cancellation communication (e.g. a rescheduling notice or explanation). The cancellation message (US-18) is a static one-time message, while update messages are a stream of announcements serving a different purpose.

---

### US-10b: New-update indicator for guests

**As a** guest,
**I want to** see a visual indicator when there are update messages I haven't seen yet,
**so that** I immediately notice new announcements without having to read through all messages.

**Acceptance Criteria:**

- [ ] Guests who open the event page and have unread updates (i.e. updates posted since their last visit) see a visual indicator — a badge or highlighted section — drawing attention to the new messages
- [ ] "Read" state is tracked entirely in localStorage: on page load, the timestamp of the newest update is compared to an `updates_last_seen_at` value stored locally per event token; if the update is newer, the indicator is shown
- [ ] After the event page is rendered, `updates_last_seen_at` in localStorage is set to the current latest update timestamp, so the indicator clears on the next visit
- [ ] On first visit (no `updates_last_seen_at` in localStorage), no "new update" indicator is shown; updates are displayed as-is without a "new" badge
- [ ] No server request is made to record that a guest read the updates — tracking is purely local

**Dependencies:** US-10a

**Notes:** Ideen.md specifies "pro Device wird via LocalStorage gemerkt was man schon gesehen hat (Badge/Hervorhebung für neue Updates)" ("per device, localStorage remembers what has already been seen; badge/highlight for new updates"). This story is the client-side read-state complement to US-10a (which covers posting, displaying, and deleting messages). The first-visit exclusion (no badge on first open) is intentional — a guest who has never seen the event before would find all updates misleading to label as "new". The `updates_last_seen_at` key is separate from the `last_seen_at` key used in US-9. This story is distinct from US-9 (change highlighting for edited event fields): US-9 highlights structural field changes (date, location, title), while this story covers awareness of new free-form announcements.

---

### US-11: Generate a QR code for an event

**As an** event organizer,
**I want to** generate and download a QR code for my event,
**so that** I can print it on posters or flyers and let people access the event page by scanning it.

**Acceptance Criteria:**

- [ ] The event page displays a QR code that encodes the public event URL
- [ ] The QR code is generated entirely server-side — no external QR code service is called
- [ ] The QR code is downloadable as a file suitable for printing (e.g. SVG or high-resolution PNG)
- [ ] The QR code download is a direct link to a server endpoint — the actual file download does not require client-side generation
- [ ] The QR code is accessible to any visitor holding the event link, not only the organizer
- [ ] No personal data, IP address, or identifier is transmitted to any third party when the QR code is generated or downloaded
- [ ] The QR code remains available and downloadable after the event has expired

**Dependencies:** US-2, T-4

**Notes:** Ideen.md specifies "QR Code generieren (z.B. für Plakate/Flyer)" ("generate a QR code, e.g. for posters/flyers"). The QR code must be server-side generated — calling an external service would violate the no-external-dependencies-that-phone-home statute. The code encodes only the public event URL; no additional metadata is embedded. Making it available to all visitors (not just the organizer) reflects the use case: the organizer can hand printed material to guests, and guests who received a physical flyer can share the event link digitally by re-scanning. Per Q-3 resolution: the app is a SPA; client-side rendering of the QR code display is acceptable. The download mechanism remains a direct server endpoint link.

---

### US-12: Automatic data deletion after expiry date

**As a** guest,
**I want** all event data — including my RSVP and any other stored personal information — to be automatically and permanently deleted after the event's expiry date,
**so that** I can trust that data I submitted is not retained on the server longer than necessary.

**Acceptance Criteria:**

- [ ] The server runs a periodic cleanup process that deletes all data associated with events whose expiry date has passed
- [ ] The cleanup deletes the event record itself along with all associated RSVPs, update messages (US-10a), field-change metadata (US-9), stored header images (US-16) [deferred until US-16 is implemented], and cancellation state (US-18 if applicable)
- [ ] After deletion, the event's public URL returns a clear "event not found" response — no partial data is ever served
- [ ] The cleanup process runs automatically without manual operator intervention (e.g. a scheduled job or on-request lazy cleanup triggered by access attempts)
- [ ] No log entry records the names, RSVPs, or any personal data of the deleted event's guests — the deletion is silent from a logging perspective
- [ ] Extending the expiry date via US-5 (before it has passed) delays the deletion accordingly — the cleanup always uses the current stored expiry date
- [ ] The cleanup is not triggered early: data is retained until the expiry date has passed, not before

**Dependencies:** US-1, T-4

**Notes:** Ideen.md specifies "Ablaufdatum als Pflichtfeld, nach dem alle gespeicherten Daten gelöscht werden" ("expiry date as a mandatory field, after which all stored data is deleted"). This is a privacy guarantee, not merely a housekeeping task. The mandatory expiry date in US-1 is only meaningful if the server actually enforces deletion. The implementation strategy (scheduled cron job, lazy cleanup on access, or both) is an architectural decision to be made during implementation. What matters at the story level is the observable behavior: data is gone after expiry, and no residual records remain. LocalStorage entries on guests' devices are unaffected by server-side deletion — that is intentional and consistent with the client-side-only nature of localStorage.
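Whichever strategy is chosen (scheduled job, lazy cleanup, or both), the observable rule is the same and can be stated as a pure predicate. Treating the expiry date itself as still valid (strict comparison) is an assumption here; the story only requires that data is never purged before expiry:

```typescript
// Expiry rule driving the cleanup in US-12 — a sketch of the predicate,
// not a deletion implementation.
interface StoredEvent {
  id: string;
  expiryDate: string; // ISO date, e.g. "2026-06-01"
}

// An event is purged only after its expiry date has passed, never before.
// ISO 8601 date strings compare correctly as plain strings.
function isExpired(e: StoredEvent, today: string): boolean {
  return e.expiryDate < today;
}

// Candidates for the next cleanup run (scheduled or lazy).
function idsToPurge(events: StoredEvent[], today: string): string[] {
  return events.filter(e => isExpired(e, today)).map(e => e.id);
}
```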

---

### US-13: Limit the number of active events per instance

**As a** self-hoster,
**I want to** configure a maximum number of simultaneously active events via a server environment variable,
**so that** I can prevent storage exhaustion and limit potential abuse on my instance without modifying code.

**Acceptance Criteria:**

- [ ] The server reads a configurable environment variable (e.g. `MAX_ACTIVE_EVENTS`) at startup to determine the event cap
- [ ] If the configured limit is reached, any attempt to create a new event is rejected with a clear error response indicating the instance is at capacity
- [ ] The error is surfaced to the user on the event creation form — not as a silent failure
- [ ] If the environment variable is unset or empty, no limit is applied (unlimited events by default, suitable for personal or trusted-group instances)
- [ ] Only non-expired events count toward the limit; expired events awaiting cleanup are not counted
- [ ] The limit is enforced server-side; it cannot be bypassed by the client
- [ ] No personal data is logged when the limit is hit — only the rejection response is returned
- [ ] The `MAX_ACTIVE_EVENTS` environment variable is documented in the README's self-hosting section (configuration table)

**Dependencies:** US-1, T-4

**Notes:** Ideen.md lists "Max aktive Events als serverseitige Konfiguration (env variable)" ("maximum active events as a server-side configuration, env variable") under security/abuse-prevention measures. This is a deployment-time configuration intended for the self-hoster, not a user-facing feature in the traditional sense. The story is written from the self-hoster's perspective because they are the ones who configure and benefit from this capability. The environment variable approach aligns with the Dockerfile-based deployment model described in CLAUDE.md. Only non-expired events count toward the limit — consistent with US-12 (expired events are deleted and must not permanently consume capacity).
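The unset-or-empty-means-unlimited rule and the capacity check can be sketched as two small functions. Rejecting a non-integer or negative value at startup is an assumption; the story does not specify validation behavior:

```typescript
// MAX_ACTIVE_EVENTS handling for US-13 — a sketch. Reads the cap once at
// startup; null means "no limit".
function readEventCap(env: Record<string, string | undefined>): number | null {
  const raw = env["MAX_ACTIVE_EVENTS"];
  if (raw === undefined || raw.trim() === "") return null; // unset/empty: unlimited
  const n = Number(raw);
  // Failing fast on malformed values is an assumption, not a spec requirement.
  if (!Number.isInteger(n) || n < 0) throw new Error(`invalid MAX_ACTIVE_EVENTS: ${raw}`);
  return n;
}

// Enforced server-side on every creation request; only non-expired events
// are counted (consistent with US-12).
function canCreateEvent(cap: number | null, activeNonExpiredCount: number): boolean {
  return cap === null || activeNonExpiredCount < cap;
}
```

On a Node backend the call site would be `readEventCap(process.env)`; passing the map in keeps the function testable.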

---

### US-14: Install as Progressive Web App

**As a** guest,
**I want to** install the app on my device from the browser,
**so that** it feels like a native app and I can launch it directly from my home screen.

**Acceptance Criteria:**

- [ ] The app serves a valid web app manifest with at minimum: app name, icons in multiple sizes, standalone display mode, theme color, and a start URL
- [ ] The app meets browser installability requirements (manifest + registered service worker) so that the browser's "Add to Home Screen" / install prompt is available on supported mobile and desktop browsers
- [ ] When launched from the home screen, the app opens in standalone mode — without browser address bar or navigation chrome
- [ ] The app displays the configured icon and name on the device's home screen and in the OS app switcher
- [ ] On repeat visits, previously loaded pages and app assets load quickly due to service worker caching
- [ ] No external resources are fetched by the manifest or service worker — all assets are self-hosted
- [ ] The manifest's start URL points to the root page (`/`), which serves the local event overview (US-7), so returning users see their tracked events immediately

**Dependencies:** T-4

**Notes:** Ideen.md states "Soll als PWA im Browser laufen / Damit es sich wie eine normale app anfühlt" ("should run as a PWA in the browser / so that it feels like a normal app"). The PWA requirement is about installability and native-app feel — the app should be indistinguishable from a native app when launched from the home screen. Per Q-2 resolution: the local event overview lives at `/` and serves as the start URL. Per Q-3 resolution: the app is a SPA with a RESTful API; the service worker caching strategy will be determined during implementation. Note: While this story depends only on T-4 structurally, the service worker and manifest are only meaningfully testable after other stories (e.g. US-2, US-7) provide actual pages and assets to cache and serve from the home screen.
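A manifest covering the fields the criteria call out might look as follows. The app name, colors, and icon paths are placeholders; only the shape (W3C Web App Manifest fields: name, icons, standalone display, theme color, start URL) is what the story requires:

```typescript
// Hypothetical web app manifest content for US-14. All values are
// placeholders; in practice this object would be served as /manifest.json.
const manifest = {
  name: "Event App",          // placeholder
  short_name: "Events",       // placeholder
  start_url: "/",             // root page = local event overview (US-7), per Q-2
  display: "standalone",      // no browser address bar or chrome when launched
  theme_color: "#1a1a2e",     // placeholder
  background_color: "#ffffff",
  icons: [
    { src: "/icons/icon-192.png", sizes: "192x192", type: "image/png" },
    { src: "/icons/icon-512.png", sizes: "512x512", type: "image/png" },
  ],
};

// Self-hosting check: every icon must be a same-origin (relative) URL,
// per the "no external resources" criterion.
function allAssetsSelfHosted(m: typeof manifest): boolean {
  return m.icons.every(i => i.src.startsWith("/"));
}
```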

---

### US-15: Choose event color theme

**As an** event organizer,
**I want to** choose a visual color theme for my event page,
**so that** the event page reflects the mood or style of the event and stands out visually.

**Acceptance Criteria:**

- [ ] During event creation or editing (US-5), the organizer can select from a set of predefined color themes (e.g. color schemes or visual styles)
- [ ] A default theme is applied if the organizer makes no selection
- [ ] Theme selection is persisted server-side alongside the event data
- [ ] The guest-facing event page renders with the selected color theme
- [ ] Themes affect only the individual event page, not the app's global UI (navigation, local overview, forms)
- [ ] The customization UI is part of the event creation and editing forms
- [ ] No external resources are required for any predefined theme — all styles are self-contained

**Dependencies:** US-1, US-2, T-4

**Notes:** Ideen.md mentions "Irgendwie auch Designbar, sofern man das will" ("somehow also customizable in design, if desired"). Per Q-1 resolution: predefined themes for event pages. This is event-level styling — each event can have its own visual identity. The app's global appearance (including dark/light mode, US-17) is a separate concern. No external service or API key is needed for predefined themes. The interaction between event-level color themes and the app-level dark/light mode (US-17) must be considered during implementation: predefined themes should remain readable and visually coherent regardless of whether the user has dark or light mode active on the surrounding app chrome (see also US-17 notes).
|
|
||||||
|
|
||||||
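The default-theme fallback described above can be sketched as a small lookup. Theme names, palette values, and the CSS-variable approach are illustrative assumptions, not taken from the codebase:

```typescript
// Predefined, self-contained event themes (names and colors are invented
// for illustration). An unknown or missing id falls back to the default
// theme, matching the "default theme if no selection" criterion.
type Theme = { background: string; accent: string; text: string };

const THEMES: Record<string, Theme> = {
  default: { background: "#ffffff", accent: "#2563eb", text: "#111827" },
  sunset: { background: "#fff7ed", accent: "#ea580c", text: "#431407" },
  forest: { background: "#f0fdf4", accent: "#16a34a", text: "#052e16" },
};

function resolveEventTheme(themeId?: string): Theme {
  return (themeId && THEMES[themeId]) || THEMES.default;
}

// The event page could then scope the palette via CSS custom properties,
// so the theme affects only that page, not the app's global UI:
function themeToCssVars(theme: Theme): string {
  return `--event-bg:${theme.background};--event-accent:${theme.accent};--event-text:${theme.text}`;
}
```

Scoping the variables to the event page's root element is one way to keep event themes independent of the app-level dark/light mode (US-17).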
---
### US-16: Select event header image from Unsplash

**As an** event organizer,
**I want to** search for and select a header image for my event page via an integrated image search,
**so that** the event page has a visually appealing header that matches the event's theme.

**Acceptance Criteria:**

- [ ] During event creation or editing (US-5), the organizer can search for a header image via an integrated Unsplash search
- [ ] The Unsplash search is server-proxied: the client sends the search query to the app's backend, which calls the Unsplash API and returns results — the client never contacts Unsplash directly
- [ ] When an image is selected, the server downloads and stores the image locally on disk; it is served from the app's own domain, not from Unsplash's CDN
- [ ] Proper Unsplash attribution (photographer name, link to Unsplash) is displayed alongside the header image on the event page, as required by the Unsplash API terms
- [ ] The organizer can remove a previously selected header image
- [ ] The guest-facing event page renders with the selected header image (if set)
- [ ] No guest data, IP address, or identifier is transmitted to Unsplash or any third party from the guest's browser
- [ ] If the server has no Unsplash API key configured, the image search feature is unavailable — the option is simply not shown, no error
- [ ] If the API key is removed from the config after images were already stored, existing images continue to render from disk; only the search/select UI becomes unavailable; the event page never breaks due to a missing API key
- [ ] When the event expires and is deleted (US-12), the stored image file is deleted along with all other event data
- [ ] The `UNSPLASH_API_KEY` environment variable and the persistent volume requirement for image storage are documented in the README's self-hosting section (configuration table and storage notes)

**Dependencies:** US-1, US-2, T-4

**Notes:** Per Q-1 resolution: Unsplash image search as an optional feature alongside predefined themes (US-15). The Unsplash integration is privacy-safe because it is server-proxied: guests never contact Unsplash; the server fetches and stores images locally. The Unsplash API requires attribution (photographer name and Unsplash link) — this is a legal/terms requirement. The Unsplash API key is an optional deployment configuration for the self-hoster; if not configured, only predefined themes (US-15) are available. Image storage on disk requires the hoster to configure a persistent volume — standard Docker practice, documented in README alongside the docker-compose example.
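The server-proxied search can be sketched as a helper that builds the outbound request on the backend. The endpoint and `Client-ID` header follow Unsplash's publicly documented API, but verify the current Unsplash docs before relying on the exact shape; the page size is an arbitrary choice:

```typescript
// The client never builds this request — it only talks to the app's own
// backend, which calls Unsplash server-side with the configured key.
function buildUnsplashSearchRequest(query: string, accessKey: string) {
  const url = new URL("https://api.unsplash.com/search/photos");
  url.searchParams.set("query", query);
  url.searchParams.set("per_page", "12");
  return {
    url: url.toString(),
    headers: { Authorization: `Client-ID ${accessKey}` },
  };
}

// If no key is configured, the feature is simply absent (no error shown):
function unsplashEnabled(accessKey: string | undefined): boolean {
  return typeof accessKey === "string" && accessKey.length > 0;
}
```

Because only the backend holds the key and makes the call, no guest IP or identifier ever reaches Unsplash, matching the privacy criterion above.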
---
### US-17: Dark/light mode

**As a** user,
**I want to** switch between dark and light mode for the app's interface,
**so that** I can use the app comfortably in different lighting conditions and according to my personal preference.

**Acceptance Criteria:**

- [ ] The app respects the user's operating system / browser preference (`prefers-color-scheme`) as the default mode on first visit
- [ ] A visible toggle allows the user to manually switch between dark and light mode
- [ ] The user's manual preference is stored in localStorage and takes precedence over the system preference on subsequent visits
- [ ] The toggle is accessible from any page in the app (e.g. in a header or navigation element)
- [ ] Dark/light mode affects the app's global UI: navigation, local event overview, forms, and all non-event-page chrome
- [ ] Event pages use their own color theme (US-15) which is independent of the app-level dark/light mode
- [ ] The mode switch is purely client-side — no server request is made, no preference data is transmitted
- [ ] Both modes meet accessibility contrast requirements (WCAG AA minimum)

**Dependencies:** None

**Notes:** This is app-level theming — completely separate from the event-level color themes (US-15). Dark/light mode affects the overall UI (navigation, local overview, forms, etc.), not individual event pages. The interaction between app-level dark/light mode and event-level color themes (US-15) should be considered during implementation to ensure readability in both modes. Note: This story has no structural dependencies but requires the frontend scaffold from T-4 (which includes T-1) to be practically implementable.
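The precedence rule above — manual choice wins, system preference is the fallback — reduces to a small pure function. The storage key and wiring shown in comments are assumptions:

```typescript
// A stored manual choice takes precedence; otherwise fall back to the
// system preference (prefers-color-scheme). Unrecognized stored values
// are ignored rather than trusted.
type Mode = "dark" | "light";

function resolveMode(stored: string | null, systemPrefersDark: boolean): Mode {
  if (stored === "dark" || stored === "light") return stored;
  return systemPrefersDark ? "dark" : "light";
}

// In the browser this might be wired up roughly as (not executed here;
// "color-mode" is a hypothetical localStorage key):
//   const stored = localStorage.getItem("color-mode");
//   const systemDark = window.matchMedia("(prefers-color-scheme: dark)").matches;
//   document.documentElement.dataset.mode = resolveMode(stored, systemDark);
```

Both the read and the write stay client-side, matching the no-server-request criterion.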
---
### US-18: Cancel an event as organizer

**As an** event organizer,
**I want to** explicitly cancel my event and optionally provide a reason,
**so that** guests clearly see the event is cancelled and understand why.

**Acceptance Criteria:**

- [ ] The organizer view provides a dedicated "Cancel event" action, separate from editing event details (US-5)
- [ ] When cancelling, the organizer can optionally enter a cancellation message (reason/explanation)
- [ ] When cancelling, the organizer can optionally adjust the event's expiry date (to control how long the cancellation notice remains visible before data deletion per US-12); the adjusted date must be in the future — consistent with US-5's expiry date constraint and US-19's role as the immediate-removal mechanism
- [ ] A confirmation step is required before cancellation is finalized
- [ ] After cancellation, the event page clearly displays a "cancelled" state with the cancellation message if provided (US-2)
- [ ] RSVPs are no longer possible on a cancelled event — the RSVP form is hidden and the server rejects any RSVP submissions (US-3)
- [ ] The event remains visible and accessible via its link until its expiry date, so guests can still see what happened and why
- [ ] After cancellation, the organizer can still edit the cancellation message but cannot "un-cancel" the event
- [ ] Cancellation is only accessible when a valid organizer token for the event is present in localStorage
- [ ] If the organizer token is absent or invalid, the cancel action is not shown and the server rejects cancellation requests
- [ ] The cancellation state and message are persisted server-side
- [ ] No account or additional authentication step is required beyond the organizer token

**Dependencies:** US-1, T-4

**Notes:** The overseer identified that cancellation was previously conflated with shortening the expiry date in US-5, which was unintuitive and conflated two fundamentally different actions. Editing the expiry date (US-5) is now purely an edit operation; cancellation is an explicit, dedicated action. The event remains visible after cancellation (unlike immediate deletion) until the expiry date passes, giving guests time to see the cancellation notice. The optional expiry date adjustment during cancellation lets the organizer control how long the notice stays visible — e.g. keep it up for a week so everyone sees it, or shorten it to trigger faster cleanup. The "no un-cancel" constraint keeps the model simple: cancellation is a one-way state transition. If the organizer made a mistake, they can create a new event.
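The one-way transition and the future-expiry constraint can be sketched as a pure domain function. The record shape and state names are assumptions for illustration, not taken from the codebase:

```typescript
// Cancellation is a one-way state transition: cancelling twice is an
// error ("no un-cancel"), and an adjusted expiry must lie in the future.
type EventState = "active" | "cancelled";

interface EventRecord {
  state: EventState;
  cancellationMessage?: string;
  expiresAt: Date;
}

function cancelEvent(
  event: EventRecord,
  now: Date,
  message?: string,
  newExpiry?: Date,
): EventRecord {
  if (event.state === "cancelled") {
    throw new Error("Event is already cancelled (no un-cancel)");
  }
  if (newExpiry !== undefined && newExpiry.getTime() <= now.getTime()) {
    throw new Error("Adjusted expiry date must be in the future");
  }
  return {
    ...event,
    state: "cancelled",
    cancellationMessage: message,
    expiresAt: newExpiry ?? event.expiresAt,
  };
}
```

Keeping the rule in one function makes the server-side rejection of invalid cancellations straightforward to test.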
---
### US-19: Delete an event as organizer

**As an** event organizer,
**I want to** immediately and permanently delete my event and all its data,
**so that** I can remove an event entirely when it was created by mistake or is no longer needed.

**Acceptance Criteria:**

- [ ] The organizer view provides a dedicated "Delete event" action, separate from editing (US-5) and cancelling (US-18)
- [ ] A confirmation warning is displayed before deletion, clearly stating that the action is immediate, permanent, and irreversible — all event data including RSVPs, update messages, and images will be lost
- [ ] Upon confirmation, the server permanently deletes the event record and all associated data: RSVPs, update messages (US-10a), field-change metadata (US-9), stored header images (US-16), and cancellation state (US-18 if applicable)
- [ ] After deletion, the event's public URL returns an "event not found" response (consistent with US-2 and US-12 behavior)
- [ ] After successful deletion, the app removes the event's organizer token and metadata from localStorage and redirects the organizer to the root page (`/`)
- [ ] Deletion is accessible only when a valid organizer token for the event is present in localStorage
- [ ] If the organizer token is absent or invalid, the delete action is not shown and the server rejects deletion requests
- [ ] No personal data or event data is logged during deletion — the deletion is silent from a logging perspective
- [ ] No account or additional authentication step is required beyond the organizer token
- [ ] The event can be deleted regardless of its current state (active, cancelled, or expired)

**Dependencies:** US-1, T-4

**Notes:** The overseer identified that using the expiry date as a deletion mechanism (setting it to today or a past date in US-5) was unintuitive and conflated two different actions. US-5 now enforces that the expiry date can only be set to a future date. If the organizer wants the event gone immediately, they use this explicit deletion feature. Unlike cancellation (US-18), which keeps the event visible with a cancellation notice until the expiry date, deletion removes the event entirely and immediately. This is the organizer's "nuclear option" — useful when the event was created by mistake, contains wrong information, or is no longer needed at all. The deletion behavior is identical to what US-12 does automatically after expiry, but triggered manually and immediately by the organizer.
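The client-side cleanup step after a successful delete can be sketched as removing the event's entry from the locally stored token map. The storage shape (event id mapped to organizer token) is an assumption:

```typescript
// Returns a new map without the deleted event's organizer token; the
// input is left untouched so the caller can write the result back to
// localStorage in one step.
function pruneDeletedEvent(
  tokens: Record<string, string>,
  eventId: string,
): Record<string, string> {
  const { [eventId]: _removed, ...rest } = tokens;
  return rest;
}
```

After pruning, the app would persist the result and redirect to `/`, per the acceptance criteria above.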
---
### US-20: 404 page

**As a** user who navigates to a non-existent URL,
**I want to** see a helpful error page,
**so that** I can find my way back instead of seeing a blank screen.

**Acceptance Criteria:**

- [ ] Unknown routes show a "Page not found" message
- [ ] The page includes a link back to the home page
- [ ] The page follows the design system

**Dependencies:** None

**Notes:** Identified during US-1 post-review: navigating to an unknown path currently shows a blank page because the Vue Router has no catch-all route. This is a small UX story but important for polish. Note: This story has no structural dependencies but requires the frontend scaffold from T-4 (which includes T-1) to be practically implementable.
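The missing catch-all route mentioned in the notes would look roughly like this in vue-router 4 syntax; the component names and route names are hypothetical placeholders (plain objects stand in for real view components here):

```typescript
// Sketch of a route table with a catch-all entry. In vue-router 4, the
// parameter syntax /:pathMatch(.*)* matches any path not handled above.
const HomeView = {}; // placeholder for the real home view component
const NotFoundView = {}; // placeholder for the 404 view component

const routes = [
  { path: "/", name: "home", component: HomeView },
  // Any unknown path falls through to the 404 page:
  { path: "/:pathMatch(.*)*", name: "not-found", component: NotFoundView },
];
```

The catch-all must come last so defined routes keep taking precedence.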
`specs/001-monorepo-setup/spec.md` (new file, 63 lines)
# Feature Specification: Initialize Monorepo Structure

**Feature**: `001-monorepo-setup`
**Created**: 2026-03-06
**Status**: Implemented
**Source**: Migrated from spec/setup-tasks.md

> **Note**: This is a setup task (infrastructure), not a user-facing feature. It establishes the repository structure as a prerequisite for all subsequent development work.

## User Scenarios & Testing

### User Story 1 - Developer can scaffold and build the monorepo (Priority: P1)

A developer cloning the repository for the first time can build both the backend and frontend from a clean checkout with no source code beyond the scaffold.

**Why this priority**: Without a working monorepo structure, no further development or CI work is possible.

**Independent Test**: Clone the repository, run `./mvnw verify` in `backend/` and `npm run build` in `frontend/` — both must succeed against the empty scaffold.

**Acceptance Scenarios**:

1. **Given** a fresh clone of the repository, **When** the developer inspects the root, **Then** separate `backend/` and `frontend/` directories exist alongside shared top-level files (README, Dockerfile, CLAUDE.md, LICENSE, .gitignore).
2. **Given** the `backend/` directory, **When** the developer runs `./mvnw verify`, **Then** the build succeeds with no source code beyond the hexagonal/onion architecture scaffold using Java (latest LTS), Spring Boot, and Maven.
3. **Given** the `frontend/` directory, **When** the developer runs `npm run build`, **Then** the build succeeds with the Vue 3 + Vite + TypeScript + Vue Router scaffold.
4. **Given** the repository root, **When** the developer inspects `.gitignore`, **Then** build artifacts, IDE files, and dependency directories for both Java/Maven and Node/Vue are covered.

---

### Edge Cases

- What happens when a developer uses an older Java version? [NEEDS EXPANSION]
- How does the scaffold behave with no `.env` or environment variables set? [NEEDS EXPANSION]

## Requirements

### Functional Requirements

- **FR-001**: Repository MUST have a `backend/` directory containing a Java Spring Boot Maven project with hexagonal/onion architecture scaffold.
- **FR-002**: Repository MUST have a `frontend/` directory containing a Vue 3 project with Vite, TypeScript, and Vue Router.
- **FR-003**: Repository MUST include shared top-level files: README, Dockerfile, CLAUDE.md, LICENSE (GPL), and .gitignore.
- **FR-004**: Both projects MUST build successfully from an empty scaffold (no application source code required).
- **FR-005**: `.gitignore` MUST cover build artifacts, IDE files, and dependency directories for both Java/Maven and Node/Vue.

### Key Entities

- **Monorepo**: Single git repository containing both `backend/` and `frontend/` as separate projects sharing a root.

## Success Criteria

### Measurable Outcomes

- **SC-001**: `cd backend && ./mvnw verify` exits 0 on a clean checkout.
- **SC-002**: `cd frontend && npm run build` exits 0 on a clean checkout.
- **SC-003**: All six acceptance criteria are checked off (all complete — status: Implemented).

### Acceptance Criteria (original)

- [x] Single repository with `backend/` and `frontend/` directories
- [x] Backend: Java (latest LTS), Spring Boot, Maven, hexagonal/onion architecture scaffold
- [x] Frontend: Vue 3 with Vite as bundler, TypeScript, Vue Router
- [x] Shared top-level files: README, Dockerfile, CLAUDE.md, LICENSE (GPL), .gitignore
- [x] Both projects build successfully with no source code (empty scaffold)
- [x] .gitignore covers build artifacts, IDE files, and dependency directories for both Java/Maven and Node/Vue
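A `.gitignore` meeting FR-005 might look like this minimal sketch covering both stacks (entries are typical defaults, not the project's actual file):

```
# Java / Maven
backend/target/

# Node / Vue
frontend/node_modules/
frontend/dist/

# IDE
.idea/
*.iml
.vscode/
```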
`specs/002-docker-deployment/spec.md` (new file, 55 lines)
# Feature Specification: Docker Deployment Setup

**Feature**: `002-docker-deployment`
**Created**: 2026-03-06
**Status**: Implemented
**Source**: Migrated from spec/setup-tasks.md

> Note: This is a setup task (infrastructure), not a user-facing feature.

## User Scenarios & Testing

### User Story 1 - Build and run the application as a single Docker container (Priority: P1)

A developer or operator can build the project with a single `docker build .` command and run it as a self-contained container. The multi-stage Dockerfile compiles backend and frontend in isolation and produces a minimal runnable image.

**Why this priority**: Docker packaging is the primary deployment mechanism. Without it, no deployment can happen.

**Independent Test**: Run `docker build .` and then start the container. Verify the health-check endpoint responds.

**Acceptance Scenarios**:

1. **Given** the repository is checked out, **When** `docker build .` is executed, **Then** the build succeeds and produces a working image.
2. **Given** a built Docker image, **When** a container is started from it, **Then** the health-check endpoint responds successfully.
3. **Given** the repository is checked out, **When** a `.dockerignore` file is present, **Then** build artifacts, IDE files, and unnecessary files are excluded from the build context.

### Edge Cases

- What happens when the frontend build fails inside the Docker build? The multi-stage build must fail visibly so the broken image is never produced.
- What happens if the health-check endpoint is not reachable? Docker and orchestrators mark the container as unhealthy.

## Requirements

### Functional Requirements

- **FR-001**: A multi-stage Dockerfile MUST exist at the repo root that builds both backend and frontend and produces a single runnable container.
- **FR-002**: The Dockerfile MUST use separate build stages so backend and frontend build dependencies are not included in the final image.
- **FR-003**: A `.dockerignore` file MUST exclude build artifacts, IDE files, and unnecessary files from the build context.
- **FR-004**: The container MUST expose a health-check endpoint so Docker and orchestrators can verify the app is alive.
- **FR-005**: `docker build .` MUST succeed and produce a working image.
- **FR-006**: A started container MUST respond successfully to the health-check endpoint.

> Scope note (Addendum 2026-03-04): Database connectivity (`DATABASE_URL`), runtime environment variable configuration (Unsplash API key, max active events), and README docker-compose documentation are out of scope for T-2. They are deferred to T-4 where JPA and Flyway are introduced and can be tested end-to-end.

### Key Entities

- **Dockerfile**: Multi-stage build definition at the repo root. Stages: frontend build (Node), backend build (Maven/Java), final runtime image.
- **.dockerignore**: Excludes `target/`, `node_modules/`, IDE directories, and other non-essential files.

## Success Criteria

### Measurable Outcomes

- **SC-001**: `docker build .` completes without errors from a clean checkout.
- **SC-002**: A container started from the built image responds to the health-check endpoint within the configured timeout.
- **SC-003**: The final Docker image does not contain Maven, Node.js, or build tool binaries (multi-stage isolation verified).
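The stage layout named under Key Entities could be sketched as follows. Base-image tags, the static-assets copy path, the actuator health URL, and the availability of `curl` in the runtime image are all assumptions, not the project's actual Dockerfile:

```dockerfile
# Stage 1: frontend build (Node)
FROM node:22 AS frontend-build
WORKDIR /app/frontend
COPY frontend/ .
RUN npm ci && npm run build

# Stage 2: backend build (Maven/Java); frontend output is bundled as static resources
FROM eclipse-temurin:21 AS backend-build
WORKDIR /app/backend
COPY backend/ .
COPY --from=frontend-build /app/frontend/dist src/main/resources/static
RUN ./mvnw -q package -DskipTests

# Stage 3: minimal runtime image — no Maven, Node, or build tools
FROM eclipse-temurin:21-jre
WORKDIR /app
COPY --from=backend-build /app/backend/target/*.jar app.jar
HEALTHCHECK CMD curl -f http://localhost:8080/actuator/health || exit 1
ENTRYPOINT ["java", "-jar", "app.jar"]
```

Because a failing `npm run build` or `./mvnw package` aborts the whole `docker build`, the first edge case above (no broken image is ever produced) falls out of the multi-stage structure for free.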
`specs/003-cicd-pipeline/spec.md` (new file, 55 lines)
# Feature Specification: CI/CD Pipeline

**Feature**: `003-cicd-pipeline`
**Created**: 2026-03-06
**Status**: Implemented
**Source**: Migrated from spec/setup-tasks.md

> Note: This is a setup task (infrastructure), not a user-facing feature. It describes the CI/CD pipeline that validates every push to the repository.

## User Scenarios & Testing

### Setup Task T-3 — CI/CD Pipeline (Priority: P0)

Set up a Gitea Actions CI/CD pipeline that runs on every push, ensuring code quality before deployment.

**Acceptance Scenarios**:

1. **Given** a push is made to the repository, **When** the Gitea Actions workflow triggers, **Then** backend tests run via Maven, frontend tests run via Vitest, and the Docker image is built.
2. **Given** all tests pass and the build succeeds, **When** the pipeline completes, **Then** the Docker image is published to the Gitea container registry.
3. **Given** any test fails or the build breaks, **When** the pipeline runs, **Then** it fails visibly and the Docker image is not published.
4. **Given** the workflow file exists, **When** inspected, **Then** it is located in `.gitea/workflows/` and runs on push.

### Edge Cases

- Pipeline must not publish the image if tests pass but the Docker build itself fails.
- Docker image is only published to the Gitea container registry on the same instance (no external registries).

## Requirements

### Functional Requirements

- **FR-T03-01**: Gitea Actions workflow file in `.gitea/workflows/` runs on push: test, build, publish Docker image.
- **FR-T03-02**: Backend tests run via Maven as part of the pipeline.
- **FR-T03-03**: Frontend tests run via Vitest as part of the pipeline.
- **FR-T03-04**: Docker image is published to the Gitea container registry on the same instance.
- **FR-T03-05**: Pipeline fails visibly if any test fails or the build breaks.
- **FR-T03-06**: Docker image is only published if all tests pass and the build succeeds.

### Notes

Per Q-5 resolution: the project uses Gitea as its hosting and CI/CD platform. The pipeline uses Gitea Actions (`.gitea/workflows/`) and publishes Docker images to the Gitea container registry. T-3 depends on T-1 (repository structure with both projects to test and build) and T-2 (Dockerfile used by the pipeline to build and publish the container image).

**Dependencies:** T-1, T-2

## Success Criteria

- [x] Gitea Actions workflow file in `.gitea/workflows/` runs on push: test, build, publish Docker image
- [x] Backend tests run via Maven
- [x] Frontend tests run via Vitest
- [x] Docker image is published to the Gitea container registry on the same instance
- [x] Pipeline fails visibly if any test fails or the build breaks
- [x] Docker image is only published if all tests pass and the build succeeds
specs/004-dev-infrastructure/spec.md
Normal file
121
specs/004-dev-infrastructure/spec.md
Normal file
@@ -0,0 +1,121 @@
|
|||||||
|
# Feature Specification: Development Infrastructure Setup
|
||||||
|
|
||||||
|
**Feature**: `004-dev-infrastructure`
|
||||||
|
**Created**: 2026-03-06
|
||||||
|
**Status**: Implemented
|
||||||
|
**Source**: Migrated from spec/setup-tasks.md
|
||||||
|
|
||||||
|
> **Note**: This is a setup task (infrastructure), not a user-facing feature. It establishes the development foundation required before the first user story can be implemented with TDD.
|
||||||
|
|
||||||
|
## User Scenarios & Testing
|
||||||
|
|
||||||
|
### Setup Task 1 - Database Connectivity and Migration (Priority: P1)
|
||||||
|
|
||||||
|
The application connects to an external PostgreSQL database via environment variables. A database migration framework (Flyway or Liquibase) is configured and runs migrations on startup against the PostgreSQL instance.
|
||||||
|
|
||||||
|
**Why this priority**: Without a running database, no user story can be implemented or tested.
|
||||||
|
|
||||||
|
**Independent Test**: Can be tested by starting the application with a PostgreSQL instance and verifying that migrations run and the health check responds.
|
||||||
|
|
||||||
|
**Acceptance Scenarios**:
|
||||||
|
|
||||||
|
1. **Given** a PostgreSQL instance is available, **When** the application starts with `DATABASE_URL` (or `SPRING_DATASOURCE_*`) environment variables set, **Then** it connects successfully and runs migrations.
|
||||||
|
2. **Given** the migration framework is configured, **When** `mvnw compile` and `mvnw spring-boot:run` execute, **Then** a first empty migration completes without errors.
|
||||||
|
3. **Given** the Docker container is started with a running PostgreSQL, **When** the health-check endpoint is queried, **Then** it responds successfully (migrations ran on startup).
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Setup Task 2 - Runtime Configuration via Environment Variables (Priority: P1)
|
||||||
|
|
||||||
|
All runtime configuration is exposed as environment variables: database connection, optional Unsplash API key, and optional max active events limit. No credentials or settings are hard-coded.
|
||||||
|
|
||||||
|
**Why this priority**: Required for secure, portable deployment (Docker/compose).
|
||||||
|
|
||||||
|
**Independent Test**: Can be tested by verifying that the application reads all documented environment variables and that the docker-compose example works with only environment variables.
|
||||||
|
|
||||||
|
**Acceptance Scenarios**:
|
||||||
|
|
||||||
|
1. **Given** all runtime config is environment-driven, **When** the app starts with only environment variables set, **Then** database connection, optional Unsplash key, and optional max-events limit are all honoured.
|
||||||
|
2. **Given** a docker-compose file exists in the README, **When** it is run as documented, **Then** the app container and PostgreSQL container start and the application is reachable.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Setup Task 3 - Backend Test Infrastructure (Priority: P1)
|
||||||
|
|
||||||
|
The backend test infrastructure is set up with JUnit 5, Spring Boot Test, and Testcontainers (PostgreSQL) so that integration tests can run against a real database without external setup.
|
||||||
|
|
||||||
|
**Why this priority**: TDD (mandated by CLAUDE.md) requires test infrastructure before any feature implementation begins.
|
||||||
|
|
||||||
|
**Independent Test**: Can be tested by running `mvnw test` and confirming that the sample tests pass, including integration tests that spin up a PostgreSQL container.
|
||||||
|
|
||||||
|
**Acceptance Scenarios**:
|
||||||
|
|
||||||
|
1. **Given** the backend test infrastructure is configured, **When** `./mvnw test` is run, **Then** JUnit 5 tests execute and the sample test passes.
|
||||||
|
2. **Given** Testcontainers (PostgreSQL) is configured, **When** an integration test annotated with `@SpringBootTest` runs, **Then** a real PostgreSQL container is spun up automatically and the test runs against it.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Setup Task 4 - Frontend Test Infrastructure (Priority: P1)
|
||||||
|
|
||||||
|
The frontend test infrastructure is set up with Vitest and `@vue/test-utils` so that Vue component tests can be written and run.
|
||||||
|
|
||||||
|
**Why this priority**: TDD on the frontend requires Vitest to be configured before any component is implemented.
|
||||||
|
|
||||||
|
**Independent Test**: Can be tested by running `npm test` (or `npx vitest`) and verifying that a sample test passes.
|
||||||
|
|
||||||
|
**Acceptance Scenarios**:
|
||||||
|
|
||||||
|
1. **Given** Vitest and `@vue/test-utils` are configured, **When** `npm run test:unit` is run, **Then** a sample test executes and passes successfully.
|
||||||
|
|
||||||
|
---
|
||||||
|
|
||||||
|
### Setup Task 5 - SPA Router Configuration (Priority: P2)
|
||||||
|
|
||||||
|
Vue Router is configured in the frontend so that pages can be navigated by URL path (client-side routing).
|
||||||
|
|
||||||
|
**Why this priority**: Required before any multi-page user story can be implemented.
|
||||||
|
|
||||||
|
**Independent Test**: Can be tested by starting the dev server and navigating to a defined route by URL, verifying the correct view is rendered.

**Acceptance Scenarios**:

1. **Given** Vue Router is configured, **When** a user navigates to a defined URL path, **Then** the corresponding view is rendered without a full page reload.

---

### Edge Cases

- What happens when `DATABASE_URL` is not set? Application should fail fast with a clear error on startup.
- What happens when PostgreSQL is unreachable at startup? Migration should fail visibly with an actionable error message.
- What happens when a migration fails? Application must not start; the error must be logged clearly.

## Requirements

### Functional Requirements

- **FR-001**: System MUST connect to an external PostgreSQL database via environment variables (`DATABASE_URL` or `SPRING_DATASOURCE_*`).
- **FR-002**: System MUST run database migrations on startup using Flyway or Liquibase; a first empty migration MUST succeed.
- **FR-003**: All runtime configuration (database connection, optional Unsplash API key, optional max active events) MUST be configurable via environment variables.
- **FR-004**: The Vue frontend MUST have Vue Router configured so that pages are navigable by URL path.
- **FR-005**: The backend MUST have JUnit 5 with Spring Boot Test configured; integration tests MUST use Testcontainers (PostgreSQL) for database isolation.
- **FR-006**: The frontend MUST have Vitest with `@vue/test-utils` configured; a sample test MUST run and pass.
- **FR-007**: Both test suites MUST be executable via their respective build tools (`./mvnw test` for backend, `npm run test:unit` for frontend).
- **FR-008**: The README MUST document a docker-compose example (app + PostgreSQL) for deployment.
- **FR-009**: The Docker container MUST start and respond to health checks with a running PostgreSQL instance (migrations run on startup).

### Key Entities

- **Environment Configuration**: All runtime settings injected via environment variables; no hard-coded credentials.
- **Database Migration**: Versioned migration scripts managed by Flyway or Liquibase; run automatically on startup.

## Success Criteria

### Measurable Outcomes

- **SC-001**: `./mvnw test` completes without failures; integration tests spin up a real PostgreSQL container via Testcontainers.
- **SC-002**: `npm run test:unit` completes without failures; sample component test passes.
- **SC-003**: `docker-compose up` (using the README example) starts both containers and the application responds to health checks.
- **SC-004**: All runtime configuration is driven exclusively by environment variables; no credentials or settings are hard-coded in source.
- **SC-005**: Vue Router is configured; navigating to a defined URL path renders the correct view.

**Addendum (2026-03-04):** T-4 absorbed database connectivity, environment variable configuration, and docker-compose documentation from T-2 (see T-2 addendum). These criteria require JPA and Flyway to be testable, so they belong here rather than in T-2.
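A README docker-compose example satisfying FR-008 and SC-003 might look like the following sketch. Service names, the image name, port, and credentials are placeholders, not the project's actual values; only the `SPRING_DATASOURCE_*` variable names come from FR-001:

```yaml
# Sketch of the README docker-compose example (FR-008): app + PostgreSQL.
# All values below are illustrative placeholders.
services:
  app:
    image: fete-app:latest            # placeholder image name
    ports:
      - "8080:8080"
    environment:
      SPRING_DATASOURCE_URL: jdbc:postgresql://db:5432/fete
      SPRING_DATASOURCE_USERNAME: fete
      SPRING_DATASOURCE_PASSWORD: change-me
    depends_on:
      - db
  db:
    image: postgres:17
    environment:
      POSTGRES_DB: fete
      POSTGRES_USER: fete
      POSTGRES_PASSWORD: change-me
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```

On startup the app runs its migrations against `db` (FR-002, FR-009), so no manual schema setup is needed.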
specs/005-api-first-tooling/spec.md (new file, 100 lines)
@@ -0,0 +1,100 @@
# Feature Specification: API-First Tooling Setup

**Feature**: `005-api-first-tooling`
**Created**: 2026-03-06
**Status**: Implemented
**Source**: Migrated from spec/setup-tasks.md (setup task — infrastructure, not user-facing)

## User Scenarios & Testing

### Setup Story 1 - Backend OpenAPI Code Generation (Priority: P1)

The development toolchain generates Java server interfaces and model classes from the OpenAPI spec so that the backend implementation always matches the API contract.

**Why this priority**: Code generation from the spec is the foundation of API-first development. Without it, backend implementation can diverge from the contract.

**Independent Test**: Run `./mvnw compile` and verify that generated sources appear in `target/generated-sources/openapi/`.

**Acceptance Scenarios**:

1. **Given** the OpenAPI spec exists at `backend/src/main/resources/openapi/api.yaml`, **When** `./mvnw compile` is run, **Then** Java interfaces and model classes are generated into `target/generated-sources/openapi/` with packages `de.fete.adapter.in.web.api` and `de.fete.adapter.in.web.model`.
2. **Given** the generator is configured with `interfaceOnly: true`, **When** compilation completes, **Then** only interfaces (not implementations) are generated, keeping implementation separate from contract.

---

### Setup Story 2 - Frontend TypeScript Type Generation (Priority: P1)

The development toolchain generates TypeScript types from the OpenAPI spec so that the frontend is always type-safe against the API contract.

**Why this priority**: Type generation ensures frontend-backend contract alignment at compile time.

**Independent Test**: Run `npm run generate:api` and verify that `frontend/src/api/schema.d.ts` is created with types matching the spec.

**Acceptance Scenarios**:

1. **Given** `openapi-typescript` is installed as a devDependency and `openapi-fetch` as a dependency, **When** `npm run generate:api` is run, **Then** TypeScript types are generated into `frontend/src/api/schema.d.ts`.
2. **Given** the `dev` and `build` scripts include type generation as a pre-step, **When** `npm run dev` or `npm run build` is run, **Then** types are regenerated automatically before the build proceeds.

---

### Setup Story 3 - Minimal API Client (Priority: P2)

A minimal API client using `openapi-fetch` is wired up so that frontend code can call the backend with full type safety.

**Why this priority**: The client is needed before any user story can make API calls, but can be a thin wrapper initially.

**Independent Test**: Verify `frontend/src/api/client.ts` exists and uses `createClient<paths>()` from `openapi-fetch`.

**Acceptance Scenarios**:

1. **Given** the generated `schema.d.ts` exists, **When** a developer imports the API client, **Then** all request/response types are fully inferred from the OpenAPI spec.

---

### Setup Story 4 - Minimal OpenAPI Spec (Priority: P1)

A minimal OpenAPI 3.1 spec exists at the canonical location and is sufficient to prove the tooling works end-to-end.

**Why this priority**: The spec is the prerequisite for all code generation. It must exist before any other story can proceed.

**Independent Test**: Run both generation steps (backend + frontend) and verify both succeed without errors.

**Acceptance Scenarios**:

1. **Given** a minimal spec at `backend/src/main/resources/openapi/api.yaml`, **When** both generation steps run, **Then** both complete successfully and the project compiles cleanly (backend + frontend).

---

### Edge Cases

- What happens when the OpenAPI spec contains a syntax error? Generation should fail with a clear error message.
- What happens when the spec is updated with a breaking change? Generated types and interfaces reflect the change, causing compile errors that force the developer to update implementations.

## Requirements

### Functional Requirements

- **FR-001**: `openapi-generator-maven-plugin` (v7.20.x, `spring` generator, `interfaceOnly: true`) MUST be configured in `backend/pom.xml`.
- **FR-002**: A minimal OpenAPI 3.1 spec MUST exist at `backend/src/main/resources/openapi/api.yaml`.
- **FR-003**: `./mvnw compile` MUST generate Java interfaces and model classes into `target/generated-sources/openapi/` with packages `de.fete.adapter.in.web.api` and `de.fete.adapter.in.web.model`.
- **FR-004**: `openapi-typescript` MUST be installed as a devDependency in the frontend.
- **FR-005**: `openapi-fetch` MUST be installed as a runtime dependency in the frontend.
- **FR-006**: `npm run generate:api` MUST generate TypeScript types from the spec into `frontend/src/api/schema.d.ts`.
- **FR-007**: Frontend `dev` and `build` scripts MUST include type generation as a pre-step.
- **FR-008**: A minimal API client at `frontend/src/api/client.ts` MUST use `createClient<paths>()` from `openapi-fetch`.
- **FR-009**: Both generation steps MUST succeed and the project MUST compile cleanly (backend + frontend).

### Key Entities

- **OpenAPI Spec**: The single source of truth for the REST API contract, located at `backend/src/main/resources/openapi/api.yaml`. A living document that grows with each user story.
- **Generated Sources**: Backend Java interfaces/models in `target/generated-sources/openapi/`; frontend TypeScript types in `frontend/src/api/schema.d.ts`.

## Success Criteria

### Measurable Outcomes

- **SC-001**: `./mvnw compile` succeeds and generated sources exist in `target/generated-sources/openapi/`.
- **SC-002**: `npm run generate:api` succeeds and `frontend/src/api/schema.d.ts` is created.
- **SC-003**: Frontend `npm run dev` and `npm run build` automatically regenerate types before building.
- **SC-004**: The project compiles cleanly end-to-end (backend + frontend) with the generated code.
- **SC-005**: A working API client exists at `frontend/src/api/client.ts` using the generated types.
specs/006-create-event/spec.md (new file, 97 lines)
@@ -0,0 +1,97 @@
# Feature Specification: Create an Event

**Feature**: `006-create-event`
**Created**: 2026-03-06
**Status**: Approved
**Source**: Migrated from spec/userstories.md

## User Scenarios & Testing

### User Story 1 - Create Event with Required Fields (Priority: P1)

An event organizer fills in the event creation form with a title, date/time, and mandatory expiry date, submits it, and is redirected to the new event page. The server returns both an event token and an organizer token. The organizer token is stored in localStorage on the current device.

**Why this priority**: Core action of the entire application. All other stories depend on event creation existing. Without it, there is nothing to view, RSVP to, or manage.

**Independent Test**: Can be fully tested by submitting the creation form and verifying the redirect to the event page, localStorage state, and the server-side persistence.

**Acceptance Scenarios**:

1. **Given** the organizer opens the event creation form, **When** they fill in title, date/time, and expiry date and submit, **Then** the server stores the event and returns a UUID event token and a separate UUID organizer token in the response.
2. **Given** the event is created successfully, **When** the organizer is redirected, **Then** they land on the event page identified by the event token.
3. **Given** the event is created successfully, **When** the organizer token is received, **Then** it is stored in localStorage to grant organizer access on this device.
4. **Given** the event is created successfully, **When** the response is processed, **Then** the event token, title, and date are also stored in localStorage so the local event overview (US-7) can display the event without server contact.
5. **Given** the event creation form, **When** it is opened, **Then** no account, login, or personal data is required.

---

### User Story 2 - Optional Fields (Priority: P2)

An organizer can optionally provide a description and location when creating an event. These fields are not required but are stored alongside the event when provided.

**Why this priority**: Enriches the event page but does not block the core creation flow.

**Independent Test**: Can be tested by creating an event with and without description/location, verifying both cases result in a valid event.

**Acceptance Scenarios**:

1. **Given** the creation form, **When** the organizer leaves description and location blank and submits, **Then** the event is created successfully without those fields.
2. **Given** the creation form, **When** the organizer fills in description and location and submits, **Then** the event is created and those fields are stored.

---

### User Story 3 - Expiry Date Validation (Priority: P2)

The expiry date field is mandatory and must be set to a date in the future. The organizer cannot submit the form without providing it, and cannot set a past date.

**Why this priority**: Mandatory expiry is a core privacy guarantee (linked to US-12 data deletion). The field must always be valid at creation time.

**Independent Test**: Can be tested by attempting to submit the form without an expiry date, or with a past expiry date, and verifying the form is rejected with a clear validation message.

**Acceptance Scenarios**:

1. **Given** the creation form, **When** the organizer attempts to submit without an expiry date, **Then** the submission is rejected and the expiry date field is flagged as required.
2. **Given** the creation form, **When** the organizer enters a past date as the expiry date and submits, **Then** the submission is rejected with a clear validation message.
3. **Given** the creation form, **When** the organizer enters a future date as the expiry date and submits, **Then** the event is created successfully.
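The future-date rule in these scenarios can be sketched as a small validation helper. This is a sketch only; `isValidExpiryDate` is a hypothetical name, and treating "today" as invalid follows from the spec's "must be set to a date in the future":

```typescript
// Validates the mandatory expiry date: present, well-formed, and strictly in
// the future. Compares calendar dates (YYYY-MM-DD) lexicographically, which is
// correct for ISO dates, so the time of day is ignored and "today" is rejected.
// Hypothetical helper; the project's actual form validation may differ.
function isValidExpiryDate(expiry: string | null, today: Date = new Date()): boolean {
  if (!expiry) return false;                          // missing -> required-field error
  if (!/^\d{4}-\d{2}-\d{2}$/.test(expiry)) return false; // malformed -> reject
  const todayIso = today.toISOString().slice(0, 10);  // UTC calendar date
  return expiry > todayIso;                           // strictly after today
}
```

Note the deliberate choice on the midnight boundary (see Edge Cases below): an expiry equal to today's date is rejected, since it is not "in the future".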
---

### Edge Cases

- What happens when the organizer submits the form with only whitespace in the title?
- How does the system handle the expiry date set to exactly today (midnight boundary)?
- What if localStorage is unavailable or full when storing the organizer token?
- What happens if the server returns an error during event creation (network failure, server error)?

## Requirements

### Functional Requirements

- **FR-001**: System MUST accept event creation with: title (required), description (optional), date and time (required), location (optional), expiry date (required).
- **FR-002**: System MUST reject event creation if title is missing.
- **FR-003**: System MUST reject event creation if date/time is missing.
- **FR-004**: System MUST reject event creation if expiry date is missing or is not in the future.
- **FR-005**: System MUST generate a unique, non-guessable UUID event token upon successful event creation.
- **FR-006**: System MUST generate a separate unique, non-guessable UUID organizer token upon successful event creation.
- **FR-007**: System MUST return both tokens in the creation response.
- **FR-008**: Frontend MUST store the organizer token in localStorage to grant organizer access on the current device.
- **FR-009**: Frontend MUST store the event token, title, and date in localStorage alongside the organizer token.
- **FR-010**: Frontend MUST redirect the organizer to the event page after successful creation.
- **FR-011**: System MUST NOT require any account, login, or personal data to create an event.
- **FR-012**: The event MUST NOT be discoverable except via its direct link (no public listing).
### Key Entities

- **Event**: Represents a scheduled gathering. Key attributes: event token (UUID, public), organizer token (UUID, secret), title, description, date/time, location, expiry date, creation timestamp.
- **Organizer Token**: A secret UUID stored in localStorage on the device where the event was created. Used to authenticate organizer actions on that device.
- **Event Token**: A public UUID embedded in the event URL. Used by guests to access the event page.

## Success Criteria

### Measurable Outcomes

- **SC-001**: An organizer can complete the event creation form and be redirected to the new event page in a single form submission.
- **SC-002**: After creation, the organizer token and event metadata are present in localStorage on the current device.
- **SC-003**: An event created without description or location renders correctly on the event page without errors.
- **SC-004**: Submitting the form with a missing or past expiry date displays a clear, user-readable validation error.
- **SC-005**: The event is not accessible via any URL other than the one containing the event token.
specs/007-view-event/contracts/get-event.yaml (new file, 94 lines)
@@ -0,0 +1,94 @@
# OpenAPI contract addition for GET /events/{token}
# To be merged into backend/src/main/resources/openapi/api.yaml

paths:
  /events/{token}:
    get:
      operationId: getEvent
      summary: Get public event details by token
      tags:
        - events
      parameters:
        - name: token
          in: path
          required: true
          schema:
            type: string
            format: uuid
          description: Public event token
      responses:
        "200":
          description: Event found
          content:
            application/json:
              schema:
                $ref: "#/components/schemas/GetEventResponse"
        "404":
          description: Event not found
          content:
            application/problem+json:
              schema:
                $ref: "#/components/schemas/ProblemDetail"

components:
  schemas:
    GetEventResponse:
      type: object
      required:
        - eventToken
        - title
        - dateTime
        - timezone
        - attendeeCount
        - expired
      properties:
        eventToken:
          type: string
          format: uuid
          description: Public event token
          example: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
        title:
          type: string
          description: Event title
          example: "Summer BBQ"
        description:
          type: string
          description: Event description (absent if not set)
          example: "Bring your own drinks!"
        dateTime:
          type: string
          format: date-time
          description: Event date/time with organizer's UTC offset
          example: "2026-03-15T20:00:00+01:00"
        timezone:
          type: string
          description: IANA timezone name of the organizer
          example: "Europe/Berlin"
        location:
          type: string
          description: Event location (absent if not set)
          example: "Central Park, NYC"
        attendeeCount:
          type: integer
          minimum: 0
          description: Number of confirmed attendees (attending=true)
          example: 12
        expired:
          type: boolean
          description: Whether the event's expiry date has passed
          example: false

# Modification to existing CreateEventRequest — add timezone field
# CreateEventRequest (additions):
#   timezone:
#     type: string
#     description: IANA timezone of the organizer
#     example: "Europe/Berlin"
#   (make required)

# Modification to existing CreateEventResponse — add timezone field
# CreateEventResponse (additions):
#   timezone:
#     type: string
#     description: IANA timezone of the organizer
#     example: "Europe/Berlin"
specs/007-view-event/data-model.md (new file, 56 lines)
@@ -0,0 +1,56 @@
# Data Model: View Event Landing Page (007)

**Date**: 2026-03-06

## Entities

### Event (modified — adds `timezone` field)

| Field          | Type           | Required | Constraints              | Notes                          |
|----------------|----------------|----------|--------------------------|--------------------------------|
| id             | Long           | yes      | BIGSERIAL, PK            | Internal only, never exposed   |
| eventToken     | UUID           | yes      | UNIQUE, NOT NULL         | Public identifier in URLs      |
| organizerToken | UUID           | yes      | UNIQUE, NOT NULL         | Secret, never in public API    |
| title          | String         | yes      | 1–200 chars              |                                |
| description    | String         | no       | max 2000 chars           |                                |
| dateTime       | OffsetDateTime | yes      |                          | Organizer's original offset    |
| timezone       | String         | yes      | IANA zone ID, max 64     | **NEW** — e.g. "Europe/Berlin" |
| location       | String         | no       | max 500 chars            |                                |
| expiryDate     | LocalDate      | yes      | Must be future at create | Auto-deletion trigger          |
| createdAt      | OffsetDateTime | yes      | Server-generated         |                                |

**Validation rules**:
- `timezone` must be a valid IANA zone ID (`ZoneId.getAvailableZoneIds()`).
- `expiryDate` must be in the future at creation time (existing rule).

**State transitions**:
- Active → Expired: when `expiryDate < today` (computed, not stored).
- Active → Cancelled: future (US-18), adds `cancelledAt` + `cancellationMessage`.
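The Active → Expired transition is computed on read, never stored. A sketch of that check (hypothetical helper name; the backend does this in Java, rendered here in TypeScript for consistency with the other examples):

```typescript
// Computes the "expired" flag from the stored expiry date. Both arguments are
// ISO calendar dates (YYYY-MM-DD), which compare correctly as plain strings.
// Strictly-before matches the model's rule `expiryDate < today`: an event whose
// expiry date is today is not yet expired.
function isExpired(expiryDate: string, today: string): boolean {
  return expiryDate < today;
}
```

For example, `isExpired("2026-03-01", "2026-03-06")` yields `true`, while an expiry of `"2026-03-06"` on the same day yields `false`.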
### RSVP (future — not created in this feature)

Documented here for context only. Created when the RSVP feature (US-8+) is implemented.

| Field     | Type           | Required | Constraints      |
|-----------|----------------|----------|------------------|
| id        | Long           | yes      | BIGSERIAL, PK    |
| eventId   | Long           | yes      | FK → events.id   |
| guestName | String         | yes      | 1–100 chars      |
| attending | Boolean        | yes      | true = attending |
| createdAt | OffsetDateTime | yes      | Server-generated |

## Relationships

```
Event 1 ←── * RSVP (future)
```

## Type Mapping (full stack)

| Concept     | Java             | PostgreSQL    | OpenAPI              | TypeScript |
|-------------|------------------|---------------|----------------------|------------|
| Event time  | `OffsetDateTime` | `timestamptz` | `string` `date-time` | `string`   |
| Timezone    | `String`         | `varchar(64)` | `string`             | `string`   |
| Expiry date | `LocalDate`      | `date`        | `string` `date`      | `string`   |
| Token       | `UUID`           | `uuid`        | `string` `uuid`      | `string`   |
| Count       | `int`            | `integer`     | `integer`            | `number`   |
specs/007-view-event/plan.md (new file, 89 lines)
@@ -0,0 +1,89 @@
# Implementation Plan: View Event Landing Page

**Branch**: `007-view-event` | **Date**: 2026-03-06 | **Spec**: [spec.md](spec.md)
**Input**: Feature specification from `/specs/007-view-event/spec.md`

## Summary

Add a public event detail page at `/events/:token` that displays event information (title, date/time with IANA timezone, description, location, attendee count) without requiring authentication. The page handles four states: loaded, expired ("event has ended"), not found (404), and server error (retry button). Loading uses skeleton-shimmer placeholders. Backend adds `GET /events/{token}` endpoint and a `timezone` field to the Event model (cross-cutting change to US-1).
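The page states in the summary map naturally onto a discriminated union in the view layer. This is a sketch; the type and helper names are illustrative, not the actual `EventDetailView` code, and "expired" is modeled as a flag on a successful load:

```typescript
// Illustrative view model for the event detail page's states.
type EventView = {
  title: string;
  dateTime: string;      // ISO 8601 with the organizer's UTC offset
  timezone: string;      // IANA zone, e.g. "Europe/Berlin"
  attendeeCount: number;
  expired: boolean;      // drives the "event has ended" rendering
};

type EventPageState =
  | { kind: "loading" }                   // skeleton shimmer
  | { kind: "loaded"; event: EventView }  // includes expired events
  | { kind: "notFound" }                  // 404 from GET /events/{token}
  | { kind: "error" };                    // server error; render retry button

// Maps an HTTP status plus optional body to a page state (hypothetical helper).
function toPageState(status: number, body?: EventView): EventPageState {
  if (status === 200 && body) return { kind: "loaded", event: body };
  if (status === 404) return { kind: "notFound" };
  return { kind: "error" };
}
```

Exhaustively switching on `kind` in the template logic guarantees at compile time that no state is left unrendered.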
## Technical Context

**Language/Version**: Java 25 (backend), TypeScript 5.9 (frontend)
**Primary Dependencies**: Spring Boot 3.5.x, Vue 3, Vue Router 5, openapi-fetch, openapi-typescript
**Storage**: PostgreSQL (JPA via Spring Data, Liquibase migrations)
**Testing**: JUnit (backend), Vitest (frontend unit), Playwright + MSW (frontend E2E)
**Target Platform**: Self-hosted web application (Docker)
**Project Type**: Web service + SPA
**Performance Goals**: N/A (single-user scale, self-hosted)
**Constraints**: No external resources (CDNs, fonts, tracking), WCAG AA, privacy-first
**Scale/Scope**: Single new view + one new API endpoint + one cross-cutting model change

## Constitution Check

*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*

| Principle | Status | Notes |
|-----------|--------|-------|
| I. Privacy by Design | PASS | No PII exposed. Only attendee count shown (not names). No external resources. No tracking. |
| II. Test-Driven Methodology | PASS | TDD enforced: backend unit tests, frontend unit tests, E2E tests per spec. |
| III. API-First Development | PASS | OpenAPI spec updated first. Types generated. Response schemas include `example:` fields. |
| IV. Simplicity & Quality | PASS | Minimal changes: one GET endpoint, one new view, one model field. `attendeeCount` returns 0 (no RSVP stub). Cancelled state deferred. |
| V. Dependency Discipline | PASS | No new dependencies. Skeleton shimmer is CSS-only. |
| VI. Accessibility | PASS | Semantic HTML, ARIA attributes, keyboard navigable, WCAG AA contrast via design system. |

**Post-Phase-1 re-check**: All gates still pass. The `timezone` field addition is a justified cross-cutting change documented in research.md R-1.

## Project Structure

### Documentation (this feature)

```text
specs/007-view-event/
├── plan.md              # This file
├── spec.md              # Feature specification
├── research.md          # Phase 0: research decisions
├── data-model.md        # Phase 1: entity definitions
├── quickstart.md        # Phase 1: implementation overview
├── contracts/
│   └── get-event.yaml   # Phase 1: GET endpoint contract
└── tasks.md             # Phase 2: implementation tasks (via /speckit.tasks)
```

### Source Code (repository root)

```text
backend/
├── src/main/java/de/fete/
│   ├── domain/
│   │   ├── model/Event.java                   # Add timezone field
│   │   └── port/in/GetEventUseCase.java       # NEW: inbound port
│   ├── application/service/EventService.java  # Implement GetEventUseCase
│   ├── adapter/
│   │   ├── in/web/EventController.java        # Implement getEvent()
│   │   └── out/persistence/
│   │       ├── EventJpaEntity.java            # Add timezone column
│   │       └── EventPersistenceAdapter.java   # Map timezone field
│   └── config/
├── src/main/resources/
│   ├── openapi/api.yaml                       # Add GET endpoint + timezone
│   └── db/changelog/                          # Liquibase: add timezone column
└── src/test/java/de/fete/                     # Unit + integration tests

frontend/
├── src/
│   ├── api/schema.d.ts                        # Regenerated from OpenAPI
│   ├── views/EventDetailView.vue              # NEW: event detail page
│   ├── views/EventCreateView.vue              # Add timezone to create request
│   ├── router/index.ts                        # Point /events/:token to EventDetailView
│   └── assets/main.css                        # Skeleton shimmer styles
├── e2e/
│   └── event-view.spec.ts                     # NEW: E2E tests for view event
└── src/__tests__/                             # Unit tests for EventDetailView
```

**Structure Decision**: Existing web application structure (backend + frontend). No new packages or modules — extends existing hexagonal architecture with one new inbound port and one new frontend view.

## Complexity Tracking

No constitution violations. No entries needed.
specs/007-view-event/quickstart.md (new file, 39 lines)
@@ -0,0 +1,39 @@
# Quickstart: View Event Landing Page (007)

## What this feature does

Adds a public event detail page at `/events/:token`. Guests open a shared link and see:
- Event title, date/time (with IANA timezone), description, location
- Count of confirmed attendees (no names)
- "Event has ended" state for expired events
- "Event not found" for invalid tokens
- Skeleton shimmer while loading

## Prerequisites

- US-1 (Create Event) is implemented — Event entity, JPA persistence, POST endpoint exist.
- No RSVP model yet — attendee count returns 0 until RSVP feature is built.

## Key changes

### Backend

1. **OpenAPI**: Add `GET /events/{token}` endpoint + `GetEventResponse` schema. Add `timezone` field to `CreateEventRequest`, `CreateEventResponse`, and `GetEventResponse`.
2. **Domain**: Add `timezone` (String) to `Event.java`.
3. **Persistence**: Add `timezone` column to `EventJpaEntity`, Liquibase migration.
4. **Use case**: New `GetEventUseCase` (inbound port) + implementation in `EventService`.
5. **Controller**: `EventController` implements `getEvent()` — maps to `GetEventResponse`, computes `expired` and `attendeeCount`.

### Frontend

1. **API types**: Regenerate `schema.d.ts` from updated OpenAPI spec.
2. **EventDetailView.vue**: New view component — fetches event by token, renders detail card.
3. **Router**: Replace `EventStubView` import at `/events/:token` with `EventDetailView`.
4. **States**: Loading (skeleton shimmer), loaded, expired, not-found, server-error (retry button).
5. **Create form**: Send `timezone` field (auto-detected via `Intl.DateTimeFormat`).
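The timezone auto-detection in the last step uses the standard `Intl` API. A sketch (the `"Etc/UTC"` fallback is an assumption for environments without zone data, not something the spec prescribes):

```typescript
// Detects the organizer's IANA timezone for the create-event request.
// Intl.DateTimeFormat().resolvedOptions().timeZone is standard in browsers
// and in Node. The fallback is an illustrative assumption only.
function detectTimezone(): string {
  return Intl.DateTimeFormat().resolvedOptions().timeZone ?? "Etc/UTC";
}
```

The returned value (e.g. "Europe/Berlin") is sent as the `timezone` field of `CreateEventRequest` and validated server-side against `ZoneId.getAvailableZoneIds()`.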
### Testing

- Backend: Unit tests for `GetEventUseCase`, controller tests for GET endpoint (200, 404).
- Frontend: Unit tests for EventDetailView (all states).
- E2E: Playwright tests with MSW mocks for all states (loaded, expired, not-found, error).