Migrate project artifacts to spec-kit format
- Move cross-cutting docs (personas, design system, implementation phases, Ideen.md) to .specify/memory/
- Move cross-cutting research and plans to .specify/memory/research/ and .specify/memory/plans/
- Extract 5 setup tasks from spec/setup-tasks.md into individual specs/001-005/spec.md files with spec-kit template format
- Extract 20 user stories from spec/userstories.md into individual specs/006-026/spec.md files with spec-kit template format
- Relocate feature-specific research and plan docs into specs/[feature]/
- Add spec-kit constitution, templates, scripts, and slash commands
- Slim down CLAUDE.md to Claude-Code-specific config; delegate principles to .specify/memory/constitution.md
- Update ralph.sh with stream-json output and per-iteration logging
- Delete old spec/ and docs/agents/ directories
- Gitignore Ralph iteration JSONL logs

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
.specify/memory/constitution.md (new file, +134)
<!--
Sync Impact Report
==================
Version change: 0.0.0 (template) -> 1.0.0
Modified principles: N/A (initial adoption)
Added sections:
- 6 Core Principles (Privacy, Methodology, API-First, Quality, Dependencies, Accessibility)
- Tech Stack & Constraints
- Development Workflow
- Governance
Removed sections: N/A
Templates requiring updates:
- .specify/templates/plan-template.md: OK (Constitution Check section already generic)
- .specify/templates/spec-template.md: OK (no constitution-specific references)
- .specify/templates/tasks-template.md: OK (no constitution-specific references)
- .specify/templates/checklist-template.md: OK (no constitution-specific references)
- .specify/templates/commands/*.md: N/A (no command files exist)
Follow-up TODOs: none
-->

# fete Constitution

## Core Principles

### I. Privacy by Design

Privacy is a design constraint, not a feature. It shapes every decision from the start.

- The system MUST NOT include analytics, telemetry, or tracking of any kind.
- The server MUST NOT log PII or IP addresses.
- Every feature MUST critically evaluate what data is necessary; only data absolutely required for functionality may be stored.
- External dependencies that phone home (CDNs, Google Fonts, tracking-capable libraries) MUST NOT be used.

### II. Test-Driven Methodology

Development follows a strict Research -> Spec -> Test -> Implement -> Review sequence. No shortcuts.

- Tests MUST be written before implementation (TDD). The Red -> Green -> Refactor cycle is strictly enforced.
- No implementation code may be written without a specification.
- E2E tests are mandatory for every frontend user story.
- When a setup task or user story is completed, its acceptance criteria MUST be checked off in the corresponding spec file before committing.

### III. API-First Development

The OpenAPI spec (`backend/src/main/resources/openapi/api.yaml`) is the single source of truth for the REST API contract.

- Endpoints and schemas MUST be defined in the spec first.
- Backend interfaces and frontend types MUST be generated from the spec before writing implementation code.
- Response schemas MUST include `example:` fields for mock generation and documentation.

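The `example:` requirement can be illustrated with a minimal schema fragment. This is a hedged sketch: the `EventResponse` schema and its fields are invented for illustration, not taken from `api.yaml`.

```yaml
# Hypothetical response schema; names are illustrative only.
components:
  schemas:
    EventResponse:
      type: object
      properties:
        title:
          type: string
          example: "Garden party"
        startsAt:
          type: string
          format: date-time
          example: "2026-06-20T18:00:00Z"
```
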
### IV. Simplicity & Quality

KISS and grugbrain. Engineer it properly, but do not over-engineer.

- No workarounds. Always fix the root cause, even if it takes longer.
- Technical debt MUST be addressed immediately; it MUST NOT accumulate.
- Refactoring is permitted freely as long as it does not alter the fundamental architecture.
- Every line of code MUST be intentional and traceable to a requirement. No vibe coding.

### V. Dependency Discipline

Every dependency is a deliberate, justified decision.

- A dependency MUST provide substantial value, and a significant portion of its features MUST actually be used.
- Dependencies MUST be actively maintained and open source (copyleft licenses such as the GPL are acceptable).
- Dependencies that phone home or compromise user privacy MUST NOT be introduced.

### VI. Accessibility

Accessibility is a baseline requirement, not an afterthought.

- All frontend components MUST meet WCAG AA contrast requirements.
- Semantic HTML and ARIA attributes MUST be used where appropriate.
- The UI MUST be operable via keyboard navigation.

## Tech Stack & Constraints

- **Backend:** Java 25 (LTS, SDKMAN), Spring Boot 3.5.x, Maven with wrapper
- **Frontend:** Vue 3, TypeScript, Vue Router, Vite, Vitest, ESLint, Prettier
- **Testing:** Playwright + MSW for E2E, Vitest for unit tests, JUnit for backend
- **Architecture:** Hexagonal (single Maven module, package-level separation), base package `de.fete`
- **State management:** Composition API (`ref`/`reactive`) + localStorage; no Pinia
- **Database:** No JPA until setup task T-4 is reached
- **Design system:** Electric Dusk + Sora (see `.specify/memory/design-system.md`)
- **Deployment:** Dockerfile provided; docker-compose example in README

## Development Workflow

- Document integrity: when a decision is revised, add an addendum with rationale. Never rewrite or delete the original decision.
- The visual design system in `.specify/memory/design-system.md` is authoritative. All frontend implementation MUST follow it.
- Research reports go to `.specify/memory/research/`, implementation plans to `.specify/memory/plans/`.
- Conversation and brainstorming in German; code, comments, commits, and documentation in English.
- Documentation lives in the README. No wiki, no elaborate docs site.

## Governance

This constitution supersedes all ad-hoc practices. It is the authoritative reference for project principles and constraints.

- **Amendment procedure:** Amendments require documentation of the change, rationale, and an updated version number. The original text MUST be preserved via addendum, not overwritten.
- **Versioning:** The constitution follows semantic versioning. MAJOR for principle removals or redefinitions, MINOR for additions or material expansions, PATCH for clarifications and wording fixes.
- **Compliance review:** All code changes and architectural decisions MUST be verified against these principles. The plan template's "Constitution Check" gate enforces this before implementation begins.
- **Agent governance:** The agent works autonomously on implementation tasks. Architectural decisions, fundamental design questions, tech stack choices, and dependency selections MUST be proposed and approved before proceeding.

**Version**: 1.0.0 | **Ratified**: 2026-03-06 | **Last Amended**: 2026-03-06

.specify/memory/design-system.md (new file, +85)
# Design System

This document defines the visual design language for fete. All frontend implementation must follow these specifications.

## Principles

- **Mobile-first / App-native feel** — not a classic website. Think installed app, not browser page.
- **Desktop:** centered narrow column (max ~480px), gradient background fills the rest.
- **Generous whitespace** — elements breathe, nothing cramped.
- **WCAG AA contrast** as baseline for all color choices.
- **Accessibility is a baseline requirement** — not an afterthought (per project statutes).

## Color Palette: Electric Dusk

Chosen for best balance of style, broad appeal, and accessibility.

| Role              | Hex       | Description       |
|-------------------|-----------|-------------------|
| Gradient Start    | `#F06292` | Pink              |
| Gradient Mid      | `#AB47BC` | Purple            |
| Gradient End      | `#5C6BC0` | Indigo blue       |
| Accent (CTAs)     | `#FF7043` | Deep orange       |
| Text (light mode) | `#1C1C1E` | Near black        |
| Text (dark mode)  | `#FFFFFF` | White             |
| Surface (light)   | `#FFF5F8` | Pinkish white     |
| Surface (dark)    | `#1B1730` | Deep indigo-black |
| Card (light)      | `#FFFFFF` | White             |
| Card (dark)       | `#2A2545` | Muted indigo      |

### Primary Gradient

```css
background: linear-gradient(135deg, #F06292 0%, #AB47BC 50%, #5C6BC0 100%);
```

### Usage Rules

- Gradient for hero/splash areas and page backgrounds — not as direct text background for body copy.
- Cards and content areas use solid surface colors with high-contrast text.
- Accent color (`#FF7043`) for primary action buttons with dark text (`#1C1C1E`).
- White text on gradient mid/end passes WCAG AA (4.82:1 and 4.86:1).
- White text on gradient start passes AA-large (3.06:1) — use for headings 18px+ only.

## Typography: Sora

Contemporary geometric sans-serif with slightly rounded terminals. Modern and friendly without being childish.

- **Font:** Sora
- **License:** SIL Open Font License 1.1 (OFL)
- **Source:** https://github.com/sora-xor/sora-font
- **Format:** Self-hosted WOFF2. No external CDN. No Google Fonts.
- **Weights:** 400 (Regular), 500 (Medium), 600 (SemiBold), 700 (Bold), 800 (ExtraBold)

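Self-hosting these weights comes down to one `@font-face` rule per weight. A minimal sketch, assuming the WOFF2 files live under `/fonts/` (the path and file names are illustrative, not the project's actual asset layout):

```css
/* Illustrative: one rule per self-hosted weight, no external CDN. */
@font-face {
  font-family: "Sora";
  src: url("/fonts/sora-regular.woff2") format("woff2");
  font-weight: 400;
  font-style: normal;
  font-display: swap;
}

@font-face {
  font-family: "Sora";
  src: url("/fonts/sora-bold.woff2") format("woff2");
  font-weight: 700;
  font-style: normal;
  font-display: swap;
}
```
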
### Weight Usage

| Context         | Weight  | Size guideline |
|-----------------|---------|----------------|
| Body text       | 400     | 0.85–1rem      |
| Labels          | 600–700 | 0.8–0.9rem     |
| Headlines       | 700–800 | 1.2–1.6rem     |
| Buttons         | 700–800 | 1rem           |
| Small/meta text | 400–500 | 0.75–0.85rem   |

## Component Patterns

### Card-Style Form Fields

- Rounded corners (`border-radius: 14px`)
- Generous padding (`0.9rem 1rem`)
- White/card-colored background on gradient pages
- Subtle shadow (`box-shadow: 0 2px 8px rgba(0,0,0,0.1)`)
- Bold label (font-weight 700), regular-weight input text

### Buttons

- Rounded corners matching card fields (`border-radius: 14px`)
- Accent color background with dark text
- Bold/ExtraBold weight (700–800)
- Subtle shadow for depth

### Layout

- Mobile: full-width content with horizontal padding (~1.2rem)
- Desktop: centered column, max-width ~480px, gradient background fills viewport
- Vertical spacing between elements: ~0.75rem (compact), ~1.2rem (sections)

.specify/memory/ideen.md (new file, +80)
# fete

## Principles

* Should run as a PWA in the browser
  * So it feels like a normal app
* Should be a small helper you can quickly use on the fly
* I want to be able to self-host it
  * That effectively rules out AI features, and that is okay
* Privacy as a first-class citizen
  * The product design itself must be conceived with privacy in mind
  * No registration, no login required (just e.g. a code to find the "room" or similar)
  * Alternatively, any data that accrues could also be stored in local storage

## The Idea

An alternative to Facebook event groups or Telegram groups, in which an event is announced and attendance is confirmed.

### Target Picture

A person creates an event via the app and somehow sends their friends an invitation via a link. Friends can report back whether they are coming or not.

## Collected Thoughts

* A kind of landing page for each event
  * One link per event that you can e.g. drop into a WhatsApp group
  * What, how, when, where?
  * Somehow customizable too, if you want that
* RSVP: "I'm coming" (with name) / "I'm not coming" (optionally with name)
  * Stored server-side + remembered in localStorage
  * Duplicate protection: no perfect protection without accounts, but device binding via localStorage is enough against accidental double entries
  * Against malicious actors (spamming fake names etc.) little can be done without accounts — acceptable risk (cf. spliit)
* "Bookmark/follow event": purely local, no server contact, no name needed
  * Solves the multi-device problem: RSVPed on the phone, on the laptop just press "Follow"
  * Also useful for the undecided who want to bookmark the event for now
* View for the organizer:
  * Update the event
  * See registered guests, can remove entries if needed
* Feature ideas:
  * Calendar integration: .ics download + optional webcal:// for live updates on changes
  * Changes to the original content (e.g. changed date/location) are highlighted somehow
  * Organizer can post update messages in the event; per device, localStorage remembers what has already been seen (badge/highlight for new updates)
  * Generate a QR code (e.g. for posters/flyers)
  * Expiry date as a mandatory field, after which all stored data is deleted
  * Overview list in localStorage: all events you have RSVPed to or bookmarked (cf. spliit)
* Security/abuse protection:
  * Non-guessable event tokens (e.g. UUIDs)
  * Event creation is open, no login/password/invite code needed
  * Max active events as server-side configuration (env variable)
  * Honeypot fields in forms (hidden field that only bots fill in → ignore the request)
* Scope boundaries
  * NO chat
  * NO discovery feature within the app: without an access link, nothing works
  * NO planning of the event! I.e. no "who does/brings what?", "what are we even doing?"

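The honeypot idea can be sketched as a tiny server-side check. This is a hedged illustration, not project code: the hidden field name `website` and the `RsvpForm` shape are invented for this example.

```typescript
// Hedged sketch of a honeypot check. The hidden form field name ("website")
// is invented for this example; bots tend to fill every field they see,
// while the field is visually hidden from humans via CSS.
interface RsvpForm {
  name: string;
  attending: boolean;
  website?: string; // honeypot: humans leave it empty
}

// Returns true when the submission should be silently ignored.
function isLikelyBot(form: RsvpForm): boolean {
  return typeof form.website === "string" && form.website.trim().length > 0;
}
```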
## Settled Design Decisions

The following points have already been resolved in discussions and are binding.

* RSVP system:
  * One link per event (NOT individual invitation links per person — too cumbersome for the organizer)
  * Guests provide a name when RSVPing; that is enough
  * Duplicates from accidental repeat sign-ups: localStorage device binding suffices as protection
  * Deliberate double sign-ups/spam: acceptable risk, the organizer can delete entries manually
  * Device sync without accounts cannot be solved cleanly, and that is okay
* Abuse protection:
  * Rate limiting: deliberately left out — too much infra overhead for the scope
  * Captcha: deliberately left out — either a privacy problem (Google) or ugly
  * Admin password/invite code for event creation: deliberately left out — the app should be shareable organically
  * Experience: the Spliit instance also runs completely open without notable problems
  * Instead, pragmatic measures: non-guessable tokens, mandatory expiry date, max events via configuration, honeypot fields
* Target audience:
  * Primarily circles of friends, not the general public
  * Still: the app is exposed to the internet, so basic hardening is required
* Architecture (already decided):
  * SPA + RESTful API backend, no SSR
  * Database: PostgreSQL, hosted separately (not in the app container — the hoster runs their own Postgres)
  * Organizer authentication: two separate UUIDs per event — a public event token (in the URL, for guests) and a secret organizer token (in localStorage, for management). The internal DB ID is an implementation detail.
  * The app ships as a single Docker container and connects to the external Postgres instance via configuration (env variable)
* Tech stack:
  * Backend: Java (latest LTS version), Spring Boot, Maven, hexagonal/onion architecture
  * Frontend: Vue 3 (with Vite as bundler, TypeScript, Vue Router)
* Architecture decisions NOT YET made (nothing here may be decided unilaterally!):
  * (currently no open architecture decisions)

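The two-token decision above can be sketched in TypeScript. This is a hedged illustration, not project code: `StoredEvent`, the `fete.event.` key prefix, and the `KV` storage abstraction are all invented for this example.

```typescript
// Sketch of the two-token model. The public event token is part of the
// shareable URL; the secret organizer token exists only in the organizer's
// localStorage, so only that device can manage the event.

interface StoredEvent {
  eventToken: string;      // public, part of the URL shared with guests
  organizerToken?: string; // secret, present only on the organizer's device
}

// Minimal storage abstraction so the logic also runs outside the browser;
// in the app this would be window.localStorage.
type KV = {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
};

function rememberEvent(store: KV, event: StoredEvent): void {
  store.setItem(`fete.event.${event.eventToken}`, JSON.stringify(event));
}

function isOrganizer(store: KV, eventToken: string): boolean {
  const raw = store.getItem(`fete.event.${eventToken}`);
  if (raw === null) return false;
  const event: StoredEvent = JSON.parse(raw);
  return typeof event.organizerToken === "string" && event.organizerToken.length > 0;
}
```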
.specify/memory/implementation-phases.md (new file, +123)
# Implementation Order

Sequential implementation order for all user stories. No parallelization — one story at a time.

## Progress Tracker

- [ ] US-1 Create event
- [ ] US-2 View event page
- [ ] US-3 RSVP
- [ ] US-5 Edit event
- [ ] US-4 Manage guest list
- [ ] US-18 Cancel event
- [ ] US-19 Delete event
- [ ] US-12 Auto-cleanup after expiry
- [ ] US-13 Limit active events
- [ ] US-6 Bookmark event
- [ ] US-7 Local event overview
- [ ] US-17 Dark/light mode
- [ ] US-8 Calendar integration
- [ ] US-11 QR code
- [ ] US-9 Change highlights
- [ ] US-10a Update messages
- [ ] US-10b New-update indicator
- [ ] US-15 Color themes
- [ ] US-16 Unsplash header images
- [ ] US-14 PWA install

## Prerequisites

All setup tasks (T-1 through T-5) are complete.

## Order Rationale

### Increment 1: Minimal Viable Event — US-1, US-2, US-3

The vertical slice. After these three stories, the app is usable: an organizer creates an event, shares the link, guests view it and RSVP.

| # | Story | Depends on | Delivers |
|---|-------|------------|----------|
| 1 | US-1: Create event | T-4 | Event creation with tokens, localStorage |
| 2 | US-2: View event page | US-1 | Public event page with attendee list, expired state |
| 3 | US-3: RSVP | US-2 | Attend/decline flow, localStorage dedup |

### Increment 2: Organizer Toolset — US-5, US-4

The organizer needs to correct mistakes and moderate spam before the app goes to real users.

| # | Story | Depends on | Delivers |
|---|-------|------------|----------|
| 4 | US-5: Edit event | US-1 | Edit all fields, expiry-must-be-future constraint |
| 5 | US-4: Manage guest list | US-1 | View RSVPs, delete spam entries |

US-5 before US-4: US-9 (change highlights) depends on US-5, so getting it done early unblocks Phase 3 work.

### Increment 3: Event Lifecycle — US-18, US-19, US-12, US-13

Complete lifecycle management. After this increment, the privacy guarantee is enforced and abuse prevention is in place.

| # | Story | Depends on | Delivers | Activates deferred ACs |
|---|-------|------------|----------|------------------------|
| 6 | US-18: Cancel event | US-1 | One-way cancellation with optional message, expiry adjustment | US-2 AC5, US-3 AC11 |
| 7 | US-19: Delete event | US-1 | Immediate permanent deletion, localStorage cleanup | US-2 AC6 (partial) |
| 8 | US-12: Auto-cleanup | US-1 | Scheduled deletion after expiry, silent logging | US-2 AC6 (complete) |
| 9 | US-13: Event limit | US-1 | `MAX_ACTIVE_EVENTS` env var, server-side enforcement | — |

When implementing US-18, US-19, and US-12: immediately activate their deferred ACs in US-2 and US-3 (cancelled state display, RSVP blocking, event-not-found handling). These stories exist at this point — no reason to defer further.

### Increment 4: App Shell — US-6, US-7, US-17

The app gets a home screen. Users can find their events without the original link.

| # | Story | Depends on | Delivers |
|----|-------|------------|----------|
| 10 | US-6: Bookmark event | US-2 | Client-only bookmark, no server contact |
| 11 | US-7: Local event overview | — | Root page `/` with all tracked events from localStorage |
| 12 | US-17: Dark/light mode | — | System preference detection, manual toggle, localStorage persistence |

US-6 before US-7: bookmarking populates localStorage entries that the overview displays. Without US-6, the overview only shows created and RSVPed events.

US-17 here (not in a late phase): event color themes (US-15) must account for dark/light mode. Having it in place before US-15 avoids rework.

### Increment 5: Rich Event Page — US-8, US-11, US-9, US-10a, US-10b

Features that enrich the event page for guests and organizers.

| # | Story | Depends on | Delivers |
|----|-------|------------|----------|
| 13 | US-8: Calendar .ics + webcal | US-2 | RFC 5545 download, webcal subscription, STATUS:CANCELLED support |
| 14 | US-11: QR code | US-2 | Server-generated QR, SVG/PNG download |
| 15 | US-9: Change highlights | US-2, US-5 | Field-level change indicators, localStorage-based read tracking |
| 16 | US-10a: Update messages | US-1, US-2 | Organizer posts, reverse-chronological display, delete capability |
| 17 | US-10b: New-update indicator | US-10a | localStorage-based unread badge |

US-8 benefits from US-18 being complete: `STATUS:CANCELLED` in .ics can be implemented directly instead of deferred.

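For orientation, a cancelled event in an RFC 5545 feed carries `STATUS:CANCELLED` and an incremented `SEQUENCE`. A minimal hedged sketch (UID, dates, and summary are made up for illustration):

```
BEGIN:VCALENDAR
VERSION:2.0
PRODID:-//fete//EN
BEGIN:VEVENT
UID:example-event-token@fete.example
DTSTAMP:20260601T120000Z
DTSTART:20260620T180000Z
SUMMARY:Garden party
STATUS:CANCELLED
SEQUENCE:1
END:VEVENT
END:VCALENDAR
```
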
US-9 benefits from US-5 being complete (increment 2): no dependency waiting.

### Increment 6: Visual Polish & PWA — US-15, US-16, US-14

Final layer: visual customization and native app feel.

| # | Story | Depends on | Delivers |
|----|-------|------------|----------|
| 18 | US-15: Color themes | US-1, US-2 | Predefined theme picker, event-scoped styling |
| 19 | US-16: Unsplash images | US-1, US-2 | Server-proxied search, local storage, attribution |
| 20 | US-14: PWA | T-4 | Manifest, service worker, installability |

US-15 before US-16: themes are self-contained, Unsplash adds external API complexity.

US-14 last: PWA caching is most effective when the app has all its pages and assets. Service worker strategy can cover everything in one pass.

Note: US-12 AC2 (delete stored header images on expiry) remains deferred until US-16 is implemented. When implementing US-16, activate this AC in US-12.

## Deferred AC Activation Schedule

| When implementing | Activate deferred AC in | AC description |
|-------------------|-------------------------|----------------|
| US-18 (#6) | US-2 AC5 | Cancelled state display |
| US-18 (#6) | US-3 AC11 | RSVP blocked on cancelled event |
| US-18 (#6) | US-8 AC9 | STATUS:CANCELLED in .ics (if US-8 not yet done — in this order, US-8 comes later, so implement directly) |
| US-19 (#7) | US-2 AC6 | Event not found (organizer deletion) |
| US-12 (#8) | US-2 AC6 | Event not found (expiry deletion) |
| US-16 (#19) | US-12 AC2 | Delete stored header images on expiry |

.specify/memory/personas.md (new file, +95)
# Personas

<!-- Extracted from user stories. These are the roles referenced across all stories. -->

## Event Organizer

**Description:** A person who creates and manages an event using the app.

**Goals and motivations:**
- Create an event quickly without needing to register or log in
- Share the event with friends via a simple link
- Keep event details up to date
- See who is attending and manage the guest list
- Optionally personalize the event page's appearance

**Capabilities:**
- Create events (US-1)
- Edit event details (US-5)
- Cancel an event with an optional cancellation message (US-18)
- Delete an event immediately and permanently (US-19)
- View and manage the guest list — including removing entries (US-4)
- Post update messages to the event (US-10a)
- Choose a color theme for the event page (US-15)
- Select a header image from Unsplash (US-16)
- Generate and download a QR code for the event (US-11)
- View local event overview (US-7, shared with Guest — story uses "user" role)
- Switch between dark/light mode (US-17, shared with Guest — story uses "user" role)

**Limitations:**
- Organizer access is device-bound via the organizer token in localStorage (no cross-device organizer access without accounts)
- Cannot moderate beyond their own event (no global admin role)
- Losing the device or clearing localStorage means losing organizer access

**Appears in:** US-1, US-4, US-5, US-7, US-10a, US-11, US-15, US-16, US-17, US-18, US-19

---

## Guest

**Description:** A person who receives an event link and interacts with the event page. May be attending, declining, or just browsing.

**Goals and motivations:**
- View event details (what, when, where, who else is coming)
- RSVP to indicate attendance or non-attendance
- Keep track of events they're interested in across devices (via bookmarks)
- Add events to their personal calendar
- Stay informed about event updates or changes

**Capabilities:**
- View the event landing page (US-2)
- RSVP to an event (US-3)
- Bookmark an event locally (US-6)
- View the local event overview (US-7)
- Download .ics / subscribe via webcal (US-8)
- See highlighted changes to event details (US-9)
- See organizer update messages (US-10a)
- See new-update indicator for unread messages (US-10b)
- Download QR code for the event (US-11)
- Install the app as a PWA (US-14)
- Switch between dark/light mode (US-17)

**Limitations:**
- Cannot edit event details or manage the guest list
- Cannot post update messages
- RSVP duplicate protection is device-bound (localStorage), not identity-bound
- Local data (bookmarks, RSVP records, preferences) is device-bound and not synced across devices

**Note on sub-states:** A guest may be in different states relative to an event (RSVPed attending, RSVPed not attending, bookmarked only, or just viewing). These are contextual states within the same persona — the same person moves between them. The user stories handle these states through their acceptance criteria rather than defining separate roles.

**Appears in:** US-2, US-3, US-6, US-7, US-8, US-9, US-10a (as reader), US-10b, US-11, US-12, US-14, US-17

---

## Self-Hoster

**Description:** A technical person who deploys and operates an instance of the app on their own infrastructure.

**Goals and motivations:**
- Run a private, self-hosted event platform for their community
- Control resource usage and prevent abuse on their instance
- Configure the app via environment variables without modifying code
- Deploy easily using Docker

**Capabilities:**
- Configure maximum active events (US-13)
- Configure Unsplash API key for image search (US-16)
- Configure database connection and other runtime settings (T-2)
- Deploy using Docker with docker-compose (T-2)

**Limitations:**
- Not a user role in the app's UI — interacts only through deployment and configuration
- Cannot moderate individual events (that is the organizer's role)
- App design decisions are not influenced by the self-hoster at runtime

**Appears in:** US-13, US-16 (configuration aspect)

.specify/memory/plans/backpressure-agentic-coding.md (new file, +487)
---
date: 2026-03-04T01:40:21+01:00
git_commit: a55174b32333d0f46a55d94a50604344d1ba33f6
branch: master
topic: "Backpressure for Agentic Coding"
tags: [plan, backpressure, hooks, checkstyle, spotbugs, archunit, quality]
status: complete
---

# Backpressure for Agentic Coding — Implementation Plan

## Overview

Implement automated feedback mechanisms (backpressure) that force the AI agent to self-correct before a human reviews the output. The approach follows the 90/10 rule: 90% deterministic constraints (types, linting, architecture tests), 10% agentic review.

## Current State vs. Desired State

| Layer | Backend (now) | Backend (after) | Frontend (now) | Frontend (after) |
|-------|---------------|-----------------|----------------|------------------|
| Type System | Java 25 (strong) | *unchanged* | TS strict + `noUncheckedIndexedAccess` | *unchanged* |
| Static Analysis | **None** | Checkstyle (Google Style) + SpotBugs | ESLint + oxlint + Prettier | *unchanged* |
| Architecture Tests | **None** | ArchUnit (hexagonal enforcement) | N/A | N/A |
| Unit Tests | JUnit 5 | JUnit 5 + fail-fast (`skipAfterFailureCount: 1`) | Vitest | Vitest + fail-fast (`bail: 1`) |
| PostToolUse Hook | **None** | `./mvnw compile -q` (incl. Checkstyle) | **None** | `vue-tsc --noEmit` |
| Stop Hook | **None** | `./mvnw test` | **None** | `npm run test:unit -- --run` |

## What We're NOT Doing

- **Error Prone** — overlaps with SpotBugs, Java 25 compatibility uncertain, more invasive setup
- **Custom ESLint rules** — add later when recurring agent mistakes are observed
- **MCP LSP Server** — experimental, high setup cost, unclear benefit vs. hooks
- **Pre-commit git hooks** — orthogonal concern, not part of this plan
- **CI/CD pipeline** — out of scope, this is about local agent feedback

## Design Decisions

- **Hook matchers** are regex on **tool names** (not file paths). File-path filtering must happen inside the hook script via `tool_input.file_path` from stdin JSON.
- **Context-efficient output**: ✓ on success, full error on failure. Don't waste the agent's context window with passing output.
- **Fail-fast**: one failure at a time. Prevents context-switching between multiple bugs.
- **Stop hook** checks `git status --porcelain` to determine if code files changed. Skips test runs on conversational responses.
- **Checkstyle** bound to Maven `validate` phase — automatically triggered by `./mvnw compile`, which means the PostToolUse hook gets Checkstyle for free.
- **SpotBugs** bound to Maven `verify` phase — NOT hooked, run manually via `./mvnw verify`.
- **ArchUnit**: use `archunit-junit5` 1.4.1 only. Do NOT use the `archunit-hexagonal` addon (dormant since Jul 2023, pulls in Kotlin, pinned to ArchUnit 1.0.1, expects different package naming).

---

## Task List

Tasks are ordered by priority. Execute one task per iteration, in order. Each task includes the exact changes and a verification command.

### T-BP-01: Create backend compile-check hook script `[x]`

**File**: `.claude/hooks/backend-compile-check.sh` (new; create the `.claude/hooks/` directory if needed)

```bash
#!/usr/bin/env bash
set -euo pipefail

# Read hook input from stdin (JSON with tool_input.file_path)
INPUT=$(cat)
FILE_PATH=$(echo "$INPUT" | python3 -c "import sys,json; print(json.load(sys.stdin).get('tool_input',{}).get('file_path',''))" 2>/dev/null || echo "")

# Only run for Java files under backend/
case "$FILE_PATH" in
  */backend/src/*.java) ;;
  *) exit 0 ;;
esac

cd "$CLAUDE_PROJECT_DIR/backend"

# Run compile (includes validate phase → Checkstyle if configured)
# Context-efficient: suppress output on success, show full output on failure
if OUTPUT=$(./mvnw compile -q 2>&1); then
  echo '{"hookSpecificOutput":{"hookEventName":"PostToolUse","additionalContext":"✓ Backend compile passed."}}'
else
  ESCAPED=$(echo "$OUTPUT" | python3 -c "import sys,json; print(json.dumps(sys.stdin.read()))")
  echo "{\"hookSpecificOutput\":{\"hookEventName\":\"PostToolUse\",\"additionalContext\":$ESCAPED}}"
fi
```

Make executable: `chmod +x .claude/hooks/backend-compile-check.sh`

**Verify**: `echo '{"tool_input":{"file_path":"backend/src/main/java/de/fete/FeteApplication.java"}}' | CLAUDE_PROJECT_DIR=. .claude/hooks/backend-compile-check.sh` → should output JSON with "✓ Backend compile passed."

**Verify skip**: `echo '{"tool_input":{"file_path":"README.md"}}' | CLAUDE_PROJECT_DIR=. .claude/hooks/backend-compile-check.sh` → should exit 0 silently.

---
### T-BP-02: Create frontend type-check hook script `[x]`

**File**: `.claude/hooks/frontend-type-check.sh` (new)

```bash
#!/usr/bin/env bash
set -euo pipefail

# Read hook input from stdin (JSON with tool_input.file_path)
INPUT=$(cat)
FILE_PATH=$(echo "$INPUT" | python3 -c "import sys,json; print(json.load(sys.stdin).get('tool_input',{}).get('file_path',''))" 2>/dev/null || echo "")

# Only run for TS/Vue files under frontend/
case "$FILE_PATH" in
  */frontend/src/*.ts|*/frontend/src/*.vue) ;;
  *) exit 0 ;;
esac

cd "$CLAUDE_PROJECT_DIR/frontend"

# Run type-check
# Context-efficient: suppress output on success, show full output on failure
if OUTPUT=$(npx vue-tsc --noEmit 2>&1); then
  echo '{"hookSpecificOutput":{"hookEventName":"PostToolUse","additionalContext":"✓ Frontend type-check passed."}}'
else
  ESCAPED=$(echo "$OUTPUT" | python3 -c "import sys,json; print(json.dumps(sys.stdin.read()))")
  echo "{\"hookSpecificOutput\":{\"hookEventName\":\"PostToolUse\",\"additionalContext\":$ESCAPED}}"
fi
```

Make executable: `chmod +x .claude/hooks/frontend-type-check.sh`

**Verify**: `echo '{"tool_input":{"file_path":"frontend/src/App.vue"}}' | CLAUDE_PROJECT_DIR=. .claude/hooks/frontend-type-check.sh` → should output JSON with "✓ Frontend type-check passed."

---
### T-BP-03: Create stop hook script (test gate) `[x]`

**File**: `.claude/hooks/run-tests.sh` (new)

Runs after the agent finishes its response. Checks `git status` for changed source files and runs the relevant test suites. Skips on conversational responses (no code changes).

```bash
#!/usr/bin/env bash
set -euo pipefail

cd "$CLAUDE_PROJECT_DIR"

# Check for uncommitted changes in backend/frontend source
HAS_BACKEND=$(git status --porcelain backend/src/ 2>/dev/null | head -1)
HAS_FRONTEND=$(git status --porcelain frontend/src/ 2>/dev/null | head -1)

# Nothing changed — skip
if [[ -z "$HAS_BACKEND" && -z "$HAS_FRONTEND" ]]; then
  exit 0
fi

ERRORS=""
PASSED=""

# Run backend tests if Java sources changed
# ($'\n' yields a real newline; a plain "\n" inside double quotes would not)
if [[ -n "$HAS_BACKEND" ]]; then
  if OUTPUT=$(cd backend && ./mvnw test -q 2>&1); then
    PASSED+="✓ Backend tests passed. "
  else
    ERRORS+=$'Backend tests failed:\n'"$OUTPUT"$'\n\n'
  fi
fi

# Run frontend tests if TS/Vue sources changed
if [[ -n "$HAS_FRONTEND" ]]; then
  if OUTPUT=$(cd frontend && npm run test:unit -- --run 2>&1); then
    PASSED+="✓ Frontend tests passed. "
  else
    ERRORS+=$'Frontend tests failed:\n'"$OUTPUT"$'\n\n'
  fi
fi

if [[ -n "$ERRORS" ]]; then
  ESCAPED=$(printf '%s' "$ERRORS" | python3 -c "import sys,json; print(json.dumps(sys.stdin.read()))")
  echo "{\"hookSpecificOutput\":{\"hookEventName\":\"Stop\",\"additionalContext\":$ESCAPED}}"
else
  echo "{\"hookSpecificOutput\":{\"hookEventName\":\"Stop\",\"additionalContext\":\"$PASSED\"}}"
fi
```

Make executable: `chmod +x .claude/hooks/run-tests.sh`

**Verify**: `CLAUDE_PROJECT_DIR=. .claude/hooks/run-tests.sh` → if no uncommitted changes in source dirs, should exit 0 silently.

---
### T-BP-04: Create `.claude/settings.json` with hook configuration `[x]`

**File**: `.claude/settings.json` (new — do NOT modify `.claude/settings.local.json`, that has permissions)

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          {
            "type": "command",
            "command": "\"$CLAUDE_PROJECT_DIR/.claude/hooks/backend-compile-check.sh\"",
            "timeout": 120
          },
          {
            "type": "command",
            "command": "\"$CLAUDE_PROJECT_DIR/.claude/hooks/frontend-type-check.sh\"",
            "timeout": 60
          }
        ]
      }
    ],
    "Stop": [
      {
        "hooks": [
          {
            "type": "command",
            "command": "\"$CLAUDE_PROJECT_DIR/.claude/hooks/run-tests.sh\"",
            "timeout": 300
          }
        ]
      }
    ]
  }
}
```

**Verify**: File exists and is valid JSON: `python3 -c "import json; json.load(open('.claude/settings.json'))"`

---
### T-BP-05: Configure Vitest fail-fast `[x]`

**File**: `frontend/vitest.config.ts` (modify)

Add `bail: 1` to the test configuration object. The result should look like:

```typescript
export default mergeConfig(
  viteConfig,
  defineConfig({
    test: {
      environment: 'jsdom',
      exclude: [...configDefaults.exclude, 'e2e/**'],
      root: fileURLToPath(new URL('./', import.meta.url)),
      bail: 1,
    },
  }),
)
```

**Verify**: `cd frontend && npm run test:unit -- --run` passes.

---
### T-BP-06: Configure Maven Surefire fail-fast `[x]`

**File**: `backend/pom.xml` (modify)

Add `maven-surefire-plugin` configuration within `<build><plugins>`:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-surefire-plugin</artifactId>
  <configuration>
    <!-- Fail-fast: stop on first test failure -->
    <skipAfterFailureCount>1</skipAfterFailureCount>
  </configuration>
</plugin>
```

**Verify**: `cd backend && ./mvnw test` passes.

---
### T-BP-07: Add Checkstyle plugin + fix violations `[x]`

**File**: `backend/pom.xml` (modify)

Add `maven-checkstyle-plugin` within `<build><plugins>`:

```xml
<plugin>
  <groupId>org.apache.maven.plugins</groupId>
  <artifactId>maven-checkstyle-plugin</artifactId>
  <version>3.6.0</version>
  <dependencies>
    <dependency>
      <groupId>com.puppycrawl.tools</groupId>
      <artifactId>checkstyle</artifactId>
      <version>13.3.0</version>
    </dependency>
  </dependencies>
  <configuration>
    <configLocation>google_checks.xml</configLocation>
    <consoleOutput>true</consoleOutput>
    <failOnViolation>true</failOnViolation>
    <violationSeverity>warning</violationSeverity>
  </configuration>
  <executions>
    <execution>
      <id>checkstyle-validate</id>
      <phase>validate</phase>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Then run `cd backend && ./mvnw checkstyle:check` to find violations. Fix all violations in the existing source files (`FeteApplication.java`, `HealthController.java`, `FeteApplicationTest.java`, all `package-info.java` files). Google Style requires: 2-space indentation, a specific import order, Javadoc on public types, and a max line length of 100.

**Verify**: `cd backend && ./mvnw checkstyle:check` passes with zero violations AND `cd backend && ./mvnw compile` passes (Checkstyle now runs during the validate phase).

---
### T-BP-08: Add SpotBugs plugin + verify `[x]`

**File**: `backend/pom.xml` (modify)

Add `spotbugs-maven-plugin` within `<build><plugins>`:

```xml
<plugin>
  <groupId>com.github.spotbugs</groupId>
  <artifactId>spotbugs-maven-plugin</artifactId>
  <version>4.9.8.2</version>
  <configuration>
    <effort>Max</effort>
    <threshold>Low</threshold>
    <xmlOutput>true</xmlOutput>
    <failOnError>true</failOnError>
  </configuration>
  <executions>
    <execution>
      <goals>
        <goal>check</goal>
      </goals>
    </execution>
  </executions>
</plugin>
```

Run `cd backend && ./mvnw verify` — if SpotBugs finds issues, fix them.

**Verify**: `cd backend && ./mvnw verify` passes (includes compile + test + SpotBugs).

---
### T-BP-09: Add ArchUnit dependency + write architecture tests `[x]`

**File 1**: `backend/pom.xml` (modify) — add within `<dependencies>`:

```xml
<dependency>
  <groupId>com.tngtech.archunit</groupId>
  <artifactId>archunit-junit5</artifactId>
  <version>1.4.1</version>
  <scope>test</scope>
</dependency>
```

Do NOT use the `archunit-hexagonal` addon.

**File 2**: `backend/src/test/java/de/fete/HexagonalArchitectureTest.java` (new)

```java
package de.fete;

import com.tngtech.archunit.core.importer.ImportOption;
import com.tngtech.archunit.junit.AnalyzeClasses;
import com.tngtech.archunit.junit.ArchTest;
import com.tngtech.archunit.lang.ArchRule;

import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.classes;
import static com.tngtech.archunit.lang.syntax.ArchRuleDefinition.noClasses;
import static com.tngtech.archunit.library.Architectures.onionArchitecture;

@AnalyzeClasses(packages = "de.fete", importOptions = ImportOption.DoNotIncludeTests.class)
class HexagonalArchitectureTest {

  @ArchTest
  static final ArchRule onion_architecture_is_respected = onionArchitecture()
      .domainModels("de.fete.domain.model..")
      .domainServices("de.fete.domain.port.in..", "de.fete.domain.port.out..")
      .applicationServices("de.fete.application.service..")
      .adapter("web", "de.fete.adapter.in.web..")
      .adapter("persistence", "de.fete.adapter.out.persistence..")
      .adapter("config", "de.fete.config..");

  @ArchTest
  static final ArchRule domain_must_not_depend_on_adapters = noClasses()
      .that().resideInAPackage("de.fete.domain..")
      .should().dependOnClassesThat().resideInAPackage("de.fete.adapter..");

  @ArchTest
  static final ArchRule domain_must_not_depend_on_application = noClasses()
      .that().resideInAPackage("de.fete.domain..")
      .should().dependOnClassesThat().resideInAPackage("de.fete.application..");

  @ArchTest
  static final ArchRule domain_must_not_depend_on_config = noClasses()
      .that().resideInAPackage("de.fete.domain..")
      .should().dependOnClassesThat().resideInAPackage("de.fete.config..");

  @ArchTest
  static final ArchRule inbound_ports_must_be_interfaces = classes()
      .that().resideInAPackage("de.fete.domain.port.in..")
      .should().beInterfaces();

  @ArchTest
  static final ArchRule outbound_ports_must_be_interfaces = classes()
      .that().resideInAPackage("de.fete.domain.port.out..")
      .should().beInterfaces();

  @ArchTest
  static final ArchRule domain_must_not_use_spring = noClasses()
      .that().resideInAPackage("de.fete.domain..")
      .should().dependOnClassesThat().resideInAPackage("org.springframework..");

  @ArchTest
  static final ArchRule web_must_not_depend_on_persistence = noClasses()
      .that().resideInAPackage("de.fete.adapter.in.web..")
      .should().dependOnClassesThat().resideInAPackage("de.fete.adapter.out.persistence..");

  @ArchTest
  static final ArchRule persistence_must_not_depend_on_web = noClasses()
      .that().resideInAPackage("de.fete.adapter.out.persistence..")
      .should().dependOnClassesThat().resideInAPackage("de.fete.adapter.in.web..");
}
```

**Verify**: `cd backend && ./mvnw test` passes and the output shows `HexagonalArchitectureTest` with 9 tests.

---
### T-BP-10: Update CLAUDE.md `[x]`

**File**: `CLAUDE.md` (modify)

Add two rows to the Build Commands table:

| What | Command |
|------|---------|
| Backend checkstyle | `cd backend && ./mvnw checkstyle:check` |
| Backend full verify | `cd backend && ./mvnw verify` |

---
### T-BP-11: Final verification `[x]`

Run all verification commands to confirm the complete backpressure stack works:

1. `test -x .claude/hooks/backend-compile-check.sh`
2. `test -x .claude/hooks/frontend-type-check.sh`
3. `test -x .claude/hooks/run-tests.sh`
4. `python3 -c "import json; json.load(open('.claude/settings.json'))"`
5. `cd backend && ./mvnw verify` (triggers: Checkstyle → compile → test w/ ArchUnit → SpotBugs)
6. `cd frontend && npm run test:unit -- --run`
7. `echo '{"tool_input":{"file_path":"backend/src/main/java/de/fete/FeteApplication.java"}}' | CLAUDE_PROJECT_DIR=. .claude/hooks/backend-compile-check.sh`
8. `echo '{"tool_input":{"file_path":"frontend/src/App.vue"}}' | CLAUDE_PROJECT_DIR=. .claude/hooks/frontend-type-check.sh`
9. `echo '{"tool_input":{"file_path":"README.md"}}' | CLAUDE_PROJECT_DIR=. .claude/hooks/backend-compile-check.sh` (should be silent)

All commands must exit 0. If any fail, go back and fix the issue before marking complete.
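The checks above can be wrapped in a small fail-fast runner. This is a sketch shown with stand-in demo commands; substitute the nine verification commands from the list when running it in the repo:

```shell
# Run each check in order; stop and report at the first failure.
run_checks() {
  local c
  for c in "$@"; do
    if bash -c "$c" >/dev/null 2>&1; then
      echo "OK: $c"
    else
      echo "FAIL: $c"
      return 1
    fi
  done
}

# Demo with stand-in commands; pass the real verification commands instead.
run_checks "true" "echo demo" "false" "true" || echo "stopped at first failure"
```

Stopping at the first failing check mirrors the fail-fast principle used throughout this plan.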

---

## Addendum: Implementation Deviations (2026-03-04)

Changes made during implementation that deviate from or extend the original plan:

1. **Stop hook JSON schema**: The plan used `hookSpecificOutput` with `hookEventName: "Stop"` for the stop hook output. This is invalid — `hookSpecificOutput` is only supported for `PreToolUse`, `PostToolUse`, and `UserPromptSubmit` events. Fixed to use top-level `"decision": "approve"/"block"` with a `"reason"` field.

2. **Stop hook loop prevention**: Added a `stop_hook_active` check from stdin JSON to prevent infinite re-engagement loops. Not in the original plan.

3. **Context-efficient test output**: Added `logback-test.xml` (root level WARN), `redirectTestOutputToFile=true`, and `trimStackTrace=true` to the Surefire config. The stop hook script filters output to `[ERROR]` lines only, stripping Maven boilerplate. Not in the original plan — added after observing that raw test failure output consumed excessive context.

4. **Hook path matching**: Case patterns in the hook scripts were extended to match both absolute and relative file paths (`*/backend/src/*.java|backend/src/*.java`). The original plan only had `*/backend/src/*.java`, which doesn't match relative paths.

5. **Checkstyle `includeTestSourceDirectory`**: Set to `true` so test sources also follow Google Style. Not in the original plan. `FeteApplicationTest.java` was reformatted to 2-space indentation with correct import order (static imports first).

6. **ArchUnit field naming**: Changed from `snake_case` (`onion_architecture_is_respected`) to `camelCase` (`onionArchitectureIsRespected`) to comply with Google Checkstyle rules.
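The `[ERROR]`-line filtering from item 3 can be sketched as follows. The sample Maven output is illustrative, not taken from a real run:

```shell
# Keep only Maven [ERROR] lines, dropping [INFO] boilerplate.
# "|| true" keeps grep's no-match exit code from tripping set -e.
set -euo pipefail
OUTPUT='[INFO] Scanning for projects...
[ERROR] FeteApplicationTest.healthEndpoint expected 200 but was 500
[INFO] BUILD FAILURE
[ERROR] Please refer to target/surefire-reports for details.'
FILTERED=$(printf '%s\n' "$OUTPUT" | grep '^\[ERROR\]' || true)
printf '%s\n' "$FILTERED"
```

Only the two `[ERROR]` lines survive, which is what gets fed back into the agent's context.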

---

## References

- Research document: `docs/agents/research/2026-03-04-backpressure-agentic-coding.md`
- Geoffrey Huntley: [Don't waste your back pressure](https://ghuntley.com/pressure/)
- JW: [If you don't engineer backpressure, you'll get slopped](https://jw.hn/engineering-backpressure)
- HumanLayer: [Context-Efficient Backpressure for Coding Agents](https://www.hlyr.dev/blog/context-efficient-backpressure)
- Claude Code: [Hooks Reference](https://code.claude.com/docs/en/hooks)
- ArchUnit: [User Guide](https://www.archunit.org/userguide/html/000_Index.html)
504 .specify/memory/plans/e2e-testing-playwright-setup.md Normal file
@@ -0,0 +1,504 @@

---
date: 2026-03-05T10:29:08+00:00
git_commit: ffea279b54ad84be09bd0e82b3ed9c89a95fc606
branch: master
topic: "E2E Testing with Playwright — Setup & Initial Tests"
tags: [plan, e2e, playwright, testing, frontend, msw]
status: draft
---

# E2E Testing with Playwright — Setup & Initial Tests

## Overview

Set up Playwright E2E testing infrastructure for the fete Vue 3 frontend with a mocked backend (via `@msw/playwright` + `@msw/source`), write initial smoke and US-1 event-creation tests, and integrate them into CI.

## Current State Analysis

- **Vitest** is configured and already excludes `e2e/**` (`vitest.config.ts:10`)
- **Three routes** exist: `/` (Home), `/create` (EventCreate), `/events/:token` (EventStub)
- **No E2E framework** installed — no Playwright, no MSW
- **OpenAPI spec** at `backend/src/main/resources/openapi/api.yaml` defines `POST /events` with `CreateEventRequest` and `CreateEventResponse` schemas
- **`CreateEventResponse` lacks `example:` fields** — required for `@msw/source` mock generation
- **CI pipeline** (`.gitea/workflows/ci.yaml`) has backend-test and frontend-test jobs but no E2E step
- **`.gitignore`** does not include Playwright output directories

### Key Discoveries:

- `frontend/vitest.config.ts:10` — `e2e/**` already excluded from Vitest
- `frontend/vite.config.ts:18-25` — Dev proxy forwards `/api` → `localhost:8080`
- `frontend/package.json:8` — `dev` script runs `generate:api` first, then Vite
- `frontend/src/views/EventCreateView.vue` — Full form with client-side validation, API call via `openapi-fetch`, redirect to event stub on success
- `frontend/src/views/EventStubView.vue` — Shows "Event created!" confirmation with shareable link
- `frontend/src/views/HomeView.vue` — Empty state with "Create Event" CTA

## Desired End State

After this plan is complete:

- Playwright is installed and configured, Chromium-only
- `@msw/playwright` + `@msw/source` provide automatic API mocking from the OpenAPI spec
- A smoke test verifies the app loads and basic navigation works
- A US-1 E2E test covers the full event creation flow (form fill → mocked API → redirect → stub page)
- `npm run test:e2e` runs all E2E tests locally
- CI runs E2E tests after unit tests, uploading the report as an artifact on failure
- OpenAPI response schemas include `example:` fields for mock generation

### Verification:

```bash
cd frontend && npm run test:e2e   # all E2E tests pass locally
```

## What We're NOT Doing

- Firefox/WebKit browser testing — Chromium only for now
- Page Object Model pattern — premature with <5 tests
- CI caching of Playwright browser binaries — separate concern
- Full-stack E2E tests with a real Spring Boot backend
- E2E tests for US-2 through US-20 — only the US-1 flow + smoke test
- Service worker / PWA testing
- `data-testid` attributes — using accessible locators (`getByRole`, `getByLabel`) where possible

## Implementation Approach

Install Playwright and MSW packages, configure Playwright to spawn the Vite dev server, set up MSW to auto-generate handlers from the OpenAPI spec, then write two test files: a smoke test and a US-1 event creation flow test. Finally, add an E2E step to CI.

---
## Phase 1: Playwright Infrastructure

### Overview

Install dependencies, create configuration, add npm scripts, update `.gitignore`.

### Changes Required:

#### [x] 1. Install npm packages

**Command**: `cd frontend && npm install --save-dev @playwright/test @msw/playwright @msw/source msw`

Four packages:

- `@playwright/test` — Playwright test runner
- `msw` — Mock Service Worker core
- `@msw/playwright` — Playwright integration for MSW (intercepts at the network level via `page.route()`)
- `@msw/source` — Reads the OpenAPI spec and generates MSW request handlers

#### [x] 2. Install Chromium browser binary

**Command**: `cd frontend && npx playwright install --with-deps chromium`

Only Chromium — saves ~2 min vs. installing all browsers. `--with-deps` installs OS-level libraries.

#### [x] 3. Create `playwright.config.ts`

**File**: `frontend/playwright.config.ts`

```typescript
import { defineConfig, devices } from '@playwright/test'

export default defineConfig({
  testDir: './e2e',
  fullyParallel: true,
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 1 : undefined,
  reporter: process.env.CI ? 'github' : 'html',

  use: {
    baseURL: 'http://localhost:5173',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
  },

  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
  ],

  webServer: {
    command: 'npm run dev',
    url: 'http://localhost:5173',
    reuseExistingServer: !process.env.CI,
    timeout: 120_000,
    stdout: 'pipe',
  },
})
```

Key decisions per the research doc:

- `testDir: './e2e'` — separate from Vitest unit tests
- `forbidOnly: !!process.env.CI` — prevents `.only` in CI
- `workers: 1` in CI — avoids shared-state flakiness
- `reuseExistingServer` locally — fast iteration when `npm run dev` is already running

#### [x] 4. Add npm scripts to `package.json`

**File**: `frontend/package.json`

Add to `"scripts"`:

```json
"test:e2e": "playwright test",
"test:e2e:ui": "playwright test --ui",
"test:e2e:debug": "playwright test --debug"
```

#### [x] 5. Update `.gitignore`

**File**: `frontend/.gitignore`

Append:

```
# Playwright
playwright-report/
test-results/
```

#### [x] 6. Create `e2e/` directory

**Command**: `mkdir -p frontend/e2e`

### Success Criteria:

#### Automated Verification:

- [ ] `cd frontend && npx playwright --version` outputs a version
- [ ] `cd frontend && npx playwright test --list` runs without error (shows 0 tests initially)
- [ ] `npm run test:e2e` script exists in package.json

#### Manual Verification:

- [ ] `playwright-report/` and `test-results/` are in `.gitignore`
- [ ] No unintended changes to existing config files

**Implementation Note**: After completing this phase and all automated verification passes, pause here for manual confirmation from the human before proceeding to the next phase.
---

## Phase 2: OpenAPI Response Examples

### Overview

Add `example:` fields to `CreateEventResponse` properties so `@msw/source` can generate realistic mock responses.

### Changes Required:

#### [x] 1. Add examples to `CreateEventResponse`

**File**: `backend/src/main/resources/openapi/api.yaml`

Update `CreateEventResponse` properties to include `example:` fields:

```yaml
CreateEventResponse:
  type: object
  required:
    - eventToken
    - organizerToken
    - title
    - dateTime
    - expiryDate
  properties:
    eventToken:
      type: string
      format: uuid
      description: Public token for the event URL
      example: "a1b2c3d4-e5f6-7890-abcd-ef1234567890"
    organizerToken:
      type: string
      format: uuid
      description: Secret token for organizer access
      example: "f9e8d7c6-b5a4-3210-fedc-ba9876543210"
    title:
      type: string
      example: "Summer BBQ"
    dateTime:
      type: string
      format: date-time
      example: "2026-03-15T20:00:00+01:00"
    expiryDate:
      type: string
      format: date
      example: "2026-06-15"
```

### Success Criteria:

#### Automated Verification:

- [ ] `cd backend && ./mvnw compile` succeeds (OpenAPI codegen still works)
- [ ] `cd frontend && npm run generate:api` succeeds (TypeScript types regenerate)

#### Manual Verification:

- [ ] All response schema properties have `example:` fields

**Implementation Note**: After completing this phase and all automated verification passes, pause here for manual confirmation from the human before proceeding to the next phase.
---

## Phase 3: MSW Integration

### Overview

Set up `@msw/source` to read the OpenAPI spec and generate MSW handlers, and configure `@msw/playwright` to intercept network requests in E2E tests.

### Changes Required:

#### [x] 1. Create MSW setup helper

**File**: `frontend/e2e/msw-setup.ts`

```typescript
import { fromOpenApi } from '@msw/source'
import { createWorkerFixture } from '@msw/playwright'
import { test as base, expect } from '@playwright/test'
import path from 'node:path'
import { fileURLToPath } from 'node:url'

const __dirname = path.dirname(fileURLToPath(import.meta.url))
const specPath = path.resolve(__dirname, '../../backend/src/main/resources/openapi/api.yaml')

// Generate MSW handlers from the OpenAPI spec.
// These return example values defined in the spec by default.
const handlers = await fromOpenApi(specPath)

// Create a Playwright fixture that intercepts network requests via page.route()
// and delegates them to MSW handlers.
export const test = base.extend(createWorkerFixture(handlers))
export { expect }
```

This module:

- Reads the OpenAPI spec at test startup
- Generates MSW request handlers that return `example:` values by default
- Exports a `test` fixture with MSW network interception built in
- Tests import `{ test, expect }` from this file instead of `@playwright/test`

#### [x] 2. Verify MSW integration works

Write a minimal test in Phase 4 that uses the fixture — if the import chain works and a test passes, MSW is correctly configured.

### Success Criteria:

#### Automated Verification:

- [ ] `frontend/e2e/msw-setup.ts` type-checks (no TS errors)
- [ ] Import path to the OpenAPI spec resolves correctly

#### Manual Verification:

- [ ] MSW helper is clean and minimal

**Implementation Note**: After completing this phase and all automated verification passes, pause here for manual confirmation from the human before proceeding to the next phase.
---

## Phase 4: E2E Tests

### Overview

Write two test files: a smoke test for basic app functionality and a US-1 event creation flow test.

### Changes Required:

#### [x] 1. Smoke test

**File**: `frontend/e2e/smoke.spec.ts`

```typescript
import { test, expect } from './msw-setup'

test.describe('Smoke', () => {
  test('home page loads and shows branding', async ({ page }) => {
    await page.goto('/')
    await expect(page.getByRole('heading', { name: 'fete' })).toBeVisible()
  })

  test('home page has create event CTA', async ({ page }) => {
    await page.goto('/')
    await expect(page.getByRole('link', { name: /create event/i })).toBeVisible()
  })

  test('navigating to /create shows the creation form', async ({ page }) => {
    await page.goto('/')
    await page.getByRole('link', { name: /create event/i }).click()
    await expect(page).toHaveURL('/create')
    await expect(page.getByLabel(/title/i)).toBeVisible()
  })
})
```

#### [x] 2. US-1 event creation flow test
|
||||
**File**: `frontend/e2e/event-create.spec.ts`
|
||||
|
||||
```typescript
|
||||
import { test, expect } from './msw-setup'
|
||||
|
||||
test.describe('US-1: Create an event', () => {
|
||||
test('shows validation errors for empty required fields', async ({ page }) => {
|
||||
await page.goto('/create')
|
||||
|
||||
await page.getByRole('button', { name: /create event/i }).click()
|
||||
|
||||
await expect(page.getByText('Title is required.')).toBeVisible()
|
||||
await expect(page.getByText('Date and time are required.')).toBeVisible()
|
||||
await expect(page.getByText('Expiry date is required.')).toBeVisible()
|
||||
})
|
||||
|
||||
test('creates an event and redirects to stub page', async ({ page }) => {
|
||||
await page.goto('/create')
|
||||
|
||||
// Fill the form
|
||||
await page.getByLabel(/title/i).fill('Summer BBQ')
|
||||
await page.getByLabel(/description/i).fill('Bring your own drinks')
|
||||
await page.getByLabel(/date/i).first().fill('2026-04-15T18:00')
|
||||
await page.getByLabel(/location/i).fill('Central Park')
|
||||
await page.getByLabel(/expiry/i).fill('2026-06-15')
|
||||
|
||||
// Submit — MSW returns the OpenAPI example response
|
||||
await page.getByRole('button', { name: /create event/i }).click()
|
||||
|
||||
// Should redirect to the event stub page
|
||||
await expect(page).toHaveURL(/\/events\/.+/)
|
||||
await expect(page.getByText('Event created!')).toBeVisible()
|
||||
})
|
||||
|
||||
test('stores event data in localStorage after creation', async ({ page }) => {
|
||||
await page.goto('/create')
|
||||
|
||||
await page.getByLabel(/title/i).fill('Summer BBQ')
|
||||
await page.getByLabel(/date/i).first().fill('2026-04-15T18:00')
|
||||
await page.getByLabel(/expiry/i).fill('2026-06-15')
|
||||
|
||||
await page.getByRole('button', { name: /create event/i }).click()
|
||||
await expect(page).toHaveURL(/\/events\/.+/)
|
||||
|
||||
// Verify localStorage was populated
|
||||
const storage = await page.evaluate(() => {
|
||||
const raw = localStorage.getItem('fete_events')
|
||||
return raw ? JSON.parse(raw) : null
|
||||
})
|
||||
expect(storage).not.toBeNull()
|
||||
expect(storage).toEqual(
|
||||
expect.arrayContaining([
|
||||
expect.objectContaining({ title: 'Summer BBQ' }),
|
||||
]),
|
||||
)
|
||||
})
|
||||
|
||||
test('shows server error on API failure', async ({ page, network }) => {
|
||||
// Override the default MSW handler to return a 400 error
|
||||
await network.use(
|
||||
// Exact override syntax depends on @msw/playwright API —
|
||||
// may need adjustment based on actual package API
|
||||
)
|
||||
|
||||
await page.goto('/create')
|
||||
await page.getByLabel(/title/i).fill('Test')
|
||||
await page.getByLabel(/date/i).first().fill('2026-04-15T18:00')
|
||||
await page.getByLabel(/expiry/i).fill('2026-06-15')
|
||||
|
||||
await page.getByRole('button', { name: /create event/i }).click()
|
||||
|
||||
// Should show error message, not redirect
|
||||
await expect(page.getByRole('alert')).toBeVisible()
|
||||
})
|
||||
})
|
||||
```
|
||||
|
||||
**Note on the server error test:** The exact override syntax for `network.use()` depends on the `@msw/playwright` API. During implementation, this will need to be adapted to the actual package API. The pattern is: override the `POST /api/events` handler to return a 400/500 response.

### Success Criteria:

#### Automated Verification:
- [ ] `cd frontend && npm run test:e2e` passes — all tests green
- [ ] No TypeScript errors in test files

#### Manual Verification:
- [ ] Tests cover: home page rendering, navigation, form validation, successful creation flow, localStorage persistence
- [ ] Test names are descriptive and map to acceptance criteria

**Implementation Note**: After completing this phase and all automated verification passes, pause here for manual confirmation from the human before proceeding to the next phase.

---

## Phase 5: CI Integration

### Overview
Add a Playwright E2E test step to the Gitea Actions CI pipeline.

### Changes Required:

#### [x] 1. Add E2E job to CI workflow
**File**: `.gitea/workflows/ci.yaml`

Add a new job `frontend-e2e` after the existing `frontend-test` job:

```yaml
frontend-e2e:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4

    - name: Set up Node 24
      uses: actions/setup-node@v4
      with:
        node-version: 24

    - name: Install dependencies
      run: cd frontend && npm ci

    - name: Install Playwright browsers
      run: cd frontend && npx playwright install --with-deps chromium

    - name: Run E2E tests
      run: cd frontend && npm run test:e2e

    - name: Upload Playwright report
      uses: actions/upload-artifact@v4
      if: ${{ !cancelled() }}
      with:
        name: playwright-report
        path: frontend/playwright-report/
        retention-days: 30
```

#### [x] 2. Add E2E to the `needs` array of `build-and-publish`
**File**: `.gitea/workflows/ci.yaml`

Update the `build-and-publish` job:

```yaml
build-and-publish:
  needs: [backend-test, frontend-test, frontend-e2e]
```

This ensures Docker images are only published if E2E tests also pass.

### Success Criteria:

#### Automated Verification:
- [ ] CI YAML is valid (no syntax errors)
- [ ] `frontend-e2e` job uses `chromium` only (no full browser install)

#### Manual Verification:
- [ ] E2E job runs independently from backend-test (no unnecessary dependency)
- [ ] `build-and-publish` requires all three test jobs
- [ ] Report artifact is uploaded even on test failure (`!cancelled()`)

**Implementation Note**: After completing this phase and all automated verification passes, pause here for manual confirmation from the human before proceeding to the next phase.

---

## Testing Strategy

### E2E Tests (this plan):
- Smoke test: app loads, branding visible, navigation works
- US-1 happy path: fill form → submit → redirect → stub page
- US-1 validation: empty required fields show errors
- US-1 localStorage: event data persisted after creation
- US-1 error handling: API failure shows error message

### Existing Unit/Component Tests (unchanged):
- `useEventStorage.spec.ts` — 6 tests
- `EventCreateView.spec.ts` — 11 tests
- `EventStubView.spec.ts` — 8 tests

### Future:
- Each new user story adds its own E2E tests
- Page Object Model when test suite grows beyond 5-10 tests
- Cross-browser testing (Firefox/WebKit) as needed

## Performance Considerations

- Chromium-only keeps install time and test runtime low
- `reuseExistingServer` in local dev avoids restarting Vite per test run
- Single worker in CI prevents flakiness from parallel state issues
- MSW intercepts at network level — no real backend needed, fast test execution

## References

- Research: `docs/agents/research/2026-03-05-e2e-testing-playwright-vue3.md`
- OpenAPI spec: `backend/src/main/resources/openapi/api.yaml`
- Existing views: `frontend/src/views/EventCreateView.vue`, `EventStubView.vue`, `HomeView.vue`
- CI pipeline: `.gitea/workflows/ci.yaml`
- Vitest config: `frontend/vitest.config.ts`

322
.specify/memory/research/api-first-approach.md
Normal file
@@ -0,0 +1,322 @@
# Research Report: API-First Approach

**Date:** 2026-03-04
**Scope:** API-first development with Spring Boot backend and Vue 3 frontend
**Status:** Complete

## Context

The fete project needs a strategy for API design and implementation. Two fundamental approaches exist:

- **Code-first:** Write annotated Java controllers, generate OpenAPI spec from code (e.g., springdoc-openapi)
- **API-first (spec-first):** Write OpenAPI spec as YAML, generate server interfaces and client types from it

This report evaluates API-first for the fete stack (Spring Boot 3.5.x, Java 25, Vue 3, TypeScript).

## Why API-First

| Aspect | Code-First | API-First |
|--------|-----------|-----------|
| Source of truth | Java source code | OpenAPI YAML file |
| Parallel development | Backend must exist first | Frontend + backend from day one |
| Contract stability | Implicit, can drift | Explicit, version-controlled, reviewed |
| Spec review in PRs | Derived artifact | First-class reviewable diff |
| Runtime dependency | springdoc library at runtime | None (build-time only) |
| Hexagonal fit | Controllers define contract | Spec defines contract, controllers implement |

API-first aligns with the project statutes:

- **No vibe coding**: the spec forces deliberate API design before implementation.
- **Research → Spec → Test → Implement**: the OpenAPI spec IS the specification for the API layer.
- **Privacy**: no runtime documentation library needed (no springdoc serving endpoints).
- **KISS**: one YAML file is the single source of truth for both sides.

## Backend: openapi-generator-maven-plugin

### Tool Assessment

- **Project:** [OpenAPITools/openapi-generator](https://github.com/OpenAPITools/openapi-generator)
- **Current version:** 7.20.0 (released 2026-02-16)
- **GitHub stars:** ~22k
- **License:** Apache 2.0
- **Maintenance:** Active, frequent releases (monthly cadence)
- **Spring Boot 3.5.x compatibility:** Confirmed via `useSpringBoot3: true` (Jakarta EE namespace)
- **Java 25 compatibility:** No blocking issues reported for Java 21+

### Generator: `spring` with `interfaceOnly: true`

The `spring` generator offers two modes:

1. **`interfaceOnly: true`** — generates API interfaces and model classes only. You write controllers that implement the interfaces.
2. **`delegatePattern: true`** — generates controllers + delegate interfaces. You implement the delegates.

**Recommendation: `interfaceOnly: true`** — cleaner integration with hexagonal architecture. The generated interface is the port definition, the controller is the driving adapter.

### What Gets Generated

From a spec like:

```yaml
paths:
  /events:
    post:
      operationId: createEvent
      requestBody:
        content:
          application/json:
            schema:
              $ref: '#/components/schemas/CreateEventRequest'
      responses:
        '201':
          content:
            application/json:
              schema:
                $ref: '#/components/schemas/EventResponse'
```

The generator produces:

- `EventsApi.java` — interface with `@RequestMapping` annotations
- `CreateEventRequest.java` — POJO with Jackson annotations + Bean Validation
- `EventResponse.java` — POJO with Jackson annotations

You then write:

```java
@RestController
public class EventController implements EventsApi {
    private final CreateEventUseCase createEventUseCase;

    @Override
    public ResponseEntity<EventResponse> createEvent(CreateEventRequest request) {
        // Map DTO → domain command
        // Call use case
        // Map domain result → DTO
    }
}
```

### Recommended Plugin Configuration

```xml
<plugin>
  <groupId>org.openapitools</groupId>
  <artifactId>openapi-generator-maven-plugin</artifactId>
  <version>7.20.0</version>
  <executions>
    <execution>
      <goals>
        <goal>generate</goal>
      </goals>
      <configuration>
        <inputSpec>${project.basedir}/src/main/resources/openapi/api.yaml</inputSpec>
        <generatorName>spring</generatorName>
        <apiPackage>de.fete.adapter.in.web.api</apiPackage>
        <modelPackage>de.fete.adapter.in.web.model</modelPackage>
        <generateSupportingFiles>true</generateSupportingFiles>
        <supportingFilesToGenerate>ApiUtil.java</supportingFilesToGenerate>
        <configOptions>
          <interfaceOnly>true</interfaceOnly>
          <useSpringBoot3>true</useSpringBoot3>
          <useBeanValidation>true</useBeanValidation>
          <performBeanValidation>true</performBeanValidation>
          <openApiNullable>false</openApiNullable>
          <skipDefaultInterface>true</skipDefaultInterface>
          <useResponseEntity>true</useResponseEntity>
          <documentationProvider>none</documentationProvider>
          <annotationLibrary>none</annotationLibrary>
        </configOptions>
      </configuration>
    </execution>
  </executions>
</plugin>
```

Key options rationale:

| Option | Value | Why |
|--------|-------|-----|
| `interfaceOnly` | `true` | Only interfaces + models; controllers are yours |
| `useSpringBoot3` | `true` | Jakarta EE namespace (required for Spring Boot 3.x) |
| `useBeanValidation` | `true` | `@Valid`, `@NotNull` on parameters |
| `openApiNullable` | `false` | Avoids `jackson-databind-nullable` dependency |
| `skipDefaultInterface` | `true` | No default method stubs — forces full implementation |
| `documentationProvider` | `none` | No Swagger UI / springdoc annotations |
| `annotationLibrary` | `none` | Minimal annotations on generated code |

### Build Integration

- Runs in Maven's `generate-sources` phase (before compilation)
- Output: `target/generated-sources/openapi/` — already gitignored
- `mvn clean compile` always regenerates from spec
- No generated code in git — the spec is the source of truth

### Additional Dependencies

With `openApiNullable: false` and `annotationLibrary: none`, minimal additional dependencies are needed. `jakarta.validation-api` is already transitively provided by `spring-boot-starter-web`.

### Hexagonal Architecture Mapping

```
adapter.in.web/
├── api/        ← generated interfaces (EventsApi.java)
├── model/      ← generated DTOs (CreateEventRequest.java, EventResponse.java)
└── controller/ ← your implementations (EventController implements EventsApi)

application.port.in/
└── CreateEventUseCase.java

domain.model/
└── Event.java  ← clean domain object (can be a record)
```

Rules:
1. Generated DTOs exist ONLY in `adapter.in.web.model`
2. Domain objects are never exposed to the web layer
3. Controllers map between generated DTOs and domain objects
4. Mapping is manual (project is small enough; no MapStruct needed)

## Frontend: openapi-typescript + openapi-fetch

### Tool Comparison

| Tool | npm Weekly DL | Approach | Runtime | Active |
|------|--------------|----------|---------|--------|
| **openapi-typescript** | ~2.5M | Types only (.d.ts) | 0 kb | Yes |
| **openapi-fetch** | ~1.2M | Type-safe fetch wrapper | 6 kb | Yes |
| orval | ~828k | Full client codegen | Varies | Yes |
| @hey-api/openapi-ts | ~200-400k | Full client codegen | Varies | Yes (volatile API) |
| openapi-generator TS | ~500k | Full codegen (Java needed) | Heavy | Yes |
| swagger-typescript-api | ~43 | Full codegen | Varies | Declining |

### Recommendation: openapi-typescript + openapi-fetch

**Why this combination wins for fete:**

1. **Minimal footprint.** Types-only generation = zero generated runtime code. The `.d.ts` file disappears after TypeScript compilation.
2. **No Axios.** Uses native `fetch` — no unnecessary dependency.
3. **No phone home, no CDN.** Pure TypeScript types + a 6 kb fetch wrapper.
4. **Vue 3 Composition API fit.** Composables wrap `api.GET()`/`api.POST()` calls naturally.
5. **Actively maintained.** High download counts, regular releases, OpenAPI 3.0 + 3.1 support.
6. **Compile-time safety.** Wrong paths, missing parameters, wrong body types = TypeScript errors.

**Why NOT the alternatives:**

- **orval / hey-api:** Generate full runtime code (functions, classes). More than needed. Additional abstraction layer.
- **openapi-generator TypeScript:** Requires Java for generation. Produces verbose classes. Heavyweight.
- **swagger-typescript-api:** Declining maintenance. Not recommended for new projects.

### How It Works

#### Step 1: Generate Types

```bash
npx openapi-typescript ../backend/src/main/resources/openapi/api.yaml -o src/api/schema.d.ts
```

Produces a `.d.ts` file with `paths` and `components` interfaces that mirror the OpenAPI spec exactly.

#### Step 2: Create Client

```typescript
// src/api/client.ts
import createClient from "openapi-fetch";
import type { paths } from "./schema";

export const api = createClient<paths>({ baseUrl: "/api" });
```

#### Step 3: Use in Composables

```typescript
// src/composables/useEvent.ts
import { ref } from "vue";
import { api } from "@/api/client";

export function useEvent(eventId: string) {
  const event = ref(null);
  const error = ref(null);

  async function load() {
    const { data, error: err } = await api.GET("/events/{eventId}", {
      params: { path: { eventId } },
    });
    if (err) error.value = err;
    else event.value = data;
  }

  return { event, error, load };
}
```

Type safety guarantees:
- Path must exist in spec → TypeScript error if not
- Path parameters enforced → TypeScript error if missing
- Request body must match schema → TypeScript error if wrong
- Response `data` is typed as the 2xx response schema

### Build Integration

```json
{
  "scripts": {
    "generate:api": "openapi-typescript ../backend/src/main/resources/openapi/api.yaml -o src/api/schema.d.ts",
    "dev": "npm run generate:api && vite",
    "build": "npm run generate:api && vue-tsc && vite build"
  }
}
```

The generated `schema.d.ts` can be committed to git (it is a stable, deterministic output) or gitignored and regenerated on each build. For simplicity, committing it is pragmatic — it allows IDE support without running the generator first.

### Dependencies

```json
{
  "devDependencies": {
    "openapi-typescript": "^7.x"
  },
  "dependencies": {
    "openapi-fetch": "^0.13.x"
  }
}
```

Requirements: Node.js 20+, TypeScript 5.x, `"module": "ESNext"` + `"moduleResolution": "Bundler"` in tsconfig.

## End-to-End Workflow

```
1. WRITE/EDIT SPEC
   backend/src/main/resources/openapi/api.yaml
        │
        ├──── 2. BACKEND: mvnw compile
        │        → target/generated-sources/openapi/
        │            ├── de/fete/adapter/in/web/api/EventsApi.java
        │            └── de/fete/adapter/in/web/model/*.java
        │        → Compiler errors show what controllers need updating
        │
        └──── 3. FRONTEND: npm run generate:api
                 → frontend/src/api/schema.d.ts
                 → TypeScript errors show what composables/views need updating
```

On spec change, both sides get compile-time feedback. The spec is a **compile-time contract**.

## Open Questions

1. **Spec location sharing.** The spec lives in `backend/src/main/resources/openapi/`. The frontend references it via relative path (`../backend/...`). This works in a monorepo. Alternative: symlink or copy step. Relative path is simplest.

2. **Generated `schema.d.ts` — commit or gitignore?** Committing is pragmatic (IDE support without running generator). Gitignoring is purist (derived artifact). Recommend: commit it, regenerate during build to catch drift.

3. **Spec validation in CI.** The openapi-generator-maven-plugin validates the spec during build. Frontend side could add `openapi-typescript` as a build step. Both fail on invalid specs.

## Conclusion

API-first with `openapi-generator-maven-plugin` (backend) and `openapi-typescript` + `openapi-fetch` (frontend) is a strong fit for fete:

- Single source of truth (one YAML file)
- Compile-time contract enforcement on both sides
- Minimal dependencies (no Swagger UI, no Axios, no runtime codegen libraries)
- Clean hexagonal architecture integration
- Actively maintained, well-adopted tooling

216
.specify/memory/research/backpressure-agentic-coding.md
Normal file
@@ -0,0 +1,216 @@
---
date: 2026-03-04T01:40:21+01:00
git_commit: a55174b32333d0f46a55d94a50604344d1ba33f6
branch: master
topic: "Backpressure for Agentic Coding"
tags: [research, backpressure, agentic-coding, quality, tooling, hooks, static-analysis, archunit]
status: complete
---

# Research: Backpressure for Agentic Coding

## Research Question

What tools, methodologies, and patterns exist for implementing backpressure in agentic coding workflows? Which are applicable to the fete tech stack (Java 25, Spring Boot 3.5, Maven, Vue 3, TypeScript, Vitest)?

## Summary

Backpressure in agentic coding means: **automated feedback mechanisms that reject wrong output deterministically**, forcing the agent to self-correct before a human ever sees the result. The concept is borrowed from distributed systems (reactive streams, flow control) and applied to AI-assisted development.

The key insight from the literature: **90% deterministic, 10% agentic.** Encode constraints in the type system, linting rules, architecture tests, and test suites — not in prose instructions. The agent runs verification on its own output, sees failures, and fixes itself. Humans review only code that has already passed all automated gates.

### Core Sources

| Source | Author | Key Contribution |
|--------|--------|-----------------|
| [Don't waste your back pressure](https://ghuntley.com/pressure/) | Geoffrey Huntley | Coined "backpressure for agents." Feedback-driven quality, progressive delegation. |
| [If you don't engineer backpressure, you'll get slopped](https://jw.hn/engineering-backpressure) | JW | Verification hierarchy: types → linting → tests → agentic review. 90/10 rule. |
| [Context-Efficient Backpressure for Coding Agents](https://www.hlyr.dev/blog/context-efficient-backpressure) | HumanLayer | Output filtering, fail-fast, context window preservation. |
| [Claude Code Hooks Reference](https://code.claude.com/docs/en/hooks) | Anthropic | PostToolUse hooks for automated feedback after file edits. |
| [ArchUnit](https://www.archunit.org/) | TNG Technology Consulting | Architecture rules as unit tests. Hexagonal architecture enforcement. |

## Detailed Findings

### 1. The Backpressure Concept

In distributed systems, backpressure prevents upstream producers from overwhelming downstream consumers. Applied to agentic coding:

- **Producer:** The AI agent generating code
- **Consumer:** The quality gates (compiler, linter, tests, architecture rules)
- **Backpressure:** Automated rejection of output that doesn't pass gates

Geoffrey Huntley: *"If you aren't capturing your back-pressure then you are failing as a software engineer."*

The paradigm shift: instead of telling the agent what to do (prompt engineering), **engineer an environment where wrong outputs get rejected automatically** (backpressure engineering).

### 2. The Verification Hierarchy

JW's article establishes a strict ordering — deterministic first, agentic last:

```
Layer 1: Type System            (hardest constraint, compile-time)
Layer 2: Static Analysis        (linting rules, pattern enforcement)
Layer 3: Architecture Tests     (dependency rules, layer violations)
Layer 4: Unit/Integration Tests (behavioral correctness)
Layer 5: Agentic Review         (judgment calls — only after 1-4 pass)
```

**Critical rule:** If a constraint can be checked deterministically, it MUST be checked deterministically. Relying on agentic review for things a linter could catch is "building on sand."

**Context efficiency:** Don't dump rules into CLAUDE.md that could be expressed as type constraints, lint rules, or tests. Reserve documentation for architectural intent and domain knowledge that genuinely requires natural language.

### 3. Context-Efficient Output

HumanLayer's research on context window management for coding agents:

- **On success:** Show only `✓` — don't waste tokens on 200 lines of passing test output
- **On failure:** Show the full error — the agent needs the details to self-correct
- **Fail-fast:** Enable `--bail` / `-x` / `-failfast` — one failure at a time prevents context-switching between multiple bugs
- **Filter output:** Strip generic stack frames, timing info, and irrelevant details
**Anti-pattern:** Piping output to `/dev/null` or using `head -n 50` — this hides information the agent might need and can force repeated test runs.
|
||||
|
||||
### 4. Claude Code Hooks
|
||||
|
||||
Hooks are shell commands that execute automatically at specific points in Claude Code's lifecycle:
|
||||
|
||||
| Event | Trigger | Use Case |
|
||||
|-------|---------|----------|
|
||||
| `PreToolUse` | Before a tool runs | Block dangerous operations |
|
||||
| `PostToolUse` | After a tool completes | Run compile/lint/test checks |
|
||||
| `Stop` | Agent finishes response | Final validation |
|
||||
| `UserPromptSubmit` | User sends a prompt | Inject context |
|
||||
| `SessionStart` | Session begins | Setup checks |
|
||||
|
||||
**PostToolUse** is the primary backpressure mechanism: after every file edit, run deterministic checks and feed the result back to the agent.
|
||||
|
||||
**Configuration:** `.claude/settings.json` (project-level, committed) or `.claude/settings.local.json` (personal, gitignored).
|
||||
|
||||
**Hook format example:**
|
||||
|
||||
```json
|
||||
{
|
||||
"hooks": {
|
||||
"PostToolUse": [
|
||||
{
|
||||
"matcher": "Edit:*.java",
|
||||
"hooks": [
|
||||
{
|
||||
"type": "command",
|
||||
"command": "cd backend && ./mvnw compile -q 2>&1 || true"
|
||||
}
|
||||
]
|
||||
}
|
||||
]
|
||||
}
|
||||
}
|
||||
```
|
||||
|
||||
The hook output is fed back to the agent as context, enabling self-correction in the same conversation turn.
|
||||
|
||||
### 5. Applicable Tools for fete's Tech Stack
|
||||
|
||||
#### 5.1 Java / Maven Backend
|
||||
|
||||
**Checkstyle** (coding conventions)
|
||||
- Maven plugin: `maven-checkstyle-plugin`
|
||||
- Enforces formatting, naming, imports, Javadoc rules
|
||||
- Rulesets: Google Style (most widely adopted), Sun Style (legacy)
|
||||
- Fails build on violation when configured with `<failOnViolation>true</failOnViolation>`
|
||||
- Actively maintained, open source (LGPL-2.1)
|
||||
|
||||
**SpotBugs** (bug detection)
|
||||
- Maven plugin: `spotbugs-maven-plugin`
|
||||
- Successor to FindBugs — finds null pointer dereferences, infinite loops, resource leaks, concurrency bugs
|
||||
- Runs bytecode analysis (requires compilation first)
|
||||
- Configurable effort/threshold levels
|
||||
- Actively maintained, open source (LGPL-2.1)
|
||||
|
||||
**Error Prone** (compile-time bug detection)
|
||||
- Google's javac plugin — catches errors during compilation
|
||||
- Tighter feedback loop than SpotBugs (compile-time vs. post-compile)
|
||||
- Requires `maven-compiler-plugin` configuration with annotation processor
|
||||
- More invasive setup, Java version compatibility can lag
|
||||
- Actively maintained, open source (Apache-2.0)
|
||||
|
||||
**ArchUnit** (architecture enforcement)
|
||||
- Library for writing architecture rules as JUnit tests
|
||||
- Built-in support for onion/hexagonal architecture via `onionArchitecture()`
|
||||
- Dedicated hexagonal ruleset: [archunit-hexagonal](https://github.com/whiskeysierra/archunit-hexagonal)
|
||||
- Rules: "domain must not depend on adapters", "ports are interfaces", "no Spring annotations in domain"
|
||||
- Fails as a normal test — agent sees the failure and can fix it
|
||||
- Actively maintained, open source (Apache-2.0)
|
||||
|
||||
#### 5.2 Vue 3 / TypeScript Frontend
|
||||
|
||||
**TypeScript strict mode** (already configured)
|
||||
- `strict: true` via `@vue/tsconfig`
|
||||
- `noUncheckedIndexedAccess: true` (already in `tsconfig.app.json`)
|
||||
- `vue-tsc --build` for type-checking (already in `package.json` as `type-check`)
|
||||
|
||||
**ESLint + oxlint** (already configured)
|
||||
- ESLint with `@vue/eslint-config-typescript` (recommended rules)
|
||||
- oxlint as fast pre-pass (Rust-based, handles simple rules)
|
||||
- Custom ESLint rules can encode repeated agent mistakes
|
||||
|
||||
**Vitest** (already configured)
|
||||
- `--bail` flag available for fail-fast behavior
|
||||
- `--reporter=verbose` for detailed output on failure
|
||||
|
||||
### 6. Current State Analysis (fete project)
|
||||
|
||||
| Layer | Backend | Frontend |
|
||||
|-------|---------|----------|
|
||||
| Type System | Java 25 (strong, but no extra strictness configured) | TypeScript strict + `noUncheckedIndexedAccess` ✓ |
|
||||
| Static Analysis | **Nothing configured** | ESLint + oxlint + Prettier ✓ |
|
||||
| Architecture Tests | **Nothing configured** | N/A (flat structure) |
|
||||
| Unit Tests | JUnit 5 via `./mvnw test` ✓ | Vitest via `npm run test:unit` ✓ |
|
||||
| Claude Code Hooks | **Not configured** | **Not configured** |
|
||||
| Fail-fast | **Not configured** | **Not configured** |
|
||||
|
||||
**Gaps:** The backend has zero static analysis or architecture enforcement. Claude Code hooks don't exist yet. Neither side has fail-fast configured.
|
||||
|
||||
### 7. Evaluation: What to Implement
|
||||
|
||||
| Measure | Effort | Impact | Privacy OK | Maintained | Recommendation |
|
||||
|---------|--------|--------|------------|------------|----------------|
|
||||
| Claude Code Hooks (PostToolUse) | Low | High | Yes (local) | N/A (config) | **Immediate** |
|
||||
| Fail-fast + output filtering | Low | Medium | Yes (local) | N/A (config) | **Immediate** |
|
||||
| Checkstyle Maven plugin | Low | Medium | Yes (no network) | Yes (LGPL) | **Yes** |
|
||||
| SpotBugs Maven plugin | Low | Medium | Yes (no network) | Yes (LGPL) | **Yes** |
|
||||
| ArchUnit hexagonal tests | Medium | High | Yes (no network) | Yes (Apache) | **Yes** |
|
||||
| Error Prone | Medium | Medium | Yes (no network) | Yes (Apache) | **Defer** — overlaps with SpotBugs, more invasive setup, Java 25 compatibility uncertain |
| Custom ESLint rules | Low | Low-Medium | Yes (local) | N/A (project rules) | **As needed** — add rules when recurring agent mistakes are observed |
| MCP LSP Server | High | Medium | Yes (local) | Varies | **Defer** — experimental, high setup cost, unclear benefit vs. hooks |

### 8. Tool Compatibility Notes

**Java 25 compatibility:**

- Checkstyle: Confirmed support for Java 21+; Java 25 should work (runs on source, not bytecode)
- SpotBugs: Bytecode analysis — needs an ASM version that supports Java 25 classfiles. The latest SpotBugs (4.9.x) supports up to Java 24; Java 25 support may require a newer release. **Verify before adopting.**
- ArchUnit: Runs via JUnit, analyzes compiled classes. Same ASM dependency concern as SpotBugs. **Verify before adopting.**
- Error Prone: Tightly coupled to javac internals. Java 25 compatibility typically lags. **Higher risk.**

**Privacy compliance:** All recommended tools are offline-only. None phone home, none require external services. All are open source with permissive or copyleft licenses compatible with GPL.

## Decisions Required

| # | Decision | Options | Recommendation |
|---|----------|---------|----------------|
| 1 | Hooks in which settings file? | `.claude/settings.json` (project, committed) vs. `.claude/settings.local.json` (personal, gitignored) | **Project-level** — every agent user benefits |
| 2 | Checkstyle ruleset | Google Style vs. Sun Style vs. custom | **Google Style** — most widely adopted, well-documented |
| 3 | Include Error Prone in plan? | Yes (more coverage) vs. defer (simpler, overlap with SpotBugs) | **Defer** — Java 25 compatibility uncertain, overlaps with SpotBugs |

## References

- Geoffrey Huntley: [Don't waste your back pressure](https://ghuntley.com/pressure/)
- JW: [If you don't engineer backpressure, you'll get slopped](https://jw.hn/engineering-backpressure)
- HumanLayer: [Context-Efficient Backpressure for Coding Agents](https://www.hlyr.dev/blog/context-efficient-backpressure)
- Anthropic: [Claude Code Hooks Reference](https://code.claude.com/docs/en/hooks)
- Anthropic: [2026 Agentic Coding Trends Report](https://resources.anthropic.com/hubfs/2026%20Agentic%20Coding%20Trends%20Report.pdf)
- ArchUnit: [User Guide](https://www.archunit.org/userguide/html/000_Index.html)
- ArchUnit Hexagonal: [GitHub](https://github.com/whiskeysierra/archunit-hexagonal)
- SpotBugs: [Documentation](https://spotbugs.github.io/)
- Checkstyle: [Documentation](https://checkstyle.sourceforge.io/)
- Claude Code Hooks Guide: [Luiz Tanure](https://www.letanure.dev/blog/2025-08-06--claude-code-part-8-hooks-automated-quality-checks)
- lsp-mcp: [GitHub](https://github.com/jonrad/lsp-mcp)
107
.specify/memory/research/datetime-best-practices.md
Normal file
@@ -0,0 +1,107 @@
---
date: 2026-03-04T21:15:50+00:00
git_commit: b8421274b47c6d1778b83c6b0acb70fd82891e71
branch: master
topic: "Date/Time Handling Best Practices for the fete Stack"
tags: [research, datetime, java, postgresql, openapi, typescript]
status: complete
---

# Research: Date/Time Handling Best Practices

## Research Question

What are the best practices for handling dates and times across the full fete stack (Java 25 / Spring Boot 3.5.x / PostgreSQL / OpenAPI 3.1 / Vue 3 / TypeScript)?

## Summary

The project has two distinct date/time concepts: **event date/time** (when something happens) and **expiry date** (after which data is deleted). These map to different types at every layer. The recommendations align Java types, PostgreSQL column types, OpenAPI formats, and TypeScript representations into a consistent stack-wide approach.

## Detailed Findings

### Type Mapping Across the Stack

| Concept | Java | PostgreSQL | OpenAPI | TypeScript | Example |
|---------|------|------------|---------|------------|---------|
| Event date/time | `OffsetDateTime` | `timestamptz` | `string`, `format: date-time` | `string` | `2026-03-15T20:00:00+01:00` |
| Expiry date | `LocalDate` | `date` | `string`, `format: date` | `string` | `2026-06-15` |
| Audit timestamps (createdAt, etc.) | `OffsetDateTime` | `timestamptz` | `string`, `format: date-time` | `string` | `2026-03-04T14:22:00Z` |

### Event Date/Time: `OffsetDateTime` + `timestamptz`

**Why `OffsetDateTime`, not `LocalDateTime`:**

- PostgreSQL best practice explicitly recommends `timestamptz` over `timestamp` — the PostgreSQL wiki says ["don't use `timestamp`"](https://wiki.postgresql.org/wiki/Don't_Do_This). `timestamptz` maps naturally to `OffsetDateTime`.
- Hibernate 6 (Spring Boot 3.5.x) has native `OffsetDateTime` ↔ `timestamptz` support. `LocalDateTime` requires extra care to avoid silent timezone bugs at the JDBC driver level.
- An ISO 8601 string with offset (`2026-03-15T20:00:00+01:00`) is unambiguous in the API. A bare `LocalDateTime` string forces the client to guess the timezone.
- The OpenAPI `date-time` format and `openapi-generator` default to `OffsetDateTime` in Java — no custom type mappings needed.

**Why not `ZonedDateTime`:** It carries IANA zone IDs (e.g. `Europe/Berlin`), which add complexity without value for this use case, and has worse JDBC support than `OffsetDateTime`.

**How PostgreSQL stores it:** `timestamptz` does **not** store the timezone. It converts input to UTC and stores UTC. On retrieval, it converts to the session's timezone setting. The offset is preserved in the Java `OffsetDateTime` via the JDBC driver.

**Practical flow:** The frontend sends the offset based on the organizer's browser locale. The server stores UTC. Display-side conversion happens in the frontend.
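
A minimal sketch of that round trip from the client side (values are illustrative only):

```typescript
// Illustrative only: an ISO 8601 string with an explicit offset pins down
// one unambiguous instant, regardless of the reader's local timezone.
const eventDateTime = '2026-03-15T20:00:00+01:00' // as sent by the frontend

// toISOString() always renders the same instant in UTC, which is what
// `timestamptz` effectively stores.
const storedAsUtc = new Date(eventDateTime).toISOString()
console.log(storedAsUtc) // '2026-03-15T19:00:00.000Z'
```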

### Expiry Date: `LocalDate` + `date`

The expiry date is a calendar-date concept ("after which day should data be deleted"), not a point-in-time. A cleanup job runs periodically and deletes events where `expiryDate < today`. Sub-day precision adds no value and complicates the UX.
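
A convenient side effect of the `format: date` string representation (an observation, not part of any spec requirement here): ISO 8601 calendar dates compare correctly as plain strings, so an expiry check needs no parsing at all. A sketch:

```typescript
// Illustrative sketch: lexicographic order of 'YYYY-MM-DD' strings equals
// chronological order, so the `expiryDate < today` check works on strings.
function isExpired(expiryDate: string, today: string): boolean {
  return expiryDate < today
}

console.log(isExpired('2026-06-15', '2026-07-01')) // true
console.log(isExpired('2026-06-15', '2026-06-15')) // false (not deleted on its own day)
```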

### Jackson Serialization (Spring Boot 3.5.x)

Spring Boot 3.x auto-configures `jackson-datatype-jsr310` (JavaTimeModule) and disables `WRITE_DATES_AS_TIMESTAMPS` by default:

- `OffsetDateTime` serializes to `"2026-03-15T20:00:00+01:00"` (ISO 8601 string)
- `LocalDate` serializes to `"2026-06-15"`

No additional configuration is needed. For explicitness, this can be added to `application.properties`:

```properties
spring.jackson.serialization.write-dates-as-timestamps=false
```

### Hibernate 6 Configuration

With Hibernate 6, `OffsetDateTime` maps to `timestamptz` using the `NATIVE` timezone storage strategy by default on PostgreSQL. This can be made explicit:

```properties
spring.jpa.properties.hibernate.timezone.default_storage=NATIVE
```

This tells Hibernate to use the database's native `TIMESTAMP WITH TIME ZONE` type directly.

### OpenAPI Schema Definitions

```yaml
# Event date/time
eventDateTime:
  type: string
  format: date-time
  example: "2026-03-15T20:00:00+01:00"

# Expiry date
expiryDate:
  type: string
  format: date
  example: "2026-06-15"
```

**Code-generation mapping (defaults, no customization needed):**

| OpenAPI format | Java type (openapi-generator) | TypeScript type (openapi-typescript) |
|---------------|-------------------------------|--------------------------------------|
| `date-time` | `java.time.OffsetDateTime` | `string` |
| `date` | `java.time.LocalDate` | `string` |

### Frontend (TypeScript)

`openapi-typescript` generates `string` for both `format: date-time` and `format: date`. This is correct — JSON has no native date type, so dates travel as strings. Parsing to `Date` objects happens explicitly at the application boundary when needed (e.g. for display formatting).
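
A sketch of what parsing at the display boundary might look like. The `EventResponse` shape and field names are assumptions for illustration, not the project's generated types:

```typescript
// Hypothetical response shape; the real types come from openapi-typescript.
interface EventResponse {
  eventDateTime: string // format: date-time, e.g. '2026-03-15T20:00:00+01:00'
  expiryDate: string    // format: date, e.g. '2026-06-15'
}

// Parse only where a formatted value is needed; keep strings everywhere else.
function formatEventDateTime(event: EventResponse, locale: string): string {
  return new Intl.DateTimeFormat(locale, {
    dateStyle: 'full',
    timeStyle: 'short',
  }).format(new Date(event.eventDateTime))
}
```

The exact output depends on the runtime's locale data, which is why no canonical string is shown here.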

## Sources

- [PostgreSQL Wiki: Don't Do This](https://wiki.postgresql.org/wiki/Don't_Do_This) — recommends `timestamptz` over `timestamp`
- [PostgreSQL Docs: Date/Time Types](https://www.postgresql.org/docs/current/datatype-datetime.html)
- [Thorben Janssen: Hibernate 6 OffsetDateTime and ZonedDateTime](https://thorben-janssen.com/hibernate-6-offsetdatetime-and-zoneddatetime/)
- [Baeldung: OffsetDateTime Serialization With Jackson](https://www.baeldung.com/java-jackson-offsetdatetime)
- [Baeldung: Map Date Types With OpenAPI Generator](https://www.baeldung.com/openapi-map-date-types)
- [Baeldung: ZonedDateTime vs OffsetDateTime](https://www.baeldung.com/java-zoneddatetime-offsetdatetime)
- [Reflectoring: Handling Timezones in Spring Boot](https://reflectoring.io/spring-timezones/)
- [openapi-typescript documentation](https://openapi-ts.dev/)
273
.specify/memory/research/e2e-testing-playwright.md
Normal file
@@ -0,0 +1,273 @@

---
date: 2026-03-05T10:14:52+00:00
git_commit: ffea279b54ad84be09bd0e82b3ed9c89a95fc606
branch: master
topic: "End-to-End Testing for Vue 3 with Playwright"
tags: [research, e2e, playwright, testing, frontend]
status: complete
---

# Research: End-to-End Testing for Vue 3 with Playwright

## Research Question

How should end-to-end tests for the fete Vue 3 + Vite frontend be set up and structured using Playwright?

## Summary

Playwright is Vue 3's officially recommended E2E testing framework. It integrates with Vite projects through a `webServer` config block (no Vite plugin needed), supports Chromium/Firefox/WebKit under a single API, and is fully free, including parallelism. The fete project's existing vitest.config.ts already excludes `e2e/**`, making the integration path clean.

## Detailed Findings

### 1. Current Frontend Test Infrastructure

The project uses **Vitest 4.0.18** with jsdom for unit/component tests:

- **Config:** `frontend/vitest.config.ts` — merges with vite.config, uses the jsdom environment, bails on first failure
- **Exclusion:** Already excludes `e2e/**` from Vitest's test discovery (`vitest.config.ts:10`)
- **Existing tests:** 3 test files with ~25 tests total:
  - `src/composables/__tests__/useEventStorage.spec.ts` (6 tests)
  - `src/views/__tests__/EventCreateView.spec.ts` (11 tests)
  - `src/views/__tests__/EventStubView.spec.ts` (8 tests)
- **No E2E framework** is currently configured

### 2. Why Playwright

Vue's official testing guide ([vuejs.org/guide/scaling-up/testing](https://vuejs.org/guide/scaling-up/testing)) positions Playwright as the primary E2E recommendation. Key advantages over Cypress:

| Dimension | Playwright | Cypress |
|---|---|---|
| Browser support | Chromium, Firefox, WebKit | Chrome-family, Firefox (WebKit experimental) |
| Parallelism | Free, native | Requires paid Cypress Cloud |
| Architecture | Out-of-process (CDP/BiDi) | In-browser (same process) |
| Speed | 35-45% faster in parallel | Slower at scale |
| Pricing | 100% free, Apache 2.0 | Cloud features cost money |
| Privacy | No account, no cloud dependency | Cloud service integration |

Playwright aligns with fete's privacy constraints (no cloud dependency, no account required).

### 3. Playwright + Vite Integration

Playwright does **not** use a Vite plugin. Integration is purely through process management:

1. Playwright reads `webServer.command` and spawns the Vite dev server
2. Polls `webServer.url` until ready
3. Runs tests against `use.baseURL`
4. Kills the server after all tests finish

The existing Vite dev proxy (`/api` → `localhost:8080`) works transparently — E2E tests can hit the real backend or intercept via `page.route()` mocks.

Note: `@playwright/experimental-ct-vue` exists for component-level testing (mounting individual Vue components without a server), but it is still experimental and is a different category from E2E.

### 4. Installation

```bash
cd frontend
npm install --save-dev @playwright/test
npx playwright install --with-deps chromium
```

Using `npm init playwright@latest` generates scaffolding automatically, but for an existing project manual setup is cleaner.

### 5. Project Structure

```
frontend/
  playwright.config.ts       # Playwright config
  e2e/                       # E2E test directory
    home.spec.ts
    event-create.spec.ts
    event-view.spec.ts
    fixtures/                # shared test fixtures (optional)
    helpers/                 # page object models (optional)
  playwright-report/         # generated HTML report (gitignored)
  test-results/              # generated artifacts (gitignored)
```

The `e2e/` directory is already excluded from Vitest via `vitest.config.ts:10`.

### 6. Recommended playwright.config.ts

```typescript
import { defineConfig, devices } from '@playwright/test'

export default defineConfig({
  testDir: './e2e',
  fullyParallel: true,
  forbidOnly: !!process.env.CI,
  retries: process.env.CI ? 2 : 0,
  workers: process.env.CI ? 1 : undefined,
  reporter: process.env.CI ? 'github' : 'html',

  use: {
    baseURL: 'http://localhost:5173',
    trace: 'on-first-retry',
    screenshot: 'only-on-failure',
  },

  projects: [
    {
      name: 'chromium',
      use: { ...devices['Desktop Chrome'] },
    },
    // Uncomment for cross-browser coverage:
    // { name: 'firefox', use: { ...devices['Desktop Firefox'] } },
    // { name: 'webkit', use: { ...devices['Desktop Safari'] } },
  ],

  webServer: {
    command: 'npm run dev',
    url: 'http://localhost:5173',
    reuseExistingServer: !process.env.CI,
    timeout: 120_000,
    stdout: 'pipe',
  },
})
```

Key decisions:

- `testDir: './e2e'` — separates E2E from Vitest unit tests
- `forbidOnly: !!process.env.CI` — prevents `test.only` from shipping to CI
- `workers: process.env.CI ? 1 : undefined` — a single worker in CI avoids shared-state flakiness; locally all cores are used
- `reporter: 'github'` — GitHub Actions annotations in CI
- `command: 'npm run dev'` — runs `generate:api` first (via the existing npm script), then starts Vite
- `reuseExistingServer: !process.env.CI` — reuses a running dev server locally for fast iteration

### 7. package.json Scripts

```json
"test:e2e": "playwright test",
"test:e2e:ui": "playwright test --ui",
"test:e2e:debug": "playwright test --debug"
```

### 8. .gitignore Additions

```
playwright-report/
test-results/
```

### 9. TypeScript Configuration

The existing `tsconfig.app.json` excludes `src/**/__tests__/*`. Since E2E tests live in `e2e/` (outside `src/`), they are already excluded from the app build.

A separate `tsconfig` for E2E tests is not strictly required — Playwright's own TypeScript support handles it. If needed, a minimal `e2e/tsconfig.json` can extend `tsconfig.node.json`.
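
If such a config ever becomes necessary, a minimal sketch might look like this (the `include` glob is an assumption, not an existing file):

```json
{
  "extends": "../tsconfig.node.json",
  "include": ["./**/*.ts"]
}
```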

### 10. Vue-Specific Testing Patterns

**Router navigation:**

```typescript
await page.goto('/events/abc-123')
await page.waitForURL('/events/abc-123') // confirms SPA router resolved
```

**Waiting for reactive content (auto-retry):**

```typescript
await expect(page.getByRole('heading', { name: 'My Event' })).toBeVisible()
// Playwright auto-retries assertions for up to the configured timeout
```

**URL assertions:**

```typescript
await expect(page).toHaveURL(/\/events\/.+/)
```

**API mocking (for isolated E2E tests):**

```typescript
await page.route('/api/events/**', async (route) => {
  await route.fulfill({
    status: 200,
    contentType: 'application/json',
    body: JSON.stringify({ title: 'Test Event', date: '2026-04-01' }),
  })
})
```

**Locator strategy — prefer accessible locators:**

```typescript
page.getByRole('button', { name: 'RSVP' }) // best
page.getByLabel('Event Title')             // form fields
page.getByTestId('event-card')             // data-testid fallback
page.locator('.some-class')                // last resort
```

### 11. CI Integration

**GitHub Actions workflow:**

```yaml
- name: Install Playwright browsers
  run: npx playwright install --with-deps chromium
  # --with-deps installs OS-level libraries (libglib, libnss, etc.)
  # Specify 'chromium' to save ~2min vs installing all browsers

- name: Run E2E tests
  run: npx playwright test

- uses: actions/upload-artifact@v4
  if: ${{ !cancelled() }}
  with:
    name: playwright-report
    path: frontend/playwright-report/
    retention-days: 30
```

**Docker:** Use the official images `mcr.microsoft.com/playwright:v1.x.x-noble` (Ubuntu 24.04). Alpine is unsupported (browsers need glibc). Key flag: `--ipc=host` prevents Chromium memory exhaustion. The Playwright Docker image version must match the `@playwright/test` package version exactly.

For the fete project, E2E tests run as a separate CI step, not inside the app's Dockerfile.

### 12. Integration with Existing Backend

Two approaches for E2E tests:

1. **Mocked backend** (via `page.route()`): Fast, isolated, no backend dependency. Good for frontend-only testing.
2. **Real backend**: Start Spring Boot alongside Vite. Tests hit `/api` through the Vite proxy. More realistic, but requires Java in CI. Could use Docker Compose.

The Vite proxy config (`vite.config.ts:19-23`) already forwards `/api` to `localhost:8080`, so both approaches work without changes.

## Code References

- `frontend/vitest.config.ts:10` — E2E exclusion pattern already in place
- `frontend/vite.config.ts:19-23` — API proxy configuration for backend integration
- `frontend/package.json:8-9` — `dev` script runs `generate:api` before Vite
- `frontend/src/router/index.ts` — Route definitions (Home, Create, Event views)
- `frontend/src/api/client.ts` — openapi-fetch client using `/api` base URL
- `frontend/tsconfig.app.json` — App TypeScript config (excludes test files)

## Architecture Documentation

### Test Pyramid in fete

| Layer | Framework | Directory | Purpose |
|---|---|---|---|
| Unit | Vitest + jsdom | `src/**/__tests__/` | Composables, isolated logic |
| Component | Vitest + @vue/test-utils | `src/**/__tests__/` | Vue component behavior |
| E2E | Playwright (proposed) | `e2e/` | Full browser, user flows |
| Visual | browser-interactive-testing skill | `.agent-tests/` | Agent-driven screenshots |

### Decision Points for Implementation

1. **Start with Chromium only** — add Firefox/WebKit later if needed
2. **Use `npm run dev`** as the webServer command (includes API type generation)
3. **API mocking by default** — use `page.route()` for E2E isolation; full-stack tests are a separate concern
4. **`data-testid` attributes** on key interactive elements for stable selectors
5. **Page Object Model** recommended once the test suite grows beyond 5-10 tests

## Sources

- [Testing | Vue.js](https://vuejs.org/guide/scaling-up/testing) — official E2E recommendation
- [Installation | Playwright](https://playwright.dev/docs/intro)
- [webServer | Playwright](https://playwright.dev/docs/test-webserver) — Vite integration
- [CI Intro | Playwright](https://playwright.dev/docs/ci-intro)
- [Docker | Playwright](https://playwright.dev/docs/docker)
- [Cypress vs Playwright 2026 | BugBug](https://bugbug.io/blog/test-automation-tools/cypress-vs-playwright/)
- [Playwright vs Cypress | Katalon](https://katalon.com/resources-center/blog/playwright-vs-cypress)

## Decisions (2026-03-05)

- **Mocked backend only** — E2E tests use `page.route()` to mock API responses. No real Spring Boot backend in E2E.
- **Mocking stack:** `@msw/playwright` + `@msw/source` — reads the OpenAPI spec at runtime, generates MSW handlers, per-test overrides via `network.use()`.
- **US-1 flows first** — Event creation is the only implemented user story; E2E tests cover that flow.
- **No CI caching yet** — Playwright browser binaries are not cached; the CI runner needs reconfiguration first.
- **E2E tests are part of frontend tasks** — every frontend user story includes E2E test coverage going forward.
- **OpenAPI examples mandatory** — all response schemas in the OpenAPI spec must include `example:` fields (required for `@msw/source` mock generation).
215
.specify/memory/research/openapi-validation-pipeline.md
Normal file
@@ -0,0 +1,215 @@

---
date: "2026-03-04T22:27:37.933286+00:00"
git_commit: 91e566efea0cbf53ba06a29b63317b7435609bd8
branch: master
topic: "Automatic OpenAPI Validation Pipelines for Backpressure Hooks"
tags: [research, openapi, validation, hooks, backpressure, linting]
status: complete
---

# Research: Automatic OpenAPI Validation Pipelines

## Research Question

What automatic validation pipelines exist for OpenAPI specs that can be integrated into the current Claude Code backpressure hook setup, running after the OpenAPI spec has been modified?

## Summary

The project already has a PostToolUse hook system that runs backend compile checks and frontend lint/type-checks after Edit/Write operations. Adding OpenAPI spec validation requires a new hook script that triggers specifically when `api.yaml` is modified. Several CLI tools support OpenAPI 3.1.0 validation — **Redocly CLI** is the strongest fit given the existing Node.js toolchain, MIT license, active maintenance, and zero-config baseline.

## Current Backpressure Setup

### Hook Architecture (`.claude/settings.json`)

The project uses Claude Code hooks for automated quality gates:

| Hook Event | Trigger | Scripts |
|---|---|---|
| `PostToolUse` | `Edit\|Write` tool calls | `backend-compile-check.sh`, `frontend-check.sh` |
| `Stop` | Agent attempts to stop | `run-tests.sh` |

### How Hooks Work

Each hook script:

1. Reads JSON from stdin containing `tool_input.file_path`
2. Pattern-matches the file path to decide if it should run
3. Executes validation (compile, lint, type-check, test)
4. Returns JSON with either a success message or failure details
5. On failure: outputs `hookSpecificOutput` with error context (PostToolUse) or `{"decision":"block"}` (Stop)

### Existing Pattern for File Matching

```bash
# backend-compile-check.sh — matches Java files
case "$FILE_PATH" in
  */backend/src/*.java|backend/src/*.java) ;;
  *) exit 0 ;;
esac

# frontend-check.sh — matches TS/Vue files
case "$FILE_PATH" in
  */frontend/src/*.ts|*/frontend/src/*.vue|frontend/src/*.ts|frontend/src/*.vue) ;;
  *) exit 0 ;;
esac
```

An OpenAPI validation hook would use the same pattern:

```bash
case "$FILE_PATH" in
  */openapi/api.yaml|*/openapi/*.yaml) ;;
  *) exit 0 ;;
esac
```

### Existing OpenAPI Tooling in the Project

- **Backend:** `openapi-generator-maven-plugin` v7.20.0 generates Spring interfaces from `api.yaml` (`pom.xml:149-178`)
- **Frontend:** `openapi-typescript` v7.13.0 generates TypeScript types; `openapi-fetch` v0.17.0 provides a type-safe client
- **No validation/linting tools** are currently installed — no Redocly, Spectral, or other linter config exists

## Tool Evaluation

### Redocly CLI (`@redocly/cli`)

| Attribute | Value |
|---|---|
| OpenAPI 3.1 | Full support |
| Install | `npm install -g @redocly/cli` or `npx @redocly/cli@latest` |
| CLI | `redocly lint api.yaml` |
| License | MIT |
| Maintenance | Very active — latest v2.20.3 (2026-03-03), daily/weekly releases |
| GitHub | ~1.4k stars (Redocly ecosystem: 24k+ combined) |

**Checks:** Structural validity against the OAS schema, configurable linting rules (naming, descriptions, operation IDs, security), style/consistency enforcement. Built-in rulesets: `minimal`, `recommended`, `recommended-strict`. The zero-config baseline works immediately. Custom rules via `redocly.yaml`.

**Fit for this project:** Node.js is already in the toolchain (frontend). The `npx` form requires no permanent install. MIT license is compatible with GPL-3.0. The `@redocly/openapi-core` package is already present as a transitive dependency of `openapi-typescript` in `node_modules`.

### Spectral (`@stoplight/spectral-cli`)

| Attribute | Value |
|---|---|
| OpenAPI 3.1 | Full support (since v6.x) |
| Install | `npm install -g @stoplight/spectral-cli` |
| CLI | `spectral lint api.yaml` |
| License | Apache 2.0 |
| Maintenance | Active — latest v6.15.0 (2025-04-22), slower cadence |
| GitHub | ~3k stars |

**Checks:** Schema compliance, missing descriptions/tags/operationIds, contact/license metadata. Highly extensible custom rulesets via YAML/JS. Configurable severity levels.

**Fit for this project:** Well-established industry standard. Apache 2.0 is compatible with GPL. Less actively maintained than Redocly (10 months since the last release). The heavier custom ruleset system may be over-engineered for current needs.

### Vacuum (`daveshanley/vacuum`)

| Attribute | Value |
|---|---|
| OpenAPI 3.1 | Full support (via libopenapi) |
| Install | `brew install daveshanley/vacuum/vacuum` or Go binary |
| CLI | `vacuum lint api.yaml` |
| License | MIT |
| Maintenance | Active — latest release 2025-12-22 |
| GitHub | ~1k stars |

**Checks:** Structural validation, Spectral-compatible rulesets, OWASP security checks, naming conventions, descriptions/examples/tags. Single Go binary — no runtime dependencies.

**Fit for this project:** The zero-dependency binary is appealing for CI. However, it adds a non-Node.js tool dependency when the project already has Node.js. Spectral ruleset compatibility is a plus for portability.

### oasdiff (`oasdiff/oasdiff`)

| Attribute | Value |
|---|---|
| OpenAPI 3.1 | Beta |
| Install | `brew install oasdiff` or Go binary |
| CLI | `oasdiff breaking base.yaml revision.yaml` |
| License | Apache 2.0 |
| Maintenance | Active — latest v1.11.10 (2026-02-05) |
| GitHub | ~1.1k stars |

**Checks:** 300+ breaking-change detection rules (paths, parameters, schemas, security, headers, enums). Requires two spec versions to compare — not a standalone validator.

**Fit for this project:** A different category — it detects breaking changes between spec versions, not structural validity. Useful as a CI-only check comparing `HEAD~1` vs `HEAD`. OAS 3.1 support is still beta.

### Not Recommended

- **swagger-cli:** Abandoned, no OAS 3.1 support
- **IBM OpenAPI Validator:** Active, but opinionated IBM-specific rules add configuration overhead for no benefit

## Tool Comparison Matrix

| Tool | OAS 3.1 | License | Last Release | Stars | Runtime | Category |
|---|---|---|---|---|---|---|
| **Redocly CLI** | Full | MIT | 2026-03-03 | ~1.4k | Node.js | Lint + validate |
| **Spectral** | Full | Apache 2.0 | 2025-04-22 | ~3k | Node.js | Lint |
| **Vacuum** | Full | MIT | 2025-12-22 | ~1k | Go binary | Lint + validate |
| **oasdiff** | Beta | Apache 2.0 | 2026-02-05 | ~1.1k | Go binary | Breaking changes |

## Integration Pattern

### Hook Script Structure

An OpenAPI validation hook would follow the existing pattern in `.claude/hooks/`:

```bash
#!/usr/bin/env bash
set -euo pipefail

INPUT=$(cat)
FILE_PATH=$(echo "$INPUT" | python3 -c "import sys,json; print(json.load(sys.stdin).get('tool_input',{}).get('file_path',''))" 2>/dev/null || echo "")

# Only run for OpenAPI spec files
case "$FILE_PATH" in
  */openapi/*.yaml|*/openapi/*.yml) ;;
  *) exit 0 ;;
esac

cd "$CLAUDE_PROJECT_DIR/backend"

# Run validation
if OUTPUT=$(npx @redocly/cli@latest lint src/main/resources/openapi/api.yaml --format=stylish 2>&1); then
  echo '{"hookSpecificOutput":{"hookEventName":"PostToolUse","additionalContext":"✓ OpenAPI spec validation passed."}}'
else
  ESCAPED=$(echo "$OUTPUT" | python3 -c "import sys,json; print(json.dumps(sys.stdin.read()))")
  echo "{\"hookSpecificOutput\":{\"hookEventName\":\"PostToolUse\",\"additionalContext\":$ESCAPED}}"
fi
```

### Registration in `.claude/settings.json`

The hook would be added to the existing `PostToolUse` array alongside the compile and lint hooks:

```json
{
  "type": "command",
  "command": "\"$CLAUDE_PROJECT_DIR/.claude/hooks/openapi-validate.sh\"",
  "timeout": 120
}
```
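
For context, a sketch of how that entry might sit in the settings file. The surrounding structure follows the Claude Code hooks schema; the sibling entries are assumed from the hook table above rather than copied from the real file:

```json
{
  "hooks": {
    "PostToolUse": [
      {
        "matcher": "Edit|Write",
        "hooks": [
          { "type": "command", "command": "\"$CLAUDE_PROJECT_DIR/.claude/hooks/backend-compile-check.sh\"" },
          { "type": "command", "command": "\"$CLAUDE_PROJECT_DIR/.claude/hooks/frontend-check.sh\"" },
          { "type": "command", "command": "\"$CLAUDE_PROJECT_DIR/.claude/hooks/openapi-validate.sh\"", "timeout": 120 }
        ]
      }
    ]
  }
}
```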

### Configuration (Optional)

A `redocly.yaml` in the project root or `backend/` directory can customize rules:

```yaml
extends:
  - recommended

rules:
  operation-operationId: error
  tag-description: warn
  no-ambiguous-paths: error
```

## Code References

- `.claude/settings.json:1-32` — Hook configuration (PostToolUse + Stop events)
- `.claude/hooks/backend-compile-check.sh` — Java file detection pattern + compile check
- `.claude/hooks/frontend-check.sh` — TS/Vue file detection pattern + type-check + lint
- `.claude/hooks/run-tests.sh` — Stop hook with test execution and block/approve logic
- `backend/pom.xml:149-178` — openapi-generator-maven-plugin configuration
- `backend/src/main/resources/openapi/api.yaml` — The OpenAPI 3.1.0 spec to validate

## Open Questions

- Should the validation use a pinned version (`npx @redocly/cli@1.x.x`) or latest? Pinned is more reproducible; latest gets rule updates automatically.
- Should a `redocly.yaml` config be added immediately with the `recommended` ruleset, or should validation start zero-config (structural checks only) and add rules incrementally?
- Is breaking-change detection (oasdiff) desirable as a separate CI check, or is structural validation sufficient for now?
202
.specify/memory/research/rfc9457-problem-details.md
Normal file
@@ -0,0 +1,202 @@

---
date: 2026-03-04T21:15:50+00:00
git_commit: b8421274b47c6d1778b83c6b0acb70fd82891e71
branch: master
topic: "RFC 9457 Problem Details for HTTP API Error Responses"
tags: [research, error-handling, rfc9457, spring-boot, openapi]
status: complete
---

# Research: RFC 9457 Problem Details

## Research Question

How should the fete API structure error responses? What does RFC 9457 (Problem Details) specify, and how does it integrate with Spring Boot 3.5.x, OpenAPI 3.1, and openapi-fetch?

## Summary

RFC 9457 (successor to RFC 7807) defines a standard JSON format (`application/problem+json`) for machine-readable HTTP API errors. Spring Boot 3.x has first-class support via `ProblemDetail`, `ErrorResponseException`, and `ResponseEntityExceptionHandler`. The recommended approach is a single `@RestControllerAdvice` that handles all exceptions consistently — no `spring.mvc.problemdetails.enabled` property, no fallback to the legacy error format.

## Detailed Findings

### RFC 9457 Format

Standard fields:

| Field | Type | Description |
|-------|------|-------------|
| `type` | URI | Identifies the problem type. Defaults to `about:blank`. |
| `title` | string | Short, human-readable summary. Should not change between occurrences. |
| `status` | int | HTTP status code. |
| `detail` | string | Human-readable explanation specific to this occurrence. |
| `instance` | URI | Identifies the specific occurrence (e.g. correlation ID). |

Extension members (additional JSON properties) are explicitly permitted. This is the mechanism for validation errors, error codes, etc.
|
||||
|
||||
**Key rule:** With `type: "about:blank"`, the `title` must match the HTTP status phrase exactly. Use a custom `type` URI when providing a custom `title`.
|
||||
|
||||
### Spring Boot 3.x Built-in Support

- **`ProblemDetail`** — container class for the five standard fields plus a `properties` Map for extensions.
- **`ErrorResponseException`** — base class for custom exceptions that carry their own `ProblemDetail`.
- **`ResponseEntityExceptionHandler`** — `@ControllerAdvice` base class that handles all Spring MVC exceptions and renders them as `application/problem+json`.
- **`ProblemDetailJacksonMixin`** — automatically unwraps the `properties` Map as top-level JSON fields during serialization.

### Recommended Configuration

Use a single `@RestControllerAdvice` extending `ResponseEntityExceptionHandler`. Do **not** use the `spring.mvc.problemdetails.enabled` property.
```java
@RestControllerAdvice
public class GlobalExceptionHandler extends ResponseEntityExceptionHandler {
    // All Spring MVC exceptions are handled automatically.
    // Add @ExceptionHandler methods for domain exceptions here.
    // Add a catch-all for Exception.class to prevent the legacy error format.
}
```

Reasons to avoid the property-based approach:

1. No place to add custom `@ExceptionHandler` methods.
2. Having both the property and a custom `ResponseEntityExceptionHandler` bean causes a conflict.
3. The property ignores the `server.error.include-*` properties.
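A minimal catch-all sketch, assuming Spring Framework 6+ (`ProblemDetail`, `@ExceptionHandler`); the method name is illustrative, not from the source:

```java
// Sketch: catch-all so no exception falls through to the legacy error format.
// With the default "about:blank" type, the title must match the status phrase.
@ExceptionHandler(Exception.class)
ProblemDetail handleUnexpected(Exception ex) {
    ProblemDetail problem = ProblemDetail.forStatus(HttpStatus.INTERNAL_SERVER_ERROR);
    problem.setTitle("Internal Server Error");
    // Deliberately do not leak ex.getMessage() to clients for unexpected errors.
    return problem;
}
```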
### Validation Errors (Field-Level)

Spring deliberately does **not** include field-level validation errors in `ProblemDetail` by default (security rationale). Override `handleMethodArgumentNotValid`:

```java
@Override
protected ResponseEntity<Object> handleMethodArgumentNotValid(
        MethodArgumentNotValidException ex,
        HttpHeaders headers,
        HttpStatusCode status,
        WebRequest request) {

    ProblemDetail problemDetail = ex.getBody();
    problemDetail.setTitle("Validation Failed");
    problemDetail.setType(URI.create("urn:problem-type:validation-error"));

    List<Map<String, String>> fieldErrors = ex.getBindingResult()
            .getFieldErrors()
            .stream()
            .map(fe -> Map.of(
                    "field", fe.getField(),
                    "message", fe.getDefaultMessage()
            ))
            .toList();

    problemDetail.setProperty("fieldErrors", fieldErrors);
    return handleExceptionInternal(ex, problemDetail, headers, status, request);
}
```
Resulting response:

```json
{
  "type": "urn:problem-type:validation-error",
  "title": "Validation Failed",
  "status": 400,
  "detail": "Invalid request content.",
  "instance": "/api/events",
  "fieldErrors": [
    { "field": "title", "message": "must not be blank" },
    { "field": "expiryDate", "message": "must be a future date" }
  ]
}
```
### OpenAPI Schema Definition

```yaml
components:
  schemas:
    ProblemDetail:
      type: object
      properties:
        type:
          type: string
          format: uri
          default: "about:blank"
        title:
          type: string
        status:
          type: integer
        detail:
          type: string
        instance:
          type: string
          format: uri
      additionalProperties: true

    ValidationProblemDetail:
      allOf:
        - $ref: '#/components/schemas/ProblemDetail'
        - type: object
          properties:
            fieldErrors:
              type: array
              items:
                type: object
                properties:
                  field:
                    type: string
                  message:
                    type: string
                required:
                  - field
                  - message

  responses:
    BadRequest:
      description: Validation failed
      content:
        application/problem+json:
          schema:
            $ref: '#/components/schemas/ValidationProblemDetail'
    NotFound:
      description: Resource not found
      content:
        application/problem+json:
          schema:
            $ref: '#/components/schemas/ProblemDetail'
```

Use the media type `application/problem+json` in response definitions. Set `additionalProperties: true` on the base schema.
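Operations can then reference these shared responses; the path and operation below are illustrative (assumed), only the `$ref` mechanism is standard OpenAPI:

```yaml
paths:
  /api/events:
    post:
      responses:
        '400':
          $ref: '#/components/responses/BadRequest'
        '404':
          $ref: '#/components/responses/NotFound'
```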
### Frontend Consumption (openapi-fetch)

openapi-fetch uses a discriminated union for responses:

```typescript
const { data, error } = await client.POST('/api/events', { body: eventData })

if (error) {
  // `error` is typed from the OpenAPI error response schema
  console.log(error.title)       // "Validation Failed"
  console.log(error.fieldErrors) // [{ field: "title", message: "..." }]
  return
}

// `data` is the typed success response
```

The `error` object is already typed from the generated schema — no manual type assertions needed for defined error shapes.
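Outside openapi-fetch (e.g. when handling a raw `fetch` response manually), a small runtime guard can narrow an unknown body before use. This is an illustrative sketch, not part of any library:

```typescript
interface ProblemDetail {
  type?: string
  title?: string
  status?: number
  detail?: string
  instance?: string
  [ext: string]: unknown // RFC 9457 extension members
}

// Narrow an unknown JSON body to a Problem Details shape by checking
// that the standard fields, when present, have the right types.
function isProblemDetail(body: unknown): body is ProblemDetail {
  if (typeof body !== 'object' || body === null) return false
  const b = body as Record<string, unknown>
  return (
    (b.status === undefined || typeof b.status === 'number') &&
    (b.title === undefined || typeof b.title === 'string') &&
    (b.type === undefined || typeof b.type === 'string')
  )
}
```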
### Known Pitfalls

| Pitfall | Description | Mitigation |
|---------|-------------|------------|
| **Inconsistent formats** | Exceptions escaping to Spring Boot's `BasicErrorController` return the legacy format (`timestamp`, `error`, `path`), not Problem Details. | Add a catch-all `@ExceptionHandler(Exception.class)` in the `@RestControllerAdvice`. |
| **`server.error.include-*` ignored** | When Problem Details is active, these properties have no effect. | Control content via `ProblemDetail` directly. |
| **Validation errors hidden by default** | Spring returns only `"Invalid request content."` without field details. | Override `handleMethodArgumentNotValid` explicitly. |
| **Content negotiation** | `application/problem+json` is only returned when the client accepts it. `openapi-fetch` sends `Accept: application/json`, which Spring considers compatible. | No action needed for SPA clients. |
| **`about:blank` semantics** | With `type: "about:blank"`, `title` must match the HTTP status phrase. Custom titles require a custom `type` URI. | Use `urn:problem-type:*` URIs for custom problem types. |
## Sources

- [RFC 9457 Full Text](https://www.rfc-editor.org/rfc/rfc9457.html)
- [Spring Framework Docs: Error Responses](https://docs.spring.io/spring-framework/reference/web/webmvc/mvc-ann-rest-exceptions.html)
- [Swagger Blog: Problem Details RFC 9457](https://swagger.io/blog/problem-details-rfc9457-doing-api-errors-well/)
- [Baeldung: Returning Errors Using ProblemDetail](https://www.baeldung.com/spring-boot-return-errors-problemdetail)
- [SivaLabs: Spring Boot 3 Error Reporting](https://www.sivalabs.in/blog/spring-boot-3-error-reporting-using-problem-details/)
- [Spring Boot Issue #43850: Render global errors as Problem Details](https://github.com/spring-projects/spring-boot/issues/43850)
404
.specify/memory/research/sans-serif-fonts.md
Normal file
@@ -0,0 +1,404 @@
# Research: Modern Sans-Serif Fonts for Mobile-First PWA

**Date:** 2026-03-04
**Context:** Selecting a primary typeface for fete, a privacy-focused PWA for event announcements and RSVPs. The font must be open-source with permissive licensing, modern geometric/neo-grotesque style, excellent mobile readability, and strong weight range.

---

## Executive Summary

Based on research of 9 candidate fonts, **7 meet all requirements** for self-hosting and redistribution under permissive licenses. Two do not qualify:

- **General Sans**: Proprietary (ITF Free Font License, non-commercial personal use only)
- **Satoshi**: License ambiguity; sources conflict between full OFL and ITF restrictions

The remaining **7 fonts are fully open-source** and suitable for the project:

| Font | License | Design | Weights | Status |
|------|---------|--------|---------|--------|
| Inter | OFL-1.1 | Neo-grotesque, humanist | 9 (Thin–Black) | ✅ Recommended |
| Plus Jakarta Sans | OFL-1.1 | Geometric, modern | 7 (ExtraLight–ExtraBold) | ✅ Recommended |
| Outfit | OFL-1.1 | Geometric | 9 (Thin–Black) | ✅ Recommended |
| Space Grotesk | OFL-1.1 | Neo-grotesque, distinctive | 5 (Light–Bold) | ✅ Recommended |
| Manrope | OFL-1.1 | Geometric, humanist | 7 (ExtraLight–ExtraBold) | ✅ Recommended |
| DM Sans | OFL-1.1 | Geometric, low-contrast | 9 (Thin–Black) | ✅ Recommended |
| Sora | OFL-1.1 | Geometric | 8 (Thin–ExtraBold) | ✅ Recommended |

---
## Detailed Candidate Analysis

### 1. Inter

**License:** SIL Open Font License 1.1 (OFL-1.1)

**Download Location:**
- **Official:** https://github.com/rsms/inter (releases page)
- **NPM:** `inter-ui` package
- **Homebrew:** `font-inter`
- **Official CDN:** https://rsms.me/inter/inter.css

**Design Character:** Neo-grotesque with humanist touches. High x-height for enhanced legibility on screens. Geometric letterforms with open apertures. Designed specifically for UI and on-screen use.

**Available Weights:** 9 weights from Thin (100) to Black (900), each with an italic variant. Also available as a variable font with a weight axis.

**Notable Apps/Products:**
- **UX/Design tools:** Figma, Notion, Pixar Presto
- **OS:** Elementary OS, GNOME
- **Web:** GitLab, ISO, Mozilla, NASA
- **Why:** Chosen by product teams valuing clarity and modern minimalism; a default choice for UI designers

**Mobile Suitability:** Excellent. Specifically engineered for screen readability with a high x-height and open apertures. Performs well at 14–16px body text.

**Distinctive Strengths:**
- Purpose-built for digital interfaces
- Exceptional clarity in dense UI layouts
- Strong brand identity (recognizable across tech products)
- Extensive OpenType features

**Weakness:** Very widely used; less distinctive for a bold brand identity. Considered the "safe" choice.

---

### 2. Plus Jakarta Sans

**License:** SIL Open Font License 1.1 (OFL-1.1)

**Download Location:**
- **Official Repository:** https://github.com/tokotype/PlusJakartaSans
- **Source Files:** `sources/`, compiled fonts in the `fonts/` directory
- **Designer Contact:** mail@tokotype.com (Gumpita Rahayu, Tokotype)
- **Latest Version:** 2.7.1 (May 2023)
- **Build Command:** `gftools builder sources/builder.yaml`

**Design Character:** Geometric sans-serif with modern, clean-cut forms. Inspired by Neuzeit Grotesk and Futura but with contemporary refinement. Slightly taller x-height for clear spacing between caps and lowercase. Open counters and balanced spacing for legibility across sizes. **Bold, distinctive look** with personality.

**Available Weights:** 7 weights from ExtraLight (200) to ExtraBold (800), with matching italics.

**Notable Apps/Products:**
- Original commission: the Jakarta Provincial Government's "+Jakarta City of Collaboration" program (2020)
- Now widely used in branding projects, modern web design, and UI design
- **Why:** Chosen for a fresh, contemporary feel without generic blandness

**Mobile Suitability:** Excellent. Designed with mobile UI in mind. Clean letterforms render crisply on small screens.

**Distinctive Strengths:**
- **Stylistic sets:** Sharp, Straight, and Swirl variants add design flexibility
- Modern geometric with Indonesian design heritage (a unique perspective)
- Excellent for branding (not generic like Inter)
- OpenType features for sophisticated typography
- Well-maintained, active development

**Weakness:** Less ubiquitous than Inter; smaller ecosystem of design tool integrations.

---
### 3. Outfit

**License:** SIL Open Font License 1.1 (OFL-1.1)

**Download Location:**
- **Official Repository:** https://github.com/Outfitio/Outfit-Fonts
- **Fonts Directory:** `/fonts` in the repository
- **OFL Text:** `OFL.txt` in the repository
- **Designer:** Rodrigo Fuenzalida (originally for Outfit.io)
- **Status:** Repository archived Feb 25, 2025 (read-only; downloads remain accessible)

**Design Character:** Geometric sans-serif with a warm, friendly appearance. Generous x-height, balanced spacing, low contrast. Nine static weights plus a variable font with a weight axis.

**Available Weights:** 9 weights from Thin (100) to Black (900). No italics.

**Notable Apps/Products:**
- Originally created for the Outfit.io platform
- Good readability for body text (≈16px) and strong headline presence
- Used in design tools (Figma integration)

**Mobile Suitability:** Good. Geometric forms and generous spacing work well on mobile, though the low contrast requires careful pairing with sufficient color contrast.

**Distinctive Strengths:**
- Full weight range (Thin–Black)
- Variable font option for granular weight control
- Stylistic alternates and rare ligatures
- Accessible character set

**Weakness:** Archived repository; no active development. The low-contrast design requires careful color/contrast pairing for accessibility.

---

### 4. Space Grotesk

**License:** SIL Open Font License 1.1 (OFL-1.1)

**Download Location:**
- **Official Repository:** https://github.com/floriankarsten/space-grotesk
- **Official Site:** https://fonts.floriankarsten.com/space-grotesk
- **Designer:** Florian Karsten
- **Variants:** Variable font with a weight axis

**Design Character:** Neo-grotesque with a distinctive personality. Proportional variant of Space Mono (Colophon Foundry, 2016). Retains Space Mono's idiosyncratic details while optimizing for improved readability. Bold, tech-forward aesthetic with the monowidth heritage visible in the character design.

**Available Weights:** 5 weights — Light (300), Regular (400), Medium (500), SemiBold (600), Bold (700). No italics.

**Notable Apps/Products:**
- Modern tech companies and startups seeking distinctive branding
- Popular in neo-brutalist web design
- Good for headlines and display use

**Mobile Suitability:** Good. Clean proportional forms with distinctive character. Works well for headlines; body text at 14px+ is readable.

**Distinctive Strengths:**
- **Bold, tech-forward personality** — immediately recognizable
- Heritage from Space Mono adds character without looking dated
- Excellent OpenType support (old-style figures, tabular figures, superscript, subscript, fractions, stylistic alternates)
- **Extended language coverage:** Latin, Vietnamese, Pinyin, Central/South-Eastern European

**Weakness:** Only 5 weights (the lightest is 300; no Thin). Fewer weight options than Inter or DM Sans.

---
### 5. Manrope

**License:** SIL Open Font License 1.1 (OFL-1.1)

**Download Location:**
- **Official Repository:** https://github.com/sharanda/manrope
- **Designer:** Mikhail Sharanda (2018), converted to a variable font by Mirko Velimirovic (2019)
- **Alternative Sources:** Multiple community forks on GitHub, npm packages
- **NPM Packages:** `@fontsource/manrope`, `@fontsource-variable/manrope`

**Design Character:** Modern geometric sans-serif blending geometric shapes with humanistic elements. Semi-condensed structure with a clean, contemporary feel. Geometric digits, packed with OpenType features.

**Available Weights:** 7 weights from ExtraLight (200) to ExtraBold (800). Available as a variable font.

**Notable Apps/Products:**
- Widely used in modern design systems
- Popular in product/SaaS design
- Good for both UI and branding

**Mobile Suitability:** Excellent. Clean geometric design with humanistic touches; balanced proportions work well on mobile.

**Distinctive Strengths:**
- Geometric + humanistic blend (best of both worlds)
- Well-maintained, active project
- Variable font available
- Strong design community around the font

**Weakness:** None significant; a solid all-around choice.

---

### 6. DM Sans

**License:** SIL Open Font License 1.1 (OFL-1.1)

**Download Location:**
- **Official Repository:** https://github.com/googlefonts/dm-fonts
- **Releases Page:** https://github.com/googlefonts/dm-fonts/releases
- **Google Fonts:** https://fonts.google.com/specimen/DM+Sans
- **Design:** Commissioned from Colophon Foundry; creative direction by MultiAdaptor & DeepMind

**Design Character:** Low-contrast geometric sans-serif optimized for text at smaller sizes. Part of the DM suite (DM Sans, DM Serif Text, DM Serif Display). Designed for clarity and efficiency in dense typography.

**Available Weights:** 9 weights from Thin (100) to Black (900), each with an italic variant.

**Notable Apps/Products:**
- DeepMind products (by commission)
- Tech companies favoring geometric clarity
- Professional and commercial products requiring text legibility

**Mobile Suitability:** Excellent. Specifically optimized for small text sizes; low contrast minimizes visual noise on mobile screens.

**Distinctive Strengths:**
- **Optimized for small text** — superior at 12–14px
- Full weight range (Thin–Black)
- Active Google Fonts maintenance
- Italic variants (unlike Outfit or Space Grotesk)
- Commissioned by a reputable team (DeepMind)

**Weakness:** Low contrast may feel less bold on headlines without careful sizing/weight adjustment.

---
### 7. Sora

**License:** SIL Open Font License 1.1 (OFL-1.1)

**Download Location:**
- **Official Repository:** https://github.com/sora-xor/sora-font
- **GitHub Releases:** Direct TTF/OTF downloads available
- **NPM Packages:** `@fontsource/sora`, `@fontsource-variable/sora`
- **Original Purpose:** Custom typeface for the SORA decentralized autonomous economy

**Design Character:** Geometric sans-serif with a contemporary, clean aesthetic. Available as both static fonts and a variable font. Designed as a branding solution for decentralized systems.

**Available Weights:** 8 weights from Thin (100) to ExtraBold (800), each with an italic variant. Variable font available.

**Notable Apps/Products:**
- Sora (XOR) decentralized projects
- Crypto/blockchain projects using modern typography
- Web3 products seeking distinctive branding

**Mobile Suitability:** Good. Clean geometric forms render well on mobile; italics available for emphasis.

**Distinctive Strengths:**
- Full weight range with italics
- Variable font option
- Designed for digital-first branding
- GitHub-native distribution

**Weakness:** Less established than Inter or DM Sans in mainstream product design; smaller ecosystem.

---

## Rejected Candidates

### General Sans

**Status:** ❌ Does not meet licensing requirements

**License:** ITF Free Font License (proprietary, non-commercial personal use only)

**Why Rejected:** This is a **paid commercial font** distributed by the Indian Type Foundry (not open-source). The ITF Free Font License permits personal use only; commercial use requires a separate paid license. It does not meet the "open-source with permissive license" requirement.

**Designer:** Frode Helland (published by Indian Type Foundry)

---

### Satoshi

**Status:** ⚠️ License ambiguity — conflicting sources

**Documented License:**
- Some sources claim the SIL Open Font License (OFL-1.1)
- Other sources indicate an ITF Free Font License (personal use only), similar to General Sans

**Design:** Swiss-style modernist sans-serif (Light to Black, 5–10 weights)

**Download:** Fontshare (Indian Type Foundry's free font service)

**Why Not Recommended:** The license status is unclear. While Fontshare advertises "free for personal and commercial use," the font's origin (Indian Type Foundry) and conflicting license documentation create uncertainty. For a privacy-focused project with clear open-source requirements, Satoshi's ambiguous licensing creates unnecessary legal risk. Better alternatives with unambiguous OFL-1.1 licensing are available.

**Recommendation:** If clarity is needed, contact Fontshare/ITF directly. For now, exclude it from consideration to reduce licensing complexity.

---
## Comparative Table: Qualified Fonts

| Metric | Inter | Plus Jakarta Sans | Outfit | Space Grotesk | Manrope | DM Sans | Sora |
|--------|-------|-------------------|--------|---------------|---------|---------|------|
| **License** | OFL-1.1 | OFL-1.1 | OFL-1.1 | OFL-1.1 | OFL-1.1 | OFL-1.1 | OFL-1.1 |
| **Weights** | 9 | 7 | 9 | 5 | 7 | 9 | 8 |
| **Italics** | ✅ Yes | ✅ Yes | ❌ No | ❌ No | ❌ No | ✅ Yes | ✅ Yes |
| **Variable Font** | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |
| **Design** | Neo-grotesque | Geometric | Geometric | Neo-grotesque | Geo + Humanist | Geometric | Geometric |
| **Personality** | Generic/Safe | Bold/Fresh | Warm/Friendly | Tech-Forward | Balanced | Efficient/Clean | Contemporary |
| **Mobile Text** | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐⭐ | ⭐⭐⭐⭐ |
| **Distinctiveness** | Low | High | Medium | High | High | Medium | Medium |
| **Ecosystem** | Very Large | Growing | Medium | Growing | Growing | Large | Small |
| **Active Dev** | ✅ Yes | ✅ Yes | ❌ Archived | ✅ Yes | ✅ Yes | ✅ Yes | ✅ Yes |

---
## Recommendations

### For Bold App-Native Branding

**Primary Choice: Plus Jakarta Sans**

**Rationale:**
- Fully open-source (OFL-1.1) with unambiguous licensing
- Bold, modern geometric aesthetic suitable for app branding
- Stylistic sets (Sharp, Straight, Swirl) provide design flexibility
- Well-maintained by Tokotype with a clear development history
- Strong presence in modern UI/web design
- Excellent mobile readability with thoughtful character spacing
- Indonesian design heritage adds a unique perspective (not generic)

**Alternative: Space Grotesk**

If you prefer an **even more distinctive character:**
- Neo-grotesque with a tech-forward personality
- Smaller weight range (5 weights) but a strong identity
- Popular in contemporary design circles
- Good for headlines; pair with a more neutral font for body text if needed

---

### For Safe, Professional UI

**Primary Choice: Inter or DM Sans**

**Inter if:**
- Maximum ecosystem and tool support is desired
- Designing for broad recognition and trust
- The team is already familiar with Inter (widespread in tech)

**DM Sans if:**
- The emphasis is on small-text legibility (optimized for 12–14px)
- You prefer italic variants
- You want active maintenance from the Google Fonts community

---

### For a Balanced Approach

**Manrope**

- Geometric + humanistic blend (versatile)
- Excellent mobile performance
- Strong weight range (7 weights)
- An underrated choice; often overlooked for bolder options but delivers polish

---
## Implementation Notes for Self-Hosting

All recommended fonts can be self-hosted:

1. **Download:** Clone the repository or download from the releases page
2. **Generate Web Formats:** Use FontForge, FontTools, or online converters to generate WOFF2 (required for modern browsers)
3. **CSS:** Include via `@font-face` with local file paths
4. **License:** Include `LICENSE.txt` or `OFL.txt` in the distribution

Example self-hosted CSS:

```css
@font-face {
  font-family: 'Plus Jakarta Sans';
  src: url('/fonts/PlusJakartaSans-Regular.woff2') format('woff2');
  font-weight: 400;
  font-display: swap;
}
```
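Optionally, the self-hosted WOFF2 can be preloaded so first paint does not wait on font discovery; the path below is the same illustrative path used in the CSS example:

```html
<link rel="preload" href="/fonts/PlusJakartaSans-Regular.woff2"
      as="font" type="font/woff2" crossorigin>
```

Note that `crossorigin` is required on font preloads even for same-origin files; without it the browser fetches the font twice.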
---

## Privacy Considerations

All selected fonts are self-hosted open-source projects with no telemetry, no external CDN dependencies, and no tracking. Fully compliant with the project's privacy-first principles.

---

## Conclusion

**Inter, Plus Jakarta Sans, and Space Grotesk** are the strongest candidates. The choice depends on brand positioning:

- **Generic + Safe → Inter**
- **Bold + Modern → Plus Jakarta Sans**
- **Tech-Forward + Distinctive → Space Grotesk**

All seven recommended fonts meet the strict licensing, openness, mobile readability, and weight-range requirements. Any of them is viable; the decision is primarily aesthetic.

---

## Sources

- [Inter Font GitHub Repository](https://github.com/rsms/inter)
- [Plus Jakarta Sans GitHub Repository](https://github.com/tokotype/PlusJakartaSans)
- [Outfit Fonts GitHub Repository](https://github.com/Outfitio/Outfit-Fonts)
- [Space Grotesk GitHub Repository](https://github.com/floriankarsten/space-grotesk)
- [Manrope GitHub Repository](https://github.com/sharanda/manrope)
- [DM Fonts GitHub Repository](https://github.com/googlefonts/dm-fonts)
- [Sora Font GitHub Repository](https://github.com/sora-xor/sora-font)
- [SIL Open Font License](https://openfontlicense.org/)
- [Google Fonts (reference)](https://fonts.google.com)
- [Fontshare (reference)](https://www.fontshare.com)
166
.specify/scripts/bash/check-prerequisites.sh
Executable file
@@ -0,0 +1,166 @@
#!/usr/bin/env bash

# Consolidated prerequisite checking script
#
# This script provides unified prerequisite checking for the Spec-Driven Development workflow.
# It replaces functionality previously spread across multiple scripts.
#
# Usage: ./check-prerequisites.sh [OPTIONS]
#
# OPTIONS:
#   --json              Output in JSON format
#   --require-tasks     Require tasks.md to exist (for implementation phase)
#   --include-tasks     Include tasks.md in AVAILABLE_DOCS list
#   --paths-only        Only output path variables (no validation)
#   --help, -h          Show help message
#
# OUTPUTS:
#   JSON mode:   {"FEATURE_DIR":"...", "AVAILABLE_DOCS":["..."]}
#   Text mode:   FEATURE_DIR:... \n AVAILABLE_DOCS: \n ✓/✗ file.md
#   Paths only:  REPO_ROOT: ... \n BRANCH: ... \n FEATURE_DIR: ... etc.

set -e
# Parse command line arguments
JSON_MODE=false
REQUIRE_TASKS=false
INCLUDE_TASKS=false
PATHS_ONLY=false

for arg in "$@"; do
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --require-tasks)
            REQUIRE_TASKS=true
            ;;
        --include-tasks)
            INCLUDE_TASKS=true
            ;;
        --paths-only)
            PATHS_ONLY=true
            ;;
        --help|-h)
            cat << 'EOF'
Usage: check-prerequisites.sh [OPTIONS]

Consolidated prerequisite checking for the Spec-Driven Development workflow.

OPTIONS:
  --json              Output in JSON format
  --require-tasks     Require tasks.md to exist (for implementation phase)
  --include-tasks     Include tasks.md in AVAILABLE_DOCS list
  --paths-only        Only output path variables (no prerequisite validation)
  --help, -h          Show this help message

EXAMPLES:
  # Check task prerequisites (plan.md required)
  ./check-prerequisites.sh --json

  # Check implementation prerequisites (plan.md + tasks.md required)
  ./check-prerequisites.sh --json --require-tasks --include-tasks

  # Get feature paths only (no validation)
  ./check-prerequisites.sh --paths-only

EOF
            exit 0
            ;;
        *)
            echo "ERROR: Unknown option '$arg'. Use --help for usage information." >&2
            exit 1
            ;;
    esac
done
# Source common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Get feature paths and validate branch
eval $(get_feature_paths)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1

# If paths-only mode, output paths and exit (supports JSON + paths-only combined)
if $PATHS_ONLY; then
    if $JSON_MODE; then
        # Minimal JSON paths payload (no validation performed)
        printf '{"REPO_ROOT":"%s","BRANCH":"%s","FEATURE_DIR":"%s","FEATURE_SPEC":"%s","IMPL_PLAN":"%s","TASKS":"%s"}\n' \
            "$REPO_ROOT" "$CURRENT_BRANCH" "$FEATURE_DIR" "$FEATURE_SPEC" "$IMPL_PLAN" "$TASKS"
    else
        echo "REPO_ROOT: $REPO_ROOT"
        echo "BRANCH: $CURRENT_BRANCH"
        echo "FEATURE_DIR: $FEATURE_DIR"
        echo "FEATURE_SPEC: $FEATURE_SPEC"
        echo "IMPL_PLAN: $IMPL_PLAN"
        echo "TASKS: $TASKS"
    fi
    exit 0
fi
# Validate required directories and files
if [[ ! -d "$FEATURE_DIR" ]]; then
    echo "ERROR: Feature directory not found: $FEATURE_DIR" >&2
    echo "Run /speckit.specify first to create the feature structure." >&2
    exit 1
fi

if [[ ! -f "$IMPL_PLAN" ]]; then
    echo "ERROR: plan.md not found in $FEATURE_DIR" >&2
    echo "Run /speckit.plan first to create the implementation plan." >&2
    exit 1
fi

# Check for tasks.md if required
if $REQUIRE_TASKS && [[ ! -f "$TASKS" ]]; then
    echo "ERROR: tasks.md not found in $FEATURE_DIR" >&2
    echo "Run /speckit.tasks first to create the task list." >&2
    exit 1
fi
# Build list of available documents
docs=()

# Always check these optional docs
[[ -f "$RESEARCH" ]] && docs+=("research.md")
[[ -f "$DATA_MODEL" ]] && docs+=("data-model.md")

# Check contracts directory (only if it exists and has files)
if [[ -d "$CONTRACTS_DIR" ]] && [[ -n "$(ls -A "$CONTRACTS_DIR" 2>/dev/null)" ]]; then
    docs+=("contracts/")
fi

[[ -f "$QUICKSTART" ]] && docs+=("quickstart.md")

# Include tasks.md if requested and it exists
if $INCLUDE_TASKS && [[ -f "$TASKS" ]]; then
    docs+=("tasks.md")
fi
# Output results
|
||||
if $JSON_MODE; then
|
||||
# Build JSON array of documents
|
||||
if [[ ${#docs[@]} -eq 0 ]]; then
|
||||
json_docs="[]"
|
||||
else
|
||||
json_docs=$(printf '"%s",' "${docs[@]}")
|
||||
json_docs="[${json_docs%,}]"
|
||||
fi
|
||||
|
||||
printf '{"FEATURE_DIR":"%s","AVAILABLE_DOCS":%s}\n' "$FEATURE_DIR" "$json_docs"
|
||||
else
|
||||
# Text output
|
||||
echo "FEATURE_DIR:$FEATURE_DIR"
|
||||
echo "AVAILABLE_DOCS:"
|
||||
|
||||
# Show status of each potential document
|
||||
check_file "$RESEARCH" "research.md"
|
||||
check_file "$DATA_MODEL" "data-model.md"
|
||||
check_dir "$CONTRACTS_DIR" "contracts/"
|
||||
check_file "$QUICKSTART" "quickstart.md"
|
||||
|
||||
if $INCLUDE_TASKS; then
|
||||
check_file "$TASKS" "tasks.md"
|
||||
fi
|
||||
fi
|
||||
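The JSON array construction used for AVAILABLE_DOCS can be exercised on its own. A minimal sketch (the document names here are made up for illustration):

```shell
# Build a JSON string array from a bash array, as the script does for AVAILABLE_DOCS
docs=("research.md" "data-model.md" "tasks.md")
json_docs=$(printf '"%s",' "${docs[@]}")   # one quoted, comma-suffixed entry per element
json_docs="[${json_docs%,}]"               # strip the trailing comma, wrap in brackets
echo "$json_docs"                          # ["research.md","data-model.md","tasks.md"]
```

Note that, as in the script itself, the entries are not JSON-escaped, so a name containing a double quote would produce invalid JSON; the fixed set of document names keeps this safe here.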
156
.specify/scripts/bash/common.sh
Executable file
@@ -0,0 +1,156 @@
#!/usr/bin/env bash
# Common functions and variables for all scripts

# Get repository root, with fallback for non-git repositories
get_repo_root() {
    if git rev-parse --show-toplevel >/dev/null 2>&1; then
        git rev-parse --show-toplevel
    else
        # Fall back to script location for non-git repos
        local script_dir="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
        (cd "$script_dir/../../.." && pwd)
    fi
}

# Get current branch, with fallback for non-git repositories
get_current_branch() {
    # First check if SPECIFY_FEATURE environment variable is set
    if [[ -n "${SPECIFY_FEATURE:-}" ]]; then
        echo "$SPECIFY_FEATURE"
        return
    fi

    # Then check git if available
    if git rev-parse --abbrev-ref HEAD >/dev/null 2>&1; then
        git rev-parse --abbrev-ref HEAD
        return
    fi

    # For non-git repos, try to find the latest feature directory
    local repo_root=$(get_repo_root)
    local specs_dir="$repo_root/specs"

    if [[ -d "$specs_dir" ]]; then
        local latest_feature=""
        local highest=0

        for dir in "$specs_dir"/*; do
            if [[ -d "$dir" ]]; then
                local dirname=$(basename "$dir")
                if [[ "$dirname" =~ ^([0-9]{3})- ]]; then
                    local number=${BASH_REMATCH[1]}
                    number=$((10#$number))
                    if [[ "$number" -gt "$highest" ]]; then
                        highest=$number
                        latest_feature=$dirname
                    fi
                fi
            fi
        done

        if [[ -n "$latest_feature" ]]; then
            echo "$latest_feature"
            return
        fi
    fi

    echo "main"  # Final fallback
}

# Check if we have git available
has_git() {
    git rev-parse --show-toplevel >/dev/null 2>&1
}

check_feature_branch() {
    local branch="$1"
    local has_git_repo="$2"

    # For non-git repos, we can't enforce branch naming but still provide output
    if [[ "$has_git_repo" != "true" ]]; then
        echo "[specify] Warning: Git repository not detected; skipped branch validation" >&2
        return 0
    fi

    if [[ ! "$branch" =~ ^[0-9]{3}- ]]; then
        echo "ERROR: Not on a feature branch. Current branch: $branch" >&2
        echo "Feature branches should be named like: 001-feature-name" >&2
        return 1
    fi

    return 0
}

get_feature_dir() { echo "$1/specs/$2"; }

# Find feature directory by numeric prefix instead of exact branch match
# This allows multiple branches to work on the same spec (e.g., 004-fix-bug, 004-add-feature)
find_feature_dir_by_prefix() {
    local repo_root="$1"
    local branch_name="$2"
    local specs_dir="$repo_root/specs"

    # Extract numeric prefix from branch (e.g., "004" from "004-whatever")
    if [[ ! "$branch_name" =~ ^([0-9]{3})- ]]; then
        # If branch doesn't have numeric prefix, fall back to exact match
        echo "$specs_dir/$branch_name"
        return
    fi

    local prefix="${BASH_REMATCH[1]}"

    # Search for directories in specs/ that start with this prefix
    local matches=()
    if [[ -d "$specs_dir" ]]; then
        for dir in "$specs_dir"/"$prefix"-*; do
            if [[ -d "$dir" ]]; then
                matches+=("$(basename "$dir")")
            fi
        done
    fi

    # Handle results
    if [[ ${#matches[@]} -eq 0 ]]; then
        # No match found - return the branch name path (will fail later with clear error)
        echo "$specs_dir/$branch_name"
    elif [[ ${#matches[@]} -eq 1 ]]; then
        # Exactly one match - perfect!
        echo "$specs_dir/${matches[0]}"
    else
        # Multiple matches - this shouldn't happen with proper naming convention
        echo "ERROR: Multiple spec directories found with prefix '$prefix': ${matches[*]}" >&2
        echo "Please ensure only one spec directory exists per numeric prefix." >&2
        echo "$specs_dir/$branch_name"  # Return something to avoid breaking the script
    fi
}

get_feature_paths() {
    local repo_root=$(get_repo_root)
    local current_branch=$(get_current_branch)
    local has_git_repo="false"

    if has_git; then
        has_git_repo="true"
    fi

    # Use prefix-based lookup to support multiple branches per spec
    local feature_dir=$(find_feature_dir_by_prefix "$repo_root" "$current_branch")

    cat <<EOF
REPO_ROOT='$repo_root'
CURRENT_BRANCH='$current_branch'
HAS_GIT='$has_git_repo'
FEATURE_DIR='$feature_dir'
FEATURE_SPEC='$feature_dir/spec.md'
IMPL_PLAN='$feature_dir/plan.md'
TASKS='$feature_dir/tasks.md'
RESEARCH='$feature_dir/research.md'
DATA_MODEL='$feature_dir/data-model.md'
QUICKSTART='$feature_dir/quickstart.md'
CONTRACTS_DIR='$feature_dir/contracts'
EOF
}

check_file() { [[ -f "$1" ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
check_dir() { [[ -d "$1" && -n $(ls -A "$1" 2>/dev/null) ]] && echo "  ✓ $2" || echo "  ✗ $2"; }
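The numeric-prefix matching that common.sh leans on throughout (in get_current_branch, check_feature_branch, and find_feature_dir_by_prefix) can be sketched in isolation; the branch name below is hypothetical, chosen with a leading zero to show why the `10#` base prefix matters:

```shell
#!/usr/bin/env bash
# Extract the 3-digit feature number from a branch name, forcing base-10 arithmetic
branch="010-fix-octal-bug"
if [[ "$branch" =~ ^([0-9]{3})- ]]; then
    prefix="${BASH_REMATCH[1]}"   # "010"
    number=$((10#$prefix))        # 10, not 8 (a leading zero would otherwise mean octal)
fi
echo "$prefix $number"            # 010 10
```

Without `10#`, `$((010))` evaluates to 8, which is exactly the bug the scripts guard against when comparing feature numbers.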
313
.specify/scripts/bash/create-new-feature.sh
Executable file
@@ -0,0 +1,313 @@
#!/usr/bin/env bash

set -e

JSON_MODE=false
SHORT_NAME=""
BRANCH_NUMBER=""
ARGS=()
i=1
while [ $i -le $# ]; do
    arg="${!i}"
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --short-name)
            if [ $((i + 1)) -gt $# ]; then
                echo 'Error: --short-name requires a value' >&2
                exit 1
            fi
            i=$((i + 1))
            next_arg="${!i}"
            # Check if the next argument is another option (starts with --)
            if [[ "$next_arg" == --* ]]; then
                echo 'Error: --short-name requires a value' >&2
                exit 1
            fi
            SHORT_NAME="$next_arg"
            ;;
        --number)
            if [ $((i + 1)) -gt $# ]; then
                echo 'Error: --number requires a value' >&2
                exit 1
            fi
            i=$((i + 1))
            next_arg="${!i}"
            if [[ "$next_arg" == --* ]]; then
                echo 'Error: --number requires a value' >&2
                exit 1
            fi
            BRANCH_NUMBER="$next_arg"
            ;;
        --help|-h)
            echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>"
            echo ""
            echo "Options:"
            echo "  --json               Output in JSON format"
            echo "  --short-name <name>  Provide a custom short name (2-4 words) for the branch"
            echo "  --number N           Specify branch number manually (overrides auto-detection)"
            echo "  --help, -h           Show this help message"
            echo ""
            echo "Examples:"
            echo "  $0 'Add user authentication system' --short-name 'user-auth'"
            echo "  $0 'Implement OAuth2 integration for API' --number 5"
            exit 0
            ;;
        *)
            ARGS+=("$arg")
            ;;
    esac
    i=$((i + 1))
done

FEATURE_DESCRIPTION="${ARGS[*]}"
if [ -z "$FEATURE_DESCRIPTION" ]; then
    echo "Usage: $0 [--json] [--short-name <name>] [--number N] <feature_description>" >&2
    exit 1
fi

# Trim whitespace and validate description is not empty (e.g., user passed only whitespace)
FEATURE_DESCRIPTION=$(echo "$FEATURE_DESCRIPTION" | xargs)
if [ -z "$FEATURE_DESCRIPTION" ]; then
    echo "Error: Feature description cannot be empty or contain only whitespace" >&2
    exit 1
fi

# Function to find the repository root by searching for existing project markers
find_repo_root() {
    local dir="$1"
    while [ "$dir" != "/" ]; do
        if [ -d "$dir/.git" ] || [ -d "$dir/.specify" ]; then
            echo "$dir"
            return 0
        fi
        dir="$(dirname "$dir")"
    done
    return 1
}

# Function to get highest number from specs directory
get_highest_from_specs() {
    local specs_dir="$1"
    local highest=0

    if [ -d "$specs_dir" ]; then
        for dir in "$specs_dir"/*; do
            [ -d "$dir" ] || continue
            dirname=$(basename "$dir")
            number=$(echo "$dirname" | grep -o '^[0-9]\+' || echo "0")
            number=$((10#$number))
            if [ "$number" -gt "$highest" ]; then
                highest=$number
            fi
        done
    fi

    echo "$highest"
}

# Function to get highest number from git branches
get_highest_from_branches() {
    local highest=0

    # Get all branches (local and remote)
    branches=$(git branch -a 2>/dev/null || echo "")

    if [ -n "$branches" ]; then
        while IFS= read -r branch; do
            # Clean branch name: remove leading markers and remote prefixes
            clean_branch=$(echo "$branch" | sed 's/^[* ]*//; s|^remotes/[^/]*/||')

            # Extract feature number if branch matches pattern ###-*
            if echo "$clean_branch" | grep -q '^[0-9]\{3\}-'; then
                number=$(echo "$clean_branch" | grep -o '^[0-9]\{3\}' || echo "0")
                number=$((10#$number))
                if [ "$number" -gt "$highest" ]; then
                    highest=$number
                fi
            fi
        done <<< "$branches"
    fi

    echo "$highest"
}

# Function to check existing branches (local and remote) and return next available number
check_existing_branches() {
    local specs_dir="$1"

    # Fetch all remotes to get latest branch info (suppress errors if no remotes)
    git fetch --all --prune 2>/dev/null || true

    # Get highest number from ALL branches (not just matching short name)
    local highest_branch=$(get_highest_from_branches)

    # Get highest number from ALL specs (not just matching short name)
    local highest_spec=$(get_highest_from_specs "$specs_dir")

    # Take the maximum of both
    local max_num=$highest_branch
    if [ "$highest_spec" -gt "$max_num" ]; then
        max_num=$highest_spec
    fi

    # Return next number
    echo $((max_num + 1))
}

# Function to clean and format a branch name
clean_branch_name() {
    local name="$1"
    echo "$name" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//'
}

# Resolve repository root. Prefer git information when available, but fall back
# to searching for repository markers so the workflow still functions in repositories that
# were initialised with --no-git.
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"

if git rev-parse --show-toplevel >/dev/null 2>&1; then
    REPO_ROOT=$(git rev-parse --show-toplevel)
    HAS_GIT=true
else
    REPO_ROOT="$(find_repo_root "$SCRIPT_DIR")"
    if [ -z "$REPO_ROOT" ]; then
        echo "Error: Could not determine repository root. Please run this script from within the repository." >&2
        exit 1
    fi
    HAS_GIT=false
fi

cd "$REPO_ROOT"

SPECS_DIR="$REPO_ROOT/specs"
mkdir -p "$SPECS_DIR"

# Function to generate branch name with stop word filtering and length filtering
generate_branch_name() {
    local description="$1"

    # Common stop words to filter out
    local stop_words="^(i|a|an|the|to|for|of|in|on|at|by|with|from|is|are|was|were|be|been|being|have|has|had|do|does|did|will|would|should|could|can|may|might|must|shall|this|that|these|those|my|your|our|their|want|need|add|get|set)$"

    # Convert to lowercase and split into words
    local clean_name=$(echo "$description" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/ /g')

    # Filter words: remove stop words and words shorter than 3 chars (unless they're uppercase acronyms in original)
    local meaningful_words=()
    for word in $clean_name; do
        # Skip empty words
        [ -z "$word" ] && continue

        # Keep words that are NOT stop words AND (length >= 3 OR are potential acronyms)
        if ! echo "$word" | grep -qiE "$stop_words"; then
            if [ ${#word} -ge 3 ]; then
                meaningful_words+=("$word")
            elif echo "$description" | grep -q "\b${word^^}\b"; then
                # Keep short words if they appear as uppercase in original (likely acronyms)
                meaningful_words+=("$word")
            fi
        fi
    done

    # If we have meaningful words, use first 3-4 of them
    if [ ${#meaningful_words[@]} -gt 0 ]; then
        local max_words=3
        if [ ${#meaningful_words[@]} -eq 4 ]; then max_words=4; fi

        local result=""
        local count=0
        for word in "${meaningful_words[@]}"; do
            if [ $count -ge $max_words ]; then break; fi
            if [ -n "$result" ]; then result="$result-"; fi
            result="$result$word"
            count=$((count + 1))
        done
        echo "$result"
    else
        # Fallback to original logic if no meaningful words found
        local cleaned=$(clean_branch_name "$description")
        echo "$cleaned" | tr '-' '\n' | grep -v '^$' | head -3 | tr '\n' '-' | sed 's/-$//'
    fi
}

# Generate branch name
if [ -n "$SHORT_NAME" ]; then
    # Use provided short name, just clean it up
    BRANCH_SUFFIX=$(clean_branch_name "$SHORT_NAME")
else
    # Generate from description with smart filtering
    BRANCH_SUFFIX=$(generate_branch_name "$FEATURE_DESCRIPTION")
fi

# Determine branch number
if [ -z "$BRANCH_NUMBER" ]; then
    if [ "$HAS_GIT" = true ]; then
        # Check existing branches on remotes
        BRANCH_NUMBER=$(check_existing_branches "$SPECS_DIR")
    else
        # Fall back to local directory check
        HIGHEST=$(get_highest_from_specs "$SPECS_DIR")
        BRANCH_NUMBER=$((HIGHEST + 1))
    fi
fi

# Force base-10 interpretation to prevent octal conversion (e.g., 010 → 8 in octal, but should be 10 in decimal)
FEATURE_NUM=$(printf "%03d" "$((10#$BRANCH_NUMBER))")
BRANCH_NAME="${FEATURE_NUM}-${BRANCH_SUFFIX}"

# GitHub enforces a 244-byte limit on branch names
# Validate and truncate if necessary
MAX_BRANCH_LENGTH=244
if [ ${#BRANCH_NAME} -gt $MAX_BRANCH_LENGTH ]; then
    # Calculate how much we need to trim from suffix
    # Account for: feature number (3) + hyphen (1) = 4 chars
    MAX_SUFFIX_LENGTH=$((MAX_BRANCH_LENGTH - 4))

    # Truncate suffix at word boundary if possible
    TRUNCATED_SUFFIX=$(echo "$BRANCH_SUFFIX" | cut -c1-$MAX_SUFFIX_LENGTH)
    # Remove trailing hyphen if truncation created one
    TRUNCATED_SUFFIX=$(echo "$TRUNCATED_SUFFIX" | sed 's/-$//')

    ORIGINAL_BRANCH_NAME="$BRANCH_NAME"
    BRANCH_NAME="${FEATURE_NUM}-${TRUNCATED_SUFFIX}"

    >&2 echo "[specify] Warning: Branch name exceeded GitHub's 244-byte limit"
    >&2 echo "[specify] Original: $ORIGINAL_BRANCH_NAME (${#ORIGINAL_BRANCH_NAME} bytes)"
    >&2 echo "[specify] Truncated to: $BRANCH_NAME (${#BRANCH_NAME} bytes)"
fi

if [ "$HAS_GIT" = true ]; then
    if ! git checkout -b "$BRANCH_NAME" 2>/dev/null; then
        # Check if branch already exists
        if git branch --list "$BRANCH_NAME" | grep -q .; then
            >&2 echo "Error: Branch '$BRANCH_NAME' already exists. Please use a different feature name or specify a different number with --number."
            exit 1
        else
            >&2 echo "Error: Failed to create git branch '$BRANCH_NAME'. Please check your git configuration and try again."
            exit 1
        fi
    fi
else
    >&2 echo "[specify] Warning: Git repository not detected; skipped branch creation for $BRANCH_NAME"
fi

FEATURE_DIR="$SPECS_DIR/$BRANCH_NAME"
mkdir -p "$FEATURE_DIR"

TEMPLATE="$REPO_ROOT/.specify/templates/spec-template.md"
SPEC_FILE="$FEATURE_DIR/spec.md"
if [ -f "$TEMPLATE" ]; then cp "$TEMPLATE" "$SPEC_FILE"; else touch "$SPEC_FILE"; fi

# Set the SPECIFY_FEATURE environment variable for the current session
export SPECIFY_FEATURE="$BRANCH_NAME"

if $JSON_MODE; then
    printf '{"BRANCH_NAME":"%s","SPEC_FILE":"%s","FEATURE_NUM":"%s"}\n' "$BRANCH_NAME" "$SPEC_FILE" "$FEATURE_NUM"
else
    echo "BRANCH_NAME: $BRANCH_NAME"
    echo "SPEC_FILE: $SPEC_FILE"
    echo "FEATURE_NUM: $FEATURE_NUM"
    echo "SPECIFY_FEATURE environment variable set to: $BRANCH_NAME"
fi
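The clean_branch_name pipeline above can be tried standalone (GNU sed is assumed for the `\+` repetition operator, matching the script's own usage):

```shell
# Lowercase, replace non-alphanumerics with hyphens, collapse hyphen runs, trim edges
clean_branch_name() {
    echo "$1" | tr '[:upper:]' '[:lower:]' | sed 's/[^a-z0-9]/-/g' | sed 's/-\+/-/g' | sed 's/^-//' | sed 's/-$//'
}

clean_branch_name "Add User Auth!"   # add-user-auth
```

The trailing `!` becomes a hyphen and is then stripped by the final `sed 's/-$//'`, so punctuation never leaks into branch names.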
61
.specify/scripts/bash/setup-plan.sh
Executable file
@@ -0,0 +1,61 @@
#!/usr/bin/env bash

set -e

# Parse command line arguments
JSON_MODE=false
ARGS=()

for arg in "$@"; do
    case "$arg" in
        --json)
            JSON_MODE=true
            ;;
        --help|-h)
            echo "Usage: $0 [--json]"
            echo "  --json    Output results in JSON format"
            echo "  --help    Show this help message"
            exit 0
            ;;
        *)
            ARGS+=("$arg")
            ;;
    esac
done

# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Get all paths and variables from common functions
eval $(get_feature_paths)

# Check if we're on a proper feature branch (only for git repos)
check_feature_branch "$CURRENT_BRANCH" "$HAS_GIT" || exit 1

# Ensure the feature directory exists
mkdir -p "$FEATURE_DIR"

# Copy plan template if it exists
TEMPLATE="$REPO_ROOT/.specify/templates/plan-template.md"
if [[ -f "$TEMPLATE" ]]; then
    cp "$TEMPLATE" "$IMPL_PLAN"
    echo "Copied plan template to $IMPL_PLAN"
else
    echo "Warning: Plan template not found at $TEMPLATE"
    # Create a basic plan file if template doesn't exist
    touch "$IMPL_PLAN"
fi

# Output results
if $JSON_MODE; then
    printf '{"FEATURE_SPEC":"%s","IMPL_PLAN":"%s","SPECS_DIR":"%s","BRANCH":"%s","HAS_GIT":"%s"}\n' \
        "$FEATURE_SPEC" "$IMPL_PLAN" "$FEATURE_DIR" "$CURRENT_BRANCH" "$HAS_GIT"
else
    echo "FEATURE_SPEC: $FEATURE_SPEC"
    echo "IMPL_PLAN: $IMPL_PLAN"
    echo "SPECS_DIR: $FEATURE_DIR"
    echo "BRANCH: $CURRENT_BRANCH"
    echo "HAS_GIT: $HAS_GIT"
fi
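setup-plan.sh imports its variables via `eval $(get_feature_paths)`. The underlying pattern — a function emitting KEY='value' assignments from a heredoc, which the caller then eval's into its own scope — can be sketched with hypothetical paths:

```shell
# A function prints shell assignments; the caller eval's them into scope
get_paths() {
cat <<EOF
REPO_ROOT='/tmp/demo-repo'
CURRENT_BRANCH='001-demo-feature'
IMPL_PLAN='/tmp/demo-repo/specs/001-demo-feature/plan.md'
EOF
}

eval "$(get_paths)"
echo "$CURRENT_BRANCH"   # 001-demo-feature
```

The single quotes keep values with spaces intact, but a value containing a literal `'` would break the eval — the spec-kit scripts implicitly assume repository paths without embedded quotes.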
829
.specify/scripts/bash/update-agent-context.sh
Executable file
@@ -0,0 +1,829 @@
#!/usr/bin/env bash

# Update agent context files with information from plan.md
#
# This script maintains AI agent context files by parsing feature specifications
# and updating agent-specific configuration files with project information.
#
# MAIN FUNCTIONS:
# 1. Environment Validation
#    - Verifies git repository structure and branch information
#    - Checks for required plan.md files and templates
#    - Validates file permissions and accessibility
#
# 2. Plan Data Extraction
#    - Parses plan.md files to extract project metadata
#    - Identifies language/version, frameworks, databases, and project types
#    - Handles missing or incomplete specification data gracefully
#
# 3. Agent File Management
#    - Creates new agent context files from templates when needed
#    - Updates existing agent files with new project information
#    - Preserves manual additions and custom configurations
#    - Supports multiple AI agent formats and directory structures
#
# 4. Content Generation
#    - Generates language-specific build/test commands
#    - Creates appropriate project directory structures
#    - Updates technology stacks and recent changes sections
#    - Maintains consistent formatting and timestamps
#
# 5. Multi-Agent Support
#    - Handles agent-specific file paths and naming conventions
#    - Supports: Claude, Gemini, Copilot, Cursor, Qwen, opencode, Codex, Windsurf, Kilo Code, Auggie CLI, Roo Code, CodeBuddy CLI, Qoder CLI, Amp, SHAI, Kiro CLI, or Antigravity
#    - Can update single agents or all existing agent files
#    - Creates default Claude file if no agent files exist
#
# Usage: ./update-agent-context.sh [agent_type]
# Agent types: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|kiro-cli|agy|bob|qodercli
# Leave empty to update all existing agent files

set -e

# Enable strict error handling
set -u
set -o pipefail

#==============================================================================
# Configuration and Global Variables
#==============================================================================

# Get script directory and load common functions
SCRIPT_DIR="$(CDPATH="" cd "$(dirname "${BASH_SOURCE[0]}")" && pwd)"
source "$SCRIPT_DIR/common.sh"

# Get all paths and variables from common functions
eval $(get_feature_paths)

NEW_PLAN="$IMPL_PLAN"  # Alias for compatibility with existing code
AGENT_TYPE="${1:-}"

# Agent-specific file paths
CLAUDE_FILE="$REPO_ROOT/CLAUDE.md"
GEMINI_FILE="$REPO_ROOT/GEMINI.md"
COPILOT_FILE="$REPO_ROOT/.github/agents/copilot-instructions.md"
CURSOR_FILE="$REPO_ROOT/.cursor/rules/specify-rules.mdc"
QWEN_FILE="$REPO_ROOT/QWEN.md"
AGENTS_FILE="$REPO_ROOT/AGENTS.md"
WINDSURF_FILE="$REPO_ROOT/.windsurf/rules/specify-rules.md"
KILOCODE_FILE="$REPO_ROOT/.kilocode/rules/specify-rules.md"
AUGGIE_FILE="$REPO_ROOT/.augment/rules/specify-rules.md"
ROO_FILE="$REPO_ROOT/.roo/rules/specify-rules.md"
CODEBUDDY_FILE="$REPO_ROOT/CODEBUDDY.md"
QODER_FILE="$REPO_ROOT/QODER.md"
AMP_FILE="$REPO_ROOT/AGENTS.md"
SHAI_FILE="$REPO_ROOT/SHAI.md"
KIRO_FILE="$REPO_ROOT/AGENTS.md"
AGY_FILE="$REPO_ROOT/.agent/rules/specify-rules.md"
BOB_FILE="$REPO_ROOT/AGENTS.md"

# Template file
TEMPLATE_FILE="$REPO_ROOT/.specify/templates/agent-file-template.md"

# Global variables for parsed plan data
NEW_LANG=""
NEW_FRAMEWORK=""
NEW_DB=""
NEW_PROJECT_TYPE=""

#==============================================================================
# Utility Functions
#==============================================================================

log_info() {
    echo "INFO: $1"
}

log_success() {
    echo "✓ $1"
}

log_error() {
    echo "ERROR: $1" >&2
}

log_warning() {
    echo "WARNING: $1" >&2
}

# Cleanup function for temporary files
cleanup() {
    local exit_code=$?
    rm -f /tmp/agent_update_*_$$
    rm -f /tmp/manual_additions_$$
    exit $exit_code
}

# Set up cleanup trap
trap cleanup EXIT INT TERM

#==============================================================================
# Validation Functions
#==============================================================================

validate_environment() {
    # Check if we have a current branch/feature (git or non-git)
    if [[ -z "$CURRENT_BRANCH" ]]; then
        log_error "Unable to determine current feature"
        if [[ "$HAS_GIT" == "true" ]]; then
            log_info "Make sure you're on a feature branch"
        else
            log_info "Set SPECIFY_FEATURE environment variable or create a feature first"
        fi
        exit 1
    fi

    # Check if plan.md exists
    if [[ ! -f "$NEW_PLAN" ]]; then
        log_error "No plan.md found at $NEW_PLAN"
        log_info "Make sure you're working on a feature with a corresponding spec directory"
        if [[ "$HAS_GIT" != "true" ]]; then
            log_info "Use: export SPECIFY_FEATURE=your-feature-name or create a new feature first"
        fi
        exit 1
    fi

    # Check if template exists (needed for new files)
    if [[ ! -f "$TEMPLATE_FILE" ]]; then
        log_warning "Template file not found at $TEMPLATE_FILE"
        log_warning "Creating new agent files will fail"
    fi
}

#==============================================================================
# Plan Parsing Functions
#==============================================================================

extract_plan_field() {
    local field_pattern="$1"
    local plan_file="$2"

    grep "^\*\*${field_pattern}\*\*: " "$plan_file" 2>/dev/null | \
        head -1 | \
        sed "s|^\*\*${field_pattern}\*\*: ||" | \
        sed 's/^[ \t]*//;s/[ \t]*$//' | \
        grep -v "NEEDS CLARIFICATION" | \
        grep -v "^N/A$" || echo ""
}

parse_plan_data() {
    local plan_file="$1"

    if [[ ! -f "$plan_file" ]]; then
        log_error "Plan file not found: $plan_file"
        return 1
    fi

    if [[ ! -r "$plan_file" ]]; then
        log_error "Plan file is not readable: $plan_file"
        return 1
    fi

    log_info "Parsing plan data from $plan_file"

    NEW_LANG=$(extract_plan_field "Language/Version" "$plan_file")
    NEW_FRAMEWORK=$(extract_plan_field "Primary Dependencies" "$plan_file")
    NEW_DB=$(extract_plan_field "Storage" "$plan_file")
    NEW_PROJECT_TYPE=$(extract_plan_field "Project Type" "$plan_file")

    # Log what we found
    if [[ -n "$NEW_LANG" ]]; then
        log_info "Found language: $NEW_LANG"
    else
        log_warning "No language information found in plan"
    fi

    if [[ -n "$NEW_FRAMEWORK" ]]; then
        log_info "Found framework: $NEW_FRAMEWORK"
    fi

    if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
        log_info "Found database: $NEW_DB"
    fi

    if [[ -n "$NEW_PROJECT_TYPE" ]]; then
        log_info "Found project type: $NEW_PROJECT_TYPE"
    fi
}

format_technology_stack() {
    local lang="$1"
    local framework="$2"
    local parts=()

    # Add non-empty parts
    [[ -n "$lang" && "$lang" != "NEEDS CLARIFICATION" ]] && parts+=("$lang")
    [[ -n "$framework" && "$framework" != "NEEDS CLARIFICATION" && "$framework" != "N/A" ]] && parts+=("$framework")

    # Join with proper formatting
    if [[ ${#parts[@]} -eq 0 ]]; then
        echo ""
    elif [[ ${#parts[@]} -eq 1 ]]; then
        echo "${parts[0]}"
    else
        # Join multiple parts with " + "
        local result="${parts[0]}"
        for ((i=1; i<${#parts[@]}; i++)); do
            result="$result + ${parts[i]}"
        done
        echo "$result"
    fi
}

#==============================================================================
# Template and Content Generation Functions
#==============================================================================

get_project_structure() {
    local project_type="$1"

    if [[ "$project_type" == *"web"* ]]; then
        echo "backend/\\nfrontend/\\ntests/"
    else
        echo "src/\\ntests/"
    fi
}

get_commands_for_language() {
    local lang="$1"

    case "$lang" in
        *"Python"*)
            echo "cd src && pytest && ruff check ."
            ;;
        *"Rust"*)
            echo "cargo test && cargo clippy"
            ;;
        *"JavaScript"*|*"TypeScript"*)
            echo "npm test \\&\\& npm run lint"
            ;;
        *)
            echo "# Add commands for $lang"
            ;;
    esac
}

get_language_conventions() {
    local lang="$1"
    echo "$lang: Follow standard conventions"
}

create_new_agent_file() {
    local target_file="$1"
    local temp_file="$2"
    local project_name="$3"
    local current_date="$4"

    if [[ ! -f "$TEMPLATE_FILE" ]]; then
        log_error "Template not found at $TEMPLATE_FILE"
        return 1
    fi

    if [[ ! -r "$TEMPLATE_FILE" ]]; then
        log_error "Template file is not readable: $TEMPLATE_FILE"
        return 1
    fi

    log_info "Creating new agent context file from template..."

    if ! cp "$TEMPLATE_FILE" "$temp_file"; then
        log_error "Failed to copy template file"
        return 1
    fi

    # Replace template placeholders
    local project_structure
    project_structure=$(get_project_structure "$NEW_PROJECT_TYPE")

    local commands
    commands=$(get_commands_for_language "$NEW_LANG")

    local language_conventions
    language_conventions=$(get_language_conventions "$NEW_LANG")

    # Perform substitutions with error checking using safer approach
    # Escape special characters for sed by using a different delimiter or escaping
    local escaped_lang=$(printf '%s\n' "$NEW_LANG" | sed 's/[\[\.*^$()+{}|]/\\&/g')
    local escaped_framework=$(printf '%s\n' "$NEW_FRAMEWORK" | sed 's/[\[\.*^$()+{}|]/\\&/g')
    local escaped_branch=$(printf '%s\n' "$CURRENT_BRANCH" | sed 's/[\[\.*^$()+{}|]/\\&/g')

    # Build technology stack and recent change strings conditionally
    local tech_stack
    if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
        tech_stack="- $escaped_lang + $escaped_framework ($escaped_branch)"
    elif [[ -n "$escaped_lang" ]]; then
        tech_stack="- $escaped_lang ($escaped_branch)"
    elif [[ -n "$escaped_framework" ]]; then
        tech_stack="- $escaped_framework ($escaped_branch)"
    else
        tech_stack="- ($escaped_branch)"
    fi

    local recent_change
    if [[ -n "$escaped_lang" && -n "$escaped_framework" ]]; then
        recent_change="- $escaped_branch: Added $escaped_lang + $escaped_framework"
    elif [[ -n "$escaped_lang" ]]; then
        recent_change="- $escaped_branch: Added $escaped_lang"
    elif [[ -n "$escaped_framework" ]]; then
        recent_change="- $escaped_branch: Added $escaped_framework"
    else
        recent_change="- $escaped_branch: Added"
    fi

    local substitutions=(
        "s|\[PROJECT NAME\]|$project_name|"
        "s|\[DATE\]|$current_date|"
        "s|\[EXTRACTED FROM ALL PLAN.MD FILES\]|$tech_stack|"
        "s|\[ACTUAL STRUCTURE FROM PLANS\]|$project_structure|g"
        "s|\[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES\]|$commands|"
        "s|\[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE\]|$language_conventions|"
        "s|\[LAST 3 FEATURES AND WHAT THEY ADDED\]|$recent_change|"
    )

    for substitution in "${substitutions[@]}"; do
        if ! sed -i.bak -e "$substitution" "$temp_file"; then
            log_error "Failed to perform substitution: $substitution"
            rm -f "$temp_file" "$temp_file.bak"
            return 1
        fi
    done

    # Convert \n sequences to actual newlines
    newline=$(printf '\n')
    sed -i.bak2 "s/\\\\n/${newline}/g" "$temp_file"

    # Clean up backup files
    rm -f "$temp_file.bak" "$temp_file.bak2"

    # Prepend Cursor frontmatter for .mdc files so rules are auto-included
    if [[ "$target_file" == *.mdc ]]; then
        local frontmatter_file
        frontmatter_file=$(mktemp) || return 1
        printf '%s\n' "---" "description: Project Development Guidelines" "globs: [\"**/*\"]" "alwaysApply: true" "---" "" > "$frontmatter_file"
        cat "$temp_file" >> "$frontmatter_file"
        mv "$frontmatter_file" "$temp_file"
    fi

    return 0
}

update_existing_agent_file() {
    local target_file="$1"
    local current_date="$2"

    log_info "Updating existing agent context file..."

    # Use a single temporary file for atomic update
    local temp_file
    temp_file=$(mktemp) || {
        log_error "Failed to create temporary file"
        return 1
    }

    # Process the file in one pass
    local tech_stack=$(format_technology_stack "$NEW_LANG" "$NEW_FRAMEWORK")
    local new_tech_entries=()
    local new_change_entry=""

    # Prepare new technology entries
    if [[ -n "$tech_stack" ]] && ! grep -q "$tech_stack" "$target_file"; then
|
||||
new_tech_entries+=("- $tech_stack ($CURRENT_BRANCH)")
|
||||
fi
|
||||
|
||||
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]] && ! grep -q "$NEW_DB" "$target_file"; then
|
||||
new_tech_entries+=("- $NEW_DB ($CURRENT_BRANCH)")
|
||||
fi
|
||||
|
||||
# Prepare new change entry
|
||||
if [[ -n "$tech_stack" ]]; then
|
||||
new_change_entry="- $CURRENT_BRANCH: Added $tech_stack"
|
||||
elif [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]] && [[ "$NEW_DB" != "NEEDS CLARIFICATION" ]]; then
|
||||
new_change_entry="- $CURRENT_BRANCH: Added $NEW_DB"
|
||||
fi
|
||||
|
||||
# Check if sections exist in the file
|
||||
local has_active_technologies=0
|
||||
local has_recent_changes=0
|
||||
|
||||
if grep -q "^## Active Technologies" "$target_file" 2>/dev/null; then
|
||||
has_active_technologies=1
|
||||
fi
|
||||
|
||||
if grep -q "^## Recent Changes" "$target_file" 2>/dev/null; then
|
||||
has_recent_changes=1
|
||||
fi
|
||||
|
||||
# Process file line by line
|
||||
local in_tech_section=false
|
||||
local in_changes_section=false
|
||||
local tech_entries_added=false
|
||||
local changes_entries_added=false
|
||||
local existing_changes_count=0
|
||||
local file_ended=false
|
||||
|
||||
while IFS= read -r line || [[ -n "$line" ]]; do
|
||||
# Handle Active Technologies section
|
||||
if [[ "$line" == "## Active Technologies" ]]; then
|
||||
echo "$line" >> "$temp_file"
|
||||
in_tech_section=true
|
||||
continue
|
||||
elif [[ $in_tech_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
|
||||
# Add new tech entries before closing the section
|
||||
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
|
||||
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
|
||||
tech_entries_added=true
|
||||
fi
|
||||
echo "$line" >> "$temp_file"
|
||||
in_tech_section=false
|
||||
continue
|
||||
elif [[ $in_tech_section == true ]] && [[ -z "$line" ]]; then
|
||||
# Add new tech entries before empty line in tech section
|
||||
if [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
|
||||
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
|
||||
tech_entries_added=true
|
||||
fi
|
||||
echo "$line" >> "$temp_file"
|
||||
continue
|
||||
fi
|
||||
|
||||
# Handle Recent Changes section
|
||||
if [[ "$line" == "## Recent Changes" ]]; then
|
||||
echo "$line" >> "$temp_file"
|
||||
# Add new change entry right after the heading
|
||||
if [[ -n "$new_change_entry" ]]; then
|
||||
echo "$new_change_entry" >> "$temp_file"
|
||||
fi
|
||||
in_changes_section=true
|
||||
changes_entries_added=true
|
||||
continue
|
||||
elif [[ $in_changes_section == true ]] && [[ "$line" =~ ^##[[:space:]] ]]; then
|
||||
echo "$line" >> "$temp_file"
|
||||
in_changes_section=false
|
||||
continue
|
||||
elif [[ $in_changes_section == true ]] && [[ "$line" == "- "* ]]; then
|
||||
# Keep only first 2 existing changes
|
||||
if [[ $existing_changes_count -lt 2 ]]; then
|
||||
echo "$line" >> "$temp_file"
|
||||
((existing_changes_count++))
|
||||
fi
|
||||
continue
|
||||
fi
|
||||
|
||||
# Update timestamp
|
||||
if [[ "$line" =~ \*\*Last\ updated\*\*:.*[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] ]]; then
|
||||
echo "$line" | sed "s/[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9]/$current_date/" >> "$temp_file"
|
||||
else
|
||||
echo "$line" >> "$temp_file"
|
||||
fi
|
||||
done < "$target_file"
|
||||
|
||||
# Post-loop check: if we're still in the Active Technologies section and haven't added new entries
|
||||
if [[ $in_tech_section == true ]] && [[ $tech_entries_added == false ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
|
||||
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
|
||||
tech_entries_added=true
|
||||
fi
|
||||
|
||||
# If sections don't exist, add them at the end of the file
|
||||
if [[ $has_active_technologies -eq 0 ]] && [[ ${#new_tech_entries[@]} -gt 0 ]]; then
|
||||
echo "" >> "$temp_file"
|
||||
echo "## Active Technologies" >> "$temp_file"
|
||||
printf '%s\n' "${new_tech_entries[@]}" >> "$temp_file"
|
||||
tech_entries_added=true
|
||||
fi
|
||||
|
||||
if [[ $has_recent_changes -eq 0 ]] && [[ -n "$new_change_entry" ]]; then
|
||||
echo "" >> "$temp_file"
|
||||
echo "## Recent Changes" >> "$temp_file"
|
||||
echo "$new_change_entry" >> "$temp_file"
|
||||
changes_entries_added=true
|
||||
fi
|
||||
|
||||
# Ensure Cursor .mdc files have YAML frontmatter for auto-inclusion
|
||||
if [[ "$target_file" == *.mdc ]]; then
|
||||
if ! head -1 "$temp_file" | grep -q '^---'; then
|
||||
local frontmatter_file
|
||||
frontmatter_file=$(mktemp) || { rm -f "$temp_file"; return 1; }
|
||||
printf '%s\n' "---" "description: Project Development Guidelines" "globs: [\"**/*\"]" "alwaysApply: true" "---" "" > "$frontmatter_file"
|
||||
cat "$temp_file" >> "$frontmatter_file"
|
||||
mv "$frontmatter_file" "$temp_file"
|
||||
fi
|
||||
fi
|
||||
|
||||
# Move temp file to target atomically
|
||||
if ! mv "$temp_file" "$target_file"; then
|
||||
log_error "Failed to update target file"
|
||||
rm -f "$temp_file"
|
||||
return 1
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
#==============================================================================
|
||||
# Main Agent File Update Function
|
||||
#==============================================================================
|
||||
|
||||
update_agent_file() {
|
||||
local target_file="$1"
|
||||
local agent_name="$2"
|
||||
|
||||
if [[ -z "$target_file" ]] || [[ -z "$agent_name" ]]; then
|
||||
log_error "update_agent_file requires target_file and agent_name parameters"
|
||||
return 1
|
||||
fi
|
||||
|
||||
log_info "Updating $agent_name context file: $target_file"
|
||||
|
||||
local project_name
|
||||
project_name=$(basename "$REPO_ROOT")
|
||||
local current_date
|
||||
current_date=$(date +%Y-%m-%d)
|
||||
|
||||
# Create directory if it doesn't exist
|
||||
local target_dir
|
||||
target_dir=$(dirname "$target_file")
|
||||
if [[ ! -d "$target_dir" ]]; then
|
||||
if ! mkdir -p "$target_dir"; then
|
||||
log_error "Failed to create directory: $target_dir"
|
||||
return 1
|
||||
fi
|
||||
fi
|
||||
|
||||
if [[ ! -f "$target_file" ]]; then
|
||||
# Create new file from template
|
||||
local temp_file
|
||||
temp_file=$(mktemp) || {
|
||||
log_error "Failed to create temporary file"
|
||||
return 1
|
||||
}
|
||||
|
||||
if create_new_agent_file "$target_file" "$temp_file" "$project_name" "$current_date"; then
|
||||
if mv "$temp_file" "$target_file"; then
|
||||
log_success "Created new $agent_name context file"
|
||||
else
|
||||
log_error "Failed to move temporary file to $target_file"
|
||||
rm -f "$temp_file"
|
||||
return 1
|
||||
fi
|
||||
else
|
||||
log_error "Failed to create new agent file"
|
||||
rm -f "$temp_file"
|
||||
return 1
|
||||
fi
|
||||
else
|
||||
# Update existing file
|
||||
if [[ ! -r "$target_file" ]]; then
|
||||
log_error "Cannot read existing file: $target_file"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if [[ ! -w "$target_file" ]]; then
|
||||
log_error "Cannot write to existing file: $target_file"
|
||||
return 1
|
||||
fi
|
||||
|
||||
if update_existing_agent_file "$target_file" "$current_date"; then
|
||||
log_success "Updated existing $agent_name context file"
|
||||
else
|
||||
log_error "Failed to update existing agent file"
|
||||
return 1
|
||||
fi
|
||||
fi
|
||||
|
||||
return 0
|
||||
}
|
||||
|
||||
#==============================================================================
|
||||
# Agent Selection and Processing
|
||||
#==============================================================================
|
||||
|
||||
update_specific_agent() {
|
||||
local agent_type="$1"
|
||||
|
||||
case "$agent_type" in
|
||||
claude)
|
||||
update_agent_file "$CLAUDE_FILE" "Claude Code"
|
||||
;;
|
||||
gemini)
|
||||
update_agent_file "$GEMINI_FILE" "Gemini CLI"
|
||||
;;
|
||||
copilot)
|
||||
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
|
||||
;;
|
||||
cursor-agent)
|
||||
update_agent_file "$CURSOR_FILE" "Cursor IDE"
|
||||
;;
|
||||
qwen)
|
||||
update_agent_file "$QWEN_FILE" "Qwen Code"
|
||||
;;
|
||||
opencode)
|
||||
update_agent_file "$AGENTS_FILE" "opencode"
|
||||
;;
|
||||
codex)
|
||||
update_agent_file "$AGENTS_FILE" "Codex CLI"
|
||||
;;
|
||||
windsurf)
|
||||
update_agent_file "$WINDSURF_FILE" "Windsurf"
|
||||
;;
|
||||
kilocode)
|
||||
update_agent_file "$KILOCODE_FILE" "Kilo Code"
|
||||
;;
|
||||
auggie)
|
||||
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
|
||||
;;
|
||||
roo)
|
||||
update_agent_file "$ROO_FILE" "Roo Code"
|
||||
;;
|
||||
codebuddy)
|
||||
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
|
||||
;;
|
||||
qodercli)
|
||||
update_agent_file "$QODER_FILE" "Qoder CLI"
|
||||
;;
|
||||
amp)
|
||||
update_agent_file "$AMP_FILE" "Amp"
|
||||
;;
|
||||
shai)
|
||||
update_agent_file "$SHAI_FILE" "SHAI"
|
||||
;;
|
||||
kiro-cli)
|
||||
update_agent_file "$KIRO_FILE" "Kiro CLI"
|
||||
;;
|
||||
agy)
|
||||
update_agent_file "$AGY_FILE" "Antigravity"
|
||||
;;
|
||||
bob)
|
||||
update_agent_file "$BOB_FILE" "IBM Bob"
|
||||
;;
|
||||
generic)
|
||||
log_info "Generic agent: no predefined context file. Use the agent-specific update script for your agent."
|
||||
;;
|
||||
*)
|
||||
log_error "Unknown agent type '$agent_type'"
|
||||
log_error "Expected: claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|kiro-cli|agy|bob|qodercli|generic"
|
||||
exit 1
|
||||
;;
|
||||
esac
|
||||
}
|
||||
|
||||
update_all_existing_agents() {
|
||||
local found_agent=false
|
||||
|
||||
# Check each possible agent file and update if it exists
|
||||
if [[ -f "$CLAUDE_FILE" ]]; then
|
||||
update_agent_file "$CLAUDE_FILE" "Claude Code"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$GEMINI_FILE" ]]; then
|
||||
update_agent_file "$GEMINI_FILE" "Gemini CLI"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$COPILOT_FILE" ]]; then
|
||||
update_agent_file "$COPILOT_FILE" "GitHub Copilot"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$CURSOR_FILE" ]]; then
|
||||
update_agent_file "$CURSOR_FILE" "Cursor IDE"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$QWEN_FILE" ]]; then
|
||||
update_agent_file "$QWEN_FILE" "Qwen Code"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$AGENTS_FILE" ]]; then
|
||||
update_agent_file "$AGENTS_FILE" "Codex/opencode"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$WINDSURF_FILE" ]]; then
|
||||
update_agent_file "$WINDSURF_FILE" "Windsurf"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$KILOCODE_FILE" ]]; then
|
||||
update_agent_file "$KILOCODE_FILE" "Kilo Code"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$AUGGIE_FILE" ]]; then
|
||||
update_agent_file "$AUGGIE_FILE" "Auggie CLI"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$ROO_FILE" ]]; then
|
||||
update_agent_file "$ROO_FILE" "Roo Code"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$CODEBUDDY_FILE" ]]; then
|
||||
update_agent_file "$CODEBUDDY_FILE" "CodeBuddy CLI"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$SHAI_FILE" ]]; then
|
||||
update_agent_file "$SHAI_FILE" "SHAI"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$QODER_FILE" ]]; then
|
||||
update_agent_file "$QODER_FILE" "Qoder CLI"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$KIRO_FILE" ]]; then
|
||||
update_agent_file "$KIRO_FILE" "Kiro CLI"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
if [[ -f "$AGY_FILE" ]]; then
|
||||
update_agent_file "$AGY_FILE" "Antigravity"
|
||||
found_agent=true
|
||||
fi
|
||||
if [[ -f "$BOB_FILE" ]]; then
|
||||
update_agent_file "$BOB_FILE" "IBM Bob"
|
||||
found_agent=true
|
||||
fi
|
||||
|
||||
# If no agent files exist, create a default Claude file
|
||||
if [[ "$found_agent" == false ]]; then
|
||||
log_info "No existing agent files found, creating default Claude file..."
|
||||
update_agent_file "$CLAUDE_FILE" "Claude Code"
|
||||
fi
|
||||
}
|
||||
print_summary() {
|
||||
echo
|
||||
log_info "Summary of changes:"
|
||||
|
||||
if [[ -n "$NEW_LANG" ]]; then
|
||||
echo " - Added language: $NEW_LANG"
|
||||
fi
|
||||
|
||||
if [[ -n "$NEW_FRAMEWORK" ]]; then
|
||||
echo " - Added framework: $NEW_FRAMEWORK"
|
||||
fi
|
||||
|
||||
if [[ -n "$NEW_DB" ]] && [[ "$NEW_DB" != "N/A" ]]; then
|
||||
echo " - Added database: $NEW_DB"
|
||||
fi
|
||||
|
||||
echo
|
||||
|
||||
log_info "Usage: $0 [claude|gemini|copilot|cursor-agent|qwen|opencode|codex|windsurf|kilocode|auggie|roo|codebuddy|amp|shai|kiro-cli|agy|bob|qodercli]"
|
||||
}
|
||||
|
||||
#==============================================================================
|
||||
# Main Execution
|
||||
#==============================================================================
|
||||
|
||||
main() {
|
||||
# Validate environment before proceeding
|
||||
validate_environment
|
||||
|
||||
log_info "=== Updating agent context files for feature $CURRENT_BRANCH ==="
|
||||
|
||||
# Parse the plan file to extract project information
|
||||
if ! parse_plan_data "$NEW_PLAN"; then
|
||||
log_error "Failed to parse plan data"
|
||||
exit 1
|
||||
fi
|
||||
|
||||
# Process based on agent type argument
|
||||
local success=true
|
||||
|
||||
if [[ -z "$AGENT_TYPE" ]]; then
|
||||
# No specific agent provided - update all existing agent files
|
||||
log_info "No agent specified, updating all existing agent files..."
|
||||
if ! update_all_existing_agents; then
|
||||
success=false
|
||||
fi
|
||||
else
|
||||
# Specific agent provided - update only that agent
|
||||
log_info "Updating specific agent: $AGENT_TYPE"
|
||||
if ! update_specific_agent "$AGENT_TYPE"; then
|
||||
success=false
|
||||
fi
|
||||
fi
|
||||
|
||||
# Print summary
|
||||
print_summary
|
||||
|
||||
if [[ "$success" == true ]]; then
|
||||
log_success "Agent context update completed successfully"
|
||||
exit 0
|
||||
else
|
||||
log_error "Agent context update completed with errors"
|
||||
exit 1
|
||||
fi
|
||||
}
|
||||
|
||||
# Execute main function if script is run directly
|
||||
if [[ "${BASH_SOURCE[0]}" == "${0}" ]]; then
|
||||
main "$@"
|
||||
fi
|
||||
28	.specify/templates/agent-file-template.md	Normal file
@@ -0,0 +1,28 @@
# [PROJECT NAME] Development Guidelines

Auto-generated from all feature plans. Last updated: [DATE]

## Active Technologies

[EXTRACTED FROM ALL PLAN.MD FILES]

## Project Structure

```text
[ACTUAL STRUCTURE FROM PLANS]
```

## Commands

[ONLY COMMANDS FOR ACTIVE TECHNOLOGIES]

## Code Style

[LANGUAGE-SPECIFIC, ONLY FOR LANGUAGES IN USE]

## Recent Changes

[LAST 3 FEATURES AND WHAT THEY ADDED]

<!-- MANUAL ADDITIONS START -->
<!-- MANUAL ADDITIONS END -->
40	.specify/templates/checklist-template.md	Normal file
@@ -0,0 +1,40 @@
# [CHECKLIST TYPE] Checklist: [FEATURE NAME]

**Purpose**: [Brief description of what this checklist covers]
**Created**: [DATE]
**Feature**: [Link to spec.md or relevant documentation]

**Note**: This checklist is generated by the `/speckit.checklist` command based on feature context and requirements.

<!--
============================================================================
IMPORTANT: The checklist items below are SAMPLE ITEMS for illustration only.

The /speckit.checklist command MUST replace these with actual items based on:
- User's specific checklist request
- Feature requirements from spec.md
- Technical context from plan.md
- Implementation details from tasks.md

DO NOT keep these sample items in the generated checklist file.
============================================================================
-->

## [Category 1]

- [ ] CHK001 First checklist item with clear action
- [ ] CHK002 Second checklist item
- [ ] CHK003 Third checklist item

## [Category 2]

- [ ] CHK004 Another category item
- [ ] CHK005 Item with specific criteria
- [ ] CHK006 Final item in this category

## Notes

- Check items off as completed: `[x]`
- Add comments or findings inline
- Link to relevant resources or documentation
- Items are numbered sequentially for easy reference
50	.specify/templates/constitution-template.md	Normal file
@@ -0,0 +1,50 @@
# [PROJECT_NAME] Constitution
<!-- Example: Spec Constitution, TaskFlow Constitution, etc. -->

## Core Principles

### [PRINCIPLE_1_NAME]
<!-- Example: I. Library-First -->
[PRINCIPLE_1_DESCRIPTION]
<!-- Example: Every feature starts as a standalone library; Libraries must be self-contained, independently testable, documented; Clear purpose required - no organizational-only libraries -->

### [PRINCIPLE_2_NAME]
<!-- Example: II. CLI Interface -->
[PRINCIPLE_2_DESCRIPTION]
<!-- Example: Every library exposes functionality via CLI; Text in/out protocol: stdin/args → stdout, errors → stderr; Support JSON + human-readable formats -->

### [PRINCIPLE_3_NAME]
<!-- Example: III. Test-First (NON-NEGOTIABLE) -->
[PRINCIPLE_3_DESCRIPTION]
<!-- Example: TDD mandatory: Tests written → User approved → Tests fail → Then implement; Red-Green-Refactor cycle strictly enforced -->

### [PRINCIPLE_4_NAME]
<!-- Example: IV. Integration Testing -->
[PRINCIPLE_4_DESCRIPTION]
<!-- Example: Focus areas requiring integration tests: New library contract tests, Contract changes, Inter-service communication, Shared schemas -->

### [PRINCIPLE_5_NAME]
<!-- Example: V. Observability, VI. Versioning & Breaking Changes, VII. Simplicity -->
[PRINCIPLE_5_DESCRIPTION]
<!-- Example: Text I/O ensures debuggability; Structured logging required; Or: MAJOR.MINOR.BUILD format; Or: Start simple, YAGNI principles -->

## [SECTION_2_NAME]
<!-- Example: Additional Constraints, Security Requirements, Performance Standards, etc. -->

[SECTION_2_CONTENT]
<!-- Example: Technology stack requirements, compliance standards, deployment policies, etc. -->

## [SECTION_3_NAME]
<!-- Example: Development Workflow, Review Process, Quality Gates, etc. -->

[SECTION_3_CONTENT]
<!-- Example: Code review requirements, testing gates, deployment approval process, etc. -->

## Governance
<!-- Example: Constitution supersedes all other practices; Amendments require documentation, approval, migration plan -->

[GOVERNANCE_RULES]
<!-- Example: All PRs/reviews must verify compliance; Complexity must be justified; Use [GUIDANCE_FILE] for runtime development guidance -->

**Version**: [CONSTITUTION_VERSION] | **Ratified**: [RATIFICATION_DATE] | **Last Amended**: [LAST_AMENDED_DATE]
<!-- Example: Version: 2.1.1 | Ratified: 2025-06-13 | Last Amended: 2025-07-16 -->
104	.specify/templates/plan-template.md	Normal file
@@ -0,0 +1,104 @@
# Implementation Plan: [FEATURE]

**Branch**: `[###-feature-name]` | **Date**: [DATE] | **Spec**: [link]
**Input**: Feature specification from `/specs/[###-feature-name]/spec.md`

**Note**: This template is filled in by the `/speckit.plan` command. See `.specify/templates/plan-template.md` for the execution workflow.

## Summary

[Extract from feature spec: primary requirement + technical approach from research]

## Technical Context

<!--
  ACTION REQUIRED: Replace the content in this section with the technical details
  for the project. The structure here is presented in advisory capacity to guide
  the iteration process.
-->

**Language/Version**: [e.g., Python 3.11, Swift 5.9, Rust 1.75 or NEEDS CLARIFICATION]
**Primary Dependencies**: [e.g., FastAPI, UIKit, LLVM or NEEDS CLARIFICATION]
**Storage**: [if applicable, e.g., PostgreSQL, CoreData, files or N/A]
**Testing**: [e.g., pytest, XCTest, cargo test or NEEDS CLARIFICATION]
**Target Platform**: [e.g., Linux server, iOS 15+, WASM or NEEDS CLARIFICATION]
**Project Type**: [e.g., library/cli/web-service/mobile-app/compiler/desktop-app or NEEDS CLARIFICATION]
**Performance Goals**: [domain-specific, e.g., 1000 req/s, 10k lines/sec, 60 fps or NEEDS CLARIFICATION]
**Constraints**: [domain-specific, e.g., <200ms p95, <100MB memory, offline-capable or NEEDS CLARIFICATION]
**Scale/Scope**: [domain-specific, e.g., 10k users, 1M LOC, 50 screens or NEEDS CLARIFICATION]

## Constitution Check

*GATE: Must pass before Phase 0 research. Re-check after Phase 1 design.*

[Gates determined based on constitution file]

## Project Structure

### Documentation (this feature)

```text
specs/[###-feature]/
├── plan.md              # This file (/speckit.plan command output)
├── research.md          # Phase 0 output (/speckit.plan command)
├── data-model.md        # Phase 1 output (/speckit.plan command)
├── quickstart.md        # Phase 1 output (/speckit.plan command)
├── contracts/           # Phase 1 output (/speckit.plan command)
└── tasks.md             # Phase 2 output (/speckit.tasks command - NOT created by /speckit.plan)
```

### Source Code (repository root)
<!--
  ACTION REQUIRED: Replace the placeholder tree below with the concrete layout
  for this feature. Delete unused options and expand the chosen structure with
  real paths (e.g., apps/admin, packages/something). The delivered plan must
  not include Option labels.
-->

```text
# [REMOVE IF UNUSED] Option 1: Single project (DEFAULT)
src/
├── models/
├── services/
├── cli/
└── lib/

tests/
├── contract/
├── integration/
└── unit/

# [REMOVE IF UNUSED] Option 2: Web application (when "frontend" + "backend" detected)
backend/
├── src/
│   ├── models/
│   ├── services/
│   └── api/
└── tests/

frontend/
├── src/
│   ├── components/
│   ├── pages/
│   └── services/
└── tests/

# [REMOVE IF UNUSED] Option 3: Mobile + API (when "iOS/Android" detected)
api/
└── [same as backend above]

ios/ or android/
└── [platform-specific structure: feature modules, UI flows, platform tests]
```

**Structure Decision**: [Document the selected structure and reference the real
directories captured above]

## Complexity Tracking

> **Fill ONLY if Constitution Check has violations that must be justified**

| Violation | Why Needed | Simpler Alternative Rejected Because |
|-----------|------------|-------------------------------------|
| [e.g., 4th project] | [current need] | [why 3 projects insufficient] |
| [e.g., Repository pattern] | [specific problem] | [why direct DB access insufficient] |
115	.specify/templates/spec-template.md	Normal file
@@ -0,0 +1,115 @@
# Feature Specification: [FEATURE NAME]

**Feature Branch**: `[###-feature-name]`
**Created**: [DATE]
**Status**: Draft
**Input**: User description: "$ARGUMENTS"

## User Scenarios & Testing *(mandatory)*

<!--
  IMPORTANT: User stories should be PRIORITIZED as user journeys ordered by importance.
  Each user story/journey must be INDEPENDENTLY TESTABLE - meaning if you implement just ONE of them,
  you should still have a viable MVP (Minimum Viable Product) that delivers value.

  Assign priorities (P1, P2, P3, etc.) to each story, where P1 is the most critical.
  Think of each story as a standalone slice of functionality that can be:
  - Developed independently
  - Tested independently
  - Deployed independently
  - Demonstrated to users independently
-->

### User Story 1 - [Brief Title] (Priority: P1)

[Describe this user journey in plain language]

**Why this priority**: [Explain the value and why it has this priority level]

**Independent Test**: [Describe how this can be tested independently - e.g., "Can be fully tested by [specific action] and delivers [specific value]"]

**Acceptance Scenarios**:

1. **Given** [initial state], **When** [action], **Then** [expected outcome]
2. **Given** [initial state], **When** [action], **Then** [expected outcome]

---

### User Story 2 - [Brief Title] (Priority: P2)

[Describe this user journey in plain language]

**Why this priority**: [Explain the value and why it has this priority level]

**Independent Test**: [Describe how this can be tested independently]

**Acceptance Scenarios**:

1. **Given** [initial state], **When** [action], **Then** [expected outcome]

---

### User Story 3 - [Brief Title] (Priority: P3)

[Describe this user journey in plain language]

**Why this priority**: [Explain the value and why it has this priority level]

**Independent Test**: [Describe how this can be tested independently]

**Acceptance Scenarios**:

1. **Given** [initial state], **When** [action], **Then** [expected outcome]

---

[Add more user stories as needed, each with an assigned priority]

### Edge Cases

<!--
  ACTION REQUIRED: The content in this section represents placeholders.
  Fill them out with the right edge cases.
-->

- What happens when [boundary condition]?
- How does system handle [error scenario]?

## Requirements *(mandatory)*

<!--
  ACTION REQUIRED: The content in this section represents placeholders.
  Fill them out with the right functional requirements.
-->

### Functional Requirements

- **FR-001**: System MUST [specific capability, e.g., "allow users to create accounts"]
- **FR-002**: System MUST [specific capability, e.g., "validate email addresses"]
- **FR-003**: Users MUST be able to [key interaction, e.g., "reset their password"]
- **FR-004**: System MUST [data requirement, e.g., "persist user preferences"]
- **FR-005**: System MUST [behavior, e.g., "log all security events"]

*Example of marking unclear requirements:*

- **FR-006**: System MUST authenticate users via [NEEDS CLARIFICATION: auth method not specified - email/password, SSO, OAuth?]
- **FR-007**: System MUST retain user data for [NEEDS CLARIFICATION: retention period not specified]

### Key Entities *(include if feature involves data)*

- **[Entity 1]**: [What it represents, key attributes without implementation]
- **[Entity 2]**: [What it represents, relationships to other entities]

## Success Criteria *(mandatory)*

<!--
  ACTION REQUIRED: Define measurable success criteria.
  These must be technology-agnostic and measurable.
-->

### Measurable Outcomes

- **SC-001**: [Measurable metric, e.g., "Users can complete account creation in under 2 minutes"]
- **SC-002**: [Measurable metric, e.g., "System handles 1000 concurrent users without degradation"]
- **SC-003**: [User satisfaction metric, e.g., "90% of users successfully complete primary task on first attempt"]
- **SC-004**: [Business metric, e.g., "Reduce support tickets related to [X] by 50%"]
251	.specify/templates/tasks-template.md	Normal file
@@ -0,0 +1,251 @@
|
||||
---
|
||||
|
||||
description: "Task list template for feature implementation"
|
||||
---
|
||||
|
||||
# Tasks: [FEATURE NAME]
|
||||
|
||||
**Input**: Design documents from `/specs/[###-feature-name]/`
|
||||
**Prerequisites**: plan.md (required), spec.md (required for user stories), research.md, data-model.md, contracts/
|
||||
|
||||
**Tests**: The examples below include test tasks. Tests are OPTIONAL - only include them if explicitly requested in the feature specification.
|
||||
|
||||
**Organization**: Tasks are grouped by user story to enable independent implementation and testing of each story.
|
||||
|
||||
## Format: `[ID] [P?] [Story] Description`
|
||||
|
||||
- **[P]**: Can run in parallel (different files, no dependencies)
|
||||
- **[Story]**: Which user story this task belongs to (e.g., US1, US2, US3)
|
||||
- Include exact file paths in descriptions
|

## Path Conventions

- **Single project**: `src/`, `tests/` at repository root
- **Web app**: `backend/src/`, `frontend/src/`
- **Mobile**: `api/src/`, `ios/src/` or `android/src/`
- Paths shown below assume single project - adjust based on plan.md structure

<!--
============================================================================
IMPORTANT: The tasks below are SAMPLE TASKS for illustration purposes only.

The /speckit.tasks command MUST replace these with actual tasks based on:

- User stories from spec.md (with their priorities P1, P2, P3...)
- Feature requirements from plan.md
- Entities from data-model.md
- Endpoints from contracts/

Tasks MUST be organized by user story so each story can be:

- Implemented independently
- Tested independently
- Delivered as an MVP increment

DO NOT keep these sample tasks in the generated tasks.md file.
============================================================================
-->

## Phase 1: Setup (Shared Infrastructure)

**Purpose**: Project initialization and basic structure

- [ ] T001 Create project structure per implementation plan
- [ ] T002 Initialize [language] project with [framework] dependencies
- [ ] T003 [P] Configure linting and formatting tools

---

## Phase 2: Foundational (Blocking Prerequisites)

**Purpose**: Core infrastructure that MUST be complete before ANY user story can be implemented

**⚠️ CRITICAL**: No user story work can begin until this phase is complete

Examples of foundational tasks (adjust based on your project):

- [ ] T004 Setup database schema and migrations framework
- [ ] T005 [P] Implement authentication/authorization framework
- [ ] T006 [P] Setup API routing and middleware structure
- [ ] T007 Create base models/entities that all stories depend on
- [ ] T008 Configure error handling and logging infrastructure
- [ ] T009 Setup environment configuration management

**Checkpoint**: Foundation ready - user story implementation can now begin in parallel

---

## Phase 3: User Story 1 - [Title] (Priority: P1) 🎯 MVP

**Goal**: [Brief description of what this story delivers]

**Independent Test**: [How to verify this story works on its own]

### Tests for User Story 1 (OPTIONAL - only if tests requested) ⚠️

> **NOTE: Write these tests FIRST, ensure they FAIL before implementation**

- [ ] T010 [P] [US1] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T011 [P] [US1] Integration test for [user journey] in tests/integration/test_[name].py

### Implementation for User Story 1

- [ ] T012 [P] [US1] Create [Entity1] model in src/models/[entity1].py
- [ ] T013 [P] [US1] Create [Entity2] model in src/models/[entity2].py
- [ ] T014 [US1] Implement [Service] in src/services/[service].py (depends on T012, T013)
- [ ] T015 [US1] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T016 [US1] Add validation and error handling
- [ ] T017 [US1] Add logging for user story 1 operations

**Checkpoint**: At this point, User Story 1 should be fully functional and testable independently

---

## Phase 4: User Story 2 - [Title] (Priority: P2)

**Goal**: [Brief description of what this story delivers]

**Independent Test**: [How to verify this story works on its own]

### Tests for User Story 2 (OPTIONAL - only if tests requested) ⚠️

- [ ] T018 [P] [US2] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T019 [P] [US2] Integration test for [user journey] in tests/integration/test_[name].py

### Implementation for User Story 2

- [ ] T020 [P] [US2] Create [Entity] model in src/models/[entity].py
- [ ] T021 [US2] Implement [Service] in src/services/[service].py
- [ ] T022 [US2] Implement [endpoint/feature] in src/[location]/[file].py
- [ ] T023 [US2] Integrate with User Story 1 components (if needed)

**Checkpoint**: At this point, User Stories 1 AND 2 should both work independently

---

## Phase 5: User Story 3 - [Title] (Priority: P3)

**Goal**: [Brief description of what this story delivers]

**Independent Test**: [How to verify this story works on its own]

### Tests for User Story 3 (OPTIONAL - only if tests requested) ⚠️

- [ ] T024 [P] [US3] Contract test for [endpoint] in tests/contract/test_[name].py
- [ ] T025 [P] [US3] Integration test for [user journey] in tests/integration/test_[name].py

### Implementation for User Story 3

- [ ] T026 [P] [US3] Create [Entity] model in src/models/[entity].py
- [ ] T027 [US3] Implement [Service] in src/services/[service].py
- [ ] T028 [US3] Implement [endpoint/feature] in src/[location]/[file].py

**Checkpoint**: All user stories should now be independently functional

---

[Add more user story phases as needed, following the same pattern]

---

## Phase N: Polish & Cross-Cutting Concerns

**Purpose**: Improvements that affect multiple user stories

- [ ] TXXX [P] Documentation updates in docs/
- [ ] TXXX Code cleanup and refactoring
- [ ] TXXX Performance optimization across all stories
- [ ] TXXX [P] Additional unit tests (if requested) in tests/unit/
- [ ] TXXX Security hardening
- [ ] TXXX Run quickstart.md validation

---

## Dependencies & Execution Order

### Phase Dependencies

- **Setup (Phase 1)**: No dependencies - can start immediately
- **Foundational (Phase 2)**: Depends on Setup completion - BLOCKS all user stories
- **User Stories (Phase 3+)**: All depend on Foundational phase completion
  - User stories can then proceed in parallel (if staffed)
  - Or sequentially in priority order (P1 → P2 → P3)
- **Polish (Final Phase)**: Depends on all desired user stories being complete
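For teams that script their workflow, the phase gating above can be sketched in shell. The `.phases/*.done` marker files are a hypothetical convention for this illustration, not something spec-kit provides:

```shell
# Hypothetical phase gate using per-phase marker files (assumed convention).
phase_done() { [ -f ".phases/$1.done" ]; }

if phase_done setup && phase_done foundational; then
  echo "user stories may start"
else
  echo "blocked: complete Setup and Foundational first"
fi
```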

### User Story Dependencies

- **User Story 1 (P1)**: Can start after Foundational (Phase 2) - No dependencies on other stories
- **User Story 2 (P2)**: Can start after Foundational (Phase 2) - May integrate with US1 but should be independently testable
- **User Story 3 (P3)**: Can start after Foundational (Phase 2) - May integrate with US1/US2 but should be independently testable

### Within Each User Story

- Tests (if included) MUST be written and FAIL before implementation
- Models before services
- Services before endpoints
- Core implementation before integration
- Story complete before moving to next priority
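The "tests must FAIL before implementation" rule can be guarded mechanically. This is a sketch only; `run_tests` is a stand-in for your real test runner, and the stub here simulates the pre-implementation state:

```shell
# Illustrative red-green guard. `run_tests` is a hypothetical stand-in for
# the story's real test command (e.g. a pytest invocation).
run_tests() { return 1; }  # stub: tests fail because nothing is implemented yet

if run_tests; then
  echo "WARNING: tests already pass - they may not exercise the new behavior"
else
  echo "tests fail as expected - safe to start implementation"
fi
```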

### Parallel Opportunities

- All Setup tasks marked [P] can run in parallel
- All Foundational tasks marked [P] can run in parallel (within Phase 2)
- Once Foundational phase completes, all user stories can start in parallel (if team capacity allows)
- All tests for a user story marked [P] can run in parallel
- Models within a story marked [P] can run in parallel
- Different user stories can be worked on in parallel by different team members

---

## Parallel Example: User Story 1

```bash
# Launch all tests for User Story 1 together (if tests requested):
Task: "Contract test for [endpoint] in tests/contract/test_[name].py"
Task: "Integration test for [user journey] in tests/integration/test_[name].py"

# Launch all models for User Story 1 together:
Task: "Create [Entity1] model in src/models/[entity1].py"
Task: "Create [Entity2] model in src/models/[entity2].py"
```
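The `Task:` lines above are agent directives, not shell commands. If you drive [P] tasks from a plain shell instead, the same fan-out/join pattern might look like this (the `run_task` helper is hypothetical):

```shell
# Hypothetical parallel fan-out for [P]-marked tasks within one story.
run_task() { echo "started: $1"; echo "done: $1"; }

run_task "T012 Create Entity1 model" &
run_task "T013 Create Entity2 model" &
wait  # join: both [P] tasks must finish before dependent tasks (e.g. T014) start
echo "US1 parallel tasks complete"
```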

---

## Implementation Strategy

### MVP First (User Story 1 Only)

1. Complete Phase 1: Setup
2. Complete Phase 2: Foundational (CRITICAL - blocks all stories)
3. Complete Phase 3: User Story 1
4. **STOP and VALIDATE**: Test User Story 1 independently
5. Deploy/demo if ready

### Incremental Delivery

1. Complete Setup + Foundational → Foundation ready
2. Add User Story 1 → Test independently → Deploy/Demo (MVP!)
3. Add User Story 2 → Test independently → Deploy/Demo
4. Add User Story 3 → Test independently → Deploy/Demo
5. Each story adds value without breaking previous stories

### Parallel Team Strategy

With multiple developers:

1. Team completes Setup + Foundational together
2. Once Foundational is done:
   - Developer A: User Story 1
   - Developer B: User Story 2
   - Developer C: User Story 3
3. Stories complete and integrate independently

---

## Notes

- [P] tasks = different files, no dependencies
- [Story] label maps task to specific user story for traceability
- Each user story should be independently completable and testable
- Verify tests fail before implementing
- Commit after each task or logical group
- Stop at any checkpoint to validate story independently
- Avoid: vague tasks, same file conflicts, cross-story dependencies that break independence