PostgreSQL schema migration for periodic_summaries
epic-periodic-summaries-foundation-task-001 — Create and apply a PostgreSQL migration that adds the periodic_summaries table with columns for organisation_id, period_type (weekly/monthly/quarterly), period_start, period_end, session_count, total_hours, generated_at, and a JSONB payload column for flexible aggregation data. Include appropriate indexes on organisation_id and period boundaries for query performance.
Acceptance Criteria
Technical Requirements
Implementation Notes
Use `supabase migration new periodic_summaries_schema` to generate the file. Place all DDL inside a single transaction block (`BEGIN; ... COMMIT;`) for atomicity. Use `TIMESTAMPTZ` (not `TIMESTAMP`) for all time columns to ensure UTC storage regardless of server locale — critical for Norwegian locale handling in task-003.
The JSONB payload is intentionally schema-less to accommodate future aggregation dimensions (e.g., activity_type breakdown) without further migrations. Do not add application-level columns (e.g., status, is_published) at this stage — keep the schema minimal and extend via payload. Reference the `organisations` table UUID type to confirm the FK type matches exactly.
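The notes above can be sketched as a single-transaction migration. This is a minimal sketch, assuming `organisations` has a UUID `id` primary key; the surrogate `id` column, defaults, `NUMERIC` precision, and constraint/index names are illustrative assumptions, not part of the ticket.

```sql
BEGIN;

CREATE TABLE periodic_summaries (
    -- Surrogate key; an assumption, the ticket does not mandate one.
    id              UUID PRIMARY KEY DEFAULT gen_random_uuid(),
    organisation_id UUID NOT NULL REFERENCES organisations (id) ON DELETE CASCADE,
    period_type     TEXT NOT NULL
                    CHECK (period_type IN ('weekly', 'monthly', 'quarterly')),
    -- TIMESTAMPTZ everywhere so values are stored in UTC regardless of locale.
    period_start    TIMESTAMPTZ NOT NULL,
    period_end      TIMESTAMPTZ NOT NULL,
    session_count   INTEGER NOT NULL DEFAULT 0 CHECK (session_count >= 0),
    total_hours     NUMERIC(8, 2) NOT NULL DEFAULT 0 CHECK (total_hours >= 0),
    generated_at    TIMESTAMPTZ NOT NULL DEFAULT now(),
    -- Schema-less payload for future aggregation dimensions.
    payload         JSONB NOT NULL DEFAULT '{}'::jsonb,
    CHECK (period_end > period_start),
    -- One summary per organisation per period.
    UNIQUE (organisation_id, period_type, period_start, period_end)
);

-- Composite index for "summaries for org X within period Y" lookups.
CREATE INDEX idx_periodic_summaries_org_period
    ON periodic_summaries (organisation_id, period_start, period_end);

COMMIT;
```

The composite index leads with `organisation_id` so single-org range scans over period boundaries stay index-driven.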
Testing Requirements
Write SQL-level integration tests using pgTAP or Supabase's built-in test runner: (1) verify all columns exist with correct types, (2) verify CHECK constraints reject invalid period_type values and negative counts/hours, (3) verify UNIQUE constraint rejects duplicate (org, type, start, end) combinations, (4) verify FK cascade deletes orphaned summaries when an organisation is removed, (5) run EXPLAIN ANALYZE on the composite index to confirm index-only scan. All tests must pass in a clean `supabase db reset` environment.
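A few of the required checks could look like the following pgTAP sketch, assuming the pgTAP extension is installed and the table was created as specified above; the fixture UUID and test count are illustrative.

```sql
BEGIN;
SELECT plan(4);

-- (1) Columns exist with the correct types.
SELECT has_column('periodic_summaries', 'payload');
SELECT col_type_is('periodic_summaries', 'period_start', 'timestamp with time zone');

-- (2) CHECK constraint rejects invalid period_type values.
SELECT throws_ok(
    $$ INSERT INTO periodic_summaries (organisation_id, period_type, period_start, period_end)
       VALUES ('00000000-0000-0000-0000-000000000001', 'daily',
               now(), now() + interval '7 days') $$,
    '23514'  -- check_violation
);

-- (3) UNIQUE constraint covers (org, type, start, end).
SELECT col_is_unique('periodic_summaries',
    ARRAY['organisation_id', 'period_type', 'period_start', 'period_end']);

SELECT * FROM finish();
ROLLBACK;
```

Running inside BEGIN/ROLLBACK keeps the test idempotent, which matters when the suite runs repeatedly against a `supabase db reset` baseline.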
Supabase RLS policies for aggregation views are more complex than single-table policies. A misconfigured policy could silently allow a coordinator in one organisation to see data from another, causing a data breach and breaking trust with participating organisations.
Mitigation & Contingency
Mitigation: Write automated RLS integration tests that create two separate organisations with distinct data, then assert that queries authenticated as org-A users return only org-A rows. Run these tests in CI on every PR touching the database layer.
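One possible shape for the policy and its isolation test, assuming coordinator membership is resolved through a hypothetical `organisation_members` table keyed by `auth.uid()`; adapt the membership lookup to the actual model.

```sql
ALTER TABLE periodic_summaries ENABLE ROW LEVEL SECURITY;

-- Coordinators may read only their own organisation's summaries.
-- organisation_members is an assumed membership table, not confirmed by this ticket.
CREATE POLICY periodic_summaries_select_own_org
    ON periodic_summaries
    FOR SELECT
    USING (
        organisation_id IN (
            SELECT m.organisation_id
            FROM organisation_members m
            WHERE m.user_id = auth.uid()
        )
    );

-- Isolation test sketch: impersonate an org-A user and assert no org-B rows leak.
BEGIN;
SET LOCAL role = 'authenticated';
SET LOCAL request.jwt.claims = '{"sub": "org-a-user-uuid"}';  -- illustrative claim
SELECT count(*) = 0 AS leak_free
FROM periodic_summaries
WHERE organisation_id <> '00000000-0000-0000-0000-0000000000aa';  -- org-A id, illustrative
ROLLBACK;
```

The two-organisation CI test described above should run this impersonation for both orgs, in both directions.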
Contingency: If an RLS bypass is discovered post-deployment, immediately disable the periodic summaries feature flag, revoke affected sessions, audit access logs, notify affected organisations, and patch the policy before re-enabling.
Activity records may span multiple session types, proxy registrations, and coordinator bulk entries. Incorrect JOIN logic or missing filters in the aggregation query could double-count sessions or omit activity types, producing inaccurate summaries that erode user trust.
Mitigation & Contingency
Mitigation: Build a fixture dataset covering all activity registration paths (direct, proxy, bulk) and assert expected aggregated counts in integration tests before any UI consumes the repository.
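The fixture-and-assert idea might be sketched as below. All table and column names here (`activity_sessions`, `registration_path`, `duration_hours`) are hypothetical stand-ins for the real activity schema, which this ticket does not define.

```sql
-- Fixture: one session per registration path, all for the same organisation.
INSERT INTO activity_sessions (organisation_id, registration_path, duration_hours)
VALUES
    ('00000000-0000-0000-0000-0000000000aa', 'direct', 1.5),
    ('00000000-0000-0000-0000-0000000000aa', 'proxy',  2.0),
    ('00000000-0000-0000-0000-0000000000aa', 'bulk',   0.5);

-- The aggregation must count each session exactly once across all paths:
-- expected session_count = 3, total_hours = 4.0. A JOIN that fans out on
-- proxy registrations would inflate both figures.
SELECT count(*)            AS session_count,
       sum(duration_hours) AS total_hours
FROM activity_sessions
WHERE organisation_id = '00000000-0000-0000-0000-0000000000aa';
```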
Contingency: If inaccurate counts are reported post-launch, mark affected summaries as invalidated in the database and re-trigger generation once the query is corrected. Communicate transparently to affected users via an in-app banner.
The local cache must be invalidated when a new summary arrives via push notification. If the push token is stale or the FCM/APNs delivery is delayed, the device may show an outdated summary for an extended period, confusing users who see different numbers online versus offline.
Mitigation & Contingency
Mitigation: Implement a TTL on cached summaries (max 48 hours) so stale data is auto-cleared even without a push notification. Also trigger cache refresh on app foreground if the current period's summary is older than 24 hours.
Contingency: Provide a manual pull-to-refresh on the summary card that bypasses the cache and fetches directly from Supabase when a network connection is available.