Priority: critical | Complexity: medium | Category: database | Status: pending | Owner: database specialist | Execution Tier: Tier 0

Acceptance Criteria

Table scenario_notification_records exists with: id (uuid pk default gen_random_uuid()), user_id (uuid not null), scenario_type (text not null), sent_at (timestamptz not null default now()), delivery_status (text not null default 'pending'), payload_json (jsonb), retry_count (int not null default 0)
delivery_status has a CHECK constraint limiting values to ('pending', 'delivered', 'failed', 'retrying')
Table scenario_notification_cooldowns exists with: user_id (uuid not null), scenario_type (text not null), last_triggered_at (timestamptz not null), PRIMARY KEY (user_id, scenario_type)
RLS enabled on both tables
Policy on scenario_notification_records: users can SELECT their own rows (auth.uid() = user_id); no INSERT/UPDATE/DELETE from client — service role only
Policy on scenario_notification_cooldowns: users can SELECT their own row; service role handles all writes
Index on scenario_notification_records(user_id, scenario_type) for pending retry queries
No separate index needed on scenario_notification_cooldowns(user_id, scenario_type) — the composite PK already provides it; verify with EXPLAIN that the query planner uses the PK index
Migration script is idempotent and runs without error on a clean test project
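The acceptance criteria above can be sketched as an idempotent migration. This is a sketch, not the final script: column and table names come from the spec, the index name is illustrative, and gen_random_uuid() is assumed to be available (built-in since Postgres 13, which Supabase provides).

```sql
-- Sketch of the idempotent migration; IF NOT EXISTS guards make re-runs safe.
create table if not exists scenario_notification_records (
  id uuid primary key default gen_random_uuid(),
  user_id uuid not null,
  scenario_type text not null,
  sent_at timestamptz not null default now(),
  delivery_status text not null default 'pending'
    check (delivery_status in ('pending', 'delivered', 'failed', 'retrying')),
  payload_json jsonb,
  retry_count int not null default 0
);

create table if not exists scenario_notification_cooldowns (
  user_id uuid not null,
  scenario_type text not null,
  last_triggered_at timestamptz not null,
  primary key (user_id, scenario_type)
);

-- Composite index for the pending-retry queries (illustrative name).
create index if not exists idx_scenario_notification_records_user_scenario
  on scenario_notification_records (user_id, scenario_type);

alter table scenario_notification_records enable row level security;
alter table scenario_notification_cooldowns enable row level security;
```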

Technical Requirements

Frameworks
Supabase Postgres migrations
PL/pgSQL
APIs
Supabase CLI migrations
Supabase service role (writes from Edge Functions only)
Data Models
scenario_notification_records (id, user_id, scenario_type, sent_at, delivery_status, payload_json, retry_count)
scenario_notification_cooldowns (user_id, scenario_type, last_triggered_at)
Performance Requirements
Composite index on (user_id, scenario_type) on records table for retry queue queries
getPendingRetries() query must complete in under 100ms for up to 10,000 rows
cooldown lookup by (user_id, scenario_type) uses PK index — no sequential scan
Security Requirements
Client users must NOT be able to INSERT or UPDATE notification records — only service role (scheduler Edge Function) may write
Client users may SELECT their own records for display purposes only
payload_json must not contain sensitive PII — store only scenario metadata
RLS must prevent cross-user access on both tables
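The security requirements above translate into SELECT-only policies for clients. A sketch, with illustrative policy names: in Supabase the service role bypasses RLS entirely, so omitting INSERT/UPDATE/DELETE policies is what blocks all client writes. Note that CREATE POLICY has no IF NOT EXISTS form, so an idempotent migration drops first.

```sql
-- Illustrative policy names; service-role requests bypass RLS, so no
-- explicit write policy is needed for the scheduler Edge Function.
drop policy if exists "select_own_notification_records" on scenario_notification_records;
create policy "select_own_notification_records"
  on scenario_notification_records
  for select using (auth.uid() = user_id);

drop policy if exists "select_own_cooldown" on scenario_notification_cooldowns;
create policy "select_own_cooldown"
  on scenario_notification_cooldowns
  for select using (auth.uid() = user_id);
```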

Execution Context

Execution Tier
Tier 0 (440 tasks)

Implementation Notes

The scenario_notification_cooldowns table uses a composite PK (user_id, scenario_type) instead of a surrogate id. This enforces the one-cooldown-per-user-per-scenario constraint at the database level and simplifies upsert logic (ON CONFLICT (user_id, scenario_type) DO UPDATE SET last_triggered_at = EXCLUDED.last_triggered_at). For the records table, avoid a foreign key to auth.users on user_id: the scheduler runs as service role and auth.users lives in a separate schema, so use a soft reference and enforce user existence in the application. The payload_json column should store only notification metadata (title, body template variables, deep link), never personal health or contact data, in line with the project's GDPR privacy requirements for Norwegian users.
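The upsert described above might look like the following sketch, run by the scheduler as service role; $1 and $2 are bind parameters for the user and scenario being throttled.

```sql
-- One-cooldown-per-user-per-scenario upsert; the composite PK backs
-- the ON CONFLICT target.
insert into scenario_notification_cooldowns (user_id, scenario_type, last_triggered_at)
values ($1, $2, now())
on conflict (user_id, scenario_type)
do update set last_triggered_at = excluded.last_triggered_at;
```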

Testing Requirements

Verification checklist for PR:
(1) Peer mentor user can SELECT their own rows from scenario_notification_records.
(2) Peer mentor user cannot INSERT into scenario_notification_records; expect an RLS violation error.
(3) Peer mentor user cannot SELECT another user's records.
(4) Service role can INSERT, UPDATE, and SELECT on both tables without restriction.
(5) INSERT with delivery_status = 'unknown' fails with a CHECK constraint violation.
(6) Cooldown table PK constraint prevents duplicate (user_id, scenario_type) rows: a second INSERT with the same PK must fail or use ON CONFLICT DO UPDATE.
(7) EXPLAIN ANALYZE on the pending retries query confirms index usage.
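The index-usage check might be run as below. The exact predicate of getPendingRetries() is not specified in this ticket, so the WHERE clause (including the 'daily_checkin' scenario value and the status filter) is an assumption for illustration.

```sql
-- Assumed shape of the pending-retries query; adjust the predicate to
-- match the real getPendingRetries() implementation.
explain analyze
select id, scenario_type, retry_count
from scenario_notification_records
where user_id = '00000000-0000-0000-0000-000000000000'
  and scenario_type = 'daily_checkin'
  and delivery_status in ('failed', 'retrying');
-- Look for an Index Scan or Bitmap Index Scan on the
-- (user_id, scenario_type) index rather than a Seq Scan.
```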

Component
Scenario Notification Repository (data, medium)

Epic Risks (3)

Risk 1: dependency (high impact, medium probability)

FCM service account key and APNs certificate configuration may be missing or misconfigured in the Supabase Edge Function secrets store, blocking end-to-end push delivery testing until resolved by the infrastructure owner.

Mitigation & Contingency

Mitigation: Raise a credentials-setup task in the project board at epic start; document the exact secret names required in scenario-evaluation-config so the infrastructure owner can provision them independently of development work.

Contingency: Implement a mock push-notification-dispatcher stub that records payloads to the database for local testing, allowing the rest of the feature to proceed while credentials are obtained.

Risk 2: security (high impact, low probability)

Incorrect RLS policies on the scenario_notifications or notification_preferences tables could allow one user to read or modify another user's notification records, constituting a data privacy breach.

Mitigation & Contingency

Mitigation: Write dedicated RLS policy tests using Supabase's built-in test framework before any application code touches the tables; require a peer security review of all policy definitions before merging.

Contingency: If a policy gap is discovered post-merge, immediately disable the affected table's read policy, notify the security lead, and deploy a hotfix with corrected policies before re-enabling access.

Risk 3: dependency (medium impact, medium probability)

Norwegian Bokmål ARB localisation strings for all scenario message templates may not be available at implementation time, causing content-builder tests to fail and delaying integration.

Mitigation & Contingency

Mitigation: Define all required ARB message keys in a tracked document shared with the content owner at epic kickoff; use English placeholder strings that follow the final format so template injection logic can be tested independently.

Contingency: Ship with English-only strings in the first release and gate Norwegian strings behind a feature flag that is enabled once translations are reviewed and approved.