Priority: critical | Complexity: medium | Domain: backend | Status: pending | Assignee: backend specialist | Tier: 5

Acceptance Criteria

In-app notification records are inserted into the notifications table for both coordinator and mentor when coordinator resolution succeeds.
When coordinator resolution returns not_found, only the mentor in-app record is inserted, and no null-reference errors occur.
Each inserted record contains: recipient_user_id, notification_type = 'pause_status_change', entity_type = 'assignment', entity_id = mentor assignment UUID, transition_direction ('paused'|'resumed'), is_read = false, organization_id, created_at (server-generated timestamp).
In-app dispatch and FCM dispatch are initiated concurrently using Promise.allSettled — neither channel blocks the other.
In-app insert failure is caught, logged with error details, and does not cause the orchestrator to return HTTP 500 (it is a non-blocking failure).
Inserted records are immediately visible via Supabase Realtime to subscribed Flutter clients without additional polling.
Bulk insert (single INSERT with multiple rows) is used when both coordinator and mentor records exist to minimize database round-trips.
RLS on the notifications table is respected: the service role insert bypasses RLS but records are readable by recipient users only via their JWT-scoped RLS policy.
In-app dispatch completes within 300ms (single bulk INSERT).
Dispatch result (rows inserted, any errors) is included in the orchestrator's structured log output.
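The record shape listed in the criteria can be pinned down as a type plus a builder. This is an illustrative sketch: the type and function names are not from the source, only the field names and literal values are.

```typescript
// Record shape implied by the acceptance criteria. created_at is omitted
// because it is server-generated at insert time.
type TransitionDirection = 'paused' | 'resumed';

interface InAppNotificationRecord {
  recipient_user_id: string;
  notification_type: 'pause_status_change';
  entity_type: 'assignment';
  entity_id: string; // mentor assignment UUID
  transition_direction: TransitionDirection;
  is_read: false;
  organization_id: string;
}

// Hypothetical helper name; shown only to make the field mapping explicit.
function buildRecord(
  recipientUserId: string,
  assignmentId: string,
  direction: TransitionDirection,
  organizationId: string,
): InAppNotificationRecord {
  return {
    recipient_user_id: recipientUserId,
    notification_type: 'pause_status_change',
    entity_type: 'assignment',
    entity_id: assignmentId,
    transition_direction: direction,
    is_read: false,
    organization_id: organizationId,
  };
}
```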

Technical Requirements

frameworks
Supabase Edge Functions (Deno)
Supabase PostgreSQL 15
Supabase Realtime
apis
Supabase PostgREST REST API
data models
assignment
performance requirements
Single bulk INSERT for both notification records — not two sequential inserts.
In-app dispatch budget: 300ms maximum.
Realtime delivery to subscribed clients expected within 500ms of insert commit.
security requirements
Service role client used for insert to bypass RLS; explicit organization_id and recipient_user_id must be set on every row.
RLS SELECT policy on notifications table must restrict reads to recipient_user_id = auth.uid() — verify policy exists before deployment.
No PII in notification body/title fields stored in database — only display-safe strings and UUIDs.
Realtime channel subscriptions validated by RLS — users only receive events for their own notification rows.
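Because the service-role insert bypasses RLS, nothing at the database layer catches a row missing its tenant scoping. A small guard run before the insert can enforce the "explicit organization_id and recipient_user_id on every row" requirement; this is a sketch, and the function name is illustrative.

```typescript
// Guard for service-role inserts: RLS is bypassed, so scoping fields must be
// validated in application code before the bulk INSERT.
interface NotificationRow {
  recipient_user_id?: string;
  organization_id?: string;
  [key: string]: unknown;
}

function assertRowsScoped(rows: NotificationRow[]): void {
  for (const row of rows) {
    if (!row.recipient_user_id || !row.organization_id) {
      throw new Error(
        'service-role insert requires explicit recipient_user_id and organization_id on every row',
      );
    }
  }
}
```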

Execution Context

Execution Tier
Tier 5

Tier 5 - 253 tasks

Can start after Tier 4 completes

Implementation Notes

Implement as `dispatchInAppNotifications(supabase, records: InAppNotificationRecord[]): Promise`. Use Supabase client `.from('notifications').insert(records)` with an array for bulk insert. Wrap in try/catch and return a typed result. For parallel execution with FCM, use `Promise.allSettled([dispatchFcm(...), dispatchInApp(...)])` in the orchestrator's main flow — both channels start simultaneously.
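A minimal sketch of the dispatcher described above. The `SupabaseLike` interface stands in for the real client so the sketch stays self-contained; the actual supabase-js call has the same shape (`.from('notifications').insert(records)` resolving to an object with an `error` field). The `DispatchResult` type is an assumed name for the "typed result" the note asks for.

```typescript
interface SupabaseLike {
  from(table: string): {
    insert(rows: unknown[]): Promise<{ error: { message: string } | null }>;
  };
}

interface DispatchResult {
  ok: boolean;
  rowsAttempted: number;
  error?: string;
}

async function dispatchInAppNotifications(
  supabase: SupabaseLike,
  records: unknown[],
): Promise<DispatchResult> {
  try {
    // Single bulk INSERT: one round-trip whether one or two records exist.
    const { error } = await supabase.from('notifications').insert(records);
    if (error) {
      // Non-blocking failure: log and report in the result, never throw
      // back to the orchestrator (no HTTP 500 from this channel).
      console.error('in-app dispatch failed:', error.message);
      return { ok: false, rowsAttempted: records.length, error: error.message };
    }
    return { ok: true, rowsAttempted: records.length };
  } catch (e) {
    console.error('in-app dispatch threw:', e);
    return { ok: false, rowsAttempted: records.length, error: String(e) };
  }
}
```

In the orchestrator's main flow, both channels would start together, e.g. `await Promise.allSettled([dispatchFcm(payload), dispatchInAppNotifications(supabase, records)])`, so neither blocks the other and the settled results feed the structured log output.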

The Supabase Realtime integration is automatic once the row is inserted (assuming the Flutter client has a live `notifications` channel subscription filtered by `recipient_user_id`). Confirm with the frontend team that the Flutter BLoC listening to the notifications Realtime channel handles the `pause_status_change` notification_type and routes to the correct UI state.

Testing Requirements

Unit tests: mock the Supabase client and verify (a) a bulk INSERT is called with both records when the coordinator is found, (b) a single INSERT is called with the mentor-only record when the coordinator is not_found, (c) an insert failure is caught and logged without throwing, and (d) field values are correct for PAUSE vs RESUME transitions.

Integration tests: insert records into a test Supabase instance and verify via SELECT that the records exist with correct fields. Verify a Realtime event fires on a subscribed test client within 1 second of insert. Assert no RLS violation when reading back with the recipient user's JWT.
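The branch that unit tests (a) and (b) exercise is the record-assembly step: one row when the coordinator is unresolved, two when resolution succeeds. A sketch of that branch, with `assembleRecords` and the resolution type as illustrative names:

```typescript
type CoordinatorResolution =
  | { status: 'found'; userId: string }
  | { status: 'not_found' };

interface Row {
  recipient_user_id: string;
  notification_type: string;
}

function assembleRecords(
  coordinator: CoordinatorResolution,
  mentorUserId: string,
): Row[] {
  // The mentor row is always present.
  const rows: Row[] = [
    { recipient_user_id: mentorUserId, notification_type: 'pause_status_change' },
  ];
  // Coordinator row only when resolution succeeded, so the not_found path
  // never dereferences a missing coordinator.
  if (coordinator.status === 'found') {
    rows.push({
      recipient_user_id: coordinator.userId,
      notification_type: 'pause_status_change',
    });
  }
  return rows;
}
```

Both branches then feed the same single bulk INSERT, which is what the mocked-client assertions check.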

Component
Pause Notification Orchestrator
Type: service | Complexity: medium
Epic Risks (3)
Category: technical | Impact: medium | Probability: medium

Supabase Edge Functions have cold start latency that may push coordinator notification delivery beyond the 5-second SLA, particularly during low-traffic periods when the function is not warm.

Mitigation & Contingency

Mitigation: Keep the Edge Function lightweight — delegate all heavy logic to the orchestrator layer and avoid large dependency bundles. Measure p95 end-to-end latency in staging and document actual SLA achievable.

Contingency: If cold start latency consistently breaches 5 seconds, introduce a keep-warm ping from the nightly-scheduler or document the actual p95 latency in the feature spec and adjust the acceptance criterion to reflect the realistic bound.

Category: technical | Impact: medium | Probability: medium

Supabase database webhooks may fire duplicate events for a single status change under retry conditions, causing coordinators to receive multiple identical notifications for one pause event.

Mitigation & Contingency

Mitigation: Add idempotency checking in the webhook handler using the event timestamp and peer mentor ID. Store a notification dispatch record in the pause-status-record-repository and skip dispatch if a record for the same event already exists.

Contingency: If duplicates slip through in production, add a de-duplication filter in the notification centre UI layer so the coordinator sees at most one card per event, and implement a cleanup job for the notifications table.
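The idempotency check in the mitigation above could be sketched as follows. The key derivation (event timestamp plus peer mentor ID) comes from the mitigation text; the in-memory `Set` stands in for the pause-status-record-repository lookup, and the function names are illustrative.

```typescript
// Derive a stable key for one status-change event.
function idempotencyKey(mentorId: string, eventTimestamp: string): string {
  return `${mentorId}:${eventTimestamp}`;
}

// Returns true the first time an event is seen, false on duplicate webhook
// deliveries. In production the `seen` set would be backed by the
// pause-status-record-repository rather than process memory.
function shouldDispatch(
  seen: Set<string>,
  mentorId: string,
  eventTimestamp: string,
): boolean {
  const key = idempotencyKey(mentorId, eventTimestamp);
  if (seen.has(key)) return false; // duplicate event: skip dispatch
  seen.add(key);
  return true;
}
```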

Category: scope | Impact: medium | Probability: low

A peer mentor with multi-chapter membership may have more than one responsible coordinator. The orchestrator design currently targets a single coordinator, and resolving multiple recipients may require schema changes to the org membership query.

Mitigation & Contingency

Mitigation: Review the multi-chapter-membership-service patterns before implementing the orchestrator's coordinator resolution. Design the dispatcher call to accept an array of coordinator IDs from the outset so adding multiple recipients is non-breaking.

Contingency: If multi-coordinator dispatch is out of scope for this epic, document the limitation and create a follow-up task. Default to the primary coordinator (lowest chapter hierarchy level) as the single recipient in the interim.