Priority: critical | Complexity: medium | Category: integration | Status: pending | Assignee: integration specialist | Tier 4

Acceptance Criteria

FCM tokens are retrieved from the device_token table for both coordinator and mentor user_ids before dispatch.
If a user has multiple device tokens (multi-device), a dispatch attempt is made for each token.
If a user has no registered FCM token, the FCM dispatch is skipped for that recipient with a WARN log — pipeline does not fail.
FCM dispatch for coordinator and mentor is initiated in parallel (Promise.allSettled) to minimize latency contribution.
FCM delivery failure (HTTP 4xx/5xx from FCM API) is caught, logged with response code and recipient role, and does NOT block in-app notification dispatch.
Stale FCM tokens (FCM returns UNREGISTERED error) are deleted from the device_token table as part of the cleanup step.
Total FCM dispatch phase (both recipients) completes within 2 seconds to remain within the 5-second SLA.
All FCM API calls are made server-side from within the Edge Function — never from the mobile client.
FCM server key / service account credentials are read from Deno environment variables — not hardcoded.
Dispatch results (per recipient, per token) are returned to orchestrator as a structured summary for logging.
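The parallel-dispatch, failure-isolation, and structured-summary criteria above can be sketched as follows. This is a minimal illustration, not the orchestrator's actual API: the result shape and the injected `send` callback are assumptions made so the FCM HTTP call can be mocked in tests.

```typescript
// Hypothetical per-token result shape; field names are illustrative.
export interface FcmTokenResult {
  token: string;          // mask before logging (last 8 chars only)
  ok: boolean;
  status?: number;        // HTTP status from FCM, if a response was received
  unregistered?: boolean; // true when FCM reports the token is stale
}

// Dispatch one FCM message per token in parallel. `send` is injected so the
// HTTP call can be mocked; Promise.allSettled ensures one token's failure
// never aborts the others, matching the acceptance criteria.
export async function dispatchFcmNotifications(
  tokens: string[],
  send: (token: string) => Promise<{ status: number; unregistered: boolean }>,
): Promise<FcmTokenResult[]> {
  if (tokens.length === 0) return []; // no registered tokens: caller logs a WARN and skips
  const settled = await Promise.allSettled(tokens.map((t) => send(t)));
  return settled.map((r, i) =>
    r.status === "fulfilled"
      ? {
          token: tokens[i],
          ok: r.value.status < 400,
          status: r.value.status,
          unregistered: r.value.unregistered,
        }
      : { token: tokens[i], ok: false }, // rejection (e.g. network error) is recorded, not thrown
  );
}
```

The returned array is the structured summary handed back to the orchestrator; tokens flagged `unregistered` feed the stale-token cleanup step.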

Technical Requirements

Frameworks
Supabase Edge Functions (Deno)
Firebase Cloud Messaging (FCM) API v1
APIs
Firebase Cloud Messaging (FCM) API v1
Supabase PostgREST REST API
Data Models
device_token
Performance Requirements
Parallel dispatch for coordinator and mentor using Promise.allSettled — sequential dispatch is not acceptable.
Token retrieval query must use index on device_token(user_id, platform) for sub-100ms lookups.
Total FCM phase budget: 2000ms maximum.
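For the index-backed lookup, the PostgREST request only needs to filter on `user_id`, the leading column of the required `device_token(user_id, platform)` index. A sketch of the URL construction, assuming the table exposes `user_id`, `token`, and `platform` columns (the column names are assumptions):

```typescript
// Build a PostgREST query URL that fetches all device tokens for a set of
// users in one round trip. Filtering on user_id keeps the query on the
// leading column of the assumed (user_id, platform) composite index.
export function tokenLookupUrl(baseUrl: string, userIds: string[]): string {
  // user_ids are assumed to be UUIDs, so no extra quoting is needed in the
  // PostgREST `in` filter.
  return `${baseUrl}/rest/v1/device_token?select=user_id,token,platform&user_id=in.(${userIds.join(",")})`;
}
```

One request for both coordinator and mentor avoids a second sub-100ms lookup inside the 2000ms budget.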
Security Requirements
FCM service account JSON key stored in Deno environment variable (FIREBASE_SERVICE_ACCOUNT_JSON) — never committed to source.
FCM server key never distributed to mobile app binary — all dispatch via Edge Function.
FCM tokens stored in device_token table are treated as sensitive identifiers — never logged in full, only last 8 chars for debug.
Stale token cleanup prevents accumulation of orphaned device tokens tied to former users.
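The last-8-characters logging rule can be captured in one small helper so no call site formats tokens by hand; the name `maskToken` is illustrative:

```typescript
// Mask an FCM token for logs: only the last 8 characters are ever emitted,
// per the requirement that tokens are never logged in full.
export function maskToken(token: string): string {
  return token.length > 8 ? `…${token.slice(-8)}` : "…";
}
```

Tokens of 8 characters or fewer are fully redacted, since showing "all but the last 8" would reveal nothing anyway while showing the suffix would reveal everything.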

Execution Context

Execution Tier
Tier 4

Tier 4 - 323 tasks

Can start after Tier 3 completes

Implementation Notes

Use the FCM HTTP v1 API (not the legacy FCM API, which is deprecated). Authentication requires a short-lived OAuth2 access token derived from the service account JSON — implement `getAccessToken(serviceAccountJson)` using the google-auth-library pattern adapted for Deno fetch. Cache the access token for the duration of the function invocation (it is valid for one hour). Structure the dispatcher as `dispatchFcmNotifications(tokens: string[], payload: FcmPayload): Promise` resolving to per-token results.
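A sketch of `getAccessToken` for Deno, assuming the service account JSON carries the standard `client_email` and `private_key` (PKCS#8 PEM) fields. The OAuth token endpoint, the JWT-bearer grant type, and the `firebase.messaging` scope are Google's documented values; the helper names and the caching shape are illustrative:

```typescript
const TOKEN_URL = "https://oauth2.googleapis.com/token";
const FCM_SCOPE = "https://www.googleapis.com/auth/firebase.messaging";

// Base64url-encode raw bytes (JWT segments must be unpadded base64url).
export function b64url(bytes: Uint8Array): string {
  let s = "";
  for (const b of bytes) s += String.fromCharCode(b);
  return btoa(s).replace(/\+/g, "-").replace(/\//g, "_").replace(/=+$/, "");
}

const enc = (obj: unknown) => b64url(new TextEncoder().encode(JSON.stringify(obj)));

// Import the service account's PEM private key for RS256 signing via WebCrypto.
async function importPrivateKey(pem: string): Promise<CryptoKey> {
  const der = atob(pem.replace(/-----[^-]+-----|\s/g, ""));
  const raw = Uint8Array.from(der, (c) => c.charCodeAt(0));
  return crypto.subtle.importKey(
    "pkcs8", raw,
    { name: "RSASSA-PKCS1-v1_5", hash: "SHA-256" },
    false, ["sign"],
  );
}

let cached: { token: string; expires: number } | null = null; // per-invocation cache

export async function getAccessToken(serviceAccountJson: string): Promise<string> {
  const now = Math.floor(Date.now() / 1000);
  if (cached && cached.expires > now + 60) return cached.token; // reuse while valid
  const sa = JSON.parse(serviceAccountJson);
  // Self-signed JWT: header.claims, then an RS256 signature over both.
  const unsigned = `${enc({ alg: "RS256", typ: "JWT" })}.${enc({
    iss: sa.client_email, scope: FCM_SCOPE, aud: TOKEN_URL, iat: now, exp: now + 3600,
  })}`;
  const key = await importPrivateKey(sa.private_key);
  const sig = await crypto.subtle.sign(
    "RSASSA-PKCS1-v1_5", key, new TextEncoder().encode(unsigned),
  );
  // Exchange the signed assertion for a ~1 hour access token.
  const res = await fetch(TOKEN_URL, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "urn:ietf:params:oauth:grant-type:jwt-bearer",
      assertion: `${unsigned}.${b64url(new Uint8Array(sig))}`,
    }),
  });
  if (!res.ok) throw new Error(`token exchange failed: ${res.status}`);
  const { access_token } = await res.json();
  cached = { token: access_token, expires: now + 3600 };
  return access_token;
}
```

The resulting token goes in the `Authorization: Bearer …` header of each FCM v1 `messages:send` request.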

Use `Promise.allSettled` not `Promise.all` so one token failure doesn't abort others. For stale token cleanup, batch the DELETE query rather than one per token. Coordinate with mobile team to confirm the FCM data payload keys match the Flutter notification router's expected schema.
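The batched stale-token DELETE can be expressed as a single PostgREST `in` filter rather than N requests. A sketch of the URL construction, assuming the FCM token lives in a `token` column of `device_token` (the column name and the quoting/escaping details are assumptions):

```typescript
// Build one DELETE URL covering every stale token, so cleanup is a single
// round trip regardless of how many UNREGISTERED tokens FCM reported.
export function staleTokenDeleteUrl(baseUrl: string, staleTokens: string[]): string {
  // Double-quote each value so characters like commas inside a token do not
  // break the PostgREST `in` filter; escaping here is illustrative.
  const list = staleTokens.map((t) => `"${t.replace(/"/g, '\\"')}"`).join(",");
  return `${baseUrl}/rest/v1/device_token?token=in.(${encodeURIComponent(list)})`;
}

// Usage (sketch): fetch(staleTokenDeleteUrl(SUPABASE_URL, stale), {
//   method: "DELETE",
//   headers: { apikey: SERVICE_ROLE_KEY, Authorization: `Bearer ${SERVICE_ROLE_KEY}` },
// });
```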

Testing Requirements

Unit tests: mock the FCM HTTP client and verify (a) parallel dispatch is invoked for both recipients, (b) a missing token skips dispatch without error, (c) an FCM UNREGISTERED response triggers token deletion, (d) an FCM HTTP 500 is caught and logged without throwing. Integration tests: use an FCM test project or mock server to verify correct HTTP request shape (Authorization header, payload structure). Verify Promise.allSettled semantics — one rejection must not prevent the other from completing. Performance test: assert both dispatches complete within 2000ms under mocked latency of 800ms per call.
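The performance assertion reduces to a small timing harness: two mocked calls of equal latency run through `Promise.allSettled` should finish in roughly one call's latency, not their sum (which is what a sequential loop would cost). Names here are illustrative:

```typescript
// Resolve after the given number of milliseconds (stands in for a mocked FCM call).
const delay = (ms: number) => new Promise<void>((r) => setTimeout(r, ms));

// Run two same-latency "dispatches" in parallel and report wall-clock time.
// With 800 ms mocked latency, this should come in near 800 ms — well inside
// the 2000 ms phase budget — whereas sequential dispatch would take ~1600 ms.
export async function timedParallelDispatch(latencyMs: number): Promise<number> {
  const start = Date.now();
  await Promise.allSettled([delay(latencyMs), delay(latencyMs)]);
  return Date.now() - start;
}
```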

Component
Pause Notification Orchestrator
Type: service | Complexity: medium
Epic Risks (3)
Impact: medium | Probability: medium | Category: technical

Supabase Edge Functions have cold start latency that may push coordinator notification delivery beyond the 5-second SLA, particularly during low-traffic periods when the function is not warm.

Mitigation & Contingency

Mitigation: Keep the Edge Function lightweight — delegate all heavy logic to the orchestrator layer and avoid large dependency bundles. Measure p95 end-to-end latency in staging and document actual SLA achievable.

Contingency: If cold start latency consistently breaches 5 seconds, introduce a keep-warm ping from the nightly-scheduler or document the actual p95 latency in the feature spec and adjust the acceptance criterion to reflect the realistic bound.

Impact: medium | Probability: medium | Category: technical

Supabase database webhooks may fire duplicate events for a single status change under retry conditions, causing coordinators to receive multiple identical notifications for one pause event.

Mitigation & Contingency

Mitigation: Add idempotency checking in the webhook handler using the event timestamp and peer mentor ID. Store a notification dispatch record in the pause-status-record-repository and skip dispatch if a record for the same event already exists.

Contingency: If duplicates slip through in production, add a de-duplication filter in the notification centre UI layer so the coordinator sees at most one card per event, and implement a cleanup job for the notifications table.
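The idempotency mitigation can be sketched as a single guard. The key format and the `exists`/`record` callbacks (standing in for the pause-status-record-repository) are assumptions:

```typescript
// Decide whether a webhook delivery should trigger dispatch. Duplicate
// deliveries of the same status change produce the same key, so only the
// first one passes; later retries are skipped.
export async function shouldDispatch(
  mentorId: string,
  eventTimestamp: string,
  exists: (key: string) => Promise<boolean>,
  record: (key: string) => Promise<void>,
): Promise<boolean> {
  const key = `pause:${mentorId}:${eventTimestamp}`; // one key per status-change event
  if (await exists(key)) return false; // duplicate webhook delivery: skip dispatch
  await record(key); // persist before dispatching so a concurrent retry sees it
  return true;
}
```

Note the check-then-record sequence is not atomic; in production a unique constraint on the dispatch-record table would close that race.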

Impact: medium | Probability: low | Category: scope

A peer mentor with multi-chapter membership may have more than one responsible coordinator. The orchestrator design currently targets a single coordinator, and resolving multiple recipients may require schema changes to the org membership query.

Mitigation & Contingency

Mitigation: Review the multi-chapter-membership-service patterns before implementing the orchestrator's coordinator resolution. Design the dispatcher call to accept an array of coordinator IDs from the outset so adding multiple recipients is non-breaking.

Contingency: If multi-coordinator dispatch is out of scope for this epic, document the limitation and create a follow-up task. Default to the primary coordinator (lowest chapter hierarchy level) as the single recipient in the interim.