Priority: high | Complexity: medium | Area: backend | Status: pending | Owner: backend specialist | Execution tier: Tier 4

Acceptance Criteria

dispatchPushNotification(mentorId, event, payload) resolves all active coordinator recipients before dispatching
Each coordinator's FCM token is fetched from the notification repository — stale or missing tokens are skipped with a warning log, not an exception
Push notifications are dispatched to all resolved coordinator recipients — partial failure for one coordinator does not prevent dispatch to others
Each failed dispatch is retried up to 3 times with exponential backoff (1s, 2s, 4s) before being marked as failed
Per-coordinator dispatch result (success / failed_after_retries / no_token) is logged with coordinator_id and event type — no PII in logs
The method returns a DispatchResult object summarising total_recipients, successful_dispatches, and failed_dispatches
Dispatching to a mentor with no coordinator recipients completes without error and returns DispatchResult with zero recipients
The FCM payload includes the event type, mentor_id, and the deep-link path for the pause status view
The method completes within 10 seconds for up to 20 coordinator recipients
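The acceptance criteria above can be sketched end-to-end. This is an illustrative TypeScript outline (the production code is Dart/Flutter); resolveCoordinators, fetchToken, and sendPush are hypothetical injected dependencies standing in for the task-006 resolver, the token repository, and the Edge Function call, and retry handling is elided here:

```typescript
// Hypothetical sketch of the dispatch flow. All dependency names are
// illustrative, not the actual service API.
interface DispatchResult {
  totalRecipients: number;
  successfulDispatches: number;
  failedDispatches: number;
}

type PerRecipientStatus = "success" | "failed_after_retries" | "no_token";

async function dispatchPushNotification(
  mentorId: string,
  event: string,
  payload: Record<string, string>,
  deps: {
    resolveCoordinators: (mentorId: string) => Promise<string[]>;
    fetchToken: (coordinatorId: string) => Promise<string | null>;
    sendPush: (token: string, payload: Record<string, string>) => Promise<void>;
  },
): Promise<DispatchResult> {
  const coordinators = await deps.resolveCoordinators(mentorId);
  // Parallel dispatch: one coordinator's failure never blocks the others.
  const statuses = await Promise.all(
    coordinators.map(async (id): Promise<PerRecipientStatus> => {
      const token = await deps.fetchToken(id);
      if (!token) return "no_token"; // skipped with a warning log, not an exception
      try {
        await deps.sendPush(token, { ...payload, event, mentor_id: mentorId });
        return "success";
      } catch {
        return "failed_after_retries"; // retry-with-backoff elided in this sketch
      }
    }),
  );
  return {
    totalRecipients: coordinators.length,
    successfulDispatches: statuses.filter((s) => s === "success").length,
    failedDispatches: statuses.filter((s) => s === "failed_after_retries").length,
  };
}
```

A mentor with no coordinators yields an empty coordinator list and therefore an all-zero DispatchResult, satisfying the no-recipients criterion without special-casing.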

Technical Requirements

Frameworks
Flutter
Supabase Dart client (supabase_flutter)
firebase_messaging (FCM)
APIs
FCM HTTP v1 API via Supabase Edge Function or direct HTTP call
Supabase REST — fcm_tokens / notification_tokens table
CoordinatorNotificationService.resolveCoordinatorsForMentor (task-006)
Data Models
FCMToken
NotificationPayload
DispatchResult
CoordinatorRecipient
Performance Requirements
Use Future.wait for parallel FCM dispatch across all recipients — do not dispatch sequentially
Total method duration < 10 seconds for 20 recipients (network included)
Retry backoff must be implemented with non-blocking Future.delayed — no thread sleeps
Security Requirements
FCM tokens must never be logged, even in verbose/debug mode — log only token_hash or coordinator_id
The dispatch call must authenticate via Supabase service-role key on the Edge Function side — the mobile client must not hold FCM server credentials
Notification payload must not contain sensitive personal data (name, diagnosis, address) — include only entity IDs and deep-link paths
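The payload rule above can be made concrete. A minimal sketch, assuming a hypothetical buildPausePayload helper; the deep-link path shown is illustrative, since the real path comes from the app's routing:

```typescript
// Hypothetical payload builder honouring the security requirement:
// only entity IDs and deep-link paths, never names, diagnoses, or addresses.
function buildPausePayload(event: string, mentorId: string): Record<string, string> {
  return {
    event,                                          // e.g. "mentor_paused"
    mentor_id: mentorId,                            // entity ID only, no PII
    deep_link: `/mentors/${mentorId}/pause-status`, // illustrative route
  };
}
```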

Execution Context

Execution Tier

Tier 4 (323 tasks). Can start after Tier 3 completes.

Implementation Notes

FCM dispatch should be implemented via a Supabase Edge Function (Deno/TypeScript) that accepts a list of FCM tokens and a payload, then calls the FCM HTTP v1 API using the Firebase Admin SDK. The Flutter client calls this via supabase.functions.invoke('send-push-notifications', body: {...}), which keeps FCM server credentials server-side.

Retry logic lives in the Flutter service: implement a _dispatchWithRetry(token, payload, maxRetries: 3) helper that catches FCM errors and waits with Future.delayed(Duration(seconds: pow(2, attempt).toInt())) before each retry, giving the 1 s / 2 s / 4 s backoff required by the acceptance criteria.
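The retry helper described above could look like the following TypeScript transliteration (the Flutter version would use Future.delayed). The sender and delay functions are injected so tests can substitute a fake clock; all names are illustrative:

```typescript
// Sketch of a retry-with-exponential-backoff helper. maxRetries = 3 means
// one initial attempt plus up to three retries, with 1 s / 2 s / 4 s waits.
async function dispatchWithRetry(
  send: () => Promise<void>,
  maxRetries = 3,
  delay: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<boolean> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      await send();
      return true;
    } catch {
      if (attempt === maxRetries) return false; // caller logs failed_after_retries
      await delay(2 ** attempt * 1000);         // non-blocking 1s, 2s, 4s backoff
    }
  }
  return false;
}
```

Injecting delay keeps the backoff non-blocking in production while letting unit tests record the requested intervals instead of actually waiting.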

Collect per-coordinator results in a List and aggregate them into a DispatchResult. For the event enum, define MentorPauseEvent { mentorPaused, mentorReactivated } and map to FCM notification titles/bodies in a separate NotificationTemplateProvider.
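A sketch of that enum and template mapping follows; the titles and bodies here are placeholder copy, not final wording, and the real mapping would live in NotificationTemplateProvider:

```typescript
// Illustrative event enum; string values match the wire-format event names.
enum MentorPauseEvent {
  mentorPaused = "mentor_paused",
  mentorReactivated = "mentor_reactivated",
}

// Placeholder titles/bodies, one template per event type.
const notificationTemplates: Record<MentorPauseEvent, { title: string; body: string }> = {
  [MentorPauseEvent.mentorPaused]: {
    title: "Mentor paused",
    body: "A mentor in your chapter has paused their availability.",
  },
  [MentorPauseEvent.mentorReactivated]: {
    title: "Mentor reactivated",
    body: "A mentor in your chapter is available again.",
  },
};
```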

Testing Requirements

Unit tests (flutter_test + mocktail): mock the coordinator resolution service, FCM token repository, and FCM sender. Verify correct FCM payload structure for each event type (mentor_paused, mentor_reactivated). Test that a missing FCM token for one coordinator does not abort dispatch to others. Test retry logic: mock FCM sender to fail twice then succeed — verify exactly 3 calls and ultimate success.

Test exponential backoff timing with a fake clock. Test DispatchResult values for all-success, partial-failure, and no-recipients scenarios. Do not make real FCM calls in tests.
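The three DispatchResult scenarios can be exercised against a pure aggregation helper. This is an illustrative TypeScript stand-in for the Dart service's result collection, not its actual API:

```typescript
// Hypothetical aggregation over per-coordinator dispatch statuses.
type Status = "success" | "failed_after_retries" | "no_token";

function aggregate(statuses: Status[]) {
  return {
    totalRecipients: statuses.length,
    successfulDispatches: statuses.filter((s) => s === "success").length,
    failedDispatches: statuses.filter((s) => s === "failed_after_retries").length,
  };
}
```

A pure function like this makes the all-success, partial-failure, and no-recipients cases trivial table-driven tests with no FCM involvement.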

Component
Coordinator Notification Service
Type: service | Complexity: medium
Epic Risks (3)
Risk 1 (security): high impact, medium probability

Supabase RLS policies for status reads and writes must correctly distinguish between a mentor editing their own status and a coordinator editing another mentor's status within the same chapter. Incorrect policies could allow cross-chapter data leakage or silently block legitimate status updates, causing hard-to-diagnose runtime failures.

Mitigation & Contingency

Mitigation: Write RLS policies with explicit role checks (auth.uid() = mentor_id OR chapter_coordinator_check()) and verify with integration tests that cover same-chapter coordinator access, cross-chapter denial, and self-access. Review policies with a second developer before merging.

Contingency: If policy errors surface after merge, temporarily widen policy to coordinator role globally while a targeted fix is authored; use Supabase audit logs to trace any unauthorised access during the interim.

Risk 2 (integration): medium impact, medium probability

CoordinatorNotificationService must correctly resolve which coordinator(s) are responsible for a given mentor's chapter. If the chapter-coordinator mapping is incomplete or a mentor belongs to multiple chapters (as with NHF multi-chapter memberships), the service could fail to notify or duplicate notifications to the wrong coordinators.

Mitigation & Contingency

Mitigation: Use the existing chapter membership data model and query all active coordinator roles for each of the mentor's chapters. Add a de-duplication step before dispatch. Write integration tests with fixtures covering single-chapter, multi-chapter, and no-coordinator edge cases.
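The de-duplication step mentioned above might be as simple as collapsing the per-chapter coordinator lists into a set before dispatch. A sketch, where dedupeCoordinators is a hypothetical helper operating on coordinator IDs:

```typescript
// Collapse per-chapter coordinator ID lists into a unique list, preserving
// first-seen order, so a coordinator covering two of the mentor's chapters
// receives exactly one notification.
function dedupeCoordinators(perChapter: string[][]): string[] {
  const seen = new Set<string>();
  for (const chapter of perChapter) {
    for (const id of chapter) seen.add(id);
  }
  return Array.from(seen);
}
```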

Contingency: If resolution logic proves too complex at this stage, fall back to notifying all coordinators in the organisation until a proper chapter-scoped resolver can be delivered in a follow-up task.

Risk 3 (technical): high impact, low probability

Adding new columns to peer_mentors in production could conflict with existing application code that does SELECT * queries if new non-nullable columns without defaults are introduced, causing unexpected failures in unrelated screens.

Mitigation & Contingency

Mitigation: Make all new columns nullable or provide safe defaults. Use additive migration strategy with no column renames or drops. Run migration against a staging copy of production data before applying to live.

Contingency: Prepare a rollback migration script that drops only the new columns; coordinate with the team to deploy the rollback and hotfix immediately if production issues are detected.