Implement push notification dispatch for coordinators
epic-peer-mentor-pause-foundation-task-007 — Implement CoordinatorNotificationService.dispatchPushNotification(mentorId, event, payload): resolve coordinator recipients via the chapter-based coordinator resolution logic, retrieve their FCM tokens from the notification repository, and dispatch push notifications through the FCM push notification sender. Include retry logic and failure logging per coordinator recipient.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 4 - 323 tasks
Can start after Tier 3 completes
Implementation Notes
FCM dispatch should be implemented via a Supabase Edge Function (Deno/TypeScript) that accepts a list of FCM tokens and a payload, then calls the FCM HTTP v1 API using the Firebase Admin SDK. The Flutter client calls this via supabase.functions.invoke('send-push-notifications', body: {...}). This keeps FCM server credentials server-side. Retry logic in the Flutter service: implement a _dispatchWithRetry(token, payload, maxAttempts: 3) helper that catches FCM errors and uses Future.delayed(Duration(seconds: pow(2, attempt).toInt())) before retrying.
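The fan-out step inside the hypothetical `send-push-notifications` Edge Function can be sketched as a pure function that builds one FCM HTTP v1 request body per device token. The `Deno.serve` handler, the service-account OAuth token, and the POST to `https://fcm.googleapis.com/v1/projects/<project>/messages:send` are omitted; `PushPayload` and `buildFcmMessages` are illustrative names, not part of the real codebase.

```typescript
// Illustrative sketch only: shape of the per-token FCM HTTP v1 message bodies
// the Edge Function would send. Names are assumptions, not the real API.
interface PushPayload {
  title: string;
  body: string;
  data?: Record<string, string>; // FCM v1 requires string values in `data`
}

export function buildFcmMessages(tokens: string[], payload: PushPayload) {
  return tokens.map((token) => ({
    message: {
      token,
      notification: { title: payload.title, body: payload.body },
      data: payload.data ?? {},
    },
  }));
}
```

Keeping this step pure makes it trivial to unit-test the payload structure per event type without touching FCM.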
Collect per-recipient results in a List so the service can return an overall DispatchResult (all-success, partial-failure, or no-recipients) and log each coordinator whose dispatch failed.
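The retry helper described above (exponential backoff of 2^attempt seconds, default three attempts) can be sketched as follows. This is TypeScript for illustration; the real helper would be the Dart `_dispatchWithRetry` in the Flutter service, and `sendFn`/`sleep` are injectable assumptions so tests need not wait on real delays.

```typescript
// Hedged sketch of the retry-with-backoff helper, not the actual implementation.
type SendFn = (token: string, payload: object) => Promise<void>;

export function backoffDelayMs(attempt: number): number {
  // Exponential backoff: 2^attempt seconds, matching the notes above.
  return Math.pow(2, attempt) * 1000;
}

export async function dispatchWithRetry(
  sendFn: SendFn,
  token: string,
  payload: object,
  maxAttempts = 3,
  sleep: (ms: number) => Promise<void> = (ms) => new Promise((r) => setTimeout(r, ms)),
): Promise<boolean> {
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await sendFn(token, payload);
      return true; // delivered
    } catch (err) {
      if (attempt === maxAttempts) {
        // Exhausted retries: the caller records a per-recipient failure.
        console.error(`FCM dispatch failed for token ${token}:`, err);
        return false;
      }
      await sleep(backoffDelayMs(attempt));
    }
  }
  return false;
}
```

Returning a boolean per recipient (rather than throwing) lets the caller keep dispatching to the remaining coordinators after one token fails.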
Testing Requirements
Unit tests (flutter_test + mocktail): mock the coordinator resolution service, FCM token repository, and FCM sender. Verify correct FCM payload structure for each event type (mentor_paused, mentor_reactivated). Test that a missing FCM token for one coordinator does not abort dispatch to others. Test retry logic: mock FCM sender to fail twice then succeed — verify exactly 3 calls and ultimate success.
Test exponential backoff timing with a fake clock. Test DispatchResult values for all-success, partial-failure, and no-recipients scenarios. Do not make real FCM calls in tests.
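The DispatchResult scenarios the tests must cover can be modelled as a small pure classifier. The names below are illustrative assumptions (the source lists only all-success, partial-failure, and no-recipients as values):

```typescript
// Sketch of the DispatchResult classification; any failure among one or more
// recipients is treated as partialFailure here. A distinct all-failed value
// could be added if the real service distinguishes it.
export type DispatchResult = "allSuccess" | "partialFailure" | "noRecipients";

export function classifyDispatch(successes: number, failures: number): DispatchResult {
  if (successes + failures === 0) return "noRecipients";
  if (failures === 0) return "allSuccess";
  return "partialFailure";
}
```

A pure classifier like this makes the all-success / partial-failure / no-recipients unit tests one-liners.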
Supabase RLS policies for status reads and writes must correctly distinguish between a mentor editing their own status and a coordinator editing another mentor's status within the same chapter. Incorrect policies could allow cross-chapter data leakage or silently block legitimate status updates, causing hard-to-diagnose runtime failures.
Mitigation & Contingency
Mitigation: Write RLS policies with explicit role checks (auth.uid() = mentor_id OR chapter_coordinator_check()) and verify with integration tests that cover same-chapter coordinator access, cross-chapter denial, and self-access. Review policies with a second developer before merging.
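The policy shape described above might look like the following Supabase/Postgres sketch. Table and function names (`peer_mentors`, `mentor_id`, `chapter_coordinator_check()`) are taken from the notes above but the exact schema is an assumption:

```sql
-- Sketch only: chapter_coordinator_check() must itself be scoped to the
-- target mentor's chapter(s), or cross-chapter access would leak through.
create policy "mentor_self_or_chapter_coordinator_update"
on peer_mentors
for update
using (
  auth.uid() = mentor_id
  or chapter_coordinator_check()
)
with check (
  auth.uid() = mentor_id
  or chapter_coordinator_check()
);
```

Having both `using` and `with check` clauses matters: `using` governs which rows a coordinator can target, `with check` governs what the updated row may look like.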
Contingency: If policy errors surface after merge, temporarily widen policy to coordinator role globally while a targeted fix is authored; use Supabase audit logs to trace any unauthorised access during the interim.
CoordinatorNotificationService must correctly resolve which coordinator(s) are responsible for a given mentor's chapter. If the chapter-coordinator mapping is incomplete or a mentor belongs to multiple chapters (as with NHF multi-chapter memberships), the service could fail to notify or duplicate notifications to the wrong coordinators.
Mitigation & Contingency
Mitigation: Use the existing chapter membership data model and query all active coordinator roles for each of the mentor's chapters. Add a de-duplication step before dispatch. Write integration tests with fixtures covering single-chapter, multi-chapter, and no-coordinator edge cases.
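The de-duplicating resolution step can be sketched as a pure function over the mentor's chapter memberships. The names and data shapes below are illustrative, not the real chapter membership model:

```typescript
// Sketch: resolve coordinator recipients for a mentor who may belong to
// several chapters (NHF multi-chapter memberships), de-duplicating
// coordinators who cover more than one of those chapters.
export function resolveCoordinators(
  mentorChapterIds: string[],
  coordinatorsByChapter: Map<string, string[]>,
): string[] {
  const recipients = new Set<string>();
  for (const chapterId of mentorChapterIds) {
    for (const coordinatorId of coordinatorsByChapter.get(chapterId) ?? []) {
      recipients.add(coordinatorId); // Set membership de-duplicates
    }
  }
  return [...recipients];
}
```

The `?? []` fallback also covers the no-coordinator edge case: a chapter with no active coordinators simply contributes no recipients rather than throwing.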
Contingency: If resolution logic proves too complex at this stage, fall back to notifying all coordinators in the organisation until a proper chapter-scoped resolver can be delivered in a follow-up task.
Adding new columns to peer_mentors in production could conflict with existing application code: a non-nullable column without a default will fail to apply against existing rows (or break INSERTs that omit it), and screens that rely on SELECT * queries may receive an unexpected result shape, causing failures in unrelated parts of the app.
Mitigation & Contingency
Mitigation: Make all new columns nullable or provide safe defaults. Use additive migration strategy with no column renames or drops. Run migration against a staging copy of production data before applying to live.
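An additive migration following the mitigation above might look like this sketch; the column names are placeholders for whatever the pause feature actually needs, not the real schema:

```sql
-- Additive only: nullable or defaulted columns, no renames or drops,
-- so existing SELECT * callers and INSERTs keep working.
alter table peer_mentors
  add column if not exists pause_status text default 'active',
  add column if not exists paused_at timestamptz null;

-- Matching rollback (the contingency script) drops only the new columns:
-- alter table peer_mentors
--   drop column if exists pause_status,
--   drop column if exists paused_at;
```

`if not exists` / `if exists` guards make both the migration and the rollback safely re-runnable against staging and production.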
Contingency: Prepare a rollback migration script that drops only the new columns; coordinate with the team to deploy the rollback and hotfix immediately if production issues are detected.