Priority: high · Complexity: low · Area: backend · Status: pending · Assignee: backend specialist · Tier 5

Acceptance Criteria

dispatchInAppNotification(mentorId, event, payload) inserts one notification record per resolved coordinator recipient
Each notification record contains: recipient_user_id, event_type, title, body, deep_link_path, is_read (default false), created_at
deep_link_path points to the pause status view for the specific mentor (e.g., /mentors/{mentorId}/pause-status)
All notification records are inserted in a single Supabase batch INSERT — not one INSERT per coordinator
If coordinator resolution returns zero recipients, the method completes without inserting any rows and returns successfully
A failure in the batch INSERT throws a typed NotificationDispatchException — partial row insertion must not occur (a single Supabase batch INSERT statement is atomic, so keep all rows in one insert call rather than splitting across statements)
Notification records are visible via the in-app notification feed for each coordinator immediately after insert
The method returns the count of notification records successfully inserted
Notification title and body are generated from a template keyed by event type (mentor_paused, mentor_reactivated) — no hardcoded strings in the dispatch method
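The last criterion (template-keyed title and body, no hardcoded strings in the dispatch method) might look like the following sketch. The enum values, the wireName mapping, the payload keys, and the copy are assumptions for illustration, not confirmed strings:

```dart
// Sketch only: copy, payload keys, and the wireName mapping are assumptions.
enum MentorPauseEvent {
  mentorPaused('mentor_paused'),
  mentorReactivated('mentor_reactivated');

  const MentorPauseEvent(this.wireName);

  /// The event_type value stored on each notification row.
  final String wireName;
}

class NotificationTemplate {
  // All user-facing strings live here for future localisation.
  String title(MentorPauseEvent event) => switch (event) {
        MentorPauseEvent.mentorPaused => 'Mentor paused',
        MentorPauseEvent.mentorReactivated => 'Mentor reactivated',
      };

  String body(MentorPauseEvent event, Map<String, dynamic> payload) {
    final name = payload['mentor_name'] ?? 'A mentor';
    return switch (event) {
      MentorPauseEvent.mentorPaused =>
        '$name has paused their mentoring availability.',
      MentorPauseEvent.mentorReactivated =>
        '$name is available for mentoring again.',
    };
  }
}
```

Keeping the dispatch method ignorant of copy means localisation later only touches this class.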

Technical Requirements

Frameworks
Flutter
Supabase Dart client (supabase_flutter)

APIs
Supabase REST — in_app_notifications table (batch INSERT)
CoordinatorNotificationService.resolveCoordinatorsForMentor (task-006)

Data models
InAppNotification
NotificationRecord
MentorPauseEvent

Performance requirements
Batch INSERT must complete in < 500 ms for up to 20 coordinator recipients
Use a single Supabase .from('in_app_notifications').insert(List<Map>) call — not a loop of individual inserts

Security requirements
RLS on in_app_notifications must ensure coordinators can only read their own notifications (recipient_user_id = auth.uid())
The insert must be performed with appropriate permissions — if coordinator resolution requires service-role context, use a Supabase Edge Function rather than client-side insert
deep_link_path must be a relative internal path only — no external URLs
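A minimal sketch of the read policy described above, assuming the table and column names from this spec (the policy name is a placeholder):

```sql
-- Sketch: coordinators may read only their own notifications.
alter table in_app_notifications enable row level security;

create policy "read own notifications"
  on in_app_notifications for select
  using (recipient_user_id = auth.uid());

-- Inserts are performed by the service side (e.g. an Edge Function using the
-- service-role key, which bypasses RLS), so no client insert policy is added.
```

If the insert ends up running client-side instead, an insert policy scoped to the caller's role would be needed in addition to this read policy.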

Execution Context

Execution Tier
Tier 5

Tier 5 - 253 tasks

Can start after Tier 4 completes

Implementation Notes

Build the notification record list as final List<Map<String, dynamic>> records = coordinatorIds.map((id) => { 'recipient_user_id': id, 'event_type': event.name, 'title': _template.title(event), 'body': _template.body(event, payload), 'deep_link_path': '/mentors/${mentorId}/pause-status', 'is_read': false, }).toList() and then call await supabase.from('in_app_notifications').insert(records). Define a NotificationTemplate class with title(MentorPauseEvent) and body(MentorPauseEvent, Map payload) methods — keep all user-facing strings in this class for future localisation. Reuse the same resolveCoordinatorsForMentor from task-006 — do not duplicate resolution logic. This method is intentionally simpler than the push dispatch — resist adding retry logic here, since in-app notification delivery is synchronous via Supabase.
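Putting these notes together, a hedged sketch of the dispatch method follows. The service shape, the exception fields, and the resolver signature are assumptions where the acceptance criteria do not fix them; resolveCoordinatorsForMentor is the task-006 resolver, stubbed here only as a call site:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

class NotificationDispatchException implements Exception {
  NotificationDispatchException(this.message, [this.cause]);
  final String message;
  final Object? cause;
}

class CoordinatorNotificationService {
  CoordinatorNotificationService(this._supabase, this._template);

  final SupabaseClient _supabase;
  final NotificationTemplate _template;

  /// Inserts one in-app notification per resolved coordinator and
  /// returns the number of rows inserted.
  Future<int> dispatchInAppNotification(
    String mentorId,
    MentorPauseEvent event,
    Map<String, dynamic> payload,
  ) async {
    // Reuse the task-006 resolver; do not duplicate resolution logic here.
    final coordinatorIds = await resolveCoordinatorsForMentor(mentorId);
    if (coordinatorIds.isEmpty) return 0; // zero recipients: no rows, no error

    final List<Map<String, dynamic>> records = coordinatorIds
        .map((id) => {
              'recipient_user_id': id,
              // Note: Dart enum .name yields camelCase; map to 'mentor_paused'
              // style if the event_type column expects snake_case.
              'event_type': event.name,
              'title': _template.title(event),
              'body': _template.body(event, payload),
              'deep_link_path': '/mentors/$mentorId/pause-status',
              'is_read': false,
            })
        .toList();

    try {
      // Single batch INSERT; one statement is atomic, so no partial rows.
      await _supabase.from('in_app_notifications').insert(records);
      return records.length;
    } on PostgrestException catch (e) {
      throw NotificationDispatchException('Batch insert failed', e);
    }
  }
}
```

Note there is deliberately no retry loop: a failed insert surfaces immediately as NotificationDispatchException, matching the note above.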

Testing Requirements

Unit tests (flutter_test + mocktail): mock the Supabase client and verify a single batch insert is called with the correct number of records matching the resolved coordinator list. Verify deep_link_path format for mentor_paused and mentor_reactivated events. Test that zero coordinators results in zero inserts and no exception. Test that PostgrestException during insert is wrapped in NotificationDispatchException.

Test notification title/body template rendering for each event type. Integration test: verify records appear in in_app_notifications table with correct recipient_user_ids after a real insert against local Supabase.
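The template-rendering tests are the cheapest to sketch, since they need no Supabase mock; this assumes a NotificationTemplate with the title/body methods described in the implementation notes (the payload key is an assumption):

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  final template = NotificationTemplate();

  test('renders a non-empty title for each event type', () {
    for (final event in MentorPauseEvent.values) {
      expect(template.title(event), isNotEmpty);
    }
  });

  test('body includes the mentor name from the payload', () {
    final body = template.body(
      MentorPauseEvent.mentorPaused,
      {'mentor_name': 'Ada'}, // payload key is an assumption
    );
    expect(body, contains('Ada'));
  });
}
```

For the batch-insert assertions, note that mocking the Supabase client with mocktail typically requires a fake for each stage of the Postgrest builder chain (from → insert), so budget test setup time accordingly.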

Component
Coordinator Notification Service
Type: service · Complexity: medium
Epic Risks (3)
High impact · Medium probability · Security

Supabase RLS policies for status reads and writes must correctly distinguish between a mentor editing their own status and a coordinator editing another mentor's status within the same chapter. Incorrect policies could allow cross-chapter data leakage or silently block legitimate status updates, causing hard-to-diagnose runtime failures.

Mitigation & Contingency

Mitigation: Write RLS policies with explicit role checks (auth.uid() = mentor_id OR chapter_coordinator_check()) and verify with integration tests that cover same-chapter coordinator access, cross-chapter denial, and self-access. Review policies with a second developer before merging.

Contingency: If policy errors surface after merge, temporarily widen policy to coordinator role globally while a targeted fix is authored; use Supabase audit logs to trace any unauthorised access during the interim.

Medium impact · Medium probability · Integration

CoordinatorNotificationService must correctly resolve which coordinator(s) are responsible for a given mentor's chapter. If the chapter-coordinator mapping is incomplete or a mentor belongs to multiple chapters (as with NHF multi-chapter memberships), the service could fail to notify or duplicate notifications to the wrong coordinators.

Mitigation & Contingency

Mitigation: Use the existing chapter membership data model and query all active coordinator roles for each of the mentor's chapters. Add a de-duplication step before dispatch. Write integration tests with fixtures covering single-chapter, multi-chapter, and no-coordinator edge cases.

Contingency: If resolution logic proves too complex at this stage, fall back to notifying all coordinators in the organisation until a proper chapter-scoped resolver can be delivered in a follow-up task.
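The de-duplication step named in the mitigation is cheap to sketch: collect coordinator ids across all of the mentor's chapters into a set before dispatch. The function name and input shape here are assumptions:

```dart
// Deduplicate coordinators across a mentor's chapters before dispatch,
// so a coordinator covering two of the mentor's chapters is notified once.
// Insertion order of first appearance is preserved.
List<String> dedupeCoordinators(
  Iterable<List<String>> perChapterCoordinators,
) {
  final seen = <String>{};
  final result = <String>[];
  for (final chapter in perChapterCoordinators) {
    for (final id in chapter) {
      if (seen.add(id)) result.add(id); // Set.add returns false on duplicates
    }
  }
  return result;
}
```

Running this before the batch insert also guards the multi-chapter (NHF) case against duplicate notification rows for the same coordinator.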

High impact · Low probability · Technical

Adding new columns to peer_mentors in production could conflict with existing application code that does SELECT * queries if new non-nullable columns without defaults are introduced, causing unexpected failures in unrelated screens.

Mitigation & Contingency

Mitigation: Make all new columns nullable or provide safe defaults. Use additive migration strategy with no column renames or drops. Run migration against a staging copy of production data before applying to live.

Contingency: Prepare a rollback migration script that drops only the new columns; coordinate with the team to deploy the rollback and hotfix immediately if production issues are detected.