high priority medium complexity backend pending backend specialist Tier 3

Acceptance Criteria

Given a notification_type and org_node_id, the dispatcher resolves all users with the template's `target_role` within the org subtree and dispatches notifications to all of them
FCM dispatch calls are batched in groups of 500 (FCM batch limit) — a subtree with 1200 coordinators results in 3 batches, not 1200 individual API calls
Each dispatch event (one notification_type to one org_node_id) creates a single `notification_event` record before dispatch begins, with status='pending'
Each recipient's delivery outcome is recorded in `notification_deliveries` table: event_id, recipient_id, fcm_token, status (delivered/failed/no_token), error_message, dispatched_at
If a recipient has no FCM token registered, no FCM call is attempted for them and no error is raised, but a delivery record is still written with status='no_token'
If an FCM batch call returns partial failures, successful deliveries and failed deliveries are each recorded correctly — partial batch failure does not mark all deliveries as failed
After all batches complete, the `notification_event` record is updated to status='completed' with total_recipients, delivered_count, and failed_count
For in-app notifications, a record is inserted into the `in_app_notifications` table per recipient, regardless of FCM dispatch outcome
The dispatcher accepts an optional `dry_run=true` parameter that resolves recipients and logs the would-be dispatch but performs no actual FCM calls or DB inserts — used for testing and previewing recipient lists
Dispatch for different org nodes is independent: a failure dispatching to org node A does not prevent dispatch to org node B when bulk-dispatching to multiple nodes
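The event lifecycle above implies a small piece of aggregation logic when an event is closed out. A minimal sketch, in TypeScript for illustration (status and count names are taken from the criteria; the `DeliveryOutcome` shape and the choice to count no_token recipients in the total but in neither delivered nor failed are assumptions):

```typescript
// Delivery statuses named in the acceptance criteria.
type DeliveryStatus = "delivered" | "failed" | "no_token";

interface DeliveryOutcome {
  recipientId: string;
  status: DeliveryStatus;
  errorMessage?: string;
}

// Roll per-recipient outcomes up into the counts written back to the
// notification_event record when it moves to status='completed'.
// Assumption: no_token recipients count toward total_recipients only.
function summarizeEvent(outcomes: DeliveryOutcome[]) {
  return {
    status: "completed" as const,
    totalRecipients: outcomes.length,
    deliveredCount: outcomes.filter(o => o.status === "delivered").length,
    failedCount: outcomes.filter(o => o.status === "failed").length,
  };
}
```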

Technical Requirements

frameworks
Flutter
Dart
Riverpod
apis
FCM (Firebase Cloud Messaging) via Supabase Edge Function
Supabase Postgres (notification_events, notification_deliveries, in_app_notifications tables)
OrgHierarchyService.getSubtreeIds()
AdminNotificationDispatcher template registry (task-011)
data models
NotificationEvent
NotificationDelivery
InAppNotification
NotificationTemplate
UserFcmToken
OrgNode
performance requirements
Full dispatch cycle for 1200 recipients (3 FCM batches) completes in under 10 seconds
Recipient lookup and FCM token resolution use a single JOIN query — no N+1 queries per recipient
notification_deliveries bulk insert uses Supabase batch insert (single API call per 500 records)
security requirements
FCM dispatch is performed exclusively via Supabase Edge Function using a server-side FCM service account key — the key is never exposed to the Flutter client
Recipients are validated against org-scope before dispatch — no cross-org notification leakage
FCM tokens are stored encrypted at rest in the database; decrypted only within the Edge Function at dispatch time
notification_events and notification_deliveries tables are insert-only from the client role; only the Edge Function service role can update status fields
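The "single API call per 500 records" requirement above reduces to a plain chunking helper around the bulk insert. A sketch in TypeScript for illustration; `insertBatch` is a stand-in for the real Supabase client call, not its actual API:

```typescript
// Split an array into chunks of at most `size` elements.
function chunk<T>(items: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < items.length; i += size) {
    out.push(items.slice(i, i + size));
  }
  return out;
}

// Insert delivery records 500 at a time: one API call per chunk,
// never one call per record. Returns the number of calls made.
async function bulkInsertDeliveries<T>(
  records: T[],
  insertBatch: (rows: T[]) => Promise<void>,
  batchSize = 500,
): Promise<number> {
  const chunks = chunk(records, batchSize);
  for (const c of chunks) await insertBatch(c);
  return chunks.length;
}
```

The same `chunk` helper covers the FCM batching criterion: 1200 recipients produce exactly 3 batches of 500, 500, and 200.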

Execution Context

Execution Tier
Tier 3

Tier 3 - 413 tasks

Can start after Tier 2 completes

Implementation Notes

Separate the dispatch logic into three clean phases: (1) Resolution — `List<Recipient> resolveRecipients(templateType, orgNodeId)` using the template's role filter and OrgHierarchyService; (2) Batching — `List<List<Recipient>> batchRecipients(recipients, batchSize: 500)`; (3) Dispatch — iterate batches, call FCM, collect outcomes. This separation makes each phase independently testable. For the notification_event status lifecycle, wrap the initial 'pending' insert in a DB transaction so the event row always exists before any delivery records reference it. Dispatch batches concurrently where FCM rate limits permit, but cap concurrency with a configurable `maxConcurrentBatches` parameter defaulting to 3 — a bare `Future.wait(batches.map(dispatchBatch))` would start every batch at once and risks overwhelming FCM.
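One way to get the bounded concurrency described above is a small worker pool. A sketch in TypeScript for illustration (`dispatchBatch` is a placeholder for the real FCM call; the worker-pool approach itself is an assumption, not mandated by the task):

```typescript
// Dispatch batches with at most `maxConcurrentBatches` in flight.
// Each worker pulls the next unclaimed batch until none remain;
// results land at the index of their source batch.
async function dispatchAll<T, R>(
  batches: T[][],
  dispatchBatch: (batch: T[]) => Promise<R>,
  maxConcurrentBatches = 3,
): Promise<R[]> {
  const results: R[] = new Array(batches.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < batches.length) {
      const i = next++; // safe: JS event loop is single-threaded
      results[i] = await dispatchBatch(batches[i]);
    }
  }
  const workers = Array.from(
    { length: Math.min(maxConcurrentBatches, batches.length) },
    () => worker(),
  );
  await Promise.all(workers);
  return results;
}
```

The equivalent in Dart would gate `Future.wait` behind the same pull-the-next-batch loop rather than mapping every batch to a future up front.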

For delivery outcome recording, collect all outcomes in memory during dispatch, then bulk-insert them after all batches complete via Supabase batch `upsert` calls (one call per 500 records, per the performance requirements) — this minimizes round-trips. Implement `dry_run` by injecting a `NotificationDispatchStrategy` interface with real and dry-run implementations — this avoids sprinkling `if (dryRun)` conditionals throughout the dispatch logic.
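The strategy injection described above might look like the following. A TypeScript sketch; the interface shape and class names are illustrative, not part of the task definition:

```typescript
interface DispatchOutcome { recipientId: string; status: string }

// Strategy boundary: the dispatcher core calls these methods and
// never checks a dryRun flag itself.
interface NotificationDispatchStrategy {
  sendBatch(tokens: string[]): Promise<void>;
  recordDeliveries(outcomes: DispatchOutcome[]): Promise<void>;
}

// Dry-run implementation: logs what would happen, touches nothing.
// A real implementation would call the FCM Edge Function and the
// Supabase bulk insert instead.
class DryRunStrategy implements NotificationDispatchStrategy {
  readonly log: string[] = [];
  async sendBatch(tokens: string[]): Promise<void> {
    this.log.push(`would send FCM batch of ${tokens.length}`);
  }
  async recordDeliveries(outcomes: DispatchOutcome[]): Promise<void> {
    this.log.push(`would insert ${outcomes.length} delivery records`);
  }
}
```

Because the core never branches on a flag, the dry-run test reduces to asserting that only the `DryRunStrategy` log was written.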

Testing Requirements

Unit tests: mock OrgHierarchyService, the FCM client, and the Supabase DB client; assert that recipients are resolved from the correct subtree; assert FCM calls are batched at 500 recipients per batch; assert notification_event is created with status='pending' before dispatch and updated to 'completed' after; assert partial FCM failure records correct delivered/failed counts.
Dry-run test: with dry_run=true, assert no FCM calls are made and no DB inserts occur.
No-token test: a recipient without an FCM token results in a delivery record with status='no_token'.
Integration tests against Supabase local + FCM emulator: seed 600 coordinator users with FCM tokens across 2 org nodes, dispatch a USER_STATUS_CHANGED notification, assert 2 FCM batch calls are made, assert 600 delivery records are created with correct statuses, assert notification_event has delivered_count=600.

Error recovery test: mock FCM to fail for tokens 400-500 in a 600-recipient dispatch, assert records 1-399 and 501-600 are marked 'delivered' and 400-500 are marked 'failed'.
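The bookkeeping this test exercises can be sketched as a pure mapping from per-token FCM results to delivery statuses, which keeps it unit-testable without any network. TypeScript for illustration; the `FcmResult` shape is an assumption, not the real FCM batch response type:

```typescript
interface FcmResult { ok: boolean; error?: string }

type DeliveryRecord = {
  recipientId: string;
  status: "delivered" | "failed";
  errorMessage?: string;
};

// Map each recipient's individual FCM result to its own delivery
// status, so a partial batch failure never marks the whole batch
// as failed. recipientIds[i] corresponds to results[i].
function recordBatchOutcomes(
  recipientIds: string[],
  results: FcmResult[],
): DeliveryRecord[] {
  return recipientIds.map((id, i) => ({
    recipientId: id,
    status: results[i].ok ? "delivered" : "failed",
    errorMessage: results[i].ok ? undefined : results[i].error,
  }));
}
```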

Component
Admin Notification Dispatcher
infrastructure medium
Epic Risks (4)
medium impact high prob technical

OrgHierarchyNavigator rendering NHF's full 1,400-chapter tree in a single widget may cause Flutter frame-rate drops below 60 fps on mid-range devices, making the navigator unusable for NHF national admins.

Mitigation & Contingency

Mitigation: Implement lazy expansion: only load immediate children on node expand rather than the full tree upfront. Use virtual scrolling for long sibling lists. Test with a synthetic 1,400-node dataset on a low-end Android device during development.

Contingency: If lazy expansion is insufficient, replace the tree widget with a paginated drill-down navigator (select level → select child) that avoids rendering more than 50 nodes at a time.

medium impact medium prob dependency

Bufdir may update their required export column structure or file format during or after development. If the AdminExportService hardcodes the current Bufdir schema, any format change requires a code release rather than a config update.

Mitigation & Contingency

Mitigation: Drive the Bufdir column mapping from a configuration repository rather than hardcoded constants. Abstract column definitions into a named schema config so that format changes require only a config update and re-deployment without service logic changes.

Contingency: If Bufdir format changes post-launch, release a config update within one sprint. If the change is structural (new required sections), scope a targeted service update and communicate timeline to partner organisations.

high impact medium prob integration

Role transition side-effects in UserManagementService (e.g., certification expiry removing mentor from chapter listing, pause triggering coordinator notification) may interact with external services like HLF's website sync. Incomplete side-effect handling could leave the system in an inconsistent state.

Mitigation & Contingency

Mitigation: Model side-effects as explicit domain events published after the primary state change is persisted. Implement event handlers as idempotent operations so re-processing is safe. Write integration tests that assert all side-effects fire correctly for each role transition type.

Contingency: If a side-effect fails after the primary change is persisted, log the failure with full context and trigger a manual reconciliation alert to the on-call team. Provide an admin-accessible re-trigger action for failed side-effects.
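Idempotent handling as suggested in the mitigation can be as simple as keying each side-effect on a unique event id and skipping ids already seen. A TypeScript sketch under that assumption; the in-memory set stands in for a durable processed-events table:

```typescript
// In-memory stand-in for a processed_events table keyed by event id.
// Re-delivering the same event id is a no-op, so retries and
// manual re-triggers are safe.
class IdempotentHandler {
  private processed = new Set<string>();
  private applied = 0;

  // Returns true if the side-effect ran, false if it was a duplicate.
  handle(eventId: string, effect: () => void): boolean {
    if (this.processed.has(eventId)) return false;
    effect();
    this.processed.add(eventId);
    this.applied++;
    return true;
  }

  get appliedCount(): number {
    return this.applied;
  }
}
```

In production the dedupe set must live in the database and the check-and-mark must be atomic, otherwise two concurrent deliveries of the same event can both pass the check.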

medium impact medium prob scope

If AdminStatisticsService cache TTL is set too long, org_admin may see significantly stale KPI values (e.g., a mentor newly paused an hour ago still appears as active), undermining trust in the dashboard.

Mitigation & Contingency

Mitigation: Default cache TTL to 5 minutes with a manual refresh action on the dashboard. Implement cache invalidation triggered by UserManagementService write operations that affect counted entities.

Contingency: If staleness causes org admin complaints post-launch, reduce TTL to 60 seconds and introduce a real-time Supabase subscription for high-impact counters (paused mentors, expiring certifications).
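The TTL-plus-invalidation approach described in the mitigation could be sketched as follows. TypeScript for illustration; the clock is injected as a parameter (an assumption, purely so staleness is testable without waiting):

```typescript
// Cache entries expire after ttlMs; write operations that affect
// counted entities call invalidate() to force the next read to
// recompute, so org_admin never waits out the full TTL.
class TtlCache<V> {
  private store = new Map<string, { value: V; expiresAt: number }>();

  constructor(
    private ttlMs: number,
    private now: () => number = Date.now,
  ) {}

  get(key: string): V | undefined {
    const e = this.store.get(key);
    if (!e || e.expiresAt <= this.now()) return undefined;
    return e.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expiresAt: this.now() + this.ttlMs });
  }

  invalidate(key: string): void {
    this.store.delete(key);
  }
}
```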