Implement coordinator resolution in orchestrator
epic-pause-status-notifications-backend-pipeline-task-006 — Implement the coordinator resolution step within the Pause Notification Orchestrator. Query the database to find the coordinator responsible for the pausing mentor using the peer_mentor_profiles and coordinator_assignments tables. Handle cases where no coordinator is assigned and log resolution failures without crashing the pipeline.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 2 - 518 tasks
Can start after Tier 1 completes
Implementation Notes
Implement as a pure async function `resolveCoordinator(supabase, mentorId, organizationId)` that returns a Promise resolving to a discriminated union: a found variant carrying the resolved coordinator, or a not_found variant covering unassigned and deactivated coordinators.
Log at WARN level (not ERROR) for not_found to avoid alert fatigue on legitimate unassigned mentors.
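A minimal sketch of the resolution step under assumptions: the result type, the `DbClient` wrapper, and columns beyond those named in the task (`is_active`) are illustrative; in the real orchestrator the query would go through supabase-js against `peer_mentor_profiles` joined to `coordinator_assignments`.

```typescript
// Illustrative row/result shapes; only coordinator_assignments.is_primary
// comes from the task description, the rest are assumptions.
interface AssignmentRow {
  coordinator_id: string;
  is_primary: boolean;
  is_active: boolean;
}

interface QueryResult {
  data: AssignmentRow[] | null;
  error: { message: string } | null;
}

// Thin wrapper so unit tests can mock the Supabase client.
interface DbClient {
  fetchAssignments(mentorId: string, organizationId: string): Promise<QueryResult>;
}

type CoordinatorResolution =
  | { status: "found"; coordinatorId: string }
  | { status: "not_found"; reason: "no_assignment" | "deactivated" | "query_error" };

async function resolveCoordinator(
  db: DbClient,
  mentorId: string,
  organizationId: string,
): Promise<CoordinatorResolution> {
  const { data, error } = await db.fetchAssignments(mentorId, organizationId);
  if (error) {
    // Log and return rather than throw, so the pipeline keeps running.
    console.warn(JSON.stringify({
      level: "WARN", event: "coordinator_resolution_failed",
      mentorId, organizationId, message: error.message,
    }));
    return { status: "not_found", reason: "query_error" };
  }
  const active = (data ?? []).filter((r) => r.is_active);
  if (active.length === 0) {
    console.warn(JSON.stringify({
      level: "WARN", event: "coordinator_not_found", mentorId, organizationId,
    }));
    return {
      status: "not_found",
      reason: data && data.length > 0 ? "deactivated" : "no_assignment",
    };
  }
  // Multi-assignment: prefer the is_primary row.
  const row = active.find((r) => r.is_primary) ?? active[0];
  return { status: "found", coordinatorId: row.coordinator_id };
}
```

Returning a value for every failure mode (rather than throwing) is what keeps the orchestrator pipeline alive when resolution fails.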
Testing Requirements
Unit tests: mock the Supabase client and verify (a) correct JOIN query construction, (b) an empty result set returning the not_found discriminated union, (c) a deactivated coordinator treated as not_found, and (d) the multi-assignment scenario selecting the is_primary=true row.
Integration tests: run against a test Supabase instance with seeded coordinator_assignments data; verify resolution succeeds for a known mentor_id and that the not_found path is taken for an unassigned mentor. Assert logs contain the required structured fields. All tests must run in the Deno test runner.
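The shape of one such unit test might look like the sketch below. `pickPrimary` is a hypothetical stand-in for the selection logic under test, and the shim on the first lines only lets the sketch also run outside the Deno runner; in the repo these would be plain `Deno.test` calls.

```typescript
// Shim: use Deno.test when available, otherwise run the test body directly.
const test =
  (globalThis as { Deno?: { test: (name: string, fn: () => void) => void } }).Deno?.test ??
  ((_name: string, fn: () => void) => fn());

interface Row { coordinator_id: string; is_primary: boolean; is_active: boolean }

// Hypothetical selection helper standing in for the orchestrator's logic.
function pickPrimary(rows: Row[]): Row | null {
  const active = rows.filter((r) => r.is_active);
  return active.find((r) => r.is_primary) ?? active[0] ?? null;
}

test("multi-assignment resolves to the is_primary row", () => {
  const rows: Row[] = [
    { coordinator_id: "c1", is_primary: false, is_active: true },
    { coordinator_id: "c2", is_primary: true, is_active: true },
  ];
  if (pickPrimary(rows)?.coordinator_id !== "c2") throw new Error("expected c2");
});

test("deactivated coordinator is treated as not_found", () => {
  const rows: Row[] = [{ coordinator_id: "c1", is_primary: true, is_active: false }];
  if (pickPrimary(rows) !== null) throw new Error("expected null");
});
```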
Supabase Edge Functions have cold start latency that may push coordinator notification delivery beyond the 5-second SLA, particularly during low-traffic periods when the function is not warm.
Mitigation & Contingency
Mitigation: Keep the Edge Function lightweight — delegate all heavy logic to the orchestrator layer and avoid large dependency bundles. Measure p95 end-to-end latency in staging and document actual SLA achievable.
Contingency: If cold start latency consistently breaches 5 seconds, introduce a keep-warm ping from the nightly-scheduler or document the actual p95 latency in the feature spec and adjust the acceptance criterion to reflect the realistic bound.
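If the keep-warm contingency is needed, it can be as small as the sketch below. The URL, header names, and `{ ping: true }` body are assumptions, and the fetcher is injected so the ping can be unit-tested without network access.

```typescript
// Minimal request shape so the sketch does not depend on DOM/undici types.
interface PingInit {
  method: string;
  headers: Record<string, string>;
  body: string;
}
type Fetcher = (url: string, init: PingInit) => Promise<{ status: number }>;

// Hypothetical keep-warm ping, e.g. invoked from the nightly-scheduler on a
// short interval during low-traffic periods.
async function keepWarm(functionUrl: string, anonKey: string, doFetch: Fetcher): Promise<boolean> {
  const res = await doFetch(functionUrl, {
    method: "POST",
    headers: { Authorization: `Bearer ${anonKey}`, "Content-Type": "application/json" },
    body: JSON.stringify({ ping: true }),
  });
  // Any response means an instance was spun up; a 4xx still primes it.
  return res.status < 500;
}
```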
Supabase database webhooks may fire duplicate events for a single status change under retry conditions, causing coordinators to receive multiple identical notifications for one pause event.
Mitigation & Contingency
Mitigation: Add idempotency checking in the webhook handler using the event timestamp and peer mentor ID. Store a notification dispatch record in the pause-status-record-repository and skip dispatch if a record for the same event already exists.
Contingency: If duplicates slip through in production, add a de-duplication filter in the notification centre UI layer so the coordinator sees at most one card per event, and implement a cleanup job for the notifications table.
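The idempotency gate described in the mitigation can be sketched as follows. The repository method and its name are illustrative, not the actual pause-status-record-repository API; in production the check-and-insert would be backed by a unique constraint rather than an in-memory set.

```typescript
// Assumed repository contract: atomically insert a dispatch record and report
// whether it was new.
interface DispatchRepo {
  // true if newly recorded, false if a record for this key already exists.
  recordIfAbsent(key: string): Promise<boolean>;
}

// Dedupe key built from the two fields named in the mitigation: the peer
// mentor ID and the event timestamp.
function dedupeKey(mentorId: string, statusChangedAt: string): string {
  return `${mentorId}:${statusChangedAt}`;
}

async function shouldDispatch(
  repo: DispatchRepo,
  mentorId: string,
  statusChangedAt: string,
): Promise<boolean> {
  return repo.recordIfAbsent(dedupeKey(mentorId, statusChangedAt));
}

// In-memory repo for unit tests; the real one would write to the table behind
// the pause-status-record-repository.
function memoryRepo(): DispatchRepo {
  const seen = new Set<string>();
  return {
    recordIfAbsent: async (key) => {
      if (seen.has(key)) return false;
      seen.add(key);
      return true;
    },
  };
}
```

A duplicate webhook delivery for the same status change produces the same key, so the second call returns false and dispatch is skipped.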
A peer mentor with multi-chapter membership may have more than one responsible coordinator. The orchestrator design currently targets a single coordinator, and resolving multiple recipients may require schema changes to the org membership query.
Mitigation & Contingency
Mitigation: Review the multi-chapter-membership-service patterns before implementing the orchestrator's coordinator resolution. Design the dispatcher call to accept an array of coordinator IDs from the outset so adding multiple recipients is non-breaking.
Contingency: If multi-coordinator dispatch is out of scope for this epic, document the limitation and create a follow-up task. Default to the primary coordinator (lowest chapter hierarchy level) as the single recipient in the interim.
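The non-breaking dispatcher shape from the mitigation, combined with the interim single-recipient policy from the contingency, might look like this. All names here are illustrative; only the array-of-IDs signature and the lowest-hierarchy-level rule come from the text above.

```typescript
interface CoordinatorRef {
  coordinatorId: string;
  chapterHierarchyLevel: number;
}

// Dispatcher accepts an array from the outset, so moving to multi-recipient
// later does not change the call signature.
type Dispatch = (coordinatorIds: string[], payload: { mentorId: string }) => Promise<void>;

// Interim policy: a single recipient, the coordinator at the lowest chapter
// hierarchy level.
function primaryRecipient(coordinators: CoordinatorRef[]): string[] {
  if (coordinators.length === 0) return [];
  const primary = coordinators.reduce((a, b) =>
    b.chapterHierarchyLevel < a.chapterHierarchyLevel ? b : a
  );
  return [primary.coordinatorId];
}

async function notifyPause(
  dispatch: Dispatch,
  coordinators: CoordinatorRef[],
  mentorId: string,
): Promise<number> {
  // Dropping primaryRecipient later makes this multi-recipient without
  // touching the dispatcher contract.
  const recipients = primaryRecipient(coordinators);
  if (recipients.length > 0) await dispatch(recipients, { mentorId });
  return recipients.length;
}
```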