Connect webhook handler to orchestrator
epic-pause-status-notifications-backend-pipeline-task-010 — Wire the pause status webhook handler to invoke the Pause Notification Orchestrator after successful payload validation and transition detection. Pass the resolved TransitionEvent and mentor ID, handle orchestrator errors by returning HTTP 500 with structured error details, and confirm the synchronous response contract satisfies Supabase webhook retry semantics.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 6 - 158 tasks
Can start after Tier 5 completes
Implementation Notes
The wiring is minimal — this task is essentially two lines of code plus error handling: `const result = await orchestrator.run(event, mentorId); return new Response(JSON.stringify({ success: true, ...result }), { status: 200 })` wrapped in try/catch. The critical discipline here is the HTTP contract: return 500 on any orchestrator failure to leverage Supabase's built-in retry, and return 200 only on genuine success or intentional skip. Document this contract in a comment in the handler. For idempotency concerns, note that Supabase webhook retries are rare but possible — the orchestrator should be designed to handle duplicate invocations gracefully (duplicate in-app records are acceptable; a deduplication key on notifications table is a future enhancement).
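The wiring and HTTP contract described above could look like the following sketch. The `Orchestrator` interface, `TransitionEvent` shape, and `handlePauseWebhook` name are illustrative assumptions; only the 200/500 contract comes from the task itself:

```typescript
// Sketch of the webhook handler wiring. Orchestrator and TransitionEvent
// are illustrative assumptions standing in for the real types.
interface TransitionEvent {
  mentorId: string;
  from: string;
  to: string;
  occurredAt: string;
}

interface Orchestrator {
  run(event: TransitionEvent, mentorId: string): Promise<{ notified: boolean }>;
}

// HTTP contract:
//   200 → genuine success OR intentional skip (Supabase will NOT retry)
//   500 → any orchestrator failure (Supabase's built-in retry kicks in)
export async function handlePauseWebhook(
  orchestrator: Orchestrator,
  event: TransitionEvent | null, // null when the payload is not a pause transition
): Promise<Response> {
  if (event === null) {
    // Non-transition event: acknowledge so Supabase does not retry.
    return new Response(JSON.stringify({ success: true, skipped: true }), {
      status: 200,
      headers: { "Content-Type": "application/json" },
    });
  }
  try {
    const result = await orchestrator.run(event, event.mentorId);
    return new Response(JSON.stringify({ success: true, ...result }), {
      status: 200,
      headers: { "Content-Type": "application/json" },
    });
  } catch {
    // Sanitized body: a stable error shape, never a stack trace.
    return new Response(
      JSON.stringify({ success: false, error: "orchestration_failed" }),
      { status: 500, headers: { "Content-Type": "application/json" } },
    );
  }
}
```

Injecting the orchestrator as a parameter (rather than importing it) keeps the handler trivially unit-testable with a stub.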
Testing Requirements
Unit tests: mock orchestrator and verify (a) successful orchestration returns HTTP 200 with correct body, (b) orchestrator exception returns HTTP 500 with sanitized error body, (c) non-transition skipped event returns HTTP 200 with skipped=true and does not call orchestrator. Integration test: fire a real database update via Supabase test client and verify the Edge Function is triggered, executes end-to-end, and returns HTTP 200 within 5 seconds. Verify retry behavior by returning HTTP 500 once and confirming Supabase retries the webhook. Assert no stack traces appear in error response bodies.
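For the retry-behavior check, one way to exercise the "500 once, then 200" path without a real outage is a fail-once stub orchestrator; the names here are illustrative assumptions, not existing test utilities:

```typescript
// Hedged sketch: a fail-once stub orchestrator for exercising the retry path.
// First invocation throws (the handler should answer 500 so Supabase retries);
// subsequent invocations succeed (the handler answers 200 and retries stop).
export function makeFailOnceOrchestrator() {
  let calls = 0;
  return {
    callCount: () => calls,
    run: async (): Promise<{ notified: boolean }> => {
      calls += 1;
      if (calls === 1) throw new Error("transient failure");
      return { notified: true };
    },
  };
}
```

The same stub doubles as a check that the orchestrator is invoked exactly once per delivery attempt.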
Supabase Edge Functions have cold start latency that may push coordinator notification delivery beyond the 5-second SLA, particularly during low-traffic periods when the function is not warm.
Mitigation & Contingency
Mitigation: Keep the Edge Function lightweight — delegate all heavy logic to the orchestrator layer and avoid large dependency bundles. Measure p95 end-to-end latency in staging and document the SLA actually achievable.
Contingency: If cold start latency consistently breaches 5 seconds, either introduce a keep-warm ping from the nightly-scheduler, or document the actual p95 latency in the feature spec and adjust the acceptance criterion to reflect the realistic bound.
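A keep-warm ping is small enough to sketch here. The endpoint URL and the injected fetcher are assumptions (injection keeps the helper testable without network access):

```typescript
// Contingency sketch: a keep-warm ping the nightly-scheduler could fire
// periodically so the Edge Function instance stays warm. The fetcher is
// injected; in production it would simply be the global fetch.
type Fetcher = (url: string) => Promise<{ status: number }>;

export async function keepWarmPing(
  endpoint: string,
  fetcher: Fetcher,
): Promise<boolean> {
  try {
    const res = await fetcher(endpoint);
    // Any non-5xx answer means the function is warm and reachable.
    return res.status < 500;
  } catch {
    return false; // network error: treat as cold/unreachable, alert upstream
  }
}
```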
Supabase database webhooks may fire duplicate events for a single status change under retry conditions, causing coordinators to receive multiple identical notifications for one pause event.
Mitigation & Contingency
Mitigation: Add idempotency checking in the webhook handler using the event timestamp and peer mentor ID. Store a notification dispatch record in the pause-status-record-repository and skip dispatch if a record for the same event already exists.
Contingency: If duplicates slip through in production, add a de-duplication filter in the notification centre UI layer so the coordinator sees at most one card per event, and implement a cleanup job for the notifications table.
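The idempotency check in the mitigation above can be sketched as a key derived from the event timestamp and peer mentor ID, checked against a dispatch-record store. The repository interface and key format are assumptions standing in for the real pause-status-record-repository:

```typescript
// Sketch of the duplicate-webhook guard. DispatchRecordRepo is an
// illustrative stand-in for pause-status-record-repository.
interface DispatchRecordRepo {
  exists(key: string): Promise<boolean>;
  save(key: string): Promise<void>;
}

// Stable key from the event timestamp and peer mentor ID, per the mitigation.
export function idempotencyKey(mentorId: string, occurredAt: string): string {
  return `pause-notify:${mentorId}:${occurredAt}`;
}

export async function shouldDispatch(
  repo: DispatchRecordRepo,
  mentorId: string,
  occurredAt: string,
): Promise<boolean> {
  const key = idempotencyKey(mentorId, occurredAt);
  if (await repo.exists(key)) return false; // duplicate delivery: skip dispatch
  await repo.save(key);
  return true;
}
```

Note the check-then-save here is not atomic; under concurrent retries a unique constraint on the stored key would be the robust backstop.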
A peer mentor with multi-chapter membership may have more than one responsible coordinator. The orchestrator design currently targets a single coordinator, and resolving multiple recipients may require schema changes to the org membership query.
Mitigation & Contingency
Mitigation: Review the multi-chapter-membership-service patterns before implementing the orchestrator's coordinator resolution. Design the dispatcher call to accept an array of coordinator IDs from the outset so adding multiple recipients is non-breaking.
Contingency: If multi-coordinator dispatch is out of scope for this epic, document the limitation and create a follow-up task. Default to the primary coordinator (lowest chapter hierarchy level) as the single recipient in the interim.
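The mitigation and contingency above can be combined in one recipient-resolution sketch: the dispatcher-facing signature accepts an array from the outset, while the interim default narrows to the single primary coordinator at the lowest chapter hierarchy level. The `Coordinator` type and feature flag are illustrative assumptions:

```typescript
// Sketch of recipient resolution. Coordinator and multiDispatchEnabled
// are assumptions; the array return type is the non-breaking contract
// recommended in the mitigation.
interface Coordinator {
  id: string;
  hierarchyLevel: number; // lower = closer to the org root
}

export function resolveRecipients(
  coordinators: Coordinator[],
  multiDispatchEnabled: boolean,
): string[] {
  if (coordinators.length === 0) return [];
  if (multiDispatchEnabled) return coordinators.map((c) => c.id);
  // Interim default: single primary coordinator (lowest hierarchy level).
  const primary = coordinators.reduce((a, b) =>
    b.hierarchyLevel < a.hierarchyLevel ? b : a
  );
  return [primary.id];
}
```

Because the dispatcher already takes `string[]`, flipping `multiDispatchEnabled` on later touches only this resolver, not the dispatch call sites.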