Integrate HLFDynamicsSyncService for portal visibility updates
epic-certification-management-core-logic-task-006 — Wire HLFDynamicsSyncService into CertificationManagementService so that after any expiry state change the Dynamics portal visibility is updated synchronously. On expiry: suppress the mentor from the HLF public portal. On valid renewal: restore visibility. Handle sync failures with retry logic and surface errors to the caller without aborting the local state update.
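The control flow described above — local state change first, then portal sync, with failures surfaced but never rolling back the local update — can be sketched as follows. The real service is Dart; this TypeScript sketch is for illustration only, and everything beyond the names in the task text (`onExpiryStateChange`, the in-memory failure list standing in for `dynamics_sync_failures`) is a hypothetical assumption:

```typescript
class DynamicsSyncException extends Error {}

interface HLFDynamicsSyncService {
  // true = visible on the HLF public portal, false = suppressed
  updatePortalVisibility(mentorId: string, visible: boolean): Promise<void>;
}

class CertificationManagementService {
  private localState = new Map<string, boolean>();

  constructor(
    private sync: HLFDynamicsSyncService,
    private failures: string[] = [], // stand-in for dynamics_sync_failures
  ) {}

  /** Applies the local expiry state change first, then syncs the portal. */
  async onExpiryStateChange(mentorId: string, nowValid: boolean): Promise<void> {
    this.localState.set(mentorId, nowValid); // local update always happens
    try {
      // expiry → suppress (visible=false); valid renewal → restore (true)
      await this.sync.updatePortalVisibility(mentorId, nowValid);
    } catch {
      // Surface the failure to the caller WITHOUT rolling back local state.
      this.failures.push(mentorId);
      throw new DynamicsSyncException(`portal sync failed for ${mentorId}`);
    }
  }

  isValid(mentorId: string): boolean {
    return this.localState.get(mentorId) === true;
  }
}
```

Note the ordering: the local update commits before the sync call, so a Dynamics outage degrades portal visibility, never local correctness.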
Acceptance Criteria
Technical Requirements
Execution Context
Tier 5 - 253 tasks
Can start after Tier 4 completes
Implementation Notes
Implement `HLFDynamicsSyncService` as a thin client that calls a Supabase Edge Function (`hlf-dynamics-sync`) rather than hitting the Dynamics REST API directly from Dart. The Edge Function holds the Azure AD token lifecycle and retries at the server level for network-layer failures. The Dart-side retry in `CertificationManagementService` is the application-layer retry for transient Edge Function availability issues (cold starts, rate limits). Use a helper `RetryPolicy` class with configurable maxAttempts and backoff strategy to keep the retry logic reusable across other sync services (Xledger, Bufdir).
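The reusable `RetryPolicy` helper suggested above might look like this. A TypeScript sketch (the real helper would be Dart-side); the exponential-backoff default and the `execute` method shape are assumptions, not a settled API:

```typescript
// attempt is 1-based; returns the delay in milliseconds before the next try
type BackoffStrategy = (attempt: number) => number;

class RetryPolicy {
  constructor(
    private maxAttempts: number,
    private backoff: BackoffStrategy = (a) => 200 * 2 ** (a - 1), // exponential
    private sleep: (ms: number) => Promise<void> = (ms) =>
      new Promise((resolve) => setTimeout(resolve, ms)),
  ) {}

  /** Runs `op`, retrying on failure up to maxAttempts total attempts. */
  async execute<T>(op: () => Promise<T>): Promise<T> {
    let lastError: unknown;
    for (let attempt = 1; attempt <= this.maxAttempts; attempt++) {
      try {
        return await op();
      } catch (e) {
        lastError = e;
        if (attempt < this.maxAttempts) await this.sleep(this.backoff(attempt));
      }
    }
    // Retries exhausted; the caller logs to dynamics_sync_failures and rethrows.
    throw lastError;
  }
}
```

Injecting `sleep` keeps the policy unit-testable without real delays, and the same instance can be shared by the Xledger and Bufdir sync services.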
The `dynamics_sync_failures` table should have a `resolved_at` nullable column — set it when the next successful sync clears the failure. Consider using Supabase Realtime to listen for `dynamics_sync_failures` inserts on the coordinator dashboard so they can be notified of pending visibility issues without polling.
Testing Requirements
Unit tests:
1. Verify `updatePortalVisibility(false)` is called on expiry state change.
2. Verify `updatePortalVisibility(true)` is called on valid renewal.
3. Verify retry is attempted up to 3 times on failure before logging to `dynamics_sync_failures`.
4. Verify local state is not rolled back when all retries fail.
5. Verify `DynamicsSyncException` is thrown after retry exhaustion.
6. Verify sync is skipped for non-HLF organisations.
Mock `HLFDynamicsSyncService` using `mocktail` with configurable failure counts.
Integration tests: use a mock Dynamics REST endpoint (a local HTTP server or WireMock equivalent) to simulate transient failures and assert the retry loop behaves correctly. Test `dynamics_sync_failures` table insertion against a live Supabase test instance.
The auto-pause workflow requires `CertificationManagementService` to call `PauseManagementService` and `HLFDynamicsSyncService` in the same logical transaction. If `PauseManagementService` succeeds but the Dynamics sync call fails, the mentor is paused locally yet remains visible on the HLF portal.
Mitigation & Contingency
Mitigation: Implement a saga-style outbox pattern: write a pending sync event to the database before calling Dynamics, and have a background retry job consume pending events. This guarantees eventual consistency even if the sync call fails transiently.
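The mitigation above can be sketched as an outbox drained by the background retry job. All names here (`SyncOutbox`, `enqueue`, `drain`) are hypothetical, and the in-memory array stands in for the pending-events table:

```typescript
interface SyncEvent {
  mentorId: string;
  visible: boolean;
  status: "pending" | "done";
}

class SyncOutbox {
  private events: SyncEvent[] = [];

  /** Step 1: durably record intent BEFORE the remote Dynamics call. */
  enqueue(mentorId: string, visible: boolean): SyncEvent {
    const ev: SyncEvent = { mentorId, visible, status: "pending" };
    this.events.push(ev);
    return ev;
  }

  pending(): SyncEvent[] {
    return this.events.filter((e) => e.status === "pending");
  }

  /** Step 2: the background job retries each pending event; failures stay pending. */
  async drain(send: (ev: SyncEvent) => Promise<void>): Promise<void> {
    for (const ev of this.pending()) {
      try {
        await send(ev);
        ev.status = "done"; // eventual consistency reached
      } catch {
        // still pending; the next scheduled run will retry it
      }
    }
  }
}
```

Because the event is written before the remote call, a crash or transient failure between pause and sync leaves a pending row behind rather than silently losing the visibility update.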
Contingency: If the Dynamics sync fails after auto-pause, surface an explicit coordinator alert in the dashboard indicating 'Dynamics sync pending — mentor may still be visible on portal'. Allow manual retry from coordinator UI.
If two invocations of the nightly cron job run concurrently (e.g. due to an infrastructure-level retry), `CertificationReminderService` could dispatch duplicate notifications to mentors before the first invocation's `cert_notification_log` insert becomes visible to the second.
Mitigation & Contingency
Mitigation: Enforce a unique constraint on (`mentor_id`, `threshold_days`, `cert_id`) in `cert_notification_log` and insert via Supabase upsert with `ignoreDuplicates: true`. The second concurrent insert is rejected by the constraint and skipped gracefully, so the duplicate dispatch is never sent.
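The dedup guard that the unique constraint provides can be illustrated with a small idempotent-insert sketch; the in-memory `Set` stands in for the database constraint, and the class name is hypothetical:

```typescript
class CertNotificationLog {
  private seen = new Set<string>(); // stand-in for the unique constraint

  /** Returns true if this is the first insert (dispatch should proceed). */
  tryInsert(mentorId: string, thresholdDays: number, certId: string): boolean {
    const key = `${mentorId}:${thresholdDays}:${certId}`;
    if (this.seen.has(key)) return false; // duplicate → skip dispatch
    this.seen.add(key);
    return true;
  }
}
```

The dispatch path then becomes "insert first, send only if the insert succeeded", which stays correct no matter how many concurrent cron invocations race.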
Contingency: If duplicate notifications do reach mentors, add a post-dispatch dedup check and include a 'you may receive this notification again' disclaimer until the constraint is deployed.