Priority: Critical · Complexity: Medium · Type: Integration · Status: Pending · Assignee: Integration Specialist · Execution Tier: 5

Acceptance Criteria

After any expiry state change, `HLFDynamicsSyncService.updatePortalVisibility(mentorId, visible: bool)` is called with `visible: false` on expiry and `visible: true` on valid renewal
The Dynamics sync call is made via a Supabase Edge Function — never directly from the mobile client using Dynamics credentials
If the Dynamics sync fails on first attempt, the service retries up to 3 times with exponential backoff (1s, 2s, 4s) before giving up
After exhausting retries, the sync failure is logged to a `dynamics_sync_failures` table with mentor_id, certification_id, attempted_at, error_code, and retry_count for manual recovery
The local certification status update and auto-pause (if applicable) are NOT rolled back when Dynamics sync fails — local state is preserved
The error is surfaced to the BLoC/caller as a `DynamicsSyncException` so the UI can show a non-blocking warning (e.g., 'Portal visibility update pending')
On successful renewal, if a previous sync failure record exists for the mentor, it is marked as resolved
The sync logic is HLF-organisation-scoped — the service checks `organization_id` before calling Dynamics and silently skips for non-HLF organisations
Unit tests verify the retry loop, failure logging, and organisation scoping
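The criteria above describe a retry-then-log flow. A minimal sketch of that flow (names are illustrative; the production version would live in `CertificationManagementService`, and the delay schedule is injectable so tests can run without real waits):

```typescript
// Surfaced to the BLoC/caller after retry exhaustion (see criteria above).
class DynamicsSyncException extends Error {}

// Mirrors the dynamics_sync_failures columns from the criteria.
interface FailureLog {
  mentorId: string;
  certificationId: string;
  attemptedAt: Date;
  errorCode: string;
  retryCount: number;
}

async function syncPortalVisibility(
  callEdgeFunction: () => Promise<void>,          // the Supabase Edge Function invocation
  logFailure: (f: FailureLog) => Promise<void>,   // insert into dynamics_sync_failures
  mentorId: string,
  certificationId: string,
  retryDelaysMs: number[] = [1000, 2000, 4000],   // 1s, 2s, 4s backoff; injectable for tests
): Promise<void> {
  let lastError: unknown;
  const totalAttempts = retryDelaysMs.length + 1; // first try + 3 retries
  for (let i = 0; i < totalAttempts; i++) {
    try {
      await callEdgeFunction();
      return; // success: local state untouched, nothing to log
    } catch (err) {
      lastError = err;
      if (i < retryDelaysMs.length) {
        // Non-blocking backoff: awaiting a timer never blocks the UI thread.
        await new Promise((r) => setTimeout(r, retryDelaysMs[i]));
      }
    }
  }
  // Retries exhausted: persist for manual recovery, keep local state, surface to caller.
  await logFailure({
    mentorId,
    certificationId,
    attemptedAt: new Date(),
    errorCode: String(lastError),
    retryCount: retryDelaysMs.length,
  });
  throw new DynamicsSyncException(`portal visibility sync failed for mentor ${mentorId}`);
}
```

Note that the local certification update is never rolled back here; the exception only tells the UI to show the non-blocking warning.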

Technical Requirements

Frameworks
Flutter
Dart
Supabase
APIs
Microsoft Dynamics 365 REST API
Supabase Edge Functions (Deno)
Supabase PostgreSQL 15
Data models
certification
Performance requirements
Retry backoff must not block the UI thread — use `Future.delayed` within an async retry loop
Total retry window must not exceed 15 seconds to avoid holding the mobile session open
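The 15-second ceiling can be checked up front rather than discovered in testing. A hypothetical budget helper (assumes each attempt is itself bounded by a per-attempt timeout, which is not stated above but is needed for the bound to hold):

```typescript
// Worst-case wall-clock time for a retry schedule: every attempt times out,
// plus the backoff waits between attempts.
function totalRetryWindowMs(retryDelaysMs: number[], perAttemptTimeoutMs: number): number {
  const attempts = retryDelaysMs.length + 1; // first try + one retry per delay
  const totalWaits = retryDelaysMs.reduce((sum, d) => sum + d, 0);
  return attempts * perAttemptTimeoutMs + totalWaits;
}
```

With the 1 s / 2 s / 4 s schedule and a 2 s per-attempt timeout, the worst case is 4 × 2000 + 7000 = 15 000 ms, exactly at the budget; a longer per-attempt timeout would require shortening the schedule.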
Security requirements
Azure AD credentials for Dynamics 365 stored server-side only in Edge Function environment secrets — never in the Flutter app binary
Minimal required Dynamics permissions: write portal visibility field only — no read access to broader Dynamics data
OAuth token rotation enforced via Azure AD token lifetime policies on the Edge Function side
Sync scoped to HLF organisation via credential isolation — credentials for one org never accessible to another
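To make the "server-side only" requirement concrete: inside the Edge Function, the Azure AD client-credentials request is assembled from environment secrets, so nothing sensitive ever reaches the Flutter binary. A sketch using the standard OAuth2 client-credentials grant shape (tenant, client, and host values are placeholders):

```typescript
// Builds the Azure AD token request from server-side secrets. In a Deno Edge
// Function the secret values would come from environment secrets, never from
// client input or the app bundle.
function buildTokenRequest(opts: {
  tenantId: string;
  clientId: string;
  clientSecret: string; // read from Edge Function environment secrets
  dynamicsHost: string; // e.g. the org's *.crm.dynamics.com host (placeholder)
}): { url: string; body: string } {
  return {
    url: `https://login.microsoftonline.com/${opts.tenantId}/oauth2/v2.0/token`,
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: opts.clientId,
      client_secret: opts.clientSecret,
      // Per-org host keeps credentials isolated: one org's scope never
      // grants access to another org's Dynamics instance.
      scope: `https://${opts.dynamicsHost}/.default`,
    }).toString(),
  };
}
```

Token lifetime and rotation stay on the Azure AD side, per the requirement above; the function only ever holds short-lived access tokens in memory.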

Execution Context

Execution Tier
Tier 5

Tier 5 - 253 tasks

Can start after Tier 4 completes

Implementation Notes

Implement `HLFDynamicsSyncService` as a thin client that calls a Supabase Edge Function (`hlf-dynamics-sync`) rather than hitting the Dynamics REST API directly from Dart. The Edge Function holds the Azure AD token lifecycle and retries at the server level for network-layer failures. The Dart-side retry in `CertificationManagementService` is the application-layer retry for transient Edge Function availability issues (cold starts, rate limits). Use a helper `RetryPolicy` class with configurable maxAttempts and backoff strategy to keep the retry logic reusable across other sync services (Xledger, Bufdir).
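The `RetryPolicy` helper described above might look like the following (sketched in TypeScript; the Dart version used by `CertificationManagementService` would mirror this shape, with `Future.delayed` in place of the timer). The sleep function is injectable so unit tests run without real delays:

```typescript
type BackoffStrategy = (attempt: number) => number; // 0-based attempt → delay in ms

class RetryPolicy {
  constructor(
    private readonly maxAttempts: number,
    private readonly backoff: BackoffStrategy,
    // Injectable sleep keeps tests fast and the retry loop non-blocking.
    private readonly sleep: (ms: number) => Promise<void> =
      (ms) => new Promise((r) => setTimeout(r, ms)),
  ) {}

  async run<T>(operation: () => Promise<T>): Promise<T> {
    let lastError: unknown;
    for (let attempt = 0; attempt < this.maxAttempts; attempt++) {
      try {
        return await operation();
      } catch (err) {
        lastError = err;
        if (attempt < this.maxAttempts - 1) {
          await this.sleep(this.backoff(attempt));
        }
      }
    }
    // Caller wraps this in DynamicsSyncException (or the Xledger/Bufdir equivalents).
    throw lastError;
  }
}

// Doubling backoff: base 1000 ms gives waits of 1s, 2s, 4s between four attempts.
const exponentialBackoff = (baseMs: number): BackoffStrategy =>
  (attempt) => baseMs * 2 ** attempt;
```

Keeping the exception wrapping outside `RetryPolicy` is what makes the class reusable across the other sync services mentioned above.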

The `dynamics_sync_failures` table should have a `resolved_at` nullable column — set it when the next successful sync clears the failure. Consider using Supabase Realtime to listen for `dynamics_sync_failures` inserts on the coordinator dashboard so they can be notified of pending visibility issues without polling.
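The resolution step on successful renewal can be kept as a pure transformation over the failure rows, which makes it trivially unit-testable. A sketch (row shape mirrors the `dynamics_sync_failures` columns listed in the acceptance criteria; field names are illustrative):

```typescript
interface SyncFailureRow {
  mentor_id: string;
  certification_id: string;
  attempted_at: string;
  error_code: string;
  retry_count: number;
  resolved_at: string | null; // nullable, set when a later sync succeeds
}

// Stamp resolved_at on every open failure row for the mentor whose renewal
// just synced successfully; rows for other mentors are left untouched.
function resolveFailuresFor(rows: SyncFailureRow[], mentorId: string, now: Date): SyncFailureRow[] {
  return rows.map((r) =>
    r.mentor_id === mentorId && r.resolved_at === null
      ? { ...r, resolved_at: now.toISOString() }
      : r,
  );
}
```

In production the same predicate would be an `UPDATE ... WHERE mentor_id = ... AND resolved_at IS NULL`, so the Realtime listener on the coordinator dashboard sees the row change without polling.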

Testing Requirements

Unit tests: (1) verify `updatePortalVisibility(false)` is called on expiry state change, (2) verify `updatePortalVisibility(true)` is called on valid renewal, (3) verify retry is attempted up to 3 times on failure before logging to `dynamics_sync_failures`, (4) verify local state is not rolled back when all retries fail, (5) verify `DynamicsSyncException` is thrown after retry exhaustion, (6) verify sync is skipped for non-HLF organisations. Mock `HLFDynamicsSyncService` using `mocktail` with configurable failure counts. Integration test: use a mock Dynamics REST endpoint (via a local HTTP server or WireMock equivalent) to simulate transient failures and assert the retry loop behaves correctly. Test the `dynamics_sync_failures` table insertion against a live Supabase test instance.

Component
Certification Management Service
Type: Service · Priority: High
Epic Risks (2)
Risk: Technical · Impact: High · Probability: Medium

The auto-pause workflow requires CertificationManagementService to call PauseManagementService and HLFDynamicsSyncService in the same logical transaction. If PauseManagementService succeeds but the Dynamics webhook fails, the mentor is paused locally but remains visible on the HLF portal.

Mitigation & Contingency

Mitigation: Implement a saga pattern: write a pending sync event to the database before calling Dynamics, and have a background retry job consume pending events. This guarantees eventual consistency even if the webhook fails transiently.

Contingency: If the Dynamics sync fails after auto-pause, surface an explicit coordinator alert in the dashboard indicating 'Dynamics sync pending — mentor may still be visible on portal'. Allow manual retry from coordinator UI.
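The saga-pattern mitigation above is essentially a transactional outbox. A minimal in-memory sketch of the shape (production would back the queue with a Postgres table written in the same local transaction as the pause, and the drain loop would be the background retry job):

```typescript
interface SyncEvent {
  mentorId: string;
  visible: boolean;
  attempts: number;
  done: boolean;
}

class SyncOutbox {
  private events: SyncEvent[] = [];

  // Written BEFORE calling Dynamics, so a webhook failure or crash loses nothing.
  enqueue(mentorId: string, visible: boolean): void {
    this.events.push({ mentorId, visible, attempts: 0, done: false });
  }

  // Background job: deliver pending events; failures stay queued for the next run,
  // which is what gives eventual consistency between local pause and portal visibility.
  async drain(send: (e: SyncEvent) => Promise<void>): Promise<number> {
    let delivered = 0;
    for (const e of this.events) {
      if (e.done) continue;
      try {
        e.attempts++;
        await send(e);
        e.done = true;
        delivered++;
      } catch {
        // left pending; coordinator dashboard can alert on events with attempts > N
      }
    }
    return delivered;
  }

  pending(): number {
    return this.events.filter((e) => !e.done).length;
  }
}
```

The `pending()` count doubles as the signal for the contingency: any non-zero value drives the "Dynamics sync pending" coordinator alert and the manual-retry button.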

Risk: Technical · Impact: Medium · Probability: Low

If the nightly cron job runs concurrently (e.g., due to infra retry), CertificationReminderService could dispatch duplicate notifications to mentors before the cert_notification_log insert is visible to the second invocation.

Mitigation & Contingency

Mitigation: Use Supabase's upsert with a unique constraint on (mentor_id, threshold_days, cert_id) in cert_notification_log. The second concurrent insert will fail gracefully and the duplicate dispatch will be skipped.

Contingency: If duplicate notifications do reach mentors, add a post-dispatch dedup check and include a 'you may receive this notification again' disclaimer until the constraint is deployed.
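The dedup semantics of the unique constraint can be modelled in isolation: whichever concurrent invocation wins the insert on (mentor_id, threshold_days, cert_id) dispatches; the loser skips. An in-memory stand-in for the constraint (production relies on the real Postgres unique index, not application state):

```typescript
// Models the unique constraint on (mentor_id, threshold_days, cert_id) in
// cert_notification_log. tryInsert returns true only for the invocation that
// "won" the insert and should therefore dispatch the notification.
class NotificationLog {
  private seen = new Set<string>();

  tryInsert(mentorId: string, thresholdDays: number, certId: string): boolean {
    const key = `${mentorId}:${thresholdDays}:${certId}`;
    if (this.seen.has(key)) return false; // unique violation → skip dispatch
    this.seen.add(key);
    return true;
  }
}
```

Note the key includes `threshold_days`, so the same certification legitimately produces one notification per reminder threshold while concurrent duplicates within a threshold are suppressed.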