Build Dynamics 365 portal API client with retry logic
epic-peer-mentor-pause-management-foundation-task-008 — Implement the DynamicsPortalClient Dart class for communicating with HLF's Dynamics 365 portal REST API. Include authentication header management, exponential backoff retry logic for transient failures, request/response logging, and typed response models. Ensure secrets are loaded from Supabase Vault, not hardcoded.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
Keep DynamicsPortalClient focused purely on HTTP mechanics, with no business logic. Define a CredentialProvider abstract class whose token accessor returns a Future, so credential retrieval (from Supabase Vault) stays asynchronous and swappable.
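A minimal sketch of the CredentialProvider abstraction, plus a caching decorator of the kind the token-cache test cases below imply. The method name `getToken`, the `CachingCredentialProvider` class, and the 55-minute TTL are all illustrative assumptions, not part of the spec:

```dart
import 'dart:async';

/// Abstract credential source. A real implementation would read the
/// secret from Supabase Vault; `getToken` is an assumed method name.
abstract class CredentialProvider {
  Future<String> getToken();
}

/// Hypothetical caching decorator: serves a cached token until [ttl]
/// elapses, then re-fetches from the wrapped provider.
class CachingCredentialProvider implements CredentialProvider {
  CachingCredentialProvider(this._inner,
      {this.ttl = const Duration(minutes: 55)});

  final CredentialProvider _inner;
  final Duration ttl;
  String? _token;
  DateTime? _fetchedAt;

  @override
  Future<String> getToken() async {
    final now = DateTime.now();
    final fresh = _token != null &&
        _fetchedAt != null &&
        now.difference(_fetchedAt!) < ttl;
    if (fresh) return _token!;
    _token = await _inner.getToken();
    _fetchedAt = now;
    return _token!;
  }
}

/// Counting stub, included only to demonstrate the cache behaviour.
class CountingProvider implements CredentialProvider {
  int calls = 0;
  @override
  Future<String> getToken() async {
    calls++;
    return 'token-$calls';
  }
}
```

Decorating rather than baking the cache into DynamicsPortalClient keeps the client free of credential logic, matching the separation this note calls for.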
Use an http.Client injected via the constructor rather than calling http.get() directly; this is the standard approach for testable Dart HTTP code. Map Dynamics-specific error response bodies (JSON with error.code and error.message) to the DynamicsPortalClientException.code enum so callers get meaningful error types. Since this client runs inside a Supabase Edge Function (a Deno runtime), ensure all Dart code compiles cleanly with dart2js; avoid dart:mirrors (unsupported by dart2js) and platform-specific packages.
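A sketch of the error mapping and backoff pieces described above. The enum values beyond authFailure and serverError, the status-to-code mapping table, and the backoff constants are assumptions to be confirmed against the acceptance criteria:

```dart
import 'dart:convert';
import 'dart:math';

/// Illustrative error taxonomy; authFailure and serverError come from the
/// testing requirements, the other values are assumed.
enum DynamicsErrorCode { authFailure, rateLimited, serverError, badRequest, unknown }

class DynamicsPortalClientException implements Exception {
  DynamicsPortalClientException(this.code, this.message);
  final DynamicsErrorCode code;
  final String message;
  @override
  String toString() => 'DynamicsPortalClientException($code): $message';
}

/// Maps an HTTP status plus a Dynamics error body
/// ({"error": {"code": ..., "message": ...}}) to a typed exception.
DynamicsPortalClientException mapError(int status, String body) {
  var message = 'HTTP $status';
  try {
    final decoded = jsonDecode(body) as Map<String, dynamic>;
    final err = decoded['error'] as Map<String, dynamic>?;
    if (err != null) message = '${err['code']}: ${err['message']}';
  } catch (_) {
    // Non-JSON body; keep the status-only message.
  }
  final code = switch (status) {
    401 || 403 => DynamicsErrorCode.authFailure,
    429 => DynamicsErrorCode.rateLimited,
    >= 500 => DynamicsErrorCode.serverError,
    >= 400 => DynamicsErrorCode.badRequest,
    _ => DynamicsErrorCode.unknown,
  };
  return DynamicsPortalClientException(code, message);
}

/// Exponential backoff with jitter: base * 2^attempt, capped, plus up to
/// 25% random jitter so concurrent retries spread out.
Duration backoffDelay(int attempt,
    {Duration base = const Duration(milliseconds: 500),
    Duration cap = const Duration(seconds: 30),
    Random? rng}) {
  final raw = base.inMilliseconds * pow(2, attempt);
  final capped = min(raw, cap.inMilliseconds).toInt();
  final jitter = (rng ?? Random()).nextInt(capped ~/ 4 + 1);
  return Duration(milliseconds: capped + jitter);
}
```

Keeping `mapError` and `backoffDelay` as pure functions makes the retry and error-typing behaviour unit-testable without any HTTP traffic.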
Testing Requirements
Write flutter_test (or pure Dart test) unit tests using a mock HTTP client (mocktail or http's MockClient) covering:
1. A successful request returns a typed model.
2. A 429 response triggers a retry with the correct backoff delay.
3. A 401 response throws DynamicsPortalClientException with the authFailure code immediately (no retry).
4. Three consecutive 503s exhaust the retries and throw serverError.
5. A token cache hit avoids a second token request within the expiry window.
6. A token cache miss (expired) triggers a re-fetch before the next API call.
7. The constructor rejects HTTP base URLs.
8. Request log output contains no secret values.
Integration test (manual, or CI with test credentials): verify that a real token can be obtained and that a known Dynamics endpoint returns 200. Document how to run the integration tests with the required environment variables set.
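A pure-Dart sketch of cases (2) and (4), using a hand-rolled handler in place of http's MockClient so the snippet has no package dependencies; the real suite would inject a MockClient (from package:http/testing.dart) or a mocktail mock through the client's constructor. `fetchWithRetry` is a stand-in for the client's internal retry loop, not part of the spec:

```dart
import 'dart:async';

/// Minimal response type, standing in for package:http's Response.
class FakeResponse {
  FakeResponse(this.statusCode, this.body);
  final int statusCode;
  final String body;
}

typedef SendFn = Future<FakeResponse> Function(Uri uri);

/// Stand-in retry loop: retries 429 and 5xx responses up to [maxRetries]
/// extra attempts, passes everything else straight through. The real
/// client would also sleep for an exponential backoff between attempts.
Future<FakeResponse> fetchWithRetry(SendFn send, Uri uri,
    {int maxRetries = 3}) async {
  late FakeResponse response;
  for (var attempt = 0; attempt <= maxRetries; attempt++) {
    response = await send(uri);
    final transient = response.statusCode == 429 || response.statusCode >= 500;
    if (!transient) return response;
  }
  return response;
}
```

Counting invocations inside the fake handler is how the test asserts both the retry count and the exhaustion behaviour without any timing dependence.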
Supabase RLS policies for coordinator-scoped status queries may be difficult to express correctly, especially for peer mentors assigned to multiple coordinators or chapters; a wrong policy risks either data leakage or overly restrictive access that blocks valid queries.
Mitigation & Contingency
Mitigation: Design RLS policies using security-definer RPCs rather than table-level policies for complex multi-coordinator scenarios. Write a comprehensive RLS test matrix covering all role and assignment permutations before marking complete.
Contingency: Fall back to application-level filtering in the repository layer with explicit coordinator_id parameter checks if RLS proves intractable, and document the trade-off for security review.
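A sketch of that contingency in the repository layer. Class and column names are hypothetical, and the real implementation would apply the filter in the Supabase query rather than over an in-memory list; the point is the fail-closed check on the explicit coordinatorId parameter:

```dart
/// Contingency sketch: application-level coordinator scoping.
class MentorStatusRepository {
  MentorStatusRepository(this._rows);

  /// Stand-in for rows returned by a status query.
  final List<Map<String, Object?>> _rows;

  List<Map<String, Object?>> statusesForCoordinator(String coordinatorId) {
    if (coordinatorId.isEmpty) {
      // Refuse unscoped queries outright so a missing parameter can
      // never widen into an all-rows read.
      throw ArgumentError.value(
          coordinatorId, 'coordinatorId', 'must not be empty');
    }
    return _rows
        .where((row) => row['coordinator_id'] == coordinatorId)
        .toList();
  }
}
```

Making the scope parameter mandatory and rejecting empty values is the trade-off to document for security review: the filter is enforced in application code, not by the database.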
The HLF Dynamics portal API contract may be undocumented or subject to change, causing the DynamicsPortalClient to break during development or production rollout.
Mitigation & Contingency
Mitigation: Obtain the full Dynamics portal API specification and credentials early in the sprint. Build the client behind a well-defined interface so the HLF-specific implementation can be swapped without affecting upstream services.
Contingency: If the Dynamics API is unavailable or unstable, stub the client with a feature-flag-guarded no-op implementation so all other epics can proceed to completion independently.
Supabase Edge Functions used as the nightly scheduler host may have cold-start latency or execution time limits that prevent reliable nightly certification checks on large mentor rosters.
Mitigation & Contingency
Mitigation: Benchmark Edge Function execution time against the expected roster size. Design the expiry check to process in paginated batches to stay within execution limits. Use pg_cron with a direct database function as an alternative trigger if Edge Functions prove unreliable.
Contingency: Migrate the scheduler trigger to pg_cron invoking a Postgres function directly, removing the Edge Function dependency entirely for the scheduling layer.
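The paginated-batch mitigation above can be sketched with a generic helper; in production each page would come from a Supabase range query against the roster table rather than an in-memory list, and each Edge Function invocation would process one or a few pages before yielding:

```dart
/// Splits [items] into consecutive pages of at most [pageSize] elements,
/// so each batch stays within an execution-time budget.
Iterable<List<T>> paginate<T>(List<T> items, int pageSize) sync* {
  if (pageSize <= 0) {
    throw ArgumentError.value(pageSize, 'pageSize', 'must be positive');
  }
  for (var i = 0; i < items.length; i += pageSize) {
    yield items.skip(i).take(pageSize).toList();
  }
}
```

Because the last page may be short, the expiry check must treat a partial page as a normal batch, not as an error.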