Implement expiry-based query methods on CertificationRepository
epic-certification-management-foundation-task-004 — Extend CertificationRepository with expiry-aware queries: getExpiringSoon(withinDays), getExpiredCertifications, getActiveCertifications, getCertificationsExpiringThisMonth. These methods feed the nightly cron job and coordinator dashboard widgets. Queries must be optimised with server-side date arithmetic and indexed columns.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
Use Supabase PostgREST filters directly: .gte('expires_at', DateTime.now().toIso8601String()) and .lte('expires_at', DateTime.now().add(Duration(days: withinDays)).toIso8601String()). For getCertificationsExpiringThisMonth, compute the last day of the current month in Dart with DateTime(now.year, now.month + 1, 0) (Dart normalises day 0 to the last day of the previous month), derive withinDays from its difference to now, and delegate to getExpiringSoon. Avoid calling getExpiringSoon(90) and then filtering client-side to implement getExpiringSoon(30) — each method should issue its own targeted query. Add a brief dartdoc comment (///) above each method explaining its use case (cron job vs. dashboard widget) to guide future maintainers. Consider adding a stream-based variant (watchExpiringSoon) later using Supabase Realtime for live dashboard updates.
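The notes above can be sketched as follows. This is a minimal, unverified sketch assuming a `certifications` table with an `expires_at` column, the supabase_flutter v2 client, and an injected `SupabaseClient`; class and table names are illustrative, not the final implementation:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

class CertificationRepository {
  CertificationRepository(this._client);
  final SupabaseClient _client;

  /// Dashboard widget + nightly cron: certifications whose expiry falls
  /// between now and now + [withinDays] days.
  Future<List<Map<String, dynamic>>> getExpiringSoon(int withinDays) {
    final now = DateTime.now().toUtc();
    final cutoff = now.add(Duration(days: withinDays));
    return _client
        .from('certifications')
        .select()
        .gte('expires_at', now.toIso8601String())
        .lte('expires_at', cutoff.toIso8601String());
  }

  /// Dashboard widget: certifications expiring before month end.
  Future<List<Map<String, dynamic>>> getCertificationsExpiringThisMonth() {
    final now = DateTime.now();
    // Dart normalises day 0 to the last day of the previous month, so
    // (year, month + 1, 0) is the last day of the current month.
    final lastDay = DateTime(now.year, now.month + 1, 0, 23, 59, 59);
    return getExpiringSoon(lastDay.difference(now).inDays);
  }
}
```

Note the day-difference truncation in getCertificationsExpiringThisMonth is an approximation; if certifications expiring late on the final day must be included, pass the month-end timestamp to the query directly instead of converting to whole days.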
Testing Requirements
Unit tests (flutter_test with a mocked Supabase client): verify that getExpiringSoon(30) constructs a query with gte(now) and lte(now+30d) filters; verify that getExpiredCertifications constructs a lt(now) filter on expires_at. Integration tests against local Supabase: seed 5 active, 3 expiring-in-7d, 2 expired, and 1 suspended certification; assert each method returns the expected subset. Test that getExpiringSoon(0) returns an empty list (no certifications expiring in zero days). Test that a null expires_at is handled correctly (a certification with no expiry should appear in getActiveCertifications but not in getExpiringSoon).
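The month-boundary arithmetic is the easiest piece to unit-test in isolation. A sketch using package:test, assuming a hypothetical `lastDayOfMonth` helper extracted from the repository for testability:

```dart
import 'package:test/test.dart';

/// Hypothetical helper extracted from the repository so the date
/// arithmetic can be tested without a Supabase client.
DateTime lastDayOfMonth(DateTime now) => DateTime(now.year, now.month + 1, 0);

void main() {
  test('handles ordinary, leap-year, and year-end months', () {
    expect(lastDayOfMonth(DateTime(2025, 1, 15)).day, 31);
    expect(lastDayOfMonth(DateTime(2024, 2, 10)).day, 29); // leap year
    expect(lastDayOfMonth(DateTime(2025, 12, 31)).month, 12);
    expect(lastDayOfMonth(DateTime(2025, 12, 31)).day, 31);
  });
}
```

The filter-construction assertions described above would sit alongside this, with the Supabase query builder mocked.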
The HLF Dynamics portal webhook API contract may be undocumented, subject to change, or dependent on a separate authentication flow not yet agreed with HLF. If the contract changes post-implementation, the sync service could fail silently and expired peer mentors would remain on public listings.
Mitigation & Contingency
Mitigation: Obtain the official Dynamics webhook specification and test credentials from HLF before starting HLFDynamicsSyncService implementation. Agree on a versioned webhook contract and request a staging endpoint for integration testing.
Contingency: If the contract is unavailable, stub the sync service behind a feature flag and ship without Dynamics sync initially. Queue sync events locally and replay once the contract is confirmed.
Supabase RLS policies for certifications must correctly scope data to the coordinator's chapter without leaking cross-organisation data, which is particularly complex in multi-chapter membership scenarios. A misconfigured policy could expose peer mentor PII to the wrong coordinators.
Mitigation & Contingency
Mitigation: Write RLS policies against the established org-hierarchy schema used by other tables. Peer review all policies before migration deployment. Add integration tests that assert cross-organisation data isolation using test accounts with different org scopes.
Contingency: If a policy gap is discovered post-merge, immediately disable the affected query endpoint and apply a hotfix migration. Audit access logs in Supabase for any cross-org data access events.
Storing renewal history as a JSONB field rather than a normalised table simplifies queries but makes retrospective schema changes (adding fields to history entries) harder and could cause issues if history grows very large for long-tenured mentors.
Mitigation & Contingency
Mitigation: Define a versioned JSONB entry schema (include a schema_version field in each entry) so future migrations can transform old entries. Add a size guard in the repository to warn if renewal_history exceeds 500 entries.
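The versioned-entry idea can be sketched in Dart as follows. Field names, the threshold constant, and the entry builder are illustrative assumptions, not an agreed schema:

```dart
/// Illustrative shape of one renewal_history entry. The schema_version
/// field lets a future migration transform old entries in place.
Map<String, dynamic> buildRenewalEntry({
  required DateTime renewedAt,
  required String renewedBy,
}) =>
    {
      'schema_version': 1,
      'renewed_at': renewedAt.toUtc().toIso8601String(),
      'renewed_by': renewedBy,
    };

/// Size guard suggested in the mitigation above.
const int renewalHistoryWarnThreshold = 500;

void warnIfHistoryTooLarge(List<dynamic> history, void Function(String) log) {
  if (history.length > renewalHistoryWarnThreshold) {
    log('renewal_history has ${history.length} entries; '
        'consider migrating to certification_renewal_events');
  }
}
```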
Contingency: If JSONB approach proves limiting, add a normalised certification_renewal_events table and migrate history entries in a background job, keeping the JSONB field as a read cache.