Test PushNotificationService and idempotency end-to-end
epic-certification-management-foundation-task-013 — Write integration tests verifying that PushNotificationService correctly deduplicates notifications using cert_notification_log: send a notification, confirm it is recorded, attempt to send the same notification type for the same certification, and confirm it is suppressed. Also test the FCM token refresh flow and the batch send path with a mix of valid and stale tokens.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 2 - 518 tasks
Can start after Tier 1 completes
Implementation Notes
The idempotency key should be a composite of (peer_mentor_id, cert_type, notification_type) with an optional time-window condition (e.g., sent_at > now() - interval '24 hours') to allow re-notification after a cooldown. Inject the FCM client as an abstract interface so tests can substitute a MockFcmClient that captures calls without hitting real FCM. For the batch path, process tokens in chunks and collect results into a BatchSendResult object so assertions stay straightforward. When testing token refresh, simulate the stale token by inserting a known-bad token string before the test run.
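The batch path and the FCM seam described above could look like the following sketch. All names here (FcmClient, MockFcmClient, BatchSendResult, sendBatch) are illustrative stand-ins, not an existing API:

```dart
// Sketch of the batch path and the FCM seam described above. All names
// (FcmClient, MockFcmClient, BatchSendResult, sendBatch) are illustrative.
abstract class FcmClient {
  /// Returns true on delivery, false for a stale/invalid token.
  Future<bool> send(String token, String payload);
}

class MockFcmClient implements FcmClient {
  MockFcmClient({this.staleTokens = const <String>{}});

  final Set<String> staleTokens;
  final List<String> sentTokens = []; // captured calls for assertions

  @override
  Future<bool> send(String token, String payload) async {
    sentTokens.add(token);
    return !staleTokens.contains(token);
  }
}

class BatchSendResult {
  final List<String> delivered = [];
  final List<String> stale = [];
}

Future<BatchSendResult> sendBatch(
  FcmClient fcm,
  List<String> tokens,
  String payload, {
  int chunkSize = 100,
}) async {
  final result = BatchSendResult();
  for (var i = 0; i < tokens.length; i += chunkSize) {
    final end = (i + chunkSize < tokens.length) ? i + chunkSize : tokens.length;
    for (final token in tokens.sublist(i, end)) {
      if (await fcm.send(token, payload)) {
        result.delivered.add(token);
      } else {
        result.stale.add(token); // candidates for post-batch cleanup
      }
    }
  }
  return result;
}
```

Tests can then assert on result.delivered and result.stale directly, and on mockFcm.sentTokens to verify call counts.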
Use supabase_flutter's test utilities or a service locator (GetIt) to swap the Supabase client for a test instance. Ensure the PushNotificationService does not catch all exceptions silently — surface typed errors (FcmDeliveryException, IdempotencyException) so tests can assert on error type, not just absence of crash.
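A minimal sketch of the typed errors, with a wrapper that re-wraps raw transport failures so tests can assert on the error type. The FcmSend typedef and sendOrThrow helper are illustrative, not part of any real FCM package:

```dart
// Typed errors named in the notes above; FcmSend and sendOrThrow are
// illustrative helpers, not part of any real FCM package.
class FcmDeliveryException implements Exception {
  FcmDeliveryException(this.token, this.cause);
  final String token;
  final Object cause;
  @override
  String toString() => 'FcmDeliveryException(token: $token, cause: $cause)';
}

class IdempotencyException implements Exception {
  IdempotencyException(this.key);
  final String key; // e.g. "pm-1/cpr/expiry_30d"
  @override
  String toString() => 'IdempotencyException(key: $key)';
}

typedef FcmSend = Future<void> Function(String token, String payload);

/// Re-wraps raw transport failures so tests can assert on the error type
/// instead of just the absence of a crash.
Future<void> sendOrThrow(FcmSend send, String token, String payload) async {
  try {
    await send(token, payload);
  } catch (e) {
    throw FcmDeliveryException(token, e);
  }
}
```

For the locator swap, GetIt's allowReassignment flag lets test setup overwrite production registrations with the test Supabase client and the mock FCM client.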
Testing Requirements
Integration tests using flutter_test with a real Supabase test instance (or supabase-local via Docker) and a mocked FCM client. Cover: (1) happy path single send + log insertion, (2) duplicate suppression with same peer_mentor_id + cert_type + notification_type, (3) FCM token refresh flow with token invalidation and re-fetch, (4) batch send with mixed valid/stale tokens verifying partial success, (5) stale token cleanup post-batch. Use setUp/tearDown to seed and clean cert_notification_log. Verify database state with direct Supabase queries inside tests.
No e2e device tests required for this task — integration level is sufficient.
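The duplicate-suppression case (2) might be skeletoned as below. PushNotificationService's constructor and its sendCertNotification method are assumptions about the service under test, and the skeleton needs the live Supabase test instance and a mockFcm fixture, so treat it as a shape rather than runnable code:

```dart
// Skeleton for case (2). PushNotificationService's constructor and
// sendCertNotification are assumptions about the service under test;
// `supabase` is the swapped-in test-instance client.
test('second identical notification is suppressed', () async {
  final service = PushNotificationService(fcm: mockFcm, db: supabase);

  await service.sendCertNotification(
      peerMentorId: 'pm-1', certType: 'cpr', notificationType: 'expiry_30d');
  await service.sendCertNotification(
      peerMentorId: 'pm-1', certType: 'cpr', notificationType: 'expiry_30d');

  // One FCM call and one log row, verified directly against the database.
  expect(mockFcm.sentTokens, hasLength(1));
  final rows = await supabase
      .from('cert_notification_log')
      .select()
      .eq('peer_mentor_id', 'pm-1');
  expect(rows, hasLength(1));
});
```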
The HLF Dynamics portal webhook API contract may be undocumented, subject to change, or require a separate authentication flow not yet agreed with HLF. If the contract changes post-implementation, the sync service could fail silently, leaving expired peer mentors on public listings.
Mitigation & Contingency
Mitigation: Obtain the official Dynamics webhook specification and test credentials from HLF before starting HLFDynamicsSyncService implementation. Agree on a versioned webhook contract and request a staging endpoint for integration testing.
Contingency: If the contract is unavailable, stub the sync service behind a feature flag and ship without Dynamics sync initially. Queue sync events locally and replay once the contract is confirmed.
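The queue-and-replay contingency could be sketched as follows. Every name (HlfDynamicsSyncService, SyncEvent, the flag) is illustrative, and a real implementation would persist the queue rather than hold it in memory:

```dart
// Sketch of the contingency: gate Dynamics sync behind a flag and queue
// events locally for later replay. All names are illustrative; a real
// implementation would persist the queue, not hold it in memory.
class SyncEvent {
  SyncEvent(this.peerMentorId, this.eventType, this.occurredAt);
  final String peerMentorId;
  final String eventType; // e.g. 'cert_expired'
  final DateTime occurredAt;
}

class HlfDynamicsSyncService {
  HlfDynamicsSyncService({required this.dynamicsSyncEnabled});

  final bool dynamicsSyncEnabled; // feature flag
  final List<SyncEvent> _pending = [];

  int get pendingCount => _pending.length;

  Future<void> publish(SyncEvent event) async {
    if (!dynamicsSyncEnabled) {
      _pending.add(event); // queue until the contract is confirmed
      return;
    }
    await _postToDynamics(event);
  }

  /// Replay queued events once the webhook contract is agreed.
  Future<void> replayPending() async {
    while (_pending.isNotEmpty) {
      await _postToDynamics(_pending.removeAt(0));
    }
  }

  Future<void> _postToDynamics(SyncEvent event) async {
    // Real webhook call goes here once the HLF contract exists.
  }
}
```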
Supabase RLS policies for certifications must correctly scope data to the coordinator's chapter without leaking cross-organisation data, which is particularly complex in multi-chapter membership scenarios. A misconfigured policy could expose peer mentor PII to the wrong coordinators.
Mitigation & Contingency
Mitigation: Write RLS policies against the established org-hierarchy schema used by other tables. Peer review all policies before migration deployment. Add integration tests that assert cross-organisation data isolation using test accounts with different org scopes.
Contingency: If a policy gap is discovered post-merge, immediately disable the affected query endpoint and apply a hotfix migration. Audit access logs in Supabase for any cross-org data access events.
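The cross-org isolation assertion in the mitigation might look like this skeleton; coordinatorAClient, chapterBId, and the certifications table/column names are assumptions about the test fixtures, so it is a shape rather than runnable code:

```dart
// Skeleton of the isolation assertion. coordinatorAClient and chapterBId
// are assumed fixtures: a client authenticated as a chapter-A coordinator
// and the id of a chapter it does not belong to.
test('coordinator cannot read certifications outside own chapter', () async {
  final rows = await coordinatorAClient
      .from('certifications')
      .select()
      .eq('chapter_id', chapterBId);

  // RLS should filter the rows out silently rather than error.
  expect(rows, isEmpty);
});
```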
Storing renewal history as a JSONB field rather than a normalised table simplifies queries, but it makes retrospective schema changes (adding fields to history entries) harder and could degrade performance if history grows very large for long-tenured mentors.
Mitigation & Contingency
Mitigation: Define a versioned JSONB entry schema (include a schema_version field in each entry) so future migrations can transform old entries. Add a size guard in the repository to warn if renewal_history exceeds 500 entries.
Contingency: If JSONB approach proves limiting, add a normalised certification_renewal_events table and migrate history entries in a background job, keeping the JSONB field as a read cache.
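One way to model the versioned entries the mitigation describes, as a sketch; the field names and the legacy 'type' key are assumptions, and schema_version is what lets a later migration upgrade old entries in place:

```dart
// Sketch of a versioned history entry. Field names and the legacy 'type'
// key are assumptions; schema_version is what lets a later migration
// upgrade old entries in place.
class RenewalHistoryEntry {
  static const currentSchemaVersion = 1;

  RenewalHistoryEntry({
    this.schemaVersion = currentSchemaVersion,
    required this.renewedAt,
    required this.certType,
  });

  final int schemaVersion;
  final DateTime renewedAt;
  final String certType;

  factory RenewalHistoryEntry.fromJson(Map<String, dynamic> json) {
    final version = json['schema_version'] as int? ?? 0;
    return RenewalHistoryEntry(
      renewedAt: DateTime.parse(json['renewed_at'] as String),
      certType: version == 0
          ? (json['type'] as String? ?? 'unknown') // hypothetical legacy key
          : json['cert_type'] as String,
    );
  }

  Map<String, dynamic> toJson() => {
        'schema_version': schemaVersion,
        'renewed_at': renewedAt.toIso8601String(),
        'cert_type': certType,
      };
}
```

The repository's size guard (warn past 500 entries) would sit on top of this, counting entries before each append.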