Priority: high · Complexity: medium · Type: testing · Status: pending · Assignee: testing specialist · Tier 2

Acceptance Criteria

Integration test sends a certification expiry notification and confirms a row is inserted in cert_notification_log with correct peer_mentor_id, cert_type, notification_type, and sent_at timestamp
Integration test sends the identical notification type for the same certification within the idempotency window and confirms no second FCM call is made and no duplicate row is inserted
Test verifies suppression logic returns a clear reason code (e.g., ALREADY_SENT) when a duplicate is detected
Test simulates FCM token refresh: invalidate existing token, trigger send, confirm PushNotificationService fetches the refreshed token from Supabase and retries delivery successfully
Batch send test submits a list of 5 FCM tokens (3 valid, 2 stale/expired), confirms delivery receipts for valid tokens and graceful failure handling for stale tokens without crashing the batch
Stale token cleanup: after batch send, confirm stale tokens are removed or flagged in the Supabase FCM token store
All test assertions use flutter_test matchers; no hardcoded delays (use pump() or fake async where needed)
Tests pass in CI without live FCM credentials by injecting a mock FCM client via dependency injection
Code coverage for PushNotificationService reaches at least 85% branch coverage after these tests are added
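The mock-injection criterion above could be satisfied with a seam like the following (a minimal sketch: the `FcmClient` interface, its `send` signature, and the constructor shape are illustrative assumptions, not the existing codebase API):

```dart
import 'package:mocktail/mocktail.dart';

/// Abstract seam over the FCM SDK so tests never need live credentials.
abstract class FcmClient {
  /// Sends one message to [token]; returns a delivery receipt id.
  Future<String> send(String token, Map<String, String> payload);
}

/// mocktail double injected in place of the real client under test.
class MockFcmClient extends Mock implements FcmClient {}

class PushNotificationService {
  PushNotificationService({required FcmClient fcmClient}) : _fcm = fcmClient;

  final FcmClient _fcm; // injected so CI substitutes MockFcmClient
  // ... send/batch methods delegate to _fcm ...
}
```

In CI, setUp constructs `PushNotificationService(fcmClient: MockFcmClient())`, so the suite runs without FCM credentials.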

Technical Requirements

Frameworks
Flutter
flutter_test
BLoC
mocktail or mockito for FCM client mocking
APIs
Supabase REST/PostgREST for cert_notification_log reads and writes
Firebase Cloud Messaging (FCM) — mocked in tests
Data Models
cert_notification_log (peer_mentor_id, cert_type, notification_type, sent_at)
peer_mentor_certifications
fcm_tokens (peer_mentor_id, token, is_stale)
Performance Requirements
Batch send test must complete within 2 seconds using fake async
Idempotency check query must use indexed lookup on (peer_mentor_id, cert_type, notification_type)
Security Requirements
FCM tokens must never be logged in test output
Supabase test environment must use isolated test schema or row-level security policies that prevent cross-tenant data leakage
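The indexed idempotency lookup could be backed by a migration along these lines (a sketch: the index name and the 24-hour window are assumptions; table and column names come from the data model above):

```sql
-- Composite index backing the duplicate-suppression lookup.
create index if not exists idx_cert_notification_idempotency
  on cert_notification_log (peer_mentor_id, cert_type, notification_type, sent_at desc);

-- Duplicate check run before each send; a returned row means ALREADY_SENT.
select 1
  from cert_notification_log
 where peer_mentor_id = $1
   and cert_type = $2
   and notification_type = $3
   and sent_at > now() - interval '24 hours'
 limit 1;
```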

Execution Context

Execution Tier
Tier 2

Tier 2 - 518 tasks

Can start after Tier 1 completes

Implementation Notes

The idempotency key should be a composite of (peer_mentor_id, cert_type, notification_type), combined with a time-window condition on sent_at (e.g., sent_at > now() - interval '24 hours') so that re-notification is allowed after a cooldown. Inject the FCM client as an abstract interface so tests can substitute a MockFcmClient that captures calls without hitting real FCM. For the batch path, process tokens in chunks and collect results into a BatchSendResult object; this makes assertions straightforward. When testing token refresh, simulate the stale token by inserting a known-bad token string before the test run.
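The batch path described above might be shaped like this (a sketch: `BatchSendResult`'s fields, the chunk size, and the `FcmClient.send` signature are assumptions for illustration):

```dart
/// Collects per-token outcomes so tests can assert on partial success.
class BatchSendResult {
  final List<String> delivered = [];     // tokens with delivery receipts
  final Map<String, Object> failed = {}; // stale/invalid token -> error
}

Future<BatchSendResult> sendBatch(
  FcmClient fcm,
  List<String> tokens, {
  int chunkSize = 100,
}) async {
  final result = BatchSendResult();
  for (var i = 0; i < tokens.length; i += chunkSize) {
    final end = (i + chunkSize < tokens.length) ? i + chunkSize : tokens.length;
    for (final token in tokens.sublist(i, end)) {
      try {
        await fcm.send(token, const {'type': 'cert_expiry'});
        result.delivered.add(token);
      } catch (e) {
        result.failed[token] = e; // stale token: record it, don't crash the batch
      }
    }
  }
  return result;
}
```

The mixed-token acceptance test then asserts `result.delivered.length == 3` and `result.failed.length == 2` directly.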

Use supabase_flutter's test utilities or a service locator (GetIt) to swap the Supabase client for a test instance. Ensure the PushNotificationService does not catch all exceptions silently — surface typed errors (FcmDeliveryException, IdempotencyException) so tests can assert on error type, not just absence of crash.
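The typed errors named above could be declared as plain exception classes (a sketch; the fields are illustrative assumptions):

```dart
/// Raised when FCM rejects or fails a delivery, so tests can assert on
/// failure mode rather than just "it threw".
class FcmDeliveryException implements Exception {
  FcmDeliveryException(this.token, this.cause);
  final String token;
  final Object cause;

  @override
  String toString() =>
      'FcmDeliveryException(cause: $cause)'; // token deliberately omitted from logs
}

/// Raised (or returned as a reason code) when a duplicate send is suppressed.
class IdempotencyException implements Exception {
  IdempotencyException(this.reasonCode); // e.g. 'ALREADY_SENT'
  final String reasonCode;
}
```

Tests can then use `expect(call, throwsA(isA<IdempotencyException>()))` instead of matching on message strings.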

Testing Requirements

Integration tests using flutter_test with a real Supabase test instance (or supabase-local via Docker) and a mocked FCM client. Cover: (1) happy path single send + log insertion, (2) duplicate suppression with same peer_mentor_id + cert_type + notification_type, (3) FCM token refresh flow with token invalidation and re-fetch, (4) batch send with mixed valid/stale tokens verifying partial success, (5) stale token cleanup post-batch. Use setUp/tearDown to seed and clean cert_notification_log. Verify database state with direct Supabase queries inside tests.
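A skeleton for case (2), duplicate suppression, might look like this (a sketch: `sendCertExpiryNotification` and the seeding hooks are assumed names; mocktail needs a registered fallback for the `Map` matcher):

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

void main() {
  late MockFcmClient fcm;
  late PushNotificationService service;

  setUpAll(() => registerFallbackValue(<String, String>{}));

  setUp(() async {
    fcm = MockFcmClient();
    when(() => fcm.send(any(), any())).thenAnswer((_) async => 'receipt-1');
    service = PushNotificationService(fcmClient: fcm);
    // seed cert_notification_log via the Supabase test client here
  });

  tearDown(() async {
    // delete seeded cert_notification_log rows here
  });

  test('duplicate within idempotency window makes no second FCM call', () async {
    await service.sendCertExpiryNotification('mentor-1', 'first_aid');
    await service.sendCertExpiryNotification('mentor-1', 'first_aid');
    verify(() => fcm.send(any(), any())).called(1); // second send suppressed
    // then query cert_notification_log directly and assert exactly one row
  });
}
```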

No e2e device tests required for this task — integration level is sufficient.

Component
Push Notification Service
infrastructure · medium
Epic Risks (3)
High impact · Medium probability · Integration

HLF Dynamics portal webhook API contract may be undocumented, subject to change, or require a separate authentication flow not yet agreed upon with HLF. If the contract changes post-implementation, the sync service silently fails and expired peer mentors remain on public listings.

Mitigation & Contingency

Mitigation: Obtain the official Dynamics webhook specification and test credentials from HLF before starting HLFDynamicsSyncService implementation. Agree on a versioned webhook contract and request a staging endpoint for integration testing.

Contingency: If the contract is unavailable, stub the sync service behind a feature flag and ship without Dynamics sync initially. Queue sync events locally and replay once the contract is confirmed.

High impact · Medium probability · Security

Supabase RLS policies for certifications must correctly scope data to the coordinator's chapter without leaking cross-organisation data; this is particularly complex in multi-chapter membership scenarios. A misconfigured policy could expose peer mentor PII to the wrong coordinators.

Mitigation & Contingency

Mitigation: Write RLS policies against the established org-hierarchy schema used by other tables. Peer review all policies before migration deployment. Add integration tests that assert cross-organisation data isolation using test accounts with different org scopes.

Contingency: If a policy gap is discovered post-merge, immediately disable the affected query endpoint and apply a hotfix migration. Audit access logs in Supabase for any cross-org data access events.

Medium impact · Low probability · Technical

Storing renewal history as a JSONB field rather than a normalised table simplifies queries but makes retrospective schema changes (adding fields to history entries) harder and could cause issues if history grows very large for long-tenured mentors.

Mitigation & Contingency

Mitigation: Define a versioned JSONB entry schema (include a schema_version field in each entry) so future migrations can transform old entries. Add a size guard in the repository to warn if renewal_history exceeds 500 entries.
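Under that versioned scheme, a single renewal_history entry might look like the following (all field names other than schema_version are illustrative assumptions):

```json
{
  "schema_version": 1,
  "cert_type": "first_aid",
  "renewed_at": "2026-01-15T09:30:00Z",
  "renewed_by": "coordinator-uuid",
  "expires_at": "2027-01-15T00:00:00Z"
}
```

A future migration can then branch on schema_version when transforming old entries in place.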

Contingency: If JSONB approach proves limiting, add a normalised certification_renewal_events table and migrate history entries in a background job, keeping the JSONB field as a read cache.