Implement Supabase-backed duplicate warning event logger
epic-multi-chapter-membership-handling-data-layer-task-008 — Implement the concrete SupabaseDuplicateWarningEventLogger. On logWarningEvent, insert a structured row into the duplicate_warning_events audit table in Supabase. Include all fields from DuplicateWarningEvent plus the authenticated coordinator user_id. Handle insert errors gracefully (log to console in debug, swallow in production) so that audit logging never interrupts the user-facing flow. Verify the table exists and create a migration if needed.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
Use the kDebugMode constant from Flutter foundation to gate console output: if (kDebugMode) { debugPrint('DuplicateWarningEventLogger insert failed: $e'); }. To make the call truly fire-and-forget without blocking the BLoC, wrap the Supabase insert in unawaited(Future(() async { ... })) or use scheduleMicrotask. However, since logWarningEvent returns Future<void>, the simpler option is to await the insert inside an internal try/catch: the returned future then always completes normally, and callers are free to await it or ignore it.
For the Dart-side JSON serialization, either call event.toJson() and spread the resulting map into the insert payload, or build the row explicitly: {'event_id': event.eventId, 'timestamp': event.timestamp.toIso8601String(), 'contact_id': event.contactId, 'involved_chapter_ids': event.involvedChapterIds, 'suspected_duplicate_activity_id': event.suspectedDuplicateActivityId, 'activity_date': event.activityDate.toIso8601String(), 'activity_type': event.activityType, 'coordinator_decision': event.coordinatorDecision.name, 'coordinator_user_id': supabase.auth.currentUser?.id}. Add a null-check guard: if currentUser is null, log and return early without inserting. Register SupabaseDuplicateWarningEventLogger in the dependency injection container (Riverpod provider or get_it) bound to the DuplicateWarningEventLogger abstract type.
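Putting the notes above together, a minimal sketch of the logger. It assumes DuplicateWarningEvent exposes a toJson() that emits the snake_case column names listed above, and that the abstract DuplicateWarningEventLogger declares a single logWarningEvent method; both types are defined elsewhere in the data layer.

```dart
import 'package:flutter/foundation.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

class SupabaseDuplicateWarningEventLogger implements DuplicateWarningEventLogger {
  SupabaseDuplicateWarningEventLogger(this._supabase);

  final SupabaseClient _supabase;

  @override
  Future<void> logWarningEvent(DuplicateWarningEvent event) async {
    final userId = _supabase.auth.currentUser?.id;
    if (userId == null) {
      // No authenticated coordinator: log in debug builds and skip the insert.
      if (kDebugMode) {
        debugPrint('DuplicateWarningEventLogger: no authenticated user, '
            'skipping audit insert');
      }
      return;
    }
    try {
      await _supabase.from('duplicate_warning_events').insert({
        ...event.toJson(), // assumed to emit the snake_case columns
        'coordinator_user_id': userId,
      });
    } catch (e) {
      // Audit logging must never interrupt the user-facing flow:
      // swallow the error, surfacing it only in debug builds.
      if (kDebugMode) {
        debugPrint('DuplicateWarningEventLogger insert failed: $e');
      }
    }
  }
}
```

Registration against the abstract type is then a one-liner, e.g. with get_it: getIt.registerLazySingleton<DuplicateWarningEventLogger>(() => SupabaseDuplicateWarningEventLogger(Supabase.instance.client)).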
Testing Requirements
Write unit tests with a mocked SupabaseClient: (1) verify that logWarningEvent calls supabase.from('duplicate_warning_events').insert() with a map containing all DuplicateWarningEvent fields plus coordinator_user_id; (2) mock the insert to throw a PostgrestException and verify logWarningEvent does NOT rethrow — the Future completes normally; (3) verify that in debug mode the exception message is passed to a logging callback (inject a logger stub). Write an integration test against a local Supabase instance: call logWarningEvent with a valid event, query the table directly with the service_role key, and assert the row exists with correct field values. Verify the CHECK constraint rejects an insert with coordinator_decision = 'unknown'.
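Test (2) above, the no-rethrow guarantee, could be sketched with mocktail as follows. makeTestEvent is a hypothetical fixture helper for a valid DuplicateWarningEvent; the mock classes stub only the members the logger touches.

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

class MockSupabaseClient extends Mock implements SupabaseClient {}
class MockQueryBuilder extends Mock implements SupabaseQueryBuilder {}
class MockGoTrueClient extends Mock implements GoTrueClient {}
class MockUser extends Mock implements User {}

void main() {
  test('logWarningEvent swallows PostgrestException', () async {
    final client = MockSupabaseClient();
    final auth = MockGoTrueClient();
    final user = MockUser();
    final builder = MockQueryBuilder();

    when(() => client.auth).thenReturn(auth);
    when(() => auth.currentUser).thenReturn(user);
    when(() => user.id).thenReturn('coordinator-1');
    when(() => client.from('duplicate_warning_events')).thenReturn(builder);
    when(() => builder.insert(any()))
        .thenThrow(PostgrestException(message: 'RLS denied'));

    final logger = SupabaseDuplicateWarningEventLogger(client);

    // The future must complete normally even though the insert throws.
    await expectLater(
      logger.logWarningEvent(makeTestEvent()), // assumed fixture helper
      completes,
    );
  });
}
```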
The Cross-Chapter Activity Query must avoid N+1 fetches across chapters. If naively implemented as a per-chapter loop, it will cause severe performance degradation for contacts affiliated with 5 chapters on poor mobile connections.
Mitigation & Contingency
Mitigation: Design the query as a single PostgREST join of contact_chapters and activities on contact_id from the start. Add a query performance test with 5 affiliations and 100+ activities to the integration test suite and enforce a maximum execution time threshold.
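On the Dart side, the single-join mitigation could look like the sketch below. Table, column, and relationship names are assumptions, and the !inner embed requires a foreign-key relationship between activities and contact_chapters declared in the schema so PostgREST can perform the join.

```dart
// One round trip: fetch a contact's activities across all affiliated
// chapters via a PostgREST embedded join, instead of looping per chapter.
final rows = await supabase
    .from('activities')
    .select('*, contact_chapters!inner(chapter_id)')
    .eq('contact_chapters.contact_id', contactId)
    .order('activity_date', ascending: false);
```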
Contingency: If a performance regression is detected post-merge, introduce a Supabase RPC function (stored procedure) to move the join server-side, bypassing any client-side N+1 pattern.
If the Duplicate Warning Event Logger write fails silently (network error, RLS denial), audit entries will be missing from the Bufdir compliance record without the user being aware.
Mitigation & Contingency
Mitigation: Implement the logger with a local fallback queue: if the Supabase write fails, persist the event locally and retry on next launch. Log all failures to a verbose output channel.
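One way to sketch the fallback queue, using shared_preferences purely for illustration (any durable local store, such as sqflite or hive, would do; all names here are hypothetical):

```dart
import 'dart:convert';

import 'package:shared_preferences/shared_preferences.dart';

/// Persists audit rows that failed to reach Supabase and retries them later.
class DuplicateWarningEventQueue {
  static const _key = 'pending_duplicate_warning_events';

  /// Called when the Supabase insert fails: keep the row locally.
  Future<void> enqueue(Map<String, dynamic> row) async {
    final prefs = await SharedPreferences.getInstance();
    final pending = prefs.getStringList(_key) ?? [];
    pending.add(jsonEncode(row));
    await prefs.setStringList(_key, pending);
  }

  /// Called on next launch: retry every queued row, keeping those that
  /// still fail so no audit entry is ever silently dropped.
  Future<void> flush(Future<void> Function(Map<String, dynamic>) insert) async {
    final prefs = await SharedPreferences.getInstance();
    final pending = prefs.getStringList(_key) ?? [];
    final stillFailing = <String>[];
    for (final raw in pending) {
      try {
        await insert(jsonDecode(raw) as Map<String, dynamic>);
      } catch (_) {
        stillFailing.add(raw);
      }
    }
    await prefs.setStringList(_key, stillFailing);
  }
}
```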
Contingency: Add a reconciliation job that compares locally queued events to Supabase entries and re-submits any gaps. Provide a data export of the local queue for manual audit if reconciliation fails.
Two coordinators simultaneously adding the 5th chapter affiliation for the same contact could bypass the maximum enforcement check if both reads occur before either write completes.
Mitigation & Contingency
Mitigation: Enforce the 5-affiliation maximum as a database-level constraint (CHECK + trigger or RPC with a FOR UPDATE lock) rather than relying solely on application-layer validation.
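A migration sketch of the database-level guard (table and column names such as contact_chapters and ended_at are assumptions; an advisory lock is used here as one way to serialize concurrent inserts, with SELECT ... FOR UPDATE on the contact row inside an RPC being the alternative the mitigation mentions):

```sql
-- Enforce the 5-affiliation maximum server-side so two coordinators
-- cannot both pass an application-layer read-then-write check.
create or replace function enforce_max_affiliations()
returns trigger
language plpgsql
as $$
begin
  -- Serialize concurrent inserts for the same contact, then re-count.
  perform pg_advisory_xact_lock(hashtext(new.contact_id::text));
  if (select count(*) from contact_chapters
      where contact_id = new.contact_id
        and ended_at is null) >= 5 then
    raise exception 'Contact % already has 5 active chapter affiliations',
      new.contact_id;
  end if;
  return new;
end;
$$;

create trigger trg_max_affiliations
  before insert on contact_chapters
  for each row execute function enforce_max_affiliations();
```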
Contingency: If a constraint violation is detected in production, run a corrective query to end the most recently created excess affiliation and notify the relevant coordinator.