Priority: High · Complexity: Low · Area: Infrastructure · Status: Pending · Assignee: Infrastructure Specialist · Tier: 1

Acceptance Criteria

A Supabase migration file creates the duplicate_warning_events table with columns: id (UUID primary key default gen_random_uuid()), event_id (UUID unique not null), timestamp (TIMESTAMPTZ not null), contact_id (UUID not null), involved_chapter_ids (UUID[] not null), suspected_duplicate_activity_id (UUID not null), activity_date (TIMESTAMPTZ not null), activity_type (TEXT not null), coordinator_decision (TEXT not null CHECK (coordinator_decision IN ('dismissed', 'cancelled'))), coordinator_user_id (UUID not null references auth.users(id)), created_at (TIMESTAMPTZ default now())
RLS is ENABLED on duplicate_warning_events with an INSERT policy allowing authenticated users to insert only rows where coordinator_user_id = auth.uid()
RLS includes a SELECT policy allowing org admins to read all rows and coordinators to read only their own rows
SupabaseDuplicateWarningEventLogger implements DuplicateWarningEventLogger and injects SupabaseClient via constructor
logWarningEvent calls supabase.from('duplicate_warning_events').insert({...event.toJson(), 'coordinator_user_id': supabase.auth.currentUser?.id}), guarding first that auth.currentUser is non-null (see Implementation Notes)
If the insert throws a PostgrestException, the error is caught: in debug mode it is printed to the console; in release mode it is silently swallowed — the method always returns normally
logWarningEvent never throws an exception to the caller regardless of network state or database errors
The migration file is idempotent (CREATE TABLE IF NOT EXISTS)
A manual smoke test confirms a row appears in the duplicate_warning_events table after triggering the duplicate warning flow in the app
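The migration described in the criteria above can be sketched as a single idempotent SQL file. This is a sketch, not the final migration: the policy names and the is_org_admin() admin check are illustrative assumptions that must be adapted to the project's actual role model.

```sql
-- Sketch of the duplicate_warning_events migration; idempotent per the criteria.
create table if not exists duplicate_warning_events (
  id uuid primary key default gen_random_uuid(),
  event_id uuid unique not null,
  "timestamp" timestamptz not null,
  contact_id uuid not null,
  involved_chapter_ids uuid[] not null,
  suspected_duplicate_activity_id uuid not null,
  activity_date timestamptz not null,
  activity_type text not null,
  coordinator_decision text not null
    check (coordinator_decision in ('dismissed', 'cancelled')),
  -- Defence-in-depth: default to the caller's auth.uid() server-side.
  coordinator_user_id uuid not null default auth.uid() references auth.users(id),
  created_at timestamptz default now()
);

alter table duplicate_warning_events enable row level security;

-- Authenticated users may insert only rows attributed to themselves.
create policy "insert own warning events" on duplicate_warning_events
  for insert to authenticated
  with check (coordinator_user_id = auth.uid());

-- Coordinators read their own rows; org admins read everything.
-- is_org_admin() is a placeholder: substitute the project's real admin
-- check (e.g. a JWT claim or a membership-table lookup).
create policy "read own rows or admin" on duplicate_warning_events
  for select to authenticated
  using (coordinator_user_id = auth.uid() or is_org_admin());
```

Because no policy is created for the anon role and RLS denies by default, unauthenticated reads and writes are blocked as required.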

Technical Requirements

frameworks
Flutter
Supabase
apis
Supabase PostgREST (insert)
Supabase Auth (currentUser.id)
data models
duplicate_warning_events
performance requirements
The insert must be fire-and-forget — do not await the result in the calling BLoC; use unawaited() or schedule in a microtask to avoid blocking the UI
No retry logic — audit logs are best-effort; a failed insert is acceptable
security requirements
coordinator_user_id must always be validated against auth.uid() server-side via RLS rather than trusted from a client-supplied value; as a defence-in-depth measure, give the column DEFAULT auth.uid() in the migration
The anon role must have NO access to duplicate_warning_events — RLS must block unauthenticated reads and writes
involved_chapter_ids must be stored as UUID array, not as a freeform text field, to prevent injection

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Use the kDebugMode constant from Flutter foundation to gate console output: if (kDebugMode) { debugPrint('DuplicateWarningEventLogger insert failed: $e'); }. To make the call truly fire-and-forget without blocking the BLoC, wrap the Supabase insert in unawaited(Future(() async { ... })) or use scheduleMicrotask. However, since logWarningEvent returns a Future<void>, the simplest approach is for the caller to not await it; document this expectation on the interface.
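The caller-side convention above might look like the following. The interface matches the acceptance criteria; the BLoC handler name and the DuplicateWarningEvent type's shape are illustrative assumptions.

```dart
import 'dart:async'; // unawaited

abstract class DuplicateWarningEventLogger {
  /// Best-effort audit write. Implementations never throw, regardless of
  /// network state or database errors; callers should NOT await this future.
  Future<void> logWarningEvent(DuplicateWarningEvent event);
}

// Inside the BLoC, after the coordinator dismisses or cancels (hypothetical
// handler name):
void onCoordinatorDecision(DuplicateWarningEvent event) {
  // Deliberately not awaited: a failed audit write must never block the UI.
  unawaited(_logger.logWarningEvent(event));
}
```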

For the Dart-side JSON serialization, have event.toJson() produce: {'event_id': event.eventId, 'timestamp': event.timestamp.toIso8601String(), 'contact_id': event.contactId, 'involved_chapter_ids': event.involvedChapterIds, 'suspected_duplicate_activity_id': event.suspectedDuplicateActivityId, 'activity_date': event.activityDate.toIso8601String(), 'activity_type': event.activityType, 'coordinator_decision': event.coordinatorDecision.name}, then spread that map into the insert payload and add 'coordinator_user_id': supabase.auth.currentUser?.id. Add a null-check guard: if currentUser is null, log and return early without inserting. Register SupabaseDuplicateWarningEventLogger in the dependency injection container (Riverpod provider or get_it) bound to the DuplicateWarningEventLogger abstract type.
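Putting the notes above together, the implementation could be sketched as below. This assumes supabase_flutter v2 and an event model exposing the toJson() described; it is a sketch under those assumptions, not the final implementation.

```dart
import 'package:flutter/foundation.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

class SupabaseDuplicateWarningEventLogger implements DuplicateWarningEventLogger {
  SupabaseDuplicateWarningEventLogger(this._supabase);

  final SupabaseClient _supabase;

  @override
  Future<void> logWarningEvent(DuplicateWarningEvent event) async {
    // Null-check guard: log and return early if there is no signed-in user.
    final userId = _supabase.auth.currentUser?.id;
    if (userId == null) {
      if (kDebugMode) {
        debugPrint('DuplicateWarningEventLogger: no authenticated user, skipping');
      }
      return;
    }
    try {
      await _supabase.from('duplicate_warning_events').insert({
        ...event.toJson(),
        'coordinator_user_id': userId,
      });
    } on PostgrestException catch (e) {
      // Debug: print. Release: silently swallowed per acceptance criteria.
      if (kDebugMode) {
        debugPrint('DuplicateWarningEventLogger insert failed: $e');
      }
    } catch (e) {
      // Network and other errors are also swallowed: the method never throws.
      if (kDebugMode) {
        debugPrint('DuplicateWarningEventLogger unexpected error: $e');
      }
    }
  }
}

// Riverpod binding to the abstract type (get_it works equally well):
final duplicateWarningEventLoggerProvider = Provider<DuplicateWarningEventLogger>(
  (ref) => SupabaseDuplicateWarningEventLogger(Supabase.instance.client),
);
```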

Testing Requirements

Write unit tests with a mocked SupabaseClient:
(1) verify that logWarningEvent calls supabase.from('duplicate_warning_events').insert() with a map containing all DuplicateWarningEvent fields plus coordinator_user_id;
(2) mock the insert to throw a PostgrestException and verify logWarningEvent does NOT rethrow — the Future completes normally;
(3) verify that in debug mode the exception message is passed to a logging callback (inject a logger stub).
Write an integration test against a local Supabase instance: call logWarningEvent with a valid event, query the table directly with the service_role key, and assert the row exists with correct field values. Verify the CHECK constraint rejects an insert with coordinator_decision = 'unknown'.
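Deep-mocking the SupabaseClient call chain is brittle; one cheaper option for tests (1) and (2) is to route the insert through an injectable function. The InsertFn seam and SeamedLogger below are illustrative assumptions about test structure, not part of the ticket.

```dart
import 'package:flutter_test/flutter_test.dart';

typedef InsertFn = Future<void> Function(String table, Map<String, dynamic> row);

// Minimal logger with the insert call injected, so failures can be simulated
// without mocking the whole Supabase query-builder chain.
class SeamedLogger {
  SeamedLogger(this._insert);
  final InsertFn _insert;

  Future<void> logWarningEvent(Map<String, dynamic> eventJson, String userId) async {
    try {
      await _insert('duplicate_warning_events',
          {...eventJson, 'coordinator_user_id': userId});
    } catch (_) {
      // Swallowed: audit logging is best-effort and must never throw.
    }
  }
}

void main() {
  test('logWarningEvent does not rethrow when the insert fails', () async {
    final logger = SeamedLogger((_, __) async => throw Exception('network down'));
    await expectLater(logger.logWarningEvent({'event_id': 'e1'}, 'u1'), completes);
  });

  test('insert payload contains coordinator_user_id', () async {
    Map<String, dynamic>? captured;
    final logger = SeamedLogger((_, row) async => captured = row);
    await logger.logWarningEvent({'event_id': 'e1'}, 'u1');
    expect(captured?['coordinator_user_id'], 'u1');
  });
}
```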

Component
Duplicate Warning Event Logger
Area: Infrastructure · Complexity: Low
Epic Risks (3)
Impact: High · Probability: Medium · Category: Technical

The Cross-Chapter Activity Query must avoid N+1 fetches across chapters. If naively implemented as a per-chapter loop, it will cause severe performance degradation for contacts affiliated with 5 chapters on poor mobile connections.

Mitigation & Contingency

Mitigation: Design the query as a single PostgREST join of contact_chapters and activities on contact_id from the start. Add a query performance test with 5 affiliations and 100+ activities to the integration test suite and enforce a maximum execution time threshold.

Contingency: If a performance regression is detected post-merge, introduce a Supabase RPC function (stored procedure) to move the join server-side, bypassing any client-side N+1 pattern.
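The batched mitigation above might look like the following in the Supabase Dart client: a constant number of round trips regardless of how many chapters the contact belongs to. Table and column names (contact_chapters, activities, chapter_id) are assumptions about the schema.

```dart
// One query for the affiliations...
final affiliations = await supabase
    .from('contact_chapters')
    .select('chapter_id')
    .eq('contact_id', contactId);

final chapterIds = [
  for (final row in affiliations) row['chapter_id'] as String,
];

// ...and one batched query for all activities, instead of a per-chapter
// loop (the N+1 pattern the mitigation warns against).
final activities = await supabase
    .from('activities')
    .select()
    .eq('contact_id', contactId)
    .inFilter('chapter_id', chapterIds);
```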

Impact: High · Probability: Low · Category: Security

If the Duplicate Warning Event Logger write fails silently (network error, RLS denial), audit entries will be missing from the Bufdir compliance record without the user being aware.

Mitigation & Contingency

Mitigation: Implement the logger with a local fallback queue: if the Supabase write fails, persist the event locally and retry on next launch. Log all failures to a verbose output channel.

Contingency: Add a reconciliation job that compares locally queued events to Supabase entries and re-submits any gaps. Provide a data export of the local queue for manual audit if reconciliation fails.

Impact: Medium · Probability: Low · Category: Technical

Two coordinators simultaneously adding the 5th chapter affiliation for the same contact could bypass the maximum enforcement check if both reads occur before either write completes.

Mitigation & Contingency

Mitigation: Enforce the 5-affiliation maximum as a database-level constraint (CHECK + trigger or RPC with a FOR UPDATE lock) rather than relying solely on application-layer validation.

Contingency: If a constraint violation is detected in production, run a corrective query to end the most recently created excess affiliation and notify the relevant coordinator.
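The database-level cap described in the mitigation of this third risk could be sketched as a trigger. The table and column names (contact_chapters, contact_id, ended_at) are assumptions about the affiliations schema; the FOR UPDATE lock serialises concurrent inserts for the same contact so the second transaction counts an accurate total.

```sql
create or replace function enforce_affiliation_max() returns trigger as $$
begin
  -- Lock this contact's active affiliation rows so a concurrent insert
  -- waits until this transaction commits before counting.
  perform 1 from contact_chapters
    where contact_id = new.contact_id and ended_at is null
    for update;
  if (select count(*) from contact_chapters
        where contact_id = new.contact_id and ended_at is null) >= 5 then
    raise exception 'contact % already has 5 active chapter affiliations',
      new.contact_id;
  end if;
  return new;
end;
$$ language plpgsql;

create trigger contact_chapters_max_affiliations
  before insert on contact_chapters
  for each row execute function enforce_affiliation_max();
```

An RPC that inserts inside a single transaction with the same lock is an equivalent alternative, as the mitigation notes.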