Priority: critical | Complexity: medium | Area: backend | Status: pending | Assignee: backend specialist | Tier 2

Acceptance Criteria

resolveKeepBoth(ActivityDraft draft, String duplicateCandidateId) saves the new activity with a 'duplicate_reviewed' flag set to true, preventing re-detection on the next save cycle
resolveCancel(ActivityDraft draft) discards the draft without any database write and emits a DUPLICATE_CANCELLED audit event with the draft's metadata
resolveReplace(ActivityDraft draft, String candidateActivityId) atomically deletes the candidate activity and saves the new draft in a single transaction; if the delete fails, the new activity is not saved
All three resolution paths emit a structured audit event including: resolution_type, actor_id, actor_role, new_activity_id (if applicable), replaced_activity_id (if applicable), organization_id, and timestamp
The duplicate-reviewed-flag middleware bypasses DuplicateDetectionService when the 'duplicate_reviewed' flag is true on the incoming draft
resolveReplace correctly handles the case where the candidate activity has already been deleted (returns a typed AppError, does not crash)
Audit events are written to the database even if the primary operation fails (use a compensating write or separate audit transaction)
Handler returns a typed ResolutionResult sealed class (KeepBothResult, CancelResult, ReplaceResult, ErrorResult) consumed by the BLoC
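The ResolutionResult hierarchy named in the last criterion can be sketched as a Dart 3 sealed class. Only the four subclass names come from the criteria; the fields on each result are illustrative assumptions:

```dart
// Sealed base type: Dart's exhaustiveness checking forces the BLoC to
// handle all four outcomes.
sealed class ResolutionResult {
  const ResolutionResult();
}

class KeepBothResult extends ResolutionResult {
  final String newActivityId; // assumed field, not fixed by the spec
  const KeepBothResult(this.newActivityId);
}

class CancelResult extends ResolutionResult {
  const CancelResult();
}

class ReplaceResult extends ResolutionResult {
  final String newActivityId;
  final String replacedActivityId;
  const ReplaceResult(this.newActivityId, this.replacedActivityId);
}

class ErrorResult extends ResolutionResult {
  final String code; // e.g. 'candidate_already_deleted'
  final String message;
  const ErrorResult(this.code, this.message);
}

// Exhaustive consumption in the BLoC via a switch expression; the compiler
// rejects this if a subclass is left unhandled.
String describe(ResolutionResult r) => switch (r) {
      KeepBothResult(:final newActivityId) => 'kept both ($newActivityId)',
      CancelResult() => 'cancelled',
      ReplaceResult(:final replacedActivityId) => 'replaced $replacedActivityId',
      ErrorResult(:final code) => 'error: $code',
    };
```

Because the base class is sealed, adding a fifth result type later becomes a compile error at every switch site rather than a silent fall-through.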

Technical Requirements

Frameworks
Flutter
BLoC
Supabase Dart SDK
APIs
Supabase PostgreSQL REST API
Supabase Edge Functions (Deno)
Data models
activity
bufdir_export_audit_log
claim_event
Performance requirements
REPLACE resolution (delete + insert) must complete within 2 seconds
KEEP_BOTH resolution must complete within 1.5 seconds
CANCEL resolution (local only) must be synchronous — no network call required
Security requirements
resolveReplace must verify the candidate activity belongs to the same organization_id before deletion — prevent cross-org deletion via manipulated IDs
All resolution operations must be executed under the authenticated user's RLS context
Audit events must never be skipped — use a Supabase Edge Function or Postgres trigger for guaranteed audit write
duplicate_reviewed flag must be a server-set field, not client-settable directly, to prevent bypass

Execution Context

Execution Tier
Tier 2

Tier 2 - 518 tasks

Can start after Tier 1 completes

Implementation Notes

Use a Supabase RPC (Postgres function) for the REPLACE path to ensure atomicity: DELETE + INSERT in a single server-side transaction is safer than two sequential client calls. The duplicate-reviewed flag should be a boolean column 'duplicate_reviewed' on the activity table, defaulting to false and set to true server-side inside the RPC for KEEP_BOTH and REPLACE. The CANCEL path is entirely client-side: emit the audit event via a separate lightweight Supabase insert, then signal the BLoC to transition to the 'cancelled' state; no activity row is created. Model ResolutionResult as a Dart sealed class for exhaustive pattern matching in the BLoC.
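The client side of the REPLACE path described above might look like the following sketch. The RPC name ('resolve_replace'), its parameter names, and the RpcNotFound error type are assumptions; the call is injected as a function so the handler can be unit-tested against a fake instead of a live Supabase client:

```dart
// Injected RPC callable: in production this would wrap
// SupabaseClient.rpc(...); in tests it is a fake.
typedef RpcCall = Future<Map<String, dynamic>> Function(
    String fn, Map<String, dynamic> params);

// Assumed error raised when the server-side function finds no candidate
// row to delete (e.g. it was already removed by another coordinator).
class RpcNotFound implements Exception {}

// REPLACE resolution: delete + insert happen atomically inside the
// Postgres function, so the client sees either full success or a typed
// failure — never a partial write.
Future<({String? newActivityId, String? errorCode})> resolveReplace(
  RpcCall rpc,
  Map<String, dynamic> draftJson,
  String candidateActivityId,
) async {
  try {
    final row = await rpc('resolve_replace', {
      'draft': draftJson,
      'candidate_activity_id': candidateActivityId,
    });
    return (newActivityId: row['new_activity_id'] as String?, errorCode: null);
  } on RpcNotFound {
    // Candidate already deleted: surface a typed error, do not crash,
    // matching the acceptance criterion for this case.
    return (newActivityId: null, errorCode: 'candidate_already_deleted');
  }
}
```

Keeping the transaction server-side also keeps the organization_id ownership check (the cross-org deletion guard) next to the DELETE, under the caller's RLS context.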

Register the handler via Riverpod and ensure it is disposed with the BLoC. For audit events, reuse the claim_event table structure (actor_id, actor_role, from_status, to_status) with an additional resolution_type field.
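A small pure builder for the audit row keeps the required shape in one place. The keys mirror the fields listed in the acceptance criteria plus the claim_event reuse note above; exact column names and the timestamp key are assumptions:

```dart
// Builds the audit row for any of the three resolution paths. Optional
// ids are included only when applicable (KEEP_BOTH has no replaced id,
// CANCEL has neither), matching the "(if applicable)" wording in the
// acceptance criteria.
Map<String, dynamic> buildResolutionAuditRow({
  required String resolutionType, // 'KEEP_BOTH' | 'CANCEL' | 'REPLACE'
  required String actorId,
  required String actorRole,
  required String organizationId,
  String? newActivityId,
  String? replacedActivityId,
}) {
  return {
    'resolution_type': resolutionType,
    'actor_id': actorId,
    'actor_role': actorRole,
    'organization_id': organizationId,
    if (newActivityId != null) 'new_activity_id': newActivityId,
    if (replacedActivityId != null) 'replaced_activity_id': replacedActivityId,
    'created_at': DateTime.now().toUtc().toIso8601String(), // assumed column
  };
}
```

Centralizing the row shape also makes the "audit event shape matches required schema" unit test a simple map comparison.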

Testing Requirements

Unit tests: test each resolution path (KEEP_BOTH, CANCEL, REPLACE) with mocked repositories. Test that REPLACE rolls back the new activity insert if the candidate delete fails. Test that the duplicate_reviewed flag is correctly set on the KEEP_BOTH path. Test that REPLACE with an already-deleted candidate returns ErrorResult without throwing.

Test audit event shape matches required schema for all three paths. Integration tests: verify REPLACE atomicity — run against a real Supabase test instance and confirm no partial writes. Verify cross-organization deletion is blocked by RLS. Confirm audit log entries are created for all resolution types.

Achieve 90% branch coverage for the handler.

Component
Duplicate Detection BLoC
Type: infrastructure, medium
Epic Risks (2)
Risk 1 (technical): medium impact, high probability

For bulk registration with many participants, running duplicate checks sequentially before surfacing the consolidated summary could introduce a multi-second delay as each peer mentor is checked individually against the RPC. This degrades the bulk submission UX significantly.

Mitigation & Contingency

Mitigation: Issue all duplicate check RPC calls concurrently using Dart's `Future.wait` or a bounded parallel executor (max 5 concurrent calls to avoid Supabase rate limits). The BLoC collects all results and emits a single BulkDuplicateSummary state with the consolidated list.

Contingency: If concurrent RPC calls hit Supabase connection limits or rate limits, implement a batched sequential approach with a progress indicator showing 'Checking participant N of M' so the coordinator understands the delay is expected and bounded.
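The bounded parallel executor from the mitigation above can be sketched in plain Dart; the cap of 5 matches the rate-limit ceiling stated there, and the per-participant check is passed in as a closure so the executor stays generic:

```dart
// Runs the given async tasks with at most [maxConcurrent] in flight,
// preserving input order in the result list. Dart's single-threaded event
// loop makes the shared `next` counter safe: it is only mutated between
// awaits, never concurrently.
Future<List<T>> boundedParallel<T>(
  List<Future<T> Function()> tasks, {
  int maxConcurrent = 5,
}) async {
  final results = List<T?>.filled(tasks.length, null);
  var next = 0;

  Future<void> worker() async {
    while (true) {
      final i = next++; // claim the next task index
      if (i >= tasks.length) return;
      results[i] = await tasks[i]();
    }
  }

  final workerCount =
      maxConcurrent < tasks.length ? maxConcurrent : tasks.length;
  await Future.wait(List.generate(workerCount, (_) => worker()));
  return results.cast<T>();
}
```

For the bulk flow, each task would wrap one duplicate check RPC call; the BLoC awaits the full list and emits a single BulkDuplicateSummary state from the consolidated results.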

Risk 2 (integration): high impact, medium probability

In proxy registration, the peer mentor's ID must be used as the duplicate check parameter, not the coordinator's ID. If the proxy context is not correctly threaded through the BLoC and service layer, duplicate checks will silently run against the wrong person, missing actual duplicates.

Mitigation & Contingency

Mitigation: Define a `SubmissionContext` model that carries the effective `peer_mentor_id` (distinct from `submitter_id`) and pass it explicitly through the BLoC event payload. The DuplicateDetectionService always reads peer_mentor_id from SubmissionContext, never from the authenticated user session.

Contingency: If SubmissionContext threading proves difficult to retrofit into the existing proxy registration BLoC, add an assertion in DuplicateDetectionService that throws a descriptive error when peer_mentor_id is null or matches the coordinator's own ID in a proxy context, making the bug immediately visible in testing.
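The SubmissionContext model from the mitigation above, with the contingency's assertion folded into the constructor, might look like this sketch (field names follow the text; everything else is an assumption):

```dart
// Carries the effective subject of a submission separately from the
// authenticated submitter, so proxy registrations check duplicates
// against the right person.
class SubmissionContext {
  final String submitterId;  // the authenticated user performing the save
  final String peerMentorId; // the person the activity is FOR
  final bool isProxy;

  SubmissionContext({
    required this.submitterId,
    required this.peerMentorId,
    required this.isProxy,
  }) {
    // Contingency guard: in a proxy context the effective subject must
    // differ from the coordinator, otherwise the duplicate check would
    // silently run against the wrong person.
    if (isProxy && peerMentorId == submitterId) {
      throw ArgumentError(
          'Proxy context: peer_mentor_id must not equal submitter_id');
    }
  }
}
```

DuplicateDetectionService would then accept a SubmissionContext and read peerMentorId from it exclusively, never falling back to the authenticated session.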