Implement Deduplication Queue Service
epic-duplicate-activity-detection-state-management-task-006 — Create the DeduplicationQueueService that manages the coordinator-facing duplicate review queue: fetching pending items, marking records as reviewed, bulk-dismissing items, and emitting PROCEED_WITH_DUPLICATE audit events required for Bufdir documentation. The service depends on DuplicateQueueRepository.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
Implement using a repository pattern: DeduplicationQueueService depends on DuplicateQueueRepository (abstract interface) for testability. Use Supabase Realtime `.stream()` builder for queue subscriptions rather than `.on()` to get automatic RLS enforcement on the subscription. For bulk dismiss, use Supabase `rpc()` to call a Postgres function that performs the batch update atomically — do not loop individual updates from the client. Audit events should be inserted in the same transaction as the status update using a Postgres trigger or RPC.
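The repository-backed service described above can be sketched as follows. The app itself is Flutter/Dart; this TypeScript sketch is for illustration only, and every name in it (QueueItem, the repository method signatures) is an assumption rather than the final API. The key design point is that bulkDismiss delegates to a single atomic repository call (which would wrap the Postgres function via `rpc()`), never a client-side loop of individual updates.

```typescript
// Hypothetical sketch — names and signatures are assumptions, not the final API.

interface QueueItem {
  id: string;
  status: "pending" | "reviewed" | "dismissed";
}

// Abstract interface the service depends on, so tests can inject a fake.
interface DuplicateQueueRepository {
  fetchPending(): Promise<QueueItem[]>;
  markReviewed(id: string): Promise<void>;
  // One atomic call, backed by a Postgres function invoked via rpc() —
  // never a client-side loop of per-item updates.
  bulkDismiss(ids: string[]): Promise<void>;
}

class DeduplicationQueueService {
  constructor(private repo: DuplicateQueueRepository) {}

  fetchPendingItems(): Promise<QueueItem[]> {
    return this.repo.fetchPending();
  }

  async markAsReviewed(id: string): Promise<void> {
    // The PROCEED_WITH_DUPLICATE audit row is written in the same
    // transaction as the status update (trigger or RPC), so the client
    // never inserts it separately.
    await this.repo.markReviewed(id);
  }

  async bulkDismiss(ids: string[]): Promise<void> {
    if (ids.length === 0) return; // nothing to dismiss, skip the round trip
    await this.repo.bulkDismiss(ids); // one atomic batch call
  }
}
```

Depending on the abstract interface lets unit tests substitute an in-memory fake, which is exactly what the Testing Requirements section calls for.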
Model audit entries using the existing claim_event pattern (actor_id, actor_role, from_status, to_status). Use Dart's sealed classes or a Result type to represent success and failure outcomes explicitly.
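A minimal sketch of the audit entry shape and a Result union (the TypeScript analogue of Dart's sealed Result classes, shown in TypeScript for illustration only). The field names follow the claim_event pattern named above; the event-construction helper and its name are hypothetical.

```typescript
// Audit entry following the ticket's claim_event pattern; the helper
// proceedWithDuplicateEvent is a hypothetical illustration.

interface AuditEntry {
  event_type: "PROCEED_WITH_DUPLICATE";
  actor_id: string;
  actor_role: string;
  from_status: string;
  to_status: string;
  created_at: string; // ISO-8601 timestamp
}

// Discriminated union standing in for Dart's sealed Result classes:
// callers must check `ok` before touching value or error.
type Result<T, E> =
  | { ok: true; value: T }
  | { ok: false; error: E };

function proceedWithDuplicateEvent(actorId: string, actorRole: string): AuditEntry {
  return {
    event_type: "PROCEED_WITH_DUPLICATE",
    actor_id: actorId,
    actor_role: actorRole,
    from_status: "pending",
    to_status: "reviewed",
    created_at: new Date().toISOString(),
  };
}
```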
Testing Requirements
Unit tests: test fetchPendingItems with a mocked DuplicateQueueRepository for empty-queue, single-item, and 50+ item scenarios. Test that markAsReviewed emits the correct PROCEED_WITH_DUPLICATE audit event shape. Test that bulkDismiss rolls back on partial failure, using a mock that throws on item N. Integration tests: verify RLS prevents cross-organization queue access using two test organizations.
Verify that the Realtime subscription delivers updates within the expected latency. Test that audit log entries are written to bufdir_export_audit_log with all required fields. Achieve a minimum of 85% line coverage on the service class.
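The rollback test described above can be sketched like this, assuming a fake repository that mimics the atomic Postgres function (validate the whole batch, then apply, so a failure changes nothing). All names are illustrative, and the sketch is TypeScript rather than the project's Dart.

```typescript
// Hedged test-double sketch: FakeAtomicRepo and failOnId are assumptions.

type Status = "pending" | "dismissed";

class FakeAtomicRepo {
  statuses = new Map<string, Status>();
  failOnId: string | null = null;

  async bulkDismiss(ids: string[]): Promise<void> {
    // Simulate the Postgres function's transactional semantics:
    // reject the whole batch before applying any update, so a
    // mid-batch failure leaves every row untouched.
    if (this.failOnId && ids.includes(this.failOnId)) {
      throw new Error(`row ${this.failOnId} rejected; batch rolled back`);
    }
    for (const id of ids) this.statuses.set(id, "dismissed");
  }
}
```

A test then sets `failOnId` to item N, calls `bulkDismiss`, and asserts both that the call throws and that no item was left half-dismissed.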
For bulk registration with many participants, running duplicate checks sequentially before surfacing the consolidated summary could introduce a multi-second delay as each peer mentor is checked individually against the RPC. This degrades the bulk submission UX significantly.
Mitigation & Contingency
Mitigation: Issue all duplicate check RPC calls concurrently using Dart's `Future.wait` or a bounded parallel executor (max 5 concurrent calls to avoid Supabase rate limits). The BLoC collects all results and emits a single BulkDuplicateSummary state with the consolidated list.
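The bounded parallel executor can be sketched as below (TypeScript for illustration; `runBounded` is a hypothetical helper name, and the limit of 5 comes from the mitigation above). At most `limit` duplicate-check calls are in flight at once, and results preserve input order so the BLoC can zip them back to the participant list.

```typescript
// Bounded-parallelism helper: at most `limit` tasks in flight at once.
// Results are written by index, so output order matches input order.
async function runBounded<T, R>(
  items: T[],
  limit: number,
  task: (item: T) => Promise<R>,
): Promise<R[]> {
  const results = new Array<R>(items.length);
  let next = 0;
  async function worker(): Promise<void> {
    while (next < items.length) {
      const i = next++; // single-threaded event loop: no race on `next`
      results[i] = await task(items[i]);
    }
  }
  // Spawn min(limit, items.length) workers that drain the shared index.
  const workers = Array.from({ length: Math.min(limit, items.length) }, worker);
  await Promise.all(workers);
  return results;
}
```

Calling it with the participant list and the duplicate-check RPC as `task` yields the consolidated result list for a single BulkDuplicateSummary emission.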
Contingency: If concurrent RPC calls hit Supabase connection limits or rate limits, implement a batched sequential approach with a progress indicator showing 'Checking participant N of M' so the coordinator understands the delay is expected and bounded.
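The batched sequential contingency reduces to a loop with a progress callback that the UI can render as 'Checking participant N of M'. A minimal sketch with assumed names (TypeScript for illustration):

```typescript
// Sequential fallback: one check at a time, reporting progress after each.
// checkSequentially and onProgress are hypothetical names.
async function checkSequentially<T, R>(
  items: T[],
  check: (item: T) => Promise<R>,
  onProgress: (done: number, total: number) => void,
): Promise<R[]> {
  const results: R[] = [];
  for (const [i, item] of items.entries()) {
    results.push(await check(item)); // never more than one call in flight
    onProgress(i + 1, items.length); // drives the "N of M" indicator
  }
  return results;
}
```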
In proxy registration, the peer mentor's ID must be used as the duplicate check parameter, not the coordinator's ID. If the proxy context is not correctly threaded through the BLoC and service layer, duplicate checks will silently run against the wrong person, missing actual duplicates.
Mitigation & Contingency
Mitigation: Define a `SubmissionContext` model that carries the effective `peer_mentor_id` (distinct from `submitter_id`) and pass it explicitly through the BLoC event payload. The DuplicateDetectionService always reads peer_mentor_id from SubmissionContext, never from the authenticated user session.
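SubmissionContext can be sketched as a small model (TypeScript for illustration; the field names follow the mitigation above, while the accessor helper is hypothetical). The point is that the duplicate-check subject is read from the context, never from the authenticated session.

```typescript
// Sketch of the SubmissionContext model from the mitigation above.
interface SubmissionContext {
  submitter_id: string;   // authenticated user (the coordinator in proxy flows)
  peer_mentor_id: string; // the person the duplicate check is actually about
  is_proxy: boolean;
}

// Hypothetical accessor: the duplicate check always targets the peer
// mentor, even when a coordinator submits on their behalf.
function duplicateCheckSubject(ctx: SubmissionContext): string {
  return ctx.peer_mentor_id;
}
```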
Contingency: If SubmissionContext threading proves difficult to retrofit into the existing proxy registration BLoC, add an assertion in DuplicateDetectionService that throws a descriptive error when peer_mentor_id is null or matches the coordinator's own ID in a proxy context, making the bug immediately visible in testing.
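The contingency assertion might look like the following (a hypothetical guard function, TypeScript for illustration, taking the relevant context fields as plain parameters). It makes a mis-threaded proxy context fail loudly instead of silently checking the wrong person.

```typescript
// Hypothetical fail-fast guard for the contingency described above.
function assertValidProxyContext(
  peerMentorId: string | null,
  submitterId: string,
  isProxy: boolean,
): void {
  if (!peerMentorId) {
    throw new Error("SubmissionContext.peer_mentor_id is missing");
  }
  if (isProxy && peerMentorId === submitterId) {
    // In a proxy flow the subject must differ from the coordinator.
    throw new Error(
      "Proxy submission would run the duplicate check against the " +
        "coordinator's own ID; peer_mentor_id was not threaded through",
    );
  }
}
```

Self-registration (isProxy false) legitimately has matching IDs, so the guard only rejects the match in a proxy context.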