High priority · Medium complexity · Backend · Pending · Backend specialist · Tier 1

Acceptance Criteria

DeduplicationQueueService exposes a Stream<List<DuplicateQueueItem>> for real-time queue updates via Supabase Realtime subscription scoped by organization_id
fetchPendingItems() returns only unreviewed duplicate records belonging to the coordinator's organization, ordered by created_at ascending
markAsReviewed(String queueItemId) updates the queue record status to 'reviewed' and persists a PROCEED_WITH_DUPLICATE audit event in the bufdir_export_audit_log table
bulkDismiss(List<String> queueItemIds) atomically marks all provided items as reviewed in a single database transaction; partial failures are rolled back
All audit events include actor_id (coordinator user_id), actor_role, timestamp, and affected activity IDs for Bufdir traceability
Service returns a typed Result<T, AppError> for all async operations — no raw exceptions leak to callers
Queue subscription is automatically cancelled when the service is disposed; no memory leaks under repeated navigation
Bulk dismiss of 50+ items completes within 3 seconds on a standard connection
Service correctly filters out items that have already been resolved (status != 'pending') before returning queue data

Technical Requirements

Frameworks
Flutter
BLoC
Riverpod
Supabase Dart SDK
APIs
Supabase PostgreSQL REST API
Supabase Realtime WebSocket
Data Models
activity
bufdir_export_audit_log
claim_event
Performance Requirements
Bulk dismiss of up to 100 items must complete within 3 seconds
Real-time queue updates must reflect within 500ms of database change
Initial queue fetch must return within 1 second on 4G connection
Security Requirements
Row-Level Security must restrict queue access to coordinator's own organization_id
Realtime subscription must be scoped with RLS-enforced JWT — coordinator cannot see other organizations' queues
Audit log writes must include full actor context; never allow anonymous audit entries
Service role key must never be used client-side; all writes go through authenticated user session

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Implement using a repository pattern: DeduplicationQueueService depends on DuplicateQueueRepository (abstract interface) for testability. Use Supabase Realtime `.stream()` builder for queue subscriptions rather than `.on()` to get automatic RLS enforcement on the subscription. For bulk dismiss, use Supabase `rpc()` to call a Postgres function that performs the batch update atomically — do not loop individual updates from the client. Audit events should be inserted in the same transaction as the status update using a Postgres trigger or RPC.
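The atomic bulk dismiss described above could be sketched as a Postgres function along these lines. The function name, the `duplicate_queue` table, and its column names are assumptions for illustration; only `bufdir_export_audit_log` and the claim_event-style audit columns come from this spec. Running as `security invoker` keeps the caller's RLS policies in force.

```sql
-- Sketch only: duplicate_queue, activity_id, and the parameter names
-- are hypothetical; adapt to the actual schema.
create or replace function bulk_dismiss_duplicates(
  p_item_ids uuid[],
  p_actor_id uuid,
  p_actor_role text
) returns void
language plpgsql
security invoker  -- caller's JWT and RLS policies apply
as $$
begin
  -- One statement: flip still-pending items to 'reviewed' and audit
  -- exactly the rows that were updated, in the same transaction.
  with updated as (
    update duplicate_queue
       set status = 'reviewed'
     where id = any(p_item_ids)
       and status = 'pending'
    returning activity_id
  )
  insert into bufdir_export_audit_log
         (event_type, actor_id, actor_role,
          from_status, to_status, activity_id, created_at)
  select 'PROCEED_WITH_DUPLICATE', p_actor_id, p_actor_role,
         'pending', 'reviewed', u.activity_id, now()
    from updated u;
end;
$$;
```

Because the update and the audit insert run inside one function call, any failure rolls both back, which is what the bulk-dismiss acceptance criterion requires.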

Model audit entries using the existing claim_event pattern (actor_id, actor_role, from_status, to_status). Use Dart's sealed classes or a Result pattern for error propagation. Register the service via Riverpod Provider so it is scoped to the coordinator session and disposed on logout.
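A minimal sketch of the Result pattern and service shape described above, assuming Dart 3 sealed classes. `DuplicateQueueItem`, `AppError`, and the repository interface are stub shapes invented here to make the sketch self-contained; the real models come from the codebase.

```dart
// Stub models for illustration only.
class DuplicateQueueItem {
  final String id;
  const DuplicateQueueItem(this.id);
}

class AppError {
  final String message;
  const AppError(this.message);
  factory AppError.from(Object e) => AppError(e.toString());
}

// Result pattern via Dart 3 sealed classes: callers must handle both arms.
sealed class Result<T, E> {
  const Result();
}

class Ok<T, E> extends Result<T, E> {
  final T value;
  const Ok(this.value);
}

class Err<T, E> extends Result<T, E> {
  final E error;
  const Err(this.error);
}

// Abstract repository so the service can be unit-tested with mocks.
abstract interface class DuplicateQueueRepository {
  Future<List<DuplicateQueueItem>> fetchPending(String organizationId);
}

class DeduplicationQueueService {
  DeduplicationQueueService(this._repo);
  final DuplicateQueueRepository _repo;

  Future<Result<List<DuplicateQueueItem>, AppError>> fetchPendingItems(
      String organizationId) async {
    try {
      return Ok(await _repo.fetchPending(organizationId));
    } catch (e) {
      // No raw exceptions leak to callers, per the acceptance criteria.
      return Err(AppError.from(e));
    }
  }
}
```

Registering `DeduplicationQueueService` in a Riverpod `Provider` with `ref.onDispose` cancelling the Realtime subscription covers the disposal criterion.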

Testing Requirements

Unit tests: test fetchPendingItems with a mocked DuplicateQueueRepository for empty-queue, single-item, and 50+ item scenarios. Test that markAsReviewed emits the correct PROCEED_WITH_DUPLICATE audit event shape. Test that bulkDismiss rolls back on partial failure, using a mock that throws on item N. Integration tests: verify RLS prevents cross-organization queue access using two test organizations.

Verify Realtime subscription delivers updates within expected latency. Test audit log entries are correctly written to bufdir_export_audit_log with all required fields. Achieve minimum 85% line coverage on service class.

Component
Duplicate Detection BLoC
Infrastructure · Medium
Epic Risks (2)
Medium impact · High probability · Technical

For bulk registration with many participants, running duplicate checks sequentially before surfacing the consolidated summary could introduce a multi-second delay as each peer mentor is checked individually against the RPC. This degrades the bulk submission UX significantly.

Mitigation & Contingency

Mitigation: Issue all duplicate check RPC calls concurrently using Dart's `Future.wait` or a bounded parallel executor (max 5 concurrent calls to avoid Supabase rate limits). The BLoC collects all results and emits a single BulkDuplicateSummary state with the consolidated list.
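The bounded parallel executor mentioned in the mitigation could look like the sketch below. The helper name and shape are illustrative, not existing code; it relies only on `Future.wait` and Dart's single-threaded event loop (the shared index is only touched between awaits, so no locking is needed).

```dart
// Runs `task` over `items` with at most `maxConcurrent` in flight,
// preserving input order in the results.
Future<List<R>> boundedParallel<T, R>(
  List<T> items,
  Future<R> Function(T item) task, {
  int maxConcurrent = 5, // stay under Supabase rate limits
}) async {
  final results = List<R?>.filled(items.length, null);
  var next = 0;

  Future<void> worker() async {
    while (true) {
      final i = next;
      if (i >= items.length) return;
      next = i + 1; // safe: no await between read and increment
      results[i] = await task(items[i]);
    }
  }

  final workerCount =
      items.length < maxConcurrent ? items.length : maxConcurrent;
  await Future.wait(List.generate(workerCount, (_) => worker()));
  return results.cast<R>();
}
```

The BLoC would pass the duplicate-check RPC call as `task`, then fold the ordered results into a single `BulkDuplicateSummary` state.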

Contingency: If concurrent RPC calls hit Supabase connection limits or rate limits, implement a batched sequential approach with a progress indicator showing 'Checking participant N of M' so the coordinator understands the delay is expected and bounded.

High impact · Medium probability · Integration

In proxy registration, the peer mentor's ID must be used as the duplicate check parameter, not the coordinator's ID. If the proxy context is not correctly threaded through the BLoC and service layer, duplicate checks will silently run against the wrong person, missing actual duplicates.

Mitigation & Contingency

Mitigation: Define a `SubmissionContext` model that carries the effective `peer_mentor_id` (distinct from `submitter_id`) and pass it explicitly through the BLoC event payload. The DuplicateDetectionService always reads peer_mentor_id from SubmissionContext, never from the authenticated user session.

Contingency: If SubmissionContext threading proves difficult to retrofit into the existing proxy registration BLoC, add an assertion in DuplicateDetectionService that throws a descriptive error when peer_mentor_id is null or matches the coordinator's own ID in a proxy context, making the bug immediately visible in testing.
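A possible shape for the `SubmissionContext` and the guard described in the mitigation and contingency above; field and function names are assumptions for illustration.

```dart
// Carries the effective subject of the duplicate check separately
// from the authenticated submitter.
class SubmissionContext {
  final String submitterId;  // authenticated coordinator's user_id
  final String peerMentorId; // person the duplicate check targets
  final bool isProxy;        // true when a coordinator registers on behalf

  const SubmissionContext({
    required this.submitterId,
    required this.peerMentorId,
    required this.isProxy,
  });
}

/// Returns the ID the duplicate check must run against.
/// In a proxy context, a check aimed at the coordinator's own ID means
/// the context was mis-threaded; fail loudly so tests catch it.
String duplicateCheckSubject(SubmissionContext ctx) {
  if (ctx.isProxy && ctx.peerMentorId == ctx.submitterId) {
    throw StateError(
        'Proxy duplicate check is targeting the coordinator, not the '
        'peer mentor; SubmissionContext was not threaded correctly.');
  }
  return ctx.peerMentorId;
}
```

The DuplicateDetectionService would call `duplicateCheckSubject` instead of reading the authenticated session, making the wrong-person bug from this risk structurally impossible to hit silently.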