Critical priority · Medium complexity · Database · Pending · Backend specialist · Tier 0

Acceptance Criteria

DuplicateQueueRepository is implemented as abstract interface + Supabase concrete class with Riverpod registration
Method `getQueuePage({required String coordinatorId, required DuplicateQueueStatus? statusFilter, required int pageSize, required int offset})` returns `Future<DuplicateQueuePage>` with items and total count for pagination
Method `updateStatus({required String queueItemId, required DuplicateQueueStatus newStatus, String? resolutionNote})` performs optimistic status transition with server confirmation
Status enum covers exactly: pending, reviewed, resolved, dismissed — no free-form strings
Method `enqueueDetectedDuplicate({required String incomingActivityId, required String candidateActivityId, required String assignedCoordinatorId})` creates a new queue record in pending status
Batch query returns items sorted by created_at ascending (oldest unreviewed first) to support FIFO coordinator workflow
Pagination is cursor- or offset-based and returns a correct hasNextPage flag
RLS policy is respected: coordinators only see queue items assigned to their scope (chapter/region)
Status transitions validate allowed sequences: pending→reviewed, reviewed→resolved, reviewed→dismissed, pending→dismissed; invalid transitions throw DuplicateQueueTransitionException
Unit tests cover all CRUD methods, pagination boundary conditions, and invalid status transition rejection
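The transition rules above can be sketched as a lookup table. This is an illustrative sketch, not the final implementation: the enum values and exception name come from the acceptance criteria; the table shape and `validateTransition` helper are assumptions.

```dart
enum DuplicateQueueStatus { pending, reviewed, resolved, dismissed }

class DuplicateQueueTransitionException implements Exception {
  final DuplicateQueueStatus from;
  final DuplicateQueueStatus to;
  DuplicateQueueTransitionException(this.from, this.to);

  @override
  String toString() =>
      'DuplicateQueueTransitionException: ${from.name} -> ${to.name}';
}

// Allowed transitions per the acceptance criteria; resolved and dismissed
// are terminal states with no outgoing transitions.
const Map<DuplicateQueueStatus, Set<DuplicateQueueStatus>> _allowed = {
  DuplicateQueueStatus.pending: {
    DuplicateQueueStatus.reviewed,
    DuplicateQueueStatus.dismissed,
  },
  DuplicateQueueStatus.reviewed: {
    DuplicateQueueStatus.resolved,
    DuplicateQueueStatus.dismissed,
  },
  DuplicateQueueStatus.resolved: <DuplicateQueueStatus>{},
  DuplicateQueueStatus.dismissed: <DuplicateQueueStatus>{},
};

/// Throws if [from] -> [to] is not one of the allowed sequences.
void validateTransition(DuplicateQueueStatus from, DuplicateQueueStatus to) {
  if (!(_allowed[from]?.contains(to) ?? false)) {
    throw DuplicateQueueTransitionException(from, to);
  }
}
```

Keeping the rules in a single table keeps `updateStatus` and the unit tests pointed at one source of truth.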

Technical Requirements

Frameworks
Flutter
Riverpod
BLoC
Supabase
APIs
Supabase PostgREST duplicate_queue table
Supabase Realtime (optional: for live queue updates)
Data models
DuplicateQueueItem
DuplicateQueueStatus
DuplicateQueuePage
Activity
PeerMentor
Performance requirements
Paginated list query must return first page in < 300ms for coordinator queues up to 500 pending items
Status update (optimistic + server confirm) must complete round-trip in < 800ms
Batch queries must use indexed columns (coordinator_id, status, created_at) — verified with EXPLAIN ANALYZE
Security requirements
RLS policy on duplicate_queue table must restrict coordinators to their assigned chapters/regions — never expose cross-org queue items
Status update endpoint must verify the requesting user has coordinator role for the relevant chapter before allowing transition
Queue item creation (enqueue) must only be callable by system-level operations or backend functions, not directly from client — enforce via RLS or Supabase function

Execution Context

Execution Tier
Tier 0

Tier 0 - 440 tasks

Implementation Notes

Model DuplicateQueueStatus as a Dart enum with a `fromString` factory that throws on unknown values; never use raw strings from the database as enum values without validation. The DuplicateQueuePage value object should carry both `items` and `totalCount` so the UI can render a progress indicator (e.g., '12 of 47 resolved').

For the coordinator queue, NHF's structure of 12 landsforeninger (national associations) × 9 regioner (regions) × 1,400 chapters means a coordinator's scope may span multiple chapters, so `coordinator_scope` should be stored as a list of chapter IDs, not a single foreign key.

Pitfall: offset-based pagination becomes inconsistent if items are inserted between page fetches (the coordinator sees duplicates or gaps).
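A minimal sketch of the modeling points above. `DuplicateQueueItem` is reduced to the fields needed here, and the spec's `fromString` factory is shown as an equivalent static lookup that throws on unknown database values; everything beyond the names in the spec is an assumption.

```dart
enum DuplicateQueueStatus {
  pending,
  reviewed,
  resolved,
  dismissed;

  /// Throws instead of silently defaulting when the database returns an
  /// unexpected value.
  static DuplicateQueueStatus fromString(String raw) => values.firstWhere(
        (s) => s.name == raw,
        orElse: () =>
            throw FormatException('Unknown duplicate queue status: $raw'),
      );
}

class DuplicateQueueItem {
  final String id;
  final DuplicateQueueStatus status;
  const DuplicateQueueItem({required this.id, required this.status});
}

class DuplicateQueuePage {
  final List<DuplicateQueueItem> items;
  final int totalCount; // lets the UI render '12 of 47 resolved'
  final bool hasNextPage;
  const DuplicateQueuePage({
    required this.items,
    required this.totalCount,
    required this.hasNextPage,
  });
}
```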

Consider cursor-based pagination using `created_at + id` composite cursor for production stability. If Supabase Realtime is scoped into this task, use `supabase.channel()` with a filter on `coordinator_id` to avoid broadcasting all queue changes to all clients.
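The composite cursor's ordering logic can be sketched in isolation, without any Supabase calls; the class name and method are hypothetical.

```dart
/// Keyset cursor over (created_at, id): the pair totally orders the queue
/// even when rows share a timestamp, so pages stay stable under concurrent
/// inserts (unlike offset-based pagination).
class QueueCursor {
  final DateTime createdAt;
  final String id;
  const QueueCursor(this.createdAt, this.id);

  /// True when a row at (rowCreatedAt, rowId) sorts strictly after this
  /// cursor in ascending (created_at, id) order, i.e. belongs to a later page.
  bool precedesRow(DateTime rowCreatedAt, String rowId) =>
      rowCreatedAt.isAfter(createdAt) ||
      (rowCreatedAt.isAtSameMomentAs(createdAt) && rowId.compareTo(id) > 0);
}
```

On the server side this would translate to a keyset filter of roughly the form `created_at > :ts OR (created_at = :ts AND id > :id)`; how that maps onto PostgREST's `or`/`and` filter syntax should be verified against the Supabase client docs.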

Testing Requirements

Unit tests using flutter_test with a mocked Supabase client. Test scenarios: (1) first page returned with correct count and hasNextPage=true; (2) last page returns hasNextPage=false; (3) status filter returns only matching items; (4) valid status transition succeeds; (5) invalid transition (e.g., resolved→pending) throws DuplicateQueueTransitionException; (6) enqueue creates item in pending status with correct coordinator assignment; (7) RLS violation (coordinator accessing another chapter's queue) surfaces as a typed PermissionException.

Integration tests against local Supabase: seed 25 queue items across 3 coordinators; verify pagination returns correct subsets per coordinator. If the Supabase Realtime subscription is implemented, test that it delivers a new-item event within 2 seconds of insertion.

Component
Duplicate Detection BLoC
Infrastructure · medium
Epic Risks (2)
Medium impact · High probability · Technical

For bulk registration with many participants, running duplicate checks sequentially before surfacing the consolidated summary could introduce a multi-second delay as each peer mentor is checked individually against the RPC. This degrades the bulk submission UX significantly.

Mitigation & Contingency

Mitigation: Issue all duplicate check RPC calls concurrently using Dart's `Future.wait` or a bounded parallel executor (max 5 concurrent calls to avoid Supabase rate limits). The BLoC collects all results and emits a single BulkDuplicateSummary state with the consolidated list.
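The bounded parallel executor described above can be sketched as follows; the cap of 5 concurrent calls is the assumed Supabase-friendly limit from the mitigation, and the function name is hypothetical. Results keep input order, so the BLoC can zip them back to participants when building the BulkDuplicateSummary.

```dart
import 'dart:async';

/// Runs [task] over [inputs] with at most [maxConcurrent] futures in flight.
/// Results are returned in input order.
Future<List<R>> boundedParallel<T, R>(
  List<T> inputs,
  Future<R> Function(T input) task, {
  int maxConcurrent = 5,
}) async {
  final results = List<R?>.filled(inputs.length, null);
  // Shared work index; safe without locks because Dart's event loop is
  // single-threaded and the read-increment below has no await in between.
  var next = 0;

  Future<void> worker() async {
    while (next < inputs.length) {
      final i = next++;
      results[i] = await task(inputs[i]);
    }
  }

  final workers =
      maxConcurrent < inputs.length ? maxConcurrent : inputs.length;
  await Future.wait(List.generate(workers, (_) => worker()));
  return results.map((r) => r as R).toList();
}
```

Unlike a plain `Future.wait` over all inputs at once, this caps concurrent RPC calls while still overlapping network latency.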

Contingency: If concurrent RPC calls hit Supabase connection limits or rate limits, implement a batched sequential approach with a progress indicator showing 'Checking participant N of M' so the coordinator understands the delay is expected and bounded.

High impact · Medium probability · Integration

In proxy registration, the peer mentor's ID must be used as the duplicate check parameter, not the coordinator's ID. If the proxy context is not correctly threaded through the BLoC and service layer, duplicate checks will silently run against the wrong person, missing actual duplicates.

Mitigation & Contingency

Mitigation: Define a `SubmissionContext` model that carries the effective `peer_mentor_id` (distinct from `submitter_id`) and pass it explicitly through the BLoC event payload. The DuplicateDetectionService always reads peer_mentor_id from SubmissionContext, never from the authenticated user session.

Contingency: If SubmissionContext threading proves difficult to retrofit into the existing proxy registration BLoC, add an assertion in DuplicateDetectionService that throws a descriptive error when peer_mentor_id is null or matches the coordinator's own ID in a proxy context, making the bug immediately visible in testing.
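The mitigation and contingency above can be combined into one sketch: a `SubmissionContext` that carries the effective peer mentor ID and fails fast on the proxy misconfiguration. Field names beyond `peer_mentor_id` are assumptions.

```dart
/// Carries who a registration is about, distinct from who submits it.
class SubmissionContext {
  final String submitterId;  // authenticated user (may be a coordinator)
  final String peerMentorId; // person the duplicate check must run against
  final bool isProxy;        // true when a coordinator registers on behalf

  SubmissionContext({
    required this.submitterId,
    required this.peerMentorId,
    required this.isProxy,
  }) {
    // Fail fast: in a proxy flow the duplicate check must never target the
    // coordinator's own ID, or real duplicates are silently missed.
    if (isProxy && peerMentorId == submitterId) {
      throw ArgumentError(
        'Proxy submission must carry the peer mentor ID, '
        'not the coordinator ID',
      );
    }
  }
}
```

Because the guard lives in the constructor, the wrong-ID bug surfaces as a thrown error in tests instead of a silently incorrect duplicate check.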