Implement DeduplicationQueueService chapter-scoped queue fetching
epic-duplicate-activity-detection-core-logic-task-005 — Implement the concrete getUnresolvedPairs() method that fetches unresolved duplicate pairs from the duplicate-queue-repository, filtered by the authenticated coordinator's chapter scope. Apply pagination and sorting by submission date descending. Map repository rows to UnresolvedPair domain objects with full activity snapshots for both sides of each pair.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
Use Supabase's `.from('duplicate_queue').select('*, activity_a:activities!activity_a_id(*), activity_b:activities!activity_b_id(*)')` to fetch both snapshots in one round trip. Apply `.eq('status', 'unresolved').in('chapter_id', coordinatorChapterIds).order('submitted_at', ascending: false).range(offset, offset + pageSize - 1)`. Resolve the coordinator's chapter list from the Riverpod auth state provider — do not accept it as a method parameter to avoid privilege escalation. Map the Supabase response to domain objects in a dedicated mapper class (`DuplicateQueueMapper`) to keep the service clean.
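The pagination arithmetic and the row-to-domain mapping described above are pure logic that can be factored out of the Supabase call and unit-tested in isolation. The project is Flutter/Dart, so the sketch below is TypeScript for illustration only; the row shape and the helper names (`pageRange`, `mapRowToUnresolvedPair`) are assumptions, not the real schema.

```typescript
// Hypothetical row shape returned by the embedded select on duplicate_queue.
interface QueueRow {
  id: string;
  chapter_id: string;
  status: string;
  submitted_at: string; // ISO timestamp
  activity_a: Record<string, unknown>; // embedded activity snapshot
  activity_b: Record<string, unknown>;
}

interface UnresolvedPair {
  id: string;
  chapterId: string;
  submittedAt: string;
  activityA: Record<string, unknown>;
  activityB: Record<string, unknown>;
}

// Inclusive [from, to] range for Supabase's .range(), given a 0-based page.
function pageRange(page: number, pageSize: number): [number, number] {
  const from = page * pageSize;
  return [from, from + pageSize - 1];
}

// Pure mapping from a repository row to the domain object; in the Dart
// service this logic would live in DuplicateQueueMapper.
function mapRowToUnresolvedPair(row: QueueRow): UnresolvedPair {
  return {
    id: row.id,
    chapterId: row.chapter_id,
    submittedAt: row.submitted_at,
    activityA: row.activity_a,
    activityB: row.activity_b,
  };
}
```

The service would then spread the computed range into the query chain and map each returned row through the mapper, keeping the Supabase-facing code thin.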
Consider a freezed union type for the return value, surfaced to the UI through Riverpod's `AsyncValue`.
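In Dart the union would be a freezed sealed class; the same idea as a TypeScript discriminated union, with hypothetical variant names, so the UI can exhaustively switch over loading, data, and error states:

```typescript
// Hypothetical state union mirroring a freezed sealed class / AsyncValue:
// the compiler forces every variant to be handled.
type UnresolvedPairsState =
  | { kind: 'loading' }
  | { kind: 'data'; pairs: string[]; hasMore: boolean }
  | { kind: 'error'; message: string };

function describeState(state: UnresolvedPairsState): string {
  switch (state.kind) {
    case 'loading':
      return 'loading';
    case 'data':
      return `${state.pairs.length} pairs`;
    case 'error':
      return `failed: ${state.message}`;
  }
}
```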
Testing Requirements
Unit tests: mock the duplicate-queue-repository and verify getUnresolvedPairs() applies the correct chapter filter, sorts descending, and maps rows to domain objects correctly. Test empty result, single result, and multi-page scenarios. Integration tests (flutter_test with a Supabase local/test instance): confirm RLS chapter scoping prevents cross-chapter access. Test pagination boundary — last page may have fewer items than pageSize.
Test that resolved/dismissed pairs are excluded. Minimum 90% branch coverage on the service method.
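The filter-and-sort behaviour these unit tests assert can be modelled as a pure predicate over in-memory rows, which is what the mocked repository would be verified against. A sketch under assumed names, TypeScript for illustration (the real tests are Dart/flutter_test):

```typescript
interface Row {
  id: string;
  chapter_id: string;
  status: string;
  submitted_at: string; // ISO timestamp
}

// Mirrors the intended repository query: unresolved only, chapter-scoped,
// newest submission first.
function applyQueueFilters(rows: Row[], chapterIds: Set<string>): Row[] {
  return rows
    .filter(r => r.status === 'unresolved' && chapterIds.has(r.chapter_id))
    .sort((a, b) => b.submitted_at.localeCompare(a.submitted_at));
}
```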
If the duplicate check RPC fails due to a network error or Supabase outage, the service must decide whether to block submission entirely (safe but disruptive) or allow submission to proceed silently (functional but risks data duplication). An incorrect choice leads to either user frustration or data quality issues.
Mitigation & Contingency
Mitigation: Define an explicit error policy in the service: RPC failures result in a DuplicateCheckResult with status: 'check_failed' and no candidates. The caller treats this as 'allow submission, flag for async review'. Document this as the intended graceful degradation behaviour in the service interface contract.
Contingency: If stakeholders require blocking on RPC failure, expose a configurable `failMode` parameter in the service that can be toggled per organisation via the feature flag system without a code deployment.
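The mitigation and contingency above amount to a thin wrapper around the RPC call. A minimal sketch, TypeScript for illustration (the actual service is Dart, and `runDuplicateCheck`, `FailMode`, and the result shape are assumed names): on failure it either returns a `check_failed` result with no candidates, or rethrows so the caller blocks, depending on the flag-driven mode.

```typescript
type FailMode = 'allow' | 'block';

interface DuplicateCheckResult {
  status: 'ok' | 'check_failed';
  candidates: unknown[];
}

// Wraps the duplicate-check RPC with the documented degradation policy.
// 'allow': swallow the failure and return check_failed (submission proceeds,
// flagged for async review). 'block': rethrow so the caller blocks submission.
// In production `failMode` would come from the per-organisation feature flag.
async function runDuplicateCheck(
  rpc: () => Promise<DuplicateCheckResult>,
  failMode: FailMode,
): Promise<DuplicateCheckResult> {
  try {
    return await rpc();
  } catch (err) {
    if (failMode === 'block') throw err;
    return { status: 'check_failed', candidates: [] };
  }
}
```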
The DuplicateComparisonPanel must handle varying activity schemas across organisations (NHF, HLF, Blindeforbundet each have different activity fields). A rigid layout may not accommodate all field variations, causing truncation or missing data in the comparison view.
Mitigation & Contingency
Mitigation: Design the panel to render a dynamic list of key-value pairs rather than a fixed-column layout. Define a `ComparisonField` model that each service populates with only the fields relevant to the activity type and organisation, allowing the panel to adapt without schema knowledge.
Contingency: If dynamic rendering proves too complex within the timeline, ship a simplified panel showing only the five most critical fields (peer mentor, activity type, date, chapter, submitter) and log a follow-up ticket for full field rendering in a later sprint.
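The dynamic key-value approach and the five-field fallback can share one builder. A sketch of the `ComparisonField` idea, TypeScript for illustration (the real panel is a Flutter widget; the field names and `limitToCritical` flag are assumptions): it emits a row per key present on either side, so organisation-specific schemas need no special casing.

```typescript
// Hypothetical ComparisonField: one labelled value per side of the pair.
interface ComparisonField {
  label: string;
  valueA: string;
  valueB: string;
}

// The five critical fields named in the contingency plan.
const CRITICAL_FIELDS = ['peer_mentor', 'activity_type', 'date', 'chapter', 'submitter'];

// Builds the key-value list from two activity snapshots, emitting only keys
// present on at least one side; missing values render as empty strings.
// `limitToCritical` models the simplified contingency panel.
function buildComparisonFields(
  a: Record<string, string>,
  b: Record<string, string>,
  limitToCritical = false,
): ComparisonField[] {
  const keys = limitToCritical
    ? CRITICAL_FIELDS.filter(k => k in a || k in b)
    : [...new Set([...Object.keys(a), ...Object.keys(b)])];
  return keys.map(label => ({
    label,
    valueA: a[label] ?? '',
    valueB: b[label] ?? '',
  }));
}
```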