Priority: critical · Complexity: medium · Area: backend · Status: pending · Assignee: backend specialist · Tier: 1

Acceptance Criteria

getUnresolvedPairs() returns only pairs whose chapter_id is in the authenticated coordinator's assigned chapters list
Results are sorted by submitted_at descending (most recent first)
Pagination is applied: accepts page (int, 0-indexed) and pageSize (int, default 20) parameters
Each returned UnresolvedPair contains full ActivityRecord snapshots for both activity_a and activity_b (not just IDs)
Returns empty list (not null or error) when no unresolved pairs exist in scope
Throws ChapterScopeException if authenticated user has no assigned chapters
Throws RepositoryException with descriptive message on Supabase query failure
Query does not return pairs with status = 'resolved' or status = 'dismissed'
UnresolvedPair domain objects include pair_id, submitted_at, similarity_score, activity_a snapshot, activity_b snapshot
Integration test confirms that a coordinator from chapter A cannot see pairs from chapter B
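The domain objects named in the criteria above can be sketched as plain Dart classes. Field names come from the list; the types and the `fields` map on `ActivityRecord` are assumptions, since the real model should mirror the `activities` table schema:

```dart
/// Immutable snapshot of an activity row, embedded in an UnresolvedPair.
/// Only `id` and `chapterId` are named in the spec; the `fields` map is an
/// assumed catch-all for the remaining, organisation-specific columns.
class ActivityRecord {
  final String id;
  final String chapterId;
  final Map<String, dynamic> fields;

  const ActivityRecord({
    required this.id,
    required this.chapterId,
    required this.fields,
  });
}

/// One unresolved duplicate candidate pair, per the acceptance criteria:
/// full snapshots for both activities, not just their IDs.
class UnresolvedPair {
  final String pairId;
  final DateTime submittedAt;
  final double similarityScore;
  final ActivityRecord activityA;
  final ActivityRecord activityB;

  const UnresolvedPair({
    required this.pairId,
    required this.submittedAt,
    required this.similarityScore,
    required this.activityA,
    required this.activityB,
  });
}
```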

Technical Requirements

Frameworks
Flutter
Riverpod
Supabase Dart client
APIs
Supabase PostgREST — duplicate_queue table with join to activities table for snapshots
Data models
UnresolvedPair
ActivityRecord
DuplicateQueueRow
CoordinatorChapterScope
Performance requirements
Single Supabase query with JOIN — avoid N+1 by fetching both activity snapshots in one call
Page size capped at 50 to prevent oversized payloads
Query must complete within 500ms for pages of up to 50 pairs under normal load
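The 0-indexed paging and the 50-item cap above reduce to a small range computation. A minimal sketch (the helper name is illustrative; the `to` bound is inclusive because PostgREST's `range` is inclusive):

```dart
/// Computes the inclusive PostgREST row range for a 0-indexed page,
/// clamping pageSize to the 50-item cap from the performance requirements.
({int from, int to}) pageRange(int page, {int pageSize = 20}) {
  final size = pageSize.clamp(1, 50).toInt();
  final from = page * size;
  return (from: from, to: from + size - 1);
}
```

For example, `pageRange(0)` covers rows 0 to 19 with the default page size, and `pageRange(2, pageSize: 50)` covers rows 100 to 149.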
Security requirements
Chapter scope filter must be applied server-side via Supabase RLS policy — never trust client-side filtering alone
Coordinator's chapter list must be resolved from the authenticated session JWT claims, not from a client-supplied parameter
No raw SQL string interpolation — use parameterized Supabase query builder
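Resolving the chapter list from JWT claims, as required above, means decoding the token's payload segment rather than trusting a client parameter. A hedged sketch, assuming the auth hook writes a `chapter_ids` claim (verify the actual claim name against the auth configuration):

```dart
import 'dart:convert';

/// Thrown when the authenticated user has no assigned chapters (see
/// acceptance criteria).
class ChapterScopeException implements Exception {
  final String message;
  ChapterScopeException(this.message);
}

/// Decodes the JWT payload and reads the coordinator's chapter IDs.
/// The claim name 'chapter_ids' is an assumption; align it with whatever
/// the auth hook actually writes into the token. Signature verification
/// happens server-side via RLS; this read is for client-side scoping only.
List<String> chapterIdsFromJwt(String accessToken) {
  final payload = accessToken.split('.')[1];
  final decoded = utf8.decode(base64Url.decode(base64Url.normalize(payload)));
  final claims = jsonDecode(decoded) as Map<String, dynamic>;
  final ids = claims['chapter_ids'] as List<dynamic>? ?? const [];
  if (ids.isEmpty) {
    throw ChapterScopeException('Authenticated user has no assigned chapters');
  }
  return ids.cast<String>();
}
```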

Execution Context

Execution Tier
Tier 1 (540 tasks)

Can start after Tier 0 completes

Implementation Notes

Use Supabase's `.from('duplicate_queue').select('*, activity_a:activities!activity_a_id(*), activity_b:activities!activity_b_id(*)')` to fetch both snapshots in one round trip. Apply `.eq('status', 'unresolved').inFilter('chapter_id', coordinatorChapterIds).order('submitted_at', ascending: false).range(offset, offset + pageSize - 1)` — note that `in` is a reserved word in Dart, so the Dart client exposes this filter as `inFilter` (v2) rather than `.in()`. Resolve the coordinator's chapter list from the Riverpod auth state provider rather than accepting it as a method parameter, to avoid privilege escalation. Map the Supabase response to domain objects in a dedicated mapper class (`DuplicateQueueMapper`) to keep the service clean.
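Putting those notes together, the repository method might look like the sketch below. This assumes the supabase Dart client v2, a `supabase` client instance in scope, and `ChapterScopeException`, `RepositoryException`, and `DuplicateQueueMapper.fromRow` defined elsewhere in the codebase; it is an illustration, not the final implementation:

```dart
Future<List<UnresolvedPair>> getUnresolvedPairs({
  required List<String> coordinatorChapterIds,
  int page = 0,
  int pageSize = 20,
}) async {
  // Acceptance criteria: no assigned chapters is an error, not an empty list.
  if (coordinatorChapterIds.isEmpty) {
    throw ChapterScopeException('Coordinator has no assigned chapters');
  }
  final from = page * pageSize;
  try {
    // Single query with embedded joins: both activity snapshots arrive in
    // one round trip, avoiding N+1.
    final rows = await supabase
        .from('duplicate_queue')
        .select('*, activity_a:activities!activity_a_id(*), '
            'activity_b:activities!activity_b_id(*)')
        .eq('status', 'unresolved')
        .inFilter('chapter_id', coordinatorChapterIds)
        .order('submitted_at', ascending: false)
        .range(from, from + pageSize - 1); // inclusive bounds
    return rows
        .map((r) => DuplicateQueueMapper.fromRow(r as Map<String, dynamic>))
        .toList();
  } on PostgrestException catch (e) {
    throw RepositoryException('duplicate_queue query failed: ${e.message}');
  }
}
```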

Consider a freezed union type for the return value, or wrap it in Riverpod's `AsyncValue<List<UnresolvedPair>>` to handle loading/error states in the presentation layer (the stack here is Riverpod, so `AsyncValue` fits naturally). NHF's multi-chapter membership (up to 5 chapters per user) means the `inFilter` list can have multiple values; handle this case explicitly in tests.
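If freezed feels heavy for this one type, Dart 3 sealed classes give the same exhaustive union without code generation. A sketch with assumed names (freezed would generate an equivalent structure):

```dart
/// Result union for the unresolved-pairs fetch; switch statements over it
/// are exhaustiveness-checked by the compiler.
sealed class UnresolvedPairsResult {
  const UnresolvedPairsResult();
}

class UnresolvedPairsLoaded extends UnresolvedPairsResult {
  final List<UnresolvedPair> pairs;
  const UnresolvedPairsLoaded(this.pairs);
}

class UnresolvedPairsError extends UnresolvedPairsResult {
  final String message;
  const UnresolvedPairsError(this.message);
}
```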

Testing Requirements

Unit tests: mock the duplicate queue repository and verify that getUnresolvedPairs() applies the correct chapter filter, sorts descending, and maps rows to domain objects correctly. Cover empty-result, single-result, and multi-page scenarios. Integration tests (flutter_test against a local Supabase test instance): confirm that RLS chapter scoping prevents cross-chapter access, and test the pagination boundary, since the last page may have fewer items than pageSize.

Test that resolved/dismissed pairs are excluded. Minimum 90% branch coverage on the service method.
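One of the unit-test scenarios above, sketched with mocktail. The `DuplicateQueueRepository` interface name is an assumption; match it to the real abstraction:

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

// Assumed repository abstraction; substitute the project's actual interface.
class MockDuplicateQueueRepository extends Mock
    implements DuplicateQueueRepository {}

void main() {
  test('returns an empty list, not null, when no pairs are in scope',
      () async {
    final repo = MockDuplicateQueueRepository();
    when(() => repo.getUnresolvedPairs(
          coordinatorChapterIds: ['chapter-a'],
          page: 0,
          pageSize: 20,
        )).thenAnswer((_) async => const <UnresolvedPair>[]);

    final result = await repo.getUnresolvedPairs(
      coordinatorChapterIds: ['chapter-a'],
      page: 0,
      pageSize: 20,
    );

    // Acceptance criteria: empty list, never null or an error.
    expect(result, isEmpty);
  });
}
```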

Component
Deduplication Queue Service (service, medium)

Epic Risks (2)

Risk 1: technical (medium impact, medium probability)

If the duplicate check RPC fails due to a network error or Supabase outage, the service must decide whether to block submission entirely (safe but disruptive) or allow submission to proceed silently (functional but risks data duplication). An incorrect choice leads to either user frustration or data quality issues.

Mitigation & Contingency

Mitigation: Define an explicit error policy in the service: RPC failures result in a DuplicateCheckResult with status: 'check_failed' and no candidates. The caller treats this as 'allow submission, flag for async review'. Document this as the intended graceful degradation behaviour in the service interface contract.

Contingency: If stakeholders require blocking on RPC failure, expose a configurable `failMode` parameter in the service that can be toggled per organisation via the feature flag system without a code deployment.
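The error policy and the `failMode` toggle described above can be sketched in one place. Names here (`FailMode`, `runDuplicateCheck`) are illustrative, not from the source:

```dart
/// Behaviour when the duplicate-check RPC fails; toggled per organisation
/// via the feature flag system, per the contingency above.
enum FailMode { allowAndFlag, block }

class DuplicateCheckResult {
  final String status; // 'ok' | 'candidates_found' | 'check_failed'
  final List<UnresolvedPair> candidates;
  const DuplicateCheckResult(this.status, this.candidates);
}

Future<DuplicateCheckResult> runDuplicateCheck(
  Future<List<UnresolvedPair>> Function() rpcCall, {
  FailMode failMode = FailMode.allowAndFlag,
}) async {
  try {
    final candidates = await rpcCall();
    return DuplicateCheckResult(
        candidates.isEmpty ? 'ok' : 'candidates_found', candidates);
  } catch (_) {
    if (failMode == FailMode.block) rethrow;
    // Graceful degradation: allow submission, flag for async review.
    return const DuplicateCheckResult('check_failed', []);
  }
}
```

Callers treat `'check_failed'` as "allow submission, flag for async review", which keeps the degradation behaviour explicit in the service contract rather than implicit in exception handling.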

Risk 2: scope (medium impact, medium probability)

The DuplicateComparisonPanel must handle varying activity schemas across organisations (NHF, HLF, Blindeforbundet each have different activity fields). A rigid layout may not accommodate all field variations, causing truncation or missing data in the comparison view.

Mitigation & Contingency

Mitigation: Design the panel to render a dynamic list of key-value pairs rather than a fixed-column layout. Define a `ComparisonField` model that each service populates with only the fields relevant to the activity type and organisation, allowing the panel to adapt without schema knowledge.
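A minimal sketch of the `ComparisonField` model described in the mitigation; the `differs` highlight hint is an assumed extra, not named in the source:

```dart
/// One row in the DuplicateComparisonPanel's dynamic key-value layout.
/// Each organisation's service populates only the fields it actually has,
/// so the panel needs no schema knowledge.
class ComparisonField {
  final String label;   // e.g. 'Activity type'; localisation is the caller's job
  final String? valueA; // value from activity A; null renders as missing
  final String? valueB; // value from activity B

  const ComparisonField({required this.label, this.valueA, this.valueB});

  /// Highlight hint: the two activities disagree on this field.
  bool get differs => valueA != valueB;
}
```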

Contingency: If dynamic rendering proves too complex within the timeline, ship a simplified panel showing only the five most critical fields (peer mentor, activity type, date, chapter, submitter) and log a follow-up ticket for full field rendering in a later sprint.