Define DeduplicationQueueService interface and queue models
epic-duplicate-activity-detection-core-logic-task-004 — Define the abstract interface for DeduplicationQueueService including UnresolvedPair, QueueResolutionAction, and QueueScopeFilter models. Declare getUnresolvedPairs(), getUnresolvedCount(), and forceResolve() method signatures. Ensure the interface models chapter-scoped access so the coordinator sees only pairs within their assigned chapters.
Acceptance Criteria
Technical Requirements
Implementation Notes
Separate the DeduplicationQueueService interface from the DuplicateDetectionService (task-001); these are distinct services with different responsibilities. DuplicateDetectionService is invoked pre-submit as a real-time check, while DeduplicationQueueService is a coordinator admin tool for reviewing persisted unresolved pairs. The chapterId scoping on UnresolvedPair is critical for NHF's multi-chapter coordinator structure: a coordinator managing 3 chapters should only see pairs within those chapters. forceResolve() with QueueResolutionAction.dismiss means 'mark as reviewed, not a duplicate'; document this distinction from keepA/keepB, which imply deletion of one activity. Place the barrel export file at lib/domain/services/deduplication_queue_exports.dart and ensure it exports both the service interface and all 4 model types.
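The shape described above can be sketched as follows. The type and method names come from the task; the field sets (resolvedAt, resolvedBy, etc.) and the fourth enum value `merge` are illustrative assumptions, not the final schema — the task confirms only keepA, keepB, and dismiss by name.

```dart
/// How a coordinator resolves a queued pair. `dismiss` means "reviewed,
/// not a duplicate"; `keepA`/`keepB` imply deleting the other activity.
/// `merge` is an assumed fourth value to match the 4-value enum requirement.
enum QueueResolutionAction { keepA, keepB, merge, dismiss }

/// Restricts queue queries to the coordinator's assigned chapters.
class QueueScopeFilter {
  final List<String> chapterIds; // must be non-empty; validation lives in the impl
  const QueueScopeFilter({required this.chapterIds});
}

/// A persisted, not-yet-resolved candidate duplicate pair.
class UnresolvedPair {
  final String pairId;
  final String chapterId; // critical for multi-chapter coordinator scoping
  final DateTime? resolvedAt; // null while unresolved
  final String? resolvedBy;
  const UnresolvedPair({
    required this.pairId,
    required this.chapterId,
    this.resolvedAt,
    this.resolvedBy,
  });

  /// Partial update; absent arguments keep the current values.
  UnresolvedPair copyWith({DateTime? resolvedAt, String? resolvedBy}) =>
      UnresolvedPair(
        pairId: pairId,
        chapterId: chapterId,
        resolvedAt: resolvedAt ?? this.resolvedAt,
        resolvedBy: resolvedBy ?? this.resolvedBy,
      );
}

/// Coordinator-facing queue of unresolved pairs, always chapter-scoped.
abstract class DeduplicationQueueService {
  Future<List<UnresolvedPair>> getUnresolvedPairs(QueueScopeFilter scope);
  Future<int> getUnresolvedCount(QueueScopeFilter scope);
  Future<void> forceResolve(String pairId, QueueResolutionAction action);
}
```

Keeping the scope filter as a required parameter on every read method makes chapter scoping structural rather than optional, so an implementation cannot accidentally return pairs outside the coordinator's chapters.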
Testing Requirements
Write unit tests in test/domain/services/deduplication_queue_service_test.dart covering:
(1) UnresolvedPair equality: two instances with the same pairId and fields are equal.
(2) UnresolvedPair with null resolvedAt and resolvedBy is valid (the unresolved state).
(3) copyWith() on UnresolvedPair correctly produces a partial update with resolvedAt set.
(4) QueueResolutionAction enum has exactly 4 values.
(5) QueueScopeFilter with empty chapterIds documents the invalid state (test the model structure, not validation logic, which belongs in the implementation).
No mocking required; these are pure model tests. Target 100% line coverage for all model files.
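A sketch of the first two cases, using a minimal stand-in model with == and hashCode overridden so the example is self-contained; the real test file imports the models from lib/ instead.

```dart
import 'package:test/test.dart';

// Minimal stand-in for the real UnresolvedPair; fields are illustrative.
class UnresolvedPair {
  final String pairId;
  final String chapterId;
  final DateTime? resolvedAt;
  const UnresolvedPair(
      {required this.pairId, required this.chapterId, this.resolvedAt});

  @override
  bool operator ==(Object other) =>
      other is UnresolvedPair &&
      other.pairId == pairId &&
      other.chapterId == chapterId &&
      other.resolvedAt == resolvedAt;

  @override
  int get hashCode => Object.hash(pairId, chapterId, resolvedAt);
}

void main() {
  test('instances with the same fields compare equal', () {
    expect(const UnresolvedPair(pairId: 'p1', chapterId: 'c1'),
        equals(const UnresolvedPair(pairId: 'p1', chapterId: 'c1')));
  });

  test('null resolvedAt is the valid unresolved state', () {
    expect(const UnresolvedPair(pairId: 'p1', chapterId: 'c1').resolvedAt,
        isNull);
  });
}
```

Note that case (1) only holds if the model overrides == and hashCode (or uses a package such as equatable); the default Dart identity comparison would fail it.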
If the duplicate check RPC fails due to a network error or Supabase outage, the service must decide whether to block submission entirely (safe but disruptive) or allow submission to proceed silently (functional but risks data duplication). An incorrect choice leads to either user frustration or data quality issues.
Mitigation & Contingency
Mitigation: Define an explicit error policy in the service: RPC failures result in a DuplicateCheckResult with status: 'check_failed' and no candidates. The caller treats this as 'allow submission, flag for async review'. Document this as the intended graceful degradation behaviour in the service interface contract.
Contingency: If stakeholders require blocking on RPC failure, expose a configurable `failMode` parameter in the service that can be toggled per organisation via the feature flag system without a code deployment.
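The mitigation and contingency can be combined in one code path, sketched below. DuplicateCheckResult's exact shape, the FailMode names, and the wrapper function are assumptions for illustration; only the 'check_failed' status and the per-organisation failMode toggle come from the text above.

```dart
// Assumed failure-policy switch; allowAndFlag is the graceful-degradation
// default, block is the stakeholder-requested contingency.
enum FailMode { allowAndFlag, block }

class DuplicateCheckResult {
  final String status; // 'ok' | 'check_failed'
  final List<String> candidateIds;
  const DuplicateCheckResult(this.status, this.candidateIds);
}

/// Wraps the duplicate-check RPC call with the error policy: on failure,
/// return a 'check_failed' result (allow submission, flag for async review)
/// unless failMode says to block, in which case the error propagates.
Future<DuplicateCheckResult> checkForDuplicates(
  Future<List<String>> Function() rpcCall, {
  FailMode failMode = FailMode.allowAndFlag, // toggled per org via feature flag
}) async {
  try {
    return DuplicateCheckResult('ok', await rpcCall());
  } catch (e) {
    if (failMode == FailMode.block) rethrow; // contingency: block submission
    // Graceful degradation: no candidates, caller allows the submission.
    return const DuplicateCheckResult('check_failed', []);
  }
}
```

Because the policy lives in data (the failMode value) rather than in code, switching an organisation from allow-and-flag to block needs no deployment.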
The DuplicateComparisonPanel must handle varying activity schemas across organisations (NHF, HLF, Blindeforbundet each have different activity fields). A rigid layout may not accommodate all field variations, causing truncation or missing data in the comparison view.
Mitigation & Contingency
Mitigation: Design the panel to render a dynamic list of key-value pairs rather than a fixed-column layout. Define a `ComparisonField` model that each service populates with only the fields relevant to the activity type and organisation, allowing the panel to adapt without schema knowledge.
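A minimal sketch of the ComparisonField idea: the panel renders whatever key-value pairs the service supplies, so it needs no per-organisation schema knowledge. The field shape and the plain-text renderer are illustrative assumptions standing in for the actual widget.

```dart
/// One row of the comparison view; the populating service decides which
/// fields exist for a given activity type and organisation.
class ComparisonField {
  final String label;
  final String? valueA; // side A of the pair; null when the field is absent
  final String? valueB;
  const ComparisonField({required this.label, this.valueA, this.valueB});
}

/// Renders the dynamic list as rows; a Flutter panel would map each field
/// to a row widget the same way, substituting '-' for absent values.
List<String> renderComparison(List<ComparisonField> fields) => [
      for (final f in fields)
        '${f.label}: ${f.valueA ?? '-'} | ${f.valueB ?? '-'}',
    ];
```

Because NHF, HLF, and Blindeforbundet simply supply different ComparisonField lists, the same panel handles all three without truncation or missing-field special cases.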
Contingency: If dynamic rendering proves too complex within the timeline, ship a simplified panel showing only the five most critical fields (peer mentor, activity type, date, chapter, submitter) and log a follow-up ticket for full field rendering in a later sprint.