Implement DuplicateDetectionService pre-submit RPC orchestration
epic-duplicate-activity-detection-core-logic-task-002 — Implement the concrete DuplicateDetectionService that calls the duplicate-check-repository RPC before activity submission. The method must accept an activity draft payload, invoke the repository's findPotentialDuplicates() call, and return a structured DuplicateCheckResult containing the candidate list and a boolean indicating whether submission should be blocked.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
The fail-safe pattern (catch network exceptions → return non-blocking result) is critical: duplicate detection must never prevent a legitimate activity submission due to a transient network error. Document this explicitly with a code comment above the try/catch. The blocking threshold (default 0.85) should be a named parameter with a const default so it can be changed per organisation in the future without rebuilding the service. Keep _mapCandidate() private and test it indirectly through checkForDuplicates() — avoid making it public just to test it directly.
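A minimal sketch of the shape described above. Only the names in the task (findPotentialDuplicates, checkForDuplicates, _mapCandidate, DuplicateCheckResult, the 0.85 default) come from the spec; the model fields, row keys, and severity enum are assumptions:

```dart
import 'dart:io';

// Assumed severity levels; the spec mentions block/warning/none outcomes.
enum DuplicateSeverity { none, warning, block }

class DuplicateCandidate {
  final String activityId;
  final double similarityScore;
  const DuplicateCandidate({required this.activityId, required this.similarityScore});
}

class DuplicateCheckResult {
  final List<DuplicateCandidate> candidates;
  final bool isBlocked;
  final DuplicateSeverity severity;
  const DuplicateCheckResult({
    required this.candidates,
    required this.isBlocked,
    required this.severity,
  });
}

abstract class DuplicateCheckRepository {
  Future<List<Map<String, dynamic>>> findPotentialDuplicates(Map<String, dynamic> draft);
}

class DuplicateDetectionServiceImpl {
  static const double defaultBlockingThreshold = 0.85;

  final DuplicateCheckRepository _repository;
  final double blockingThreshold;

  DuplicateDetectionServiceImpl(
    this._repository, {
    this.blockingThreshold = defaultBlockingThreshold,
  });

  Future<DuplicateCheckResult> checkForDuplicates(Map<String, dynamic> draft) async {
    // Fail-safe: duplicate detection must never block a legitimate
    // submission because of a transient network error, so any network
    // failure degrades to a non-blocking, empty result.
    try {
      final rows = await _repository.findPotentialDuplicates(draft);
      final candidates = rows.map(_mapCandidate).toList();
      final blocked = candidates.any((c) => c.similarityScore >= blockingThreshold);
      final severity = blocked
          ? DuplicateSeverity.block
          : candidates.isEmpty
              ? DuplicateSeverity.none
              : DuplicateSeverity.warning;
      return DuplicateCheckResult(candidates: candidates, isBlocked: blocked, severity: severity);
    } on SocketException {
      return const DuplicateCheckResult(
        candidates: [],
        isBlocked: false,
        severity: DuplicateSeverity.none,
      );
    }
  }

  // Kept private; exercised indirectly through checkForDuplicates().
  // The row keys here are assumed, not from the spec.
  DuplicateCandidate _mapCandidate(Map<String, dynamic> row) => DuplicateCandidate(
        activityId: row['activity_id'] as String,
        similarityScore: (row['similarity_score'] as num).toDouble(),
      );
}
```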
For the Riverpod provider, use a `Provider<DuplicateDetectionService>`.
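The wiring might look like the following sketch; `duplicateCheckRepositoryProvider` is an assumed name for a provider defined alongside the repository implementation:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

// A plain Provider suffices because the service is stateless: it holds
// only a repository reference and a threshold, no mutable state.
final duplicateDetectionServiceProvider = Provider<DuplicateDetectionServiceImpl>((ref) {
  final repository = ref.watch(duplicateCheckRepositoryProvider);
  return DuplicateDetectionServiceImpl(repository);
});
```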
Testing Requirements
Write unit tests in test/data/services/duplicate_detection_service_impl_test.dart using a mock DuplicateCheckRepository (via mocktail). Test scenarios: (1) repository returns 2 candidates, one above blocking threshold → result has isBlocked=true and severity=block, (2) repository returns 1 candidate below threshold → isBlocked=false and severity=warning, (3) repository returns empty list → severity=none, (4) repository throws SocketException → service returns safe fallback result with isBlocked=false, (5) custom blocking threshold injected at 0.9 → verify threshold is applied correctly. Use bloc_test or manual async test patterns. Target 90%+ line coverage.
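Two of the listed scenarios could be sketched as follows; the service and repository signatures are assumptions consistent with the task description:

```dart
import 'dart:io';

import 'package:mocktail/mocktail.dart';
import 'package:test/test.dart';

class MockDuplicateCheckRepository extends Mock implements DuplicateCheckRepository {}

void main() {
  late MockDuplicateCheckRepository repository;
  late DuplicateDetectionServiceImpl service;

  setUpAll(() {
    // mocktail needs a registered fallback before any() can match a Map argument.
    registerFallbackValue(<String, dynamic>{});
  });

  setUp(() {
    repository = MockDuplicateCheckRepository();
    service = DuplicateDetectionServiceImpl(repository);
  });

  test('candidate above the 0.85 threshold blocks submission', () async {
    when(() => repository.findPotentialDuplicates(any())).thenAnswer(
      (_) async => [
        {'activity_id': 'a1', 'similarity_score': 0.92},
        {'activity_id': 'a2', 'similarity_score': 0.40},
      ],
    );

    final result = await service.checkForDuplicates({'title': 'Peer walk'});

    expect(result.isBlocked, isTrue);
    expect(result.severity, DuplicateSeverity.block);
  });

  test('SocketException falls back to a non-blocking result', () async {
    when(() => repository.findPotentialDuplicates(any()))
        .thenThrow(const SocketException('offline'));

    final result = await service.checkForDuplicates({'title': 'Peer walk'});

    expect(result.isBlocked, isFalse);
    expect(result.candidates, isEmpty);
  });
}
```

The remaining scenarios (below-threshold warning, empty list, custom 0.9 threshold) follow the same when/thenAnswer pattern with different stub data and constructor arguments.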
If the duplicate check RPC fails due to a network error or Supabase outage, the service must decide whether to block submission entirely (safe but disruptive) or allow submission to proceed silently (functional but risks data duplication). An incorrect choice leads to either user frustration or data quality issues.
Mitigation & Contingency
Mitigation: Define an explicit error policy in the service: RPC failures result in a DuplicateCheckResult with status: 'check_failed' and no candidates. The caller treats this as 'allow submission, flag for async review'. Document this as the intended graceful degradation behaviour in the service interface contract.
Contingency: If stakeholders require blocking on RPC failure, expose a configurable `failMode` parameter in the service that can be toggled per organisation via the feature flag system without a code deployment.
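The policy branch could be sketched in isolation as below. The `status: 'check_failed'` field and `failMode` parameter come from the mitigation text; the enum values and function shape are assumptions:

```dart
// Assumed values: allowAndFlag is the default graceful-degradation policy,
// block is the stakeholder-requested alternative toggled per organisation.
enum DuplicateCheckFailMode { allowAndFlag, block }

class DuplicateCheckFailure {
  final String status; // 'check_failed', per the error policy
  final bool isBlocked;
  const DuplicateCheckFailure({required this.status, required this.isBlocked});
}

// Called from the service's catch block on RPC failure: by default the
// caller proceeds and the submission is flagged for async review; a
// feature flag can flip failMode to block without a code deployment.
DuplicateCheckFailure onCheckFailed(DuplicateCheckFailMode failMode) => DuplicateCheckFailure(
      status: 'check_failed',
      isBlocked: failMode == DuplicateCheckFailMode.block,
    );
```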
The DuplicateComparisonPanel must handle varying activity schemas across organisations (NHF, HLF, Blindeforbundet each have different activity fields). A rigid layout may not accommodate all field variations, causing truncation or missing data in the comparison view.
Mitigation & Contingency
Mitigation: Design the panel to render a dynamic list of key-value pairs rather than a fixed-column layout. Define a `ComparisonField` model that each service populates with only the fields relevant to the activity type and organisation, allowing the panel to adapt without schema knowledge.
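The `ComparisonField` model might look like this sketch; the field names and the `differs` helper are assumptions:

```dart
/// One row in the comparison panel. Each organisation's service populates
/// only the fields relevant to its activity schema, so the panel can render
/// the list without any schema knowledge.
class ComparisonField {
  final String label;           // e.g. 'Peer mentor', 'Chapter'
  final String? draftValue;     // value from the activity being submitted
  final String? candidateValue; // value from the potential duplicate

  const ComparisonField({required this.label, this.draftValue, this.candidateValue});

  /// Lets the panel highlight rows where the two activities differ.
  bool get differs => draftValue != candidateValue;
}
```

The panel then renders a `List<ComparisonField>` as a dynamic key-value list (e.g. via `ListView.builder`) instead of a fixed-column layout.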
Contingency: If dynamic rendering proves too complex within the timeline, ship a simplified panel showing only the five most critical fields (peer mentor, activity type, date, chapter, submitter) and log a follow-up ticket for full field rendering in a later sprint.