Implement duplicate evaluation and blocking threshold logic
epic-duplicate-activity-detection-core-logic-task-003 — Add the isBlockableDuplicate() evaluation method to DuplicateDetectionService. Apply configurable scoring thresholds based on field overlap (contact ID, date, activity type, duration). Determine whether the result warrants a hard block versus a soft warning. Ensure the logic is pure and unit-testable without database dependencies.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 2 - 518 tasks
Can start after Tier 1 completes
Implementation Notes
Implement computeOverlapScore() as a static method on a DuplicateScoreCalculator class rather than as a free function, so the related threshold constants live alongside the logic and tests have a single, dependency-free entry point (note that Dart static methods cannot be overridden, so keep the method pure rather than relying on subclassing in tests). The date match check should compare DateTime values truncated to calendar day (year, month, day only) — use DateTime(a.year, a.month, a.day) == DateTime(b.year, b.month, b.day) rather than comparing millisecondsSinceEpoch. The 10-minute duration tolerance for durationMinutes should use (draft.durationMinutes - candidate.durationMinutes).abs() <= 10. Document all threshold values with a // Rationale: comment so future engineers understand why the weights were chosen.
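A minimal Dart sketch of the calculator and evaluation method described above. The ActivityDraft model and the DuplicateSeverity/DuplicateCheckResult shapes are assumptions; the weights (0.40 / 0.25 / 0.20 / 0.15) and thresholds (0.65 warning, 0.85 block) are inferred from the test matrix in the Testing Requirements.

```dart
class ActivityDraft {
  final String? contactId;
  final DateTime date;
  final String activityType;
  final int durationMinutes;
  const ActivityDraft(
      {this.contactId,
      required this.date,
      required this.activityType,
      required this.durationMinutes});
}

class DuplicateScoreCalculator {
  // Rationale: contact identity is the strongest duplicate signal, so it
  // carries the largest weight; the remaining fields refine the match.
  static const contactIdWeight = 0.40;
  static const dateWeight = 0.25;
  static const activityTypeWeight = 0.20;
  static const durationWeight = 0.15;

  // Rationale: 0.65 == contactId + date; 0.85 == contactId + date + type.
  static const warningThreshold = 0.65;
  static const blockThreshold = 0.85;

  static double computeOverlapScore(
      ActivityDraft draft, ActivityDraft candidate) {
    var score = 0.0;
    // A null contactId contributes 0.0 for this component; never throws.
    if (draft.contactId != null && draft.contactId == candidate.contactId) {
      score += contactIdWeight;
    }
    // Compare calendar days, not millisecondsSinceEpoch.
    if (DateTime(draft.date.year, draft.date.month, draft.date.day) ==
        DateTime(
            candidate.date.year, candidate.date.month, candidate.date.day)) {
      score += dateWeight;
    }
    if (draft.activityType == candidate.activityType) {
      score += activityTypeWeight;
    }
    // 10-minute tolerance on duration.
    if ((draft.durationMinutes - candidate.durationMinutes).abs() <= 10) {
      score += durationWeight;
    }
    return score;
  }
}

enum DuplicateSeverity { none, warning, block }

class DuplicateCheckResult {
  final DuplicateSeverity severity;
  final bool isBlocked;
  const DuplicateCheckResult({required this.severity, this.isBlocked = false});
}

// Per the test matrix: isBlocked takes precedence over an inconsistent
// severity value.
bool isBlockableDuplicate(DuplicateCheckResult result) =>
    result.isBlocked || result.severity == DuplicateSeverity.block;

void main() {
  final draft = ActivityDraft(
      contactId: 'c-1',
      date: DateTime(2024, 3, 5, 9, 30),
      activityType: 'visit',
      durationMinutes: 60);
  final candidate = ActivityDraft(
      contactId: 'c-1',
      date: DateTime(2024, 3, 5, 20, 0), // same calendar day, later time
      activityType: 'visit',
      durationMinutes: 65); // within the 10-minute tolerance
  final score = DuplicateScoreCalculator.computeOverlapScore(draft, candidate);
  print(score >= DuplicateScoreCalculator.blockThreshold); // true
}
```

Because the function is pure and takes only value objects, no database or mocking is needed to exercise every branch.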
This is the most logic-dense file in the duplicate detection epic — invest in thorough documentation.
Testing Requirements
Write exhaustive unit tests in test/domain/services/duplicate_evaluation_test.dart with no mocking needed. Test matrix for computeOverlapScore():
(1) all fields match → 1.0
(2) only contactId matches → 0.40
(3) contactId + date match → 0.65 → classified as warning
(4) contactId + date + activityType match → 0.85 → classified as block
(5) no fields match → 0.0
(6) null contactId on draft → 0.0 for that field component, no exception
Test isBlockableDuplicate():
(1) result with severity=block → true
(2) result with severity=warning → false
(3) result with severity=none → false
(4) result with isBlocked=true but severity=warning (inconsistent state) → true (isBlocked takes precedence)
Achieve 100% branch coverage for the scoring function.
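Two of the matrix cases could be sketched as follows, assuming package:test and an illustrative import path for the calculator and draft model (the real path will depend on the project layout). Use closeTo rather than exact equality, since the weights are doubles.

```dart
// test/domain/services/duplicate_evaluation_test.dart (sketch)
import 'package:test/test.dart';

// Assumed import path for DuplicateScoreCalculator and ActivityDraft.
import 'package:app/domain/services/duplicate_score_calculator.dart';

void main() {
  final candidate = ActivityDraft(
      contactId: 'c-1',
      date: DateTime(2024, 3, 5, 14, 0),
      activityType: 'visit',
      durationMinutes: 60);

  test('only contactId matches scores 0.40', () {
    final draft = ActivityDraft(
        contactId: 'c-1',
        date: DateTime(2024, 4, 9), // different calendar day
        activityType: 'call',
        durationMinutes: 120); // outside the 10-minute tolerance
    expect(DuplicateScoreCalculator.computeOverlapScore(draft, candidate),
        closeTo(0.40, 1e-9));
  });

  test('null contactId contributes 0.0 and does not throw', () {
    final draft = ActivityDraft(
        contactId: null,
        date: DateTime(2024, 3, 5, 9, 30), // same day, different time
        activityType: 'visit',
        durationMinutes: 65);
    // date (0.25) + type (0.20) + duration (0.15) still count.
    expect(DuplicateScoreCalculator.computeOverlapScore(draft, candidate),
        closeTo(0.60, 1e-9));
  });
}
```

No mocks or database fixtures appear anywhere: every case is a pure function call on value objects.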
If the duplicate check RPC fails due to a network error or Supabase outage, the service must decide whether to block submission entirely (safe but disruptive) or allow submission to proceed silently (functional but risks data duplication). An incorrect choice leads to either user frustration or data quality issues.
Mitigation & Contingency
Mitigation: Define an explicit error policy in the service: RPC failures result in a DuplicateCheckResult with status: 'check_failed' and no candidates. The caller treats this as 'allow submission, flag for async review'. Document this as the intended graceful degradation behaviour in the service interface contract.
Contingency: If stakeholders require blocking on RPC failure, expose a configurable `failMode` parameter in the service that can be toggled per organisation via the feature flag system without a code deployment.
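The mitigation and contingency above could be combined into one seam, sketched below. FailMode, checkForDuplicates, and the rpcCall callback are illustrative names, not the shipped API; the real failMode value would come from the feature flag system.

```dart
import 'dart:async';

// Illustrative fail-mode toggle, intended to be driven per organisation
// by the feature-flag system rather than a code deployment.
enum FailMode { allowOnFailure, blockOnFailure }

class DuplicateCheckResult {
  final String status; // 'ok' or 'check_failed'
  final List<Map<String, Object?>> candidates;
  const DuplicateCheckResult(this.status, this.candidates);
}

Future<DuplicateCheckResult> checkForDuplicates(
  Future<List<Map<String, Object?>>> Function() rpcCall, {
  FailMode failMode = FailMode.allowOnFailure,
}) async {
  try {
    return DuplicateCheckResult('ok', await rpcCall());
  } catch (_) {
    // Graceful degradation: callers treat 'check_failed' as
    // "allow submission, flag for async review".
    if (failMode == FailMode.blockOnFailure) rethrow;
    return const DuplicateCheckResult('check_failed', []);
  }
}

void main() async {
  // Simulate a Supabase outage: the RPC callback throws.
  final result =
      await checkForDuplicates(() async => throw Exception('network down'));
  print(result.status); // check_failed
}
```

Keeping the policy in one function makes the degradation behaviour easy to state in the service interface contract and easy to flip per organisation.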
The DuplicateComparisonPanel must handle varying activity schemas across organisations (NHF, HLF, Blindeforbundet each have different activity fields). A rigid layout may not accommodate all field variations, causing truncation or missing data in the comparison view.
Mitigation & Contingency
Mitigation: Design the panel to render a dynamic list of key-value pairs rather than a fixed-column layout. Define a `ComparisonField` model that each service populates with only the fields relevant to the activity type and organisation, allowing the panel to adapt without schema knowledge.
Contingency: If dynamic rendering proves too complex within the timeline, ship a simplified panel showing only the five most critical fields (peer mentor, activity type, date, chapter, submitter) and log a follow-up ticket for full field rendering in a later sprint.
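One way to sketch the dynamic key-value approach from the mitigation; ComparisonField's exact fields and the buildComparisonFields helper are illustrative, and this example happens to use the five critical fields named in the contingency.

```dart
// A schema-agnostic row in the comparison panel: the panel only knows how
// to render label / draft / candidate triples, never organisation schemas.
class ComparisonField {
  final String label;
  final String? draftValue;
  final String? candidateValue;
  const ComparisonField(this.label, this.draftValue, this.candidateValue);

  // Lets the panel highlight mismatching rows without schema knowledge.
  bool get differs => draftValue != candidateValue;
}

// Each organisation's service supplies only the fields relevant to its
// activity type; the panel just maps over whatever list it receives.
List<ComparisonField> buildComparisonFields(
    Map<String, String?> draft, Map<String, String?> candidate) {
  const keys = ['peerMentor', 'activityType', 'date', 'chapter', 'submitter'];
  return [for (final k in keys) ComparisonField(k, draft[k], candidate[k])];
}

void main() {
  final rows = buildComparisonFields(
      {'date': '2024-03-05', 'chapter': 'Oslo'},
      {'date': '2024-03-06', 'chapter': 'Oslo'});
  print(rows.where((f) => f.differs).map((f) => f.label).toList()); // [date]
}
```

Because the panel consumes a flat list, NHF, HLF, and Blindeforbundet can each populate different fields without any special-casing in the widget.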