Unit and widget tests for service layer and comparison panel
epic-duplicate-activity-detection-core-logic-task-011
- Write unit tests for DuplicateDetectionService covering: RPC result mapping, blockable duplicate threshold evaluation at boundary values, and null/empty candidate list handling.
- Write unit tests for DeduplicationQueueService covering: chapter-scoped filtering, unresolved count accuracy, and forceResolve action persistence.
- Write widget tests for DuplicateComparisonPanel verifying: two-column render with mock records, divergent field highlight applied correctly, matching fields rendered without highlight, and screen reader semantics present on divergent rows.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 4 - 323 tasks
Can start after Tier 3 completes
Implementation Notes
Inject the Supabase client through the constructor (or a repository abstraction) so tests can swap in a mock without reflection hacks. Use `bloc_test`'s `whenListen` / `expectLater` pattern if the services expose BLoC state streams. For widget tests, wrap DuplicateComparisonPanel in a minimal MaterialApp with the project's design token theme so that color comparisons resolve correctly against theme tokens rather than hard-coded hex values. When asserting Semantics, use `tester.getSemantics(find.byType(DivergentFieldRow))` and check the `label` property.
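The harness described above could look like the following sketch. The widget and record names come from this ticket; `appTheme` and the panel's constructor parameters are assumptions, since the actual API is defined elsewhere in the codebase.

```dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';

// Wrap the panel in a minimal MaterialApp carrying the project's
// design-token theme (`appTheme` is an assumed name) so that
// highlight-colour assertions resolve against theme tokens.
Widget wrapPanel(Widget panel) => MaterialApp(
      theme: appTheme,
      home: Scaffold(body: panel),
    );

void main() {
  testWidgets('divergent rows expose semantics labels', (tester) async {
    // `left`/`right` parameters and the mock records are hypothetical.
    await tester.pumpWidget(wrapPanel(
      DuplicateComparisonPanel(left: mockRecordA, right: mockRecordB),
    ));

    // Assert on the SemanticsNode of the first divergent row, as the
    // notes suggest; the expected label text is an assumption.
    final semantics =
        tester.getSemantics(find.byType(DivergentFieldRow).first);
    expect(semantics.label, isNotEmpty);
  });
}
```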
For the boundary value tests, parameterise the threshold value through a const and assert using `equals(true)` / `equals(false)` to make intent clear. Avoid `pump` loops — use `pumpAndSettle()` unless the widget has animations that need explicit frame stepping. Ensure mock Supabase returns typed responses matching the PostgrestList/PostgrestMap shapes the repositories expect, so fromJson paths are exercised.
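A minimal sketch of the parameterised boundary-value tests, assuming a threshold constant and an `isBlockable` method on the service (both names are hypothetical; the real API may differ):

```dart
import 'package:flutter_test/flutter_test.dart';

// Assumed threshold value; pull the real constant from the service
// so the tests cannot drift from production behaviour.
const kBlockableThreshold = 0.85;

void main() {
  final service = DuplicateDetectionService(client: mockSupabaseClient);

  // (score, expected) pairs covering at-1, at, and at+1 (in score units).
  for (final (score, expected) in [
    (kBlockableThreshold - 0.01, false), // just below: not blockable
    (kBlockableThreshold, true),         // exactly at threshold: blockable
    (kBlockableThreshold + 0.01, true),  // just above: blockable
  ]) {
    test('score $score yields blockable=$expected', () {
      expect(service.isBlockable(score), equals(expected));
    });
  }
}
```

Whether the threshold itself is inclusive is exactly the kind of decision these three cases pin down, so the `expected` values should be confirmed against the service contract before merging.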
Testing Requirements
Unit tests (flutter_test):
- DuplicateDetectionService — RPC mapping (happy path, empty list, null input); threshold boundary values (at-1, at, at+1).
- DeduplicationQueueService — chapter filter isolation, unresolved count, forceResolve side effect.
Widget tests (flutter_test):
- DuplicateComparisonPanel — two-column layout assertion, highlight on divergent rows, no highlight on matching rows, Semantics labels on divergent rows.
Use mockito for Supabase client injection.
Aim for 100% branch coverage on service classes. Run via `flutter test test/unit/duplicate_detection_service_test.dart test/unit/deduplication_queue_service_test.dart test/widget/duplicate_comparison_panel_test.dart`.
If the duplicate check RPC fails due to a network error or Supabase outage, the service must decide whether to block submission entirely (safe but disruptive) or allow submission to proceed silently (functional but risks data duplication). An incorrect choice leads to either user frustration or data quality issues.
Mitigation & Contingency
Mitigation: Define an explicit error policy in the service: RPC failures result in a DuplicateCheckResult with status: 'check_failed' and no candidates. The caller treats this as 'allow submission, flag for async review'. Document this as the intended graceful degradation behaviour in the service interface contract.
Contingency: If stakeholders require blocking on RPC failure, expose a configurable `failMode` parameter in the service that can be toggled per organisation via the feature flag system without a code deployment.
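The error policy and the contingent `failMode` switch could be sketched as follows. The enum, class shapes, and RPC name are assumptions drawn from this ticket's wording, not the real interface:

```dart
// Assumed policy enum backing the feature-flagged `failMode` parameter.
enum FailMode { allowAndFlag, block }

class DuplicateCheckResult {
  const DuplicateCheckResult({required this.status, this.candidates = const []});
  final String status; // 'ok' | 'check_failed'
  final List<DuplicateCandidate> candidates;
}

class DuplicateDetectionService {
  DuplicateDetectionService(this._client, {this.failMode = FailMode.allowAndFlag});
  final SupabaseClient _client;
  final FailMode failMode;

  Future<DuplicateCheckResult> checkDuplicates(Activity activity) async {
    try {
      // 'check_duplicates' is a placeholder RPC name.
      final rows = await _client.rpc('check_duplicates',
          params: {'activity': activity.toJson()});
      return DuplicateCheckResult(status: 'ok', candidates: mapCandidates(rows));
    } on Exception {
      if (failMode == FailMode.block) rethrow;
      // Graceful degradation: no candidates, caller allows submission
      // and flags the activity for async review.
      return const DuplicateCheckResult(status: 'check_failed');
    }
  }
}
```

Keeping the failure branch inside the service (rather than in every caller) makes the degradation contract a single, testable code path.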
The DuplicateComparisonPanel must handle varying activity schemas across organisations (NHF, HLF, Blindeforbundet each have different activity fields). A rigid layout may not accommodate all field variations, causing truncation or missing data in the comparison view.
Mitigation & Contingency
Mitigation: Design the panel to render a dynamic list of key-value pairs rather than a fixed-column layout. Define a `ComparisonField` model that each service populates with only the fields relevant to the activity type and organisation, allowing the panel to adapt without schema knowledge.
Contingency: If dynamic rendering proves too complex within the timeline, ship a simplified panel showing only the five most critical fields (peer mentor, activity type, date, chapter, submitter) and log a follow-up ticket for full field rendering in a later sprint.
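The mitigation's dynamic key-value approach could be sketched like this; the `ComparisonField` name comes from the ticket, while the panel's constructor and row layout are assumptions:

```dart
import 'package:flutter/material.dart';

// One comparison row; the service populates only fields relevant to the
// activity type and organisation, so the panel needs no schema knowledge.
class ComparisonField {
  const ComparisonField({required this.label, this.left, this.right});
  final String label;
  final String? left;
  final String? right;
  bool get divergent => left != right;
}

class DuplicateComparisonPanel extends StatelessWidget {
  const DuplicateComparisonPanel({super.key, required this.fields});
  final List<ComparisonField> fields;

  @override
  Widget build(BuildContext context) {
    final highlight = Theme.of(context).colorScheme.errorContainer;
    return ListView(
      children: [
        for (final f in fields)
          Container(
            // Highlight only divergent rows, via a theme token.
            color: f.divergent ? highlight : null,
            child: Row(children: [
              Expanded(child: Text(f.label)),
              Expanded(child: Text(f.left ?? '-')),
              Expanded(child: Text(f.right ?? '-')),
            ]),
          ),
      ],
    );
  }
}
```

This shape also degrades cleanly into the contingency: the five-field fallback is just a shorter `fields` list, so no panel rework is needed later.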