Create Dart model and repository for duplicates
epic-organizational-hierarchy-management-duplicate-detection-task-009 — Implement the Dart data model classes for SuspectedDuplicate, DuplicateDetectionConfig, and ActivityFingerprint. Create the DuplicateActivityDetectorRepository with methods to fetch pending duplicates, update review status, and load organization-specific configuration. Use Supabase client for all database operations.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
Follow the repository pattern already established in the codebase — inject SupabaseClient via constructor. Place models in `lib/features/duplicate_detection/data/models/` and repository in `lib/features/duplicate_detection/data/repositories/`. Use Dart enums with string serialization helpers for DuplicateStatus to safely handle unknown values from the DB (a `fromString` helper that falls back to `pending`; note that Dart enums do not allow factory constructors, so implement it as a static method). For the Supabase query in fetchPendingDuplicates: `.from('suspected_duplicates').select('*').eq('org_id', orgId).eq('status', 'pending').order('detected_at', ascending: false)`.
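The status-enum guidance above can be sketched as follows. This is a minimal illustration, and the non-`pending` status values (`confirmed`, `dismissed`) are assumptions — use whatever values the `suspected_duplicates` schema actually defines:

```dart
enum DuplicateStatus {
  pending,
  confirmed, // assumed value — align with the DB check constraint
  dismissed; // assumed value — align with the DB check constraint

  /// Parses a status string coming from the DB. Unknown or null
  /// values fall back to `pending`, per the task notes, so a schema
  /// change never crashes deserialization.
  static DuplicateStatus fromString(String? value) =>
      DuplicateStatus.values.firstWhere(
        (s) => s.name == value,
        orElse: () => DuplicateStatus.pending,
      );

  /// Serializes back to the DB string representation.
  String toDbString() => name;
}
```

A static method is used instead of a factory constructor because Dart enums only permit const generative constructors.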
Use freezed package if already used in the project for immutable models with copyWith — check existing model files for the pattern in use.
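A possible shape for the repository, assuming the `supabase_flutter` client and a `SuspectedDuplicate.fromJson` constructor on the model (a sketch, not the final implementation — error messages and field names are placeholders):

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Wraps Supabase errors so callers get a readable, domain-specific
/// exception, per the testing requirements below.
class DuplicateRepositoryException implements Exception {
  DuplicateRepositoryException(this.message, [this.cause]);
  final String message;
  final Object? cause;

  @override
  String toString() => 'DuplicateRepositoryException: $message';
}

class DuplicateActivityDetectorRepository {
  // SupabaseClient injected via constructor, per the repository pattern.
  DuplicateActivityDetectorRepository(this._client);
  final SupabaseClient _client;

  Future<List<SuspectedDuplicate>> fetchPendingDuplicates(String orgId) async {
    try {
      final rows = await _client
          .from('suspected_duplicates')
          .select('*')
          .eq('org_id', orgId)
          .eq('status', 'pending')
          .order('detected_at', ascending: false);
      return (rows as List)
          .map((r) => SuspectedDuplicate.fromJson(r as Map<String, dynamic>))
          .toList();
    } on PostgrestException catch (e) {
      throw DuplicateRepositoryException(
          'Failed to fetch pending duplicates: ${e.message}', e);
    }
  }
}
```

An empty result set deserializes to an empty list without throwing, which satisfies test case (2) below.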
Testing Requirements
Write unit tests using flutter_test and a mock Supabase client (mockito or manual stub). Test cases must cover: (1) fetchPendingDuplicates returns correctly deserialized SuspectedDuplicate list, (2) fetchPendingDuplicates with empty result returns empty list without throwing, (3) updateReviewStatus sends correct status and notes payload, (4) loadDetectionConfig returns null when no config row exists for org, (5) any Supabase PostgrestException is rethrown as DuplicateRepositoryException with readable message. Also test all fromJson/toJson round-trips for all three model classes with representative fixture data. Target 90%+ line coverage on models and repository.
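A minimal sketch of the fallback and round-trip tests, assuming the `DuplicateStatus` helper described in the implementation notes (the repository tests against a mocked SupabaseClient would follow the same `group`/`test` structure):

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  group('DuplicateStatus.fromString', () {
    test('parses a known value', () {
      expect(DuplicateStatus.fromString('pending'), DuplicateStatus.pending);
    });

    test('falls back to pending for unknown or null DB values', () {
      expect(DuplicateStatus.fromString('archived'), DuplicateStatus.pending);
      expect(DuplicateStatus.fromString(null), DuplicateStatus.pending);
    });

    test('round-trips through its DB string form', () {
      for (final status in DuplicateStatus.values) {
        expect(DuplicateStatus.fromString(status.toDbString()), status);
      }
    });
  });
}
```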
Fingerprint-based similarity matching may produce high false-positive rates for common activity types (e.g., weekly group sessions with the same participants), causing alert fatigue among coordinators and undermining trust in the detection system.
Mitigation & Contingency
Mitigation: Start with conservative, high-confidence thresholds (exact peer mentor match + same date + same activity type) before adding looser fuzzy matching. Allow NHF administrators to tune thresholds based on observed false-positive rates. Log all detection decisions for retrospective threshold calibration.
Contingency: Introduce a snooze mechanism allowing coordinators to dismiss false positives for a configurable period. Track dismissal rates per activity type and automatically raise the similarity threshold for activity types with high dismissal rates.
A database trigger on the activities insert path adds synchronous overhead to every activity registration. For HLF peer mentors with 380 annual registrations or coordinators doing bulk proxy registration, this could create perceptible latency or lock contention.
Mitigation & Contingency
Mitigation: Implement the trigger as a DEFERRED constraint trigger (fires once at the end of the transaction, just before commit, rather than per row — shortening lock hold times but still adding synchronous work), or replace it with a LISTEN/NOTIFY pattern that queues detection work asynchronously via an Edge Function, completely decoupling detection from the registration write path.
Contingency: Disable the synchronous trigger entirely and rely solely on the scheduled Edge Function for batch detection. Accept a detection delay of up to the scheduling interval (e.g., 15 minutes) in exchange for zero impact on registration latency.
The duplicate detection logic must be validated and approved by NHF before go-live, including agreement on threshold values and the review workflow. NHF stakeholder availability for sign-off may delay this epic's release independently of technical readiness.
Mitigation & Contingency
Mitigation: Gate the feature behind the NHF-specific feature flag so technical deployment can proceed independently of business approval. Involve an NHF administrator in threshold calibration sessions during QA, reducing the formal sign-off surface to policy and workflow rather than technical details.
Contingency: Release the detection system in 'silent mode' — flagging duplicates internally without surfacing notifications to coordinators — until NHF approves the workflow. Use the silent period to collect real data on false-positive rates and refine thresholds before activating notifications.