Priority: critical · Complexity: medium · Domain: backend · Status: pending · Assignee: backend specialist · Tier: 1

Acceptance Criteria

DuplicateDetectionService is implemented with a single primary method: `detectDuplicates({required ActivityDraft draft})` returning `Future<DuplicateDetectionResult>`
Matching algorithm: a candidate is flagged when peerMentorId matches AND activityType matches AND the candidate's date falls within the configurable overlap window (default: same calendar day ± 0 days, configurable to ± N days)
Configurable time overlap window is read from DuplicateDetectionConfig (injected, not hardcoded) to allow per-organization threshold tuning without code changes
Cross-chapter detection: if the peer mentor is registered in multiple NHF chapters (up to 5), the service queries all chapters' activities for that mentor, not just the submitting chapter
Proxy submission detection: if `submittedByUserId != peerMentorUserId` (coordinator submitting on behalf), the service includes proxy-submitted activities in the candidate search
DuplicateDetectionResult carries: `hasDuplicates: bool`, `candidates: List<CandidateDuplicate>`, `detectionStrategy: String` (for audit/debug), `queryDurationMs: int`
Service returns `DuplicateDetectionResult(hasDuplicates: false, candidates: [])` (not an error) when no duplicates found
Service throws `DuplicateDetectionServiceException` wrapping underlying repository errors — never leaks raw Supabase errors to callers
Unit tests cover: exact match, time-window boundary (edge of ± N days), cross-chapter match, proxy submission match, no match, repository error propagation
Service is registered as a Riverpod provider and injectable into DuplicateDetectionBloc
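
The contract described by the criteria above might be sketched as follows (a sketch only: field order, constructor shape, and the stubbed ActivityDraft fields are assumptions, not part of the spec):

```dart
import 'package:riverpod/riverpod.dart';

// Minimal stub so the sketch is self-contained; the real ActivityDraft
// carries more fields.
class ActivityDraft {
  final String peerMentorId;
  final String activityType;
  final DateTime activityDate;
  const ActivityDraft({
    required this.peerMentorId,
    required this.activityType,
    required this.activityDate,
  });
}

class CandidateDuplicate {
  final String activityId;
  final String chapterId;
  final DateTime activityDateUtc;
  // Deliberately no phone/email fields (see security requirements).
  const CandidateDuplicate({
    required this.activityId,
    required this.chapterId,
    required this.activityDateUtc,
  });
}

class DuplicateDetectionResult {
  final bool hasDuplicates;
  final List<CandidateDuplicate> candidates;
  final String detectionStrategy; // human-readable, for audit/debug
  final int queryDurationMs;
  const DuplicateDetectionResult({
    required this.hasDuplicates,
    required this.candidates,
    required this.detectionStrategy,
    required this.queryDurationMs,
  });
}

// Wraps repository errors so raw Supabase errors never reach callers.
class DuplicateDetectionServiceException implements Exception {
  final String message;
  final Object? cause;
  DuplicateDetectionServiceException(this.message, [this.cause]);
}

abstract class DuplicateDetectionService {
  Future<DuplicateDetectionResult> detectDuplicates({
    required ActivityDraft draft,
  });
}

// Riverpod registration point; the concrete binding happens in DI setup.
final duplicateDetectionServiceProvider = Provider<DuplicateDetectionService>(
  (ref) => throw UnimplementedError('bound in DI setup'),
);
```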

Technical Requirements

Frameworks
Flutter
Riverpod
BLoC
APIs
DuplicateCheckRepository (internal)
Data Models
ActivityDraft
CandidateDuplicate
DuplicateDetectionResult
DuplicateDetectionConfig
Performance Requirements
detectDuplicates must complete in < 600ms end-to-end (includes repository query time) for the common single-chapter case
Cross-chapter detection (up to 5 chapters) must complete in < 1500ms — queries should be parallelized with Future.wait, not sequential awaits
Security Requirements
Service must not expose peer mentor data from other chapters beyond what is needed for the duplicate comparison — returned CandidateDuplicate should not include fields like phone or email
DuplicateDetectionConfig thresholds must not be modifiable by peer mentor role users — config is coordinator/admin-level only
Audit log entry must be created for each duplicate detection invocation (peerMentorId, timestamp, candidates found count) for NHF Bufdir compliance
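
The audit entry the last requirement calls for could be as small as the following (a sketch; the JSON key names are assumptions and should follow whatever convention the audit table already uses):

```dart
/// One audit record per detectDuplicates invocation (NHF Bufdir compliance).
class DuplicateDetectionAuditEntry {
  final String peerMentorId;
  final DateTime timestamp;
  final int candidatesFoundCount;

  const DuplicateDetectionAuditEntry({
    required this.peerMentorId,
    required this.timestamp,
    required this.candidatesFoundCount,
  });

  Map<String, dynamic> toJson() => {
        'peer_mentor_id': peerMentorId,
        'timestamp': timestamp.toUtc().toIso8601String(),
        'candidates_found_count': candidatesFoundCount,
      };
}
```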

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Implement the time-window overlap check as a pure function `bool _overlapsWindow(DateTime a, DateTime b, int windowDays)` — pure functions are trivially testable and reusable.

For cross-chapter parallel querying, use `Future.wait(chapterIds.map((id) => _repository.checkForDuplicates(..., chapterId: id)))` rather than sequential awaits — this brings the 5-chapter case from ~2500ms (5 × 500ms) to ~500ms (1 parallel batch).

DuplicateDetectionConfig should be a frozen value object with sensible defaults: `overlapWindowDays = 0` (same calendar day), `requireSameActivityType = true`.

Pitfall: timezone handling — ensure all date comparisons normalize to UTC before comparing, since peer mentors in NHF's 9 regions may submit from different local timezones.
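
The pure window check and the config value object from the notes above might look like this (a sketch; the day-truncation approach to UTC normalization is one reasonable reading of the "same calendar day" rule):

```dart
/// True when [a] and [b] fall within [windowDays] calendar days of each
/// other. Both instants are first truncated to their UTC calendar day, so
/// submissions from different local timezones compare consistently.
bool _overlapsWindow(DateTime a, DateTime b, int windowDays) {
  DateTime dayUtc(DateTime d) {
    final u = d.toUtc();
    return DateTime.utc(u.year, u.month, u.day);
  }

  final diffDays = dayUtc(a).difference(dayUtc(b)).inDays.abs();
  return diffDays <= windowDays;
}

/// Frozen value object with the defaults named in the notes.
class DuplicateDetectionConfig {
  final int overlapWindowDays;
  final bool requireSameActivityType;

  const DuplicateDetectionConfig({
    this.overlapWindowDays = 0, // same calendar day
    this.requireSameActivityType = true,
  });
}
```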

Store activityDate as UTC in the database. Proxy detection: the proxy flag is critical for NHF because coordinators frequently bulk-register on behalf of mentors (see likeperson.md section 2.4), and without proxy detection, coordinator-submitted duplicates are invisible to the system. Implement detectionStrategy as a human-readable string like 'single-chapter-same-day' or 'cross-chapter-proxy' for audit log clarity.

Testing Requirements

Unit tests using flutter_test with a mocked DuplicateCheckRepository (implement MockDuplicateCheckRepository returning controlled fixtures). Test cases:

(1) exact same day + same type → hasDuplicates=true with 1 candidate
(2) different activity type, same day → hasDuplicates=false
(3) date at boundary of overlap window → included
(4) date one day outside overlap window → excluded
(5) proxy submission: submittedByUserId ≠ peerMentorUserId returns proxy-submitted activities in candidates
(6) cross-chapter: mock returns activities from 3 different chapters, all appear in candidates
(7) repository throws → DuplicateDetectionServiceException with wrapped cause
(8) empty repository result → DuplicateDetectionResult with hasDuplicates=false

Performance test: mock repository with 50ms artificial delay; assert detectDuplicates completes within 700ms for the single-chapter case and 1600ms for the 5-chapter parallel case. Achieve 100% branch coverage on the matching algorithm.
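
Cases (3) and (4), the window-boundary pair, might be written like this (a sketch; the window check is inlined here so the file stands alone — in the real suite, import the production function, exposing it with `@visibleForTesting` if it is private):

```dart
import 'package:flutter_test/flutter_test.dart';

// Inlined copy of the pure window check so this sketch is self-contained.
bool overlapsWindow(DateTime a, DateTime b, int windowDays) {
  DateTime dayUtc(DateTime d) {
    final u = d.toUtc();
    return DateTime.utc(u.year, u.month, u.day);
  }

  return dayUtc(a).difference(dayUtc(b)).inDays.abs() <= windowDays;
}

void main() {
  test('date at boundary of overlap window is included', () {
    final a = DateTime.utc(2024, 3, 10);
    final b = DateTime.utc(2024, 3, 12); // exactly windowDays away
    expect(overlapsWindow(a, b, 2), isTrue);
  });

  test('date one day outside overlap window is excluded', () {
    final a = DateTime.utc(2024, 3, 10);
    final b = DateTime.utc(2024, 3, 13); // windowDays + 1 away
    expect(overlapsWindow(a, b, 2), isFalse);
  });
}
```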

Component
Duplicate Detection BLoC (infrastructure, medium complexity)
Epic Risks (2)
Risk 1 (technical): medium impact, high probability

For bulk registration with many participants, running duplicate checks sequentially before surfacing the consolidated summary could introduce a multi-second delay as each peer mentor is checked individually against the RPC. This degrades the bulk submission UX significantly.

Mitigation & Contingency

Mitigation: Issue all duplicate check RPC calls concurrently using Dart's `Future.wait` or a bounded parallel executor (max 5 concurrent calls to avoid Supabase rate limits). The BLoC collects all results and emits a single BulkDuplicateSummary state with the consolidated list.

Contingency: If concurrent RPC calls hit Supabase connection limits or rate limits, implement a batched sequential approach with a progress indicator showing 'Checking participant N of M' so the coordinator understands the delay is expected and bounded.
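
One way to bound concurrency at 5 without pulling in an extra package is to process the participant list in chunks (a sketch; `run` stands in for the per-mentor duplicate check RPC call):

```dart
/// Runs [run] over [items] with at most [maxConcurrent] calls in flight,
/// by awaiting one Future.wait batch at a time. Results preserve input
/// order, so the BLoC can zip them back to participants for the summary.
Future<List<R>> boundedParallel<T, R>(
  List<T> items,
  Future<R> Function(T item) run, {
  int maxConcurrent = 5, // stays under the assumed Supabase rate limit
}) async {
  final results = <R>[];
  for (var i = 0; i < items.length; i += maxConcurrent) {
    final chunk = items.skip(i).take(maxConcurrent).toList();
    results.addAll(await Future.wait(chunk.map(run)));
  }
  return results;
}
```

A fully dynamic pool (starting a new call the moment any slot frees up) would squeeze out a bit more throughput, but chunked batches are simpler and sufficient when per-call latency is roughly uniform.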

Risk 2 (integration): high impact, medium probability

In proxy registration, the peer mentor's ID must be used as the duplicate check parameter, not the coordinator's ID. If the proxy context is not correctly threaded through the BLoC and service layer, duplicate checks will silently run against the wrong person, missing actual duplicates.

Mitigation & Contingency

Mitigation: Define a `SubmissionContext` model that carries the effective `peer_mentor_id` (distinct from `submitter_id`) and pass it explicitly through the BLoC event payload. The DuplicateDetectionService always reads peer_mentor_id from SubmissionContext, never from the authenticated user session.

Contingency: If SubmissionContext threading proves difficult to retrofit into the existing proxy registration BLoC, add an assertion in DuplicateDetectionService that throws a descriptive error when peer_mentor_id is null or matches the coordinator's own ID in a proxy context, making the bug immediately visible in testing.
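
The SubmissionContext model and the contingency guard might be sketched as follows (field names follow the mitigation text; the thrown error type is an assumption):

```dart
/// Carries the effective subject of the duplicate check, distinct from
/// whoever is authenticated. The service reads peerMentorId from here,
/// never from the session.
class SubmissionContext {
  final String peerMentorId; // the mentor being checked
  final String submitterId;  // authenticated user (may be a coordinator)

  const SubmissionContext({
    required this.peerMentorId,
    required this.submitterId,
  });

  bool get isProxy => peerMentorId != submitterId;
}

/// Contingency guard: fail loudly if the proxy context is miswired, so the
/// bug surfaces in testing instead of silently checking the wrong person.
void assertValidProxyContext(SubmissionContext ctx,
    {required bool expectProxy}) {
  if (expectProxy && ctx.peerMentorId == ctx.submitterId) {
    throw StateError(
      'Proxy submission, but peerMentorId equals submitterId: '
      'the duplicate check would run against the coordinator, '
      'not the mentor.',
    );
  }
}
```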