Implement per-participant duplicate detection loop in BulkRegistrationService
epic-proxy-activity-registration-orchestration-task-007 — For each participant entry in BulkRegistrationRequest, call ProxyDuplicateDetectionService with the participant-specific context. Collect all detected conflicts into a BulkConflictSummary without aborting the loop — every participant must be checked even when earlier duplicates are found. The resulting summary must identify which participants are clean and which have conflicts, ready for coordinator review.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 8 - 48 tasks
Can start after Tier 7 completes
Implementation Notes
Implement the loop using Future.wait() on a list of detection futures, not a sequential for-await loop. To avoid overwhelming Supabase with concurrent requests, batch participants into groups of 5 using a simple chunking utility (e.g. `List<List<Participant>> chunks`). Collect results into two lists (conflicts, clean) using a fold or partition operation on the settled futures. Use a Result-style wrapper around each detection call so a thrown error is captured per participant instead of rejecting the combined Future.wait future.
A participant whose detection call throws should be added to conflicts with a special conflict_type: 'detection_error' — this surfaces the issue to the coordinator rather than silently dropping the participant. The cleanParticipants list is the authoritative input to the batch insert step — it must be computed here, not re-derived later.
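The notes above can be sketched as follows. This is a minimal illustration, not the project's actual implementation: `Participant`, `ParticipantConflict`, and `BulkConflictSummary` are simplified stand-ins for the real types, and `detect` abstracts the ProxyDuplicateDetectionService call (returning a conflict type string, or null when clean).

```dart
import 'dart:async';

// Simplified stand-ins for the project's actual types.
class Participant {
  final String id;
  Participant(this.id);
}

class ParticipantConflict {
  final Participant participant;
  final String conflictType;
  ParticipantConflict(this.participant, this.conflictType);
}

class BulkConflictSummary {
  final List<ParticipantConflict> conflicts;
  final List<Participant> cleanParticipants;
  BulkConflictSummary(this.conflicts, this.cleanParticipants);
}

/// Splits [items] into chunks of at most [size] elements.
List<List<T>> chunk<T>(List<T> items, int size) => [
      for (var i = 0; i < items.length; i += size)
        items.sublist(i, i + size > items.length ? items.length : i + size)
    ];

/// Runs detection for every participant, Future.wait-ing batches of 5.
/// [detect] returns a conflict type, or null when the participant is clean.
Future<BulkConflictSummary> detectAll(
  List<Participant> participants,
  Future<String?> Function(Participant) detect,
) async {
  final conflicts = <ParticipantConflict>[];
  final clean = <Participant>[];
  for (final batch in chunk(participants, 5)) {
    // Each future catches its own error, so one failure never rejects the
    // combined Future.wait call or skips the remaining participants.
    final results = await Future.wait(batch.map((p) async {
      try {
        return MapEntry(p, await detect(p));
      } catch (_) {
        return MapEntry(p, 'detection_error');
      }
    }));
    for (final entry in results) {
      if (entry.value == null) {
        clean.add(entry.key);
      } else {
        conflicts.add(ParticipantConflict(entry.key, entry.value!));
      }
    }
  }
  return BulkConflictSummary(conflicts, clean);
}
```

Note that the participant-count invariant (conflicts + clean == total) holds by construction: every settled future lands in exactly one of the two lists.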
Testing Requirements
Write unit tests using flutter_test with a mocked ProxyDuplicateDetectionService. Required test scenarios: (1) all participants clean → BulkConflictSummary has empty conflicts; (2) all participants duplicated → all in conflicts list, none in cleanParticipants; (3) mixed results → correct separation into conflicts and cleanParticipants; (4) one participant's detection throws → treated as conflict with error type, loop continues; (5) participant count invariant holds (conflicts + clean == total); (6) loop does not short-circuit on first conflict. Performance test: mock 50 participants with 100ms detection delay each — verify parallel execution completes significantly faster than 5000ms.
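The performance scenario can be exercised with a plain-Dart sketch like the one below (the real suite would use flutter_test; `runBatched` is a hypothetical helper mirroring the service's batching, shown here so the timing claim is concrete). With 50 mocked participants at 100 ms each, sequential execution would take about 5000 ms, while batches of 5 should finish in roughly 50 / 5 × 100 ≈ 1000 ms.

```dart
import 'dart:async';

/// Hypothetical helper mirroring the service's batching strategy: runs
/// [detect] for [count] mocked participants, Future.wait-ing batches of 5.
Future<void> runBatched(int count, Future<void> Function(int) detect) async {
  for (var start = 0; start < count; start += 5) {
    final end = (start + 5 < count) ? start + 5 : count;
    // Each batch of 5 detections runs concurrently.
    await Future.wait([for (var i = start; i < end; i++) detect(i)]);
  }
}
```

A Stopwatch around a call with `Future.delayed(const Duration(milliseconds: 100))` as the mocked detection then verifies the elapsed time stays well under the 5000 ms sequential bound.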
If the Supabase batch RPC partial-inserts some records before encountering an error and does not roll back cleanly, the bulk service may report failure while orphaned records exist in the database, corrupting reporting data.
Mitigation & Contingency
Mitigation: Wrap the bulk insert in an explicit Supabase transaction via the RPC function. Write an integration test that simulates a mid-batch constraint violation and asserts zero records were written.
Contingency: If a partial-write incident occurs, the registered_by audit field allows identification and deletion of the orphaned records. Implement a coordinator-facing bulk submission status screen to surface any such anomalies.
When a bulk submission of 15 participants has 4 duplicates, the aggregated conflict summary may be too complex for coordinators to process quickly, leading to blanket override decisions that defeat the purpose of duplicate detection.
Mitigation & Contingency
Mitigation: Design the conflict result type to support per-participant override flags, so the UI can present a clear list of conflicting participants with individual cancel/override toggles rather than a single global decision.
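A possible shape for such a result type is sketched below; the names are illustrative, not the project's actual API. The key point is that the resolution flag lives on each conflict entry rather than on the summary as a whole.

```dart
/// How the coordinator resolves a single participant's conflict.
enum ConflictResolution { pending, allowOverride, cancel }

/// One conflicting participant with its own resolution flag, so the UI
/// can offer per-participant cancel/override toggles instead of a single
/// global decision. Names here are hypothetical stand-ins.
class ParticipantConflictEntry {
  final String participantName;
  final String conflictType; // e.g. 'detection_error'
  ConflictResolution resolution;

  ParticipantConflictEntry(this.participantName, this.conflictType)
      : resolution = ConflictResolution.pending;
}
```

Because every entry starts as `pending`, the submit action can refuse to proceed until the coordinator has made an explicit decision for each conflicting participant.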
Contingency: If coordinator usability testing reveals the conflict review screen is too complex, simplify to a 'skip all conflicts and submit the rest' mode as an immediate fallback while a more granular UI is designed.
If the coordinator role check inside proxy-registration-service is inconsistent with the route-level guard, a regression in the guard could allow peer mentors to call the service directly via deep links, submitting records with incorrect attribution.
Mitigation & Contingency
Mitigation: Enforce role authorization at both the route guard level (coordinator-role-guard) and inside each service method independently. Write a security test that calls the service directly with a peer mentor session token and asserts rejection.
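The in-service half of that defence-in-depth could look like the sketch below. This is a minimal illustration under assumed names: the role-lookup callback stands in for however the real service reads the session, and `UnauthorizedError` is a hypothetical exception type.

```dart
/// Hypothetical exception surfaced when a non-coordinator calls the service.
class UnauthorizedError implements Exception {
  final String message;
  UnauthorizedError(this.message);
  @override
  String toString() => 'UnauthorizedError: $message';
}

class ProxyRegistrationService {
  /// Injected session-role lookup (stand-in for the real session access).
  final String Function() currentUserRole;

  ProxyRegistrationService(this.currentUserRole);

  /// Checked inside every service method, independent of the route guard,
  /// so a guard regression alone cannot expose the service.
  void _requireCoordinator() {
    if (currentUserRole() != 'coordinator') {
      throw UnauthorizedError('coordinator role required');
    }
  }

  Future<void> submitBulkRegistration() async {
    _requireCoordinator(); // enforced even if coordinator-role-guard regresses
    // ... proceed with duplicate detection and batch insert ...
  }
}
```

The suggested security test then constructs the service with a peer-mentor role and asserts the call is rejected before any work happens.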
Contingency: If a bypass is discovered, immediately enable the server-side RLS policy as the final enforcement layer and audit any records written during the exposure window using the registered_by field.