Define DuplicateCandidate and DuplicateCheckResult Dart models
epic-duplicate-activity-detection-foundation-task-005 — Implement the Dart model classes DuplicateCandidate and DuplicateCheckResult with full JSON serialization (fromJson/toJson), equality operators, copyWith, and toString. DuplicateCandidate must hold activity metadata, similarity score, and peer mentor context. DuplicateCheckResult must carry the list of candidates, the queried activity ID, and a boolean indicating whether duplicates were found. These models are the shared contract for every layer built on top of them (repository, service, and UI).
Acceptance Criteria
Technical Requirements
Implementation Notes
Keep both models fully immutable (all fields final, no setters). Derive hasDuplicates as a getter on DuplicateCheckResult: `bool get hasDuplicates => candidates.isNotEmpty;` — this eliminates the possibility of the flag being out of sync with the candidates list. For DateTime parsing from the RPC response, use DateTime.parse(json['date'] as String) since Supabase returns dates as ISO 8601 strings. Implement == and hashCode manually using a pattern like `@override bool operator ==(Object other) => identical(this, other) || other is DuplicateCandidate && runtimeType == other.runtimeType && activityId == other.activityId && ...`.
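As a sketch of these notes, a minimal DuplicateCandidate might look like the following. Field names beyond those the task mentions, and the snake_case JSON keys, are assumptions about the RPC contract, not the confirmed schema:

```dart
// Sketch of DuplicateCandidate. Immutable: all fields final, no setters.
class DuplicateCandidate {
  const DuplicateCandidate({
    required this.activityId,
    required this.similarityScore,
    required this.date,
    this.peerMentorName,
  });

  final String activityId;
  final double similarityScore;
  final DateTime date;
  final String? peerMentorName; // optional in the RPC response

  factory DuplicateCandidate.fromJson(Map<String, dynamic> json) =>
      DuplicateCandidate(
        activityId: json['activity_id'] as String,
        // num handles both int and double JSON numbers.
        similarityScore: (json['similarity_score'] as num).toDouble(),
        // Supabase returns dates as ISO 8601 strings.
        date: DateTime.parse(json['date'] as String),
        peerMentorName: json['peer_mentor_name'] as String?,
      );

  Map<String, dynamic> toJson() => {
        'activity_id': activityId,
        'similarity_score': similarityScore,
        'date': date.toIso8601String(),
        'peer_mentor_name': peerMentorName,
      };

  // Note: this common copyWith shape cannot reset peerMentorName back to
  // null (a known limitation of the `??` pattern with nullable fields).
  DuplicateCandidate copyWith({
    String? activityId,
    double? similarityScore,
    DateTime? date,
    String? peerMentorName,
  }) =>
      DuplicateCandidate(
        activityId: activityId ?? this.activityId,
        similarityScore: similarityScore ?? this.similarityScore,
        date: date ?? this.date,
        peerMentorName: peerMentorName ?? this.peerMentorName,
      );

  @override
  bool operator ==(Object other) =>
      identical(this, other) ||
      other is DuplicateCandidate &&
          runtimeType == other.runtimeType &&
          activityId == other.activityId &&
          similarityScore == other.similarityScore &&
          date == other.date &&
          peerMentorName == other.peerMentorName;

  @override
  int get hashCode =>
      Object.hash(activityId, similarityScore, date, peerMentorName);

  @override
  String toString() =>
      'DuplicateCandidate(activityId: $activityId, '
      'similarityScore: $similarityScore)';
}
```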
For the candidates list equality in DuplicateCheckResult, use the listEquals utility from Flutter's foundation library. Place models in a models/ subdirectory within the duplicate_detection feature folder, co-located with the repository and service files for this epic.
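A corresponding sketch of DuplicateCheckResult. The candidate element type is stubbed down to one field so the example is self-contained, and the list comparison is inlined element-wise so the snippet runs without Flutter; the real model should use listEquals as noted above:

```dart
// Minimal stub so the sketch is self-contained; the full model carries
// activity metadata, similarity score, and peer mentor context.
class DuplicateCandidate {
  const DuplicateCandidate(this.activityId);
  final String activityId;

  @override
  bool operator ==(Object other) =>
      other is DuplicateCandidate && other.activityId == activityId;

  @override
  int get hashCode => activityId.hashCode;
}

class DuplicateCheckResult {
  const DuplicateCheckResult({
    required this.queriedActivityId,
    required this.candidates,
  });

  final String queriedActivityId;
  final List<DuplicateCandidate> candidates;

  /// Derived from the list, so it can never drift out of sync.
  bool get hasDuplicates => candidates.isNotEmpty;

  @override
  bool operator ==(Object other) =>
      identical(this, other) ||
      other is DuplicateCheckResult &&
          runtimeType == other.runtimeType &&
          queriedActivityId == other.queriedActivityId &&
          // In Flutter code, prefer listEquals from
          // package:flutter/foundation.dart for this comparison.
          candidates.length == other.candidates.length &&
          Iterable<int>.generate(candidates.length)
              .every((i) => candidates[i] == other.candidates[i]);

  @override
  int get hashCode =>
      Object.hash(queriedActivityId, Object.hashAll(candidates));

  @override
  String toString() =>
      'DuplicateCheckResult(queriedActivityId: $queriedActivityId, '
      'candidates: ${candidates.length})';
}
```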
Testing Requirements
Unit tests (flutter_test) for both model classes:
1. fromJson with a complete JSON map produces a model with all fields correctly set.
2. fromJson with a missing optional field (peerMentorName) does not throw and sets the field to null.
3. toJson produces the exact map expected by the RPC contract (snake_case keys, correct types).
4. Two instances with identical field values are equal (== returns true, hashCode matches).
5. Two instances with one differing field are not equal.
6. copyWith with one overridden field produces a new instance with that field changed and all other fields unchanged.
7. toString contains the activityId and similarityScore for DuplicateCandidate, and the queriedActivityId and candidate count for DuplicateCheckResult.

Run via `flutter test test/unit/models/duplicate_candidate_test.dart test/unit/models/duplicate_check_result_test.dart`.
The `check_activity_duplicates` RPC may not meet the 500ms target on production-scale data if the composite index is not applied correctly or if Supabase RLS evaluation adds unexpected overhead, causing the duplicate check to noticeably delay activity submission.
Mitigation & Contingency
Mitigation: During development, run EXPLAIN ANALYZE on the RPC query against a seeded dataset representative of a large chapter (10,000+ activities). PostgreSQL does not support index hints, so instead shape the query so the planner chooses the composite index, and confirm the query plan in Supabase's SQL editor before merging.
Contingency: If the 500ms target cannot be met with the RPC approach, introduce an async post-submit check: save the activity first and surface the duplicate warning as a follow-up notification, preserving submission speed at the cost of the real-time, pre-submit warning.
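The contingency flow can be sketched as a save-then-notify function; the save, checkDuplicates, and notify callbacks here are hypothetical stand-ins for the real repository and notification services:

```dart
import 'dart:async';

/// Post-submit contingency: the activity is persisted immediately, and the
/// duplicate check runs fire-and-forget so it can never block submission.
Future<String> submitActivity(
  Map<String, dynamic> activity, {
  required Future<String> Function(Map<String, dynamic> activity) save,
  required Future<List<String>> Function(String activityId) checkDuplicates,
  required void Function(String activityId, List<String> duplicateIds) notify,
}) async {
  // Save first: submission latency is unaffected by the duplicate check.
  final activityId = await save(activity);

  // Fire-and-forget: surface a follow-up warning only if duplicates exist.
  // Errors in the check must never surface as a submission failure.
  unawaited(checkDuplicates(activityId).then((duplicateIds) {
    if (duplicateIds.isNotEmpty) notify(activityId, duplicateIds);
  }).catchError((Object _) {}));

  return activityId;
}
```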
RLS policies for the coordinator_duplicate_queue view must correctly scope results to the coordinator's chapters. Incorrect policies could expose duplicate records from other chapters (privacy violation) or hide legitimate duplicates (functional regression).
Mitigation & Contingency
Mitigation: Write explicit integration tests that verify RLS behaviour using at least three distinct coordinator + chapter combinations, including a peer mentor belonging to two chapters. Exercise the policies directly with pgTAP tests run through the Supabase CLI (`supabase test db`), impersonating each coordinator's JWT claims.
Contingency: If RLS proves too complex for the queue view, move the chapter-scoping filter into the DuplicateQueueRepository query layer at the application level, trading database-enforced isolation for application-enforced scoping with full test coverage.
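A minimal sketch of the application-enforced scoping, assuming queue rows carry a `chapter_id` key (the row shape is illustrative):

```dart
/// Contingency: chapter scoping enforced in the repository layer instead of
/// RLS. Keeps only rows belonging to one of the coordinator's chapters.
List<Map<String, dynamic>> scopeToCoordinatorChapters(
  List<Map<String, dynamic>> queueRows,
  Set<String> coordinatorChapterIds,
) =>
    queueRows
        .where((row) => coordinatorChapterIds.contains(row['chapter_id']))
        .toList();
```

In the real DuplicateQueueRepository the same predicate should be pushed into the Supabase query itself (a filter on `chapter_id`) so unscoped rows never cross the network; this in-memory form is the last resort and must be covered by the RLS integration tests.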
Adding the duplicate_reviewed column to the activities table and the composite index requires a migration against a live table. If the migration locks the table for an extended period, it could disrupt active coordinators submitting activities.
Mitigation & Contingency
Mitigation: Use PostgreSQL's `CREATE INDEX CONCURRENTLY` to avoid blocking writes during index creation; note that it cannot run inside a transaction block, so it must be applied outside the standard transactional migration. Add the duplicate_reviewed column with DEFAULT false, which on PostgreSQL 11+ is a metadata-only change requiring no backfill update lock. Schedule the migration during a low-traffic window.
Contingency: If concurrent index creation fails or takes too long, fall back to a smaller partial index scoped to the last 90 days of activities, then expand it incrementally.