Priority: critical | Complexity: low | Area: backend | Status: pending | Assignee: backend specialist | Execution tier: Tier 0

Acceptance Criteria

DuplicateCandidate class is defined with fields: activityId (String), peerMentorId (String), activityTypeId (String), date (DateTime), status (String), similarityScore (double), peerMentorName (String?)
DuplicateCheckResult class is defined with fields: queriedActivityId (String), candidates (List<DuplicateCandidate>), hasDuplicates (bool)
hasDuplicates is a derived getter (candidates.isNotEmpty) — not stored as a separate field — to ensure consistency
DuplicateCandidate.fromJson correctly parses the JSON keys returned by the check_activity_duplicates RPC (snake_case keys mapped to camelCase fields)
DuplicateCandidate.toJson produces a Map<String, dynamic> with snake_case keys matching the RPC contract
DuplicateCheckResult.fromJson parses a map containing queriedActivityId and a candidates JSON array
DuplicateCheckResult.toJson produces a serializable map
Both classes implement == and hashCode using all fields (excluding derived getters)
Both classes implement copyWith returning a new instance with selectively overridden fields
Both classes implement toString returning a readable debug representation
Unit tests (in separate _test.dart files) verify: fromJson round-trip, toJson output, equality, copyWith, and toString for both classes
Models are placed in lib/features/duplicate_detection/models/ or the project's established model directory structure
No external code generation dependencies (e.g., json_serializable) unless already used project-wide — prefer manual implementation for these simple models
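The criteria above can be sketched as a hand-written Dart model. This is a minimal sketch, not the definitive implementation: the snake_case keys (`activity_id`, `peer_mentor_id`, etc.) are assumed from the "snake_case keys mapped to camelCase fields" criterion, and `==`/`hashCode`/`copyWith` are elided here for brevity even though the acceptance criteria require them.

```dart
// Hypothetical sketch of DuplicateCandidate. Field names follow the
// acceptance criteria; JSON keys are the assumed snake_case RPC contract.
class DuplicateCandidate {
  const DuplicateCandidate({
    required this.activityId,
    required this.peerMentorId,
    required this.activityTypeId,
    required this.date,
    required this.status,
    required this.similarityScore,
    this.peerMentorName,
  });

  final String activityId;
  final String peerMentorId;
  final String activityTypeId;
  final DateTime date;
  final String status;
  final double similarityScore;
  final String? peerMentorName; // nullable: may be absent from the payload

  factory DuplicateCandidate.fromJson(Map<String, dynamic> json) =>
      DuplicateCandidate(
        activityId: json['activity_id'] as String,
        peerMentorId: json['peer_mentor_id'] as String,
        activityTypeId: json['activity_type_id'] as String,
        // Supabase returns dates as ISO 8601 strings.
        date: DateTime.parse(json['date'] as String),
        status: json['status'] as String,
        // num -> double covers integer-valued scores in the JSON payload.
        similarityScore: (json['similarity_score'] as num).toDouble(),
        // Nullable cast: does not throw when the key is absent.
        peerMentorName: json['peer_mentor_name'] as String?,
      );

  Map<String, dynamic> toJson() => {
        'activity_id': activityId,
        'peer_mentor_id': peerMentorId,
        'activity_type_id': activityTypeId,
        'date': date.toIso8601String(),
        'status': status,
        'similarity_score': similarityScore,
        'peer_mentor_name': peerMentorName,
      };

  // Deliberately omits peerMentorName (personal data): IDs and score only.
  @override
  String toString() =>
      'DuplicateCandidate(activityId: $activityId, '
      'peerMentorId: $peerMentorId, similarityScore: $similarityScore)';
}
```

Note that `as String?` on the optional key is what keeps a missing `peer_mentor_name` from throwing, per the security requirement below.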

Technical Requirements

Frameworks: Flutter, Dart, flutter_test
APIs: check_activity_duplicates RPC (JSON contract)
Data models: DuplicateCandidate, DuplicateCheckResult, activities
Performance requirements:
fromJson must handle lists of up to 100 candidates without perceptible delay.
Models are immutable (all fields final) to support safe use in BLoC/Riverpod state.
Security requirements:
peerMentorName is nullable and must not throw if absent from JSON (use json['peer_mentor_name'] as String?).
Models must not expose raw personal data in toString output; include activityId and peerMentorId only.

Execution Context

Execution Tier
Tier 0 (440 tasks in this tier)

Implementation Notes

Keep both models fully immutable (all fields final, no setters). Derive hasDuplicates as a getter on DuplicateCheckResult: `bool get hasDuplicates => candidates.isNotEmpty;` — this eliminates the possibility of the flag being out of sync with the candidates list. For DateTime parsing from the RPC response, use DateTime.parse(json['date'] as String) since Supabase returns dates as ISO 8601 strings. Implement == and hashCode manually using a pattern like `@override bool operator ==(Object other) => identical(this, other) || other is DuplicateCandidate && runtimeType == other.runtimeType && activityId == other.activityId && ...`.

For the candidates list equality in DuplicateCheckResult, use the listEquals utility from Flutter's foundation library. Place models in a models/ subdirectory within the duplicate_detection feature folder, co-located with the repository and service files for this epic.
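The derived getter and list-equality notes above can be sketched as follows. This is a hedged sketch, not the final code: a trimmed `DuplicateCandidate` stub is inlined only to keep the sketch self-contained, `fromJson`/`toJson` for the result are elided, and list comparison is written as a manual loop here; in the app itself, `listEquals` from `package:flutter/foundation.dart` does the same job.

```dart
// Trimmed stand-in for the real DuplicateCandidate, included only so
// this sketch compiles on its own.
class DuplicateCandidate {
  const DuplicateCandidate(this.activityId);
  final String activityId;

  @override
  bool operator ==(Object other) =>
      other is DuplicateCandidate && other.activityId == activityId;

  @override
  int get hashCode => activityId.hashCode;
}

class DuplicateCheckResult {
  const DuplicateCheckResult({
    required this.queriedActivityId,
    required this.candidates,
  });

  final String queriedActivityId;
  final List<DuplicateCandidate> candidates;

  // Derived, never stored: cannot drift out of sync with the list.
  bool get hasDuplicates => candidates.isNotEmpty;

  // Element-wise comparison; equivalent to Flutter's listEquals.
  static bool _listEquals(
      List<DuplicateCandidate> a, List<DuplicateCandidate> b) {
    if (a.length != b.length) return false;
    for (var i = 0; i < a.length; i++) {
      if (a[i] != b[i]) return false;
    }
    return true;
  }

  @override
  bool operator ==(Object other) =>
      identical(this, other) ||
      other is DuplicateCheckResult &&
          runtimeType == other.runtimeType &&
          queriedActivityId == other.queriedActivityId &&
          _listEquals(candidates, other.candidates);

  @override
  int get hashCode =>
      Object.hash(queriedActivityId, Object.hashAll(candidates));

  DuplicateCheckResult copyWith({
    String? queriedActivityId,
    List<DuplicateCandidate>? candidates,
  }) =>
      DuplicateCheckResult(
        queriedActivityId: queriedActivityId ?? this.queriedActivityId,
        candidates: candidates ?? this.candidates,
      );

  // Debug-safe: exposes the queried ID and a count, not candidate contents.
  @override
  String toString() =>
      'DuplicateCheckResult(queriedActivityId: $queriedActivityId, '
      'candidates: ${candidates.length})';
}
```

Because `hasDuplicates` is a getter, `copyWith(candidates: [])` automatically yields `hasDuplicates == false` with no second field to update.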

Testing Requirements

Unit tests (flutter_test) for both model classes:
(1) fromJson with a complete JSON map produces a model with all fields correctly set.
(2) fromJson with a missing optional field (peerMentorName) does not throw and sets the field to null.
(3) toJson produces the exact map expected by the RPC contract (snake_case keys, correct types).
(4) Two instances with identical field values are equal (== returns true, hashCode matches).
(5) Two instances with one differing field are not equal.
(6) copyWith with one overridden field produces a new instance with that field changed and all other fields unchanged.
(7) toString contains the activityId and similarityScore for DuplicateCandidate, and queriedActivityId and candidate count for DuplicateCheckResult.
Run via `flutter test test/unit/models/duplicate_candidate_test.dart test/unit/models/duplicate_check_result_test.dart`.
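The numbered cases above can be sketched as plain assertions. In the project itself these live in flutter_test `test()` blocks with `expect(...)`; here a hypothetical three-field model is inlined so the sketch stands alone, covering cases (2), (4), (5), and (6).

```dart
// Hypothetical, trimmed model (three fields) purely for this test sketch;
// the real tests import the full DuplicateCandidate from the feature folder.
class DuplicateCandidate {
  const DuplicateCandidate({
    required this.activityId,
    required this.similarityScore,
    this.peerMentorName,
  });

  final String activityId;
  final double similarityScore;
  final String? peerMentorName;

  factory DuplicateCandidate.fromJson(Map<String, dynamic> json) =>
      DuplicateCandidate(
        activityId: json['activity_id'] as String,
        similarityScore: (json['similarity_score'] as num).toDouble(),
        peerMentorName: json['peer_mentor_name'] as String?,
      );

  DuplicateCandidate copyWith({
    String? activityId,
    double? similarityScore,
    String? peerMentorName,
  }) =>
      DuplicateCandidate(
        activityId: activityId ?? this.activityId,
        similarityScore: similarityScore ?? this.similarityScore,
        peerMentorName: peerMentorName ?? this.peerMentorName,
      );

  @override
  bool operator ==(Object other) =>
      identical(this, other) ||
      other is DuplicateCandidate &&
          activityId == other.activityId &&
          similarityScore == other.similarityScore &&
          peerMentorName == other.peerMentorName;

  @override
  int get hashCode => Object.hash(activityId, similarityScore, peerMentorName);
}

// Each assert below becomes an expect(...) in a flutter_test test() block.
void duplicateCandidateChecks() {
  // (2) missing optional key does not throw and yields null
  final c = DuplicateCandidate.fromJson(
      {'activity_id': 'a1', 'similarity_score': 0.9});
  assert(c.peerMentorName == null);

  // (4) value equality, (5) inequality on one differing field
  assert(c == const DuplicateCandidate(activityId: 'a1', similarityScore: 0.9));
  assert(c != const DuplicateCandidate(activityId: 'a2', similarityScore: 0.9));

  // (6) copyWith changes only the overridden field
  final copy = c.copyWith(similarityScore: 0.5);
  assert(copy.activityId == 'a1' && copy.similarityScore == 0.5);
}
```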

Component
Duplicate Check Repository (data layer, medium)
Epic Risks (3)
Risk 1 of 3 (technical): high impact, medium probability

The `check_activity_duplicates` RPC may not meet the 500ms target on production-scale data if the composite index is not applied correctly or if Supabase RLS evaluation adds unexpected overhead, causing the duplicate check to noticeably delay activity submission.

Mitigation & Contingency

Mitigation: During development, run EXPLAIN ANALYZE on the RPC's query against a seeded dataset representative of a large chapter (10,000+ activities). Confirm the planner uses the composite index (structuring the query so the planner selects it, since stock PostgreSQL has no inline index hints) and verify the query plan in Supabase's SQL editor before merging.

Contingency: If the 500ms target cannot be met with the RPC approach, introduce an async post-submit check pattern where the activity is saved first and the duplicate warning is surfaced as a follow-up notification, preserving submission speed at the cost of real-time blocking UX.

Risk 2 of 3 (security): high impact, medium probability

RLS policies for the coordinator_duplicate_queue view must correctly scope results to the coordinator's chapters. Incorrect policies could expose duplicate records from other chapters (privacy violation) or hide legitimate duplicates (functional regression).

Mitigation & Contingency

Mitigation: Write explicit integration tests that verify RLS behaviour using at least three distinct coordinator + chapter combinations, including a peer mentor belonging to two chapters. Use Supabase's built-in RLS testing utilities.

Contingency: If RLS proves too complex for the queue view, move the chapter-scoping filter into the DuplicateQueueRepository query layer at the application level, trading database-enforced isolation for application-enforced scoping with full test coverage.

Risk 3 of 3 (dependency): medium impact, low probability

Adding the duplicate_reviewed column to the activities table and the composite index requires a migration against a live table. If the migration locks the table for an extended period, it could disrupt active coordinators submitting activities.

Mitigation & Contingency

Mitigation: Use PostgreSQL's `CREATE INDEX CONCURRENTLY` to avoid table lock. Add the duplicate_reviewed column with a DEFAULT false so no backfill update lock is required. Schedule the migration during a low-traffic window.

Contingency: If concurrent index creation fails or takes too long, fall back to a smaller partial index scoped to the last 90 days of activities, then expand it incrementally.