Implement duplicate-reviewed-flag middleware
epic-duplicate-activity-detection-foundation-task-008 — Create the DuplicateReviewedFlagMiddleware class that intercepts every activity insert payload (wizard, bulk registration, proxy submission) and injects the duplicate_reviewed boolean field set to false. The middleware must be applied consistently at the repository layer before any Supabase insert call. Ensure no submission path can bypass this injection by writing a shared activity insert helper that enforces the flag.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
Keep DuplicateReviewedFlagMiddleware as a simple Dart class with no dependencies — it is a pure transformation function. Place it in lib/features/duplicate_detection/middleware/duplicate_reviewed_flag_middleware.dart. The shared insertActivity() helper belongs in the ActivityRepository class; replace all existing direct .insert() calls in ActivityRepository with calls to this helper. Since bulk registration may call insertActivity() in a loop, ensure the middleware does not have any shared mutable state that could cause race conditions.
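A minimal sketch of the shapes described above. The middleware is a pure transformation; the repository helper is shown with the Supabase call abstracted behind an injected function so the sketch stays self-contained, whereas in the real ActivityRepository it would wrap supabase.from('activities').insert(row). Everything beyond the names DuplicateReviewedFlagMiddleware, inject(), and insertActivity() is an assumption.

```dart
class DuplicateReviewedFlagMiddleware {
  /// Pure transformation: returns a new map with duplicate_reviewed
  /// forced to false, overwriting any caller-supplied value. No fields,
  /// no shared mutable state, so looped or concurrent calls are safe.
  Map<String, dynamic> inject(Map<String, dynamic> payload) =>
      {...payload, 'duplicate_reviewed': false};
}

class ActivityRepository {
  ActivityRepository(this._rawInsert);

  // Stand-in for the real Supabase call; keeps the sketch dependency-free.
  final Future<void> Function(Map<String, dynamic> row) _rawInsert;
  final _middleware = DuplicateReviewedFlagMiddleware();

  /// Single choke point for all activity inserts. Wizard, bulk
  /// registration, and proxy submission must all route through here.
  Future<void> insertActivity(Map<String, dynamic> payload) =>
      _rawInsert(_middleware.inject(payload));
}
```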
After implementing, do a project-wide search for '.from('activities').insert(' (check both the single- and double-quoted string variants, since Dart accepts either quote style) to verify there are no remaining bypass points, and document the result in a code comment on the helper method.
Testing Requirements
Write flutter_test unit tests for DuplicateReviewedFlagMiddleware.inject(): (1) payload without duplicate_reviewed key gets it added as false, (2) payload with duplicate_reviewed=false is returned unchanged, (3) payload with duplicate_reviewed=true is overwritten to false, (4) payload with additional arbitrary fields is returned with all fields intact. Write unit tests for insertActivity() helper verifying it always calls middleware.inject() before the Supabase insert. Add integration tests tracing each submission path (wizard, bulk, proxy) to confirm duplicate_reviewed=false is present in the inserted row.
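The four middleware cases above can be sketched as plain assertions; in the project they would live in a flutter_test file using test()/expect(). The class body is repeated inline here only so the sketch runs standalone, and the payload field names are illustrative.

```dart
// Inline copy of the middleware so the sketch is self-contained; in the
// project, import it from the middleware file instead.
class DuplicateReviewedFlagMiddleware {
  Map<String, dynamic> inject(Map<String, dynamic> payload) =>
      {...payload, 'duplicate_reviewed': false};
}

void main() {
  final m = DuplicateReviewedFlagMiddleware();

  // (1) Missing key is added as false.
  assert(m.inject({'type': 'run'})['duplicate_reviewed'] == false);

  // (2) Explicit false passes through as false.
  assert(m.inject({'duplicate_reviewed': false})['duplicate_reviewed'] ==
      false);

  // (3) Explicit true is overwritten to false.
  assert(m.inject({'duplicate_reviewed': true})['duplicate_reviewed'] ==
      false);

  // (4) Arbitrary extra fields survive intact.
  final out = m.inject({'member_id': 'm1', 'minutes': 30});
  assert(out['member_id'] == 'm1' && out['minutes'] == 30);
}
```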
The `check_activity_duplicates` RPC may not meet the 500ms target on production-scale data if the composite index is not applied correctly or if Supabase RLS evaluation adds unexpected overhead, causing the duplicate check to noticeably delay activity submission.
Mitigation & Contingency
Mitigation: Write the RPC with an explicit EXPLAIN ANALYZE in development against a seeded dataset representative of a large chapter (10,000+ activities). PostgreSQL has no native index hints, so instead verify in Supabase's SQL editor that the query plan actually uses the composite index (an Index Scan rather than a Seq Scan) before merging, and restructure the query or the index if it does not.
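A hedged sketch of that check, to run in the Supabase SQL editor against the seeded dataset. The column names below are assumptions; match them to the real duplicate key.

```sql
-- For a PL/pgSQL RPC, EXPLAIN ANALYZE on the function call itself shows
-- only a Function Scan, so analyze the query from the RPC body directly.
EXPLAIN ANALYZE
SELECT id
FROM activities
WHERE member_id = '00000000-0000-0000-0000-000000000000'
  AND activity_date = '2024-06-01'
  AND activity_type = 'run';
-- Expect an Index Scan on the composite index and an execution time
-- comfortably below the 500 ms target.
```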
Contingency: If the 500ms target cannot be met with the RPC approach, introduce an async post-submit check pattern where the activity is saved first and the duplicate warning is surfaced as a follow-up notification, preserving submission speed at the cost of real-time blocking UX.
RLS policies for the coordinator_duplicate_queue view must correctly scope results to the coordinator's chapters. Incorrect policies could expose duplicate records from other chapters (privacy violation) or hide legitimate duplicates (functional regression).
Mitigation & Contingency
Mitigation: Write explicit integration tests that verify RLS behaviour using at least three distinct coordinator + chapter combinations, including a peer mentor belonging to two chapters. Run database-level policy tests with pgTAP via the Supabase CLI (`supabase test db`), impersonating each role.
Contingency: If RLS proves too complex for the queue view, move the chapter-scoping filter into the DuplicateQueueRepository query layer at the application level, trading database-enforced isolation for application-enforced scoping with full test coverage.
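If that contingency is taken, the application-level scoping reduces to a pure filter that is easy to cover exhaustively with tests. A sketch, where scopeToChapters and the chapter_id field are assumed names:

```dart
/// Hypothetical fallback filter for DuplicateQueueRepository: keeps only
/// rows belonging to the coordinator's chapters. Unlike RLS, nothing
/// enforces this at the database, which is why full test coverage is
/// non-negotiable for this path.
List<Map<String, dynamic>> scopeToChapters(
  List<Map<String, dynamic>> rows,
  Set<String> coordinatorChapterIds,
) =>
    rows
        .where((row) => coordinatorChapterIds.contains(row['chapter_id']))
        .toList();
```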
Adding the duplicate_reviewed column to the activities table and the composite index requires a migration against a live table. If the migration locks the table for an extended period, it could disrupt active coordinators submitting activities.
Mitigation & Contingency
Mitigation: Use PostgreSQL's `CREATE INDEX CONCURRENTLY` to avoid an exclusive table lock; note that CONCURRENTLY cannot run inside a transaction block, so the migration must opt out of transactional execution. Add the duplicate_reviewed column with DEFAULT false, which on PostgreSQL 11+ is a metadata-only change requiring no backfill update lock. Schedule the migration during a low-traffic window.
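A sketch of the migration under those constraints; the index name and its column list are illustrative and should match the duplicate key the RPC actually checks.

```sql
-- Metadata-only on PostgreSQL 11+: no table rewrite, no backfill lock.
ALTER TABLE activities
  ADD COLUMN IF NOT EXISTS duplicate_reviewed boolean NOT NULL DEFAULT false;

-- CONCURRENTLY avoids an exclusive lock but cannot run inside a
-- transaction block, so run this statement outside the transactional
-- part of the migration.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_activities_dup_check
  ON activities (member_id, activity_date, activity_type);
```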
Contingency: If concurrent index creation fails or takes too long, fall back to a smaller partial index scoped to the last 90 days of activities, then expand it incrementally.