Create coordinator_duplicate_queue database view
epic-duplicate-activity-detection-foundation-task-003 — Define the coordinator_duplicate_queue view in Supabase that surfaces unresolved duplicate candidates scoped to the authenticated coordinator's chapter. The view must join activity data with duplicate check results, expose the duplicate_reviewed flag, and enforce chapter-level RLS. Include migration script and rollback procedure.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 2 - 518 tasks
Can start after Tier 1 completes
Implementation Notes
Because Supabase PostgREST exposes views as REST endpoints automatically, naming the view coordinator_duplicate_queue makes it queryable from Dart without additional configuration. Use auth.uid() inside the view's WHERE clause to resolve the coordinator's chapter memberships dynamically. Note that PostgreSQL views execute with the view owner's privileges by default and therefore bypass RLS on the underlying tables, and RLS policies cannot be attached to views directly. On PostgreSQL 15+ (which Supabase supports), create the view WITH (security_invoker = true) so the caller's RLS policies on the base tables are enforced. The candidate_count column can be a subquery or a lateral join counting related duplicate candidate records; keep it simple and avoid window functions if they prevent index usage.
Consider adding a created_at ordering so the coordinator sees the oldest unresolved duplicates first, prioritising stale items in the queue.
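The notes above can be sketched as a single view definition. This is a minimal illustration only: the table and column names (activities, duplicate_candidates, chapter_members, and the coordinator role value) are assumptions, not taken from the actual schema.

```sql
-- Illustrative sketch; table/column names are assumptions, not the real schema.
create view coordinator_duplicate_queue
  with (security_invoker = true)  -- run as the caller so base-table RLS applies
as
select
  a.id,
  a.chapter_id,
  a.created_at,
  a.duplicate_reviewed,
  dc.candidate_count
from activities a
cross join lateral (
  -- lateral count of related duplicate candidates, as suggested above
  select count(*) as candidate_count
  from duplicate_candidates d
  where d.activity_id = a.id
) dc
where a.duplicate_reviewed = false
  and a.chapter_id in (
    select cm.chapter_id
    from chapter_members cm
    where cm.user_id = auth.uid()
      and cm.role = 'coordinator'
  )
order by a.created_at asc;  -- oldest unresolved duplicates surface first

-- Rollback procedure:
-- drop view if exists coordinator_duplicate_queue;
```

The rollback is a single DROP VIEW, since the view holds no data of its own.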
Testing Requirements
SQL test cases: (1) Coordinator from Chapter A queries view — receives only Chapter A unresolved duplicates; (2) Coordinator from Chapter B queries same view — receives only Chapter B items; (3) After marking an activity as duplicate_reviewed=true, that activity no longer appears in the view; (4) Empty chapter returns empty set without error; (5) View is accessible via Dart client using supabase.from('coordinator_duplicate_queue').select() without 403/404. Test RLS isolation by creating two test coordinator users assigned to different chapters in the seed data. Run rollback script and confirm view is dropped, then re-run migration and confirm view is recreated correctly.
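Test cases (1) and (2) can be exercised directly in SQL by impersonating each coordinator. On Supabase, auth.uid() is derived from the JWT's sub claim, which can be simulated with the request.jwt.claims setting. The UUIDs below are placeholders for the two seeded test coordinators.

```sql
-- RLS isolation sketch; the two UUIDs are placeholder seed-data user ids.
begin;
set local role authenticated;

-- Impersonate the Chapter A coordinator (auth.uid() reads the JWT 'sub' claim).
set local request.jwt.claims = '{"sub": "00000000-0000-0000-0000-0000000000a1"}';
select * from coordinator_duplicate_queue;  -- expect only Chapter A rows

-- Switch to the Chapter B coordinator and repeat.
set local request.jwt.claims = '{"sub": "00000000-0000-0000-0000-0000000000b1"}';
select * from coordinator_duplicate_queue;  -- expect only Chapter B rows

rollback;
```

Wrapping the checks in a transaction and rolling back keeps the test session from leaking role or claim settings.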
The `check_activity_duplicates` RPC may not meet the 500ms target on production-scale data if the composite index is not applied correctly or if Supabase RLS evaluation adds unexpected overhead, causing the duplicate check to noticeably delay activity submission.
Mitigation & Contingency
Mitigation: Run EXPLAIN ANALYZE in development against a seeded dataset representative of a large chapter (10,000+ activities). PostgreSQL has no index hints, so instead verify in Supabase's SQL editor that the query plan actually uses the composite index, and adjust the query or index definition until it does, before merging.
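A sketch of the plan check, pasting the RPC body's core query into the SQL editor (column names here are illustrative, and the UUID is a placeholder for a seeded chapter):

```sql
-- Expect an Index Scan on the composite index, not a Seq Scan over activities.
explain (analyze, buffers)
select id
from activities
where chapter_id = '11111111-1111-1111-1111-111111111111'  -- placeholder seed id
  and duplicate_reviewed = false;
```

EXPLAIN on the RPC call itself only shows an opaque function scan, which is why the inner query is analyzed directly.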
Contingency: If the 500ms target cannot be met with the RPC approach, introduce an async post-submit check pattern where the activity is saved first and the duplicate warning is surfaced as a follow-up notification, preserving submission speed at the cost of real-time blocking UX.
RLS policies for the coordinator_duplicate_queue view must correctly scope results to the coordinator's chapters. Incorrect policies could expose duplicate records from other chapters (privacy violation) or hide legitimate duplicates (functional regression).
Mitigation & Contingency
Mitigation: Write explicit integration tests that verify RLS behaviour using at least three distinct coordinator + chapter combinations, including a peer mentor belonging to two chapters. Supabase has no dedicated RLS testing utilities; use pgTAP tests run via the Supabase CLI's `supabase test db` command.
Contingency: If RLS proves too complex for the queue view, move the chapter-scoping filter into the DuplicateQueueRepository query layer at the application level, trading database-enforced isolation for application-enforced scoping with full test coverage.
Adding the duplicate_reviewed column to the activities table and the composite index requires a migration against a live table. If the migration locks the table for an extended period, it could disrupt active coordinators submitting activities.
Mitigation & Contingency
Mitigation: Use PostgreSQL's `CREATE INDEX CONCURRENTLY` to avoid holding a write-blocking lock on the table; note that it cannot run inside a transaction block, so it must be its own migration step. Add the duplicate_reviewed column with a DEFAULT false; on PostgreSQL 11+ this is a metadata-only change, so no backfill update lock is required. Schedule the migration during a low-traffic window.
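The migration itself is two statements. The index column list below is an assumption about what the duplicate check filters on; adjust it to match the RPC's actual predicates.

```sql
-- Metadata-only on Postgres 11+: constant default needs no table rewrite.
alter table activities
  add column if not exists duplicate_reviewed boolean not null default false;

-- Must run OUTSIDE a transaction block, as a separate migration step.
create index concurrently if not exists idx_activities_chapter_dup
  on activities (chapter_id, duplicate_reviewed, created_at);
```

Running the ALTER and the concurrent index build as separate migration files avoids the transaction-block restriction most migration runners impose.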
Contingency: If concurrent index creation fails or takes too long, fall back to a smaller partial index scoped to the last 90 days of activities, then expand it incrementally.
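One caveat for the fallback: a partial index predicate must be immutable, so it cannot use `now() - interval '90 days'` directly. A fixed cutoff date that is rolled forward periodically works instead. Index name and columns below are illustrative.

```sql
-- Partial-index fallback: immutable predicate requires a literal cutoff date,
-- re-created periodically, rather than a now()-relative expression.
create index concurrently if not exists idx_activities_dup_recent
  on activities (chapter_id, duplicate_reviewed)
  where created_at >= '2025-01-01';  -- placeholder cutoff; roll forward over time
```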