Implement DuplicateQueueRepository with view queries
epic-duplicate-activity-detection-foundation-task-007 — Build the DuplicateQueueRepository Dart class that queries the coordinator_duplicate_queue view. Expose fetchQueue() returning Stream<List<DuplicateCandidate>> for real-time updates, resolveEntry(candidateId, resolution) to mark items as resolved or dismissed, and getQueueCount() for badge counts. Handle Supabase Realtime subscription lifecycle and RLS-scoped queries.
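A minimal sketch of the surface this describes, assuming a String resolution parameter and the dispose() hook called for in the Implementation Notes below; the DuplicateCandidate fields are placeholders for the view's actual columns:

```dart
/// Placeholder model; real fields should mirror the
/// coordinator_duplicate_queue view's columns.
class DuplicateCandidate {
  DuplicateCandidate({required this.candidateId, required this.status});

  factory DuplicateCandidate.fromJson(Map<String, dynamic> json) =>
      DuplicateCandidate(
        candidateId: json['candidate_id'] as String,
        status: json['status'] as String,
      );

  final String candidateId;
  final String status;
}

abstract class DuplicateQueueRepository {
  /// Realtime-backed stream of the coordinator's duplicate queue.
  Stream<List<DuplicateCandidate>> fetchQueue();

  /// Marks an entry 'resolved' or 'dismissed'.
  Future<void> resolveEntry(String candidateId, String resolution);

  /// Pending-entry count for the badge.
  Future<int> getQueueCount();

  /// Tears down any Realtime channel (see Implementation Notes).
  void dispose();
}
```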
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
Use SupabaseClient.from('coordinator_duplicate_queue').stream(primaryKey: ['candidate_id']) for the Realtime-backed stream; it is the simplest approach, and the Supabase SDK handles reconnects automatically. If the view is not directly streamable (Postgres views have limitations with Realtime), fall back to subscribing to changes on the underlying table and re-querying the view on each event. Manage the RealtimeChannel in a class-level field and call channel.unsubscribe() in a dispose() method exposed on the repository interface. For getQueueCount(), chain the count transform after the filters, e.g. .select('candidate_id').eq('status', 'pending').count(CountOption.exact), and read the count off the response rather than fetching full rows.
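Put together, the notes above might translate into something like the following, assuming supabase_flutter v2 and the model/interface sketched earlier; the update through the view assumes the view is auto-updatable (otherwise target the underlying table):

```dart
import 'dart:async';

import 'package:supabase_flutter/supabase_flutter.dart';

/// Thrown when a queue operation fails; wraps the underlying Supabase error.
class DuplicateQueueException implements Exception {
  DuplicateQueueException(this.message);
  final String message;
  @override
  String toString() => 'DuplicateQueueException: $message';
}

class SupabaseDuplicateQueueRepository implements DuplicateQueueRepository {
  SupabaseDuplicateQueueRepository(this._client);

  final SupabaseClient _client;

  // Held only if the fallback path (table-change subscription plus
  // re-query) is needed; the .stream() path manages its channel internally.
  RealtimeChannel? _channel;

  @override
  Stream<List<DuplicateCandidate>> fetchQueue() {
    return _client
        .from('coordinator_duplicate_queue')
        .stream(primaryKey: ['candidate_id'])
        .map((rows) => rows.map(DuplicateCandidate.fromJson).toList());
  }

  @override
  Future<void> resolveEntry(String candidateId, String resolution) async {
    try {
      // Assumes the view is auto-updatable; otherwise update the
      // underlying table here instead.
      await _client
          .from('coordinator_duplicate_queue')
          .update({'status': resolution})
          .eq('candidate_id', candidateId);
    } on PostgrestException catch (e) {
      throw DuplicateQueueException(e.message);
    }
  }

  @override
  Future<int> getQueueCount() async {
    // Selects a single column and reads the exact count off the response,
    // so full rows are never materialised on the client.
    final res = await _client
        .from('coordinator_duplicate_queue')
        .select('candidate_id')
        .eq('status', 'pending')
        .count(CountOption.exact);
    return res.count;
  }

  @override
  void dispose() {
    final channel = _channel;
    if (channel != null) {
      unawaited(channel.unsubscribe());
      _channel = null;
    }
  }
}
```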
Keep DuplicateCandidateStatus as a Dart enum with a fromString() factory to safely parse DB strings. Riverpod StreamProvider wrapping fetchQueue() should use autoDispose to ensure the channel is cleaned up when the UI navigates away.
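A sketch of the enum and provider wiring; the unknown fallback value and the duplicateQueueRepositoryProvider are assumptions, not part of the task spec:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

enum DuplicateCandidateStatus {
  pending,
  resolved,
  dismissed,
  unknown; // assumed fallback for unrecognised DB strings

  /// Safe parse: never throws on unexpected input.
  factory DuplicateCandidateStatus.fromString(String? value) =>
      DuplicateCandidateStatus.values.firstWhere(
        (s) => s.name == value,
        orElse: () => DuplicateCandidateStatus.unknown,
      );
}

/// Assumed to be wired up in the app's DI layer.
final duplicateQueueRepositoryProvider =
    Provider<DuplicateQueueRepository>((ref) => throw UnimplementedError());

/// autoDispose closes the stream (and with it any Realtime channel) once
/// the last listener detaches, e.g. when the queue screen is popped.
final duplicateQueueProvider =
    StreamProvider.autoDispose<List<DuplicateCandidate>>((ref) {
  final repo = ref.watch(duplicateQueueRepositoryProvider);
  ref.onDispose(repo.dispose);
  return repo.fetchQueue();
});
```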
Testing Requirements
Write flutter_test unit tests using a mocked SupabaseClient. Test cases:

1. fetchQueue emits a list of DuplicateCandidate objects on initial load.
2. fetchQueue emits an updated list on a Realtime INSERT event.
3. fetchQueue emits an updated list on a Realtime UPDATE event (status change).
4. resolveEntry(id, 'resolved') sends the correct UPDATE payload.
5. resolveEntry(id, 'dismissed') sends the correct UPDATE payload.
6. getQueueCount returns the correct integer.
7. getQueueCount returns 0 on an empty result.
8. A Supabase error in resolveEntry throws DuplicateQueueException.

Add an integration test against local Supabase that verifies two things: the Realtime subscription receives an event within 2 seconds of a test INSERT into the underlying table, and RLS blocks cross-chapter reads.
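Stubbing the full Supabase builder chain for cases 1 to 8 is usually done with mocktail fakes and is too long to sketch here; the model-level assertions, against the enum and model sketched earlier, would look roughly like:

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  group('DuplicateCandidateStatus.fromString', () {
    test('parses known DB strings', () {
      expect(DuplicateCandidateStatus.fromString('resolved'),
          DuplicateCandidateStatus.resolved);
      expect(DuplicateCandidateStatus.fromString('dismissed'),
          DuplicateCandidateStatus.dismissed);
    });

    test('falls back instead of throwing on unexpected strings', () {
      expect(DuplicateCandidateStatus.fromString('garbage'),
          DuplicateCandidateStatus.unknown);
    });
  });

  test('DuplicateCandidate.fromJson maps the view columns', () {
    final c = DuplicateCandidate.fromJson(
        {'candidate_id': 'cand-1', 'status': 'pending'});
    expect(c.candidateId, 'cand-1');
    expect(c.status, 'pending');
  });
}
```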
Risks
The `check_activity_duplicates` RPC may not meet the 500ms target on production-scale data if the composite index is not applied correctly or if Supabase RLS evaluation adds unexpected overhead, causing the duplicate check to noticeably delay activity submission.
Mitigation & Contingency
Mitigation: Profile the RPC with EXPLAIN ANALYZE in development against a seeded dataset representative of a large chapter (10,000+ activities). PostgreSQL has no native index hints, so shape the query so the planner selects the composite index, and verify the query plan in Supabase's SQL editor before merging.
Contingency: If the 500ms target cannot be met with the RPC approach, introduce an async post-submit check: save the activity first and surface the duplicate warning as a follow-up notification, preserving submission speed at the cost of an immediate, blocking warning. A rough sketch of that flow follows.
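In Dart, the save-first flow could look like this; submitActivity, notifyDuplicate, and the activity_id RPC parameter name are all hypothetical:

```dart
import 'dart:async';

import 'package:supabase_flutter/supabase_flutter.dart';

/// Hypothetical follow-up notification hook (in-app banner, push, etc.).
void notifyDuplicate(Object? activityId) {}

Future<void> submitActivity(
    SupabaseClient client, Map<String, dynamic> activity) async {
  // Save first: submission latency no longer depends on the duplicate check.
  final inserted = await client
      .from('activities')
      .insert(activity)
      .select('id')
      .single();

  // Fire-and-forget duplicate check; failures are logged rather than
  // surfaced, since the queue view will still pick the duplicate up later.
  unawaited(client
      .rpc('check_activity_duplicates',
          params: {'activity_id': inserted['id']})
      .then((matches) {
    if (matches is List && matches.isNotEmpty) {
      notifyDuplicate(inserted['id']);
    }
  }).catchError((Object e, StackTrace st) {
    // TODO: route to the app's logger.
  }));
}
```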
RLS policies for the coordinator_duplicate_queue view must correctly scope results to the coordinator's chapters. Incorrect policies could expose duplicate records from other chapters (privacy violation) or hide legitimate duplicates (functional regression).
Mitigation & Contingency
Mitigation: Write explicit integration tests that verify RLS behaviour using at least three distinct coordinator and chapter combinations, including a peer mentor belonging to two chapters. SQL-level policy assertions can run as pgTAP tests via the Supabase CLI's `supabase test db` command.
Contingency: If RLS proves too complex for the queue view, move the chapter-scoping filter into the DuplicateQueueRepository query layer at the application level, trading database-enforced isolation for application-enforced scoping with full test coverage.
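If that fallback is taken, the scoping might look like this, assuming the view exposes a chapter_id column and chapterIds comes from the coordinator's session:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Application-enforced scoping: only rows for the coordinator's own
/// chapters are requested. chapter_id as a view column is an assumption.
Future<List<DuplicateCandidate>> fetchQueueForChapters(
    SupabaseClient client, List<String> chapterIds) async {
  final List<Map<String, dynamic>> rows = await client
      .from('coordinator_duplicate_queue')
      .select()
      .inFilter('chapter_id', chapterIds);
  return rows.map(DuplicateCandidate.fromJson).toList();
}
```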
Adding the duplicate_reviewed column and the composite index to the activities table requires a migration against a live table. If the migration locks the table for an extended period, it could disrupt active coordinators submitting activities.
Mitigation & Contingency
Mitigation: Use PostgreSQL's `CREATE INDEX CONCURRENTLY` to avoid holding a table lock. Add the duplicate_reviewed column with DEFAULT false, which on PostgreSQL 11+ is a metadata-only change requiring no backfill update lock. Schedule the migration during a low-traffic window.
Contingency: If concurrent index creation fails or takes too long, fall back to a smaller partial index scoped to the last 90 days of activities, then expand it incrementally.