Write unit and integration tests for repositories
epic-duplicate-activity-detection-foundation-task-011 — Write flutter_test unit tests for DuplicateCheckRepository (mock Supabase RPC responses, test typed exception mapping, test timeout handling) and DuplicateQueueRepository (mock view queries, test stream updates, test resolve/dismiss mutations). Add integration tests against a local Supabase instance verifying the composite index is used and RLS policies block cross-chapter access. Cover the DuplicateReviewedFlagMiddleware injection across all submission paths.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 4 - 323 tasks
Can start after Tier 3 completes
Implementation Notes
Use mocktail (preferred over mockito for null-safe Dart) to mock SupabaseClient and its chained calls (from().select(), rpc()). For mocking the stream in DuplicateQueueRepository tests, use a `StreamController<List<Map<String, dynamic>>>` and emit events manually to simulate Realtime updates. For the timeout test in DuplicateCheckRepository, run the repository call inside a fake_async zone (the fake_async package) so the test completes instantly without real delays. For integration tests, use a dedicated supabase/seed.sql that creates test chapters, users with different roles, and sample activities.
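The three unit-test techniques above can be sketched as follows. This is a minimal illustration, not the real test suite: the `DuplicateRpc` seam, its method name, and the row shapes are hypothetical placeholders for the actual repository API, and wrapping SupabaseClient behind a thin interface avoids stubbing Postgrest builder chains directly.

```dart
import 'dart:async';

import 'package:fake_async/fake_async.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

// Hypothetical seam over SupabaseClient.rpc; adapt names to the real code.
abstract class DuplicateRpc {
  Future<List<Map<String, dynamic>>> checkActivityDuplicates(
      Map<String, dynamic> params);
}

class MockDuplicateRpc extends Mock implements DuplicateRpc {}

void main() {
  setUpAll(() {
    // Required by mocktail so any() can match a non-nullable Map argument.
    registerFallbackValue(<String, dynamic>{});
  });

  test('RPC response is stubbed with when/thenAnswer', () async {
    final rpc = MockDuplicateRpc();
    when(() => rpc.checkActivityDuplicates(any())).thenAnswer(
        (_) async => [{'activity_id': 'a1', 'match_score': 0.92}]);

    final rows = await rpc.checkActivityDuplicates({'chapter_id': 'c1'});
    expect(rows.single['activity_id'], 'a1');
  });

  test('queue stream updates are simulated with a StreamController', () {
    final controller = StreamController<List<Map<String, dynamic>>>();
    expectLater(controller.stream, emits(hasLength(1)));
    // Manually emit an event as if Realtime had pushed a queue update.
    controller.add([{'id': 'dup-1', 'status': 'pending'}]);
    controller.close();
  });

  test('timeout path runs instantly under fakeAsync', () {
    fakeAsync((async) {
      var timedOut = false;
      Future<void>.delayed(const Duration(seconds: 10))
          .timeout(const Duration(milliseconds: 500))
          .catchError((_) => timedOut = true);
      async.elapse(const Duration(milliseconds: 501)); // no real waiting
      expect(timedOut, isTrue);
    });
  });
}
```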
The EXPLAIN ANALYZE index test should query: `SELECT * FROM activities WHERE chapter_id = $1 AND peer_mentor_id = $2 AND activity_date = $3` and parse the query plan for 'Index Scan', failing the test if 'Seq Scan' appears on a table with more than 1000 rows.
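A sketch of the plan check, assuming the column names from the task description (substitute real seeded IDs for the literals):

```sql
-- Development-time plan check; run against a seeded dataset of 10,000+ rows.
EXPLAIN (ANALYZE, FORMAT TEXT)
SELECT *
FROM activities
WHERE chapter_id = 'seeded-chapter-uuid'
  AND peer_mentor_id = 'seeded-mentor-uuid'
  AND activity_date = '2025-01-15';

-- The test parses the output: pass on 'Index Scan' / 'Index Only Scan',
-- fail if 'Seq Scan' appears and the table has more than 1000 rows.
```

Note that the planner may legitimately choose a sequential scan on very small tables, which is why the row-count guard matters.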
Testing Requirements
This task IS the testing task. Structure tests in three groups: (1) Unit tests — mock all Supabase calls using mocktail's when/thenAnswer pattern; run with flutter test; (2) Integration tests — use supabase_flutter connected to a local Supabase Docker instance started via supabase start; seed test data in setUp() and clean up in tearDown(); (3) RLS policy tests — run as SQL scripts via `supabase test db` or as Dart integration tests using two SupabaseClient instances authenticated with different test JWTs. CI pipeline should run unit tests on every PR and integration tests on merge to main. Document the local Supabase setup steps in a TESTING.md comment block at the top of the integration test file.
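The local setup steps to document might look like the following (a sketch assuming the standard Supabase CLI workflow and hypothetical test directory names):

```shell
# Local Supabase setup for integration tests (mirror this in TESTING.md).
supabase start                  # boot the local Docker stack
supabase db reset               # apply migrations and supabase/seed.sql
flutter test test/unit          # unit tests — run on every PR
flutter test integration_test   # integration tests — run on merge to main
supabase test db                # SQL-level RLS policy tests
```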
The `check_activity_duplicates` RPC may not meet the 500ms target on production-scale data if the composite index is not applied correctly or if Supabase RLS evaluation adds unexpected overhead, causing the duplicate check to noticeably delay activity submission.
Mitigation & Contingency
Mitigation: During development, run EXPLAIN ANALYZE on the RPC's query against a seeded dataset representative of a large chapter (10,000+ activities). PostgreSQL does not support index hints, so structure the query in the RPC body so the planner can use the composite index, and verify the query plan in Supabase's SQL editor before merging.
Contingency: If the 500ms target cannot be met with the RPC approach, introduce an async post-submit check pattern where the activity is saved first and the duplicate warning is surfaced as a follow-up notification, preserving submission speed at the cost of real-time blocking UX.
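The contingency pattern can be sketched in Dart as follows; all type and method names here (Activity, ActivityRepository, DuplicateCheckRepository, NotificationService) are placeholders for the app's real types:

```dart
import 'dart:async';

// Placeholder types for illustration only; adapt to the real app models.
class Activity {
  final String id;
  Activity(this.id);
}

abstract class ActivityRepository {
  Future<void> save(Activity a);
}

abstract class DuplicateCheckRepository {
  Future<List<Activity>> check(Activity a);
}

abstract class NotificationService {
  void showDuplicateWarning(String id, List<Activity> matches);
}

/// Contingency: save first, surface the duplicate warning afterwards.
Future<void> submitActivity(
  Activity activity,
  ActivityRepository activities,
  DuplicateCheckRepository duplicates,
  NotificationService notifications,
) async {
  await activities.save(activity); // submission is never blocked

  // Fire-and-forget: a slow or failing duplicate check cannot delay the UX.
  unawaited(duplicates.check(activity).then((matches) {
    if (matches.isNotEmpty) {
      notifications.showDuplicateWarning(activity.id, matches);
    }
  }).catchError((Object _) {
    // Swallow errors: the follow-up warning is best-effort.
  }));
}
```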
RLS policies for the coordinator_duplicate_queue view must correctly scope results to the coordinator's chapters. Incorrect policies could expose duplicate records from other chapters (privacy violation) or hide legitimate duplicates (functional regression).
Mitigation & Contingency
Mitigation: Write explicit integration tests that verify RLS behaviour using at least three distinct coordinator + chapter combinations, including a peer mentor belonging to two chapters. Use Supabase's pgTAP-based database tests (run via `supabase test db`) to assert policy behaviour directly in SQL.
Contingency: If RLS proves too complex for the queue view, move the chapter-scoping filter into the DuplicateQueueRepository query layer at the application level, trading database-enforced isolation for application-enforced scoping with full test coverage.
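A minimal sketch of the application-level scoping fallback, assuming the view and column names from the task description; `coordinatorChapterIds` must come from the authenticated coordinator's session, never from client input:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Contingency: chapter scoping enforced in the repository query layer
/// instead of RLS. This trades database-enforced isolation for
/// application-enforced scoping, so it needs full test coverage.
Future<List<Map<String, dynamic>>> fetchDuplicateQueue(
    SupabaseClient client, List<String> coordinatorChapterIds) async {
  return client
      .from('coordinator_duplicate_queue')
      .select()
      .inFilter('chapter_id', coordinatorChapterIds);
}
```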
Adding the duplicate_reviewed column to the activities table and the composite index requires a migration against a live table. If the migration locks the table for an extended period, it could disrupt active coordinators submitting activities.
Mitigation & Contingency
Mitigation: Use PostgreSQL's `CREATE INDEX CONCURRENTLY` to avoid holding a long table lock (note it cannot run inside a transaction block, so the migration must disable transaction wrapping). Add the duplicate_reviewed column with a DEFAULT false; on PostgreSQL 11+ this is a metadata-only change, so no backfill update lock is required. Schedule the migration during a low-traffic window.
Contingency: If concurrent index creation fails or takes too long, fall back to a smaller partial index scoped to the last 90 days of activities, then expand it incrementally.
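The migration and its partial-index fallback might look like this sketch; the index names are hypothetical, and the column/table names follow the task description:

```sql
-- Must run outside a transaction block because of CONCURRENTLY.
ALTER TABLE activities
  ADD COLUMN IF NOT EXISTS duplicate_reviewed boolean NOT NULL DEFAULT false;

CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_activities_duplicate_check
  ON activities (chapter_id, peer_mentor_id, activity_date);

-- Contingency: partial index over recent rows only. The predicate must be
-- immutable, so a literal cutoff date is required (now() is not allowed in
-- an index predicate); re-create or expand the index on a schedule.
-- CREATE INDEX CONCURRENTLY idx_activities_duplicate_check_recent
--   ON activities (chapter_id, peer_mentor_id, activity_date)
--   WHERE activity_date >= DATE '2025-01-01';
```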