Priority: high | Complexity: medium | Type: testing | Status: pending | Assignee: testing specialist | Tier 4

Acceptance Criteria

Unit tests for DuplicateCheckRepository achieve 100% branch coverage on checkForDuplicates() and markAsReviewed()
Unit tests cover: successful RPC with duplicates found, successful RPC with no duplicates, RPC PostgrestException → DuplicateCheckRpcException mapping, timeout → DuplicateCheckTimeoutException mapping, markAsReviewed success, markAsReviewed DB error → DuplicateCheckRpcException
Unit tests for DuplicateQueueRepository cover: fetchQueue emits correct DuplicateCandidate list on initial load, stream emits updated list on simulated Realtime event, resolveEntry sends 'resolved' mutation, resolveEntry sends 'dismissed' mutation, getQueueCount returns correct integer, getQueueCount returns 0 on empty result, Supabase error → DuplicateQueueException
Unit tests for DuplicateReviewedFlagMiddleware cover: inject adds false when key absent, inject preserves false when already false, inject overwrites true with false, inject preserves all other payload fields
Integration test verifies: inserting an activity via insertActivity() results in duplicate_reviewed=false in the database row
Integration test verifies: cross-chapter RLS block — coordinator from chapter A cannot read chapter B queue entries
Integration test verifies: peer mentor cannot UPDATE duplicate_reviewed on any row
Integration test verifies: the composite index on activities (chapter_id, peer_mentor_id, activity_date) is used for the duplicate check query (EXPLAIN ANALYZE output shows Index Scan)
All tests pass in CI without a live Supabase connection (unit tests) and with a local Supabase Docker instance (integration tests)
Test files follow the naming convention {class_name}_test.dart and are placed in the test/features/duplicate_detection/ directory
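The exception-mapping criteria above can be sketched as a mocktail unit test. The class and exception names come from this spec; the constructor-injection shape and the rpc call signature are assumptions to be adapted to the real implementation:

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

class MockSupabaseClient extends Mock implements SupabaseClient {}

void main() {
  group('DuplicateCheckRepository', () {
    late MockSupabaseClient client;
    late DuplicateCheckRepository repo; // constructor injection assumed

    setUp(() {
      client = MockSupabaseClient();
      repo = DuplicateCheckRepository(client);
    });

    test('maps PostgrestException to DuplicateCheckRpcException', () async {
      // Any PostgrestException from the RPC should surface as the typed
      // domain exception, never leak the raw Supabase error.
      when(() => client.rpc('check_activity_duplicates',
              params: any(named: 'params')))
          .thenThrow(PostgrestException(message: 'rpc failed'));

      await expectLater(
        () => repo.checkForDuplicates({'chapter_id': 'c1'}),
        throwsA(isA<DuplicateCheckRpcException>()),
      );
    });
  });
}
```

The same when/thenAnswer pattern covers the happy paths by returning canned duplicate rows instead of throwing.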

Technical Requirements

frameworks
Flutter
flutter_test
mocktail (preferred, see Implementation Notes) or mockito
Supabase local Docker (integration tests)
apis
check_activity_duplicates Supabase RPC (mocked in unit, real in integration)
coordinator_duplicate_queue view (mocked in unit, real in integration)
classes under test
DuplicateCheckRepository
DuplicateQueueRepository
DuplicateReviewedFlagMiddleware
data models
DuplicateCandidate
DuplicateCheckResult
performance requirements
Unit test suite must complete in under 30 seconds (no real network calls)
Integration test suite must complete in under 5 minutes against local Supabase Docker
security requirements
Integration tests must use separate test user accounts for coordinator-A, coordinator-B, and peer-mentor roles — never reuse service role key for RLS tests
Test database must be seeded with isolated chapter data; tear down after each test to prevent cross-test contamination
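A minimal setUp/tearDown isolation pattern for the seeding requirement above — `adminClient` is an assumed service-role client used only for seeding and cleanup, never for the RLS assertions themselves (per the rule against reusing the service role key in RLS tests):

```dart
// Assumes `adminClient` (service role, seeding only) and the table names
// from this epic; the column names are illustrative.
late String chapterId;

void registerIsolationHooks() {
  setUp(() async {
    // Seed a uniquely named chapter so parallel/repeated tests never share rows.
    final row = await adminClient
        .from('chapters')
        .insert({
          'name': 'test-chapter-${DateTime.now().microsecondsSinceEpoch}',
        })
        .select()
        .single();
    chapterId = row['id'] as String;
  });

  tearDown(() async {
    // Delete children before parents to satisfy foreign keys and prevent
    // cross-test contamination.
    await adminClient.from('activities').delete().eq('chapter_id', chapterId);
    await adminClient.from('chapters').delete().eq('id', chapterId);
  });
}
```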

Execution Context

Execution Tier
Tier 4

Tier 4 - 323 tasks

Can start after Tier 3 completes

Implementation Notes

Use mocktail (preferred over mockito for null-safe Dart) to mock SupabaseClient and its chained calls (from().select(), rpc()). To mock the stream in DuplicateQueueRepository tests, use a StreamController<List<Map<String, dynamic>>> and emit events manually to simulate Realtime updates. For the timeout test in DuplicateCheckRepository, wrap the repository call in a fake async zone via the fake_async package so the test runs instantly without real delays. For integration tests, use a dedicated supabase/seed.sql that creates test chapters, users with different roles, and sample activities.
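The two techniques above can be sketched as follows. The raw payload type is a guess at what Supabase Realtime delivers before the repository maps rows to DuplicateCandidate, and the 500ms deadline is assumed from the epic's performance target:

```dart
import 'dart:async';

import 'package:fake_async/fake_async.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('queue stream re-emits on a simulated Realtime event', () async {
    // Inject this controller's stream wherever the repository would read the
    // Supabase Realtime stream (the exact seam depends on your mock setup).
    final realtime = StreamController<List<Map<String, dynamic>>>();
    final emissions = <List<Map<String, dynamic>>>[];
    final sub = realtime.stream.listen(emissions.add);

    realtime.add([{'candidate_id': 'a1'}]); // initial load
    realtime.add([
      {'candidate_id': 'a1'},
      {'candidate_id': 'a2'},
    ]); // simulated Realtime update

    await Future<void>.delayed(Duration.zero); // let listeners run
    expect(emissions, hasLength(2));
    await sub.cancel();
    await realtime.close();
  });

  test('timeout path runs instantly under fake_async', () {
    fakeAsync((async) {
      Object? caught;
      // A future that never completes stands in for a hung RPC.
      Completer<void>().future
          .timeout(const Duration(milliseconds: 500)) // assumed deadline
          .catchError((Object e) {
        caught = e;
      });

      async.elapse(const Duration(seconds: 1)); // no real wall-clock wait
      expect(caught, isA<TimeoutException>());
    });
  });
}
```

In the real repository test, the TimeoutException would be caught and asserted as DuplicateCheckTimeoutException.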

The index test should run EXPLAIN ANALYZE on: SELECT * FROM activities WHERE chapter_id = $1 AND peer_mentor_id = $2 AND activity_date = $3, then parse the query plan for 'Index Scan' — fail the test if 'Seq Scan' appears on a table with more than 1,000 rows.
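One way to implement this plan check from a Dart integration test — `exec_sql` is a hypothetical test-only helper RPC for running raw SQL, since the Supabase client does not expose one by default, and the literal parameter values are placeholders for seeded ids:

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

Future<void> assertUsesCompositeIndex(SupabaseClient db) async {
  // 'exec_sql' is a hypothetical RPC created only in the test database.
  final plan = await db.rpc('exec_sql', params: {
    'sql': "EXPLAIN ANALYZE SELECT * FROM activities "
        "WHERE chapter_id = 'c1' AND peer_mentor_id = 'p1' "
        "AND activity_date = '2024-05-01'",
  });

  final planText = plan.toString();
  expect(planText, contains('Index Scan'));
  expect(planText, isNot(contains('Seq Scan on activities')));
}
```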

Testing Requirements

This task IS the testing task. Structure tests in three groups:

(1) Unit tests — mock all Supabase calls using mocktail's when/thenAnswer pattern; run with flutter test.
(2) Integration tests — use supabase_flutter connected to a local Supabase Docker instance started via supabase start; seed test data in setUp() and clean up in tearDown().
(3) RLS policy tests — run as SQL scripts via supabase db test, or as Dart integration tests using two SupabaseClient instances authenticated with different test JWTs.

The CI pipeline should run unit tests on every PR and integration tests on merge to main. Document the local Supabase setup steps in a TESTING.md-style comment block at the top of the integration test file.
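The two-client RLS pattern can be sketched as below. The local URL, anon key, test emails, and seeded chapter id are assumptions (created by supabase/seed.sql); anon-key clients are used so RLS actually applies:

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

Future<void> main() async {
  // Two separate clients, each signed in as a different seeded test user.
  final coordinatorA = SupabaseClient('http://localhost:54321', 'anon-key');
  final coordinatorB = SupabaseClient('http://localhost:54321', 'anon-key');
  await coordinatorA.auth
      .signInWithPassword(email: 'coord-a@test.local', password: 'password');
  await coordinatorB.auth
      .signInWithPassword(email: 'coord-b@test.local', password: 'password');

  test('coordinator B cannot read chapter A queue entries', () async {
    final rows = await coordinatorB
        .from('coordinator_duplicate_queue')
        .select()
        .eq('chapter_id', 'chapter-a-id'); // seeded id assumed

    // RLS should silently filter the rows, not raise an error.
    expect(rows, isEmpty);
  });
}
```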

Component
Duplicate Check Repository
Category: data | Complexity: medium
Dependencies (4)
Build the DuplicateCheckRepository Dart class that wraps the check_activity_duplicates Supabase RPC. Expose checkForDuplicates(activityParams) returning Future<DuplicateCheckResult> and markAsReviewed(activityId) for updating the duplicate_reviewed flag. Include typed exception classes for RPC failures (DuplicateCheckRpcException, DuplicateCheckTimeoutException). Wire to the SupabaseClient via dependency injection. (epic-duplicate-activity-detection-foundation-task-006)

Build the DuplicateQueueRepository Dart class that queries the coordinator_duplicate_queue view. Expose fetchQueue() returning Stream<List<DuplicateCandidate>> for real-time updates, resolveEntry(candidateId, resolution) to mark items as resolved or dismissed, and getQueueCount() for badge counts. Handle Supabase Realtime subscription lifecycle and RLS-scoped queries. (epic-duplicate-activity-detection-foundation-task-007)

Write and apply RLS policies on the activities table (for the duplicate_reviewed column) and the coordinator_duplicate_queue view scoped to chapter membership. Coordinators may only read and update records belonging to their chapter. Peer mentors may only read their own duplicate_reviewed status. Include policy migration scripts and verify policy enforcement with test role impersonation queries in Supabase. (epic-duplicate-activity-detection-foundation-task-009)

Create the DuplicateReviewedFlagMiddleware class that intercepts every activity insert payload (wizard, bulk registration, proxy submission) and injects the duplicate_reviewed boolean field set to false. The middleware must be applied consistently at the repository layer before any Supabase insert call. Ensure no submission path can bypass this injection by writing a shared activity insert helper that enforces the flag. (epic-duplicate-activity-detection-foundation-task-008)
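The flag-injection behavior and shared insert helper described above can be sketched as follows; the class name comes from the spec, while the method signatures are assumptions:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

class DuplicateReviewedFlagMiddleware {
  /// Returns a copy of [payload] with duplicate_reviewed forced to false:
  /// adds the key when absent, and overwrites any caller-supplied true.
  /// All other payload fields are preserved unchanged.
  Map<String, dynamic> inject(Map<String, dynamic> payload) {
    return {...payload, 'duplicate_reviewed': false};
  }
}

/// Shared insert helper: every submission path (wizard, bulk, proxy) goes
/// through this function, so none can bypass the flag injection.
Future<void> insertActivity(
  SupabaseClient db,
  DuplicateReviewedFlagMiddleware middleware,
  Map<String, dynamic> payload,
) async {
  await db.from('activities').insert(middleware.inject(payload));
}
```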
Epic Risks (3)
Impact: high | Probability: medium | Category: technical

The `check_activity_duplicates` RPC may not meet the 500ms target on production-scale data if the composite index is not applied correctly or if Supabase RLS evaluation adds unexpected overhead, causing the duplicate check to noticeably delay activity submission.

Mitigation & Contingency

Mitigation: Write the RPC with an explicit EXPLAIN ANALYZE in development against a seeded dataset representative of a large chapter (10,000+ activities). Pin the index hint in the RPC body and verify the query plan in Supabase's SQL editor before merging.

Contingency: If the 500ms target cannot be met with the RPC approach, introduce an async post-submit check pattern where the activity is saved first and the duplicate warning is surfaced as a follow-up notification, preserving submission speed at the cost of real-time blocking UX.

Impact: high | Probability: medium | Category: security

RLS policies for the coordinator_duplicate_queue view must correctly scope results to the coordinator's chapters. Incorrect policies could expose duplicate records from other chapters (privacy violation) or hide legitimate duplicates (functional regression).

Mitigation & Contingency

Mitigation: Write explicit integration tests that verify RLS behaviour using at least three distinct coordinator + chapter combinations, including a peer mentor belonging to two chapters. Use Supabase's built-in RLS testing utilities.

Contingency: If RLS proves too complex for the queue view, move the chapter-scoping filter into the DuplicateQueueRepository query layer at the application level, trading database-enforced isolation for application-enforced scoping with full test coverage.

Impact: medium | Probability: low | Category: dependency

Adding the duplicate_reviewed column to the activities table and the composite index requires a migration against a live table. If the migration locks the table for an extended period, it could disrupt active coordinators submitting activities.

Mitigation & Contingency

Mitigation: Use PostgreSQL's `CREATE INDEX CONCURRENTLY` to avoid table lock. Add the duplicate_reviewed column with a DEFAULT false so no backfill update lock is required. Schedule the migration during a low-traffic window.

Contingency: If concurrent index creation fails or takes too long, fall back to a smaller partial index scoped to the last 90 days of activities, then expand it incrementally.