Priority: critical | Complexity: medium | Domain: backend | Status: pending | Assignee: backend specialist | Tier: 2

Acceptance Criteria

DuplicateCheckRepository class is implemented as an abstract interface + concrete implementation pair to allow mocking in tests
checkForDuplicates(activityParams) calls the check_activity_duplicates Supabase RPC and returns Future<DuplicateCheckResult> with typed fields: isDuplicate (bool), candidateIds (List<String>), similarityScore (double)
markAsReviewed(activityId) executes an UPDATE on the activities table setting duplicate_reviewed = true for the given activityId, scoped to the authenticated user's chapter via RLS
DuplicateCheckRpcException is thrown when the Supabase RPC returns a non-success PostgrestException, containing the original error code and message
DuplicateCheckTimeoutException is thrown when the RPC call exceeds a configurable timeout (default 10 seconds)
All network errors are caught and rethrown as typed domain exceptions — no raw PostgrestException or SocketException leaks to callers
SupabaseClient is injected via constructor, never instantiated inside the class
Repository method signatures match the interface contract defined in task-002
activityParams maps to the RPC parameter schema without nullable mismatches
Repository is registered and accessible via Riverpod (wired in task-010)
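The criteria above can be sketched as an abstract contract. This is a minimal sketch, not the definitive interface: the method and field names follow the acceptance criteria, but the factory shape and the snake_case response keys are assumptions pending the interface contract defined in task-002.

```dart
/// Result of the check_activity_duplicates RPC. Field names mirror the
/// acceptance criteria; the response-key names are assumptions.
class DuplicateCheckResult {
  final bool isDuplicate;
  final List<String> candidateIds;
  final double similarityScore;

  const DuplicateCheckResult({
    required this.isDuplicate,
    required this.candidateIds,
    required this.similarityScore,
  });

  /// Null-safe mapping from the raw RPC response Map.
  factory DuplicateCheckResult.fromMap(Map<String, dynamic> map) {
    return DuplicateCheckResult(
      isDuplicate: map['is_duplicate'] as bool? ?? false,
      candidateIds:
          (map['candidate_ids'] as List?)?.cast<String>() ?? const [],
      similarityScore: (map['similarity_score'] as num?)?.toDouble() ?? 0.0,
    );
  }
}

/// Abstraction the BLoC layer depends on; the concrete Supabase
/// implementation sits behind this interface so tests can mock it.
abstract class IDuplicateCheckRepository {
  Future<DuplicateCheckResult> checkForDuplicates(
      DuplicateCheckParams activityParams);
  Future<void> markAsReviewed(String activityId);
}
```

DuplicateCheckParams is the params model named under Data Models; its exact fields are defined elsewhere and are not assumed here.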

Technical Requirements

Frameworks
Flutter
Riverpod
Supabase Dart SDK
APIs
check_activity_duplicates Supabase RPC
Supabase REST (UPDATE activities.duplicate_reviewed)
Data Models
Activity
DuplicateCheckResult
DuplicateCheckParams
Performance Requirements
RPC call must complete within 10 seconds; configurable timeout via constructor parameter
checkForDuplicates must not block the UI thread — always called from a BLoC/Cubit async method
No redundant RPC calls: callers are responsible for debouncing; repository itself performs no caching
Security Requirements
SupabaseClient must be authenticated before any call; repository must not bypass RLS
activityId validated as non-empty UUID before sending UPDATE to prevent malformed queries
Exception messages must not expose raw SQL or internal Supabase error details to the UI layer
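The activityId validation requirement can be met with a simple guard before issuing the UPDATE. The regex and the choice of ArgumentError are illustrative assumptions; swap in the project's preferred validation utility if one exists.

```dart
/// Matches the canonical 8-4-4-4-12 UUID format (case-insensitive).
/// Assumption: activity IDs are standard UUIDs; adjust if the schema differs.
final RegExp _uuidPattern = RegExp(
  r'^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-'
  r'[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$',
);

/// Throws before any network call if the id is empty or malformed,
/// preventing a pointless round trip and malformed queries.
void ensureValidActivityId(String activityId) {
  if (activityId.isEmpty || !_uuidPattern.hasMatch(activityId)) {
    throw ArgumentError.value(
        activityId, 'activityId', 'must be a non-empty UUID');
  }
}
```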

Execution Context

Execution Tier
Tier 2

Tier 2 - 518 tasks

Can start after Tier 1 completes

Implementation Notes

Define an abstract IDuplicateCheckRepository interface first so the BLoC layer depends on the abstraction, not the concrete Supabase implementation. Use supabase_flutter's SupabaseClient.rpc() method for the RPC call; map the returned Map to DuplicateCheckResult using a factory constructor with null-safe field access. Wrap the RPC Future in Future.timeout() to enforce the configurable deadline. For markAsReviewed, use client.from('activities').update({'duplicate_reviewed': true}).eq('id', activityId) — RLS ensures the user can only update records in their chapter.
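The notes above can be sketched as a concrete implementation. This is a sketch under stated assumptions: the class name is illustrative, `activityParams.toJson()` is an assumed serialization method, and the exact builder types returned by the Supabase Dart SDK may require minor adjustments.

```dart
import 'dart:async'; // for TimeoutException

import 'package:supabase_flutter/supabase_flutter.dart';

class SupabaseDuplicateCheckRepository implements IDuplicateCheckRepository {
  SupabaseDuplicateCheckRepository(
    this._client, {
    this.timeout = const Duration(seconds: 10),
  });

  final SupabaseClient _client; // injected via constructor, never created here
  final Duration timeout; // configurable RPC deadline (default 10 s)

  @override
  Future<DuplicateCheckResult> checkForDuplicates(
      DuplicateCheckParams activityParams) async {
    try {
      final response = await _client
          .rpc('check_activity_duplicates', params: activityParams.toJson())
          .timeout(timeout);
      return DuplicateCheckResult.fromMap(response as Map<String, dynamic>);
    } on TimeoutException {
      throw DuplicateCheckTimeoutException(timeout);
    } on PostgrestException catch (e) {
      // Preserve the original code/message without leaking raw SQL upward.
      throw DuplicateCheckRpcException(code: e.code, message: e.message);
    }
  }

  @override
  Future<void> markAsReviewed(String activityId) async {
    try {
      await _client
          .from('activities')
          .update({'duplicate_reviewed': true})
          .eq('id', activityId); // RLS scopes the update to the user's chapter
    } on PostgrestException catch (e) {
      throw DuplicateCheckRpcException(code: e.code, message: e.message);
    }
  }
}
```

Wrapping the awaited builder in `.timeout()` satisfies the configurable-deadline requirement without adding retry logic, which per the notes below belongs in the service/BLoC layer.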

Keep exception classes in a dedicated exceptions.dart file inside the repository layer so they can be imported by both repository and BLoC layers without circular dependencies. Do not add retry logic here; retries belong in the service/BLoC layer.
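The dedicated exceptions file could look like the following sketch. The `safeMessage` getter is one possible way to satisfy the security requirement that raw SQL and Supabase internals never reach the UI layer; the base-class shape is an assumption.

```dart
/// exceptions.dart — repository-layer domain exceptions.
/// Base type so callers can catch all duplicate-check failures at once.
abstract class DuplicateCheckException implements Exception {
  /// User-safe text: no SQL, no Supabase internals.
  String get safeMessage;
}

class DuplicateCheckRpcException implements DuplicateCheckException {
  DuplicateCheckRpcException({this.code, required this.message});

  /// Original Postgrest error code and message, retained for logging only.
  final String? code;
  final String message;

  @override
  String get safeMessage => 'Duplicate check failed. Please try again.';

  @override
  String toString() => 'DuplicateCheckRpcException(code: $code)';
}

class DuplicateCheckTimeoutException implements DuplicateCheckException {
  DuplicateCheckTimeoutException(this.timeout);

  final Duration timeout;

  @override
  String get safeMessage => 'Duplicate check timed out.';

  @override
  String toString() =>
      'DuplicateCheckTimeoutException(${timeout.inSeconds}s)';
}
```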

Testing Requirements

Write flutter_test unit tests using a mocked SupabaseClient (via mockito or mocktail). Test cases must cover: (1) successful RPC response with isDuplicate=true and candidate list, (2) successful RPC response with isDuplicate=false and empty candidates, (3) PostgrestException from RPC maps to DuplicateCheckRpcException with correct code, (4) timeout scenario maps to DuplicateCheckTimeoutException, (5) markAsReviewed succeeds silently on valid activityId, (6) markAsReviewed throws DuplicateCheckRpcException on DB error. Aim for 100% branch coverage on both public methods. Integration test against local Supabase instance verifies the RPC exists and returns the expected response shape.
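One of the listed cases (case 3) can be sketched with mocktail as follows, assuming a concrete `SupabaseDuplicateCheckRepository` class (hypothetical name) behind the interface and a hypothetical `someParams` fixture. Note the hedge in the comment: in the real SDK, `rpc()` returns a builder that implements Future, so stubbing may need a mocked builder type or a thin adapter rather than a direct `thenThrow`.

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

class MockSupabaseClient extends Mock implements SupabaseClient {}

void main() {
  late MockSupabaseClient client;
  late SupabaseDuplicateCheckRepository repository;

  setUp(() {
    client = MockSupabaseClient();
    repository = SupabaseDuplicateCheckRepository(client);
  });

  test('PostgrestException from RPC maps to DuplicateCheckRpcException',
      () async {
    // Assumption: rpc() can be stubbed directly. In practice the SDK returns
    // a PostgrestFilterBuilder, so a mocked builder or adapter may be needed.
    when(() => client.rpc('check_activity_duplicates',
            params: any(named: 'params')))
        .thenThrow(const PostgrestException(message: 'boom', code: '42883'));

    await expectLater(
      () => repository.checkForDuplicates(someParams), // hypothetical fixture
      throwsA(isA<DuplicateCheckRpcException>()
          .having((e) => e.code, 'code', '42883')),
    );
  });
}
```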

Component
Duplicate Check Repository
Domain: data | Complexity: medium
Epic Risks (3)
Risk 1: technical (high impact, medium probability)

The `check_activity_duplicates` RPC may not meet the 500ms target on production-scale data if the composite index is not applied correctly or if Supabase RLS evaluation adds unexpected overhead, causing the duplicate check to noticeably delay activity submission.

Mitigation & Contingency

Mitigation: Profile the RPC with EXPLAIN ANALYZE in development against a seeded dataset representative of a large chapter (10,000+ activities). Verify in Supabase's SQL editor that the query plan actually uses the composite index before merging.

Contingency: If the 500ms target cannot be met with the RPC approach, introduce an async post-submit check pattern where the activity is saved first and the duplicate warning is surfaced as a follow-up notification, preserving submission speed at the cost of real-time blocking UX.

Risk 2: security (high impact, medium probability)

RLS policies for the coordinator_duplicate_queue view must correctly scope results to the coordinator's chapters. Incorrect policies could expose duplicate records from other chapters (privacy violation) or hide legitimate duplicates (functional regression).

Mitigation & Contingency

Mitigation: Write explicit integration tests that verify RLS behaviour using at least three distinct coordinator + chapter combinations, including a peer mentor belonging to two chapters. Use Supabase's built-in RLS testing utilities.

Contingency: If RLS proves too complex for the queue view, move the chapter-scoping filter into the DuplicateQueueRepository query layer at the application level, trading database-enforced isolation for application-enforced scoping with full test coverage.

Risk 3: dependency (medium impact, low probability)

Adding the duplicate_reviewed column to the activities table and the composite index requires a migration against a live table. If the migration locks the table for an extended period, it could disrupt active coordinators submitting activities.

Mitigation & Contingency

Mitigation: Use PostgreSQL's `CREATE INDEX CONCURRENTLY` to avoid table lock. Add the duplicate_reviewed column with a DEFAULT false so no backfill update lock is required. Schedule the migration during a low-traffic window.

Contingency: If concurrent index creation fails or takes too long, fall back to a smaller partial index scoped to the last 90 days of activities, then expand it incrementally.