Priority: critical · Complexity: medium · Category: database · Status: pending · Assignee: database specialist · Tier 3

Acceptance Criteria

RLS is enabled on the activities table and a policy named 'coordinator_update_duplicate_reviewed' allows UPDATE of the duplicate_reviewed column only for rows where chapter_id matches the coordinator's chapter membership (derived from auth.uid() via a chapter_members join)
A policy named 'peer_mentor_read_own_duplicate_reviewed' allows peer mentors to SELECT their own activities rows (where peer_mentor_id = auth.uid()), including the duplicate_reviewed column
The coordinator_duplicate_queue view is protected so coordinators only see rows belonging to their chapter; if the view cannot have RLS directly, the underlying table has a policy with the same effect
Coordinators from chapter A cannot read or update entries from chapter B — verified by role impersonation test queries
Peer mentors cannot update duplicate_reviewed on any row — verified by role impersonation test queries
Migration scripts are idempotent: running them twice does not produce duplicate policies or errors
All policy names follow the naming convention: {role}_{action}_{resource} (e.g., coordinator_update_duplicate_reviewed)
Migration scripts are stored in the supabase/migrations/ directory with a timestamp prefix and applied via Supabase CLI
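Under the assumptions above (table and column names as listed), the two policies might be sketched as follows. Note that RLS is row-scoped: limiting the UPDATE to the duplicate_reviewed column itself requires a column-scoped GRANT in addition to the policy. The 'role' column on chapter_members and the 'authenticated' role grant are assumptions.

```sql
-- Sketch only: table and column names come from the criteria above;
-- the chapter_members.role column is an assumption.
alter table public.activities enable row level security;

-- Coordinators may update rows belonging to their own chapters.
create policy coordinator_update_duplicate_reviewed
  on public.activities
  for update
  using (
    chapter_id in (
      select cm.chapter_id
      from public.chapter_members cm
      where cm.user_id = auth.uid()
        and cm.role = 'coordinator'
    )
  )
  with check (
    chapter_id in (
      select cm.chapter_id
      from public.chapter_members cm
      where cm.user_id = auth.uid()
        and cm.role = 'coordinator'
    )
  );

-- Peer mentors may read their own rows, including duplicate_reviewed.
create policy peer_mentor_read_own_duplicate_reviewed
  on public.activities
  for select
  using (peer_mentor_id = auth.uid());

-- RLS governs rows, not columns: pair the policy with a column-scoped
-- grant so only duplicate_reviewed is updatable at all.
revoke update on public.activities from authenticated;
grant update (duplicate_reviewed) on public.activities to authenticated;
```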

Technical Requirements

Frameworks
Supabase
APIs
Supabase RLS (Row Level Security)
PostgreSQL policy DDL
Supabase CLI (supabase db push)
Data Models
Activity
DuplicateCandidate
ChapterMembership
UserRole
Performance Requirements
RLS policies must use indexed columns (chapter_id, peer_mentor_id) in their USING expressions to prevent full table scans on large activity tables
The chapter_members join used in coordinator policies must be expressed as a subquery on an indexed foreign key rather than a lateral join, to maintain query performance
Security Requirements
Use auth.uid() in all policy USING clauses — never trust client-supplied user IDs
Rely on RLS's default-deny behaviour: with RLS enabled, any operation without a matching policy is rejected, so only explicitly permitted operations succeed (note that CREATE POLICY is PERMISSIVE by default; AS RESTRICTIVE policies exist only to further narrow permissive ones)
Service role key must bypass RLS only in migration scripts and server-side admin operations, never in the Flutter app
Test impersonation must use Supabase's set_config('request.jwt.claims', ...) pattern to simulate real user tokens
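The impersonation pattern referenced above can be sketched like this; the UUID is a placeholder, and the payload mirrors the "sub" claim that auth.uid() reads:

```sql
-- Run inside a transaction so the impersonation is discarded afterwards.
begin;
set local role authenticated;
select set_config(
  'request.jwt.claims',
  '{"sub":"11111111-1111-1111-1111-111111111111","role":"authenticated"}',
  true  -- is_local: the setting reverts at transaction end
);
select auth.uid();  -- should now return the impersonated user's UUID
-- ...run the policy checks under test here...
rollback;
```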

Execution Context

Execution Tier
Tier 3

Tier 3 - 413 tasks

Can start after Tier 2 completes

Implementation Notes

Start by mapping out the existing RLS policies on the activities table to avoid conflicts. For coordinator chapter scoping, create a stable PostgreSQL function get_user_chapter_ids(user_id UUID) RETURNS SETOF UUID that queries the chapter_members table; this function can be reused across multiple policies and keeps USING expressions readable. For the coordinator_duplicate_queue view, if Supabase doesn't support RLS directly on views, add a WHERE chapter_id IN (SELECT get_user_chapter_ids(auth.uid())) condition to the view definition itself, scoping it at the view level rather than relying on a separate policy (since the function returns a set, use IN with a subquery; = ANY(...) expects an array). Test each policy with role impersonation before committing the migration script to avoid locked-out states in the dev database.
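A sketch of the helper function and scoped view described above. SECURITY DEFINER is an assumption (so the function can read chapter_members even if that table is itself under RLS), security_invoker on the view assumes PostgreSQL 15+, and the selected columns and queue predicate are placeholders:

```sql
create or replace function public.get_user_chapter_ids(user_id uuid)
returns setof uuid
language sql
stable
security definer
set search_path = public
as $$
  select cm.chapter_id
  from chapter_members cm
  where cm.user_id = get_user_chapter_ids.user_id;
$$;

-- Scoped view. The function returns a set, so IN (SELECT ...) is used;
-- = ANY(...) would require an array-returning function instead.
create or replace view public.coordinator_duplicate_queue
  with (security_invoker = true) as
select a.id, a.chapter_id, a.peer_mentor_id, a.duplicate_reviewed
from public.activities a
where a.duplicate_reviewed = false  -- placeholder queue predicate
  and a.chapter_id in (select public.get_user_chapter_ids(auth.uid()));
```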

Include a rollback script alongside each migration.
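Idempotency and rollback for policy DDL can follow the drop-if-exists pattern; the policy name follows the convention from the acceptance criteria, and the USING expression is abbreviated here:

```sql
-- Forward migration: safe to run twice (no duplicate-policy errors).
drop policy if exists coordinator_update_duplicate_reviewed on public.activities;
create policy coordinator_update_duplicate_reviewed
  on public.activities
  for update
  using (chapter_id in (select public.get_user_chapter_ids(auth.uid())));

-- Rollback script, stored alongside the migration:
-- drop policy if exists coordinator_update_duplicate_reviewed on public.activities;
```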

Testing Requirements

Write SQL test scripts (run via the Supabase SQL editor or pgTAP) that:
1. Impersonate a coordinator from chapter A and confirm SELECT returns only chapter A queue entries.
2. Impersonate a coordinator from chapter A and confirm an UPDATE of chapter B's duplicate_reviewed is rejected (with USING-filtered policies this typically surfaces as zero rows affected rather than an explicit permission-denied error, so assert on the row count).
3. Impersonate a peer mentor and confirm they can read their own activity's duplicate_reviewed field.
4. Impersonate a peer mentor and confirm an UPDATE of duplicate_reviewed is rejected.
Document each test query in the migration PR. Additionally, write a Dart integration test in flutter_test that exercises the repository against a local Supabase instance with multiple test users to verify the same RLS behaviour from the app perspective.
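The cross-chapter UPDATE test might be sketched in pgTAP as follows; the UUIDs are placeholders and seeded fixture rows are assumed. Because an out-of-scope UPDATE under a USING-filtered policy affects zero rows rather than raising an error, the assertion is on the row count:

```sql
begin;
select plan(1);

-- Impersonate coordinator A (see the security requirements above).
set local role authenticated;
select set_config('request.jwt.claims',
  '{"sub":"<coordinator-a-uuid>","role":"authenticated"}', true);

select is(
  (with updated as (
     update public.activities
     set duplicate_reviewed = true
     where id = '<chapter-b-activity-uuid>'
     returning 1)
   select count(*) from updated),
  0::bigint,
  'coordinator from chapter A cannot update chapter B duplicate_reviewed'
);

select * from finish();
rollback;
```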

Component
Duplicate Check Repository
data · medium
Epic Risks (3)
Risk: technical · Impact: high · Probability: medium

The `check_activity_duplicates` RPC may not meet the 500ms target on production-scale data if the composite index is not applied correctly or if Supabase RLS evaluation adds unexpected overhead, causing the duplicate check to noticeably delay activity submission.

Mitigation & Contingency

Mitigation: Write the RPC with an explicit EXPLAIN ANALYZE in development against a seeded dataset representative of a large chapter (10,000+ activities). Pin the index hint in the RPC body and verify the query plan in Supabase's SQL editor before merging.
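The EXPLAIN ANALYZE check might look like this; the RPC's parameter list is an assumption, and the thing to verify in the plan is an index scan on the composite index rather than a sequential scan on activities:

```sql
-- Hypothetical call shape; adjust arguments to the real RPC signature.
explain (analyze, buffers)
select *
from public.check_activity_duplicates(
  '00000000-0000-0000-0000-000000000001'::uuid,  -- chapter id (placeholder)
  current_date                                   -- activity date (placeholder)
);
-- Expect "Index Scan using <composite index> on activities" in the plan;
-- a "Seq Scan on activities" means the index is not being used.
```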

Contingency: If the 500ms target cannot be met with the RPC approach, introduce an async post-submit check pattern where the activity is saved first and the duplicate warning is surfaced as a follow-up notification, preserving submission speed at the cost of real-time blocking UX.

Risk: security · Impact: high · Probability: medium

RLS policies for the coordinator_duplicate_queue view must correctly scope results to the coordinator's chapters. Incorrect policies could expose duplicate records from other chapters (privacy violation) or hide legitimate duplicates (functional regression).

Mitigation & Contingency

Mitigation: Write explicit integration tests that verify RLS behaviour using at least three distinct coordinator + chapter combinations, including a peer mentor belonging to two chapters. Use Supabase's built-in RLS testing utilities.

Contingency: If RLS proves too complex for the queue view, move the chapter-scoping filter into the DuplicateQueueRepository query layer at the application level, trading database-enforced isolation for application-enforced scoping with full test coverage.

Risk: dependency · Impact: medium · Probability: low

Adding the duplicate_reviewed column to the activities table and the composite index requires a migration against a live table. If the migration locks the table for an extended period, it could disrupt active coordinators submitting activities.

Mitigation & Contingency

Mitigation: Use PostgreSQL's `CREATE INDEX CONCURRENTLY` to avoid table lock. Add the duplicate_reviewed column with a DEFAULT false so no backfill update lock is required. Schedule the migration during a low-traffic window.
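A sketch of the low-lock migration, with the index name and column list assumed. On PostgreSQL 11+ the ADD COLUMN with a constant default is metadata-only (no table rewrite), and CREATE INDEX CONCURRENTLY cannot run inside a transaction block, so it may need to be split out of the default transactional migration:

```sql
-- Metadata-only on PostgreSQL 11+: no backfill, no long lock.
alter table public.activities
  add column if not exists duplicate_reviewed boolean not null default false;

-- Must run outside a transaction; index name and columns are assumptions.
create index concurrently if not exists idx_activities_chapter_mentor
  on public.activities (chapter_id, peer_mentor_id);
```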

Contingency: If concurrent index creation fails or takes too long, fall back to a smaller partial index scoped to the last 90 days of activities, then expand it incrementally.
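The partial-index fallback could be sketched as below. An index predicate must use only immutable expressions (now() is not allowed), so the 90-day cutoff has to be a constant that is advanced in follow-up migrations; the date column name is an assumption:

```sql
-- Smaller partial index over recent rows only; cutoff is a placeholder
-- constant to be moved forward incrementally in later migrations.
create index concurrently if not exists idx_activities_chapter_mentor_recent
  on public.activities (chapter_id, peer_mentor_id)
  where created_at >= '2025-01-01';
```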