Priority: critical · Complexity: medium · Area: database · Status: pending · Assignee: database specialist · Tier 0

Acceptance Criteria

Migration file is created under supabase/migrations/ with a timestamp prefix following the project convention (e.g., 20250301000000_add_activities_duplicate_check_index.sql)
Index name follows convention: idx_activities_peer_mentor_type_date_status
Index is defined as CREATE INDEX IF NOT EXISTS on columns (peer_mentor_id, activity_type_id, date, status) in that exact order to match the RPC WHERE clause selectivity
Migration script is idempotent: running it twice does not produce an error (achieved via IF NOT EXISTS clause)
EXPLAIN ANALYZE on a representative query (peer_mentor_id = $1 AND activity_type_id = $2 AND date = $3) confirms Index Scan or Bitmap Index Scan node is used, not Seq Scan
Query execution time on a dataset of 10,000+ activity rows for a single chapter returns in under 500ms as measured by EXPLAIN ANALYZE actual time
Migration includes a corresponding rollback script (down migration) that drops the index using DROP INDEX IF EXISTS
Migration applies cleanly on a fresh Supabase project via supabase db push without errors
Index does not break existing RLS policies on the activities table
Code review confirms no raw personal data is embedded in the migration script
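The EXPLAIN ANALYZE check in the criteria above can be sketched as follows. The SELECT list, UUID column types, and literal parameter values are assumptions for illustration, since the RPC body is not shown here:

```sql
-- Representative duplicate-check query matching the RPC's WHERE clause.
-- Substitute real IDs from seeded data before running.
EXPLAIN ANALYZE
SELECT id
FROM activities
WHERE peer_mentor_id   = '00000000-0000-0000-0000-000000000001'
  AND activity_type_id = '00000000-0000-0000-0000-000000000002'
  AND date             = '2025-03-01';
```

The plan should contain an Index Scan (or Bitmap Index Scan) node naming idx_activities_peer_mentor_type_date_status, not a Seq Scan on activities, and the reported actual time should be under the 500ms target.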

Technical Requirements

frameworks
Supabase CLI
PostgreSQL
data models
activities
performance requirements
check_activity_duplicates RPC must complete in under 500ms on datasets of 10,000+ rows
Index creation must use CONCURRENTLY in production to avoid table locks (note: not supported inside transactions — run outside a BEGIN/COMMIT block)
security requirements
Migration must not alter RLS policies on the activities table
Migration script must not include any hardcoded user IDs or organisation IDs

Execution Context

Execution Tier
Tier 0

Tier 0 - 440 tasks

Implementation Notes

Column order in the composite index matters for query plan selection: place peer_mentor_id first (highest selectivity for chapter-scoped queries), then activity_type_id, date, status. If the RPC also filters by chapter_id, consider adding it as the leading column instead and benchmark both options. Use CREATE INDEX CONCURRENTLY in a standalone migration (not inside a transaction) to avoid locking the activities table in production. For local development, CONCURRENTLY is safe to omit inside supabase db reset flows.
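A sketch of the up migration described above, using the file name from the acceptance criteria. Whether CONCURRENTLY can be used here depends on the migration runner not wrapping the file in a transaction, as noted:

```sql
-- supabase/migrations/20250301000000_add_activities_duplicate_check_index.sql
--
-- Business reason: the check_activity_duplicates RPC filters activities by
-- (peer_mentor_id, activity_type_id, date, status); this composite index
-- keeps that lookup under the 500ms target on 10,000+ row datasets.
--
-- CONCURRENTLY cannot run inside a transaction block, so this file must not
-- be wrapped in BEGIN/COMMIT by the migration runner.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_activities_peer_mentor_type_date_status
  ON activities (peer_mentor_id, activity_type_id, date, status);
```

Column order mirrors the selectivity reasoning above: most selective filter first, trailing columns still usable for index-only filtering.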

Add a comment block at the top of the migration file explaining the business reason (duplicate detection RPC performance) to aid future maintainers. The rollback (down) migration should be a separate file named with the same timestamp suffix plus _down or placed in a /down subfolder per project convention.
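The corresponding down migration is a one-liner; the file name below assumes the `_down` suffix convention mentioned above:

```sql
-- supabase/migrations/20250301000000_add_activities_duplicate_check_index_down.sql
-- Rollback: removes the duplicate-check composite index.
-- IF EXISTS keeps the rollback itself idempotent.
DROP INDEX IF EXISTS idx_activities_peer_mentor_type_date_status;
```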

Testing Requirements

Verify the migration with `supabase db reset` on a local instance seeded with representative data (at least 1,000 rows across multiple chapters; scale to 10,000+ rows when validating the 500ms performance criterion). Run EXPLAIN ANALYZE on the exact query pattern used by check_activity_duplicates and confirm index usage. Run the migration twice to confirm idempotency (no errors on the second run). Run the rollback script and confirm the index is dropped.

Confirm existing integration tests still pass after migration. Document query plan output (Index Scan vs Seq Scan) in a comment at the top of the migration file.
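Both the idempotency check and the rollback check can be spot-verified with a standard catalog query against pg_indexes:

```sql
-- After running the migration twice: expect exactly one row.
-- After running the rollback: expect zero rows.
SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'activities'
  AND indexname = 'idx_activities_peer_mentor_type_date_status';
```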

Component
Duplicate Check Repository
data medium
Epic Risks (3)
Risk 1 (technical): high impact, medium probability

The `check_activity_duplicates` RPC may not meet the 500ms target on production-scale data if the composite index is not applied correctly or if Supabase RLS evaluation adds unexpected overhead, causing the duplicate check to noticeably delay activity submission.

Mitigation & Contingency

Mitigation: During development, run EXPLAIN ANALYZE on the RPC's query against a seeded dataset representative of a large chapter (10,000+ activities). PostgreSQL has no built-in index hints, so rely on the composite index and verify the query plan in Supabase's SQL editor before merging; the pg_hint_plan extension is a fallback if the planner picks a poor plan.

Contingency: If the 500ms target cannot be met with the RPC approach, introduce an async post-submit check pattern where the activity is saved first and the duplicate warning is surfaced as a follow-up notification, preserving submission speed at the cost of real-time blocking UX.

Risk 2 (security): high impact, medium probability

RLS policies for the coordinator_duplicate_queue view must correctly scope results to the coordinator's chapters. Incorrect policies could expose duplicate records from other chapters (privacy violation) or hide legitimate duplicates (functional regression).

Mitigation & Contingency

Mitigation: Write explicit integration tests that verify RLS behaviour using at least three distinct coordinator + chapter combinations, including a peer mentor belonging to two chapters. Use Supabase's built-in RLS testing utilities.
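One way to exercise RLS in plain SQL is to impersonate a coordinator inside a transaction. This is a sketch only: the `coordinator_chapters` join table, the claim payload shape, and the `chapter_id` column on the view are assumptions about this project's schema, and `request.jwt.claims` is the setting PostgREST/Supabase uses to carry the JWT:

```sql
-- Hypothetical RLS check: rows visible to coordinator A must all belong
-- to one of coordinator A's chapters.
BEGIN;
SET LOCAL ROLE authenticated;
SELECT set_config('request.jwt.claims',
                  '{"sub": "coordinator-a-uuid", "role": "authenticated"}',
                  true);  -- true = setting is local to this transaction

SELECT count(*) AS leaked_rows
FROM coordinator_duplicate_queue q
WHERE q.chapter_id NOT IN (
  SELECT chapter_id
  FROM coordinator_chapters          -- assumed coordinator-to-chapter mapping
  WHERE coordinator_id = 'coordinator-a-uuid'
);
-- leaked_rows should be 0.
ROLLBACK;
```

Repeat with the other coordinator + chapter combinations from the mitigation above, including the dual-chapter peer mentor case.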

Contingency: If RLS proves too complex for the queue view, move the chapter-scoping filter into the DuplicateQueueRepository query layer at the application level, trading database-enforced isolation for application-enforced scoping with full test coverage.

Risk 3 (dependency): medium impact, low probability

Adding the duplicate_reviewed column to the activities table and the composite index requires a migration against a live table. If the migration locks the table for an extended period, it could disrupt active coordinators submitting activities.

Mitigation & Contingency

Mitigation: Use PostgreSQL's `CREATE INDEX CONCURRENTLY` to avoid table lock. Add the duplicate_reviewed column with a DEFAULT false so no backfill update lock is required. Schedule the migration during a low-traffic window.
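The column addition described above can be sketched as follows; the `boolean NOT NULL` shape is an assumption, but the locking claim holds because PostgreSQL 11+ stores a constant DEFAULT in the catalog rather than rewriting every row:

```sql
-- Fast on PostgreSQL 11+: the constant default is recorded in the catalog,
-- so no full-table rewrite or long-held lock is needed.
ALTER TABLE activities
  ADD COLUMN IF NOT EXISTS duplicate_reviewed boolean NOT NULL DEFAULT false;
```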

Contingency: If concurrent index creation fails or takes too long, fall back to a smaller partial index scoped to the last 90 days of activities, then expand it incrementally.
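The partial-index fallback might look like the sketch below (index name is illustrative). Note that an index predicate must be immutable, so the cutoff has to be a literal date rolled forward by later migrations, not an expression like now() - interval '90 days':

```sql
-- Smaller fallback index covering only recent activities.
-- The WHERE predicate must use a literal date: now() is not immutable
-- and cannot appear in an index predicate.
CREATE INDEX CONCURRENTLY IF NOT EXISTS idx_activities_dupcheck_recent
  ON activities (peer_mentor_id, activity_type_id, date, status)
  WHERE date >= '2025-01-01';
```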