Deploy RLS migrations to staging and production Supabase
epic-organizational-hierarchy-management-core-services-task-015 — Apply all RLS migration scripts to the staging Supabase project, run the full role simulation test suite, then promote to production. Document the migration sequence, rollback procedure, and any required Supabase CLI commands. Verify that existing data remains accessible to correctly authenticated users after deployment.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 5 - 253 tasks
Can start after Tier 4 completes
Implementation Notes
Migration sequence commands: (1) supabase login, (2) supabase link --project-ref {STAGING_REF}, (3) supabase db push --dry-run (review output), (4) supabase db push (apply), (5) supabase test db (run pgTAP suite). Repeat steps 2-5 for the production ref. The --dry-run flag shows which migrations would be applied without executing them; always run it first. Use supabase migration list to confirm which migrations have been applied and which are pending.
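The sequence above can be sketched as a small script. STAGING_REF and PRODUCTION_REF are placeholders for the actual project refs; this is a runbook sketch, not a finished deployment script.

```shell
#!/usr/bin/env bash
# Sketch of the deployment sequence above; project refs are placeholders.
set -euo pipefail

deploy_to() {
  local project_ref="$1"
  supabase link --project-ref "$project_ref"
  supabase db push --dry-run   # review which migrations would be applied
  supabase migration list      # confirm applied vs. pending migrations
  supabase db push             # apply pending migrations
  supabase test db             # run the pgTAP suite
}

# Run once after `supabase login`:
# deploy_to "$STAGING_REF"     # verify staging, then:
# deploy_to "$PRODUCTION_REF"
```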
For the rollback procedure: Supabase CLI migrations are applied forward-only (supabase db push does not run down migrations), so maintain a dedicated rollback.sql that can be run directly via psql, or write a new forward migration that reverses the change. Critical deployment timing consideration: because task-012 (JWT hook) must be registered in the Supabase Auth settings manually (not via CLI migration), include a checklist step to verify the hook is registered BEFORE applying the RLS policy migrations. If the hook is not active when the RLS migrations are applied, legitimate users will lose data access immediately. The correct deployment order is: (1) Register the JWT hook in Auth settings, (2) Apply the task-011 base RLS migrations, (3) Apply the task-012 hook function migration, (4) Apply the task-013 role-differentiated policy migrations, (5) Run the full test suite.
Document this order explicitly in the runbook.
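A pre-flight check can at least confirm the hook function migration has landed in the database before the role-differentiated policies are applied. The function name (custom_access_token_hook) and DB_URL are assumptions/placeholders; adjust them to match task-012.

```shell
# Pre-flight sketch: count matching hook functions in the public schema.
# Function name and DB_URL are placeholders, not confirmed values.
hook_function_exists() {
  psql "$DB_URL" -tAc \
    "select count(*) from pg_proc p
       join pg_namespace n on n.oid = p.pronamespace
      where n.nspname = 'public'
        and p.proname = 'custom_access_token_hook';"
}
# Note: this cannot verify the hook is registered in Auth settings;
# that step must still be checked manually in the dashboard.
```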
Testing Requirements
Pre-deployment (staging): run supabase test db to execute the full pgTAP suite; run manual smoke tests for each role type (peer_mentor, coordinator, global_admin) using actual user accounts in the staging environment; record results in a deployment checklist. Post-deployment (staging): verify the Supabase dashboard policy list matches the expected set; run EXPLAIN ANALYZE on 3 representative RLS-filtered queries to check for regressions such as unexpected sequential scans; confirm row counts match pre-migration values. Pre-deployment (production): repeat the staging smoke tests using production test accounts (created specifically for deployment validation, not real organisation accounts). Post-deployment (production): run the smoke tests again; monitor Supabase logs for 30 minutes post-deployment for unexpected 403 errors or RLS policy violations; confirm all organisation contacts can access their data normally.
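One of the three EXPLAIN ANALYZE spot-checks might look like the following; the table and column names are illustrative, not confirmed schema.

```sql
-- Illustrative spot-check: confirm the RLS-style predicate is index-driven
-- rather than forcing a sequential scan (table/column names hypothetical).
explain analyze
select id, full_name
from public.contacts
where unit_id in (
  select unit_id from public.unit_assignments
  where user_id = (select auth.uid())
);
```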
Rollback test (staging only): run the rollback script (or down migrations, if maintained), confirm the policies are removed and data is accessible without restriction, then re-apply the forward migrations.
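The rollback drill could be sketched as follows; the rollback.sql path and DB_URL are placeholders under the assumption that a dedicated rollback script is maintained.

```shell
# Staging rollback drill sketch; rollback.sql path and DB_URL are placeholders.
rollback_drill() {
  psql "$DB_URL" -f supabase/rollback.sql    # drop the RLS policies
  # Expect the dropped policies to be gone from the catalog:
  psql "$DB_URL" -tAc "select count(*) from pg_policies where schemaname = 'public';"
  supabase db push                           # re-apply the forward migrations
  supabase test db                           # pgTAP suite should pass again
}
```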
Injecting all unit assignment IDs into JWT claims for users assigned to many units (up to 5 for NHF peer mentors, many more for national coordinators) may exceed JWT size limits, causing authentication failures.
Mitigation & Contingency
Mitigation: Keep unit IDs out of the JWT payload. Either set them as a Postgres session variable via set_config('app.unit_ids', ...), or resolve them through a dedicated helper function that queries the assignments table at policy evaluation time from within the RLS policies.
Contingency: Fall back to querying the unit_assignments table directly within RLS policies using the authenticated user ID, accepting a small per-query overhead in exchange for removing the JWT size constraint.
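A minimal sketch of the contingency approach, resolving unit membership at policy evaluation time instead of from a JWT claim. The schema, function, and policy names are illustrative assumptions.

```sql
-- Hypothetical helper: resolve the caller's unit IDs at evaluation time
-- instead of packing them into the JWT. Names are illustrative.
-- SECURITY DEFINER lets the function read unit_assignments even if that
-- table is itself protected by RLS.
create or replace function app.user_unit_ids()
returns setof uuid
language sql
stable
security definer
set search_path = ''
as $$
  select unit_id
  from public.unit_assignments
  where user_id = (select auth.uid())
$$;

-- Policy using the helper rather than a JWT claim:
create policy "members_read_their_units"
on public.units for select
using (id in (select app.user_unit_ids()));
```

Wrapping auth.uid() in a subselect lets Postgres evaluate it once per query rather than once per row, which keeps the per-query overhead small.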
Rendering 1,400+ nodes in a recursive Flutter tree widget may cause jank or memory pressure on lower-end devices used by field peer mentors, degrading the admin experience.
Mitigation & Contingency
Mitigation: Implement lazy tree expansion — only the root level is rendered on initial load. Child nodes are rendered on demand when the parent is expanded. Use const constructors and ListView.builder for all node lists to minimize rebuild scope.
Contingency: Add a search/filter bar that scopes the visible tree to matching nodes, reducing the visible node count. Provide a 'flat list' fallback view for administrators who prefer searching over browsing the tree.
Requirements for what constitutes a valid hierarchy structure may expand during NHF sign-off (e.g., mandatory coordinator assignments per chapter, minimum member counts per region), requiring repeated validator redesign.
Mitigation & Contingency
Mitigation: Design the validator as a pluggable rule engine where each check is a discrete, independently testable function. New rules can be added without changing the core validation orchestration. Surface all rules in a configuration table per organization.
Contingency: Defer non-blocking validation rules to warning-level feedback rather than hard blocks, allowing structural changes to proceed while flagging potential issues for admin review.
Deploying RLS policy migrations to a shared Supabase project used by multiple organizations simultaneously could lock tables or interrupt active sessions, causing downtime during production migration.
Mitigation & Contingency
Mitigation: Write all RLS policy migrations idempotently; Postgres does not support CREATE POLICY IF NOT EXISTS, so pair each CREATE POLICY with a preceding DROP POLICY IF EXISTS. Schedule migrations during off-peak hours. Use a Supabase preview branch to validate policies against production data shapes before applying.
Contingency: Prepare rollback migration scripts for every RLS policy. If a migration causes issues, execute the rollback immediately and re-test the policy logic in staging before reattempting.
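Since Postgres has no CREATE POLICY IF NOT EXISTS, an idempotent migration can use the drop-then-create pattern below. The policy name, table, and claim key are illustrative assumptions, not the project's actual schema.

```sql
-- Idempotent policy migration pattern; names and claim key are hypothetical.
drop policy if exists "coordinators_read_chapters" on public.chapters;
create policy "coordinators_read_chapters"
on public.chapters for select
using ((select auth.jwt() ->> 'user_role') = 'coordinator');
```

Because the pair is safe to re-run, a partially applied migration can simply be executed again rather than manually reconciled.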