Priority: critical · Complexity: medium · Type: deployment · Status: pending · Role: devops specialist · Tier: 6

Acceptance Criteria

All pending migrations (RLS policies, bufdir_category_mappings, is_proxy_registered column, generate_bufdir_report RPC) are applied to the staging Supabase project with zero errors
Migration idempotency: re-running migrations produces no errors and no duplicate objects
Full integration test suite (from task-009) passes against the staging environment with no failures
RLS isolation verified in staging for all four organisation tenants using real staging JWT sessions
generate_bufdir_report RPC returns non-zero results for at least two organisations that have staging activity data
GDPR sign-off checklist is completed and stored as a markdown file in the repository under docs/gdpr-compliance/
Sign-off checklist explicitly covers: data minimisation (only aggregated counts sent to Bufdir, no personal identifiers), RLS enforcement, RPC parameter validation, and audit logging of report generation events
Rollback procedure is documented and tested — staging can be restored to pre-migration state within 10 minutes
No production environment is touched during this task
Deployment steps are documented in a runbook under docs/deployment/ for future use
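The idempotency criterion above can be met with guard clauses in every migration file. A minimal sketch, assuming PostgreSQL-style migrations; object and column names here are illustrative, not the task's actual schema:

```sql
-- Idempotent column addition: re-running produces no error.
ALTER TABLE participants
  ADD COLUMN IF NOT EXISTS is_proxy_registered boolean NOT NULL DEFAULT false;

-- Idempotent policy deployment: drop-then-create, since CREATE POLICY
-- has no IF NOT EXISTS clause in PostgreSQL.
DROP POLICY IF EXISTS org_isolation ON participants;
CREATE POLICY org_isolation ON participants
  USING (organisation_id = (auth.jwt() ->> 'org_id')::uuid);

-- Idempotent function deployment: CREATE OR REPLACE re-runs cleanly.
-- (Body elided; see the generate_bufdir_report migration.)
```

The `auth.jwt() ->> 'org_id'` claim lookup is an assumption about how the tenant id is carried in the staging JWTs.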

Technical Requirements

Frameworks
supabase_flutter
flutter_test
APIs
Supabase Migrations API
Supabase Management API (staging project)
generate_bufdir_report RPC
Data models
bufdir_category_mappings
participants
activities
organisations
rls_policies
Performance requirements
All migrations must apply in under 30 seconds on staging
generate_bufdir_report RPC must return results in under 2 seconds on staging dataset sizes
Security requirements
Staging environment credentials must be separate from production — no shared keys
Migration scripts must not include hardcoded credentials or organisation-specific data
GDPR checklist must confirm that the RPC returns only aggregate counts — no participant names, addresses, or personal identifiers
Audit log entry must be created each time generate_bufdir_report is called, recording: calling org_id, timestamp, report period, row counts returned — not the report content
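One way to satisfy the audit-logging requirement is a dedicated table written to at the end of the RPC body. This is a sketch only; the table name and columns are assumptions, not part of this task's defined schema:

```sql
-- Hypothetical audit table: records who ran the report, when, and for
-- which period -- never the report content itself.
CREATE TABLE IF NOT EXISTS bufdir_report_audit (
  id          bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
  org_id      uuid        NOT NULL,
  called_at   timestamptz NOT NULL DEFAULT now(),
  period_from date        NOT NULL,
  period_to   date        NOT NULL,
  row_count   integer     NOT NULL
);

-- Inside generate_bufdir_report, after computing the aggregate:
-- INSERT INTO bufdir_report_audit (org_id, period_from, period_to, row_count)
-- VALUES (p_org_id, p_from, p_to, v_row_count);
```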

Execution Context

Execution Tier
Tier 6 (158 tasks)

Can start after Tier 5 completes

Implementation Notes

Apply migrations using the Supabase CLI (`supabase db push --project-ref <staging-project-ref>`). Before applying, run `supabase db diff` to confirm only the expected changes are included and there is no unintended schema drift. After each migration file is applied, verify the object exists using `supabase db inspect`. For the RLS policy deployment, confirm policies are enabled at the table level (`ALTER TABLE ... ENABLE ROW LEVEL SECURITY`) and that the deployed policy expressions match what was tested.

The GDPR checklist should follow the Norwegian Datatilsynet guidance for data processors. Specifically, confirm that Bufdir report submissions are covered by a data processing agreement under GDPR Article 28, that only statistical aggregates leave the system, and that the RPC cannot be abused to reconstruct individual-level data through repeated narrow queries (add rate limiting or parameter validation if needed).
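A quick staging check for the table-level RLS confirmation described above. This reads PostgreSQL's own catalogs, so it should work unchanged on any Supabase project; the table list is taken from this task's data models:

```sql
-- Confirm row level security is enabled on the tenant tables.
SELECT relname, relrowsecurity
FROM pg_class
WHERE relname IN ('participants', 'activities', 'bufdir_category_mappings');

-- List the policy expressions actually deployed, to diff against
-- what was tested locally.
SELECT tablename, policyname, qual
FROM pg_policies
WHERE schemaname = 'public';
```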

Testing Requirements

Run the full integration test suite from task-009 against the staging environment as the primary validation gate. Additionally perform manual smoke tests: (1) log in as an NHF coordinator, call the RPC, confirm results match expected staging data counts; (2) attempt a cross-org read as HLF coordinator, confirm zero rows returned.

Document all manual test results in the sign-off checklist. No new automated tests are written in this task — execution of existing tests is the validation mechanism.

Component
Multi-Organization Data Isolator (data, medium)

Epic Risks (3)

Risk 1: security (high impact, medium probability)

Supabase RLS policies may not propagate correctly into RPC function execution context, causing org-scoping predicates to be silently ignored when the function is invoked with service_role key. This could lead to cross-org data exposure in production without any obvious error.

Mitigation & Contingency

Mitigation: Invoke all RPCs using the anon/authenticated key rather than service_role, add an explicit org-scoping predicate inside the RPC body (e.g. WHERE org_id matches the caller's organisation claim from the JWT) as a secondary control, and include automated cross-org leakage tests in the CI pipeline from day one.

Contingency: If RLS bypass is discovered post-deployment, immediately revoke service_role usage in all aggregation paths and hotfix with explicit org_id parameters passed as function arguments validated server-side.
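The mitigation above, RLS plus an explicit predicate as defence in depth, could be shaped like this sketch. SECURITY INVOKER keeps the caller's RLS context; the return shape, join, and the `org_id` JWT claim are illustrative assumptions:

```sql
CREATE OR REPLACE FUNCTION generate_bufdir_report(p_from date, p_to date)
RETURNS TABLE (category text, participant_count bigint)
LANGUAGE sql
SECURITY INVOKER          -- run as the caller, so RLS still applies
SET search_path = public
AS $$
  SELECT m.bufdir_category, count(*)::bigint
  FROM activities a
  JOIN bufdir_category_mappings m ON m.activity_type_id = a.activity_type_id
  WHERE a.created_at BETWEEN p_from AND p_to
    -- Secondary control: explicit org scoping even if RLS were bypassed.
    AND a.organization_id = (auth.jwt() ->> 'org_id')::uuid
  GROUP BY m.bufdir_category;
$$;
```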

Risk 2: dependency (high impact, medium probability)

Bufdir may update its official reporting category taxonomy between the mapping configuration being defined and the annual submission deadline. If the ActivityCategoryMappingConfig is compiled as a static Dart constant, it cannot be updated without an app release, potentially causing mapping failures that block submission.

Mitigation & Contingency

Mitigation: Store the mapping as a remote-configurable table (bufdir_category_mappings) in Supabase with a version field rather than as a hardcoded Dart constant. Fetch the current mapping at aggregation time so updates can be pushed without a new app release.

Contingency: If a mapping mismatch is detected during an active reporting cycle, coordinators can be temporarily directed to the manual Excel fallback while an emergency mapping update is pushed to the Supabase table.
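A remote-configurable mapping table as described in the mitigation might be shaped as below. The exact columns are an assumption; the point is the version field that lets a new taxonomy be pushed as new rows without an app release:

```sql
CREATE TABLE IF NOT EXISTS bufdir_category_mappings (
  activity_type_id uuid    NOT NULL,
  bufdir_category  text    NOT NULL,
  version          integer NOT NULL,
  valid_from       date    NOT NULL DEFAULT current_date,
  PRIMARY KEY (activity_type_id, version)
);

-- Aggregation reads the highest version per activity type, so an
-- emergency taxonomy update takes effect at the next report run.
SELECT DISTINCT ON (activity_type_id)
       activity_type_id, bufdir_category
FROM bufdir_category_mappings
ORDER BY activity_type_id, version DESC;
```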

Risk 3: technical (high impact, low probability)

For large organisations like NHF with 1,400 local chapters and potentially tens of thousands of activity records per reporting period, the Supabase RPC aggregation query may exceed the default PostgREST statement timeout, causing the aggregation to fail with a 503 error.

Mitigation & Contingency

Mitigation: Add partial indexes on (organization_id, created_at) and (organization_id, activity_type_id) to the activities table before writing the RPC. Profile the query plan against a realistic fixture of 50,000 records during development and increase the statement_timeout setting for the RPC role if needed.
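The indexes named in the mitigation could be created as below. Note that as written these are composite rather than partial indexes; a true partial index would need a WHERE clause, which is left out here as an assumption about the workload:

```sql
-- Composite indexes supporting the org-scoped aggregation.
CREATE INDEX IF NOT EXISTS idx_activities_org_created
  ON activities (organization_id, created_at);
CREATE INDEX IF NOT EXISTS idx_activities_org_type
  ON activities (organization_id, activity_type_id);

-- Profile the plan against a realistic fixture before deploying.
EXPLAIN ANALYZE
SELECT activity_type_id, count(*)
FROM activities
WHERE organization_id = '00000000-0000-0000-0000-000000000000'
GROUP BY activity_type_id;
```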

Contingency: Implement a chunked aggregation fallback: split the period into monthly sub-ranges, aggregate each chunk separately, and merge the chunk results in Dart before assembling the final payload.