Create bulk_register_activities atomic RPC function
epic-proxy-activity-registration-foundation-task-002 — Implement a PostgreSQL function `bulk_register_activities(records jsonb[])` that inserts multiple activity rows in a single transaction, populating `registered_by` from the calling auth user and `attributed_to` from each record's payload. The function must roll back the entire batch on any constraint violation and return inserted row IDs.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
Use a single set-based insert for performance: `INSERT INTO activities (...) SELECT ... FROM unnest($1) AS r`. Avoid PL/pgSQL FOR loops, which issue one insert per row. (Note: `jsonb_array_elements` applies to a single `jsonb` value; for the declared `jsonb[]` parameter the equivalent is `unnest`.) Extract `organization_id` from the calling user's profile or JWT claims at the start of the function and validate all `attributed_to` UUIDs against it in one batch query before inserting. Structure: (1) validate the input array is not null, (2) resolve the calling user's organisation, (3) validate in one query that every `attributed_to` ID belongs to that organisation, (4) bulk insert via `unnest`, (5) return `array_agg` of the inserted IDs.
The entire body already runs inside a single transaction (implicit for PostgreSQL functions), so an unhandled exception rolls back the whole batch with no explicit transaction control needed. Register the function in the Supabase types generation so the Dart client gets a strongly typed `rpc()` call.
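The five-step structure above can be sketched as follows. This is a sketch under assumptions, not the confirmed schema: the `profiles` table, its `organization_id` column, and the payload keys `activity_type` and `occurred_at` are placeholders to be aligned with the real activities schema.

```sql
create or replace function bulk_register_activities(records jsonb[])
returns uuid[]
language plpgsql
security invoker
as $$
declare
  caller_org uuid;
  inserted_ids uuid[];
begin
  -- (1) Validate input.
  if records is null then
    raise exception 'records must not be null';
  end if;
  if cardinality(records) = 0 then
    return '{}'::uuid[];
  end if;

  -- (2) Resolve the calling user's organisation (profiles table is an
  --     assumption; adjust to the real profile schema).
  select p.organization_id into caller_org
  from profiles p
  where p.id = auth.uid();
  if caller_org is null then
    raise exception 'caller has no organisation';
  end if;

  -- (3) Validate every attributed_to UUID in a single batch query.
  if exists (
    select 1
    from unnest(records) as r
    left join profiles p on p.id = (r->>'attributed_to')::uuid
    where p.id is null or p.organization_id is distinct from caller_org
  ) then
    raise exception 'attributed_to must belong to the caller''s organisation';
  end if;

  -- (4) One set-based insert; any constraint violation raises and rolls
  --     back the whole batch. Columns beyond the two required ones are
  --     illustrative.
  with ins as (
    insert into activities (attributed_to, registered_by, activity_type, occurred_at)
    select (r->>'attributed_to')::uuid,
           auth.uid(),  -- always the caller, never the payload
           r->>'activity_type',
           (r->>'occurred_at')::timestamptz
    from unnest(records) as r
    returning id
  )
  -- (5) Collect and return the inserted IDs.
  select array_agg(ins.id) into inserted_ids from ins;

  return inserted_ids;
end;
$$;
```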
Testing Requirements
Integration tests against a local Supabase instance:
1. Insert a batch of 5 valid records; verify all 5 IDs are returned and the rows exist in the activities table.
2. Insert a batch where one record has an invalid `attributed_to` UUID; verify full rollback and no rows inserted.
3. Insert a batch where one record violates a NOT NULL constraint; verify full rollback.
4. Insert an empty array; verify an empty array is returned.
5. Verify `registered_by` on every inserted row equals the calling user's `auth.uid()`, regardless of payload content.
6. Attempt the call as a user from a different organisation; verify rejection.
Run against a local stack (`supabase start`) via `supabase test db` for pgTAP suites or via direct psql test scripts. (Note: `supabase functions serve` serves Edge Functions, not database RPCs.)
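Scenario 2 above can be expressed as a pgTAP script, assuming the `pgtap` extension is enabled in the local database; the fixture UUIDs and payload shape are hypothetical:

```sql
begin;
select plan(2);

-- Impersonate a test coordinator: Supabase's auth.uid() reads the 'sub'
-- claim from request.jwt.claims.
select set_config('request.jwt.claims',
       '{"sub":"11111111-1111-1111-1111-111111111111"}', true);

-- Scenario 2: one record carries an attributed_to UUID outside the
-- caller's organisation; the whole batch must be rejected.
select throws_ok(
  $$ select bulk_register_activities(array[
       '{"attributed_to":"99999999-9999-9999-9999-999999999999"}'::jsonb
     ]) $$
);

select is(
  (select count(*)::int from activities
   where registered_by = '11111111-1111-1111-1111-111111111111'),
  0,
  'full rollback: no rows inserted'
);

select * from finish();
rollback;
```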
The activities table migration adding registered_by and attributed_to columns may conflict with existing RLS policies or FK constraints if the user profile table structure differs from assumptions, blocking all subsequent epics.
Mitigation & Contingency
Mitigation: Review existing activities table schema and RLS policies before writing the migration. Run the migration against a staging database clone first. Write rollback scripts alongside the migration.
Contingency: If migration fails in staging, isolate the conflict with a targeted schema audit, adjust FK references or RLS policy scope, and re-run before touching production.
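A migration/rollback pair in the spirit of the mitigation above might look like this; the `profiles` FK target is an assumption about the user-profile table that the pre-migration schema review should confirm:

```sql
-- up: add the two proxy-registration columns
alter table activities
  add column if not exists registered_by uuid references profiles (id),
  add column if not exists attributed_to uuid references profiles (id);
create index if not exists activities_attributed_to_idx
  on activities (attributed_to);

-- down: rollback script kept alongside the migration
drop index if exists activities_attributed_to_idx;
alter table activities
  drop column if exists attributed_to,
  drop column if exists registered_by;
```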
The RLS policy must filter proxy inserts to the coordinator's chapter scope. If the chapter-scope resolver pattern differs between organisations (multi-chapter coordinators in NHF vs single-chapter in HLF), the policy may be too broad or too restrictive.
Mitigation & Contingency
Mitigation: Design the RLS policy to accept a coordinator's full set of assigned chapter IDs (array) rather than a single chapter_id. Validate the policy against NHF multi-chapter test fixtures during the integration test phase.
Contingency: If the policy is found to be incorrect after deployment, introduce a server-side validation edge function as a safety net while the RLS policy is corrected.
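A minimal sketch of an array-friendly insert policy, assuming a hypothetical `coordinator_chapters` join table mapping coordinators to their assigned chapters and a `chapter_id` column on activities; the point is that the check works for a coordinator's full set of chapters, not a single `chapter_id`:

```sql
create policy proxy_insert_within_chapter_scope
on activities
for insert
to authenticated
with check (
  -- Multi-chapter safe: the subquery yields every chapter assigned to
  -- the calling coordinator, so NHF multi-chapter coordinators pass the
  -- same check as HLF single-chapter ones.
  chapter_id in (
    select cc.chapter_id
    from coordinator_chapters cc
    where cc.coordinator_id = auth.uid()
  )
);
```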
The bulk_register_activities RPC function may time out or cause lock contention when inserting large participant batches (e.g. 40+ peer mentors in a single group session), degrading the user experience.
Mitigation & Contingency
Mitigation: Benchmark the RPC function with 50-participant batches during development. Use unnest-based bulk insert rather than row-by-row PL/pgSQL loops. Set a reasonable statement_timeout.
Contingency: If performance is insufficient, split the client-side submission into chunks of 20 participants with progress feedback, rather than a single RPC call.
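The timeout mitigation is best applied at the call site: `statement_timeout` is checked when the top-level statement starts, so setting it inside the function body generally cannot cap an already-running call. A caller-side sketch, with an illustrative 5-second budget and a hypothetical payload:

```sql
begin;
set local statement_timeout = '5s';  -- bounds statements in this transaction
select bulk_register_activities(array[
  '{"attributed_to":"11111111-1111-1111-1111-111111111111"}'::jsonb
]);
commit;
```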