Create insert_bulk_activities Supabase RPC Function
epic-bulk-and-proxy-registration-foundation-task-002 — Implement the insert_bulk_activities PostgreSQL function as a Supabase RPC endpoint. The function must accept an array of activity record payloads (including recorded_by_user_id), execute all inserts inside a single database transaction for atomicity, and return the list of inserted IDs. Include proper error handling so that any constraint violation rolls back the entire batch, preventing partial-success states that would corrupt activity counts.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
Prefer `INSERT INTO activity_records SELECT * FROM jsonb_populate_recordset(null::activity_records, $1)` over a FOR loop — it executes as a single statement and is significantly faster for large batches. Note that a PL/pgSQL function body already runs inside a single transaction; still wrap the body in a `BEGIN ... EXCEPTION ... END` block so the rollback intent is explicit in code review (an `EXCEPTION` clause establishes a subtransaction that is discarded when the handler fires). Collect the inserted IDs with `RETURNING id` into an array variable. Define the function in a new migration file (not the same file as task-001) to keep concerns separated.
Document the JSON payload schema in a comment block within the function definition for discoverability.
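A minimal sketch of the function under the notes above. The column names, the payload schema comment, and the 200-record cap are assumptions to be confirmed against the task-001 migration, not a definitive implementation:

```sql
-- Migration sketch: insert_bulk_activities (schema and column names assumed).
-- Payload schema (JSON array): [{ "mentor_id": uuid, "activity_type": text,
--   "recorded_at": timestamptz, "recorded_by_user_id": uuid }, ...]
create or replace function insert_bulk_activities(payload jsonb)
returns uuid[]
language plpgsql
security invoker  -- keep the caller's RLS policies in force
as $$
declare
  inserted_ids uuid[];
begin
  if jsonb_array_length(payload) = 0 then
    return '{}'::uuid[];  -- empty array in, empty array out
  end if;
  if jsonb_array_length(payload) > 200 then  -- assumed batch cap
    raise exception 'batch too large: % records (max 200)',
      jsonb_array_length(payload);
  end if;

  -- Single-statement insert: faster than a FOR loop and atomic by itself.
  -- Assumes payload rows supply every non-defaulted column; if id is
  -- generated, list the target columns explicitly instead of SELECT *.
  with ins as (
    insert into activity_records
    select * from jsonb_populate_recordset(null::activity_records, payload)
    returning id
  )
  select array_agg(id) into inserted_ids from ins;

  return inserted_ids;
exception
  when others then
    -- Any constraint violation aborts the subtransaction; zero rows commit.
    raise exception 'insert_bulk_activities rolled back: %', sqlerrm;
end;
$$;
```

`security invoker` (the default) is kept explicit here so that the RLS policies on `activity_records` continue to apply to the calling coordinator rather than being bypassed by a definer-rights function.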
Testing Requirements
Write pgTAP tests covering: (1) happy path — 5-record batch inserts all rows and returns correct IDs, (2) atomicity — one invalid record in a 5-record batch causes zero inserts, (3) empty array returns empty array, (4) array of 201 records returns error, (5) non-coordinator calling function receives permission denied. Write a Flutter integration test that calls the RPC via the Supabase Dart client and verifies the returned IDs can be queried back from activity_records. Test with a local Supabase instance seeded with the task-001 migration.
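The atomicity case (2) can be sketched in pgTAP as follows; the invalid-row construction (a null `mentor_id`) and the column names are assumptions to be adjusted to the real constraints from the task-001 migration:

```sql
-- pgTAP sketch: one bad row in a batch must leave zero rows committed.
begin;
select plan(2);

-- Batch of 2 where the second row violates a constraint
-- (here: a null mentor_id; substitute a real constraint violation).
select throws_ok(
  $$ select insert_bulk_activities('[
       {"mentor_id": "00000000-0000-0000-0000-000000000001",
        "activity_type": "walk", "recorded_at": "2024-01-01T10:00:00Z",
        "recorded_by_user_id": "00000000-0000-0000-0000-000000000009"},
       {"mentor_id": null,
        "activity_type": "walk", "recorded_at": "2024-01-01T10:05:00Z",
        "recorded_by_user_id": "00000000-0000-0000-0000-000000000009"}
     ]'::jsonb) $$,
  null, null, 'invalid row aborts the whole batch'
);

select is(
  (select count(*)::int from activity_records), 0,
  'no partial inserts survive the failed batch'
);

select * from finish();
rollback;
```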
Adding recorded_by_user_id to the activity_records table and writing correct RLS policies is error-prone: overly permissive policies would allow coordinators to record activities under arbitrary user IDs they do not manage, while overly restrictive policies would silently block valid proxy inserts. A policy defect here would either create a security vulnerability or break the entire proxy feature at runtime.
Mitigation & Contingency
Mitigation: Write and verify the RLS policies against a local Supabase emulator first. Include policy unit tests using pgTAP or Supabase test helpers. Have a second reviewer check the migration SQL before merging. Explicitly test the three cases: coordinator inserting for their own mentors (should succeed), coordinator inserting for another chapter's mentors (should fail), peer mentor inserting for themselves (should succeed, as before).
Contingency: If a policy defect is discovered in staging, roll back the migration with a down-migration script. Delay feature release until the policy is corrected and re-verified. Apply a feature flag to keep the proxy entry point hidden from coordinators until the fix is confirmed.
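The three cases above map onto an INSERT policy along the following lines. This is a sketch only: the `coordinator_mentors` mapping table and its columns are assumed names for whatever structure actually records which mentors a coordinator manages:

```sql
-- Sketch: INSERT policy on activity_records (helper table name assumed).
alter table activity_records enable row level security;

create policy proxy_or_self_insert on activity_records
  for insert
  with check (
    -- the recorder must always be the authenticated user
    recorded_by_user_id = auth.uid()
    and (
      -- peer mentor inserting for themselves (unchanged behaviour)
      mentor_id = auth.uid()
      -- or a coordinator inserting for a mentor they manage;
      -- mentors from other chapters have no mapping row and are rejected
      or exists (
        select 1 from coordinator_mentors cm
        where cm.coordinator_id = auth.uid()
          and cm.mentor_id = activity_records.mentor_id
      )
    )
  );
```

Tying `recorded_by_user_id` to `auth.uid()` in the `with check` clause is what prevents a coordinator from recording activities under an arbitrary user ID.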
The insert_bulk_activities RPC must behave atomically — a failure on row 7 of 12 must roll back rows 1–6. If Supabase's RPC transaction handling is misconfigured or if network interruptions cause partial acknowledgements, some peer mentors could receive duplicate or missing activity records, directly corrupting Bufdir statistics for the coordinator's chapter.
Mitigation & Contingency
Mitigation: Implement the RPC as a PostgreSQL function with explicit BEGIN/EXCEPTION/END block to guarantee atomicity. Add an integration test that inserts a batch where one row violates a unique constraint and asserts zero rows are committed. Document the transaction semantics in code comments.
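As a sketch, the guarantee rests on PL/pgSQL block semantics: an `EXCEPTION` clause opens a subtransaction, so re-raising after a failed insert discards every row of the batch, not just the failing one (statement and names assumed):

```sql
-- Sketch of the guarding block inside insert_bulk_activities.
begin
  insert into activity_records
  select * from jsonb_populate_recordset(null::activity_records, payload);
exception
  when unique_violation or check_violation or foreign_key_violation then
    -- The subtransaction rolls back here: rows 1..k-1 are discarded too.
    raise exception 'bulk insert aborted, zero rows committed: %', sqlerrm;
end;
```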
Contingency: If atomicity cannot be guaranteed via RPC (e.g., due to Supabase plan limitations), fall back to a sequential insert loop with a compensating DELETE in case of partial failure, and surface a clear error to the coordinator listing which mentors failed and which succeeded.