Priority: critical · Complexity: medium · Category: database · Status: pending · Assignee: database specialist · Tier: 1

Acceptance Criteria

Function insert_bulk_activities(activities JSONB[]) is callable via Supabase RPC (POST /rest/v1/rpc/insert_bulk_activities) and returns a JSON array of UUIDs
All inserts execute atomically within a single transaction — if any row fails, the entire batch is rolled back and no rows are persisted
Function accepts an array of at least 1 and at most 200 activity payloads per call; calls exceeding 200 items return a 400-equivalent error
Each payload in the array must include: peer_mentor_user_id, activity_type_id, recorded_at, duration_minutes, and optionally recorded_by_user_id
Foreign key violations (invalid peer_mentor_user_id, activity_type_id) cause full rollback and return a structured error message identifying the violating field
The function is defined as SECURITY DEFINER with an explicit `SET search_path` to prevent privilege escalation, and the owner is a non-superuser role
Return value is an ordered array of inserted row IDs matching the input order
Calling the function with an empty array returns an empty array without error
Function is exposed via Supabase RPC and accessible only to authenticated users (not anon role)
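The last criterion is typically satisfied at the privilege level rather than in function logic. A minimal sketch, assuming the function signature from the criteria above:

```sql
-- Block anonymous access; only authenticated users may call the RPC.
revoke execute on function public.insert_bulk_activities(jsonb[]) from public, anon;
grant execute on function public.insert_bulk_activities(jsonb[]) to authenticated;
```

PostgREST (and therefore Supabase RPC) respects these grants, so an `anon`-keyed request to `/rest/v1/rpc/insert_bulk_activities` is rejected before the function body runs.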

Technical Requirements

frameworks
PostgreSQL PL/pgSQL
Supabase RPC layer
apis
Supabase REST API /rpc/insert_bulk_activities
data models
activity_records
activity_types
auth.users
performance requirements
Batch of 200 records must complete in < 2 seconds under normal load
Use a single INSERT ... SELECT FROM unnest() pattern rather than looping inserts for optimal performance
security requirements
SECURITY DEFINER with explicit `SET search_path = public, pg_catalog` to prevent search-path injection
Caller's auth.uid() must be validated against allowed coordinator roles before any insert proceeds
Input JSONB array must be validated for required fields before any DB write attempt
Max batch size of 200 enforced at function entry to prevent resource exhaustion

Execution Context

Execution Tier
Tier 1 (540 tasks)

Can start after Tier 0 completes

Implementation Notes

Prefer `INSERT INTO activity_records SELECT * FROM jsonb_populate_recordset(null::activity_records, $1)` over a FOR loop — it executes as a single statement and is significantly faster for large batches. Note that `jsonb_populate_recordset` takes a single `jsonb` value whose top level is an array, so with a `jsonb[]` parameter you must either unnest the array or adjust the signature. Also note that a PL/pgSQL function body is already atomic within the caller's transaction and cannot issue BEGIN/COMMIT itself; use a `BEGIN ... EXCEPTION ... END` block instead to make the rollback intent explicit in code review. Collect inserted IDs with `RETURNING id` (for example via a CTE and `array_agg`). Define the function in a new migration file (not the same file as task-001) to keep concerns separated.

Document the JSON payload schema in a comment block within the function definition for discoverability.
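Putting the notes above together, a minimal PL/pgSQL sketch follows. Column types, the `is_coordinator()` helper, and the error text are assumptions to be aligned with the real schema from the task-001 migration:

```sql
create or replace function public.insert_bulk_activities(activities jsonb[])
returns uuid[]
language plpgsql
security definer
set search_path = public, pg_catalog
as $$
-- Payload schema (each array element):
--   peer_mentor_user_id  uuid        (required)
--   activity_type_id     uuid        (required; assumed uuid, match real column)
--   recorded_at          timestamptz (required)
--   duration_minutes     int         (required)
--   recorded_by_user_id  uuid        (optional)
declare
  inserted_ids uuid[];
begin
  -- Empty input short-circuits without error.
  if coalesce(array_length(activities, 1), 0) = 0 then
    return '{}'::uuid[];
  end if;

  -- Enforce the batch ceiling before touching any table.
  if array_length(activities, 1) > 200 then
    raise exception 'batch exceeds 200 items';
  end if;

  -- is_coordinator() is a hypothetical helper; wire this to the real role model.
  if not public.is_coordinator(auth.uid()) then
    raise exception 'permission denied';
  end if;

  -- Validate required fields before any write attempt.
  if exists (
    select 1
    from unnest(activities) as p
    where p->>'peer_mentor_user_id' is null
       or p->>'activity_type_id' is null
       or p->>'recorded_at' is null
       or p->>'duration_minutes' is null
  ) then
    raise exception 'missing required field in payload';
  end if;

  -- Single-statement insert; WITH ORDINALITY keeps the input order.
  -- Postgres returns RETURNING rows in insertion order in practice,
  -- but the ID/input correspondence should still be asserted in tests.
  with ins as (
    insert into activity_records
      (peer_mentor_user_id, activity_type_id, recorded_at,
       duration_minutes, recorded_by_user_id)
    select (t.p->>'peer_mentor_user_id')::uuid,
           (t.p->>'activity_type_id')::uuid,
           (t.p->>'recorded_at')::timestamptz,
           (t.p->>'duration_minutes')::int,
           (t.p->>'recorded_by_user_id')::uuid  -- null when the key is absent
    from unnest(activities) with ordinality as t(p, ord)
    order by t.ord
    returning id
  )
  select array_agg(id) into inserted_ids from ins;

  return inserted_ids;
end;
$$;
```

Because the whole insert is one statement inside an atomic function body, any row failure aborts the entire batch with no partial persistence.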

Testing Requirements

Write pgTAP tests covering: (1) happy path — 5-record batch inserts all rows and returns correct IDs, (2) atomicity — one invalid record in a 5-record batch causes zero inserts, (3) empty array returns empty array, (4) array of 201 records returns error, (5) non-coordinator calling function receives permission denied. Write a Flutter integration test that calls the RPC via the Supabase Dart client and verifies the returned IDs can be queried back from activity_records. Test with a local Supabase instance seeded with the task-001 migration.
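Two of the cheaper cases can be sketched in pgTAP as follows. The expected error code `P0001` (the PL/pgSQL `RAISE EXCEPTION` default) is an assumption; adjust expectations to whatever the implementation actually raises, and run against a seeded local Supabase instance so `auth.uid()` and the role check resolve:

```sql
begin;
select plan(2);

-- Case (3): empty array returns an empty array without error.
select is(
  insert_bulk_activities(array[]::jsonb[]),
  '{}'::uuid[],
  'empty input yields empty output'
);

-- Case (4): a 201-item batch is rejected before any write.
select throws_ok(
  $$ select insert_bulk_activities(array_fill('{}'::jsonb, array[201])) $$,
  'P0001'
);

select * from finish();
rollback;
```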

Component
Batch Insert RPC Adapter (infrastructure, medium)
Epic Risks (2)
Risk 1: security (high impact, medium probability)

Adding recorded_by_user_id to the activities table and writing correct RLS policies is error-prone: overly permissive policies would allow coordinators to record activities under arbitrary user IDs they do not manage, while overly restrictive policies would silently block valid proxy inserts. A policy defect here would either create a security vulnerability or break the entire proxy feature at runtime.

Mitigation & Contingency

Mitigation: Write RLS policies in a local Supabase emulator first. Include policy unit tests using pgTAP or Supabase test helpers. Have a second reviewer check the migration SQL before merging. Explicitly test three cases: a coordinator inserting for their own mentors (should succeed), a coordinator inserting for another chapter's mentors (should fail), and a peer mentor inserting for themselves (should succeed as before).
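A hedged sketch of the intended policy shape, assuming a hypothetical `coordinator_mentors(coordinator_user_id, mentor_user_id)` mapping table; the real chapter/mentor model must be substituted before this is usable:

```sql
-- Hypothetical INSERT policy: mentors may record for themselves, and
-- coordinators may record only on behalf of mentors they manage.
create policy activity_proxy_insert on activity_records
  for insert to authenticated
  with check (
    peer_mentor_user_id = auth.uid()  -- self-recording, unchanged behaviour
    or exists (
      select 1
      from coordinator_mentors cm     -- assumed mapping table
      where cm.coordinator_user_id = auth.uid()
        and cm.mentor_user_id = activity_records.peer_mentor_user_id
    )
  );
```

The `exists` clause is the load-bearing part: it scopes proxy inserts to managed mentors, which is exactly the boundary the three test cases above probe.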

Contingency: If a policy defect is discovered in staging, roll back the migration with a down-migration script. Delay feature release until the policy is corrected and re-verified. Apply a feature flag to keep the proxy entry point hidden from coordinators until the fix is confirmed.

Risk 2: technical (high impact, low probability)

The insert_bulk_activities RPC must behave atomically — a failure on row 7 of 12 must roll back rows 1–6. If Supabase's RPC transaction handling is misconfigured or if network interruptions cause partial acknowledgements, some peer mentors could receive duplicate or missing activity records, directly corrupting Bufdir statistics for the coordinator's chapter.

Mitigation & Contingency

Mitigation: Implement the RPC as a PostgreSQL function with explicit BEGIN/EXCEPTION/END block to guarantee atomicity. Add an integration test that inserts a batch where one row violates a unique constraint and asserts zero rows are committed. Document the transaction semantics in code comments.
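The `BEGIN/EXCEPTION/END` shape can also map foreign-key failures to the structured error required by the acceptance criteria. A sketch, where the constraint names are assumptions to be checked against the real schema:

```sql
begin
  -- ... single-statement bulk insert goes here ...
exception
  when foreign_key_violation then
    declare
      v_constraint text;
    begin
      -- Identify which FK failed so the error can name the violating field.
      get stacked diagnostics v_constraint = CONSTRAINT_NAME;
      raise exception 'foreign key violation (%): check field %',
        v_constraint,
        case v_constraint  -- assumed constraint names
          when 'activity_records_peer_mentor_user_id_fkey' then 'peer_mentor_user_id'
          when 'activity_records_activity_type_id_fkey'    then 'activity_type_id'
          else 'unknown'
        end
        using errcode = '23503';
    end;
end;
```

Re-raising inside the handler aborts the subtransaction the block opened, so no rows from the failed batch survive: the atomicity guarantee and the structured error come from the same construct.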

Contingency: If atomicity cannot be guaranteed via RPC (e.g., due to Supabase plan limitations), fall back to a sequential insert loop with a compensating DELETE in case of partial failure, and surface a clear error to the coordinator listing which mentors failed and which succeeded.