Priority: critical · Complexity: medium · Domain: database · Status: pending · Assignee: database specialist · Tier 1

Acceptance Criteria

Function bulk_register_activities(records jsonb[]) exists in Supabase schema and is callable via Supabase RPC
registered_by is always set to auth.uid() server-side — the function ignores any registered_by value in the jsonb payload
attributed_to is read from each record's jsonb payload and validated as a non-null UUID
All records are inserted in a single transaction — if any single record fails a constraint (NOT NULL, FK violation, unique conflict), the entire batch is rolled back
On success, function returns an array of inserted row UUIDs in the same order as input records
Function validates that all attributed_to UUIDs belong to the same organization as the calling coordinator (prevents cross-org attribution)
Function is not callable by the anon role — requires authenticated user with coordinator or admin role
Calling with an empty array returns an empty array without error
Calling with a single record behaves identically to inserting one row (regression test for single-item batch)
Migration for this function is versioned in supabase/migrations/ alongside the schema migration
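A minimal invocation sketch of the call shape these criteria imply (the payload fields come from the security requirements below; the UUIDs and values are placeholders, not real data):

```
-- Hypothetical call shape for the RPC; assumes the required payload fields
-- attributed_to, activity_type_id, date, duration_minutes.
select bulk_register_activities(array[
  '{"attributed_to": "11111111-1111-1111-1111-111111111111",
    "activity_type_id": "22222222-2222-2222-2222-222222222222",
    "date": "2024-05-01",
    "duration_minutes": 60}'::jsonb
]);
-- returns an array of inserted activity UUIDs, in input order
```

Note that any registered_by key in the payload would simply be ignored, per the criteria above.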

Technical Requirements

Frameworks
Dart
Flutter
APIs
Supabase PostgreSQL 15
Supabase Auth
Supabase Edge Functions (Deno)
Data Models
activity
activity_type
Performance Requirements
Bulk insert of 50 records must complete within 2 seconds under normal Supabase load
Function must use a single set-based INSERT ... SELECT over the expanded payload (unnest() for the jsonb[] parameter; jsonb_array_elements() would apply only if a single jsonb array were passed) rather than a loop, to avoid per-row round-trips
Security Requirements
registered_by must be sourced exclusively from auth.uid() — never from client payload — preventing coordinator impersonation
Organization scope validation: attributed_to users must belong to the same organization_id as the calling user, enforced inside the function
Function must be defined with SECURITY DEFINER only if necessary for service-role operations; prefer SECURITY INVOKER to inherit caller's RLS context
Input jsonb is validated for required fields (attributed_to, activity_type_id, date, duration_minutes) before any insert to prevent partial writes
Service role key used for this function must never be exposed to the mobile client
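The "not callable by anon" requirement is typically enforced at the privilege level in addition to the in-function role check. A sketch, assuming the function signature defined in the acceptance criteria:

```
-- Sketch: lock down callability. Supabase grants execute to public by
-- default, so revoke first, then grant back to authenticated users only.
revoke execute on function bulk_register_activities(jsonb[]) from public, anon;
grant execute on function bulk_register_activities(jsonb[]) to authenticated;
```

The coordinator/admin role check itself still belongs inside the function (or RLS), since `authenticated` covers all signed-in users.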

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Use a single INSERT INTO activities (...) SELECT ... FROM unnest($1) AS r pattern for performance (for the jsonb[] parameter, unnest() plays the role jsonb_array_elements() would for a single jsonb array); avoid PL/pgSQL FOR loops, which issue individual inserts. Extract organization_id from the calling user's profile or JWT claims at the start of the function and use it to validate all attributed_to UUIDs in one batch query before inserting. Structure:

1. Validate that the input array is not null.
2. Resolve the calling user's organization.
3. Validate that all attributed_to IDs belong to that organization in one query.
4. Bulk insert via the set-returning expansion of the payload.
5. Return array_agg of the inserted IDs.

Wrap the entire body in a single transaction (implicit in PostgreSQL functions). Register the function in the Supabase types generation so the Dart client gets a strongly typed rpc() call.
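The five-step structure above can be sketched as follows. This is a sketch only, not a final implementation: the table and column names (activities, profiles.id / organization_id / role) are assumptions about the surrounding schema and must be aligned with the real migration.

```
create or replace function bulk_register_activities(records jsonb[])
returns uuid[]
language plpgsql
security invoker          -- inherit the caller's RLS context where possible
set search_path = public
as $$
declare
  caller_org uuid;
  inserted_ids uuid[];
begin
  -- (1) validate input
  if records is null then
    raise exception 'records must not be null';
  end if;
  if cardinality(records) = 0 then
    return '{}'::uuid[];  -- empty batch is a no-op, not an error
  end if;

  -- (2) resolve the calling user's organization (and required role)
  select organization_id into caller_org
  from profiles
  where id = auth.uid() and role in ('coordinator', 'admin');
  if caller_org is null then
    raise exception 'coordinator or admin role required';
  end if;

  -- (3) reject the whole batch if any attributed_to is missing or cross-org
  if exists (
    select 1
    from unnest(records) as r
    left join profiles p on p.id = (r->>'attributed_to')::uuid
    where r->>'attributed_to' is null
       or p.organization_id is distinct from caller_org
  ) then
    raise exception 'attributed_to missing or outside caller organization';
  end if;

  -- (4) set-based bulk insert; registered_by comes only from auth.uid(),
  -- never from the payload
  with ins as (
    insert into activities
      (attributed_to, activity_type_id, date, duration_minutes, registered_by)
    select (r->>'attributed_to')::uuid,
           (r->>'activity_type_id')::uuid,
           (r->>'date')::date,
           (r->>'duration_minutes')::int,
           auth.uid()
    from unnest(records) as r
    returning id
  )
  -- (5) collect inserted ids (RETURNING follows insert order in practice;
  -- add WITH ORDINALITY bookkeeping if a strict ordering guarantee is needed)
  select array_agg(id) into inserted_ids from ins;

  return inserted_ids;
end;
$$;
```

Because a raised exception aborts the function's implicit transaction, any constraint violation in step 4 rolls back the entire batch, satisfying the all-or-nothing acceptance criterion.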

Testing Requirements

Integration tests against a local Supabase instance:

1. Insert a batch of 5 valid records and verify all 5 IDs are returned and the rows exist in the activities table.
2. Insert a batch where one record has an invalid attributed_to UUID and verify a full rollback with no rows inserted.
3. Insert a batch where one record violates a NOT NULL constraint and verify a full rollback.
4. Insert an empty array and verify an empty array is returned.
5. Verify that registered_by on all inserted rows equals the calling user's auth.uid(), regardless of payload content.
6. Attempt the call from a user in a different organization and verify rejection.

Run against a local stack started with the Supabase CLI (`supabase start`) or via direct psql test scripts.
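Rollback cases (2) and (3) can be sketched as a psql script; the UUIDs and field values below are placeholders:

```
-- Sketch: a batch with one invalid record must leave no rows behind.
select count(*) as baseline from activities;

-- expected to raise: the second record has a null attributed_to
select bulk_register_activities(array[
  '{"attributed_to": "11111111-1111-1111-1111-111111111111",
    "activity_type_id": "22222222-2222-2222-2222-222222222222",
    "date": "2024-05-01", "duration_minutes": 30}'::jsonb,
  '{"attributed_to": null,
    "activity_type_id": "22222222-2222-2222-2222-222222222222",
    "date": "2024-05-01", "duration_minutes": 30}'::jsonb
]);

-- the count must still equal the baseline: no partial writes survived
select count(*) from activities;
```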

Component
Proxy Activity Repository
Domain: data · Complexity: medium
Epic Risks (3)
Risk 1 of 3 (technical): high impact, medium probability

The activities table migration adding registered_by and attributed_to columns may conflict with existing RLS policies or FK constraints if the user profile table structure differs from assumptions, blocking all subsequent epics.

Mitigation & Contingency

Mitigation: Review existing activities table schema and RLS policies before writing the migration. Run the migration against a staging database clone first. Write rollback scripts alongside the migration.

Contingency: If migration fails in staging, isolate the conflict with a targeted schema audit, adjust FK references or RLS policy scope, and re-run before touching production.

Risk 2 of 3 (security): high impact, medium probability

The RLS policy must restrict proxy inserts to the coordinator's chapter scope. If the chapter-scope resolver pattern differs between organizations (multi-chapter coordinators in NHF vs. single-chapter in HLF), the policy may be too broad or too restrictive.

Mitigation & Contingency

Mitigation: Design the RLS policy to accept a coordinator's full set of assigned chapter IDs (array) rather than a single chapter_id. Validate the policy against NHF multi-chapter test fixtures during the integration test phase.

Contingency: If the policy is found to be incorrect after deployment, introduce a server-side validation edge function as a safety net while the RLS policy is corrected.
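The array-based scoping described in the mitigation could take roughly the following shape. This is a sketch under stated assumptions: coordinator_chapters(user_id, chapter_id) is a hypothetical mapping table, and activities.chapter_id is likewise an assumed column.

```
-- Sketch: scope proxy inserts to the coordinator's full set of chapters,
-- which handles both multi-chapter (NHF) and single-chapter (HLF) cases.
create policy proxy_insert_chapter_scope on activities
for insert
with check (
  chapter_id in (
    select chapter_id
    from coordinator_chapters
    where user_id = auth.uid()
  )
);
```

A subquery over the mapping table avoids hard-coding a single chapter_id, so single-chapter coordinators are just the one-row case.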

Risk 3 of 3 (technical): medium impact, low probability

The bulk_register_activities RPC function may time out or cause lock contention when inserting large participant batches (e.g. 40+ peer mentors in a single group session), degrading the user experience.

Mitigation & Contingency

Mitigation: Benchmark the RPC function with 50-participant batches during development. Use unnest-based bulk insert rather than row-by-row PL/pgSQL loops. Set a reasonable statement_timeout.

Contingency: If performance is insufficient, split the client-side submission into chunks of 20 participants with progress feedback, rather than a single RPC call.
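The 50-participant benchmark from the mitigation could be sketched like this, generating a synthetic batch with placeholder UUIDs:

```
-- Sketch: time a synthetic 50-record batch against the 2-second target.
explain analyze
select bulk_register_activities(array(
  select jsonb_build_object(
           'attributed_to',    '11111111-1111-1111-1111-111111111111',
           'activity_type_id', '22222222-2222-2222-2222-222222222222',
           'date',             '2024-05-01',
           'duration_minutes', 30)
  from generate_series(1, 50)
));
```

Running this during development, rather than after deployment, is what makes the chunked-submission contingency a fallback instead of a firefight.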