Priority: critical · Complexity: medium · Area: database · Status: pending · Assignee: database specialist · Tier 2

Acceptance Criteria

RPC `get_activity_aggregation_for_org(p_organisation_id UUID, p_period_start TIMESTAMPTZ, p_period_end TIMESTAMPTZ)` exists and is callable by the `authenticated` role — returns `session_count INTEGER, total_hours NUMERIC(10,2)`
RPC `get_activity_aggregation_for_peer_mentor(p_organisation_id UUID, p_user_id UUID, p_period_start TIMESTAMPTZ, p_period_end TIMESTAMPTZ)` exists — returns same typed result scoped to the single peer mentor's activities
Both RPCs filter activities where `activity_date >= p_period_start AND activity_date < p_period_end` (exclusive end boundary matching the Dart Period model)
NULL duration values in the activities table are treated as 0 hours (`COALESCE(duration_minutes, 0)`) and do not cause the aggregation to return NULL
session_count counts the number of distinct activity rows (not participants); total_hours converts duration_minutes to hours with NUMERIC precision
Both RPCs are defined with `SECURITY DEFINER` only if necessary to bypass RLS — prefer `SECURITY INVOKER` with proper RLS policies so the caller's JWT governs row visibility
RPCs are deployed via a versioned Supabase migration file, not via the dashboard
Calling `get_activity_aggregation_for_org` with an organisation_id the caller does not belong to returns `session_count=0, total_hours=0` (RLS silently filters — not an error)
RPCs handle an empty result set (no activities in period) by returning `session_count=0, total_hours=0` rather than NULL or an error
EXPLAIN ANALYZE on both RPCs shows index scans on `activities.organisation_id` and `activities.activity_date` — no sequential scans on large tables
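The criteria above can be sketched as a single `LANGUAGE sql` function. This is a non-authoritative sketch: the outer `COALESCE` covers the empty-period case (where `SUM` returns NULL even though `COUNT` returns 0), `SECURITY INVOKER` leaves RLS in force, and the column names (`organisation_id`, `activity_date`, `duration_minutes`) follow this task's notes and should be confirmed against the real schema.

```sql
-- Sketch: org-level aggregation RPC over the half-open period [start, end).
CREATE OR REPLACE FUNCTION get_activity_aggregation_for_org(
  p_organisation_id UUID,
  p_period_start TIMESTAMPTZ,
  p_period_end TIMESTAMPTZ
)
RETURNS TABLE(session_count INTEGER, total_hours NUMERIC(10,2))
LANGUAGE sql
SECURITY INVOKER
STABLE
AS $$
  SELECT COUNT(*)::INTEGER,                                    -- distinct activity rows, 0 on empty set
         COALESCE(ROUND(SUM(COALESCE(duration_minutes, 0)) / 60.0, 2), 0)  -- NULL durations count as 0h
  FROM activities
  WHERE organisation_id = p_organisation_id
    AND activity_date >= p_period_start
    AND activity_date <  p_period_end;                         -- exclusive end, matching the Dart Period model
$$;
```

Because the function is `SECURITY INVOKER`, a caller outside the organisation simply sees no rows via RLS and receives `0 / 0`, satisfying the silent-filter criterion without any extra code.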

Technical Requirements

Frameworks
PostgreSQL plpgsql
Supabase CLI migrations
Supabase RPC
APIs
supabase.rpc()
Supabase PostgREST RPC endpoint
Data models
activities
periodic_summaries
organisations
user_roles
Performance requirements
Both RPCs must return in <50ms for an organisation with 10,000 activities in a single quarter
Indexes required: `activities(organisation_id, activity_date)` composite index — create in this migration if not already present
Use `SUM` with `COALESCE` rather than subqueries or CTEs for the aggregation to keep the query plan simple
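The index requirement above can be sketched as follows; the index name `idx_activities_org_date` and the literal UUID/date values in the plan check are placeholders.

```sql
-- Composite index supporting both RPC predicates; idempotent for re-runs.
CREATE INDEX IF NOT EXISTS idx_activities_org_date
  ON activities (organisation_id, activity_date);

-- Plan check: expect an Index Scan (or Index Only Scan) on the composite
-- index and no Seq Scan on activities for a realistic org/quarter filter.
EXPLAIN ANALYZE
SELECT COUNT(*), SUM(COALESCE(duration_minutes, 0))
FROM activities
WHERE organisation_id = '00000000-0000-0000-0000-000000000001'
  AND activity_date >= '2025-01-01'
  AND activity_date <  '2025-04-01';
```

Leading with `organisation_id` lets the planner narrow to one organisation's rows before range-scanning `activity_date`, which is what keeps the aggregation under the 50ms budget at 10,000 activities per quarter.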
Security requirements
Use `SECURITY INVOKER` (default) so RLS policies on the activities table apply to the calling user's context
Validate that `p_organisation_id` matches the caller's JWT claim inside the function as a defence-in-depth check — raise `insufficient_privilege` if mismatch
Do not expose internal table structure through error messages — catch exceptions and re-raise with generic messages
Grant EXECUTE only to the `authenticated` role, not `anon`
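The defence-in-depth check and the grants can be sketched like this. The claim path `request.jwt.claims ->> 'organisation_id'` is an assumption — if the org id is not a custom JWT claim in this project, look it up from `user_roles` by `auth.uid()` instead.

```sql
-- Inside the plpgsql body, before the aggregation query.
-- IS DISTINCT FROM also rejects a missing/NULL claim.
IF p_organisation_id IS DISTINCT FROM
   (current_setting('request.jwt.claims', true)::jsonb ->> 'organisation_id')::uuid
THEN
  -- Generic message: do not leak table structure or claim details.
  RAISE insufficient_privilege USING MESSAGE = 'not authorised for this organisation';
END IF;

-- Grants, in the same migration (repeat for the peer-mentor RPC):
REVOKE EXECUTE ON FUNCTION get_activity_aggregation_for_org(UUID, TIMESTAMPTZ, TIMESTAMPTZ)
  FROM PUBLIC, anon;
GRANT EXECUTE ON FUNCTION get_activity_aggregation_for_org(UUID, TIMESTAMPTZ, TIMESTAMPTZ)
  TO authenticated;
```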

Execution Context

Execution Tier
Tier 2 (518 tasks); can start after Tier 1 completes.

Implementation Notes

Define both RPCs in a single migration file. Use `RETURNS TABLE(session_count INTEGER, total_hours NUMERIC)` for type safety in the Dart client. For `get_activity_aggregation_for_org`, the query pattern is: `SELECT COUNT(*)::INTEGER, COALESCE(ROUND(SUM(COALESCE(duration_minutes,0))/60.0, 2), 0) FROM activities WHERE organisation_id = p_organisation_id AND activity_date >= p_period_start AND activity_date < p_period_end`. Note the outer `COALESCE`: over an empty result set `COUNT` returns 0 but `SUM` returns NULL, so without it the empty-period acceptance criterion would fail. For the peer-mentor variant, add `AND created_by = p_user_id` (or whatever the user FK column is on activities — confirm the column name from the existing schema).

Add `RETURNS SETOF` only if the RPC may return multiple rows in future — for now a single-row `RETURNS TABLE` is cleaner. Document the RPC in a SQL comment block for discoverability.
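The peer-mentor variant and its documentation comment might look like this; as noted above, the `created_by` column name is an assumption to confirm against the schema before committing the migration.

```sql
-- Sketch: peer-mentor aggregation, scoped to one user within one organisation.
CREATE OR REPLACE FUNCTION get_activity_aggregation_for_peer_mentor(
  p_organisation_id UUID,
  p_user_id UUID,
  p_period_start TIMESTAMPTZ,
  p_period_end TIMESTAMPTZ
)
RETURNS TABLE(session_count INTEGER, total_hours NUMERIC(10,2))
LANGUAGE sql
SECURITY INVOKER
STABLE
AS $$
  SELECT COUNT(*)::INTEGER,
         COALESCE(ROUND(SUM(COALESCE(duration_minutes, 0)) / 60.0, 2), 0)
  FROM activities
  WHERE organisation_id = p_organisation_id
    AND created_by = p_user_id            -- assumed user FK column; confirm name
    AND activity_date >= p_period_start
    AND activity_date <  p_period_end;
$$;

COMMENT ON FUNCTION get_activity_aggregation_for_peer_mentor(UUID, UUID, TIMESTAMPTZ, TIMESTAMPTZ)
  IS 'Activity aggregation for a single peer mentor over the half-open period [start, end).';
```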

Testing Requirements

Integration tests using pgTAP or the Supabase test runner:
1. Insert 5 activities for org A with known durations in a period, call `get_activity_aggregation_for_org`, and assert session_count=5 with the correct total_hours.
2. Insert 2 activities with NULL duration and assert they contribute 0 to total_hours and 2 to session_count.
3. Call with a period that contains no activities and assert a 0/0 return (not NULL).
4. Call `get_activity_aggregation_for_peer_mentor` with a specific user_id and assert only that user's activities are counted.
5. Call `get_activity_aggregation_for_org` with an org_id the calling JWT does not belong to and assert a 0/0 return (RLS block).
6. Run EXPLAIN ANALYZE and assert no sequential scans.

Seed data must be inserted and cleaned up within each test transaction.
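Cases (1) and (3) might look like this in pgTAP. The UUID is a placeholder, and the INSERT assumes the remaining activities columns have defaults; extend the column list to match the real schema.

```sql
-- pgTAP sketch: seed data and assertions roll back with the transaction.
BEGIN;
SELECT plan(2);

INSERT INTO activities (organisation_id, activity_date, duration_minutes)
SELECT '00000000-0000-0000-0000-00000000000a'::uuid,
       '2025-01-15'::timestamptz,
       60
FROM generate_series(1, 5);

SELECT results_eq(
  $$ SELECT * FROM get_activity_aggregation_for_org(
       '00000000-0000-0000-0000-00000000000a'::uuid,
       '2025-01-01', '2025-04-01') $$,
  $$ VALUES (5, 5.00::numeric) $$,
  'five 60-minute activities aggregate to 5 sessions / 5.00 hours'
);

SELECT results_eq(
  $$ SELECT * FROM get_activity_aggregation_for_org(
       '00000000-0000-0000-0000-00000000000a'::uuid,
       '2030-01-01', '2030-04-01') $$,
  $$ VALUES (0, 0.00::numeric) $$,
  'empty period returns 0/0, not NULL'
);

SELECT * FROM finish();
ROLLBACK;
```

Note that the RLS case (5) must run with a JWT-scoped role rather than the test superuser, since superusers bypass RLS and would mask a misconfigured policy.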

Component
Activity Aggregation Repository
Category: data · medium
Epic Risks (3)
Security risk (high impact, medium probability)

Supabase RLS policies for aggregation views are more complex than single-table policies. A misconfigured policy could silently allow a coordinator in one organisation to see data from another, causing a data breach and breaking trust with participating organisations.

Mitigation & Contingency

Mitigation: Write automated RLS integration tests that create two separate organisations with distinct data, then assert that queries authenticated as org-A users return only org-A rows. Run these tests in CI on every PR touching the database layer.

Contingency: If an RLS bypass is discovered post-deployment, immediately disable the periodic summaries feature flag, revoke affected sessions, audit access logs, notify affected organisations, and patch the policy before re-enabling.

Technical risk (medium impact, medium probability)

Activity records may span multiple session types, proxy registrations, and coordinator bulk entries. Incorrect JOIN logic or missing filters in the aggregation query could double-count sessions or omit activity types, producing inaccurate summaries that erode user trust.

Mitigation & Contingency

Mitigation: Build a fixture dataset covering all activity registration paths (direct, proxy, bulk) and assert expected aggregated counts in integration tests before any UI consumes the repository.

Contingency: If inaccurate counts are reported post-launch, mark affected summaries as invalidated in the database and re-trigger generation once the query is corrected. Communicate transparently to affected users via an in-app banner.

Integration risk (low impact, low probability)

The local cache must be invalidated when a new summary arrives via push notification. If the push token is stale or the FCM/APNs delivery is delayed, the device may show an outdated summary for an extended period, confusing users who see different numbers online versus offline.

Mitigation & Contingency

Mitigation: Implement a TTL on cached summaries (max 48 hours) so stale data is auto-cleared even without a push notification. Also trigger cache refresh on app foreground if the current period's summary is older than 24 hours.

Contingency: Provide a manual pull-to-refresh on the summary card that bypasses the cache and fetches directly from Supabase when a network connection is available.