Implement Single-Org Activity Aggregation Query
epic-bufdir-report-export-core-backend-task-002 — Implement the core Supabase query in the activity aggregation service that fetches all activity records for a given org_id and date range. Group results by peer mentor, compute total sessions and minutes per peer mentor, and attach activity type metadata. This is the baseline aggregation path for chapter-level scope before any hierarchy roll-up is applied.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
Prefer a Supabase RPC (SQL function) for the GROUP BY aggregation rather than fetching raw rows and aggregating in Dart — this avoids transferring all raw records over the network and is significantly more efficient for large orgs. Define the SQL function in a Supabase migration file so it is version-controlled. The Dart service class should call `.rpc('aggregate_activities_for_org', params: {...})` and map the returned JSON to `BufdirPayload`. If the project does not yet use RPC functions, document this pattern decision.
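A minimal sketch of what the migration-defined SQL function could look like. The `activities` table and its `org_id`, `peer_mentor_id`, `activity_type`, `occurred_at`, and `duration_minutes` columns are assumptions; adjust to the actual schema:

```sql
-- supabase/migrations/<timestamp>_aggregate_activities_for_org.sql
-- Sketch only: table and column names are assumptions, not the real schema.
create or replace function aggregate_activities_for_org(
  p_org_id uuid,
  p_period_start timestamptz,
  p_period_end timestamptz
)
returns table (
  peer_mentor_id uuid,
  activity_type text,
  total_sessions bigint,
  total_minutes bigint
)
language sql
stable
security invoker  -- keep RLS in force for the calling user
as $$
  select
    a.peer_mentor_id,
    a.activity_type,
    count(*)                             as total_sessions,
    coalesce(sum(a.duration_minutes), 0) as total_minutes
  from activities a
  where a.org_id = p_org_id
    and a.occurred_at >= p_period_start
    and a.occurred_at <  p_period_end
  group by a.peer_mentor_id, a.activity_type;
$$;
```

`security invoker` is deliberate here: it keeps the caller's RLS policies in effect inside the function, so the RPC cannot read rows the user could not already see.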
Validate the `orgId` against the authenticated user's `organisation_id` JWT claim at the start of the method and throw an `UnauthorisedAccessException` if they do not match — this is the application-level guard complementing RLS. Ensure the `period_start` and `period_end` are passed as ISO 8601 strings to Supabase to avoid timezone ambiguity; document the expected timezone convention (UTC) in a comment.
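The guard described above lives in the Dart service; a sketch in TypeScript of the same contract (claim shape, exception name, and function name are assumptions for illustration):

```typescript
// Application-level guard complementing RLS. Names are illustrative;
// the real implementation is the Dart service method described above.
class UnauthorisedAccessException extends Error {}

interface JwtClaims {
  organisation_id: string;
}

// Convention: all period boundaries are UTC, serialised as ISO 8601
// strings so the database never has to guess a timezone.
function buildAggregationParams(
  orgId: string,
  claims: JwtClaims,
  periodStart: Date,
  periodEnd: Date,
): { org_id: string; period_start: string; period_end: string } {
  if (orgId !== claims.organisation_id) {
    throw new UnauthorisedAccessException(
      `orgId ${orgId} does not match the caller's organisation_id claim`,
    );
  }
  return {
    org_id: orgId,
    period_start: periodStart.toISOString(),
    period_end: periodEnd.toISOString(),
  };
}
```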
Testing Requirements
Unit tests (Dart, flutter_test): mock the Supabase client to return a scripted list of activity rows and assert that the aggregation logic correctly sums sessions and minutes per peer mentor and per activity type. Test edge cases: single peer mentor with one activity, multiple peer mentors, activity records with null duration (should be treated as 0 or excluded with a warning), empty result set.

Integration tests (optional but recommended): use Supabase's local development environment (`supabase start`) to run the actual query against a seeded test database and assert the returned payload matches expected totals.

Performance test: seed the test database with 500 rows for a single org and confirm the query returns in under 2 seconds.
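The aggregation behaviour the unit tests pin down can be sketched as a pure function (shown in TypeScript for illustration; the real tests are Dart. Row shape and the choice to treat null duration as 0 are assumptions):

```typescript
// Pure aggregation sketch mirroring the unit-test expectations.
// Assumption: null durationMinutes counts as 0 rather than being excluded.
interface ActivityRow {
  peerMentorId: string;
  activityType: string;
  durationMinutes: number | null;
}

interface MentorTotals {
  totalSessions: number;
  totalMinutes: number;
}

function aggregateByMentor(rows: ActivityRow[]): Map<string, MentorTotals> {
  const totals = new Map<string, MentorTotals>();
  for (const row of rows) {
    const current = totals.get(row.peerMentorId) ?? { totalSessions: 0, totalMinutes: 0 };
    current.totalSessions += 1;
    current.totalMinutes += row.durationMinutes ?? 0; // null duration -> 0
    totals.set(row.peerMentorId, current);
  }
  return totals; // empty input yields an empty map (the empty-result edge case)
}
```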
Supabase Edge Functions have a default execution timeout. For large national-scope exports aggregating tens of thousands of activities across 1,400 chapters, the edge function may time out before completing, leaving coordinators with a failed export and no partial output.
Mitigation & Contingency
Mitigation: Optimise the aggregation SQL using pre-materialised aggregation views or RPC functions that run inside the database rather than iterating records in Deno. Profile query execution time against realistic production data volumes early. Request an elevated timeout limit from Supabase if needed. Implement progress checkpointing so the export can be resumed from the last completed aggregation batch.
Contingency: For organisations exceeding a configurable threshold (e.g. >5,000 activities), switch to an asynchronous export pattern: the edge function writes a 'pending' audit record and enqueues the job; the client polls for completion and is notified via Supabase Realtime when the file is ready.
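The threshold switch above can be sketched as a small decision function (the 5,000 default comes from the contingency; the function name and the exact boundary semantics are assumptions):

```typescript
// Sketch of the sync/async export decision. Above the threshold, the edge
// function would write a 'pending' audit record, enqueue the job, and let
// the client poll / subscribe via Supabase Realtime.
type ExportMode = "synchronous" | "asynchronous";

function chooseExportMode(activityCount: number, threshold = 5000): ExportMode {
  // Assumption: counts strictly greater than the threshold go async.
  return activityCount > threshold ? "asynchronous" : "synchronous";
}
```

Keeping the threshold a parameter (backed by config) lets operations tune it without a redeploy once real timing data from the performance tests is available.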
Server-side PDF generation in a Deno Edge Function environment restricts library choices. Many popular PDF libraries require Node.js APIs not available in Deno, or produce large bundle sizes that exceed edge function limits. Choosing the wrong library could block the entire PDF generation path.
Mitigation & Contingency
Mitigation: Spike PDF library selection as the first task of this epic, evaluating at least two Deno-compatible options (e.g. pdf-lib, jsPDF with Deno compatibility shim). Test bundle size and basic rendering before committing to an implementation. Document the chosen library's constraints.
Contingency: If no suitable Deno-native PDF library is found, generate a well-structured HTML report from the edge function and use a headless Chromium service (e.g. Browserless, Gotenberg) for HTML-to-PDF conversion, or temporarily ship CSV-only export while the PDF path is resolved.
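A sketch of the Gotenberg fallback request. Gotenberg's Chromium module converts HTML posted as multipart form data, with the entry file named `index.html`; the base URL and helper name here are assumptions for this deployment:

```typescript
// Builds the HTML-to-PDF conversion request for a Gotenberg instance.
// Assumption: a reachable Gotenberg (v7+) service at baseUrl.
function buildGotenbergRequest(
  baseUrl: string,
  html: string,
): { url: string; init: { method: string; body: FormData } } {
  const form = new FormData();
  // Gotenberg requires the entry file to be named exactly "index.html".
  form.append("files", new Blob([html], { type: "text/html" }), "index.html");
  return {
    url: `${baseUrl}/forms/chromium/convert/html`,
    init: { method: "POST", body: form },
  };
}
```

The edge function would then `fetch(req.url, req.init)` and stream the returned PDF bytes to storage; keeping the request construction pure makes it unit-testable without a live Gotenberg instance.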
Peer mentors affiliated with multiple chapters (a documented NHF scenario) must not be double-counted in participant totals. Incorrect deduplication logic would overreport participation figures to Bufdir, which could be discovered during audit and damage organisational credibility.
Mitigation & Contingency
Mitigation: Define and document the deduplication contract explicitly before coding: deduplication is per-person per-period, not per-activity. Build dedicated unit tests with fixtures containing the exact multi-chapter membership patterns described in NHF's documentation. Have an NHF representative validate test fixture outputs against known-good manual counts.
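The per-person per-period contract can be sketched as follows (field names and the period key format are assumptions; the point is that `chapterId` is deliberately ignored when counting participants):

```typescript
// Dedup contract sketch: a person counts once per reporting period,
// no matter how many chapters logged activity for them.
interface Participation {
  personId: string;
  chapterId: string;
  period: string; // e.g. "2024-Q1" (key format is an assumption)
}

function countUniqueParticipants(rows: Participation[]): Map<string, number> {
  const byPeriod = new Map<string, Set<string>>();
  for (const row of rows) {
    const seen = byPeriod.get(row.period) ?? new Set<string>();
    seen.add(row.personId); // chapterId intentionally not part of the key
    byPeriod.set(row.period, seen);
  }
  return new Map([...byPeriod].map(([period, seen]) => [period, seen.size]));
}
```

A fixture exercising the multi-chapter scenario (one mentor affiliated with two chapters in the same period) is exactly the kind of test the mitigation calls for.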
Contingency: If deduplication logic produces results that cannot be verified against manual counts before launch, surface a deduplication warning in the export preview listing the affected peer mentor IDs, and require explicit coordinator acknowledgement before finalising the export.