Priority: critical · Complexity: medium · Area: backend · Status: pending · Assignee: backend specialist · Tier: 4

Acceptance Criteria

SupabaseAggregationRpc.generateBufdirReport(orgId, periodStart, periodEnd, mappingVersion) returns Future<BufdirReportResult> with strongly-typed data
BufdirReportResult contains a List<BufdirCategoryTotal> where each item has bufdirCode, participantCount, totalHours, activityCount fields
Method throws BufdirRpcException (with error code and message) on RPC failure, not a generic Exception
Timeout of 30 seconds is enforced — if the RPC takes longer, BufdirRpcTimeoutException is thrown with a user-friendly message
Progress callback (optional parameter) is invoked at minimum at call start and on completion — intermediate progress if achievable
Riverpod provider is a Provider<SupabaseAggregationRpc> (synchronous) injecting the Supabase client from the existing provider
All date parameters are passed as ISO 8601 strings (YYYY-MM-DD) to the RPC
Empty result (no activities in period) returns BufdirReportResult with an empty list, not null or an exception
Class is covered by dartdoc with usage example showing how to call generateBufdirReport and handle exceptions
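The criteria above can be sketched as plain Dart model and exception classes. The class and field names come straight from the acceptance criteria; the snake_case JSON keys (`bufdir_code`, `participant_count`, `total_hours`, `activity_count`) are assumptions about the RPC's JSONB shape and must be matched to the actual function output.

```dart
// Typed result and error classes for generateBufdirReport.
// JSON key names below are assumed; verify against the RPC's JSONB output.

class BufdirCategoryTotal {
  const BufdirCategoryTotal({
    required this.bufdirCode,
    required this.participantCount,
    required this.totalHours,
    required this.activityCount,
  });

  final String bufdirCode;
  final int participantCount;
  final double totalHours;
  final int activityCount;

  /// Parses directly from the Supabase response map — no intermediate copies.
  factory BufdirCategoryTotal.fromJson(Map<String, dynamic> json) {
    return BufdirCategoryTotal(
      bufdirCode: json['bufdir_code'] as String,
      participantCount: json['participant_count'] as int,
      totalHours: (json['total_hours'] as num).toDouble(),
      activityCount: json['activity_count'] as int,
    );
  }
}

class BufdirReportResult {
  const BufdirReportResult({required this.categoryTotals});

  final List<BufdirCategoryTotal> categoryTotals;

  /// An empty JSONB array maps to an empty list — never null, never a throw.
  factory BufdirReportResult.fromJson(List<dynamic> rows) {
    return BufdirReportResult(
      categoryTotals: rows
          .map((r) => BufdirCategoryTotal.fromJson(r as Map<String, dynamic>))
          .toList(),
    );
  }
}

class BufdirRpcException implements Exception {
  const BufdirRpcException(this.code, this.message);
  final String code;
  final String message; // sanitized — no raw PostgreSQL error strings
}

class BufdirRpcTimeoutException implements Exception {
  const BufdirRpcTimeoutException(
      [this.message = 'The report took too long to generate. Please try again.']);
  final String message;
}
```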

Technical Requirements

Frameworks
Flutter
Riverpod
Supabase Dart SDK
APIs
Supabase RPC (generate_bufdir_report)
Data Models
BufdirReportResult
BufdirCategoryTotal
BufdirRpcException
BufdirRpcTimeoutException
Performance Requirements
HTTP timeout must be set to 30 seconds at the Supabase client level for this call
JSON parsing of the response must handle up to 500 category entries without noticeable delay
No unnecessary data copying — parse directly from the Supabase response map
Security Requirements
orgId must be validated as a non-null, non-empty UUID string before sending to RPC — throw ArgumentError early
RPC call must use the authenticated Supabase client (not anon key)
Error messages from Supabase must be sanitized before surfacing to UI layers — no raw PostgreSQL error strings to end users
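The early orgId guard might look like this minimal sketch. The regex accepts only the canonical 8-4-4-4-12 hex form (case-insensitive) and deliberately does not check UUID version/variant bits:

```dart
// Guard called before any RPC round-trip; throws ArgumentError early
// so malformed input never reaches Supabase.
final _uuidPattern = RegExp(
  r'^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-'
  r'[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$',
);

void validateOrgId(String? orgId) {
  if (orgId == null || orgId.isEmpty) {
    throw ArgumentError.value(orgId, 'orgId', 'must be a non-empty UUID');
  }
  if (!_uuidPattern.hasMatch(orgId)) {
    throw ArgumentError.value(orgId, 'orgId', 'is not a valid UUID');
  }
}
```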

Execution Context

Execution Tier
Tier 4 (323 tasks)

Can start after Tier 3 completes

Implementation Notes

Use `supabase.rpc('generate_bufdir_report', params: {'p_org_id': orgId, 'p_period_start': periodStart.toIso8601String().substring(0, 10), ...}).timeout(const Duration(seconds: 30))` for the call. Wrap in a try/catch that catches `TimeoutException` separately from `PostgrestException` to surface the right typed error. Create `BufdirCategoryTotal.fromJson(Map json)` factory constructors for clean parsing. The progress callback can be a simple `void Function(BufdirRpcProgress)?` where `BufdirRpcProgress` is an enum with `started`, `completed`, `failed` — true streaming is not possible with a single RPC call, but the callback pattern future-proofs the API for when streaming is added.
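Putting the notes above together, the method body might look like the following sketch. It assumes the Supabase Dart SDK's `SupabaseClient.rpc` and `PostgrestException`, the model and exception classes named in the acceptance criteria, a `validateOrgId` guard per the security requirements, and two assumed names not confirmed by the ticket: the `p_period_end`/`p_mapping_version` parameter keys and a hypothetical `_sanitize` helper that strips raw PostgreSQL detail before a message can reach the UI.

```dart
// Sketch only — parameter key names and _sanitize are assumptions.
import 'dart:async';

import 'package:supabase_flutter/supabase_flutter.dart';

enum BufdirRpcProgress { started, completed, failed }

class SupabaseAggregationRpc {
  SupabaseAggregationRpc(this._client);

  final SupabaseClient _client;

  Future<BufdirReportResult> generateBufdirReport(
    String orgId,
    DateTime periodStart,
    DateTime periodEnd,
    String mappingVersion, {
    void Function(BufdirRpcProgress)? onProgress,
  }) async {
    validateOrgId(orgId); // throws ArgumentError before any network call
    onProgress?.call(BufdirRpcProgress.started);
    try {
      final data = await _client.rpc('generate_bufdir_report', params: {
        'p_org_id': orgId,
        'p_period_start': periodStart.toIso8601String().substring(0, 10),
        'p_period_end': periodEnd.toIso8601String().substring(0, 10),
        'p_mapping_version': mappingVersion,
      }).timeout(const Duration(seconds: 30));
      final result = BufdirReportResult.fromJson(data as List<dynamic>? ?? []);
      onProgress?.call(BufdirRpcProgress.completed);
      return result;
    } on TimeoutException {
      onProgress?.call(BufdirRpcProgress.failed);
      throw const BufdirRpcTimeoutException();
    } on PostgrestException catch (e) {
      onProgress?.call(BufdirRpcProgress.failed);
      throw BufdirRpcException(e.code ?? 'unknown', _sanitize(e.message));
    }
  }

  // Hypothetical helper: log the raw message internally, surface a
  // generic user-facing string — never raw PostgreSQL text.
  String _sanitize(String rawMessage) {
    return 'Report generation failed. Please try again or contact support.';
  }
}
```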

Keep the Riverpod provider simple (`Provider((ref) => SupabaseAggregationRpc(ref.read(supabaseClientProvider)))`) — the provider is a thin wrapper, all logic stays in the class.
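A hypothetical call site, which could double as the dartdoc usage example required by the acceptance criteria. `aggregationRpcProvider` and the surrounding widget plumbing are assumed names, not part of the ticket:

```dart
// Handle each typed error distinctly — the caller never sees a generic
// Exception or a raw PostgreSQL string.
Future<void> loadReport(WidgetRef ref, String orgId) async {
  final rpc = ref.read(aggregationRpcProvider);
  try {
    final report = await rpc.generateBufdirReport(
      orgId,
      DateTime(2024, 1, 1),
      DateTime(2024, 12, 31),
      'v1',
    );
    // Render report.categoryTotals ...
  } on BufdirRpcTimeoutException catch (e) {
    // e.message is already user-friendly; show it directly.
  } on BufdirRpcException catch (e) {
    // Show the sanitized message; log e.code for diagnostics.
  } on ArgumentError {
    // Programming error: an invalid orgId reached the call site.
  }
}
```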

Testing Requirements

Unit tests (flutter_test with mocked Supabase client):
(1) verify the happy path returns a correctly typed BufdirReportResult from a mocked JSONB response;
(2) verify BufdirRpcException is thrown when Supabase returns a PostgreSQL error;
(3) verify BufdirRpcTimeoutException is thrown when the call exceeds the timeout;
(4) verify an empty JSONB array returns a BufdirReportResult with an empty list;
(5) verify ArgumentError is thrown for a null or malformed orgId.

Integration test (against a test Supabase instance): call generateBufdirReport with known seed data and verify the result matches expected aggregates. Verify the Riverpod provider resolves without error in a ProviderContainer test.
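Tests (4) and (5) need no Supabase mock at all and can be sketched directly against the model classes; tests (1) through (3) would wrap a mocked SupabaseClient (e.g. via mocktail) around the same `expect(...)` pattern:

```dart
// Sketch of the mock-free unit tests; assumes the BufdirReportResult
// and validateOrgId declarations from the acceptance criteria.
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('empty JSONB array yields an empty list, not null', () {
    final result = BufdirReportResult.fromJson(<dynamic>[]);
    expect(result.categoryTotals, isEmpty);
  });

  test('malformed orgId throws ArgumentError before any RPC call', () {
    expect(() => validateOrgId('not-a-uuid'), throwsArgumentError);
    expect(() => validateOrgId(''), throwsArgumentError);
  });
}
```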

Component
Supabase Aggregation RPC Functions
Type: infrastructure · Priority: high
Epic Risks (3)
Risk 1: security (high impact, medium probability)

Supabase RLS policies may not propagate correctly into RPC function execution context, causing org-scoping predicates to be silently ignored when the function is invoked with service_role key. This could lead to cross-org data exposure in production without any obvious error.

Mitigation & Contingency

Mitigation: Invoke all RPCs using the anon/authenticated key rather than service_role, write an explicit org-scoping predicate inside the RPC body (e.g. WHERE org_id = <caller's org id resolved from the authenticated JWT>) as a secondary control, and include automated cross-org leakage tests in the CI pipeline from day one.

Contingency: If RLS bypass is discovered post-deployment, immediately revoke service_role usage in all aggregation paths and hotfix with explicit org_id parameters passed as function arguments validated server-side.

Risk 2: dependency (high impact, medium probability)

Bufdir may update its official reporting category taxonomy between the mapping configuration being defined and the annual submission deadline. If the ActivityCategoryMappingConfig is compiled as a static Dart constant, it cannot be updated without an app release, potentially causing mapping failures that block submission.

Mitigation & Contingency

Mitigation: Store the mapping as a remote-configurable table (bufdir_category_mappings) in Supabase with a version field rather than as a hardcoded Dart constant. Fetch the current mapping at aggregation time so updates can be pushed without a new app release.

Contingency: If a mapping mismatch is detected during an active reporting cycle, coordinators can be temporarily directed to the manual Excel fallback while an emergency mapping update is pushed to the Supabase table.

Risk 3: technical (high impact, low probability)

For large organisations like NHF with 1,400 local chapters and potentially tens of thousands of activity records per reporting period, the Supabase RPC aggregation query may exceed the default PostgREST statement timeout, causing the aggregation to fail with a 503 error.

Mitigation & Contingency

Mitigation: Add partial indexes on (organization_id, created_at) and (organization_id, activity_type_id) to the activities table before writing the RPC. Profile the query plan against a realistic fixture of 50,000 records during development and increase the statement_timeout setting for the RPC role if needed.

Contingency: Implement chunked aggregation fallback: split the period into monthly sub-ranges and aggregate each chunk client-side, merging results with UNION-style Dart logic before assembling the final payload.
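The chunked contingency can be sketched in pure Dart: split the period into calendar-month sub-ranges, then sum per-category totals across chunks. One caveat, noted in the comment: summing participant counts across months over-counts people active in more than one month, so the single server-side RPC remains the source of truth for unique counts.

```dart
// Pure-Dart sketch of the chunked fallback. splitIntoMonths breaks the
// reporting period into calendar-month sub-ranges; mergeTotals sums the
// per-category [participants, hours, activities] triples from each chunk.
// Caveat: summed participant counts over-count people active in more
// than one month.
class DateRange {
  const DateRange(this.start, this.end);
  final DateTime start;
  final DateTime end; // inclusive
}

List<DateRange> splitIntoMonths(DateTime start, DateTime end) {
  final chunks = <DateRange>[];
  var cursor = start;
  while (!cursor.isAfter(end)) {
    // Day 0 of the next month normalizes to the last day of this month.
    final monthEnd = DateTime(cursor.year, cursor.month + 1, 0);
    chunks.add(DateRange(cursor, monthEnd.isAfter(end) ? end : monthEnd));
    cursor = DateTime(cursor.year, cursor.month + 1, 1);
  }
  return chunks;
}

Map<String, List<num>> mergeTotals(Iterable<Map<String, List<num>>> chunks) {
  final merged = <String, List<num>>{};
  for (final chunk in chunks) {
    chunk.forEach((bufdirCode, totals) {
      final acc = merged.putIfAbsent(bufdirCode, () => <num>[0, 0, 0]);
      for (var i = 0; i < totals.length; i++) {
        acc[i] += totals[i];
      }
    });
  }
  return merged;
}
```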