Implement ActivityCategoryMappingConfig Dart Class
epic-bufdir-data-aggregation-foundation-task-006 — Implement the ActivityCategoryMappingConfig Dart class that reads the bufdir_category_mappings table, exposes a lookup method (`getBufdirCode(internalTypeId, mappingVersion)`), and caches results in memory. Include a version negotiation method that resolves which mapping version applies to a given reporting period. Provide a Riverpod provider for injection into aggregation services.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
Use a nested in-memory `Map` for the cache, keyed first by mapping version and then by internal type ID, so repeated lookups within an aggregation run never re-query Supabase.
Declare the Riverpod provider with `keepAlive: true` as an `AsyncNotifierProvider` so the cache survives widget rebuilds. Create a `BufdirCategoryMapping` Dart model with a `fromJson` factory to parse Supabase rows; do not pass dynamic maps into business logic.
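The model and cached lookup described in the notes above might be sketched as follows. This is a sketch, not the final API: the column names (`internal_type_id`, `bufdir_code`, `mapping_version`), the `String` key types, and the `prime()` helper are assumptions.

```dart
/// Typed model for one row of bufdir_category_mappings.
/// Column names are assumed, not confirmed by the schema.
class BufdirCategoryMapping {
  final String internalTypeId;
  final String bufdirCode;
  final String mappingVersion;

  const BufdirCategoryMapping({
    required this.internalTypeId,
    required this.bufdirCode,
    required this.mappingVersion,
  });

  factory BufdirCategoryMapping.fromJson(Map<String, dynamic> json) =>
      BufdirCategoryMapping(
        internalTypeId: json['internal_type_id'] as String,
        bufdirCode: json['bufdir_code'] as String,
        mappingVersion: json['mapping_version'] as String,
      );
}

class MappingNotFoundException implements Exception {
  final String internalTypeId;
  MappingNotFoundException(this.internalTypeId);
}

class ActivityCategoryMappingConfig {
  // Nested cache: mapping version -> (internal type id -> bufdir code).
  final Map<String, Map<String, String>> _cache = {};

  /// Hypothetical helper that loads fetched rows into the cache.
  void prime(String version, List<BufdirCategoryMapping> rows) {
    _cache[version] = {
      for (final r in rows) r.internalTypeId: r.bufdirCode,
    };
  }

  String getBufdirCode(String internalTypeId, String mappingVersion) {
    final code = _cache[mappingVersion]?[internalTypeId];
    if (code == null) throw MappingNotFoundException(internalTypeId);
    return code;
  }
}
```

Keeping the cache nested by version means two reporting periods that resolve to different mapping versions can coexist in memory without invalidating each other.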
Testing Requirements
Unit tests (flutter_test): (1) mock the Supabase response and verify getBufdirCode returns the correct value, (2) verify the second call uses the cache (the mock is called only once), (3) verify MappingNotFoundException is thrown for an unknown internalTypeId, (4) verify resolveActiveVersion selects the correct version for a date within the effective range and rejects dates outside all ranges. Test cache invalidation by simulating a table update event. Verify the Riverpod provider initializes correctly with the AsyncNotifierProvider.build() pattern.
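Test case (2), the cache-hit assertion, can be sketched without any mocking framework by injecting the loader as a callback and counting invocations. `CachedMappingSource` and its loader signature are stand-ins for the real class; in the actual suite these assertions would live inside a flutter_test `test()` block.

```dart
/// Hypothetical stand-in for the real config class: the loader callback
/// is injected so a test can count how often Supabase would be hit.
class CachedMappingSource {
  CachedMappingSource(this._load);

  final Future<Map<String, String>> Function(String version) _load;
  final Map<String, Map<String, String>> _cache = {};

  Future<String?> getBufdirCode(String typeId, String version) async {
    // Load the version's table once; later calls reuse the cached map.
    final table = _cache[version] ??= await _load(version);
    return table[typeId];
  }
}

Future<void> main() async {
  var loads = 0;
  final source = CachedMappingSource((version) async {
    loads++;
    return {'summer_camp': 'B01'};
  });

  // First call hits the loader, second is served from the cache.
  assert(await source.getBufdirCode('summer_camp', 'v1') == 'B01');
  assert(await source.getBufdirCode('summer_camp', 'v1') == 'B01');
  assert(loads == 1); // loader invoked exactly once
}
```

The same injection point also serves test case (1) (return a canned row set) and the invalidation test (clear `_cache` on a simulated table update and assert the loader runs again).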
Supabase RLS policies do not apply when an RPC is invoked with the service_role key, which bypasses RLS by design, so org-scoping predicates enforced only through RLS are silently ignored in that execution context. This could lead to cross-org data exposure in production without any obvious error.
Mitigation & Contingency
Mitigation: Invoke all RPCs using the anon/authenticated key rather than service_role, write explicit WHERE org_id = auth.uid()::org_id predicates inside the RPC body as a secondary control, and include automated cross-org leakage tests in the CI pipeline from day one.
Contingency: If RLS bypass is discovered post-deployment, immediately revoke service_role usage in all aggregation paths and hotfix with explicit org_id parameters passed as function arguments validated server-side.
Bufdir may update its official reporting category taxonomy between the mapping configuration being defined and the annual submission deadline. If the ActivityCategoryMappingConfig is compiled as a static Dart constant, it cannot be updated without an app release, potentially causing mapping failures that block submission.
Mitigation & Contingency
Mitigation: Store the mapping as a remote-configurable table (bufdir_category_mappings) in Supabase with a version field rather than as a hardcoded Dart constant. Fetch the current mapping at aggregation time so updates can be pushed without a new app release.
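The version negotiation this mitigation relies on could look like the sketch below. The `MappingVersion` shape with `effectiveFrom`/`effectiveTo` columns on the bufdir_category_mappings table is an assumption about the schema.

```dart
/// Assumed row shape: each mapping version carries an effective range.
class MappingVersion {
  final String version;
  final DateTime effectiveFrom;
  final DateTime? effectiveTo; // null = still open-ended

  const MappingVersion(this.version, this.effectiveFrom, [this.effectiveTo]);
}

/// Returns the version whose effective range contains [reportingDate],
/// or null if no version applies (caller should surface an error).
String? resolveActiveVersion(
    List<MappingVersion> versions, DateTime reportingDate) {
  for (final v in versions) {
    final started = !reportingDate.isBefore(v.effectiveFrom);
    final notEnded =
        v.effectiveTo == null || reportingDate.isBefore(v.effectiveTo!);
    if (started && notEnded) return v.version;
  }
  return null;
}
```

Using a half-open range [effectiveFrom, effectiveTo) avoids ambiguity when one version ends on the same day the next begins.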
Contingency: If a mapping mismatch is detected during an active reporting cycle, coordinators can be temporarily directed to the manual Excel fallback while an emergency mapping update is pushed to the Supabase table.
For large organisations like NHF with 1,400 local chapters and potentially tens of thousands of activity records per reporting period, the Supabase RPC aggregation query may exceed the default PostgREST statement timeout, causing the aggregation to fail with a 503 error.
Mitigation & Contingency
Mitigation: Add composite indexes on (organization_id, created_at) and (organization_id, activity_type_id) to the activities table before writing the RPC. Profile the query plan against a realistic fixture of 50,000 records during development, and raise the statement_timeout setting for the RPC's role if needed.
Contingency: Implement a chunked aggregation fallback: split the period into monthly sub-ranges, aggregate each chunk separately, and merge the per-chunk results client-side in Dart before assembling the final payload.
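The chunked fallback can be sketched as two small helpers. The per-chunk result shape (a map of Bufdir code to count) is an assumption about what the aggregation RPC returns.

```dart
/// Split [from, to) into calendar-month sub-ranges; the last chunk is
/// clamped to [to]. Dart's DateTime normalizes month 13 into the next
/// year, so the December rollover needs no special case.
List<({DateTime start, DateTime end})> monthlyChunks(
    DateTime from, DateTime to) {
  final chunks = <({DateTime start, DateTime end})>[];
  var cursor = DateTime(from.year, from.month);
  while (cursor.isBefore(to)) {
    final next = DateTime(cursor.year, cursor.month + 1);
    chunks.add((start: cursor, end: next.isBefore(to) ? next : to));
    cursor = next;
  }
  return chunks;
}

/// Merge per-chunk counts, summing categories that appear in
/// multiple chunks.
Map<String, int> mergeCounts(Iterable<Map<String, int>> chunkResults) {
  final merged = <String, int>{};
  for (final result in chunkResults) {
    result.forEach(
        (code, count) => merged[code] = (merged[code] ?? 0) + count);
  }
  return merged;
}
```

Because each sub-range is a strictly smaller query, every chunk stays well under the statement timeout, and the merge is a pure in-memory fold on the client.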