Test GeographicDistributionService with full NHF hierarchy
epic-bufdir-data-aggregation-core-logic-task-012 — Write integration tests for GeographicDistributionService using a seeded test database reflecting NHF's 1,400-chapter structure. Test cases: region rollup accuracy, cross-chapter deduplication, org isolation (no data leakage across orgs), performance benchmarks for full hierarchy traversal under 2 seconds.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 4 - 323 tasks
Can start after Tier 3 completes
Implementation Notes
Use Supabase CLI (supabase start) to spin up a local Supabase instance for integration tests. Create a seed SQL file (supabase/seed.sql or a test-specific seed) that generates the NHF-scale hierarchy. To simulate 1,400 chapters efficiently, use a SQL generator loop (generate_series) rather than manually inserting rows. Structure the seed to include deliberate cross-chapter memberships, cross-region participants, and cross-org contamination attempts.
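A seed sketch along these lines could generate the hierarchy with generate_series. Table and column names (org_units, unit_type, parent_id) and the 19-region split are assumptions for illustration; adapt them to the actual schema:

```sql
-- Hypothetical schema: org_units is a self-referencing hierarchy table.
-- One national root (id 0), regions 1..19 beneath it.
insert into org_units (id, parent_id, org_id, name, unit_type)
values (0, null, 1, 'NHF National', 'national');

insert into org_units (id, parent_id, org_id, name, unit_type)
select r, 0, 1, 'Region ' || r, 'region'
from generate_series(1, 19) as r;

-- 1,400 chapters, round-robin assigned to regions 1..19.
insert into org_units (id, parent_id, org_id, name, unit_type)
select 100 + c, 1 + (c % 19), 1, 'Chapter ' || c, 'chapter'
from generate_series(1, 1400) as c;
```

Cross-chapter memberships and cross-org rows can be layered on top with further generate_series inserts, including a handful of chapters deliberately seeded with a null parent_id to exercise the unmapped-chapter paths.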
Write a Dart test helper (SeedHelper) that calls the seed RPC and returns expected result maps for assertion. Use expect() with custom matchers for result set comparison. Isolate performance tests into a separate test file so they can be skipped in fast feedback loops and run only in CI.
Testing Requirements
All tests are integration tests using flutter_test against a Supabase local dev instance (Supabase CLI). Organize tests into groups: (1) rollup accuracy, (2) deduplication, (3) org isolation, (4) boundary conditions, (5) performance. Use a shared setUpAll per group that seeds the database (setUp runs before every test, which would re-seed needlessly) and a matching tearDownAll that cleans up. For performance tests, run 3 times and assert all 3 runs complete under 2 seconds.
No mocking in integration tests — real Supabase RLS policies must be active. Aim for complete scenario coverage of all acceptance criteria above.
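The group layout and the 3-run performance assertion could be sketched as below. SeedHelper, the seeding method, and the service API are assumptions, not existing code:

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  setUpAll(() async {
    // Assumed helper wrapping the seed RPC (see Implementation Notes).
    await SeedHelper.seedNhfHierarchy();
  });

  tearDownAll(() async => SeedHelper.cleanup());

  group('performance', () {
    test('full hierarchy traversal stays under 2 s across 3 runs', () async {
      for (var run = 1; run <= 3; run++) {
        final sw = Stopwatch()..start();
        // Hypothetical entry point; substitute the real service call.
        await GeographicDistributionService().aggregateFullHierarchy();
        sw.stop();
        expect(sw.elapsed, lessThan(const Duration(seconds: 2)),
            reason: 'run $run exceeded the 2 s budget');
      }
    });
  });
}
```

Keeping this group in its own file, as the Implementation Notes suggest, lets the fast-feedback loop skip it with a test-name or tag filter while CI runs everything.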
NHF members can belong to up to 5 local chapters. When a participant has activities registered under different chapter IDs within the same reporting period, deduplication requires a reliable cross-chapter identity key. If national IDs are absent for some members (a known data quality issue in NHF's systems), the deduplication service may fail to identify duplicates, resulting in inflated counts submitted to Bufdir.
Mitigation & Contingency
Mitigation: Implement a multi-attribute identity matching strategy: match primarily on national_id, and fall back to a (full_name + birth_year + municipality) composite key when national_id is absent. Expose a low-confidence match list in DeduplicationAnomalyReport that coordinators can review and manually resolve before submission.
Contingency: If identity data quality is too poor for reliable automated deduplication for specific organisations, add an organisation-level config flag that disables cross-chapter deduplication for that org and requires coordinators to manually review the anomaly report before submitting.
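One way to express the fallback key directly in SQL, useful both for the dedup service and for seeding test expectations. Column and table names are illustrative assumptions:

```sql
-- Derive one identity key per participant: national_id when present,
-- otherwise a composite of normalised name, birth year, and municipality.
select
  coalesce(
    p.national_id,
    lower(p.full_name) || '|' || p.birth_year || '|' || p.municipality_code
  ) as identity_key,
  count(distinct ar.chapter_id) as chapters_seen
from participants p
join activity_registrations ar on ar.participant_id = p.id
group by identity_key
having count(distinct ar.chapter_id) > 1;  -- cross-chapter dedup candidates
```

Rows matched only via the composite key would feed the low-confidence list in DeduplicationAnomalyReport rather than being merged automatically.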
The geographic distribution algorithm must resolve NHF's 1,400 local chapter hierarchy to regional aggregates. If the organizational unit hierarchy in the database is incomplete (missing parent-child relationships for some chapters), the geographic service will silently drop activities from unmapped chapters, producing an understated geographic breakdown.
Mitigation & Contingency
Mitigation: Add a hierarchy completeness validation step in GeographicDistributionService that counts activities without a resolvable region assignment and surfaces them as an 'unmapped_activities' field in the distribution result. Block export if unmapped_activities > 0.
Contingency: Provide a 'national' fallback bucket for activities from chapters with no region assignment, clearly labelled in the preview screen so coordinators are alerted to fix the org hierarchy data before re-running aggregation.
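The completeness check can be a single query that counts activities whose chapter does not resolve to a region. Again, table and column names are assumptions:

```sql
-- Activities whose chapter has no parent, or whose parent is not a region.
select count(*) as unmapped_activities
from activities a
join org_units chapter on chapter.id = a.org_unit_id
left join org_units region
  on region.id = chapter.parent_id
 and region.unit_type = 'region'
where region.id is null;
```

The integration tests can seed a few chapters with a null parent_id and assert both that this count is nonzero and that export is blocked.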
BufdirAggregationService orchestrates four dependent services. If one service (e.g., GeographicDistributionService) throws mid-pipeline, the partially assembled metrics payload may be silently cached or returned as if complete, resulting in a Bufdir submission missing the geographic breakdown section.
Mitigation & Contingency
Mitigation: Implement the orchestrator as a transactional pipeline using a Result/Either pattern (Dart has no built-in Either; packages such as fpdart provide one, or a small sealed class can be hand-rolled): each stage returns Either&lt;AggregationError, PartialResult&gt;, and the orchestrator proceeds only if all stages succeed. The final payload is assembled and persisted only when every stage returns success.
Contingency: If a partial failure state reaches the UI, the AggregationProgressIndicator must display a specific stage failure message with a retry option that re-runs only the failed stage rather than the full pipeline.
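The fail-fast orchestration could look roughly like this. The Either type is hand-rolled here for self-containment, and the stage names and payload types are hypothetical:

```dart
// Minimal hand-rolled Either; a package such as fpdart could be used instead.
sealed class Either<L, R> { const Either(); }
class Left<L, R> extends Either<L, R> { final L value; const Left(this.value); }
class Right<L, R> extends Either<L, R> { final R value; const Right(this.value); }

Future<Either<AggregationError, BufdirPayload>> runPipeline() async {
  // Each stage returns Either; the orchestrator stops at the first Left,
  // so a partial result can never be assembled or persisted.
  final geo = await geographicStage();
  if (geo case Left(:final value)) return Left(value);

  final dedup = await deduplicationStage();
  if (dedup case Left(:final value)) return Left(value);

  // ...remaining stages follow the same pattern...

  // Only reached when every stage returned Right.
  return Right(assemblePayload(
    (geo as Right).value, (dedup as Right).value));
}
```

Recording which stage produced the Left also gives the AggregationProgressIndicator the specific failure message and the retry target named in the contingency.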
Internal activity types that have no corresponding Bufdir category in the mapping configuration will cause the aggregation to silently exclude those activities from the final counts. Coordinators may not notice the omission until Bufdir queries why submission totals are lower than expected.
Mitigation & Contingency
Mitigation: BufdirAggregationService must produce an unmapped_activity_types list as part of its output. If any internal activity types are unmapped, display a blocking warning in the AggregationSummaryWidget listing the unmapped types before allowing the coordinator to proceed to export.
Contingency: Allow coordinators to temporarily assign unmapped activity types to a Bufdir 'other' catch-all category as an emergency workaround, with an audit flag indicating manual override was applied for that submission.
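The unmapped_activity_types list amounts to an anti-join between observed activity types and the mapping configuration. A sketch, with assumed table names:

```sql
-- Internal activity types present in the data but absent from the
-- Bufdir category mapping; a nonzero result blocks export.
select distinct a.activity_type
from activities a
left join bufdir_category_mapping m
  on m.internal_type = a.activity_type
where m.internal_type is null;
```

Seeding one deliberately unmapped activity type in the test database gives the integration suite a direct assertion for the blocking-warning path.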