Implement BufdirAggregationService pipeline orchestration
epic-bufdir-data-aggregation-core-logic-task-013 — Implement the BufdirAggregationService (Tier 4) orchestrator that coordinates the full aggregation pipeline: invoke ReportingPeriodService to resolve the period, call ParticipantDeduplicationService for clean counts, call GeographicDistributionService for regional breakdown, and apply the activity category mapping configuration.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 5 - 253 tasks
Can start after Tier 4 completes
Implementation Notes
Model BufdirAggregationService as a Riverpod provider (StateNotifierProvider or a plain Provider, depending on whether the caller needs reactive state). The primary method, runAggregationPipeline, should be async and structured as a try/catch pipeline with step tracking: record a stepStartTime at each step, wrap each sub-service call in its own try/catch, and accumulate results. For concurrent execution, use Future.wait([deduplicationFuture, geographicFuture]), since both depend only on the resolved period and not on each other. Use Dart's sealed classes or a Result type to model per-step success and failure, so that errors carry the name of the failing step.
Define BufdirAggregationResult as a freezed data class for immutability and equality. The category mapping application (step 4) should call an injected ActivityCategoryMappingService — keep the wiring point here even though the implementation lands in task-014.
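The notes above can be sketched as follows. This is a minimal outline only: the sub-service function signatures stand in for ReportingPeriodService, ParticipantDeduplicationService and GeographicDistributionService (the real code injects providers), and the type and step names are assumptions.

```dart
import 'dart:async';

// Sealed per-step result, so a failure carries the failing step's name.
sealed class PipelineResult<T> {
  const PipelineResult();
}

class PipelineSuccess<T> extends PipelineResult<T> {
  final T value;
  const PipelineSuccess(this.value);
}

class PipelineStepError<T> extends PipelineResult<T> {
  final String stepName;
  final Object cause;
  const PipelineStepError(this.stepName, this.cause);
}

class BufdirAggregationService {
  // Assumed stand-ins for the injected sub-services.
  final Future<String> Function() resolvePeriod;
  final Future<int> Function(String period) deduplicateParticipants;
  final Future<Map<String, int>> Function(String period) distributeByRegion;

  const BufdirAggregationService({
    required this.resolvePeriod,
    required this.deduplicateParticipants,
    required this.distributeByRegion,
  });

  Future<PipelineResult<(int, Map<String, int>)>> runAggregationPipeline() async {
    final String period;
    try {
      period = await resolvePeriod();
    } catch (e) {
      return PipelineStepError('resolve-period', e);
    }
    try {
      // Both calls depend only on the resolved period, so start them together.
      final results = await Future.wait<Object>(
          [deduplicateParticipants(period), distributeByRegion(period)]);
      return PipelineSuccess(
          (results[0] as int, results[1] as Map<String, int>));
    } catch (e) {
      return PipelineStepError('aggregate', e);
    }
  }
}
```

The real implementation would add the stepStartTime bookkeeping and the category-mapping step (task-014) after the concurrent stage.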
Testing Requirements
Unit tests: mock all four sub-services and assert the pipeline calls them in the correct order with the correct parameters. Test that Future.wait is used for the concurrent steps (verify both deduplication and geographic distribution are initiated before either completes). Test error propagation: mock ReportingPeriodService to throw, and assert the BufdirAggregationResult contains a typed PipelineStepError with stepName='resolve-period'. Test idempotency: call the pipeline twice with the same arguments and assert both calls return equal results.
Use flutter_test with mockito or manual mock classes. Integration tests: run full pipeline against test Supabase instance seeded from task-012.
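The concurrency assertion can be done with manual mocks and Completers: each mock refuses to finish until the other has started, so the test only completes if both futures were initiated before either resolved (a sequential pipeline would deadlock here). A sketch, with all names assumed:

```dart
import 'dart:async';

Future<List<String>> runConcurrencyProbe() async {
  final calls = <String>[];
  final dedupStarted = Completer<void>();
  final geoStarted = Completer<void>();

  // Mock deduplication: blocks until the geographic mock has started.
  Future<int> mockDedup(String period) async {
    calls.add('dedup-start');
    dedupStarted.complete();
    await geoStarted.future;
    calls.add('dedup-done');
    return 0;
  }

  // Mock geographic distribution: blocks until deduplication has started.
  Future<Map<String, int>> mockGeo(String period) async {
    calls.add('geo-start');
    geoStarted.complete();
    await dedupStarted.future;
    calls.add('geo-done');
    return {};
  }

  // Only completes if both futures are started before either finishes,
  // i.e. the orchestrator really used Future.wait and not sequential awaits.
  await Future.wait<Object>([mockDedup('2024-H1'), mockGeo('2024-H1')]);
  return calls;
}
```

In the real test the two mocks would be injected into BufdirAggregationService and the probe would call runAggregationPipeline instead of Future.wait directly.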
NHF members can belong to up to 5 local chapters. When a participant has activities registered under different chapter IDs within the same reporting period, deduplication requires a reliable cross-chapter identity key. If national IDs are absent for some members (a known data quality issue in NHF's systems), the deduplication service may fail to identify duplicates, resulting in inflated counts submitted to Bufdir.
Mitigation & Contingency
Mitigation: Implement a multi-attribute identity matching strategy: match primarily on national_id, and fall back to a (full_name + birth_year + municipality) composite key. Expose a low-confidence match list in DeduplicationAnomalyReport that coordinators can review and manually resolve before submission.
Contingency: If identity data quality is too poor for reliable automated deduplication for specific organisations, add an organisation-level config flag that disables cross-chapter deduplication for that org and requires coordinators to manually review the anomaly report before submitting.
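The fallback matching strategy might look like the following sketch. Field names and the Participant shape are assumptions; matches made via the composite key are flagged low-confidence for the anomaly report.

```dart
class Participant {
  final String? nationalId; // known to be missing for some NHF members
  final String fullName;
  final int birthYear;
  final String municipality;
  const Participant(
      {this.nationalId,
      required this.fullName,
      required this.birthYear,
      required this.municipality});
}

/// High-confidence key when national_id is present; otherwise a
/// normalised composite fallback. Returns (key, isLowConfidence).
(String, bool) identityKey(Participant p) {
  final id = p.nationalId;
  if (id != null && id.isNotEmpty) return ('nid:$id', false);
  final composite = '${p.fullName.trim().toLowerCase()}'
      '|${p.birthYear}'
      '|${p.municipality.trim().toLowerCase()}';
  return ('fallback:$composite', true);
}

/// Deduplicate across chapters; duplicates detected only via the
/// composite key are collected for the DeduplicationAnomalyReport.
({int uniqueCount, List<String> lowConfidenceKeys}) deduplicate(
    List<Participant> participants) {
  final seen = <String>{};
  final lowConfidence = <String>{};
  for (final p in participants) {
    final (key, isLow) = identityKey(p);
    if (!seen.add(key) && isLow) lowConfidence.add(key);
  }
  return (uniqueCount: seen.length, lowConfidenceKeys: lowConfidence.toList());
}
```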
The geographic distribution algorithm must resolve NHF's hierarchy of 1,400 local chapters to regional aggregates. If the organisational unit hierarchy in the database is incomplete (missing parent-child relationships for some chapters), the geographic service will silently drop activities from unmapped chapters, producing an understated geographic breakdown.
Mitigation & Contingency
Mitigation: Add a hierarchy completeness validation step in GeographicDistributionService that counts activities without a resolvable region assignment and surfaces them as an 'unmapped_activities' field in the distribution result. Block export if unmapped_activities > 0.
Contingency: Provide a 'national' fallback bucket for activities from chapters with no region assignment, clearly labelled in the preview screen so coordinators are alerted to fix the org hierarchy data before re-running aggregation.
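Combining the mitigation and contingency, the resolution step could look like this sketch. The data shapes are assumptions: each chapter is walked up its parent links until a region is found, and anything unresolvable (missing link or cycle) lands in the labelled 'national' bucket while also incrementing the unmapped count that blocks export.

```dart
({Map<String, int> byRegion, int unmappedActivities}) distributeByRegion(
  Map<String, int> activitiesByChapter, // chapterId -> activity count
  Map<String, String> parentOf,         // childUnitId -> parentUnitId
  Set<String> regionIds,
) {
  final byRegion = <String, int>{};
  var unmapped = 0;
  for (final entry in activitiesByChapter.entries) {
    String? current = entry.key;
    final visited = <String>{};
    // Walk parent links until a region is reached, a link is missing,
    // or a cycle is detected (visited.add returns false).
    while (current != null &&
        !regionIds.contains(current) &&
        visited.add(current)) {
      current = parentOf[current];
    }
    if (current != null && regionIds.contains(current)) {
      byRegion[current] = (byRegion[current] ?? 0) + entry.value;
    } else {
      // Unresolvable chapter: count it and surface it in the fallback bucket
      // instead of silently dropping it.
      unmapped += entry.value;
      byRegion['national'] = (byRegion['national'] ?? 0) + entry.value;
    }
  }
  return (byRegion: byRegion, unmappedActivities: unmapped);
}
```

Export is blocked (or the preview flagged) whenever unmappedActivities > 0.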
BufdirAggregationService orchestrates four dependent services. If one service (e.g., GeographicDistributionService) throws mid-pipeline, the partially assembled metrics payload may be silently cached or returned as if complete, resulting in a Bufdir submission missing the geographic breakdown section.
Mitigation & Contingency
Mitigation: Implement the orchestrator as a transactional pipeline using an Either/Result pattern (Dart has no built-in Result type, so define one, e.g. with sealed classes): each stage returns Either<AggregationError, PartialResult>, and the orchestrator proceeds only if all stages succeed. The final payload is assembled and persisted only when every stage returns success.
Contingency: If a partial failure state reaches the UI, the AggregationProgressIndicator must display a specific stage failure message with a retry option that re-runs only the failed stage rather than the full pipeline.
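A minimal Either suitable for the stage-gating described above might look like this (names assumed; a package such as fpdart could be used instead). A Left short-circuits: later stages never run, so a partial payload cannot be assembled.

```dart
sealed class Either<L, R> {
  const Either();

  /// Run [next] only when this is a Right; a Left is passed through
  /// unchanged, skipping every subsequent stage.
  Future<Either<L, T>> andThen<T>(
          Future<Either<L, T>> Function(R value) next) async =>
      switch (this) {
        Left(value: final l) => Left<L, T>(l),
        Right(value: final r) => await next(r),
      };
}

class Left<L, R> extends Either<L, R> {
  final L value;
  const Left(this.value);
}

class Right<L, R> extends Either<L, R> {
  final R value;
  const Right(this.value);
}
```

In the orchestrator, L would be AggregationError and each stage's R its PartialResult; keeping the last successful stage index also enables the contingency's retry-only-the-failed-stage behaviour.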
Internal activity types that have no corresponding Bufdir category in the mapping configuration will cause the aggregation to silently exclude those activities from the final counts. Coordinators may not notice the omission until Bufdir queries why submission totals are lower than expected.
Mitigation & Contingency
Mitigation: BufdirAggregationService must produce an unmapped_activity_types list as part of its output. If any internal activity types are unmapped, display a blocking warning in the AggregationSummaryWidget listing the unmapped types before allowing the coordinator to proceed to export.
Contingency: Allow coordinators to temporarily assign unmapped activity types to a Bufdir 'other' catch-all category as an emergency workaround, with an audit flag indicating manual override was applied for that submission.
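The mapping step, the unmapped_activity_types output, and the 'other' catch-all override can be combined in one pure function. A sketch, with the mapping shape and names assumed:

```dart
({
  Map<String, int> byCategory,
  List<String> unmappedTypes,
  bool manualOverrideApplied
}) applyCategoryMapping(
  Map<String, int> countsByInternalType, // internal activity type -> count
  Map<String, String> internalToBufdir,  // mapping configuration
  {bool routeUnmappedToOther = false}    // emergency workaround flag
) {
  final byCategory = <String, int>{};
  final unmapped = <String>[];
  for (final e in countsByInternalType.entries) {
    final category = internalToBufdir[e.key];
    if (category != null) {
      byCategory[category] = (byCategory[category] ?? 0) + e.value;
    } else {
      // Always surface the unmapped type so the blocking warning can list it.
      unmapped.add(e.key);
      if (routeUnmappedToOther) {
        byCategory['other'] = (byCategory['other'] ?? 0) + e.value;
      }
    }
  }
  return (
    byCategory: byCategory,
    unmappedTypes: unmapped,
    // Audit flag: a manual override actually took effect for this submission.
    manualOverrideApplied: routeUnmappedToOther && unmapped.isNotEmpty,
  );
}
```

AggregationSummaryWidget would block on a non-empty unmappedTypes unless the override flag was deliberately set.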