Priority: critical | Complexity: high | Domain: backend | Status: pending | Assignee: backend specialist | Tier 3

Acceptance Criteria

A participant active in 3 chapters within Region Oslo is counted exactly once in Region Oslo's uniqueParticipants total
A participant active in chapters across Region Oslo and Region Bergen is counted once in each region (cross-region participants are NOT deduplicated across regions — only within a region)
Deduplication uses the same identity resolution logic as ParticipantDeduplicationService (shared identity key: resolved participant UUID)
After deduplication, getRegionBreakdown reports a uniqueParticipants total that is less than or equal to the naive sum of per-chapter counts, never higher
getChapterBreakdown is not affected by cross-chapter deduplication (chapter counts remain raw per-chapter; deduplication only applies at region rollup level)
Org isolation is maintained: deduplication only considers participants within the same orgId
The deduplication pass does not alter activity counts — only uniqueParticipants counts are deduplicated
A participant with membership in up to 5 chapters (NHF max per member) is handled correctly without error
Deduplication completes within the overall 2-second SLA for full hierarchy traversal
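The per-region semantics above can be sketched as a single aggregate over resolved identities. This is a sketch under assumed table and column names (activities, chapters), not the confirmed schema:

```sql
-- Assumed schema: activities(org_id, chapter_id, resolved_participant_id),
-- chapters(id, region_id). A cross-region participant falls into both
-- region groups, so it is counted once per region, as required.
SELECT c.region_id,
       COUNT(DISTINCT a.resolved_participant_id) AS unique_participants,
       COUNT(*)                                  AS activity_count   -- raw; never deduplicated
FROM activities a
JOIN chapters c ON c.id = a.chapter_id
WHERE a.org_id = :org_id            -- org isolation
GROUP BY c.region_id;
```

COUNT(DISTINCT …) within a region group collapses multi-chapter membership automatically, so a participant active in 3 chapters (or the NHF maximum of 5) contributes exactly 1 to unique_participants while activity_count stays untouched.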

Technical Requirements

Frameworks
Flutter
Dart
Supabase Dart SDK
Riverpod
APIs
Supabase PostgREST API
Supabase RPC (deduplication aggregation)
ParticipantDeduplicationService internal API
Data Models
Participant
ParticipantIdentity
Region
Chapter
Activity
RegionBreakdownResult
Performance Requirements
Deduplication must be performed at the database level, using SQL COUNT(DISTINCT …) over the resolved participant UUID, rather than by loading all participant records into Dart memory
The deduplication query must be composable with the hierarchy CTE from task-010 to avoid a second full scan of the activity table
Target: full NHF deduplication pass completes within 2 seconds total combined with hierarchy traversal
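Assuming the task-010 hierarchy CTE exposes a chapter_id-to-region_id mapping, both the per-chapter and the deduplicated per-region breakdowns can come from one scan of the activity table via GROUPING SETS. The CTE shape and all names below are illustrative:

```sql
WITH hierarchy AS (
  -- stand-in for the recursive CTE from task-010; assumed to yield
  -- one (chapter_id, region_id) row per chapter
  SELECT id AS chapter_id, region_id FROM org_units WHERE unit_kind = 'chapter'
)
SELECT h.region_id,
       a.chapter_id,                                    -- NULL on region-rollup rows
       COUNT(DISTINCT a.resolved_participant_id) AS unique_participants
FROM activities a
JOIN hierarchy h ON h.chapter_id = a.chapter_id
WHERE a.org_id = :org_id
GROUP BY GROUPING SETS ((h.region_id, a.chapter_id),   -- per-chapter (raw)
                        (h.region_id));                 -- per-region (deduplicated)
```

Because within a single chapter the distinct count equals the raw per-chapter count, the chapter-level rows satisfy the getChapterBreakdown criterion while the region-level rows carry the cross-chapter deduplication.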
Security Requirements
Deduplication queries must include orgId scoping to prevent cross-org identity leakage
Resolved participant UUIDs used for deduplication must not be exposed in API responses — only aggregated counts are returned
ParticipantDeduplicationService must be called with the same auth context to maintain RLS enforcement

Execution Context

Execution Tier
Tier 3 (413 tasks). Can start after Tier 2 completes.

Implementation Notes

The cleanest implementation is a SQL-level DISTINCT on the resolved participant identity UUID grouped by region_id. Extend the RPC function from task-010 to include a deduplicated_participants subquery using COUNT(DISTINCT resolved_participant_id). This avoids loading participant lists into Dart memory. Coordinate with ParticipantDeduplicationService to ensure both use the same resolved_participant_id column/view — define a shared Supabase view or function that resolves participant identity once and is referenced by both services.
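A minimal sketch of such a shared view, assuming an identity_key column produced by the same rules ParticipantDeduplicationService applies; the resolution rule shown (lowest id per key wins) is an assumption, not the service's actual logic:

```sql
-- Canonical identity resolution in one place; both the geographic RPC and
-- ParticipantDeduplicationService would read resolved_participant_id here.
CREATE OR REPLACE VIEW resolved_participants AS
SELECT id     AS raw_participant_id,
       org_id,
       -- assumed rule: the lowest id sharing an identity key within an org
       -- becomes the canonical resolved UUID
       MIN(id) OVER (PARTITION BY org_id, identity_key) AS resolved_participant_id
FROM participants;
```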

In Dart, GeographicDistributionService should call ParticipantDeduplicationService.getResolvedIdentityKey(rawParticipantId) only if the database-level deduplication is insufficient (prefer DB-level). NHF explicitly notes members can belong to up to 5 chapters — ensure the deduplication handles multi-membership arrays without Dart-level loops that could degrade at scale.

Testing Requirements

Unit tests: create mock participant sets where the same participant appears in 2 and 3 chapters within the same region; assert uniqueParticipants == 1 in each case. Test cross-region participant appears in both region counts. Test that deduplication does not affect activity counts. Test participant in 5 chapters (NHF maximum).

Integration tests (covered by task-012): seeded database with deliberate cross-chapter membership scenarios. Use flutter_test. Minimum 90% branch coverage on the deduplication logic path.

Component
Geographic Distribution Service (service, high)

Epic Risks (4)

Risk: integration (high impact, high probability)

NHF members can belong to up to 5 local chapters. When a participant has activities registered under different chapter IDs within the same reporting period, deduplication requires a reliable cross-chapter identity key. If national IDs are absent for some members (a known data quality issue in NHF's systems), the deduplication service may fail to identify duplicates, resulting in inflated counts submitted to Bufdir.

Mitigation & Contingency

Mitigation: Implement a multi-attribute identity matching strategy: primary match on national_id, fallback to (full_name + birth_year + municipality) composite key. Expose a low-confidence match list in DeduplicationAnomalyReport that coordinators can review and manually resolve before submission.
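The fallback strategy could be sketched as a derived identity key; column names and the confidence flag are illustrative assumptions:

```sql
-- Primary match on national_id; composite fallback when it is absent.
SELECT id,
       COALESCE(
         'nid:'  || national_id,                          -- primary key: national ID
         'comp:' || lower(full_name) || '|' || birth_year
                 || '|' || municipality_code              -- fallback composite key
       ) AS identity_key,
       (national_id IS NULL) AS low_confidence            -- feeds DeduplicationAnomalyReport
FROM participants
WHERE org_id = :org_id;
```

Prefixing the two key families ('nid:' vs 'comp:') keeps a national-ID match from ever colliding with a composite match.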

Contingency: If identity data quality is too poor for reliable automated deduplication for specific organisations, add an organisation-level config flag that disables cross-chapter deduplication for that org and requires coordinators to manually review the anomaly report before submitting.

Risk: integration (high impact, medium probability)

The geographic distribution algorithm must resolve NHF's hierarchy of 1,400 local chapters to regional aggregates. If the organizational unit hierarchy in the database is incomplete (missing parent-child relationships for some chapters), the geographic service will silently drop activities from unmapped chapters, producing an understated geographic breakdown.

Mitigation & Contingency

Mitigation: Add a hierarchy completeness validation step in GeographicDistributionService that counts activities without a resolvable region assignment and surfaces them as an 'unmapped_activities' field in the distribution result. Block export if unmapped_activities > 0.
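The completeness check might look like the following, assuming chapters carry a nullable region_id link (all names are illustrative):

```sql
-- Activities whose chapter cannot be resolved to a region; surfaced as
-- unmapped_activities and used to block export when > 0.
SELECT COUNT(*) AS unmapped_activities
FROM activities a
LEFT JOIN chapters c ON c.id = a.chapter_id
WHERE a.org_id = :org_id
  AND (c.id IS NULL OR c.region_id IS NULL);   -- missing chapter row or missing parent link
```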

Contingency: Provide a 'national' fallback bucket for activities from chapters with no region assignment, clearly labelled in the preview screen so coordinators are alerted to fix the org hierarchy data before re-running aggregation.

Risk: technical (high impact, low probability)

BufdirAggregationService orchestrates four dependent services. If one service (e.g., GeographicDistributionService) throws mid-pipeline, the partially assembled metrics payload may be silently cached or returned as if complete, resulting in a Bufdir submission missing the geographic breakdown section.

Mitigation & Contingency

Mitigation: Implement the orchestrator as a transactional pipeline using an Either-style result pattern in Dart (e.g. Either<AggregationError, PartialResult> from a functional package such as fpdart, since Dart has no built-in Result type): each stage returns Either, and the orchestrator only proceeds if all stages succeed. The final payload is only assembled and persisted when all stages return success.

Contingency: If a partial failure state reaches the UI, the AggregationProgressIndicator must display a specific stage failure message with a retry option that re-runs only the failed stage rather than the full pipeline.

Risk: scope (medium impact, medium probability)

Internal activity types that have no corresponding Bufdir category in the mapping configuration will cause the aggregation to silently exclude those activities from the final counts. Coordinators may not notice the omission until Bufdir queries why submission totals are lower than expected.

Mitigation & Contingency

Mitigation: BufdirAggregationService must produce an unmapped_activity_types list as part of its output. If any internal activity types are unmapped, display a blocking warning in the AggregationSummaryWidget listing the unmapped types before allowing the coordinator to proceed to export.
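One way to derive that list, assuming the mapping configuration lives in a bufdir_category_mapping table (names are illustrative, not the actual config schema):

```sql
-- Internal activity types with no Bufdir category; a non-empty result
-- triggers the blocking warning in AggregationSummaryWidget.
SELECT DISTINCT a.activity_type AS unmapped_activity_type
FROM activities a
LEFT JOIN bufdir_category_mapping m
       ON m.internal_activity_type = a.activity_type
WHERE a.org_id = :org_id
  AND m.bufdir_category IS NULL;
```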

Contingency: Allow coordinators to temporarily assign unmapped activity types to a Bufdir 'other' catch-all category as an emergency workaround, with an audit flag indicating manual override was applied for that submission.