Priority: critical · Complexity: high · Area: backend · Status: pending · Assignee: backend specialist · Tier: 1

Acceptance Criteria

getRegionBreakdown(period, orgId) returns a list of RegionBreakdownResult objects, each containing regionId, regionName, totalActivities, uniqueParticipants, and activityCategoryBreakdown
getChapterBreakdown(period, regionId) returns chapter-level aggregates scoped to the given region, including chapterId, chapterName, totalActivities, and uniqueParticipants
All activity counts are aggregated by traversing the full org hierarchy (org → region → chapter) and summing child unit counts upward
Org isolation is enforced: a query with orgId='nhf' never returns data belonging to orgId='blindeforbundet' or 'hlf', even if the same Supabase schema is shared
An org with 12 landsforeninger (national associations), 9 regions, and 1,400 chapters (NHF scale) returns correct totals without double-counting chapters across regions
Passing an invalid or unauthorized orgId returns an empty result set, not an error or cross-org data
Both methods accept a ReportingPeriod object and correctly filter activities to the period's start and end timestamps
Results include units with zero activity (chapters/regions that had no activities in the period) to support complete Bufdir submissions
Method execution completes in under 2 seconds for the full NHF 1,400-chapter hierarchy on a warm Supabase connection

Technical Requirements

Frameworks
Flutter
Dart
Supabase Dart SDK
Riverpod
APIs
Supabase PostgREST API
Supabase RPC (stored procedures for hierarchy traversal)
Data models
OrganizationHierarchy
Region
Chapter
Activity
ReportingPeriod
RegionBreakdownResult
ChapterBreakdownResult
Performance requirements
Full NHF hierarchy breakdown (1,400 chapters, 9 regions) must complete in under 2 seconds
Use Supabase RPC with recursive CTE for hierarchy traversal to avoid N+1 query patterns
Cache region/chapter structural metadata separately from activity counts to avoid re-fetching static hierarchy on repeat calls
Apply Supabase row-level security (RLS) policies as the enforcement layer for org isolation rather than application-level filtering alone
Security requirements
Org isolation MUST be enforced at the database layer via Supabase RLS policies, not only in Dart code
All queries must include orgId as a mandatory filter parameter — never allow open queries across orgs
Service must validate that the authenticated user's organization matches the requested orgId before executing any query
Log all cross-org access attempts for audit purposes
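One way the database-layer requirement could be met is a per-table RLS policy keyed on a JWT claim. The sketch below is illustrative only: the activities table name and the org_id claim are assumptions, and how org_id reaches the token depends on the project's auth setup (Supabase's auth.jwt() helper is real, but the claim layout is not specified here).

```sql
-- Sketch: enforce org isolation in the database, independent of Dart code.
-- 'activities' and the 'org_id' JWT claim are assumed names.
alter table activities enable row level security;

create policy activities_org_isolation on activities
  for select
  using (org_id = (auth.jwt() ->> 'org_id'));
```

A policy of this shape filters out rows from other orgs rather than raising an error, which matches the acceptance criterion that an invalid or unauthorized orgId returns an empty result set.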

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Use a Supabase PostgreSQL recursive CTE (WITH RECURSIVE) via an RPC function to traverse the org hierarchy efficiently — avoid fetching the full tree into Dart memory. The RPC function should accept org_id and period boundaries and return pre-aggregated rows. In Dart, GeographicDistributionService should be a Riverpod-injectable service class. Define clear result types (RegionBreakdownResult, ChapterBreakdownResult) as immutable Dart data classes (use freezed or manual copyWith).
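The recursive-CTE RPC described above might be sketched as follows. This is a hedged sketch: the table names (org_units, activities) and their columns are assumptions; only the parameter shape (org id plus period boundaries) and the pre-aggregated return shape come from the notes.

```sql
-- Hypothetical sketch of the hierarchy-traversal RPC.
create or replace function region_breakdown(
  p_org_id text, p_start timestamptz, p_end timestamptz
) returns table (
  region_id text, region_name text,
  total_activities bigint, unique_participants bigint
) language sql stable as $$
  with recursive descendants as (
    -- seed: each region is its own root
    select id, id as root_id
    from org_units
    where org_id = p_org_id and unit_type = 'region'
    union all
    -- walk down through intermediate units to chapters
    select c.id, d.root_id
    from org_units c
    join descendants d on c.parent_id = d.id
  )
  select r.id, r.name, count(a.id), count(distinct a.participant_id)
  from org_units r
  join descendants d on d.root_id = r.id
  left join activities a
    on a.unit_id = d.id
   and a.occurred_at >= p_start
   and a.occurred_at < p_end
  where r.org_id = p_org_id and r.unit_type = 'region'
  group by r.id, r.name;
$$;
```

Because activities are attached with LEFT JOIN and there is no HAVING clause, a region whose subtree had no activity in the period still appears with zero counts.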

Enforce that the Supabase client used is always initialized with the authenticated user's JWT so RLS policies apply automatically. For zero-activity units: LEFT JOIN the hierarchy tree against activity aggregates so all units appear in results. Do not use a HAVING clause that would suppress empty units.
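Pulled together, the service shape described above might look like the sketch below. It is a sketch only: the RPC name, the JSON keys, and the reduced ReportingPeriod are assumptions, and only a subset of the spec's result fields is shown.

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

// ReportingPeriod reduced to its period boundaries for this sketch.
class ReportingPeriod {
  final DateTime start;
  final DateTime end;
  const ReportingPeriod(this.start, this.end);
}

// Immutable result type (freezed would generate the same shape).
class RegionBreakdownResult {
  final String regionId;
  final String regionName;
  final int totalActivities;
  final int uniqueParticipants;
  const RegionBreakdownResult({
    required this.regionId,
    required this.regionName,
    required this.totalActivities,
    required this.uniqueParticipants,
  });

  factory RegionBreakdownResult.fromJson(Map<String, dynamic> json) =>
      RegionBreakdownResult(
        regionId: json['region_id'] as String,
        regionName: json['region_name'] as String,
        totalActivities: (json['total_activities'] as num).toInt(),
        uniqueParticipants: (json['unique_participants'] as num).toInt(),
      );
}

class GeographicDistributionService {
  final SupabaseClient _client;
  const GeographicDistributionService(this._client);

  Future<List<RegionBreakdownResult>> getRegionBreakdown(
      ReportingPeriod period, String orgId) async {
    // The RPC does the recursive traversal server-side; because the client
    // carries the user's JWT, RLS enforces org isolation on top of this filter.
    final rows = await _client.rpc('region_breakdown', params: {
      'p_org_id': orgId,
      'p_start': period.start.toIso8601String(),
      'p_end': period.end.toIso8601String(),
    });
    return (rows as List)
        .map((r) => RegionBreakdownResult.fromJson(r as Map<String, dynamic>))
        .toList();
  }
}

// Riverpod wiring: consumers depend on the provider, not the concrete class.
final geographicDistributionServiceProvider = Provider(
  (ref) => GeographicDistributionService(Supabase.instance.client),
);
```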

Testing Requirements

Unit tests: mock the Supabase client to verify that getRegionBreakdown and getChapterBreakdown call the correct RPC endpoints with the correct parameters, including orgId and period filters. Test that zero-activity units are included in results, and that an orgId mismatch between the auth context and the requested orgId returns an empty result. Integration tests (covered by task-012): use a seeded Supabase test database with the NHF hierarchy to validate correct rollup math.

Dart test framework (flutter_test) for all unit-level tests. Minimum 90% branch coverage on both methods.
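One way to keep these unit tests free of a live Supabase instance is to inject the RPC call as a function. The stand-in service below is a hypothetical reduction of GeographicDistributionService, used only to illustrate the parameter and zero-activity assertions.

```dart
import 'package:flutter_test/flutter_test.dart';

// Injected RPC call: the only seam the unit tests need to fake.
typedef RpcCall = Future<List<Map<String, dynamic>>> Function(
    String fn, Map<String, dynamic> params);

// Hypothetical reduction of the real service for test illustration.
class _ServiceUnderTest {
  final RpcCall rpc;
  _ServiceUnderTest(this.rpc);

  Future<List<Map<String, dynamic>>> getRegionBreakdown(
          String orgId, DateTime start, DateTime end) =>
      rpc('region_breakdown', {
        'p_org_id': orgId,
        'p_start': start.toIso8601String(),
        'p_end': end.toIso8601String(),
      });
}

void main() {
  test('RPC receives orgId and period; zero-activity units are kept', () async {
    late String calledFn;
    late Map<String, dynamic> calledParams;
    final service = _ServiceUnderTest((fn, params) async {
      calledFn = fn;
      calledParams = params;
      // One region with no activities in the period must still be returned.
      return [
        {'region_id': 'r1', 'total_activities': 0},
      ];
    });

    final rows = await service.getRegionBreakdown(
        'nhf', DateTime.utc(2024), DateTime.utc(2025));

    expect(calledFn, 'region_breakdown');
    expect(calledParams['p_org_id'], 'nhf');
    expect(rows, hasLength(1));
    expect(rows.single['total_activities'], 0);
  });
}
```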

Component
Geographic Distribution Service
Type: service · Risk: high
Epic Risks (4)
Impact: high · Probability: high · Category: integration

NHF members can belong to up to 5 local chapters. When a participant has activities registered under different chapter IDs within the same reporting period, deduplication requires a reliable cross-chapter identity key. If national IDs are absent for some members (a known data quality issue in NHF's systems), the deduplication service may fail to identify duplicates, resulting in inflated counts submitted to Bufdir.

Mitigation & Contingency

Mitigation: Implement a multi-attribute identity matching strategy: match primarily on national_id, falling back to a (full_name + birth_year + municipality) composite key. Expose a low-confidence match list in DeduplicationAnomalyReport that coordinators can review and manually resolve before submission.
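The fallback key could be sketched as follows. The class and field names are hypothetical; only the matching attributes (national_id, full_name, birth_year, municipality) come from the mitigation above.

```dart
// Hypothetical identity record for cross-chapter deduplication.
class MemberIdentity {
  final String? nationalId; // may be missing (known NHF data-quality issue)
  final String fullName;
  final int birthYear;
  final String municipality;

  const MemberIdentity(
      {this.nationalId,
      required this.fullName,
      required this.birthYear,
      required this.municipality});

  /// Primary key: national_id. Fallback: a normalized
  /// full_name + birth_year + municipality composite.
  String get dedupKey {
    final id = nationalId;
    if (id != null && id.isNotEmpty) return 'nid:$id';
    final name = fullName.trim().toLowerCase();
    final muni = municipality.trim().toLowerCase();
    return 'cmp:$name|$birthYear|$muni';
  }

  /// Composite-key matches are lower confidence and should be surfaced in
  /// the DeduplicationAnomalyReport for coordinator review.
  bool get isLowConfidence => nationalId == null || nationalId!.isEmpty;
}

// Counting distinct keys collapses the same person registered under
// different chapter IDs within one reporting period.
int countUniqueParticipants(Iterable<MemberIdentity> members) =>
    members.map((m) => m.dedupKey).toSet().length;
```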

Contingency: If identity data quality is too poor for reliable automated deduplication for specific organizations, add an organization-level config flag that disables cross-chapter deduplication for that org and requires coordinators to manually review the anomaly report before submitting.

Impact: high · Probability: medium · Category: integration

The geographic distribution algorithm must resolve NHF's 1,400 local chapter hierarchy to regional aggregates. If the organizational unit hierarchy in the database is incomplete (missing parent-child relationships for some chapters), the geographic service will silently drop activities from unmapped chapters, producing an understated geographic breakdown.

Mitigation & Contingency

Mitigation: Add a hierarchy completeness validation step in GeographicDistributionService that counts activities without a resolvable region assignment and surfaces them as an 'unmapped_activities' field in the distribution result. Block export if unmapped_activities > 0.

Contingency: Provide a 'national' fallback bucket for activities from chapters with no region assignment, clearly labelled in the preview screen so coordinators are alerted to fix the org hierarchy data before re-running aggregation.

Impact: high · Probability: low · Category: technical

BufdirAggregationService orchestrates four dependent services. If one service (e.g., GeographicDistributionService) throws mid-pipeline, the partially assembled metrics payload may be silently cached or returned as if complete, resulting in a Bufdir submission missing the geographic breakdown section.

Mitigation & Contingency

Mitigation: Implement the orchestrator as a transactional pipeline using Dart's Result type pattern: each stage returns Either<AggregationError, PartialResult>, and the orchestrator only proceeds if all stages succeed. The final payload is only assembled and persisted when all stages return success.
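A minimal sketch of the fail-fast orchestration, using a hand-rolled Either-style type in place of a package type; the stage names are illustrative, not the real service API.

```dart
// Either-style stage result: Ok carries a value, Err carries the failed stage.
sealed class StageResult<T> {
  const StageResult();
}

class Ok<T> extends StageResult<T> {
  final T value;
  const Ok(this.value);
}

class Err<T> extends StageResult<T> {
  final String stage;
  final String message;
  const Err(this.stage, this.message);
}

/// Runs stages in order. The payload is assembled only if every stage
/// succeeds; a failure short-circuits, so a partial payload can never be
/// cached or returned as if complete.
StageResult<Map<String, Object>> runPipeline(
    Map<String, Object Function()> stages) {
  final payload = <String, Object>{};
  for (final entry in stages.entries) {
    try {
      payload[entry.key] = entry.value();
    } catch (e) {
      return Err(entry.key, e.toString());
    }
  }
  return Ok(payload);
}
```

An Err carries the name of the stage that failed, which is exactly what a progress indicator needs to show a stage-specific message and offer a single-stage retry.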

Contingency: If a partial failure state reaches the UI, the AggregationProgressIndicator must display a specific stage failure message with a retry option that re-runs only the failed stage rather than the full pipeline.

Impact: medium · Probability: medium · Category: scope

Internal activity types that have no corresponding Bufdir category in the mapping configuration will cause the aggregation to silently exclude those activities from the final counts. Coordinators may not notice the omission until Bufdir queries why submission totals are lower than expected.

Mitigation & Contingency

Mitigation: BufdirAggregationService must produce an unmapped_activity_types list as part of its output. If any internal activity types are unmapped, display a blocking warning in the AggregationSummaryWidget listing the unmapped types before allowing the coordinator to proceed to export.

Contingency: Allow coordinators to temporarily assign unmapped activity types to a Bufdir 'other' catch-all category as an emergency workaround, with an audit flag indicating manual override was applied for that submission.