Implement Hierarchy Aggregation Service rollup engine
epic-organizational-hierarchy-management-assignment-aggregation-task-009 — Build the core rollup engine that aggregates activity counts upward from leaf chapter nodes through regions to national associations. Support configurable aggregation depth, implement parallel subtree computation using Dart isolates for large trees (NHF's 1,400 chapters), cache aggregated results in Hierarchy Cache with smart invalidation on new activity writes, and expose drill-down breakpoints by unit level for Bufdir reporting.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 4 - 323 tasks
Can start after Tier 3 completes
Implementation Notes
The rollup engine should operate on a HierarchyTreeSnapshot (in-memory tree) rather than issuing recursive Supabase queries per node. First fetch the entire subtree from HierarchyService.getSubtree(rootId) (which itself uses the cache), then fetch activity counts in bulk via a single SQL query: SELECT unit_id, COUNT(*) FROM activity_registrations WHERE unit_id = ANY(subtreeIds) GROUP BY unit_id. Merge the two data structures in Dart to produce the aggregation. For parallel computation, partition the subtree's leaf chapters into groups of ~100 and dispatch each group to a separate Dart isolate via Isolate.run().
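The merge step above can be sketched as a post-order walk over the snapshot. This is a minimal illustration, not the real service: the `UnitNode` shape and `rollupActivityCounts` name are assumptions about how HierarchyTreeSnapshot might expose the tree.

```dart
// Toy node shape standing in for HierarchyTreeSnapshot's tree.
class UnitNode {
  final String id;
  final List<UnitNode> children;
  UnitNode(this.id, [this.children = const []]);
}

/// Post-order rollup: each unit's total is its own bulk-queried
/// activity count plus the totals of all its descendants.
Map<String, int> rollupActivityCounts(
    UnitNode root, Map<String, int> leafCounts) {
  final totals = <String, int>{};
  int visit(UnitNode node) {
    var sum = leafCounts[node.id] ?? 0;
    for (final child in node.children) {
      sum += visit(child);
    }
    totals[node.id] = sum;
    return sum;
  }

  visit(root);
  return totals;
}

void main() {
  // National association → two regions → three chapters.
  final tree = UnitNode('national', [
    UnitNode('region-a', [UnitNode('ch-1'), UnitNode('ch-2')]),
    UnitNode('region-b', [UnitNode('ch-3')]),
  ]);
  final totals =
      rollupActivityCounts(tree, {'ch-1': 4, 'ch-2': 6, 'ch-3': 5});
  print(totals['region-a']); // 10
  print(totals['national']); // 15
}
```

Because the counts arrive pre-grouped from the single SQL query, the walk itself is O(nodes) with no further I/O.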
Each isolate receives its chunk of the (unitId → activityCount) map and the subtree structure for its partition, computes subtotals, and returns a partial AggregationResult. The main isolate merges all partial results. For cache invalidation, subscribe to Supabase Realtime INSERT events on the activity_registrations table; on each event, walk the ancestor chain of the affected unit_id (an O(depth) operation, max ~5 levels for NHF) and invalidate the cache entries for all ancestors.
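Both halves of this note can be sketched compactly. The fan-out uses the real `Isolate.run()` API; the partial here is a plain sum rather than a full AggregationResult, and `parentOf` plus the cached-id set are illustrative stand-ins for the Hierarchy Cache's actual interface.

```dart
import 'dart:isolate';
import 'dart:math';

/// Sums activity counts for one partition of leaf chapters.
int _partialSum(List<String> chunk, Map<String, int> counts) =>
    chunk.fold(0, (sum, id) => sum + (counts[id] ?? 0));

/// Dispatches each ~100-chapter partition to its own isolate and
/// merges the partial results back on the main isolate.
Future<int> parallelLeafTotal(
    List<String> leafIds, Map<String, int> counts) async {
  const chunkSize = 100;
  final chunks = [
    for (var i = 0; i < leafIds.length; i += chunkSize)
      leafIds.sublist(i, min(i + chunkSize, leafIds.length)),
  ];
  final partials = await Future.wait([
    for (final chunk in chunks) Isolate.run(() => _partialSum(chunk, counts)),
  ]);
  return partials.fold(0, (a, b) => a + b);
}

/// On an activity INSERT event, drops the cached aggregation for the
/// affected unit and every ancestor — O(depth), ~5 steps for NHF.
void invalidateAncestors(String unitId, Map<String, String?> parentOf,
    Set<String> cachedUnitIds) {
  String? current = unitId;
  while (current != null) {
    cachedUnitIds.remove(current);
    current = parentOf[current];
  }
}

Future<void> main() async {
  // 250 leaves of 1 activity each → 3 isolates (100 + 100 + 50).
  final leaves = [for (var i = 0; i < 250; i++) 'ch-$i'];
  print(await parallelLeafTotal(leaves, {for (final id in leaves) id: 1}));

  // A write to ch-1 invalidates its chain but not the sibling region.
  final parent = {'ch-1': 'region-a', 'region-a': 'nat', 'nat': null};
  final cached = {'ch-1', 'region-a', 'nat', 'region-b'};
  invalidateAncestors('ch-1', parent, cached);
  print(cached);
}
```

Note that `Isolate.run()` copies its captured values into the child isolate, so each chunk and the counts map must be sendable (plain lists and maps are).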
Testing Requirements
Unit tests (flutter_test) with mocked HierarchyService and ActivityRepository. Test cases:
(1) rollupActivityCounts for a 3-level tree of 10 nodes — assert correct sum at each level.
(2) maxDepth=1 truncation — assert only direct children are included.
(3) parallel isolate merge — create a 150-node mock tree, run the rollup, assert the result matches serial computation.
(4) getAggregationByLevel with LevelType.region — assert correct grouping.
(5) cache hit path — second call returns the cached value without a repository call.
(6) cache miss path — repository is called and the result is cached.
(7) scoped invalidation — an activity write to chapter X invalidates the parent and grandparent cache entries but not the sibling subtree.
Integration test against local Supabase with a seeded 50-chapter test tree: assert the rollup result matches a manually computed expected value.
Performance test: assert a 1,400-chapter rollup completes in under 30s on CI.
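The shape of cases (1) and (2) might look like the following. The `rollup` helper here is a self-contained toy stand-in, since the engine's real API is only sketched in the implementation notes; the `test`/`expect` calls are the same API that flutter_test re-exports from package:test.

```dart
import 'package:test/test.dart';

// Toy rollup over an adjacency map; `maxDepth` truncates recursion.
Map<String, int> rollup(Map<String, List<String>> children,
    Map<String, int> counts, String root, {int? maxDepth}) {
  int visit(String id, int depth) {
    var sum = counts[id] ?? 0;
    if (maxDepth == null || depth < maxDepth) {
      for (final c in children[id] ?? const <String>[]) {
        sum += visit(c, depth + 1);
      }
    }
    return sum;
  }

  return {root: visit(root, 0)};
}

void main() {
  final children = {
    'nat': ['r1', 'r2'],
    'r1': ['c1', 'c2'],
    'r2': ['c3'],
  };
  final counts = {'c1': 1, 'c2': 2, 'c3': 3, 'r1': 0, 'r2': 0, 'nat': 0};

  test('3-level tree sums correctly at the root', () {
    expect(rollup(children, counts, 'nat')['nat'], 6);
  });

  test('maxDepth=1 includes only direct children', () {
    // Regions carry no direct activities here, so the truncated
    // rollup must see none of the chapter counts.
    expect(rollup(children, counts, 'nat', maxDepth: 1)['nat'], 0);
  });
}
```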
Recursive aggregation queries spanning the full hierarchy depth (national through region down to local chapters) over 1,400 leaf nodes may be too slow for real-time dashboard requests, exceeding the 200ms target and causing spinner timeouts.
Mitigation & Contingency
Mitigation: Implement aggregation as a Supabase RPC using a single recursive CTE rather than multiple round-trip queries. Pre-compute aggregations nightly via a scheduled Edge Function and cache results. For real-time needs, aggregate only the immediate subtree on demand.
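A single-round-trip rollup of this kind is commonly written as a transitive-closure CTE, which could be wrapped in a Supabase RPC. This is a sketch under assumed names: the `org_units` table with a `parent_id` column is not confirmed by this ticket, and a production version would restrict the closure to one subtree and likely materialize it.

```sql
-- Enumerate every (ancestor, descendant) pair, each unit paired with
-- itself as the base case, then count activities per ancestor.
WITH RECURSIVE closure AS (
  SELECT id AS ancestor_id, id AS descendant_id
  FROM org_units
  UNION ALL
  SELECT c.ancestor_id, u.id
  FROM closure c
  JOIN org_units u ON u.parent_id = c.descendant_id
)
SELECT c.ancestor_id AS unit_id,
       COUNT(a.id)   AS rolled_up_count
FROM closure c
LEFT JOIN activity_registrations a ON a.unit_id = c.descendant_id
GROUP BY c.ancestor_id;
```

Because every descendant's activities join against each of its ancestors, the GROUP BY yields fully rolled-up totals at every level in one query.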
Contingency: Surface a 'Refreshing...' indicator and serve stale cached aggregations immediately. Queue an async recalculation and push updated data via Supabase Realtime when ready, avoiding blocking the admin dashboard.
The 5-chapter limit and primary-assignment constraint are NHF-specific. Applying these rules globally may break HLF and Blindeforbundet configurations where different limits apply, requiring per-organization configuration that was not initially scoped.
Mitigation & Contingency
Mitigation: Make the maximum assignment count a configurable value stored in the organization's feature-flag or settings table rather than a hardcoded constant. Design the assignment service to read this limit at runtime per organization.
Contingency: Default the limit to a high value (e.g., 100) for organizations other than NHF, effectively making it non-restrictive, while keeping the enforcement logic intact for when per-org configuration is fully implemented.
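A runtime read of the per-organization limit with the non-restrictive fallback might look like this. The `org_settings` table and `max_assignments` column are assumed names for the settings store mentioned above, not confirmed schema.

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Non-restrictive default for organizations without an explicit limit
/// (HLF, Blindeforbundet, etc.); NHF's row would hold 5.
const int kDefaultMaxAssignments = 100;

/// Reads the organization's assignment limit at runtime instead of
/// relying on a hardcoded constant.
Future<int> maxAssignmentsFor(SupabaseClient client, String orgId) async {
  final row = await client
      .from('org_settings')
      .select('max_assignments')
      .eq('org_id', orgId)
      .maybeSingle();
  return (row?['max_assignments'] as int?) ?? kDefaultMaxAssignments;
}
```

Keeping the enforcement path identical for every organization — only the number varies — means no code change is needed once per-org configuration is fully scoped.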
The searchable parent dropdown in HierarchyNodeEditor must search across up to 1,400 units efficiently. Client-side filtering of the full hierarchy may be slow; server-side search adds complexity and latency.
Mitigation & Contingency
Mitigation: Use the in-memory hierarchy cache as the search corpus — since the cache already holds the flat unit list, client-side filtering with a debounced input is sufficient and avoids extra Supabase calls. Pre-build a search index on cache load.
Contingency: Cap the dropdown to showing the 50 most recently accessed units by default, with a 'search all' option that triggers a server-side full-text query. This keeps the common case fast while supporting edge cases.
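The debounced client-side filter over the cached flat list can be sketched as below. `Unit`, the 300 ms window, and the substring match are illustrative choices; the real implementation would draw its corpus from the hierarchy cache and could swap in the pre-built search index.

```dart
import 'dart:async';

class Unit {
  final String id;
  final String name;
  Unit(this.id, this.name);
}

/// Filters the cached unit list, firing ~300 ms after typing stops so
/// 1,400-entry scans never run on every keystroke.
class UnitSearch {
  final List<Unit> _corpus; // flat unit list from the hierarchy cache
  Timer? _debounce;
  UnitSearch(this._corpus);

  void onQueryChanged(String query, void Function(List<Unit>) onResults) {
    _debounce?.cancel();
    _debounce = Timer(const Duration(milliseconds: 300), () {
      final q = query.toLowerCase();
      onResults(
          _corpus.where((u) => u.name.toLowerCase().contains(q)).toList());
    });
  }
}

Future<void> main() async {
  final search =
      UnitSearch([Unit('1', 'Oslo chapter'), Unit('2', 'Bergen chapter')]);
  search.onQueryChanged('oslo', (results) => print(results.single.name));
  await Future.delayed(const Duration(milliseconds: 400));
}
```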