Priority: critical · Complexity: high · Area: backend · Status: pending · Assignee: backend specialist · Tier: 4

Acceptance Criteria

rollupActivityCounts(rootUnitId, {int? maxDepth}) returns an AggregationResult containing total activity counts recursively summed from all descendant chapters beneath rootUnitId
Aggregation supports configurable depth: maxDepth = 1 returns only direct-children counts; maxDepth = null returns the full subtree
For NHF's tree of 1,400 chapters, a full national rollup completes in under 5 seconds (warm cache) and under 30 seconds (cold cache, using Dart isolates for parallel subtree computation)
Dart isolates are used to partition large subtrees (>100 nodes) into parallel computation units; results are merged on the main isolate using a reduce operation
getAggregationByLevel(rootUnitId, LevelType level) returns a Map<String, int> of {unitId: count} for all units of the specified level within the subtree — this is the Bufdir drill-down endpoint
Hierarchy Cache stores aggregated results keyed by (rootUnitId, maxDepth); cache entries are invalidated when any activity write event is received for a unit in the cached subtree
Cache invalidation is scoped: writing an activity for chapter X invalidates aggregation cache entries for X and all its ancestor nodes only, not unrelated subtrees
Unit tests cover: single-node rollup, two-level rollup, maxDepth truncation, parallel isolate merge correctness, cache hit, cache miss + population, scoped cache invalidation
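The criteria above can be sketched as a serial reference implementation of the rollup contract. This is a minimal sketch: AggregationResult's field names and the TreeNode stand-in are assumptions, not a confirmed API.

```dart
// Sketch of the rollupActivityCounts contract from the criteria above.
// Field names on AggregationResult and the TreeNode type are assumptions.

class AggregationResult {
  final String rootUnitId;
  final int totalCount;                // recursive sum over the subtree
  final Map<String, int> countsByUnit; // per-unit subtotals
  const AggregationResult(this.rootUnitId, this.totalCount, this.countsByUnit);
}

class TreeNode {
  final String id;
  final int ownCount;
  final List<TreeNode> children;
  const TreeNode(this.id, this.ownCount, [this.children = const []]);
}

/// Serial reference rollup: maxDepth = 1 sums only direct children,
/// maxDepth = null sums the full subtree.
AggregationResult rollupActivityCounts(TreeNode root, {int? maxDepth}) {
  final counts = <String, int>{};
  int walk(TreeNode node, int depth) {
    var sum = node.ownCount;
    if (maxDepth == null || depth < maxDepth) {
      for (final child in node.children) {
        sum += walk(child, depth + 1);
      }
    }
    counts[node.id] = sum;
    return sum;
  }

  return AggregationResult(root.id, walk(root, 0), counts);
}

void main() {
  final tree = TreeNode('national', 0, [
    TreeNode('region-1', 2, [TreeNode('chapter-a', 5)]),
  ]);
  assert(rollupActivityCounts(tree).totalCount == 7);
  assert(rollupActivityCounts(tree, maxDepth: 1).totalCount == 2);
}
```

A serial version like this also serves as the oracle for the "parallel isolate merge correctness" test case.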

Technical Requirements

Frameworks
Flutter
Dart (dart:isolate)
BLoC (for upstream consumers)
APIs
Supabase REST (activity_registrations table aggregate queries)
Supabase Realtime (activity write events for cache invalidation)
Data Models
AggregationResult
HierarchyTreeSnapshot
ActivityRegistration
LevelType
HierarchyAggregationCache
Performance Requirements
Full national rollup (1,400 chapters) under 5s with warm cache, under 30s cold
Isolate fan-out: split tree into subtrees of ~100 nodes each and compute in parallel — use Isolate.run() (available since Dart 2.19) or compute() from Flutter
Cache entries expire after 5 minutes (TTL) regardless of invalidation events to prevent stale aggregations in edge cases
Supabase aggregate query for activity counts must use a SQL aggregate (COUNT(*) grouped by unit_id) — avoid fetching individual rows and counting in Dart
Security Requirements
Aggregation queries run under the authenticated user's RLS context — a coordinator can only aggregate within their assigned subtree
Bufdir drill-down endpoint (getAggregationByLevel) must not expose individual user activity details — only unit-level counts
Isolate message passing must use only serializable value objects (no Supabase client or auth token passed into isolates — isolates must receive pre-fetched data snapshots)
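The sendable-values-only rule above can be illustrated with Isolate.run: the worker closure captures a pre-fetched count snapshot (a plain map), never a Supabase client or auth token. The function name is illustrative.

```dart
import 'dart:isolate';

// Only plain, sendable value objects cross the isolate boundary:
// the worker receives a pre-fetched snapshot, never a client or token.
Future<int> subtotalInIsolate(Map<String, int> countsSnapshot) {
  return Isolate.run(
    () => countsSnapshot.values.fold<int>(0, (sum, c) => sum + c),
  );
}

Future<void> main() async {
  final partial = await subtotalInIsolate({'chapter-a': 3, 'chapter-b': 4});
  assert(partial == 7);
}
```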

Execution Context

Execution Tier
Tier 4

Tier 4 - 323 tasks

Can start after Tier 3 completes

Implementation Notes

The rollup engine should operate on a HierarchyTreeSnapshot (in-memory tree) rather than issuing recursive Supabase queries per node. First fetch the entire subtree from HierarchyService.getSubtree(rootId) (which itself uses cache), then fetch activity counts in bulk via a single SQL query: SELECT unit_id, COUNT(*) FROM activity_registrations WHERE unit_id = ANY(subtreeIds) GROUP BY unit_id. Merge these two data structures in Dart to produce the aggregation. For parallel computation, partition the subtree's leaf chapters into groups of ~100 and dispatch each group to a separate Dart isolate using Isolate.run().
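The merge step described above is pure Dart once the snapshot and the bulk count map are in hand. A minimal sketch, with SnapshotNode as a stand-in for HierarchyTreeSnapshot's node type:

```dart
// Merge the in-memory subtree snapshot with the per-unit counts returned
// by the single GROUP BY query. SnapshotNode is an assumed stand-in for
// HierarchyTreeSnapshot's node type.

class SnapshotNode {
  final String id;
  final List<SnapshotNode> children;
  const SnapshotNode(this.id, [this.children = const []]);
}

/// Units absent from [countsByUnit] (no registrations) contribute zero.
int mergeCounts(SnapshotNode root, Map<String, int> countsByUnit) {
  var sum = countsByUnit[root.id] ?? 0;
  for (final child in root.children) {
    sum += mergeCounts(child, countsByUnit);
  }
  return sum;
}

void main() {
  const tree = SnapshotNode('national', [
    SnapshotNode('region-1', [SnapshotNode('chapter-a')]),
  ]);
  assert(mergeCounts(tree, {'chapter-a': 5, 'region-1': 2}) == 7);
}
```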

Each isolate receives its chunk of (unitId → activityCount) map and the subtree structure for its partition, computes subtotals, and returns the partial AggregationResult. The main isolate merges all partial results. For cache invalidation, subscribe to Supabase Realtime on the activity_registrations table INSERT event; on each event, walk the ancestor chain of the affected unit_id (O(depth) operation, max ~5 levels for NHF) and invalidate cache entries for all ancestors.
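The ancestor walk described above can be sketched as an O(depth) loop. The parent map and flat cache-key set here are simplified stand-ins for the real cache structure:

```dart
// Scoped invalidation: on an activity INSERT for unitId, evict cache entries
// for that unit and each ancestor only — sibling subtrees stay cached.
// parentOf and cachedRootIds are simplified stand-ins for the real cache.

void invalidateAncestorChain(
  String unitId,
  Map<String, String?> parentOf,  // child id -> parent id (null at root)
  Set<String> cachedRootIds,      // root ids of live cache entries
) {
  String? current = unitId;
  while (current != null) {
    cachedRootIds.remove(current); // O(depth), max ~5 levels for NHF
    current = parentOf[current];
  }
}

void main() {
  final parentOf = <String, String?>{
    'chapter-x': 'region-1',
    'region-1': 'national',
    'national': null,
  };
  final cached = {'national', 'region-1', 'region-2'};
  invalidateAncestorChain('chapter-x', parentOf, cached);
  assert(cached.contains('region-2')); // sibling subtree untouched
  assert(!cached.contains('region-1') && !cached.contains('national'));
}
```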

Testing Requirements

Unit tests (flutter_test) with mocked HierarchyService and ActivityRepository. Test cases: (1) rollupActivityCounts for a 3-level tree of 10 nodes — assert correct sum at each level; (2) maxDepth=1 truncation — assert only direct children included; (3) parallel isolate merge — create a 150-node mock tree, run rollup, assert result matches serial computation; (4) getAggregationByLevel with LevelType.region — assert correct grouping; (5) cache hit path — second call returns cached value without repository call; (6) cache miss path — repository called, result cached; (7) scoped invalidation — activity write to chapter X invalidates parent and grandparent cache entries but not sibling subtree. Integration test against local Supabase with a seeded 50-chapter test tree: assert rollup result matches manually computed expected value. Performance test: assert 1,400-chapter rollup completes under 30s on CI.
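Test cases (5) and (6) above can be sketched self-contained. FakeRepository and AggCache here are illustrative stand-ins for the mocked ActivityRepository and the real HierarchyAggregationCache, not the project's actual harness:

```dart
// Sketch of cases (5) and (6): cache miss populates, cache hit skips the
// repository. FakeRepository and AggCache are illustrative stand-ins.

class FakeRepository {
  int calls = 0;
  int fetchTotal(String rootUnitId) {
    calls++;
    return 42; // canned aggregate for the sketch
  }
}

class AggCache {
  final _store = <String, int>{};
  int getOrCompute(String key, int Function() compute) =>
      _store.putIfAbsent(key, compute);
}

void main() {
  final repo = FakeRepository();
  final cache = AggCache();
  final first = cache.getOrCompute('national', () => repo.fetchTotal('national'));
  final second = cache.getOrCompute('national', () => repo.fetchTotal('national'));
  assert(first == 42 && second == 42);
  assert(repo.calls == 1); // hit path never touched the repository
}
```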

Component
Hierarchy Aggregation Service
Type: service · Priority: high
Epic Risks (3)
Impact: high · Probability: medium · Category: technical

Recursive aggregation queries across multiple hierarchy levels (national → region → local chapter) with 1,400 leaf nodes may be too slow for real-time dashboard requests, exceeding the 200ms target and causing spinner timeouts.

Mitigation & Contingency

Mitigation: Implement aggregation as a Supabase RPC using a single recursive CTE rather than multiple round-trip queries. Pre-compute aggregations nightly via a scheduled Edge Function and cache results. For real-time needs, aggregate only the immediate subtree on demand.
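The single recursive CTE the mitigation describes could look like the following, held as a constant for the Supabase RPC. The schema (hierarchy_units(id, parent_id), activity_registrations(unit_id)) and the parameter name are assumptions for illustration:

```dart
// Assumed schema: hierarchy_units(id, parent_id),
// activity_registrations(unit_id). One round trip: walk the subtree with a
// recursive CTE, then aggregate — no per-node queries.
const subtreeRollupSql = '''
WITH RECURSIVE subtree AS (
  SELECT id FROM hierarchy_units WHERE id = :root_id
  UNION ALL
  SELECT u.id
  FROM hierarchy_units u
  JOIN subtree s ON u.parent_id = s.id
)
SELECT COUNT(*) AS total
FROM activity_registrations ar
JOIN subtree s ON ar.unit_id = s.id;
''';
```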

Contingency: Surface a 'Refreshing...' indicator and serve stale cached aggregations immediately. Queue an async recalculation and push updated data via Supabase Realtime when ready, avoiding blocking the admin dashboard.

Impact: medium · Probability: medium · Category: scope

The 5-chapter limit and primary-assignment constraint are NHF-specific. Applying these rules globally may break HLF and Blindeforbundet configurations where different limits apply, requiring per-organization configuration that was not initially scoped.

Mitigation & Contingency

Mitigation: Make the maximum assignment count a configurable value stored in the organization's feature-flag or settings table rather than a hardcoded constant. Design the assignment service to read this limit at runtime per organization.

Contingency: Default the limit to a high value (e.g., 100) for organizations other than NHF, effectively making it non-restrictive, while keeping the enforcement logic intact for when per-org configuration is fully implemented.
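Both the mitigation and the contingency above reduce to one lookup with a non-restrictive default. A minimal sketch, where the settings map stands in for the organization's feature-flag or settings table:

```dart
// Per-organization assignment limit read at runtime, defaulting high for
// orgs without NHF's 5-chapter rule. The settings map is a stand-in for
// the org settings/feature-flag table.

const defaultMaxAssignments = 100; // effectively non-restrictive

int maxAssignmentsFor(String orgId, Map<String, int> orgAssignmentLimits) =>
    orgAssignmentLimits[orgId] ?? defaultMaxAssignments;

void main() {
  final limits = {'nhf': 5};
  assert(maxAssignmentsFor('nhf', limits) == 5);
  assert(maxAssignmentsFor('hlf', limits) == 100);
}
```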

Impact: medium · Probability: low · Category: technical

The searchable parent dropdown in HierarchyNodeEditor must search across up to 1,400 units efficiently. Client-side filtering of the full hierarchy may be slow; server-side search adds complexity and latency.

Mitigation & Contingency

Mitigation: Use the in-memory hierarchy cache as the search corpus — since the cache already holds the flat unit list, client-side filtering with a debounced input is sufficient and avoids extra Supabase calls. Pre-build a search index on cache load.
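The debounced client-side filter described above can be sketched in a few lines. The 250 ms delay and name-only substring matching are assumptions:

```dart
import 'dart:async';

// Debounced client-side filter over the cached flat unit list. The 250 ms
// delay and name-only matching are assumptions for this sketch.
class UnitSearch {
  final List<String> unitNames; // from the in-memory hierarchy cache
  Timer? _debounce;
  UnitSearch(this.unitNames);

  void onQueryChanged(String query, void Function(List<String>) onResults) {
    _debounce?.cancel(); // drop the pending search on each keystroke
    _debounce = Timer(const Duration(milliseconds: 250), () {
      final q = query.toLowerCase();
      onResults(
        unitNames.where((n) => n.toLowerCase().contains(q)).toList(),
      );
    });
  }
}

Future<void> main() async {
  final search = UnitSearch(['Oslo chapter', 'Bergen chapter', 'Oslo region']);
  var results = <String>[];
  search.onQueryChanged('oslo', (r) => results = r);
  await Future<void>.delayed(const Duration(milliseconds: 300));
  assert(results.length == 2);
}
```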

Contingency: Cap the dropdown to showing the 50 most recently accessed units by default, with a 'search all' option that triggers a server-side full-text query. This keeps the common case fast while supporting edge cases.