Implement Hierarchy Cache data layer
epic-organizational-hierarchy-management-assignment-aggregation-task-001 — Build the in-memory and persistent caching layer for hierarchy tree data. Implement cache invalidation strategies, TTL management, and cache-aside patterns to support fast subtree lookups without repeated database queries. Store pre-computed ancestor/descendant paths for aggregation performance.
Acceptance Criteria
Technical Requirements
Implementation Notes
Implement the LRU cache as a `LinkedHashMap` keyed by node ID: because `LinkedHashMap` preserves insertion order, removing and re-inserting an entry on access keeps insertion order equal to recency order, making the first key the least-recently-used candidate for eviction when capacity is exceeded.
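The `LinkedHashMap`-based LRU cache could be sketched in Dart roughly as follows; `HierarchyLruCache` and the generic key/value types are illustrative names, not the final API:

```dart
/// Minimal LRU sketch. Dart map literals are LinkedHashMaps, so insertion
/// order doubles as recency order if entries are re-inserted on access.
class HierarchyLruCache<K, V> {
  HierarchyLruCache(this.capacity);

  final int capacity;
  final _entries = <K, V>{};

  V? get(K key) {
    final value = _entries.remove(key);
    if (value == null) return null; // miss
    _entries[key] = value; // re-insert: now most recently used
    return value;
  }

  void put(K key, V value) {
    _entries.remove(key); // refresh position if already present
    _entries[key] = value;
    if (_entries.length > capacity) {
      // First key in insertion order is the least recently used.
      _entries.remove(_entries.keys.first);
    }
  }

  int get length => _entries.length;
}
```

With capacity 2, inserting `a`, `b`, touching `a`, then inserting `c` evicts `b`, which matches eviction test (6) in the testing requirements.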
Pre-compute descendant lists using a BFS/DFS from each node at population time; this is an O(n²) operation in the worst case but acceptable at initial load for 1,400 nodes. Expose the cache through a `HierarchyCacheRepository` abstract class so the in-memory and persistent implementations can be composed (try memory first, fall back to persistent, then fall back to the network).
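The tiered composition might look like the sketch below, assuming hypothetical `HierarchyNode` and repository shapes; only the try-fastest-first, backfill-on-hit pattern is the point here:

```dart
// Illustrative node shape; the real model will carry more fields.
class HierarchyNode {
  HierarchyNode(this.id, {this.parentId});
  final String id;
  final String? parentId;
}

abstract class HierarchyCacheRepository {
  Future<HierarchyNode?> fetch(String id);
  Future<void> store(HierarchyNode node);
}

class TieredHierarchyRepository implements HierarchyCacheRepository {
  TieredHierarchyRepository(this.tiers);

  /// Ordered fastest-first: in-memory, then persistent, then network.
  final List<HierarchyCacheRepository> tiers;

  @override
  Future<HierarchyNode?> fetch(String id) async {
    for (var i = 0; i < tiers.length; i++) {
      final node = await tiers[i].fetch(id);
      if (node != null) {
        // Backfill the faster tiers that missed, so the next lookup
        // hits earlier in the chain.
        for (var j = 0; j < i; j++) {
          await tiers[j].store(node);
        }
        return node;
      }
    }
    return null; // every tier missed
  }

  @override
  Future<void> store(HierarchyNode node) async {
    for (final tier in tiers) {
      await tier.store(node);
    }
  }
}
```

Because each tier implements the same abstract class, the composite itself satisfies `HierarchyCacheRepository`, so callers never know how many tiers sit behind a lookup.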
Testing Requirements
Write unit tests for the cache layer in isolation: (1) cache miss triggers repository fetch and populates cache; (2) cache hit returns cached value without a repository call; (3) TTL expiry causes a cache miss on the next access; (4) node invalidation removes the node and all ancestor entries; (5) full flush clears all entries; (6) LRU eviction removes the least-recently-used entry when capacity is exceeded. Use fake/mock repositories and a fake clock for TTL tests. Write a performance benchmark test that inserts 1,400 entries and measures lookup time with `dart:core`'s `Stopwatch` inside a `flutter_test` test. Verify memory safety by ensuring no entries remain after a full flush.
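The fake-clock technique for TTL test (3) can be sketched like this; `TtlCache` and its injected `now` parameter are illustrative stand-ins for the real cache API:

```dart
class TtlCache<K, V> {
  /// Injecting `now` (defaulting to DateTime.now) is what makes TTL
  /// expiry testable without real waits.
  TtlCache(this.ttl, {DateTime Function()? now}) : _now = now ?? DateTime.now;

  final Duration ttl;
  final DateTime Function() _now;
  final _values = <K, V>{};
  final _expiresAt = <K, DateTime>{};

  V? get(K key) {
    final deadline = _expiresAt[key];
    if (deadline == null || !_now().isBefore(deadline)) {
      _values.remove(key); // expired (or never present): treat as miss
      _expiresAt.remove(key);
      return null;
    }
    return _values[key];
  }

  void put(K key, V value) {
    _values[key] = value;
    _expiresAt[key] = _now().add(ttl);
  }
}

void main() {
  var current = DateTime(2025, 1, 1);
  final cache = TtlCache<String, int>(
    const Duration(minutes: 5),
    now: () => current, // fake clock under test control
  );
  cache.put('node-1', 42);
  assert(cache.get('node-1') == 42); // fresh entry: hit
  current = current.add(const Duration(minutes: 6)); // advance past TTL
  assert(cache.get('node-1') == null); // expired: miss
}
```

Advancing the fake clock past the TTL turns the next `get` into a deterministic miss, with no `sleep` or real timers in the test.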
Recursive aggregation queries across four hierarchy levels (national → region → local) with 1,400 leaf nodes may be too slow for real-time dashboard requests, exceeding the 200ms target and causing spinner timeouts.
Mitigation & Contingency
Mitigation: Implement aggregation as a Supabase RPC using a single recursive CTE rather than multiple round-trip queries. Pre-compute aggregations nightly via a scheduled Edge Function and cache results. For real-time needs, aggregate only the immediate subtree on demand.
Contingency: Surface a 'Refreshing...' indicator and serve stale cached aggregations immediately. Queue an async recalculation and push updated data via Supabase Realtime when ready, so the admin dashboard is never blocked.
The 5-chapter limit and primary-assignment constraint are NHF-specific. Applying these rules globally may break HLF and Blindeforbundet configurations where different limits apply, requiring per-organization configuration that was not initially scoped.
Mitigation & Contingency
Mitigation: Make the maximum assignment count a configurable value stored in the organization's feature-flag or settings table rather than a hardcoded constant. Design the assignment service to read this limit at runtime per organization.
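A rough Dart sketch of reading the limit per organization at runtime; `AssignmentLimitService` and the settings-lookup callback are hypothetical stand-ins for the real settings/feature-flag table query:

```dart
class AssignmentLimitService {
  AssignmentLimitService(this._fetchLimit, {this.fallbackLimit = 100});

  /// Stand-in for the per-organization settings/feature-flag lookup;
  /// returns null when no limit has been configured for the org.
  final Future<int?> Function(String orgId) _fetchLimit;

  /// High fallback keeps enforcement effectively non-restrictive for
  /// organizations without a configured limit.
  final int fallbackLimit;

  Future<bool> canAssign(String orgId, int currentAssignments) async {
    final limit = await _fetchLimit(orgId) ?? fallbackLimit;
    return currentAssignments < limit;
  }
}
```

Under this shape, NHF would store its 5-chapter limit in its settings row, while HLF and Blindeforbundet fall through to the non-restrictive fallback until per-org limits are scoped.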
Contingency: Default the limit to a high value (e.g., 100) for organizations other than NHF, effectively making it non-restrictive, while keeping the enforcement logic intact for when per-org configuration is fully implemented.
The searchable parent dropdown in HierarchyNodeEditor must search across up to 1,400 units efficiently. Client-side filtering of the full hierarchy may be slow; server-side search adds complexity and latency.
Mitigation & Contingency
Mitigation: Use the in-memory hierarchy cache as the search corpus: since the cache already holds the flat unit list, client-side filtering with a debounced input is sufficient and avoids extra Supabase calls. Pre-build a search index on cache load.
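The debounced client-side filter over the cached flat list could look roughly like this; `UnitSearch`, the 300 ms window, and plain substring matching are illustrative choices, not requirements:

```dart
import 'dart:async';

class UnitSearch {
  UnitSearch(List<String> unitNames)
      // Pre-build a lowercased index once at cache load, so each
      // keystroke only pays for the contains() scan.
      : _index = [for (final n in unitNames) (n, n.toLowerCase())];

  final List<(String, String)> _index;
  Timer? _debounce;

  /// Called on every keystroke; only the last call within the window
  /// actually runs the filter.
  void onQueryChanged(String query, void Function(List<String>) onResults) {
    _debounce?.cancel();
    _debounce = Timer(const Duration(milliseconds: 300), () {
      onResults(search(query));
    });
  }

  List<String> search(String query) {
    final q = query.toLowerCase();
    return [for (final (name, lower) in _index) if (lower.contains(q)) name];
  }
}
```

A linear scan of ~1,400 pre-lowercased strings per debounced keystroke is comfortably fast on-device, which is why the mitigation avoids a Supabase round trip entirely.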
Contingency: Cap the dropdown to showing the 50 most recently accessed units by default, with a 'search all' option that triggers a server-side full-text query. This keeps the common case fast while supporting edge cases.