Priority: high · Complexity: low · Category: database · Status: pending · Assignee: database specialist · Tier 0

Acceptance Criteria

An in-memory LRU cache stores hierarchy subtree results keyed by node ID, with a configurable maximum entry count (default: 500 nodes)
A persistent cache layer (using Hive, SharedPreferences, or SQLite) stores pre-computed ancestor paths and descendant ID lists for each node
Cache entries have a configurable TTL (default: 15 minutes for in-memory, 1 hour for persistent); entries older than TTL are not served
Cache-aside pattern is implemented: on a cache miss, the repository fetches from Supabase, populates the cache, and returns the result
A cache invalidation method exists that accepts a node ID and invalidates all entries for that node and all its ancestors
Full cache flush method exists for use after bulk hierarchy mutations (add/remove/move operations)
Pre-computed ancestor paths are stored as ordered arrays of node IDs from root to the given node
Pre-computed descendant ID lists are stored as flat arrays of all descendant node IDs for fast subtree membership checks
Cache layer exposes a clean interface (abstract class or protocol) so it can be swapped without modifying consumers
Cache operations complete in under 5ms for in-memory lookups in hierarchies of up to 1,400 nodes (NHF scale)
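The clean-interface criterion above might be sketched as follows; all names here (`HierarchyCache` and its methods) are illustrative stand-ins, not taken from the codebase:

```dart
/// Sketch of the cache interface implied by the acceptance criteria.
/// Consumers depend only on this abstraction, so the in-memory and
/// persistent implementations can be swapped or composed freely.
abstract class HierarchyCache {
  /// Ordered node IDs from root to the given node, or null on a miss.
  Future<List<String>?> getAncestorPath(String nodeId);

  /// Flat list of all descendant node IDs, or null on a miss.
  Future<List<String>?> getDescendantIds(String nodeId);

  /// Populate entries for a node (cache-aside: called after a fetch).
  Future<void> put(
      String nodeId, List<String> ancestorPath, List<String> descendantIds);

  /// Invalidate this node and all of its ancestors.
  Future<void> invalidate(String nodeId);

  /// Full flush, for use after bulk add/remove/move mutations.
  Future<void> flush();
}
```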

Technical Requirements

frameworks
Flutter
Riverpod or BLoC (for state invalidation signals)
Hive or Drift (for persistent cache)
dart:collection (for LRU implementation)
apis
Supabase REST API (read operations for cache population)
data models
OrganizationalUnit
HierarchyNode
AncestorPath
DescendantList
CacheEntry
performance requirements
In-memory cache hit latency under 5ms for any single node lookup
Persistent cache read latency under 50ms for pre-computed paths
Cache population (full tree of 1,400 nodes) completes in under 2 seconds on initial load
Memory footprint for in-memory cache must not exceed 10MB for 500 entries
security requirements
Persistent cache must not store personally identifiable information — only structural hierarchy data (node IDs, parent IDs, names)
Cache keys must not expose internal database UUIDs to logs in plain text — hash or truncate for logging
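The log-safety requirement above can be met with a tiny helper that truncates the UUID before it reaches any log line; the function name and 8-character prefix length are illustrative choices, not mandated by the spec:

```dart
/// Truncate an internal UUID so logs never carry the full cache key.
/// The 8-char prefix keeps enough entropy to correlate log lines
/// without exposing the whole database identifier.
String logSafeKey(String uuid) =>
    uuid.length <= 8 ? uuid : '${uuid.substring(0, 8)}…';

void main() {
  // Only the truncated form is ever passed to a logger.
  print(logSafeKey('3f2504e0-4f89-11d3-9a0c-0305e82c3301'));
}
```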

Execution Context

Execution Tier
Tier 0 (440 tasks in this tier)

Implementation Notes

Implement the LRU cache as a `LinkedHashMap`, re-inserting an entry on every read so that insertion order tracks recency; when capacity is exceeded, evict the first (least recently used) entry. For TTL management, store a `DateTime expiresAt` inside `CacheEntry` and check it on every read rather than using timers (simpler, no background work). For the persistent layer, prefer Hive boxes for simplicity, since the data is structural rather than relational. Pre-compute ancestor paths at cache population time using a bottom-up traversal: for each node, walk up via `parentId` until you reach the root, then store the path in root-to-node order.
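A minimal sketch of that `LinkedHashMap` approach, with an injectable clock so TTL behavior is testable (the `LruCache`/`CacheEntry` names and defaults mirror the spec but the exact shape is an assumption):

```dart
import 'dart:collection';

/// One cached value plus its absolute expiry time.
class CacheEntry<V> {
  CacheEntry(this.value, this.expiresAt);
  final V value;
  final DateTime expiresAt;
}

/// LRU cache backed by a LinkedHashMap: insertion order tracks recency
/// because every read removes and re-inserts the entry.
class LruCache<K, V> {
  LruCache({this.maxEntries = 500, DateTime Function()? clock})
      : _clock = clock ?? DateTime.now;

  final int maxEntries;
  final DateTime Function() _clock; // injectable for fake-clock tests
  final _map = LinkedHashMap<K, CacheEntry<V>>();

  V? get(K key) {
    final entry = _map.remove(key); // remove + re-insert marks as recent
    if (entry == null) return null;
    if (_clock().isAfter(entry.expiresAt)) return null; // expired: miss
    _map[key] = entry;
    return entry.value;
  }

  void put(K key, V value, Duration ttl) {
    _map.remove(key); // re-keying moves the entry to the recent end
    _map[key] = CacheEntry(value, _clock().add(ttl));
    if (_map.length > maxEntries) {
      _map.remove(_map.keys.first); // evict least recently used
    }
  }
}

void main() {
  final cache = LruCache<String, int>(maxEntries: 2);
  cache.put('node-a', 1, const Duration(minutes: 15));
  cache.put('node-b', 2, const Duration(minutes: 15));
  cache.get('node-a'); // touch: 'node-b' is now least recently used
  cache.put('node-c', 3, const Duration(minutes: 15));
  // 'node-b' has been evicted; 'node-a' and 'node-c' remain
}
```

Checking `expiresAt` lazily on read means an expired entry simply reads as a miss; there is no timer to cancel and no background isolate to manage.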

Pre-compute descendant lists using a BFS/DFS from each node at population time — this is an O(n²) operation in the worst case but acceptable at initial load for 1,400 nodes. Expose the cache through a `HierarchyCacheRepository` abstract class so the in-memory and persistent implementations can be composed: try memory first, then the persistent cache, then the network.
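The two pre-computations above can be sketched as follows, assuming the flat hierarchy is available as a `nodeId → parentId` map with `null` for the root (that input shape is an assumption; function names are illustrative):

```dart
/// Ancestor paths via bottom-up traversal: for each node, walk up via
/// parentId to the root, then reverse so paths run root → node.
Map<String, List<String>> buildAncestorPaths(Map<String, String?> parentOf) {
  final paths = <String, List<String>>{};
  for (final id in parentOf.keys) {
    final path = <String>[];
    String? cur = id;
    while (cur != null) {
      path.add(cur);
      cur = parentOf[cur]; // step up one level
    }
    paths[id] = path.reversed.toList(); // root-to-node order, as required
  }
  return paths;
}

/// Descendant lists from the same map: every strict ancestor of a node
/// gains that node as a descendant. O(n·depth), O(n²) worst case,
/// which is acceptable at initial load for ~1,400 nodes.
Map<String, List<String>> buildDescendantLists(Map<String, String?> parentOf) {
  final descendants = {for (final id in parentOf.keys) id: <String>[]};
  for (final id in parentOf.keys) {
    for (var cur = parentOf[id]; cur != null; cur = parentOf[cur]) {
      descendants[cur]!.add(id);
    }
  }
  return descendants;
}

void main() {
  final parents = {'root': null, 'region-1': 'root', 'local-1': 'region-1'};
  buildAncestorPaths(parents);   // {'local-1': [root, region-1, local-1], …}
  buildDescendantLists(parents); // {'root': [region-1, local-1], …}
}
```

Storing the descendant lists flat is what makes subtree membership checks a simple `contains` rather than a traversal.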

Testing Requirements

Write unit tests for the cache layer in isolation: (1) cache miss triggers repository fetch and populates cache; (2) cache hit returns cached value without repository call; (3) TTL expiry causes a cache miss on the next access; (4) node invalidation removes the node and all ancestor entries; (5) full flush clears all entries; (6) LRU eviction removes the least-recently-used entry when capacity is exceeded. Use fake/mock repositories and a fake clock for TTL tests. Write a performance benchmark test using `flutter_test`'s `Stopwatch` that inserts 1,400 entries and measures lookup time. Verify memory safety by ensuring no entries remain after a full flush.
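Test (3) above hinges on the fake clock; a self-contained sketch of that pattern follows. `TtlCache` here is a throwaway stand-in with an injectable clock, not the real cache class:

```dart
/// Minimal TTL-only cache used to illustrate fake-clock testing.
class TtlCache {
  TtlCache(this.clock);
  final DateTime Function() clock;
  final _entries = <String, (String, DateTime)>{};

  void put(String key, String value, Duration ttl) =>
      _entries[key] = (value, clock().add(ttl));

  String? get(String key) {
    final e = _entries[key];
    if (e == null || clock().isAfter(e.$2)) return null; // expired: miss
    return e.$1;
  }
}

void main() {
  var now = DateTime(2025, 1, 1, 12, 0);
  final cache = TtlCache(() => now); // clock reads the mutable variable

  cache.put('node-1', 'subtree', const Duration(minutes: 15));
  assert(cache.get('node-1') == 'subtree'); // fresh: hit

  now = now.add(const Duration(minutes: 16)); // advance past the TTL
  assert(cache.get('node-1') == null); // expired: next access is a miss
}
```

Advancing a mutable `DateTime` variable avoids real sleeps, so the TTL test stays deterministic and fast.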

Component
Hierarchy Aggregation Service
Type: service · Priority: high
Epic Risks (3)
Risk 1: high impact, medium probability (technical)

Recursive aggregation queries across four hierarchy levels (national → region → local → chapter) with 1,400 leaf nodes may be too slow for real-time dashboard requests, exceeding the 200ms target and causing spinner timeouts.

Mitigation & Contingency

Mitigation: Implement aggregation as a Supabase RPC using a single recursive CTE rather than multiple round-trip queries. Pre-compute aggregations nightly via a scheduled Edge Function and cache results. For real-time needs, aggregate only the immediate subtree on demand.

Contingency: Surface a 'Refreshing...' indicator and serve stale cached aggregations immediately. Queue an async recalculation and push updated data via Supabase Realtime when ready, avoiding blocking the admin dashboard.

Risk 2: medium impact, medium probability (scope)

The 5-chapter limit and primary-assignment constraint are NHF-specific. Applying these rules globally may break HLF and Blindeforbundet configurations where different limits apply, requiring per-organization configuration that was not initially scoped.

Mitigation & Contingency

Mitigation: Make the maximum assignment count a configurable value stored in the organization's feature-flag or settings table rather than a hardcoded constant. Design the assignment service to read this limit at runtime per organization.

Contingency: Default the limit to a high value (e.g., 100) for organizations other than NHF, effectively making it non-restrictive, while keeping the enforcement logic intact for when per-org configuration is fully implemented.

Risk 3: medium impact, low probability (technical)

The searchable parent dropdown in HierarchyNodeEditor must search across up to 1,400 units efficiently. Client-side filtering of the full hierarchy may be slow; server-side search adds complexity and latency.

Mitigation & Contingency

Mitigation: Use the in-memory hierarchy cache as the search corpus — since the cache already holds the flat unit list, client-side filtering with a debounced input is sufficient and avoids extra Supabase calls. Pre-build a search index on cache load.

Contingency: Cap the dropdown to showing the 50 most recently accessed units by default, with a 'search all' option that triggers a server-side full-text query. This keeps the common case fast while supporting edge cases.
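The mitigation for this risk (debounced client-side filtering over the cached flat unit list) can be sketched as follows; the `Unit` model, function names, and the 300ms delay are illustrative assumptions:

```dart
import 'dart:async';

/// Minimal stand-in for a hierarchy unit.
class Unit {
  Unit(this.id, this.name);
  final String id;
  final String name;
}

/// Case-insensitive substring filter over the cached flat list.
/// At ~1,400 units this is fast enough to run per keystroke.
List<Unit> searchUnits(List<Unit> all, String query) {
  final q = query.toLowerCase();
  return all.where((u) => u.name.toLowerCase().contains(q)).toList();
}

/// Debounce wrapper: only the last keystroke within [delay] fires.
void Function(String) debounced(
    void Function(String) onQuery, Duration delay) {
  Timer? timer;
  return (q) {
    timer?.cancel(); // a new keystroke resets the pending search
    timer = Timer(delay, () => onQuery(q));
  };
}

void main() {
  final units = [Unit('1', 'Oslo'), Unit('2', 'Bergen')];
  final onChanged = debounced(
      (q) => print(searchUnits(units, q).length),
      const Duration(milliseconds: 300));
  onChanged('os'); // typical TextField.onChanged wiring
}
```

Because the corpus is the already-cached flat list, no Supabase round trip is needed for the common case; the debounce only limits redundant local work.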