Implement in-memory HierarchyCache with tree structure
epic-organizational-hierarchy-management-foundation-task-008 — Implement the HierarchyCache class using an in-memory map keyed by unit ID for O(1) node lookups, and a tree structure for traversal. Expose getNode(id), getChildren(parentId), getSubtree(rootId), and getAllNodes() methods. Cache must be populated from OrganizationUnitRepository and expose an invalidate() hook for forced refresh. Use Riverpod provider for DI.
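A minimal in-memory sketch of the class described above, assuming a simple OrganizationUnit model with id/parentId/name fields (the real model and its fields live in the project's domain layer, so treat these shapes as placeholders):

```dart
// Sketch only: OrganizationUnit here is a stand-in for the real domain model.
class OrganizationUnit {
  final String id;
  final String? parentId;
  final String name;
  const OrganizationUnit(
      {required this.id, this.parentId, required this.name});
}

class HierarchyCache {
  final Map<String, OrganizationUnit> _nodes = {}; // O(1) lookup by unit ID
  final Map<String, List<String>> _childIds = {}; // parentId -> child IDs

  void populateFrom(List<OrganizationUnit> units) {
    invalidate();
    for (final u in units) {
      _nodes[u.id] = u;
      _childIds.putIfAbsent(u.parentId ?? '', () => []).add(u.id);
    }
  }

  OrganizationUnit? getNode(String id) => _nodes[id];

  List<OrganizationUnit> getChildren(String parentId) =>
      (_childIds[parentId] ?? const []).map((id) => _nodes[id]!).toList();

  /// Iterative DFS with a visited set, so circular parent_id data
  /// cannot cause an infinite loop.
  List<OrganizationUnit> getSubtree(String rootId) {
    final result = <OrganizationUnit>[];
    final visited = <String>{};
    final stack = <String>[rootId];
    while (stack.isNotEmpty) {
      final id = stack.removeLast();
      if (!visited.add(id)) continue;
      final node = _nodes[id];
      if (node == null) continue;
      result.add(node);
      stack.addAll(_childIds[id] ?? const []);
    }
    return result;
  }

  List<OrganizationUnit> getAllNodes() => _nodes.values.toList();

  void invalidate() {
    _nodes.clear();
    _childIds.clear();
  }
}
```

The visited-set traversal in getSubtree doubles as the cycle guard required by the testing section below.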
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
Use a Map keyed by unit ID for O(1) node lookups, plus a per-node child-ID index for tree traversal.
Population itself is async; expose a Future-returning populate method so callers can await the initial load from OrganizationUnitRepository.
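The Riverpod wiring might look like the sketch below. Provider names, the repository interface, and the fetchAllUnits/populateFrom method names are assumptions, not the project's actual API:

```dart
import 'package:riverpod/riverpod.dart';

// Assumed shapes; the real classes come from the domain layer.
class OrganizationUnit {}

abstract class OrganizationUnitRepository {
  Future<List<OrganizationUnit>> fetchAllUnits();
}

class HierarchyCache {
  void populateFrom(List<OrganizationUnit> units) {
    /* index units by ID, build child index */
  }
}

// Overridden with a concrete implementation at app startup.
final organizationUnitRepositoryProvider =
    Provider<OrganizationUnitRepository>((ref) => throw UnimplementedError());

// FutureProvider caches the populated instance so consumers await the
// async load exactly once; ref.invalidate(hierarchyCacheProvider)
// doubles as the forced-refresh hook.
final hierarchyCacheProvider = FutureProvider<HierarchyCache>((ref) async {
  final repo = ref.watch(organizationUnitRepositoryProvider);
  return HierarchyCache()..populateFrom(await repo.fetchAllUnits());
});
```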
Testing Requirements
Unit tests (flutter_test): (1) Populate cache from a mock list of OrganizationUnit objects and assert getNode, getChildren, getSubtree, getAllNodes return correct results. (2) Test getNode with an unknown ID returns null. (3) Test getChildren on a leaf node returns empty list. (4) Test getSubtree on root returns all nodes.
(5) Test invalidate() clears state and re-populates on next access. (6) Test cycle detection: a list with circular parent_id references must not cause an infinite loop. (7) Test multi-tenant isolation: two caches for different org IDs must not share data. Mock OrganizationUnitRepository using Mockito or a manual fake.
Target 90%+ line coverage on HierarchyCache class.
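A sketch of the cycle-detection case from item (6), using a manual fake rather than Mockito; it assumes the HierarchyCache API named in the task description plus illustrative populateFrom/OrganizationUnit shapes:

```dart
import 'package:flutter_test/flutter_test.dart';
// Import the production HierarchyCache and OrganizationUnit here.

void main() {
  test('circular parent_id references do not loop forever', () {
    // Two units that point at each other: a deliberate cycle.
    final cache = HierarchyCache()
      ..populateFrom(const [
        OrganizationUnit(id: 'a', parentId: 'b', name: 'A'),
        OrganizationUnit(id: 'b', parentId: 'a', name: 'B'),
      ]);

    // A visited-set traversal returns each node at most once.
    expect(cache.getSubtree('a').length, 2);
    expect(cache.getNode('missing'), isNull);
    expect(cache.getChildren('a').map((u) => u.id), ['b']);
  });
}
```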
Recursive CTE queries for large hierarchies (1,400+ nodes) may exceed Supabase query timeouts or produce unacceptably slow responses, degrading tree load time beyond the 1-second target.
Mitigation & Contingency
Mitigation: Implement Supabase RPC functions for subtree fetches rather than client-side recursive calls. Use materialized path or closure table as a supplemental index for depth-first traversal. Benchmark with realistic NHF data volumes during development.
Contingency: Fall back to a pre-computed flat unit list stored in the hierarchy cache with client-side tree reconstruction, trading freshness for speed. Add a background refresh job to keep the cache warm.
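The RPC-based subtree fetch from the mitigation above might look like this sketch. The function name get_subtree and its root_id parameter are hypothetical; the server-side Postgres function (over a recursive CTE or closure table) would need to be defined separately:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

// Hypothetical: pushes the recursion to the database so the subtree
// arrives in one round trip instead of N client-side queries.
Future<List<Map<String, dynamic>>> fetchSubtree(
    SupabaseClient client, String rootId) async {
  final rows =
      await client.rpc('get_subtree', params: {'root_id': rootId});
  return (rows as List).cast<Map<String, dynamic>>();
}
```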
Concurrent writes from multiple admin sessions could leave the cache stale, producing outdated tree views and incorrect ancestor path computations that corrupt aggregation results.
Mitigation & Contingency
Mitigation: Use optimistic versioning on cache entries with a short TTL (5 minutes) as a safety net. Subscribe to Supabase Realtime on the organization_units table to push invalidation events to all connected clients.
Contingency: Provide a manual 'Refresh Hierarchy' action in the admin portal that forces a full cache bust, and display a staleness warning banner when the cache age exceeds the TTL.
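The Realtime invalidation hook from the mitigation above could be wired roughly as follows, using the supabase_flutter v2 channel API; the channel name and the invalidate() call on the cache are assumptions:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

// Invalidate the in-memory cache whenever any row in
// organization_units changes, so all clients refetch on next access.
void subscribeToHierarchyChanges(
    SupabaseClient client, void Function() invalidateCache) {
  client
      .channel('org-hierarchy-invalidation')
      .onPostgresChanges(
        event: PostgresChangeEvent.all,
        schema: 'public',
        table: 'organization_units',
        callback: (payload) => invalidateCache(),
      )
      .subscribe();
}
```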
Persisting the flat unit list to local storage may expose organization structure data if the device is compromised or the storage is not properly encrypted, violating data protection requirements.
Mitigation & Contingency
Mitigation: Use flutter_secure_storage (AES-256 backed by Keychain/Keystore) for the local unit list cache rather than SharedPreferences. Include only unit IDs, names, and types — no member PII.
Contingency: Disable local-storage persistence entirely and rely on in-memory cache only. Accept the trade-off of no offline hierarchy access for the security guarantee.
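If the mitigation path is taken, persisting the stripped-down unit list might look like the sketch below; the storage key and the persisted field set are illustrative (per the mitigation, only IDs, names, and types belong here, never member PII):

```dart
import 'dart:convert';
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

const _storage = FlutterSecureStorage(); // Keychain/Keystore backed

// Persist only non-sensitive structural fields of each unit.
Future<void> persistUnitList(
    List<({String id, String? parentId, String name, String type})>
        units) async {
  final json = jsonEncode([
    for (final u in units)
      {'id': u.id, 'parentId': u.parentId, 'name': u.name, 'type': u.type}
  ]);
  await _storage.write(key: 'hierarchy_unit_cache', value: json);
}
```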