Priority: critical · Complexity: medium · Area: backend · Status: pending · Owner: backend specialist · Tier 3

Acceptance Criteria

HierarchyCache class is implemented with an internal Map<String, OrganizationUnit> keyed by unit ID, enabling O(1) lookups via getNode(id)
getChildren(parentId) returns an unmodifiable List<OrganizationUnit> of all direct children; returns empty list (not null) when parentId has no children
getSubtree(rootId) performs a breadth-first or depth-first traversal and returns all descendant nodes including the root; returns empty list if rootId is not found
getAllNodes() returns an unmodifiable flat list of all cached units
Cache is populated by calling OrganizationUnitRepository and building the tree from the returned list using parent_id linkage
invalidate() clears the in-memory map and tree, and triggers a re-population from OrganizationUnitRepository on next access
Riverpod provider (hierarchyCacheProvider) is defined, is scoped to the app lifecycle, and returns the singleton HierarchyCache instance
All public methods handle the case where the cache is empty or not yet populated (return safe defaults, not exceptions)
Cache population handles cycles in parent_id references gracefully without infinite loops
Multi-tenant isolation: cache is scoped per organization_id extracted from the authenticated user's JWT claims
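The accessor contract above (null for unknown IDs, empty unmodifiable lists, never exceptions) can be sketched in plain Dart. The OrganizationUnit fields shown here are a hypothetical minimum, not the real model:

```dart
import 'dart:collection';

// Hypothetical minimal model; the real OrganizationUnit has more fields.
class OrganizationUnit {
  final String id;
  final String? parentId;
  final String name;
  const OrganizationUnit(
      {required this.id, this.parentId, required this.name});
}

class HierarchyCache {
  final Map<String, OrganizationUnit> _nodes = {};
  final Map<String, List<String>> _childrenIndex = {};

  // O(1) lookup; returns null (never throws) for unknown or unpopulated IDs.
  OrganizationUnit? getNode(String id) => _nodes[id];

  // Empty unmodifiable list, never null, when there are no children.
  List<OrganizationUnit> getChildren(String parentId) {
    final ids = _childrenIndex[parentId] ?? const [];
    return UnmodifiableListView(ids.map((id) => _nodes[id]!).toList());
  }

  // Flat snapshot of every cached unit; safe to call before population.
  List<OrganizationUnit> getAllNodes() =>
      UnmodifiableListView(_nodes.values.toList());
}
```

An empty cache satisfies the same contract, which is what makes the "safe defaults, not exceptions" criterion cheap to meet.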

Technical Requirements

frameworks
Flutter
Riverpod
BLoC
apis
Supabase PostgreSQL — organization_units table via OrganizationUnitRepository
data models
contact_chapter (OrganizationUnit surrogate — organization_unit_id, role_in_chapter)
assignment (scoped by organization unit for multi-tenant filtering)
performance requirements
getNode(id) must complete in O(1) — backed by HashMap
getChildren(parentId) must complete in O(k) where k is the number of direct children
getSubtree(rootId) must complete in O(n) where n is subtree size
Cache population (full fetch + build) must complete under 500ms for hierarchies up to 1,400 units (NHF scale)
Memory footprint must not exceed 2 MB for the largest expected hierarchy
security requirements
Cache must be scoped per organization_id — never mix units across tenants
Organization units are fetched subject to Supabase RLS; the service role key is never used client-side
invalidate() must be callable only from trusted internal code paths, not exposed to UI layer directly

Execution Context

Execution Tier
Tier 3

Tier 3 - 413 tasks

Can start after Tier 2 completes

Implementation Notes

Use a Map<String, OrganizationUnit> for the primary lookup store and a separate Map<String, List<String>> childrenIndex (parentId → list of child IDs) to make getChildren O(k) without scanning. Build both maps in a single O(n) pass during population. For getSubtree, use an iterative queue-based BFS rather than recursion to avoid stack overflow on deep hierarchies (NHF has up to 1,400 units across 3 levels). The Riverpod provider should be a StateNotifierProvider or a plain Provider; avoid AsyncNotifier here since the cache is synchronous after population.
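The single-pass index build and the cycle-safe iterative BFS described above can be sketched as follows; the two-field OrganizationUnit shape is an assumption for brevity:

```dart
import 'dart:collection';

// Minimal stand-in for the real model.
class OrganizationUnit {
  final String id;
  final String? parentId;
  const OrganizationUnit(this.id, [this.parentId]);
}

// One O(n) pass fills both the node map and the children index.
void buildIndexes(
    List<OrganizationUnit> units,
    Map<String, OrganizationUnit> nodes,
    Map<String, List<String>> childrenIndex) {
  for (final u in units) {
    nodes[u.id] = u;
    if (u.parentId != null) {
      childrenIndex.putIfAbsent(u.parentId!, () => []).add(u.id);
    }
  }
}

// Iterative queue-based BFS; the visited set guarantees termination
// even when parent_id references form a cycle.
List<OrganizationUnit> subtree(
    String rootId,
    Map<String, OrganizationUnit> nodes,
    Map<String, List<String>> childrenIndex) {
  if (!nodes.containsKey(rootId)) return const [];
  final result = <OrganizationUnit>[];
  final visited = <String>{};
  final queue = Queue<String>()..add(rootId);
  while (queue.isNotEmpty) {
    final id = queue.removeFirst();
    if (!visited.add(id)) continue; // already seen: cycle, skip
    result.add(nodes[id]!);
    queue.addAll(childrenIndex[id] ?? const []);
  }
  return result;
}
```

The visited set adds O(n) memory but is what turns the cycle-handling acceptance criterion into a one-line guard.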

Population itself is async — expose a Future<void> populate() method. Guard against concurrent population calls with a Completer-based lock. Do not expose the internal maps — return UnmodifiableListView instances and handle null safety throughout.
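One way the Completer-based lock could look; _fetchAndBuild is a placeholder for the repository fetch plus index build, and fetchCount exists only so the sketch is observable:

```dart
import 'dart:async';

class HierarchyCache {
  Completer<void>? _populating;
  int fetchCount = 0; // demo counter, not part of the real API

  Future<void> populate() {
    // Re-use the in-flight population instead of starting a second one.
    final inFlight = _populating;
    if (inFlight != null) return inFlight.future;
    final completer = Completer<void>();
    _populating = completer;
    _fetchAndBuild().then((_) {
      _populating = null; // clear before completing so awaiters see a free lock
      completer.complete();
    }, onError: (Object e, StackTrace st) {
      _populating = null;
      completer.completeError(e, st);
    });
    return completer.future;
  }

  // Placeholder for the OrganizationUnitRepository fetch + index build.
  Future<void> _fetchAndBuild() async {
    fetchCount++;
    await Future<void>.delayed(const Duration(milliseconds: 5));
  }
}
```

All concurrent callers await the same Completer, so the repository is hit once per population cycle; clearing the lock before completing ensures a caller that awaits and then re-populates gets a fresh fetch.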

Testing Requirements

Unit tests (flutter_test): (1) Populate cache from a mock list of OrganizationUnit objects and assert getNode, getChildren, getSubtree, getAllNodes return correct results. (2) Test getNode with an unknown ID returns null. (3) Test getChildren on a leaf node returns empty list. (4) Test getSubtree on root returns all nodes.

(5) Test invalidate() clears state and re-populates on next access. (6) Test cycle detection — list with circular parent_id references must not cause infinite loop. (7) Test multi-tenant isolation — two caches for different org IDs must not share data. Mock OrganizationUnitRepository using Mockito or manual fake.

Target 90%+ line coverage on HierarchyCache class.
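Test (6) might look like the sketch below. populateFrom is a hypothetical synchronous seam for injecting units without the repository; the other names follow the acceptance criteria:

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('circular parent_id references do not hang getSubtree', () {
    final cache = HierarchyCache();
    // 'a' claims 'b' as its parent and vice versa: a data-corruption scenario.
    cache.populateFrom(const [
      OrganizationUnit(id: 'a', parentId: 'b'),
      OrganizationUnit(id: 'b', parentId: 'a'),
    ]);
    final result = cache.getSubtree('a');
    expect(result, hasLength(2)); // each node visited exactly once
  });
}
```

A seam like populateFrom (or an injected fake repository) keeps these tests synchronous and free of network mocking.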

Component
Hierarchy Cache
Category: data · Risk: low
Epic Risks (3)
Impact: high · Probability: medium · Type: technical

Recursive CTE queries for large hierarchies (1,400+ nodes) may exceed Supabase query timeouts or produce unacceptably slow responses, degrading tree load time beyond the 1-second target.

Mitigation & Contingency

Mitigation: Implement Supabase RPC functions for subtree fetches rather than client-side recursive calls. Use materialized path or closure table as a supplemental index for depth-first traversal. Benchmark with realistic NHF data volumes during development.

Contingency: Fall back to a pre-computed flat unit list stored in the hierarchy cache with client-side tree reconstruction, trading freshness for speed. Add a background refresh job to keep the cache warm.

Impact: medium · Probability: low · Type: technical

Concurrent writes from multiple admin sessions could leave the cache stale, producing outdated tree views and incorrect ancestor-path computations that corrupt aggregation results.

Mitigation & Contingency

Mitigation: Use optimistic versioning on cache entries with a short TTL (5 minutes) as a safety net. Subscribe to Supabase Realtime on the organization_units table to push invalidation events to all connected clients.

Contingency: Provide a manual 'Refresh Hierarchy' action in the admin portal that forces a full cache bust, and display a staleness warning banner when the cache age exceeds the TTL.
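A push-invalidation subscription for the mitigation above could look like this, assuming supabase_flutter 2.x; the channel name and the callback wiring are illustrative:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Invalidates the hierarchy cache whenever organization_units changes.
void subscribeToHierarchyInvalidation(
    SupabaseClient supabase, void Function() invalidate) {
  supabase
      .channel('org-units-invalidation') // illustrative channel name
      .onPostgresChanges(
        event: PostgresChangeEvent.all, // insert, update, and delete
        schema: 'public',
        table: 'organization_units',
        callback: (payload) => invalidate(),
      )
      .subscribe();
}
```

Because invalidate() only clears state and the cache repopulates lazily on next access, a burst of change events costs at most one refetch.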

Impact: high · Probability: low · Type: security

Persisting the flat unit list to local storage may expose organization structure data if the device is compromised or the storage is not properly encrypted, violating data protection requirements.

Mitigation & Contingency

Mitigation: Use flutter_secure_storage (AES-256 backed by Keychain/Keystore) for the local unit list cache rather than SharedPreferences. Include only unit IDs, names, and types — no member PII.

Contingency: Disable local-storage persistence entirely and rely on in-memory cache only. Accept the trade-off of no offline hierarchy access for the security guarantee.
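A sketch of the secure-storage mitigation, assuming flutter_secure_storage's read/write API; the storage key and the slim unit shape are illustrative:

```dart
import 'dart:convert';
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

const _storage = FlutterSecureStorage();
const _cacheKey = 'org_hierarchy_v1'; // illustrative key

/// Persists only non-PII fields; encryption is backed by Keychain/Keystore.
Future<void> persistUnits(List<Map<String, dynamic>> units) async {
  final slim = [
    for (final u in units)
      {'id': u['id'], 'name': u['name'], 'type': u['type']}
  ];
  await _storage.write(key: _cacheKey, value: jsonEncode(slim));
}

/// Returns the persisted slim list, or an empty list when nothing is stored.
Future<List<dynamic>> readUnits() async {
  final raw = await _storage.read(key: _cacheKey);
  return raw == null ? const [] : jsonDecode(raw) as List<dynamic>;
}
```

Stripping to id/name/type at write time enforces the "no member PII" rule structurally rather than by convention.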