High priority · Medium complexity · Backend · Pending · Backend specialist · Tier 4

Acceptance Criteria

persistToLocalStorage() serializes the full list of OrganizationUnit objects to JSON and writes it to SharedPreferences or Hive under a key namespaced by organization_id
loadFromLocalStorage() deserializes the stored JSON and re-populates the in-memory cache without making a Supabase network call
Each persisted payload includes a cache_version integer and a cached_at ISO-8601 timestamp
On load, if cache_version does not match the current app's expected version constant, the local data is discarded and a remote fetch is triggered
On load, if cached_at is older than the configured TTL (default 24 hours, configurable), the local data is used for initial render but a background remote fetch is immediately triggered
If local storage contains no data for the organization, the cache falls back to a remote fetch transparently
After every successful remote fetch, persistToLocalStorage() is called automatically
On logout or organization switch, the persisted data for the previous organization is cleared from local storage
Local storage key is namespaced: hierarchy_cache_{organizationId}_v{cacheVersion} to prevent collisions
loadFromLocalStorage() handles corrupted or unparseable JSON gracefully by returning false and triggering a remote fetch
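Taken together, the criteria above amount to a versioned persist/load pair. A minimal pure-Dart sketch, with a hypothetical KeyValueStore abstraction standing in for SharedPreferences (or Hive) so the logic can be exercised without Flutter:

```dart
import 'dart:convert';

// Hypothetical abstraction over SharedPreferences/Hive so the cache logic
// can be exercised without Flutter; names are illustrative.
abstract class KeyValueStore {
  String? getString(String key);
  void setString(String key, String value);
  void remove(String key);
}

class InMemoryStore implements KeyValueStore {
  final _data = <String, String>{};
  @override
  String? getString(String key) => _data[key];
  @override
  void setString(String key, String value) => _data[key] = value;
  @override
  void remove(String key) => _data.remove(key);
}

const cacheVersion = 2; // the app's expected version constant (assumed value)

String cacheKey(String orgId) => 'hierarchy_cache_${orgId}_v$cacheVersion';

// Serializes the unit list to JSON with cache_version and cached_at,
// under the organization-namespaced key.
void persistToLocalStorage(
    KeyValueStore store, String orgId, List<Map<String, dynamic>> unitsJson) {
  store.setString(
      cacheKey(orgId),
      jsonEncode({
        'cache_version': cacheVersion,
        'cached_at': DateTime.now().toUtc().toIso8601String(),
        'units': unitsJson,
      }));
}

// Returns the cached units, or null when the entry is missing, corrupted,
// or version-mismatched; a null result means "fall back to remote fetch".
List<Map<String, dynamic>>? loadFromLocalStorage(
    KeyValueStore store, String orgId) {
  final raw = store.getString(cacheKey(orgId));
  if (raw == null) return null;
  try {
    final decoded = jsonDecode(raw) as Map<String, dynamic>;
    if (decoded['cache_version'] != cacheVersion) {
      store.remove(cacheKey(orgId)); // discard mismatched local data
      return null;
    }
    return (decoded['units'] as List).cast<Map<String, dynamic>>();
  } catch (_) {
    return null; // corrupted or unparseable JSON: trigger remote fetch
  }
}
```

In the app, a thin adapter over SharedPreferences would implement KeyValueStore; the version and corruption handling stays identical.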

Technical Requirements

frameworks
Flutter
Riverpod
shared_preferences (or Hive)
apis
Supabase PostgreSQL — organization_units table (fallback fetch on stale/missing cache)
data models
contact_chapter (OrganizationUnit — must be fully JSON-serializable including all fields)
performance requirements
loadFromLocalStorage() must complete in under 100ms for hierarchies up to 1,400 units
persistToLocalStorage() must be non-blocking — run on a background isolate or use compute() if serialization exceeds 16ms
App cold start must display cached hierarchy within 200ms of widget tree mount
security requirements
Hierarchy data stored in local storage must not contain PII beyond organization unit names and IDs
Storage key must be namespaced by organization_id to prevent cross-tenant data leakage on shared devices
On logout, all persisted hierarchy data must be cleared synchronously before navigation to login screen
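The non-blocking persist requirement can be sketched by moving serialization off the main isolate. Isolate.run (Dart 2.19+) is the pure-Dart analogue of Flutter's compute(); either satisfies the requirement. The function name is an assumption:

```dart
import 'dart:convert';
import 'dart:isolate';

// jsonEncode of ~1,400 units can exceed the 16 ms frame budget, so run it
// off the main isolate; only sendable values (maps, strings) are captured.
Future<String> encodeOffMain(List<Map<String, dynamic>> unitsJson) {
  return Isolate.run(() => jsonEncode(unitsJson));
}
```

persistToLocalStorage() would await encodeOffMain(...) and then perform the comparatively cheap string write on the main isolate.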

Execution Context

Execution Tier
Tier 4

Tier 4 - 323 tasks

Can start after Tier 3 completes

Implementation Notes

Prefer SharedPreferences over Hive for this use case — simpler dependency, no schema migrations needed. Serialize the OrganizationUnit list using jsonEncode(list.map((u) => u.toJson()).toList()). Store metadata separately: hierarchy_cache_meta_{orgId} = {version, cachedAt}. Use a wrapper model CacheMetadata to avoid inline map manipulation.
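The CacheMetadata wrapper suggested above might look like this (the version/cachedAt field names follow the note; the isOlderThan helper is an assumption):

```dart
import 'dart:convert';

/// Wrapper for the separately stored metadata payload
/// (hierarchy_cache_meta_{orgId}), avoiding inline map manipulation.
class CacheMetadata {
  final int version;
  final DateTime cachedAt;

  const CacheMetadata({required this.version, required this.cachedAt});

  factory CacheMetadata.fromJson(Map<String, dynamic> json) => CacheMetadata(
        version: json['version'] as int,
        cachedAt: DateTime.parse(json['cachedAt'] as String),
      );

  Map<String, dynamic> toJson() => {
        'version': version,
        'cachedAt': cachedAt.toUtc().toIso8601String(),
      };

  // TTL check against the configured duration (default 24 hours);
  // `now` is injectable for tests.
  bool isOlderThan(Duration ttl, {DateTime? now}) =>
      (now ?? DateTime.now().toUtc()).difference(cachedAt) > ttl;
}

String metaKey(String orgId) => 'hierarchy_cache_meta_$orgId';
```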

For TTL handling, implement a CacheValidationResult enum: {valid, stale, invalid} returned by a private _validateLocalCache() method. The stale case should: (1) populate in-memory from local data immediately, (2) schedule a background refresh using Future.microtask or unawaited(). This gives instant UI with eventual consistency. Do not use Hive unless SharedPreferences shows performance issues — Hive adds box management complexity.
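A sketch of the CacheValidationResult enum and validation described above, written as a pure function for testability (the private _validateLocalCache() would read stored metadata first; constant names are assumptions):

```dart
enum CacheValidationResult { valid, stale, invalid }

const currentCacheVersion = 2; // assumed app version constant
const defaultTtl = Duration(hours: 24);

// invalid -> discard local data and fetch remote;
// stale   -> render from local data, refresh in the background;
// valid   -> use local data as-is.
CacheValidationResult validateLocalCache({
  required int? version,
  required DateTime? cachedAt,
  Duration ttl = defaultTtl,
  DateTime? now,
}) {
  if (version == null || cachedAt == null) return CacheValidationResult.invalid;
  if (version != currentCacheVersion) return CacheValidationResult.invalid;
  final age = (now ?? DateTime.now().toUtc()).difference(cachedAt);
  return age > ttl ? CacheValidationResult.stale : CacheValidationResult.valid;
}
```

On stale, the caller renders from local data immediately and schedules the background refresh with Future.microtask or unawaited(), as described above.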

Testing Requirements

Unit tests (flutter_test): (1) persistToLocalStorage() writes a JSON string with correct cache_version and cached_at. (2) loadFromLocalStorage() returns true and populates the cache from a pre-seeded SharedPreferences mock. (3) Stale TTL detection — seed a cached_at 25 hours ago, assert loadFromLocalStorage() returns stale=true and triggers a background fetch. (4) Version mismatch — seed cache_version=1 with app expecting version=2, assert local data discarded.

(5) Corrupted JSON — seed an invalid JSON string, assert graceful fallback to remote fetch. (6) Logout clears persisted data for the correct org. Use SharedPreferences.setMockInitialValues() for unit tests. Integration test: full round-trip — fetch from Supabase, persist, restart simulation, load from local storage, assert in-memory cache matches original.
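The TTL, version-mismatch, and corruption cases (tests 3–5) reduce to a single usability predicate. A pure-Dart sketch, using a plain map where the real suite would seed via SharedPreferences.setMockInitialValues(); the function name and metadata shape are assumptions:

```dart
import 'dart:convert';

const appCacheVersion = 2; // assumed expected version

// True only when the seeded metadata passes both the version and TTL checks.
bool cacheIsUsable(Map<String, String> prefs, String orgId, DateTime now) {
  final raw = prefs['hierarchy_cache_meta_$orgId'];
  if (raw == null) return false;
  try {
    final meta = jsonDecode(raw) as Map<String, dynamic>;
    if (meta['version'] != appCacheVersion) return false; // test (4)
    final cachedAt = DateTime.parse(meta['cachedAt'] as String);
    return now.difference(cachedAt) <= const Duration(hours: 24); // test (3)
  } catch (_) {
    return false; // test (5): corrupted JSON falls back to remote fetch
  }
}
```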

Component
Hierarchy Cache (data, low)
Epic Risks (3)
Risk 1: technical (high impact, medium probability)

Recursive CTE queries for large hierarchies (1,400+ nodes) may exceed Supabase query timeouts or produce unacceptably slow responses, degrading tree load time beyond the 1-second target.

Mitigation & Contingency

Mitigation: Implement Supabase RPC functions for subtree fetches rather than client-side recursive calls. Use materialized path or closure table as a supplemental index for depth-first traversal. Benchmark with realistic NHF data volumes during development.

Contingency: Fall back to a pre-computed flat unit list stored in the hierarchy cache with client-side tree reconstruction, trading freshness for speed. Add a background refresh job to keep the cache warm.
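The contingency's client-side tree reconstruction from the cached flat list can be sketched as follows (the id/parent_id field names are assumptions about the OrganizationUnit payload):

```dart
/// Minimal node for client-side reconstruction; fields are illustrative.
class UnitNode {
  final String id;
  final String? parentId;
  final List<UnitNode> children = [];
  UnitNode(this.id, this.parentId);
}

// Rebuilds the tree from a flat list in O(n) via an id -> node index;
// units whose parent is absent from the list are treated as roots.
List<UnitNode> buildTree(List<Map<String, dynamic>> flat) {
  final byId = {
    for (final row in flat)
      row['id'] as String:
          UnitNode(row['id'] as String, row['parent_id'] as String?)
  };
  final roots = <UnitNode>[];
  for (final node in byId.values) {
    final parent = node.parentId == null ? null : byId[node.parentId];
    (parent?.children ?? roots).add(node);
  }
  return roots;
}
```

At 1,400 units this is a single linear pass, comfortably inside the 100 ms load budget.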

Risk 2: technical (medium impact, low probability)

Concurrent writes from multiple admin sessions could cause cache staleness, leading to stale tree views and incorrect ancestor path computations that corrupt aggregation results.

Mitigation & Contingency

Mitigation: Use optimistic versioning on cache entries with a short TTL (5 minutes) as a safety net. Subscribe to Supabase Realtime on the organization_units table to push invalidation events to all connected clients.

Contingency: Provide a manual 'Refresh Hierarchy' action in the admin portal that forces a full cache bust, and display a staleness warning banner when the cache age exceeds the TTL.
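The Realtime invalidation mitigation could be wired roughly like this, assuming supabase_flutter v2's channel API; the onInvalidate hook (which would bust and refetch the hierarchy cache) and the channel name are hypothetical:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

// Subscribes to Postgres change events on organization_units and invalidates
// the local hierarchy cache on any insert/update/delete.
RealtimeChannel subscribeToUnitChanges(
    SupabaseClient supabase, void Function() onInvalidate) {
  return supabase
      .channel('org-units-invalidation')
      .onPostgresChanges(
        event: PostgresChangeEvent.all,
        schema: 'public',
        table: 'organization_units',
        callback: (payload) => onInvalidate(),
      )
      .subscribe();
}
```

Remember to call channel.unsubscribe() on logout or organization switch, alongside clearing the persisted data.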

Risk 3: security (high impact, low probability)

Persisting the flat unit list to local storage may expose organization structure data if the device is compromised or the storage is not properly encrypted, violating data protection requirements.

Mitigation & Contingency

Mitigation: Use flutter_secure_storage (AES-256 backed by Keychain/Keystore) for the local unit list cache rather than SharedPreferences. Include only unit IDs, names, and types — no member PII.

Contingency: Disable local-storage persistence entirely and rely on in-memory cache only. Accept the trade-off of no offline hierarchy access for the security guarantee.