Priority: high · Complexity: medium · Type: integration · Status: pending · Assignee: integration specialist · Tier: 5

Acceptance Criteria

After OrganizationUnitRepository.create(), HierarchyCache.invalidate() is called and cache is re-populated before the create() Future completes
After OrganizationUnitRepository.update(), HierarchyCache.invalidate() is called and cache is re-populated
After OrganizationUnitRepository.softDelete(), HierarchyCache.invalidate() is called; the deleted unit must not appear in subsequent getNode/getChildren/getSubtree calls
After OrganizationUnitRepository.restore(), HierarchyCache.invalidate() is called; the restored unit appears correctly in subsequent cache reads
After UnitAssignmentRepository.create() or delete(), HierarchyCache.invalidate() is called to reflect membership changes
If invalidation + re-population fails (e.g. Supabase unreachable), the previous in-memory state is preserved and the error is surfaced via the repository's return type rather than a thrown exception
ActiveChapterState broadcasting a new chapter triggers a scope-aware cache refresh only if the new chapter's subtree differs from the currently cached scope
No double invalidation occurs when a repository mutation is followed immediately by a chapter change — debounce or deduplication logic is in place
All invalidation calls are logged at debug level with the triggering operation name for traceability
HierarchyCache invalidation is thread-safe — concurrent mutation calls do not result in partial or inconsistent cache state
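The failure criterion above (error via return type, previous state preserved) could be sketched like this in Dart. All names (`CacheRefreshResult`, the `fetchAll` callback, the node representation) are illustrative, not part of the existing codebase:

```dart
// Hypothetical sketch: surface a re-population failure through the return
// type instead of throwing, keeping the old in-memory state intact.
sealed class CacheRefreshResult {
  const CacheRefreshResult();
}

class CacheRefreshed extends CacheRefreshResult {
  const CacheRefreshed();
}

class CacheRefreshFailed extends CacheRefreshResult {
  const CacheRefreshFailed(this.error);
  final Object error; // e.g. a network error when Supabase is unreachable
}

class HierarchyCache {
  List<Map<String, dynamic>> _nodes = [];

  Future<CacheRefreshResult> invalidate(
      Future<List<Map<String, dynamic>>> Function() fetchAll) async {
    try {
      final fresh = await fetchAll(); // re-fetch organization_units
      _nodes = fresh; // swap only after a successful fetch
      return const CacheRefreshed();
    } catch (e) {
      // Fetch failed: _nodes is left untouched, error goes to the caller.
      return CacheRefreshFailed(e);
    }
  }
}
```

Swapping the node list only after the fetch succeeds is what preserves the previous state; callers pattern-match on the result instead of wrapping the call in try/catch.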

Technical Requirements

frameworks
Flutter
Riverpod
BLoC
apis
Supabase PostgreSQL — organization_units table re-fetch on invalidation
OrganizationUnitRepository (internal) — create, update, softDelete, restore
UnitAssignmentRepository (internal) — create, delete
data models
contact_chapter (OrganizationUnit — full entity re-fetched on invalidation)
assignment (UnitAssignment — membership changes trigger cache refresh)
performance requirements
Invalidation + re-population must complete under 1 second on typical mobile network (4G)
UI must not block during invalidation — cache serves stale data while re-population is in progress
Debounce rapid consecutive invalidation triggers to a minimum 300ms window to avoid redundant Supabase fetches
security requirements
Cache re-population fetches must respect Supabase RLS — scoped to the authenticated user's organization_id
Invalidation hook must not be callable from untrusted UI code — internal to the repository layer only

Execution Context

Execution Tier
Tier 5

Tier 5 - 253 tasks

Can start after Tier 4 completes

Implementation Notes

Use the Observer/callback pattern: inject HierarchyCache into both repositories at construction time and call cache.invalidate() at the end of each mutating method, after the Supabase call succeeds. Do not call invalidate() in a fire-and-forget manner; await the re-population so callers can be confident the cache is fresh when the mutation Future resolves. For concurrency safety, wrap the invalidate+populate sequence in a lock (e.g. Lock from the synchronized package) to serialize concurrent invalidation calls. For debouncing rapid triggers from ActiveChapterState, apply rxdart's debounceTime operator to the activeChapterProvider stream.
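The debounce wiring could look like the following sketch. The stream and callback names (`activeChapterStream`, `refreshForChapter`) are assumptions, not existing identifiers:

```dart
import 'dart:async';

import 'package:rxdart/rxdart.dart';

// Sketch: debounce rapid chapter changes before triggering a scope-aware
// cache refresh, per the 300ms window in the performance requirements.
StreamSubscription<String> wireChapterInvalidation(
  Stream<String> activeChapterStream,
  Future<void> Function(String chapterId) refreshForChapter,
) {
  return activeChapterStream
      .distinct() // no refresh if the broadcast chapter did not change
      .debounceTime(const Duration(milliseconds: 300))
      .listen((chapterId) => refreshForChapter(chapterId));
}
```

`distinct()` also covers the acceptance criterion that a refresh only happens when the new chapter differs from the currently cached scope, at least for the simple "same chapter re-broadcast" case.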

Keep invalidation logic in a CacheInvalidationCoordinator helper class rather than scattering it across repositories — this makes the dependency graph explicit and testable.
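A minimal sketch of that coordinator, assuming a `_repopulate` callback is injected for the Supabase re-fetch (the class and parameter names are illustrative):

```dart
import 'dart:developer' show log;

import 'package:synchronized/synchronized.dart';

// Sketch of CacheInvalidationCoordinator: a Lock serializes concurrent
// invalidations, and re-population is awaited so a repository mutation's
// Future only completes once the cache is fresh.
class CacheInvalidationCoordinator {
  CacheInvalidationCoordinator(this._repopulate);

  /// Re-fetches organization_units and swaps the cache contents.
  final Future<void> Function() _repopulate;
  final _lock = Lock();

  /// Serialized invalidate-and-repopulate, logged with the triggering
  /// operation name for traceability (level 500 ≈ debug/fine).
  Future<void> invalidate({required String trigger}) {
    return _lock.synchronized(() async {
      log('HierarchyCache invalidation triggered by $trigger', level: 500);
      await _repopulate();
    });
  }
}
```

Because `Lock.synchronized` queues callers, two overlapping mutations result in two sequential repopulations rather than an interleaved, partially written cache.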

Testing Requirements

Unit tests (flutter_test): (1) create() on OrganizationUnitRepository calls HierarchyCache.invalidate() exactly once. (2) softDelete() causes the deleted unit to be absent from cache after re-population. (3) restore() causes the restored unit to be present after re-population. (4) UnitAssignment mutation triggers cache invalidation.

(5) Failed re-population preserves previous in-memory cache. (6) Concurrent mutations do not corrupt cache state (use async test with multiple simultaneous calls). (7) Debounce — rapid successive mutations result in only one re-population call. Mock HierarchyCache and capture invalidate() call count.
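Test (1) could be shaped roughly as below, using a hand-rolled fake that counts calls instead of a mocking framework; the repository construction is elided because its wiring is internal:

```dart
import 'package:flutter_test/flutter_test.dart';

// Illustrative fake standing in for HierarchyCache in unit tests.
class FakeHierarchyCache {
  int invalidateCalls = 0;
  Future<void> invalidate() async => invalidateCalls++;
}

void main() {
  test('create() invalidates the hierarchy cache exactly once', () async {
    final cache = FakeHierarchyCache();
    // Construct OrganizationUnitRepository with `cache` injected and a
    // stubbed Supabase client, then: await repository.create(unit);
    await cache.invalidate(); // stands in for the repository call here
    expect(cache.invalidateCalls, 1);
  });
}
```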

Integration test: perform a softDelete against a local Supabase instance and assert the deleted unit is absent from a subsequent getNode() call.

Component
Hierarchy Cache
Data sensitivity: low
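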
Epic Risks (3)
High impact · Medium probability · Technical

Recursive CTE queries for large hierarchies (1,400+ nodes) may exceed Supabase query timeouts or produce unacceptably slow responses, degrading tree load time beyond the 1-second target.

Mitigation & Contingency

Mitigation: Implement Supabase RPC functions for subtree fetches rather than client-side recursive calls. Use materialized path or closure table as a supplemental index for depth-first traversal. Benchmark with realistic NHF data volumes during development.
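Calling such an RPC from the Flutter client might look like this sketch. The function name `get_unit_subtree` and its `root_id` parameter are assumptions; the RPC itself would hold the recursive CTE server-side:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

// Hypothetical: one server-side recursive-CTE RPC replaces N client
// round-trips when loading a deep subtree.
Future<List<Map<String, dynamic>>> fetchSubtree(
    SupabaseClient supabase, String rootId) async {
  final rows = await supabase.rpc(
    'get_unit_subtree',
    params: {'root_id': rootId},
  );
  return List<Map<String, dynamic>>.from(rows as List);
}
```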

Contingency: Fall back to a pre-computed flat unit list stored in the hierarchy cache with client-side tree reconstruction, trading freshness for speed. Add a background refresh job to keep the cache warm.

Medium impact · Low probability · Technical

Concurrent writes from multiple admin sessions could cause cache staleness, leading to stale tree views and incorrect ancestor path computations that corrupt aggregation results.

Mitigation & Contingency

Mitigation: Use optimistic versioning on cache entries with a short TTL (5 minutes) as a safety net. Subscribe to Supabase Realtime on the organization_units table to push invalidation events to all connected clients.
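The Realtime subscription could be wired roughly as follows (API shape per supabase_flutter v2; channel name and callback are illustrative):

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

// Sketch: push invalidation to every connected client whenever a row in
// organization_units changes, instead of relying on the TTL alone.
RealtimeChannel subscribeToUnitChanges(
    SupabaseClient supabase, void Function() onInvalidate) {
  return supabase
      .channel('public:organization_units')
      .onPostgresChanges(
        event: PostgresChangeEvent.all,
        schema: 'public',
        table: 'organization_units',
        callback: (_) => onInvalidate(),
      )
      .subscribe();
}
```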

Contingency: Provide a manual 'Refresh Hierarchy' action in the admin portal that forces a full cache bust, and display a staleness warning banner when the cache age exceeds the TTL.

High impact · Low probability · Security

Persisting the flat unit list to local storage may expose organization structure data if the device is compromised or the storage is not properly encrypted, violating data protection requirements.

Mitigation & Contingency

Mitigation: Use flutter_secure_storage (AES-256 backed by Keychain/Keystore) for the local unit list cache rather than SharedPreferences. Include only unit IDs, names, and types — no member PII.

Contingency: Disable local-storage persistence entirely and rely on in-memory cache only. Accept the trade-off of no offline hierarchy access for the security guarantee.