Persist active chapter to secure local storage
epic-organizational-hierarchy-management-foundation-task-011 — Extend ActiveChapterState BLoC to persist the selected chapter ID to flutter_secure_storage so it survives app restarts. On app startup, restore the last selected chapter by loading the persisted ID and resolving it through OrganizationUnitRepository. On logout event, clear the persisted chapter ID and reset state to NoChapterSelected.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
Inject a FlutterSecureStorage instance into the Cubit constructor for testability; do not instantiate it internally. Create a thin ChapterStorageService abstraction around flutter_secure_storage to isolate the key-naming logic and make mocking trivial. Key pattern: 'active_chapter_${userId}'. Startup orchestration: the app's root widget (or an AppStartupProvider) should await cubit.restoreFromStorage() before routing to the home screen; use a FutureProvider in Riverpod to gate navigation.
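A minimal sketch of the storage abstraction described above; the class and method names are illustrative, not mandated by the task, but the key pattern matches the spec:

```dart
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

/// Thin wrapper around flutter_secure_storage so key naming lives in
/// one place and the Cubit can depend on a trivially mockable type.
class ChapterStorageService {
  ChapterStorageService(this._storage);

  final FlutterSecureStorage _storage;

  // The Cubit never builds keys itself.
  String _key(String userId) => 'active_chapter_$userId';

  Future<String?> readChapterId(String userId) =>
      _storage.read(key: _key(userId));

  Future<void> writeChapterId(String userId, String chapterId) =>
      _storage.write(key: _key(userId), value: chapterId);

  Future<void> clearChapterId(String userId) =>
      _storage.delete(key: _key(userId));
}
```

The Cubit then takes a ChapterStorageService (or this service takes the injected FlutterSecureStorage), keeping the persistence dependency a single constructor parameter.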
Handle the iOS first-launch keychain-loss edge case: wrap the read in try/catch and treat PlatformException as empty storage. Do not await the write in SelectChapter; fire and forget with unawaited() so state transitions stay synchronous.
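The two rules above can be sketched as a pair of Cubit methods. This is a sketch under assumptions: the repository method findById, the _storage and _userId fields, and the state class names follow the task description but are not confirmed signatures.

```dart
import 'dart:async' show unawaited;

import 'package:flutter/services.dart' show PlatformException;

Future<void> restoreFromStorage() async {
  String? chapterId;
  try {
    chapterId = await _storage.readChapterId(_userId);
  } on PlatformException {
    // iOS can surface keychain errors on first launch after a
    // reinstall; treat any platform failure as "nothing persisted".
    chapterId = null;
  }
  if (chapterId == null) {
    emit(const NoChapterSelected());
    return;
  }
  final unit = await _unitRepository.findById(chapterId); // assumed API
  if (unit == null) {
    // Stale ID: the chapter was deleted since it was persisted.
    unawaited(_storage.clearChapterId(_userId));
    emit(const NoChapterSelected());
  } else {
    emit(ChapterSelected(unit));
  }
}

void selectChapter(OrganizationUnit unit) {
  emit(ChapterSelected(unit));
  // Fire-and-forget write: the state transition stays synchronous.
  unawaited(_storage.writeChapterId(_userId, unit.id));
}
```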
Testing Requirements
Unit tests (flutter_test + bloc_test): (1) SelectChapter triggers a write to mock flutter_secure_storage with correct key and value. (2) ReloadFromStorage reads from mock storage, resolves unit via mock HierarchyCache, emits ChapterSelected. (3) ReloadFromStorage with stale (deleted) unit ID emits NoChapterSelected and clears storage. (4) Logout event deletes the storage key and emits NoChapterSelected.
(5) flutter_secure_storage read failure is caught gracefully. Mock flutter_secure_storage using the package's built-in test mock or a manual fake implementing FlutterSecureStorageInterface. Verify storage.delete() is called on logout in the correct order (before state emission).
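Test (1) could be sketched with bloc_test and mocktail as below; the MockChapterStorageService, the Cubit constructor shape, and testUnit are assumptions for illustration.

```dart
import 'package:bloc_test/bloc_test.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

class MockChapterStorageService extends Mock
    implements ChapterStorageService {}

void main() {
  late MockChapterStorageService storage;

  setUp(() {
    storage = MockChapterStorageService();
    // Stub the write so the fire-and-forget call completes.
    when(() => storage.writeChapterId(any(), any()))
        .thenAnswer((_) async {});
  });

  blocTest<ActiveChapterCubit, ActiveChapterState>(
    'selectChapter persists the ID under active_chapter_<userId>',
    build: () => ActiveChapterCubit(storage: storage, userId: 'u1'),
    act: (cubit) => cubit.selectChapter(testUnit),
    expect: () => [ChapterSelected(testUnit)],
    verify: (_) =>
        verify(() => storage.writeChapterId('u1', testUnit.id)).called(1),
  );
}
```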
Recursive CTE queries for large hierarchies (1,400+ nodes) may exceed Supabase query timeouts or produce unacceptably slow responses, degrading tree load time beyond the 1-second target.
Mitigation & Contingency
Mitigation: Implement Supabase RPC functions for subtree fetches rather than client-side recursive calls. Use materialized path or closure table as a supplemental index for depth-first traversal. Benchmark with realistic NHF data volumes during development.
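On the client, a subtree fetch then becomes a single RPC round trip instead of N recursive queries. The Postgres function name get_subtree and its parameter are hypothetical; the recursive CTE would live inside that server-side function.

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Fetch an entire subtree in one round trip via a server-side
/// Postgres function (hypothetical name: get_subtree).
Future<List<Map<String, dynamic>>> fetchSubtree(
  SupabaseClient client,
  String rootUnitId,
) async {
  final rows = await client.rpc(
    'get_subtree',
    params: {'root_id': rootUnitId},
  );
  return (rows as List).cast<Map<String, dynamic>>();
}
```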
Contingency: Fall back to a pre-computed flat unit list stored in the hierarchy cache with client-side tree reconstruction, trading freshness for speed. Add a background refresh job to keep the cache warm.
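Client-side tree reconstruction from the flat list is O(n) with a single id-to-node map. The node shape below is illustrative:

```dart
class UnitNode {
  UnitNode(this.id, this.parentId, this.name);
  final String id;
  final String? parentId;
  final String name;
  final List<UnitNode> children = [];
}

/// Rebuild the tree from a cached flat list in one pass.
List<UnitNode> buildTree(List<UnitNode> flat) {
  final byId = {for (final n in flat) n.id: n};
  final roots = <UnitNode>[];
  for (final node in flat) {
    final parent = node.parentId == null ? null : byId[node.parentId];
    if (parent == null) {
      roots.add(node); // root, or orphan whose parent is missing
    } else {
      parent.children.add(node);
    }
  }
  return roots;
}
```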
Concurrent writes from multiple admin sessions could leave client caches stale, producing outdated tree views and incorrect ancestor-path computations that corrupt aggregation results.
Mitigation & Contingency
Mitigation: Use optimistic versioning on cache entries with a short TTL (5 minutes) as a safety net. Subscribe to Supabase Realtime on the organization_units table to push invalidation events to all connected clients.
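A sketch of the Realtime invalidation hook, assuming the supabase_flutter v2 channel API; the channel name and the invalidateCache callback are illustrative.

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Subscribe to all changes on organization_units and bust the local
/// hierarchy cache whenever any connected admin writes to the table.
void subscribeToHierarchyChanges(
  SupabaseClient client,
  void Function() invalidateCache,
) {
  client
      .channel('org-units-invalidation')
      .onPostgresChanges(
        event: PostgresChangeEvent.all,
        schema: 'public',
        table: 'organization_units',
        callback: (_) => invalidateCache(),
      )
      .subscribe();
}
```

The short TTL remains the safety net for clients that miss a Realtime event (e.g. while offline).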
Contingency: Provide a manual 'Refresh Hierarchy' action in the admin portal that forces a full cache bust, and display a staleness warning banner when the cache age exceeds the TTL.
Persisting the flat unit list to local storage may expose organization structure data if the device is compromised or the storage is not properly encrypted, violating data protection requirements.
Mitigation & Contingency
Mitigation: Use flutter_secure_storage (AES-256 backed by Keychain/Keystore) for the local unit list cache rather than SharedPreferences. Include only unit IDs, names, and types — no member PII.
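The "IDs, names, and types only" constraint can be enforced at the type level with a cache model that simply has no PII fields. Field names here are illustrative:

```dart
/// Cache entry limited to non-sensitive fields; member PII is
/// structurally impossible to persist through this type.
class CachedUnit {
  const CachedUnit({
    required this.id,
    required this.name,
    required this.type,
  });

  final String id;
  final String name;
  final String type; // e.g. 'chapter', 'region'

  Map<String, dynamic> toJson() => {'id': id, 'name': name, 'type': type};

  factory CachedUnit.fromJson(Map<String, dynamic> json) => CachedUnit(
        id: json['id'] as String,
        name: json['name'] as String,
        type: json['type'] as String,
      );
}
```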
Contingency: Disable local-storage persistence entirely and rely on in-memory cache only. Accept the trade-off of no offline hierarchy access for the security guarantee.