Implement SecureStorageAdapter for iOS Keychain and Android Keystore
epic-organization-selection-and-onboarding-foundation-task-002 — Implement the concrete SecureStorageAdapter using flutter_secure_storage under the hood, backed by the Android Keystore (AES-256 encryption) on Android and the Keychain on iOS. Write the platform-specific configuration (accessibility, background access flags). Expose a Riverpod provider so dependent components can inject it via DI.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
The mapping from the SecureStorageKey enum to String keys must be centralized in a single const Map.
Document the iOS accessibility setting choice in a comment — future developers must not change it without understanding the background-fetch implications for session restoration.
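The notes above can be sketched roughly as follows. This is an illustrative shape only: the enum values, key strings, class name, and provider name are assumptions, not the final API, and the accessibility/backup flags shown are one plausible configuration of the flutter_secure_storage options.

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

// Enum values are placeholders for whatever this task actually stores.
enum SecureStorageKey { accessToken, refreshToken, currentOrgId }

// Centralized key mapping, as required by the note above.
const Map<SecureStorageKey, String> kSecureStorageKeys = {
  SecureStorageKey.accessToken: 'access_token',
  SecureStorageKey.refreshToken: 'refresh_token',
  SecureStorageKey.currentOrgId: 'current_org_id',
};

class FlutterSecureStorageAdapter {
  const FlutterSecureStorageAdapter(this._storage);

  final FlutterSecureStorage _storage;

  Future<void> write(SecureStorageKey key, String value) =>
      _storage.write(key: kSecureStorageKeys[key]!, value: value);

  Future<String?> read(SecureStorageKey key) =>
      _storage.read(key: kSecureStorageKeys[key]!);
}

final secureStorageProvider = Provider<FlutterSecureStorageAdapter>((ref) {
  return const FlutterSecureStorageAdapter(
    FlutterSecureStorage(
      // first_unlock_this_device keeps values readable during background
      // fetch once the device has been unlocked after boot, which session
      // restoration depends on. Do not change this without understanding
      // the background-fetch implications (see note above).
      iOptions: IOSOptions(
        accessibility: KeychainAccessibility.first_unlock_this_device,
      ),
      aOptions: AndroidOptions(encryptedSharedPreferences: true),
    ),
  );
});
```

Downstream components take the adapter via `ref.watch(secureStorageProvider)` rather than constructing it, which is what makes the provider-override substitution in tests possible.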
Testing Requirements
Unit tests for the concrete implementation are handled in the dedicated test task (task-003). For this task: verify compilation on both platforms, and manually confirm on a physical device or emulator that a written value survives an app restart (hot restart is insufficient — cold start required). Write one smoke test using FakeAsync that confirms the Riverpod provider resolves to a FlutterSecureStorageAdapter instance. Verify the provider override pattern works in a ProviderContainer test so downstream tests can substitute the fake.
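The ProviderContainer checks could look roughly like this. The stub classes at the top stand in for the real adapter module so the sketch is self-contained; in the actual suite they would be imports, and `FakeSecureStorageAdapter` is a hypothetical name for whatever fake task-003 defines.

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:flutter_test/flutter_test.dart';

// Stand-ins for the real adapter module (illustrative only).
class FlutterSecureStorageAdapter {}

class FakeSecureStorageAdapter implements FlutterSecureStorageAdapter {}

final secureStorageProvider = Provider<FlutterSecureStorageAdapter>(
  (_) => FlutterSecureStorageAdapter(),
);

void main() {
  test('provider resolves to the concrete adapter', () {
    final container = ProviderContainer();
    addTearDown(container.dispose);
    expect(
      container.read(secureStorageProvider),
      isA<FlutterSecureStorageAdapter>(),
    );
  });

  test('provider can be overridden with a fake', () {
    final container = ProviderContainer(
      overrides: [
        secureStorageProvider.overrideWithValue(FakeSecureStorageAdapter()),
      ],
    );
    addTearDown(container.dispose);
    expect(
      container.read(secureStorageProvider),
      isA<FakeSecureStorageAdapter>(),
    );
  });
}
```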
iOS Keychain and Android Keystore have meaningfully different failure modes and permission models. The secure storage plugin may throw platform-specific exceptions (e.g., biometric enrollment required, Keystore wipe after device re-enrollment) that crash higher-level flows if not caught at the adapter boundary.
Mitigation & Contingency
Mitigation: Wrap all storage plugin calls in try/catch at the adapter layer and expose a typed StorageResult&lt;T&gt; instead of throwing. Write integration tests in CI against real devices and simulators for both platforms using Fastlane. Document the exception matrix during the spike.
Contingency: If a platform-specific failure cannot be handled gracefully, fall back to in-memory-only storage for the current session and surface a non-blocking warning to the user; log the event for investigation.
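One possible shape for the typed result and the catch-at-the-boundary pattern the mitigation describes; the class and function names are illustrative, not a settled design.

```dart
import 'package:flutter/services.dart' show PlatformException;

sealed class StorageResult<T> {
  const StorageResult();
}

final class StorageSuccess<T> extends StorageResult<T> {
  const StorageSuccess(this.value);
  final T value;
}

final class StorageFailure<T> extends StorageResult<T> {
  const StorageFailure(this.error);
  final Object error;
}

// At the adapter boundary, plugin exceptions become values, not throws,
// so higher-level flows never crash on platform-specific failures.
Future<StorageResult<String?>> safeRead(
  Future<String?> Function() readFn,
) async {
  try {
    return StorageSuccess(await readFn());
  } on PlatformException catch (e) {
    // e.g. biometric enrollment required, Keystore wiped after re-enrollment
    return StorageFailure(e);
  }
}
```

A sealed hierarchy lets callers exhaustively switch on success vs. failure, which is also where the in-memory fallback from the contingency would hook in.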
Setting a session-level Postgres variable (app.current_org_id) via a Supabase RPC requires that RLS policies on every table reference this variable. If the Supabase project schema has not yet defined these policies, the configurator will set the variable but queries will return unfiltered data, giving a false sense of security.
Mitigation & Contingency
Mitigation: Include a smoke-test RPC in the SupabaseRLSTenantConfigurator that verifies the variable is readable from a policy-scoped query before marking setup as complete. Coordinate with the database migration task to ensure RLS policies reference app.current_org_id before the configurator is shipped.
Contingency: If RLS policies are not in place at integration time, gate all data-fetching components behind a runtime check in SupabaseRLSTenantConfigurator.isRlsScopeVerified(); block data access and surface a developer warning until policies are confirmed.
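For concreteness, an RLS policy that actually references the session variable might look like the fragment below. The table and column names are assumptions; the point is that without a policy of this shape on every tenant-scoped table, setting app.current_org_id filters nothing.

```sql
-- Illustrative only: "projects" and "org_id" are placeholder names.
ALTER TABLE projects ENABLE ROW LEVEL SECURITY;

CREATE POLICY org_isolation ON projects
  USING (org_id = current_setting('app.current_org_id', true)::uuid);
```

Passing `true` as the second argument to current_setting returns NULL instead of erroring when the variable is unset, so unscoped sessions match no rows rather than failing queries outright.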
Fetching feature flags from Supabase on every cold start adds network latency before the first branded screen renders. On slow connections this may cause a perceptible blank-screen gap or cause the app to render with default (unflagged) state before flags arrive.
Mitigation & Contingency
Mitigation: Persist the last-known flag set to disk in the FeatureFlagProvider and serve stale-while-revalidate on startup. Gate flag refresh behind a configurable TTL (default 15 minutes) so network calls are not made on every launch.
Contingency: If stale flags cause a feature to appear that should be hidden, add a post-load re-evaluation pass that reconciles the live flag set with the rendered widget tree and triggers a targeted rebuild where needed.
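The stale-while-revalidate behavior from the mitigation can be sketched as follows; the cache and fetch callbacks are placeholders for whatever persistence and Supabase client the FeatureFlagProvider actually uses.

```dart
import 'dart:async';

class FeatureFlagProvider {
  FeatureFlagProvider(this._readCache, this._writeCache, this._fetchRemote);

  final Future<Map<String, bool>?> Function() _readCache;
  final Future<void> Function(Map<String, bool>) _writeCache;
  final Future<Map<String, bool>> Function() _fetchRemote;

  static const ttl = Duration(minutes: 15); // configurable in the real impl
  DateTime? _lastRefresh;
  Map<String, bool> _flags = const {};

  /// Serves the last-known flags immediately; refreshes in the background
  /// only when the TTL has expired, so cold start never blocks on network.
  Future<Map<String, bool>> load() async {
    _flags = await _readCache() ?? _flags;
    final stale = _lastRefresh == null ||
        DateTime.now().difference(_lastRefresh!) > ttl;
    if (stale) {
      unawaited(_refresh()); // fire-and-forget revalidation
    }
    return _flags;
  }

  Future<void> _refresh() async {
    _flags = await _fetchRemote();
    _lastRefresh = DateTime.now();
    await _writeCache(_flags);
  }
}
```

The post-load reconciliation pass from the contingency would listen for `_refresh` completing and diff the new flag set against what was rendered, triggering targeted rebuilds only where a flag value actually changed.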