Implement OrgBrandingCache with memory and disk layers
epic-organization-selection-and-onboarding-foundation-task-006 — Build the OrgBrandingCache that fetches organization logo URLs and design-token overrides (primary color, font family, border radius) from Supabase on first org selection, stores them in a two-tier cache (in-memory Map + shared_preferences or Hive for disk persistence), and exposes a Riverpod AsyncNotifier. The cache must pre-warm during org selection so branded screens render without a blocking network call.
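The two-tier read path described above can be sketched as follows. This is a minimal illustration, not the final API — `OrgBrandingData`, `_readFromDisk`, and `_fetchAndPersist` are assumed names for this sketch:

```dart
import 'dart:async';

// Assumed value object; the real one carries the full token set
// (logo URL, primary color, font family, border radius).
class OrgBrandingData {
  const OrgBrandingData({required this.logoUrl, required this.primaryColorHex});
  final String logoUrl;
  final String primaryColorHex;
}

class OrgBrandingCache {
  final _memory = <String, OrgBrandingData>{};

  Future<OrgBrandingData> warmFor(String orgId) async {
    // 1. Memory hit: no I/O at all.
    final cached = _memory[orgId];
    if (cached != null) return cached;

    // 2. Disk hit: serve persisted tokens, refresh in the background.
    final fromDisk = await _readFromDisk(orgId);
    if (fromDisk != null) {
      _memory[orgId] = fromDisk;
      unawaited(_fetchAndPersist(orgId)); // stale-while-revalidate
      return fromDisk;
    }

    // 3. Full miss: blocking network fetch (first org selection only).
    return _fetchAndPersist(orgId);
  }

  // Stubs standing in for the Supabase fetch and shared_preferences/Hive I/O.
  Future<OrgBrandingData> _fetchAndPersist(String orgId) async =>
      throw UnimplementedError();
  Future<OrgBrandingData?> _readFromDisk(String orgId) async =>
      throw UnimplementedError();
}
```

Because `warmFor` is awaited during org selection, steps 1–2 make subsequent branded screens render without a blocking network call.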
Acceptance Criteria
Technical Requirements
Execution Context
Tier 2 - 518 tasks
Can start after Tier 1 completes
Implementation Notes
Model the cache as a Riverpod `AsyncNotifier`.
Default tokens should reference the existing design token system constants (e.g., `AppColors.brandPrimary`) rather than hardcoded hex values. Avoid storing `Color` objects directly to disk — serialize as hex strings and parse on read. Pre-warm pattern: in the org selection screen's `onOrgSelected` handler, `await orgBrandingCache.warmFor(orgId)` before triggering navigation.
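A minimal sketch of the hex serialization rule above; the helper names are illustrative, not an existing API:

```dart
import 'dart:ui' show Color;

/// Serialize a Color to an ARGB hex string for disk persistence
/// (never store Color objects directly).
String colorToHex(Color c) =>
    '#${c.value.toRadixString(16).padLeft(8, '0').toUpperCase()}';

/// Parse on read; fall back to a default token when the stored
/// value is missing or malformed.
Color hexToColor(String? hex, {Color fallback = const Color(0xFF000000)}) {
  if (hex == null) return fallback;
  final parsed = int.tryParse(hex.replaceFirst('#', ''), radix: 16);
  return parsed == null ? fallback : Color(parsed);
}
```

In practice the `fallback` would be the design-system constant (e.g. `AppColors.brandPrimary`) rather than a hardcoded value.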
Testing Requirements
Unit tests required (see task-007). Additionally verify: (1) `OrgBrandingData` value object equality and copyWith behavior, (2) serialization helpers for disk persistence, (3) Riverpod provider graph — ensure `OrgBrandingCache` provider correctly declares its dependencies.
No widget tests in this task — widget integration is covered by OrgCardWidget tests. Mock the Supabase client using a Dart interface/abstract class to enable deterministic unit testing without network calls.
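One way to express the mockable Supabase boundary described above — the interface and row shape here are assumptions for illustration:

```dart
/// Narrow interface over the Supabase client so unit tests can
/// substitute a deterministic fake without network calls.
abstract class BrandingRemoteSource {
  Future<Map<String, dynamic>?> fetchBrandingRow(String orgId);
}

/// Test double: returns canned rows, never touches the network.
class FakeBrandingRemoteSource implements BrandingRemoteSource {
  FakeBrandingRemoteSource(this.rows);
  final Map<String, Map<String, dynamic>> rows;

  @override
  Future<Map<String, dynamic>?> fetchBrandingRow(String orgId) async =>
      rows[orgId];
}
```

The production implementation wraps the real Supabase client behind the same interface, so the cache under test only ever sees `BrandingRemoteSource`.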
iOS Keychain and Android Keystore have meaningfully different failure modes and permission models. The secure storage plugin may throw platform-specific exceptions (e.g., biometric enrollment required, or a Keystore wipe after device re-enrollment) that crash higher-level flows if not caught at the adapter boundary.
Mitigation & Contingency
Mitigation: Wrap all storage plugin calls in try/catch at the adapter layer and expose a typed StorageResult<T> instead of throwing. Run integration tests on iOS simulators and Android emulators in CI using Fastlane. Document the exception matrix during the spike.
Contingency: If a platform-specific failure cannot be handled gracefully, fall back to in-memory-only storage for the current session and surface a non-blocking warning to the user; log the event for investigation.
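The typed-result pattern from the mitigation can be sketched like this (a sketch under the stated assumptions; `readKey` and the error payload shape are illustrative):

```dart
/// Typed result returned by the storage adapter instead of throwing.
sealed class StorageResult<T> {
  const StorageResult();
}

class StorageSuccess<T> extends StorageResult<T> {
  const StorageSuccess(this.value);
  final T value;
}

class StorageFailure<T> extends StorageResult<T> {
  const StorageFailure(this.error);
  final Object error; // platform exception captured at the adapter boundary
}

/// Adapter-layer wrapper: plugin exceptions never escape to callers,
/// enabling the in-memory-only fallback described in the contingency.
Future<StorageResult<String?>> readKey(
    Future<String?> Function() pluginRead) async {
  try {
    return StorageSuccess(await pluginRead());
  } catch (e) {
    return StorageFailure(e);
  }
}
```

Callers pattern-match on the sealed class, so every failure path is handled explicitly at compile time.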
Setting a session-level Postgres variable (app.current_org_id) via a Supabase RPC requires that RLS policies on every table reference this variable. If the Supabase project schema has not yet defined these policies, the configurator will set the variable but queries will return unfiltered data, giving a false sense of security.
Mitigation & Contingency
Mitigation: Include a smoke-test RPC in the SupabaseRLSTenantConfigurator that verifies the variable is readable from a policy-scoped query before marking setup as complete. Coordinate with the database migration task to ensure RLS policies reference app.current_org_id before the configurator is shipped.
Contingency: If RLS policies are not in place at integration time, gate all data-fetching components behind a runtime check in SupabaseRLSTenantConfigurator.isRlsScopeVerified(); block data access and surface a developer warning until policies are confirmed.
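The runtime verification gate could look roughly like this — a hypothetical sketch, assuming a policy-scoped RPC that echoes back the session variable:

```dart
/// Gate data access until RLS scoping is confirmed at runtime.
class SupabaseRLSTenantConfigurator {
  bool _verified = false;

  /// Smoke test: after setting app.current_org_id, a policy-scoped
  /// query should only see rows for that org. [readScopedOrgId] is a
  /// stand-in for that verification RPC.
  Future<void> verifyScope(
      Future<String?> Function() readScopedOrgId, String expectedOrgId) async {
    final seen = await readScopedOrgId();
    _verified = seen == expectedOrgId;
  }

  /// Data-fetching components check this before issuing queries.
  bool get isRlsScopeVerified => _verified;
}
```

If verification fails, components surface a developer warning instead of silently operating on unfiltered data.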
Fetching feature flags from Supabase on every cold start adds network latency before the first branded screen renders. On slow connections this may cause a perceptible blank-screen gap or cause the app to render with default (unflagged) state before flags arrive.
Mitigation & Contingency
Mitigation: Persist the last-known flag set to disk in the FeatureFlagProvider and serve stale-while-revalidate on startup. Gate flag refresh behind a configurable TTL (default 15 minutes) so network calls are not made on every launch.
Contingency: If stale flags cause a feature to appear that should be hidden, add a post-load re-evaluation pass that reconciles the live flag set with the rendered widget tree and triggers a targeted rebuild where needed.
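The TTL-gated stale-while-revalidate behavior from the mitigation can be sketched as follows; field and method names are illustrative, not the shipped FeatureFlagProvider API:

```dart
import 'dart:async';

class FeatureFlagProvider {
  FeatureFlagProvider({this.ttl = const Duration(minutes: 15)});
  final Duration ttl;

  DateTime? _lastFetch;
  Map<String, bool> _flags = {}; // last-known set, persisted to disk

  /// Returns immediately with the last-known flags; only kicks off a
  /// network refresh when the TTL has expired, so cold starts are
  /// never blocked on the network.
  Map<String, bool> flagsOnStartup(
      Future<Map<String, bool>> Function() fetchRemote) {
    final expired = _lastFetch == null ||
        DateTime.now().difference(_lastFetch!) >= ttl;
    if (expired) {
      unawaited(fetchRemote().then((fresh) {
        _flags = fresh;
        _lastFetch = DateTime.now();
        // Here the post-load re-evaluation pass would reconcile any
        // widgets rendered from stale flags.
      }));
    }
    return _flags; // possibly stale; revalidation is non-blocking
  }
}
```

Serving the stale set synchronously is what closes the blank-screen gap; the background refresh plus reconciliation pass handles the rare case where a stale flag showed something it shouldn't.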