Priority: high · Complexity: low · Type: testing · Status: pending · Assignee: testing specialist · Tier: 3

Acceptance Criteria

Test: cold-start fetch — Supabase client is called exactly once, returned data is stored in both memory and disk cache
Test: memory-cache hit — second call to `warmFor(orgId)` does not invoke the Supabase client; returns immediately with cached value
Test: disk-cache restore — after simulating app restart (clearing in-memory map), `warmFor(orgId)` reads from the mocked disk adapter without calling Supabase
Test: org change invalidation — after `warmFor(org_A)` then `warmFor(org_B)`, querying org_A triggers a fresh Supabase fetch (memory evicted), org_B is served from cache
Test: network unavailable — when Supabase client throws a network exception, `OrgBrandingCache` returns `OrgBrandingData.defaults()` without rethrowing
Test: disk unavailable — when storage adapter throws, cache continues operating with in-memory layer only and returns defaults on cold start
All 6 core scenarios have dedicated named test cases
Supabase client mock verifies call count, e.g. mocktail's `verify(() => mockClient.from(...)).called(1)` or the mockito equivalent
Tests run in under 500ms total (all synchronous/fake-async, no real I/O)
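As a sketch of the call-count assertions above, assuming `OrgBrandingCache` takes its remote dependency via constructor injection behind a hypothetical `OrgBrandingRemote` abstraction (names are illustrative, not the real API):

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

// Hypothetical abstraction over the Supabase client used by the cache.
abstract class OrgBrandingRemote {
  Future<Map<String, dynamic>> fetchBranding(String orgId);
}

class MockOrgBrandingRemote extends Mock implements OrgBrandingRemote {}

void main() {
  test('memory-cache hit: remote is called exactly once', () async {
    final remote = MockOrgBrandingRemote();
    when(() => remote.fetchBranding('org_A'))
        .thenAnswer((_) async => {'logoUrl': 'https://example.com/logo.png'});

    // Hypothetical constructor; real wiring may go through Riverpod instead.
    final cache = OrgBrandingCache(remote: remote);
    await cache.warmFor('org_A'); // cold start: hits the remote
    await cache.warmFor('org_A'); // second call: served from memory

    verify(() => remote.fetchBranding('org_A')).called(1);
  });
}
```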

Technical Requirements

Frameworks
flutter_test
Riverpod
mockito or mocktail

Data Models
OrgBrandingData

Performance Requirements
Full unit test suite must complete within 500ms

Security Requirements
Mock data must not contain real organization credentials or real logo URLs

Execution Context

Execution Tier
Tier 3 (413 tasks)

Can start after Tier 2 completes

Implementation Notes

To make `OrgBrandingCache` testable, its Supabase client and storage adapter dependencies must be injected — either via Riverpod provider overrides or constructor parameters on the notifier. Avoid calling `Supabase.instance` directly inside the notifier. If using Riverpod, define a `supabaseClientProvider` and `storageAdapterProvider` that can be overridden in tests via `ProviderContainer(overrides: [...])`. The `MockStorageAdapter` should implement a simple `read(key)/write(key, value)/delete(key)` interface backed by a `Map`.
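A minimal sketch of this wiring, assuming the provider names from the note above; the `StorageAdapter` interface and its Map-backed test double follow the `read/write/delete` shape described, but exact signatures are an assumption:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

// Illustrative providers, overridden in tests via ProviderContainer overrides.
final supabaseClientProvider = Provider<SupabaseClient>(
    (ref) => throw UnimplementedError('overridden at app startup'));
final storageAdapterProvider = Provider<StorageAdapter>(
    (ref) => throw UnimplementedError('overridden at app startup'));

// Minimal storage interface the cache depends on (assumed shape).
abstract class StorageAdapter {
  Future<String?> read(String key);
  Future<void> write(String key, String value);
  Future<void> delete(String key);
}

// Map-backed test double, as suggested in the note above.
class MockStorageAdapter implements StorageAdapter {
  final Map<String, String> store = {};

  @override
  Future<String?> read(String key) async => store[key];

  @override
  Future<void> write(String key, String value) async => store[key] = value;

  @override
  Future<void> delete(String key) async => store.remove(key);
}
```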

For the disk-restore test, manually populate the mock storage adapter before constructing the cache, then call `warmFor(orgId)` and assert Supabase was not called.
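That test might look like the following sketch, assuming a Map-backed `MockStorageAdapter` as described above and a hypothetical mocktail `MockOrgBrandingRemote` standing in for the Supabase client (the cache key and JSON shape are illustrative):

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

void main() {
  test('disk-cache restore: no Supabase call after simulated restart', () async {
    // Pre-populate disk before constructing the cache; key/shape are assumptions.
    final storage = MockStorageAdapter()
      ..store['branding:org_A'] = '{"logoUrl":"https://example.com/logo.png"}';
    final remote = MockOrgBrandingRemote();

    // A fresh cache instance has an empty in-memory map, simulating a restart.
    final cache = OrgBrandingCache(remote: remote, storage: storage);
    await cache.warmFor('org_A');

    // Data came from disk, so the remote was never touched.
    verifyNever(() => remote.fetchBranding(any()));
  });
}
```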

Testing Requirements

Pure unit tests using `flutter_test` with mocked dependencies. Use `mocktail` (preferred) or `mockito` for mock generation. Create a `MockSupabaseClient` and a `MockStorageAdapter` that implement the same abstract interfaces used by `OrgBrandingCache`. Use `ProviderContainer` from Riverpod test utilities to instantiate the notifier with overridden providers.
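Instantiating the notifier under test could look like this sketch, assuming the `supabaseClientProvider` and `storageAdapterProvider` names above and a hypothetical `orgBrandingCacheProvider`:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('notifier resolves with mocked dependencies', () {
    final container = ProviderContainer(overrides: [
      supabaseClientProvider.overrideWithValue(MockSupabaseClient()),
      storageAdapterProvider.overrideWithValue(MockStorageAdapter()),
    ]);
    // Dispose the container after each test to avoid shared state.
    addTearDown(container.dispose);

    // Hypothetical provider name for the cache notifier.
    final cache = container.read(orgBrandingCacheProvider.notifier);
    expect(cache, isNotNull);
  });
}
```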

Use `fakeAsync` for any timer-based behavior (e.g., TTL expiry if applicable). Each test must be fully isolated — no shared mutable state between tests. Organize tests in a `group('OrgBrandingCache', () { ... })` block with nested groups per scenario.
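The suggested structure, sketched with `fake_async` driving a hypothetical TTL (the 15-minute value and assertions are placeholders):

```dart
import 'package:fake_async/fake_async.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  group('OrgBrandingCache', () {
    group('TTL expiry', () {
      test('re-fetches after TTL elapses', () {
        fakeAsync((async) {
          // ... construct cache with mocked deps and a 15-minute TTL ...
          async.elapse(const Duration(minutes: 16)); // advance the fake clock
          // ... assert a fresh Supabase fetch was triggered ...
        });
      });
    });
  });
}
```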

Component
Organization Branding Cache
Epic Risks (3)
Risk 1: technical (high impact, medium probability)

iOS Keychain and Android Keystore have meaningfully different failure modes and permission models. The secure storage plugin may throw platform-specific exceptions (e.g., biometric enrollment required, Keystore wipe after device re-enrolment) that crash higher-level flows if not caught at the adapter boundary.

Mitigation & Contingency

Mitigation: Wrap all storage plugin calls in try/catch at the adapter layer and expose a typed `StorageResult<T>` instead of throwing. Run integration tests against iOS simulators and Android emulators in CI using Fastlane. Document the exception matrix during the spike.

Contingency: If a platform-specific failure cannot be handled gracefully, fall back to in-memory-only storage for the current session and surface a non-blocking warning to the user; log the event for investigation.
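A sketch of the typed-result pattern at the adapter boundary, assuming `flutter_secure_storage` (or similar) as the plugin; `_plugin` and the class names are illustrative:

```dart
import 'package:flutter/services.dart' show PlatformException;
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

// Typed result replacing thrown exceptions at the adapter boundary.
sealed class StorageResult<T> {
  const StorageResult();
}

class StorageSuccess<T> extends StorageResult<T> {
  const StorageSuccess(this.value);
  final T value;
}

class StorageFailure<T> extends StorageResult<T> {
  const StorageFailure(this.error);
  final Object error;
}

class SecureStorageAdapter {
  final _plugin = const FlutterSecureStorage();

  Future<StorageResult<String?>> safeRead(String key) async {
    try {
      final value = await _plugin.read(key: key);
      return StorageSuccess(value);
    } on PlatformException catch (e) {
      // Keystore wipe, missing biometric enrollment, etc. land here
      // instead of crashing higher-level flows.
      return StorageFailure(e);
    }
  }
}
```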

Risk 2: integration (high impact, medium probability)

Setting a session-level Postgres variable (app.current_org_id) via a Supabase RPC requires that RLS policies on every table reference this variable. If the Supabase project schema has not yet defined these policies, the configurator will set the variable but queries will return unfiltered data, giving a false sense of security.

Mitigation & Contingency

Mitigation: Include a smoke-test RPC in the SupabaseRLSTenantConfigurator that verifies the variable is readable from a policy-scoped query before marking setup as complete. Coordinate with the database migration task to ensure RLS policies reference app.current_org_id before the configurator is shipped.

Contingency: If RLS policies are not in place at integration time, gate all data-fetching components behind a runtime check in SupabaseRLSTenantConfigurator.isRlsScopeVerified(); block data access and surface a developer warning until policies are confirmed.
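The verification gate might be sketched as follows; the RPC names (`set_current_org_id`, `get_current_org_id`) are assumptions standing in for whatever the schema actually defines:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

class SupabaseRLSTenantConfigurator {
  SupabaseRLSTenantConfigurator(this._client);

  final SupabaseClient _client;
  bool _verified = false;

  Future<void> configure(String orgId) async {
    // Set the session-level Postgres variable via an RPC (assumed name).
    await _client.rpc('set_current_org_id', params: {'org_id': orgId});

    // Smoke test: a policy-scoped query must echo back the variable we set,
    // otherwise RLS policies are not actually referencing it.
    final echoed = await _client.rpc('get_current_org_id');
    _verified = echoed == orgId;
  }

  /// Data-fetching components should check this before running queries.
  bool get isRlsScopeVerified => _verified;
}
```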

Risk 3: technical (medium impact, medium probability)

Fetching feature flags from Supabase on every cold start adds network latency before the first branded screen renders. On slow connections this may cause a perceptible blank-screen gap or cause the app to render with default (unflagged) state before flags arrive.

Mitigation & Contingency

Mitigation: Persist the last-known flag set to disk in the FeatureFlagProvider and serve stale-while-revalidate on startup. Gate flag refresh behind a configurable TTL (default 15 minutes) so network calls are not made on every launch.

Contingency: If stale flags cause a feature to appear that should be hidden, add a post-load re-evaluation pass that reconciles the live flag set with the rendered widget tree and triggers a targeted rebuild where needed.
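The stale-while-revalidate mitigation could be sketched like this; the injected function parameters and the `FeatureFlagProvider` shape are assumptions, not the real implementation:

```dart
import 'dart:async';

class FeatureFlagProvider {
  FeatureFlagProvider({
    required Future<Map<String, bool>> Function() fetchRemote,
    required Future<Map<String, bool>?> Function() readDisk,
    required Future<void> Function(Map<String, bool>) writeDisk,
    this.ttl = const Duration(minutes: 15), // configurable refresh gate
  })  : _fetchRemote = fetchRemote,
        _readDisk = readDisk,
        _writeDisk = writeDisk;

  final Future<Map<String, bool>> Function() _fetchRemote;
  final Future<Map<String, bool>?> Function() _readDisk;
  final Future<void> Function(Map<String, bool>) _writeDisk;
  final Duration ttl;

  Map<String, bool> _flags = {};
  DateTime? _lastFetch;

  /// Serve last-known flags immediately; refresh in the background if stale.
  Future<Map<String, bool>> load() async {
    _flags = await _readDisk() ?? _flags;
    final stale =
        _lastFetch == null || DateTime.now().difference(_lastFetch!) > ttl;
    if (stale) {
      // Fire-and-forget: the UI renders with stale flags right away.
      unawaited(_fetchRemote().then((fresh) {
        _flags = fresh;
        _lastFetch = DateTime.now();
        return _writeDisk(fresh);
      }));
    }
    return _flags;
  }
}
```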