Priority: high · Complexity: low · Area: infrastructure · Status: pending · Assignee: backend specialist · Execution Tier: 3

Acceptance Criteria

`FeatureFlagProvider.isEnabled(flagKey)` returns a synchronous bool without awaiting network I/O — flags are pre-loaded before any consumer calls this method
Flags are fetched from the Supabase `org_feature_flags` table filtered by the current org ID immediately after org selection
Fetched flags are stored in memory and in `SecureStorageAdapter` with a configurable TTL (default: 15 minutes)
After TTL expiry, the next `isEnabled` call triggers a background re-fetch; the stale value is returned until the refresh completes (stale-while-revalidate pattern)
When the org changes, all cached flags for the previous org are cleared before loading new org flags
Percentage-based rollout: a flag with `rollout_percentage = 50` returns `true` for approximately 50% of users — user assignment is deterministic (same user always gets same result)
Org-level override: if `org_override = true/false` is set, it takes precedence over percentage rollout
When Supabase is unreachable and no cached flags exist, `isEnabled` returns `false` for all flags (safe default)
Provider is accessible in the widget tree via `ref.watch(featureFlagProvider)` and `ref.read(featureFlagProvider.notifier).isEnabled(key)`
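The criteria above imply a fixed resolution order: kill switch, then org override, then percentage rollout, then the safe default. A minimal sketch of that resolution step (class shape and field names mirror the data model but are illustrative, not the final API):

```dart
/// Raw flag row as fetched from `org_feature_flags`.
class FlagRow {
  final String flagKey;
  final bool isEnabled;
  final int rolloutPercentage; // 0-100
  final bool? orgOverride;     // null = no org-level override set

  const FlagRow(
      this.flagKey, this.isEnabled, this.rolloutPercentage, this.orgOverride);
}

/// Resolve a row to the boolean that consumers will see.
/// `userBucket` is a deterministic 0-99 value derived from the user ID.
bool resolve(FlagRow row, int userBucket) {
  if (!row.isEnabled) return false;                     // kill switch wins
  if (row.orgOverride != null) return row.orgOverride!; // org override next
  return userBucket < row.rolloutPercentage;            // rollout last
}
```

Because resolution happens at load time, `isEnabled` only ever performs a map lookup over the pre-resolved booleans.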

Technical Requirements

Frameworks
Flutter
Riverpod
Supabase Flutter SDK
flutter_secure_storage
APIs
Supabase REST API — `org_feature_flags` table select with org_id filter
Data Models
org_feature_flags (flag_key, is_enabled, rollout_percentage, org_override, org_id)
Performance Requirements
`isEnabled()` must execute in under 0.1ms (synchronous Map lookup — no async in the hot path)
Background re-fetch after TTL must not block the UI thread
Initial flag load must complete before the org home screen renders
Security Requirements
Flags are cached in `flutter_secure_storage` (not shared_preferences) because flag state may reveal unreleased product roadmap
Cache keys must be namespaced per org to prevent cross-org flag leakage: `ff_{orgId}_{flagKey}`
Percentage rollout user assignment must use a stable, non-reversible hash of the user ID — not a random value
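One way to satisfy the stable, non-reversible hash requirement is to derive the rollout bucket from a SHA-256 digest of the user ID (a sketch; assumes `package:crypto` is available):

```dart
import 'dart:convert' show utf8;
import 'package:crypto/crypto.dart' show sha256;

/// Deterministic 0-99 bucket for percentage rollout.
/// SHA-256 keeps the mapping stable across app runs and platforms,
/// and the user ID cannot be trivially recovered from the bucket.
int rolloutBucket(String userId) {
  final digest = sha256.convert(utf8.encode(userId)).bytes;
  // Fold the first four digest bytes into an int, then reduce to 0-99.
  final n = (digest[0] << 24) | (digest[1] << 16) | (digest[2] << 8) | digest[3];
  return n % 100;
}
```

A flag with `rollout_percentage = 50` would then evaluate as `rolloutBucket(userId) < 50`, giving the same answer for the same user on every launch.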

Execution Context

Execution Tier
Tier 3

Tier 3 - 413 tasks

Can start after Tier 2 completes

Implementation Notes

Represent the provider as a Riverpod `Notifier` where `FeatureFlagState` holds a `Map` of resolved flag values (already evaluated — boolean, not raw flag objects). Evaluation (rollout percentage → bool) happens at load time, not at `isEnabled()` call time, keeping the hot path O(1). For deterministic percentage assignment, derive a 0–99 bucket from a stable hash of the user ID (e.g. the first bytes of `sha256(userId)` mod 100) and compare it to `rolloutPercentage`; do not use `userId.hashCode`, which Dart does not guarantee to be stable across runs or platforms and which would violate the security requirement above. Org override evaluation order: `orgOverride != null ? orgOverride : (rollout evaluation)`. TTL management: store a `DateTime loadedAt` in `FeatureFlagState`; in `isEnabled`, check if `DateTime.now().difference(loadedAt) > ttl` and if so, schedule a background `_refresh()` via `Future.microtask`. Define a `FeatureFlagRepository` abstraction that wraps Supabase access — this makes the unit tests in task-009 straightforward. The `SecureStorageAdapter` used here should be the same interface as defined for `OrgBrandingCache` to maintain consistency.
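A condensed sketch of the notifier described above, with the stale-while-revalidate TTL check on the hot path (provider and repository names such as `featureFlagRepositoryProvider` and `fetchResolvedFlags` are assumptions, not a confirmed API):

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

abstract class FeatureFlagRepository {
  Future<Map<String, bool>> fetchResolvedFlags(String orgId);
}

// Overridden in tests; wired to Supabase in production.
final featureFlagRepositoryProvider =
    Provider<FeatureFlagRepository>((_) => throw UnimplementedError());

class FeatureFlagState {
  final Map<String, bool> resolved; // flag_key -> already-evaluated bool
  final DateTime loadedAt;
  const FeatureFlagState(this.resolved, this.loadedAt);
}

class FeatureFlagNotifier extends Notifier<FeatureFlagState> {
  static const ttl = Duration(minutes: 15);
  bool _refreshing = false;
  String? _orgId;

  @override
  FeatureFlagState build() =>
      FeatureFlagState(const {}, DateTime.fromMillisecondsSinceEpoch(0));

  /// Called after org selection, before the org home screen renders.
  /// Clears the previous org's flags before loading the new set.
  Future<void> loadForOrg(String orgId) async {
    _orgId = orgId;
    state = FeatureFlagState(const {}, DateTime.fromMillisecondsSinceEpoch(0));
    await _refresh();
  }

  /// Synchronous hot path: O(1) map lookup, safe default `false`.
  bool isEnabled(String key) {
    if (DateTime.now().difference(state.loadedAt) > ttl && !_refreshing) {
      _refreshing = true;
      // Stale-while-revalidate: serve the current value, refresh off-path.
      Future.microtask(
          () => _refresh().whenComplete(() => _refreshing = false));
    }
    return state.resolved[key] ?? false;
  }

  Future<void> _refresh() async {
    final orgId = _orgId;
    if (orgId == null) return;
    final repo = ref.read(featureFlagRepositoryProvider);
    state = FeatureFlagState(await repo.fetchResolvedFlags(orgId), DateTime.now());
  }
}

final featureFlagProvider =
    NotifierProvider<FeatureFlagNotifier, FeatureFlagState>(
        FeatureFlagNotifier.new);
```

Persisting to `SecureStorageAdapter` and the Supabase fetch itself would live behind the repository, keeping the notifier trivially testable.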

Testing Requirements

Unit tests required (see task-009). For implementation verification: confirm that `isEnabled` returns synchronously after `loadForOrg` completes by wrapping a widget test in `ProviderScope` and asserting the flag value is available in the first build cycle without a `FutureBuilder`. Verify TTL behavior with `fakeAsync`, advancing time past the TTL boundary. Verify percentage rollout determinism by calling `isEnabled` 100 times with the same user ID and asserting the result never changes.
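The determinism check is a plain unit test; the sketch below uses an inline stand-in for the production bucket helper (the real one should use the SHA-256 derivation from the security requirements, not this toy hash):

```dart
import 'package:flutter_test/flutter_test.dart';

// Stand-in: stable user-ID -> 0-99 mapping (production: sha256-based).
int rolloutBucket(String userId) =>
    userId.codeUnits.fold(0, (acc, c) => (acc * 31 + c) % 100);

void main() {
  test('percentage rollout is deterministic per user', () {
    final first = rolloutBucket('user-123');
    for (var i = 0; i < 100; i++) {
      // Same user must land in the same bucket on every evaluation.
      expect(rolloutBucket('user-123'), first);
    }
  });
}
```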

Component
Feature Flag Provider
Epic Risks (3)
Risk 1: technical (high impact, medium probability)

iOS Keychain and Android Keystore have meaningfully different failure modes and permission models. The secure storage plugin may throw platform-specific exceptions (e.g., biometric enrollment required, Keystore wipe after device re-enrolment) that crash higher-level flows if not caught at the adapter boundary.

Mitigation & Contingency

Mitigation: Wrap all storage plugin calls in try/catch at the adapter layer and expose a typed StorageResult<T> instead of throwing. Write integration tests on real device simulators for both platforms in CI using Fastlane. Document the exception matrix during spike.

Contingency: If a platform-specific failure cannot be handled gracefully, fall back to in-memory-only storage for the current session and surface a non-blocking warning to the user; log the event for investigation.
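The typed result mentioned in the mitigation could be a small sealed hierarchy at the adapter boundary (a sketch assuming Dart 3 sealed classes; adapter and method names are illustrative):

```dart
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

/// Typed result so callers never see raw platform exceptions.
sealed class StorageResult<T> {
  const StorageResult();
}

class StorageOk<T> extends StorageResult<T> {
  final T value;
  const StorageOk(this.value);
}

class StorageFailure<T> extends StorageResult<T> {
  final Object error; // original platform exception, kept for logging
  const StorageFailure(this.error);
}

class SecureStorageAdapter {
  final FlutterSecureStorage _storage = const FlutterSecureStorage();

  Future<StorageResult<String?>> read(String key) async {
    try {
      return StorageOk(await _storage.read(key: key));
    } catch (e) {
      // Keychain/Keystore failures surface here instead of crashing callers,
      // allowing the in-memory-only contingency to kick in.
      return StorageFailure(e);
    }
  }
}
```

Callers pattern-match on the result, so a Keystore wipe degrades to a recoverable `StorageFailure` rather than an uncaught exception in a higher-level flow.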

Risk 2: integration (high impact, medium probability)

Setting a session-level Postgres variable (app.current_org_id) via a Supabase RPC requires that RLS policies on every table reference this variable. If the Supabase project schema has not yet defined these policies, the configurator will set the variable but queries will return unfiltered data, giving a false sense of security.

Mitigation & Contingency

Mitigation: Include a smoke-test RPC in the SupabaseRLSTenantConfigurator that verifies the variable is readable from a policy-scoped query before marking setup as complete. Coordinate with the database migration task to ensure RLS policies reference app.current_org_id before the configurator is shipped.

Contingency: If RLS policies are not in place at integration time, gate all data-fetching components behind a runtime check in SupabaseRLSTenantConfigurator.isRlsScopeVerified(); block data access and surface a developer warning until policies are confirmed.
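The smoke test in the mitigation could be a round-trip check from the client: set the session variable, then read it back through a policy-scoped path. A sketch using the Supabase Flutter SDK's `rpc` call (`set_current_org` and `rls_scope_check` are hypothetical RPC names that the migration task would have to define):

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Returns true only if a policy-scoped query can actually observe the
/// session variable set by the configurator.
Future<bool> verifyRlsScope(SupabaseClient client, String orgId) async {
  // Hypothetical RPC that runs: SET app.current_org_id = :org_id
  await client.rpc('set_current_org', params: {'org_id': orgId});
  // Hypothetical RPC returning current_setting('app.current_org_id', true)
  final seen = await client.rpc('rls_scope_check');
  // A mismatch means policies are not reading the variable: unsafe to proceed.
  return seen == orgId;
}
```

`isRlsScopeVerified()` would cache this result and gate all data-fetching components until it returns true.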

Risk 3: technical (medium impact, medium probability)

Fetching feature flags from Supabase on every cold start adds network latency before the first branded screen renders. On slow connections this may cause a perceptible blank-screen gap or cause the app to render with default (unflagged) state before flags arrive.

Mitigation & Contingency

Mitigation: Persist the last-known flag set to disk in the FeatureFlagProvider and serve stale-while-revalidate on startup. Gate flag refresh behind a configurable TTL (default 15 minutes) so network calls are not made on every launch.

Contingency: If stale flags cause a feature to appear that should be hidden, add a post-load re-evaluation pass that reconciles the live flag set with the rendered widget tree and triggers a targeted rebuild where needed.