Priority: critical · Complexity: medium · Area: backend · Status: pending · Owner: backend specialist · Tier: 1

Acceptance Criteria

StatsAsyncNotifier extends AsyncNotifier<StatsSnapshot> and is registered as a provider (statsAsyncNotifierProvider)
build() method calls the stats repository and returns a Future<StatsSnapshot> — initial state is AsyncLoading while the fetch is in progress
ref.watch(selectedTimeWindowProvider) is called inside build() so that any time window change triggers Riverpod to re-run build() automatically
A public invalidate() method exists that calls ref.invalidateSelf() to trigger a fresh fetch without requiring callers to import the provider ref
When the repository throws a network error, the state transitions to AsyncError with the original exception and a StackTrace
When the repository succeeds after a previous error, the state transitions to AsyncData<StatsSnapshot>
Time window changes produce an AsyncLoading state followed by AsyncData (no stale data visible during re-fetch)
Unit tests cover: initial fetch success, initial fetch failure, time window change triggers re-fetch, invalidate() triggers re-fetch
The notifier does not hold a direct reference to the Supabase client — it uses the stats repository abstraction only
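The criteria above can be sketched as a Riverpod 2.x AsyncNotifier. This is a minimal sketch, not the final implementation: `statsRepositoryProvider` and `fetchSnapshot` are assumed names for the repository abstraction, and `selectedTimeWindowProvider` is taken from the task description.

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

// Assumed names: statsRepositoryProvider, StatsRepository.fetchSnapshot,
// selectedTimeWindowProvider, StatsSnapshot. Only the repository
// abstraction is touched; no direct Supabase client reference.
class StatsAsyncNotifier extends AsyncNotifier<StatsSnapshot> {
  @override
  Future<StatsSnapshot> build() async {
    // Watching (not reading) makes Riverpod re-run build() whenever the
    // selected time window changes.
    final window = ref.watch(selectedTimeWindowProvider);
    final repo = ref.watch(statsRepositoryProvider);
    // A thrown error here surfaces as AsyncError with its StackTrace.
    return repo.fetchSnapshot(window);
  }

  /// Forces a fresh fetch. invalidateSelf() re-runs build() and emits
  /// AsyncLoading automatically, so no manual state assignment is needed.
  void invalidate() => ref.invalidateSelf();
}

final statsAsyncNotifierProvider =
    AsyncNotifierProvider<StatsAsyncNotifier, StatsSnapshot>(
        StatsAsyncNotifier.new);
```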

Technical Requirements

frameworks
flutter
riverpod (AsyncNotifier)
flutter_riverpod
apis
StatsRepository (internal abstraction over Supabase)
data models
StatsSnapshot
TimeWindow
AsyncValue<StatsSnapshot>
performance requirements
State update from AsyncLoading to AsyncData must complete within the repository response time; the notifier itself must add no processing overhead beyond the repository call
Re-fetch on time window change must debounce rapid successive changes by at least 300ms to avoid redundant Supabase queries
security requirements
Notifier must not cache or log StatsSnapshot data to persistent storage
Repository errors must not expose raw Supabase error messages to the UI — wrap in domain exceptions

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

`AsyncNotifier.build()` is the correct lifecycle hook; do NOT use a constructor or initState-style pattern. If rapid time window changes are expected, implement the debounce with a `Timer` inside the notifier and cancel it in `ref.onDispose`. The invalidate() method should call `ref.invalidateSelf()`, which is the Riverpod-idiomatic way to force a rebuild. Avoid setting `state = AsyncLoading()` manually before the re-fetch; `ref.invalidateSelf()` handles this automatically.
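A minimal sketch of the Timer-based debounce described above. The event entry point (`onRapidChange`) is a hypothetical hook for illustration; provider and repository names are assumptions from this task.

```dart
import 'dart:async';
import 'package:flutter_riverpod/flutter_riverpod.dart';

// Sketch only: the debounce Timer lives inside the notifier and is
// cancelled via ref.onDispose, as the note above recommends.
class DebouncedStatsNotifier extends AsyncNotifier<StatsSnapshot> {
  Timer? _debounce;

  @override
  Future<StatsSnapshot> build() async {
    // Cancel any pending debounce when the provider is disposed or
    // rebuilt, so no ghost invalidation fires afterwards.
    ref.onDispose(() => _debounce?.cancel());
    final window = ref.watch(selectedTimeWindowProvider);
    return ref.watch(statsRepositoryProvider).fetchSnapshot(window);
  }

  /// Collapses a burst of change events into one re-fetch: each event
  /// resets the 300 ms timer, and only the last one triggers invalidation.
  void onRapidChange() {
    _debounce?.cancel();
    _debounce = Timer(
        const Duration(milliseconds: 300), () => ref.invalidateSelf());
  }
}
```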

If the project uses riverpod_generator, annotate with @riverpod and let code generation produce the provider — this is preferred over manual provider declaration for consistency. Ensure selectedTimeWindowProvider is watched (ref.watch) not read (ref.read) so Riverpod tracks the dependency correctly.
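If code generation is used, the same notifier might look like the sketch below. The class and file names are assumptions; riverpod_generator emits the provider (here `statsAsyncProvider`) and the `_$StatsAsync` base class into the `.g.dart` part file.

```dart
import 'package:riverpod_annotation/riverpod_annotation.dart';

part 'stats_async_notifier.g.dart'; // produced by riverpod_generator

// Assumed names: selectedTimeWindowProvider, statsRepositoryProvider,
// StatsSnapshot. The generated provider replaces the manual declaration.
@riverpod
class StatsAsync extends _$StatsAsync {
  @override
  Future<StatsSnapshot> build() async {
    // ref.watch, not ref.read, so the dependency is tracked.
    final window = ref.watch(selectedTimeWindowProvider);
    return ref.watch(statsRepositoryProvider).fetchSnapshot(window);
  }

  void invalidate() => ref.invalidateSelf();
}
```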

Testing Requirements

Unit tests using flutter_test and Riverpod's ProviderContainer for isolated testing. Use mockito to mock StatsRepository. Test scenarios: (1) on creation, state transitions from AsyncLoading to AsyncData with the returned snapshot; (2) repository throws → state becomes AsyncError; (3) after an error, invalidate() is called → state becomes AsyncLoading then AsyncData; (4) selectedTimeWindowProvider changes → build() re-runs and the state re-fetches. Use ProviderContainer with overrides to inject the mock repository. Verify teardown: after container.dispose(), no callbacks fire.

Target ≥ 90% branch coverage for the notifier class.
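Scenario (1) could be sketched as follows. `mockRepo` stands in for a mockito-generated MockStatsRepository, and the provider names are assumptions carried over from this task, not a finished test.

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('initial fetch: AsyncLoading then AsyncData', () async {
    // Inject the mock repository via a ProviderContainer override.
    final container = ProviderContainer(overrides: [
      statsRepositoryProvider.overrideWithValue(mockRepo),
    ]);
    addTearDown(container.dispose); // teardown: no callbacks after dispose

    // The first read kicks off build(); state starts as AsyncLoading.
    expect(container.read(statsAsyncNotifierProvider),
        isA<AsyncLoading<StatsSnapshot>>());

    // Awaiting .future resolves to the snapshot on success, after which
    // the synchronous state holds AsyncData with the same value.
    final snapshot =
        await container.read(statsAsyncNotifierProvider.future);
    expect(container.read(statsAsyncNotifierProvider).value, snapshot);
  });
}
```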

Component
Stats Async Notifier
Type: service · Size: medium
Epic Risks (2)
medium impact medium prob technical

Supabase realtime channel subscriptions that are not properly disposed on screen close can accumulate in memory across navigation events, causing duplicate invalidation calls, ghost fetches, and eventual memory leaks on long sessions.

Mitigation & Contingency

Mitigation: Implement StatsCacheInvalidator as a Riverpod provider with an explicit ref.onDispose callback that cancels the realtime channel subscription. Write a widget test that navigates away and back multiple times and asserts that only one subscription is active at any given time.
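A sketch of that mitigation, assuming supabase_flutter's channel API; the channel name and the invalidation wiring are placeholders, not the real implementation.

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

// Sketch of StatsCacheInvalidator: one subscription per provider
// lifetime, torn down in ref.onDispose so channels cannot accumulate
// across navigation events.
final statsCacheInvalidatorProvider = Provider<void>((ref) {
  final channel = Supabase.instance.client
      .channel('stats-invalidation') // assumed channel key
      .subscribe();

  // Disposing the provider (e.g. on screen close) removes the channel,
  // preventing duplicate invalidation calls and ghost fetches.
  ref.onDispose(() {
    Supabase.instance.client.removeChannel(channel);
  });
});
```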

Contingency: If subscription leaks are found in production, add a global subscription registry that enforces at-most-one subscription per channel key, and schedule a dispose sweep on app background events.

medium impact low prob scope

Debouncing rapid inserts may swallow the invalidation signal if the debounce window outlasts the Supabase realtime event delivery window, resulting in the dashboard showing stale totals after a bulk registration completes.

Mitigation & Contingency

Mitigation: Set the debounce window to 800ms (shorter than the typical Supabase realtime delivery latency of 1-2s for batched events) and ensure the leading-edge invalidation fires immediately while trailing duplicates are suppressed. Integration-test with a 20-record bulk insert.
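The leading-edge-fire / trailing-suppress behaviour described above could look like this sketch; `onInvalidate` is a caller-supplied hook standing in for the actual invalidation call.

```dart
import 'dart:async';

// Leading-edge debounce sketch: the first event fires immediately,
// events inside the 800 ms window are suppressed (each one extends the
// window), and the window reopens once the timer expires.
class LeadingEdgeDebouncer {
  LeadingEdgeDebouncer(this.onInvalidate);

  final void Function() onInvalidate;
  Timer? _window;

  void onEvent() {
    if (_window == null) onInvalidate(); // leading edge fires at once
    _window?.cancel(); // trailing duplicates only reset the window
    _window =
        Timer(const Duration(milliseconds: 800), () => _window = null);
  }

  void dispose() => _window?.cancel();
}
```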

Contingency: If debounce timing proves unreliable, replace debounce with a trailing-edge timer reset on each event and add a guaranteed invalidation 5 seconds after the last event regardless of subsequent events.