Priority: high · Complexity: medium · Area: backend · Status: pending · Assignee: backend specialist · Tier 5

Acceptance Criteria

A `CachedStatsRepository` decorator wraps `StatsRepository` and intercepts all three fetch methods
Cache key is computed as `SHA-256(timeWindow.name + sorted(chapterIds).join(','))` truncated to 16 hex chars — consistent across app restarts
On cache hit with age < 15 minutes: cached data is returned immediately AND a background re-fetch is triggered; if new data differs, the Riverpod state is updated via a `StreamController` or `StateNotifier`
On cache hit with age >= 15 minutes (stale): behave as cache miss
On cache miss: fetch from network, store result with timestamp, return result
When device is offline (no network): return cached data regardless of TTL with a `CacheSource.stale` flag set on the result; never throw a connectivity exception to the caller
`clearCache()` removes all stats-related cache entries and is callable from the Stats Cache Invalidator
Cache storage uses `hive` with a dedicated box named `stats_cache`; each entry is a JSON-serialised map with `data` and `cachedAt` fields
Cache box is opened once at app startup via a Riverpod `FutureProvider` — not lazily inside each repository call
No PII or sensitive user data is stored in the cache — only aggregated statistical values
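The hit/miss/stale/offline rules above can be sketched as a single fetch flow. This is a minimal sketch only: `CachedResult`, `CacheSource`, `_isOnline`, `_revalidate`, and direct typed box access are illustrative assumptions, not the final API.

```dart
import 'dart:async';

// Sketch of the stale-while-revalidate fetch flow described above.
Future<CachedResult<T>> fetch<T>(String key, Future<T> Function() load) async {
  final entry = _box.get(key); // synchronous Hive lookup (< 10 ms budget)
  final now = DateTime.now();
  if (entry != null) {
    final age = now.difference(entry.cachedAt);
    if (!await _isOnline()) {
      // Offline: serve whatever we have, flagged stale; never throw.
      return CachedResult(entry.data, CacheSource.stale);
    }
    if (age < const Duration(minutes: 15)) {
      // Fresh hit: return immediately, revalidate in the background.
      unawaited(_revalidate(key, load));
      return CachedResult(entry.data, CacheSource.cache);
    }
    // Stale hit (>= 15 min): fall through and treat as a miss.
  }
  final data = await load(); // cache miss: fetch from network
  _box.put(key, CacheEntry(data, cachedAt: now));
  return CachedResult(data, CacheSource.network);
}
```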

Technical Requirements

frameworks
Flutter
Riverpod
hive / hive_flutter
connectivity_plus (for offline detection)
apis
StatsRepository (wraps existing implementation)
data models
StatsSnapshot
PeerMentorStatRow
ChartDataPoint
CacheEntry<T>
performance requirements
Cache read must complete in under 10 ms (synchronous Hive box lookup)
Background re-fetch must not block the UI thread — use an unawaited `Future`, or an `Isolate` if data processing is heavy
Cache box must not grow unbounded — evict entries older than 1 hour on app startup
security requirements
Hive box must NOT use HiveAesCipher with a hardcoded key — if encryption is needed, use flutter_secure_storage to store the key
Cache must not store user identifiers — keys are derived from scope parameters only
`clearCache()` must be accessible only to trusted internal callers (not exposed via any public API or deep link)
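A minimal `CacheEntry<T>` shape consistent with the JSON-serialised `data`/`cachedAt` fields required above; the helper methods and codec parameters are assumptions for illustration.

```dart
/// JSON-serialisable cache entry stored in the `stats_cache` Hive box.
class CacheEntry<T> {
  CacheEntry(this.data, {required this.cachedAt});

  final T data;
  final DateTime cachedAt;

  /// True when this entry has outlived [ttl] (e.g. the 15-minute freshness window).
  bool isOlderThan(Duration ttl) =>
      DateTime.now().difference(cachedAt) >= ttl;

  Map<String, dynamic> toJson(Object? Function(T) encodeData) => {
        'data': encodeData(data),
        'cachedAt': cachedAt.toIso8601String(),
      };

  static CacheEntry<T> fromJson<T>(
          Map<String, dynamic> json, T Function(Object?) decodeData) =>
      CacheEntry(
        decodeData(json['data']),
        cachedAt: DateTime.parse(json['cachedAt'] as String),
      );
}
```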

Execution Context

Execution Tier
Tier 5 (253 tasks)

Can start after Tier 4 completes

Implementation Notes

Use the Decorator pattern: `CachedStatsRepository implements StatsRepository` and takes an inner `StatsRepository` as a constructor parameter. Register via Riverpod as `cachedStatsRepositoryProvider` that overrides `statsRepositoryProvider`. Stale-while-revalidate is best implemented with a `StreamController.broadcast()` per cache key — the repository emits the cached value synchronously, then emits again after the network response if values differ. Use `jsonEncode`/`jsonDecode` for serialisation rather than Hive TypeAdapters to avoid code-gen complexity for this cache layer.
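The decorator wiring described above might look like the following sketch. `statsCacheBoxProvider`, `StatsScope`, and `_fetchThroughCache` are illustrative names, not part of the existing codebase; the actual override of `statsRepositoryProvider` would happen in `ProviderScope.overrides`.

```dart
// Registration sketch: consumers that watch statsRepositoryProvider
// transparently receive the cached variant once the override is applied.
final cachedStatsRepositoryProvider = Provider<StatsRepository>((ref) {
  final inner = ref.watch(statsRepositoryProvider);
  final box = ref.watch(statsCacheBoxProvider); // box opened at app startup
  return CachedStatsRepository(inner, box);
});

class CachedStatsRepository implements StatsRepository {
  CachedStatsRepository(this._inner, this._box);

  final StatsRepository _inner;
  final Box<String> _box; // JSON strings, not TypeAdapters

  // One broadcast controller per cache key: emit the cached value first,
  // then emit again after revalidation if the network response differs.
  final _controllers = <String, StreamController<Object>>{};

  @override
  Future<StatsSnapshot> fetchSnapshot(StatsScope scope) =>
      _fetchThroughCache(scope, () => _inner.fetchSnapshot(scope));
  // ...the other two fetch methods delegate the same way.
}
```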

Cache key hashing: sort `chapterIds` alphabetically before hashing to ensure key stability regardless of list order. Eviction on startup: iterate all keys in the Hive box and delete entries where `cachedAt` is older than 1 hour — this prevents unbounded disk usage over time. The `connectivity_plus` package provides a stream of `ConnectivityResult`; check it before attempting network fetch and set `CacheSource.stale` on the returned result wrapper.
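The key derivation and startup eviction described above, as a sketch. This assumes `package:crypto` for SHA-256 and JSON-string box values with a `cachedAt` field, per the notes; function names are illustrative.

```dart
import 'dart:convert';

import 'package:crypto/crypto.dart';
import 'package:hive/hive.dart';

// Cache key: SHA-256 of the window name plus sorted chapter IDs, truncated
// to 16 hex chars, so the key is stable across restarts and list orderings.
String cacheKey(String windowName, List<String> chapterIds) {
  final sorted = [...chapterIds]..sort();
  final digest = sha256.convert(utf8.encode('$windowName${sorted.join(',')}'));
  return digest.toString().substring(0, 16);
}

// Startup eviction: delete entries whose cachedAt is older than one hour.
Future<void> evictExpired(Box<String> box) async {
  final cutoff = DateTime.now().subtract(const Duration(hours: 1));
  for (final key in box.keys.toList()) {
    final entry = jsonDecode(box.get(key)!) as Map<String, dynamic>;
    if (DateTime.parse(entry['cachedAt'] as String).isBefore(cutoff)) {
      await box.delete(key);
    }
  }
}
```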

Testing Requirements

Unit tests using `mocktail`: (1) cache miss — delegates to inner repository and stores result; (2) cache hit within TTL — returns cached data and fires background fetch; (3) cache hit beyond TTL — treats as miss; (4) offline state — returns stale cache with `CacheSource.stale` flag; (5) clearCache() empties all stats entries; (6) background re-fetch emits update when data changes. Use a fake `HiveInterface` or in-memory map for Hive in unit tests. Integration test: cold-start app, fetch data, disable network, re-open stats screen — verify cached data is shown within 50 ms.
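Test (1) from the list above might look like this with `mocktail`. The method name `fetchSnapshot`, `StatsScope`, `fakeSnapshot`, and the `InMemoryBox` stand-in are assumptions for illustration.

```dart
class MockStatsRepository extends Mock implements StatsRepository {}

void main() {
  setUpAll(() => registerFallbackValue(someScope)); // required for any() on StatsScope

  test('cache miss: delegates to inner repository and stores the result',
      () async {
    final inner = MockStatsRepository();
    final store = <String, String>{}; // in-memory stand-in for the Hive box
    when(() => inner.fetchSnapshot(any()))
        .thenAnswer((_) async => fakeSnapshot);

    final repo = CachedStatsRepository(inner, InMemoryBox(store));
    final result = await repo.fetchSnapshot(someScope);

    expect(result, fakeSnapshot);
    verify(() => inner.fetchSnapshot(someScope)).called(1);
    expect(store, isNotEmpty); // entry persisted with a cachedAt timestamp
  });
}
```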

Component
Stats Repository (data, medium)
Epic Risks (3)
Risk 1: technical (medium impact, medium probability)

Materialized views over large activity tables may have refresh latency exceeding the 2-second SLA under high insert load, causing stale data to appear on the dashboard immediately after a peer mentor registers an activity.

Mitigation & Contingency

Mitigation: Design the materialized view refresh trigger to run asynchronously via a Supabase Edge Function rather than a synchronous trigger, and set a maximum staleness tolerance of 5 seconds documented in the feature spec. Use `REFRESH MATERIALIZED VIEW CONCURRENTLY` so reads are never blocked during a refresh.

Contingency: If refresh latency cannot meet SLA, fall back to a regular (non-materialized) view for the dashboard and accept slightly higher query cost per request. Revisit materialized approach once Supabase pg_cron or background workers are available.

Risk 2: integration (high impact, medium probability)

The aggregation counting rules for the dashboard may diverge from those used in the Bufdir export pipeline (e.g., which activity types count, how duplicate registrations are handled), creating a reconciliation burden for coordinators at reporting time.

Mitigation & Contingency

Mitigation: Run the BufDir Alignment Validator against a shared reference dataset before any view is merged to main. Encode the counting rules as a shared Supabase function called by both the stats views and the export query builder so there is a single source of truth.

Contingency: If divergence is discovered post-launch, ship a visible banner on the dashboard stating that numbers are indicative and may differ from the export until the reconciliation fix is deployed. Prioritize the fix as a P0 defect.

Risk 3: security (high impact, low probability)

Multi-chapter coordinators (up to 5 chapters per NHF requirement) require RLS policies that filter on an array of chapter IDs, which is more complex than single-value RLS and could be misconfigured, leaking data across chapters or blocking legitimate access.

Mitigation & Contingency

Mitigation: Write integration tests that verify cross-chapter isolation: a coordinator assigned to chapters A and B must not be able to see data from chapter C. Use parameterized RLS policies with an auth.uid()-based chapter lookup to avoid hardcoded values.

Contingency: If RLS misconfiguration is detected in testing, temporarily restrict coordinator queries to single-chapter scope (coordinator's primary chapter) and ship multi-chapter support as a fast-follow patch once RLS logic is verified.