Priority: critical · Complexity: high · Area: backend · Status: pending · Assignee: backend specialist · Tier: 2

Acceptance Criteria

AdminStatisticsService.getKpis(nodeId) returns an AdminKpiSnapshot containing all 5 core metrics for the given org subtree
activePeerMentors: count of users with role=peer_mentor AND status=active in the resolved subtree
monthlyActivities: count of activity records with created_at in the current calendar month for the resolved subtree
pendingReimbursements: count of reimbursement records with status=pending for the resolved subtree
pausedMentors: count of users with role=peer_mentor AND status=paused in the resolved subtree
expiringCertifications: count of certification records expiring within the next 30 days for the resolved subtree
Each metric query uses AdminRlsGuard.buildOrgFilter() to scope results — no hardcoded org IDs
Results are cached per nodeId with a 2-minute TTL; cache key includes nodeId and current calendar month
Calling getKpis() with a nodeId not present in the org tree throws OrgNodeNotFoundException
All 5 metrics are fetched in parallel (Future.wait) to minimise latency
Total round-trip from cache miss to returned snapshot: < 3 seconds on a 4G connection
Unit tests: mock Supabase client, assert correct filter applied per metric, assert parallel execution, assert cache hit skips Supabase calls
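As a concrete anchor for these criteria, the snapshot model and exception could look like the following Dart sketch (constructor shape and toString are assumptions; only the field names come from this task):

```dart
// Immutable snapshot of the five KPI counts for one org subtree.
// Field names follow the AdminKpiSnapshot definition in this task.
class AdminKpiSnapshot {
  final String nodeId;
  final int activePeerMentors;
  final int monthlyActivities;
  final int pendingReimbursements;
  final int pausedMentors;
  final int expiringCertifications;
  final DateTime computedAt;

  const AdminKpiSnapshot({
    required this.nodeId,
    required this.activePeerMentors,
    required this.monthlyActivities,
    required this.pendingReimbursements,
    required this.pausedMentors,
    required this.expiringCertifications,
    required this.computedAt,
  });
}

// Thrown by getKpis() when nodeId is not present in the org tree.
class OrgNodeNotFoundException implements Exception {
  final String nodeId;
  OrgNodeNotFoundException(this.nodeId);

  @override
  String toString() => 'OrgNodeNotFoundException: unknown org node $nodeId';
}
```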

Technical Requirements

Frameworks
Flutter
Riverpod
supabase_flutter
APIs
Supabase PostgREST: users table (role, status, org_id)
Supabase PostgREST: activities table (created_at, org_id)
Supabase PostgREST: reimbursements table (status, org_id)
Supabase PostgREST: certifications table (expires_at, org_id)
Data Models
AdminKpiSnapshot (nodeId, activePeerMentors, monthlyActivities, pendingReimbursements, pausedMentors, expiringCertifications, computedAt)
KpiCache (Map<String, CachedSnapshot> keyed by nodeId+month)
Performance Requirements
All 5 Supabase count queries executed in parallel via Future.wait
Cache miss total latency: < 3s on 4G connection
Cache hit latency: < 5ms
Security Requirements
All queries run as authenticated user — never with service_role key on the client
Count queries must use .count(CountOption.exact) under the authenticated user's RLS policies, rather than raw SQL SELECT COUNT(*) executed in a context that bypasses RLS
Cached snapshots must not persist beyond the app session (in-memory only)
UI Components
AdminDashboardKpiRow (consumes AdminKpiSnapshot from Riverpod provider)

Execution Context

Execution Tier
Tier 2 (518 tasks)

Can start after Tier 1 completes.

Implementation Notes

Use Supabase's .count(CountOption.exact) on filtered queries: it returns the count in the response headers without transferring row data, minimising bandwidth. Structure each metric as a private async method (_fetchActivePeerMentors, etc.) returning Future<int>; the public getKpis() calls Future.wait on all 5. For the cache key, use '${nodeId}_${DateTime.now().year}_${DateTime.now().month}', which naturally invalidates the monthly-activities cache at month rollover. The expiringCertifications metric needs a date range filter, .gte('expires_at', today).lte('expires_at', today + 30 days), using UTC dates throughout.
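A sketch of how these notes could fit together. supabase_flutter v2 builder chaining is assumed; buildOrgFilter() is assumed here to return the subtree's org IDs and to throw OrgNodeNotFoundException for an unknown nodeId, though its real signature may differ:

```dart
class AdminStatisticsService {
  AdminStatisticsService(this._client, this._rlsGuard);

  final SupabaseClient _client;
  final AdminRlsGuard _rlsGuard;

  // In-memory only: snapshots never persist beyond the app session.
  final _cache = <String, AdminKpiSnapshot>{};
  static const _ttl = Duration(minutes: 2);

  Future<AdminKpiSnapshot> getKpis(String nodeId) async {
    final now = DateTime.now().toUtc();
    // Month in the key naturally invalidates monthlyActivities at rollover.
    final key = '${nodeId}_${now.year}_${now.month}';

    final hit = _cache[key];
    if (hit != null && now.difference(hit.computedAt) < _ttl) return hit;

    // All five counts run in parallel; one slow query does not serialise
    // the rest.
    final counts = await Future.wait([
      _fetchActivePeerMentors(nodeId),
      _fetchMonthlyActivities(nodeId),
      _fetchPendingReimbursements(nodeId),
      _fetchPausedMentors(nodeId),
      _fetchExpiringCertifications(nodeId),
    ]);

    final snapshot = AdminKpiSnapshot(
      nodeId: nodeId,
      activePeerMentors: counts[0],
      monthlyActivities: counts[1],
      pendingReimbursements: counts[2],
      pausedMentors: counts[3],
      expiringCertifications: counts[4],
      computedAt: now,
    );
    _cache[key] = snapshot;
    return snapshot;
  }

  Future<int> _fetchActivePeerMentors(String id) => _countMentors(id, 'active');
  Future<int> _fetchPausedMentors(String id) => _countMentors(id, 'paused');

  Future<int> _countMentors(String nodeId, String status) async {
    // Assumption: buildOrgFilter() resolves the subtree's org IDs and
    // throws OrgNodeNotFoundException for an unknown nodeId.
    final res = await _client
        .from('users')
        .select('id')
        .inFilter('org_id', _rlsGuard.buildOrgFilter(nodeId))
        .eq('role', 'peer_mentor')
        .eq('status', status)
        .count(CountOption.exact);
    return res.count;
  }

  Future<int> _fetchMonthlyActivities(String nodeId) async {
    final now = DateTime.now().toUtc();
    final monthStart = DateTime.utc(now.year, now.month);
    final nextMonth = DateTime.utc(now.year, now.month + 1); // normalises Dec
    final res = await _client
        .from('activities')
        .select('id')
        .inFilter('org_id', _rlsGuard.buildOrgFilter(nodeId))
        .gte('created_at', monthStart.toIso8601String())
        .lt('created_at', nextMonth.toIso8601String())
        .count(CountOption.exact);
    return res.count;
  }

  Future<int> _fetchPendingReimbursements(String nodeId) async {
    final res = await _client
        .from('reimbursements')
        .select('id')
        .inFilter('org_id', _rlsGuard.buildOrgFilter(nodeId))
        .eq('status', 'pending')
        .count(CountOption.exact);
    return res.count;
  }

  Future<int> _fetchExpiringCertifications(String nodeId) async {
    final today = DateTime.now().toUtc();
    final horizon = today.add(const Duration(days: 30));
    final res = await _client
        .from('certifications')
        .select('id')
        .inFilter('org_id', _rlsGuard.buildOrgFilter(nodeId))
        .gte('expires_at', today.toIso8601String())
        .lte('expires_at', horizon.toIso8601String())
        .count(CountOption.exact);
    return res.count;
  }
}
```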

Expose the service via a Riverpod AsyncNotifierProvider that takes nodeId as a family parameter for per-node caching in the provider layer.
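A minimal Riverpod 2.x wiring sketch; adminStatisticsServiceProvider is an assumed provider name:

```dart
// Family parameter is nodeId, so Riverpod caches one async state per node;
// the service layer adds the TTL cache underneath.
final adminKpiProvider = AsyncNotifierProvider.family<AdminKpiNotifier,
    AdminKpiSnapshot, String>(AdminKpiNotifier.new);

class AdminKpiNotifier extends FamilyAsyncNotifier<AdminKpiSnapshot, String> {
  @override
  Future<AdminKpiSnapshot> build(String nodeId) {
    // Assumption: adminStatisticsServiceProvider exposes the service.
    return ref.watch(adminStatisticsServiceProvider).getKpis(nodeId);
  }
}
```

AdminDashboardKpiRow can then watch `adminKpiProvider(nodeId)` and render the usual loading/error/data states from AsyncValue.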

Testing Requirements

Unit tests (flutter_test) with a mocked Supabase client: verify each metric applies the correct table, filter, and count option. Verify Future.wait parallelism by asserting all 5 mock calls are initiated before any resolves. Test cache: second call within TTL returns cached value without additional mock invocations. Test OrgNodeNotFoundException for unknown nodeId.
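The parallelism assertion can be demonstrated with completer-gated fakes. This sketch exercises the Future.wait pattern directly; wiring the fakes into a mocked AdminStatisticsService is left to the mock setup:

```dart
import 'dart:async';

import 'package:flutter_test/flutter_test.dart';

void main() {
  test('all five metric fetchers start before any resolves', () async {
    final started = <String>[];
    final gate = Completer<void>();

    // Fake fetcher: records that it started, then blocks on the gate so
    // nothing can complete until we allow it.
    Future<int> fake(String name) async {
      started.add(name);
      await gate.future;
      return 0;
    }

    final pending = Future.wait([
      fake('activePeerMentors'),
      fake('monthlyActivities'),
      fake('pendingReimbursements'),
      fake('pausedMentors'),
      fake('expiringCertifications'),
    ]);

    // Sequential awaiting would have started only the first fetcher here.
    await Future<void>.delayed(Duration.zero);
    expect(started, hasLength(5));

    gate.complete();
    expect(await pending, everyElement(0));
  });
}
```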

Integration test against Supabase test project: seed known data, call getKpis(), assert exact counts match seeded values for each metric.

Component
Admin Statistics Service (service, high priority)
Epic Risks (4)
Technical risk: medium impact, high probability

OrgHierarchyNavigator rendering NHF's full 1,400-chapter tree in a single widget may cause frame rates to drop below 60 fps on mid-range devices, making the navigator unusable for NHF national admins.

Mitigation & Contingency

Mitigation: Implement lazy expansion: only load immediate children on node expand rather than the full tree upfront. Use virtual scrolling for long sibling lists. Test with a synthetic 1,400-node dataset on a low-end Android device during development.
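A minimal lazy-expansion sketch; OrgNode and loadChildren are hypothetical, and the real OrgHierarchyNavigator API may differ:

```dart
import 'package:flutter/material.dart';

class OrgNode {
  final String id;
  final String name;
  const OrgNode(this.id, this.name);
}

// Children are fetched only on first expansion, never for the full tree.
class OrgTreeNode extends StatefulWidget {
  const OrgTreeNode({super.key, required this.node, required this.loadChildren});

  final OrgNode node;
  final Future<List<OrgNode>> Function(String nodeId) loadChildren;

  @override
  State<OrgTreeNode> createState() => _OrgTreeNodeState();
}

class _OrgTreeNodeState extends State<OrgTreeNode> {
  Future<List<OrgNode>>? _children; // null until first expansion

  @override
  Widget build(BuildContext context) {
    return ExpansionTile(
      title: Text(widget.node.name),
      onExpansionChanged: (open) {
        if (open) {
          setState(() => _children ??= widget.loadChildren(widget.node.id));
        }
      },
      children: [
        if (_children != null)
          FutureBuilder<List<OrgNode>>(
            future: _children,
            builder: (context, snap) {
              if (!snap.hasData) return const LinearProgressIndicator();
              return Column(
                children: [
                  for (final child in snap.data!)
                    OrgTreeNode(node: child, loadChildren: widget.loadChildren),
                ],
              );
            },
          ),
      ],
    );
  }
}
```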

Contingency: If lazy expansion is insufficient, replace the tree widget with a paginated drill-down navigator (select level → select child) that avoids rendering more than 50 nodes at a time.

Dependency risk: medium impact, medium probability

Bufdir may update their required export column structure or file format during or after development. If the AdminExportService hardcodes the current Bufdir schema, any format change requires a code release rather than a config update.

Mitigation & Contingency

Mitigation: Drive the Bufdir column mapping from a configuration repository rather than hardcoded constants. Abstract column definitions into a named schema config so that format changes require only a config update and re-deployment without service logic changes.
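A sketch of what a config-driven schema could look like; all names here are hypothetical, and the real Bufdir column set is defined by the partner:

```dart
// One exported column: the header Bufdir expects and the internal field
// it is read from. Both come from config JSON, not hardcoded constants.
class BufdirColumn {
  final String header;
  final String sourceField;
  const BufdirColumn({required this.header, required this.sourceField});

  factory BufdirColumn.fromJson(Map<String, dynamic> json) => BufdirColumn(
        header: json['header'] as String,
        sourceField: json['sourceField'] as String,
      );
}

// Versioned schema config: a Bufdir format change becomes a config update
// rather than a change to AdminExportService logic.
class BufdirExportSchema {
  final String version;
  final List<BufdirColumn> columns;
  const BufdirExportSchema({required this.version, required this.columns});

  factory BufdirExportSchema.fromJson(Map<String, dynamic> json) =>
      BufdirExportSchema(
        version: json['version'] as String,
        columns: [
          for (final c in json['columns'] as List)
            BufdirColumn.fromJson(c as Map<String, dynamic>),
        ],
      );
}
```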

Contingency: If Bufdir format changes post-launch, release a config update within one sprint. If the change is structural (new required sections), scope a targeted service update and communicate timeline to partner organisations.

Integration risk: high impact, medium probability

Role transition side-effects in UserManagementService (e.g., certification expiry removing mentor from chapter listing, pause triggering coordinator notification) may interact with external services like HLF's website sync. Incomplete side-effect handling could leave the system in an inconsistent state.

Mitigation & Contingency

Mitigation: Model side-effects as explicit domain events published after the primary state change is persisted. Implement event handlers as idempotent operations so re-processing is safe. Write integration tests that assert all side-effects fire correctly for each role transition type.
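One way to model the events and the idempotency idea; the names are hypothetical and this is a sketch, not the real UserManagementService API:

```dart
// Domain events published after the primary state change is persisted.
// A stable eventId is what makes handler idempotency cheap.
sealed class RoleTransitionEvent {
  const RoleTransitionEvent(this.eventId, this.mentorId);
  final String eventId;
  final String mentorId;
}

class MentorPaused extends RoleTransitionEvent {
  const MentorPaused(super.eventId, super.mentorId);
}

class CertificationExpired extends RoleTransitionEvent {
  const CertificationExpired(super.eventId, super.mentorId);
}

// Idempotent handler: a processed-event record makes re-delivery a no-op,
// so retrying after a partial failure is always safe.
class CoordinatorNotificationHandler {
  final _processed = <String>{};

  Future<void> handle(MentorPaused event) async {
    if (!_processed.add(event.eventId)) return; // already handled
    // ... send the coordinator notification for event.mentorId ...
  }
}
```

In production the processed-event record would live in durable storage rather than an in-memory set, so idempotency survives restarts.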

Contingency: If a side-effect fails after the primary change is persisted, log the failure with full context and trigger a manual reconciliation alert to the on-call team. Provide an admin-accessible re-trigger action for failed side-effects.

Scope risk: medium impact, medium probability

If AdminStatisticsService cache TTL is set too long, org_admin may see significantly stale KPI values (e.g., a mentor newly paused an hour ago still appears as active), undermining trust in the dashboard.

Mitigation & Contingency

Mitigation: Keep the default cache TTL short (this task specifies 2 minutes) and add a manual refresh action on the dashboard. Implement cache invalidation triggered by UserManagementService write operations that affect counted entities.
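The write-triggered invalidation hook might be as small as the following sketch, assuming the '${nodeId}_year_month' cache-key convention from this task's implementation notes:

```dart
// In-memory KPI cache with write-triggered invalidation. A hypothetical
// hook in UserManagementService calls invalidateNode() after any write
// that changes a counted entity (pause, reimbursement, certification).
class KpiCache {
  final _entries = <String, AdminKpiSnapshot>{};

  AdminKpiSnapshot? get(String key) => _entries[key];

  void put(String key, AdminKpiSnapshot snapshot) => _entries[key] = snapshot;

  /// Drop every cached snapshot for [nodeId], regardless of month.
  void invalidateNode(String nodeId) {
    _entries.removeWhere((key, _) => key.startsWith('${nodeId}_'));
  }
}
```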

Contingency: If staleness causes org admin complaints post-launch, reduce TTL to 60 seconds and introduce a real-time Supabase subscription for high-impact counters (paused mentors, expiring certifications).