Implement Bufdir alignment validation checks
epic-activity-statistics-dashboard-data-foundation-task-012 — Build the Bufdir Alignment Validator service, which compares aggregated totals from the stats views against an independent SQL query that replicates the Bufdir export pipeline logic. The validator runs as a background check and emits a `BufdirAlignmentResult(delta, percentageDrift)` object. If any numeric field drifts by more than 0.01%, the validator logs a structured warning via the app analytics service. This validator is the automated guardrail ensuring zero reconciliation delta between the dashboard and the Bufdir export.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 5 - 253 tasks
Can start after Tier 4 completes
Implementation Notes
The independence of the two queries is the entire point of this validator — if both queries use the same materialized view, drift will never be detected. The RPC function (`rpc_bufdir_export_totals`) must query the `activities` base table directly, replicating the aggregation logic from the Bufdir export pipeline, not the materialized view. This should be coordinated with task-004. Percentage drift formula: `percentageDrift = (abs(viewTotal - rpcTotal) / max(viewTotal, 1)) * 100`.
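The drift formula above can be sketched as a pure Dart function; `isAligned` and the `thresholdPct` parameter name are illustrative, not part of the spec:

```dart
import 'dart:math' as math;

/// Percentage drift between the stats-view total and the independent
/// RPC total, per the formula in the implementation notes.
double percentageDrift(num viewTotal, num rpcTotal) =>
    ((viewTotal - rpcTotal).abs() / math.max(viewTotal, 1)) * 100;

/// Aligned when drift is at or below the 0.01% threshold.
bool isAligned(num viewTotal, num rpcTotal, {double thresholdPct = 0.01}) =>
    percentageDrift(viewTotal, rpcTotal) <= thresholdPct;
```

Note that with `viewTotal = 0` and a non-zero `rpcTotal`, the `max(viewTotal, 1)` denominator still yields a large, meaningful drift value rather than a division-by-zero error.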
Use `max(viewTotal, 1)` as denominator to avoid division by zero while still reporting meaningful drift when viewTotal is 0 but rpcTotal is non-zero. The 5-minute rate limit should be implemented as a simple in-memory `DateTime? lastValidationRun` field on the service — no persistence needed since drift is a transient concern. Register the validator as a Riverpod provider and trigger it from the Stats BLoC's data-loading state transition.
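A minimal sketch of the in-memory rate limit, assuming a plain service class; the class internals, `tryMarkRun`, and the provider name are assumptions for illustration:

```dart
class BufdirAlignmentValidator {
  // In-memory only: drift is a transient concern, so no persistence.
  DateTime? _lastValidationRun;
  static const minInterval = Duration(minutes: 5);

  /// Returns true (and records the run) when the 5-minute window has
  /// elapsed since the last run; false means the call is rate-limited.
  bool tryMarkRun(DateTime now) {
    final last = _lastValidationRun;
    if (last != null && now.difference(last) < minInterval) return false;
    _lastValidationRun = now;
    return true;
  }

  Future<void> validate() async {
    if (!tryMarkRun(DateTime.now())) return; // rate-limited, skip silently
    // ... fetch view totals and RPC totals, compare, log drift ...
  }
}

// Riverpod registration (provider name is an assumption):
// final bufdirAlignmentValidatorProvider =
//     Provider((ref) => BufdirAlignmentValidator());
```

Separating the clock check into `tryMarkRun(DateTime now)` keeps the rate limit deterministic to unit-test without waiting on real time.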
Ensure `analyticsService.logWarning` calls include `chapter_id` (not user ID) and the reporting period for traceability without PII exposure.
Testing Requirements
Unit tests with mocked `StatsRepository` and mocked Supabase RPC client:
1. Aligned case — both sources return the same totals; `isAligned = true`, no analytics warning logged.
2. Drift within threshold (0.005%) — `isAligned = true`, no warning.
3. Drift above threshold (0.02%) — `isAligned = false`, `analyticsService.logWarning` called with the correct field name and drift value.
4. RPC failure — `isAligned = false`, `analyticsService.logError` called, no exception propagated to the caller.
5. Zero-value stats — `percentageDrift = 0.0`, no division by zero.
6. Feature flag disabled — validator returns immediately with `isAligned = true` without calling either data source.

Integration test: seed known totals in local Supabase, run the validator, and assert `isAligned = true`.
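The drift-above-threshold case can be sketched with a hand-rolled fake analytics service; all names here are illustrative, and real tests would mock `StatsRepository` and the RPC client instead:

```dart
import 'dart:math' as math;

/// Hand-rolled fake that records warnings instead of sending them.
class FakeAnalyticsService {
  final warnings = <Map<String, Object>>[];
  void logWarning(String event, Map<String, Object> params) =>
      warnings.add({'event': event, ...params});
}

double percentageDrift(num viewTotal, num rpcTotal) =>
    ((viewTotal - rpcTotal).abs() / math.max(viewTotal, 1)) * 100;

void main() {
  final analytics = FakeAnalyticsService();
  // View and RPC disagree by 0.02% — above the 0.01% threshold.
  const viewTotal = 10000, rpcTotal = 10002;
  final drift = percentageDrift(viewTotal, rpcTotal);
  final aligned = drift <= 0.01;
  if (!aligned) {
    analytics.logWarning('bufdir_alignment_drift', {
      'field': 'total_activities',
      'drift_pct': drift,
    });
  }
  assert(!aligned);
  assert(analytics.warnings.length == 1);
}
```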
Materialized views over large activity tables may have refresh latency exceeding the 2-second SLA under high insert load, causing stale data to appear on the dashboard immediately after a peer mentor registers an activity.
Mitigation & Contingency
Mitigation: Design the materialized view refresh trigger to run asynchronously via a Supabase Edge Function rather than a synchronous trigger, and set a maximum staleness tolerance of 5 seconds documented in the feature spec. Add a CONCURRENTLY refresh strategy so reads are never blocked.
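The refresh side of this mitigation is a one-liner in Postgres, but two constraints are worth recording: `REFRESH MATERIALIZED VIEW CONCURRENTLY` requires a unique index on the view, and it cannot run inside a transaction block. View and index names below are assumptions:

```sql
-- CONCURRENTLY requires a unique index covering every row of the view.
CREATE UNIQUE INDEX IF NOT EXISTS stats_dashboard_mv_uidx
  ON stats_dashboard_mv (chapter_id, reporting_period);

-- Run from the async Edge Function (outside any transaction block),
-- never from a synchronous trigger, so reads are not blocked:
REFRESH MATERIALIZED VIEW CONCURRENTLY stats_dashboard_mv;
```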
Contingency: If refresh latency cannot meet SLA, fall back to a regular (non-materialized) view for the dashboard and accept slightly higher query cost per request. Revisit materialized approach once Supabase pg_cron or background workers are available.
The aggregation counting rules for the dashboard may diverge from those used in the Bufdir export pipeline (e.g., which activity types count, how duplicate registrations are handled), creating a reconciliation burden for coordinators at reporting time.
Mitigation & Contingency
Mitigation: Run the Bufdir Alignment Validator against a shared reference dataset before any view is merged to main. Encode the counting rules as a shared Supabase function called by both the stats views and the export query builder so there is a single source of truth.
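A sketch of such a shared counting-rules function; the table, columns, and the reportable activity-type set are assumptions to be replaced with the real export rules:

```sql
-- Single source of truth for counting rules, called by both the stats
-- views and the export query builder. All names are illustrative.
CREATE OR REPLACE FUNCTION count_reportable_activities(
  p_chapter_id uuid,
  p_from date,
  p_to date
) RETURNS bigint
LANGUAGE sql STABLE AS $$
  SELECT count(DISTINCT a.id)  -- DISTINCT collapses duplicate registrations
  FROM activities a
  WHERE a.chapter_id = p_chapter_id
    AND a.activity_date >= p_from
    AND a.activity_date <  p_to
    -- assumed reportable set; must mirror the Bufdir export rules:
    AND a.activity_type IN ('group_session', 'one_on_one')
$$;
```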
Contingency: If divergence is discovered post-launch, ship a visible banner on the dashboard stating that numbers are indicative and may differ from the export until the reconciliation fix is deployed. Prioritize the fix as a P0 defect.
Multi-chapter coordinators (up to 5 chapters per NHF requirement) require RLS policies that filter on an array of chapter IDs, which is more complex than single-value RLS and could be misconfigured, leaking data across chapters or blocking legitimate access.
Mitigation & Contingency
Mitigation: Write integration tests that verify cross-chapter isolation: a coordinator assigned to chapters A and B must not be able to see data from chapter C. Use parameterized RLS policies with an `auth.uid()`-based chapter lookup to avoid hardcoded values.
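An `auth.uid()`-based policy of this shape avoids hardcoding chapter IDs; the `coordinator_chapters` mapping table and its columns are assumptions:

```sql
-- RLS must be enabled before any policy takes effect.
ALTER TABLE activities ENABLE ROW LEVEL SECURITY;

-- Coordinators see only rows for chapters they are assigned to
-- (up to 5 per the NHF requirement). Names are illustrative.
CREATE POLICY coordinator_chapter_isolation ON activities
  FOR SELECT USING (
    chapter_id IN (
      SELECT cc.chapter_id
      FROM coordinator_chapters cc
      WHERE cc.coordinator_id = auth.uid()
    )
  );
```

Driving the policy from a mapping table rather than an array column keeps assignments editable without rewriting the policy, and the integration tests above can seed that table directly.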
Contingency: If RLS misconfiguration is detected in testing, temporarily restrict coordinator queries to single-chapter scope (coordinator's primary chapter) and ship multi-chapter support as a fast-follow patch once RLS logic is verified.