Implement peer mentor stats aggregation for evaluation
epic-achievement-badges-evaluation-engine-task-006 — Implement the stats aggregation layer within the badge evaluation service that computes a peer mentor's current totals (activity counts per type, honorar counts, streak data, certifications) from the Supabase database. This aggregation feeds all three evaluator types and must be efficient enough to complete within edge function timeout limits.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 2 - 518 tasks
Can start after Tier 1 completes
Implementation Notes
The streak calculation is the most algorithmically complex part. Approach: fetch all activity `created_at` timestamps, extract unique calendar dates (normalised to UTC date strings), sort ascending, then scan linearly counting consecutive days. Store both `currentStreakDays` (the streak ending on today or yesterday) and `longestStreakDays` (the all-time best). For Blindeforbundet's honorar milestone logic, count activities where `honorar = true`; the evaluators (threshold and streak evaluators from task-002/003, training evaluator from task-004) compare this count against the badge definition threshold (3 or 15).
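The scan described above can be sketched as follows. This is illustrative only: the function name, the `StreakResult` shape, and the assumption that activities arrive as ISO `created_at` strings are mine, not from the codebase.

```typescript
interface StreakResult {
  currentStreakDays: number;
  longestStreakDays: number;
}

// Whole-day difference between two YYYY-MM-DD strings.
function diffDays(a: string, b: string): number {
  return Math.round((Date.parse(b) - Date.parse(a)) / 86_400_000);
}

function computeStreaks(createdAt: string[], today: Date = new Date()): StreakResult {
  // Normalise timestamps to unique UTC calendar dates (YYYY-MM-DD), sorted ascending.
  const days = [...new Set(createdAt.map((ts) => new Date(ts).toISOString().slice(0, 10)))].sort();

  // Linear scan: `run` is the length of the streak ending at the current day.
  let longest = 0;
  let run = 0;
  let prev: string | null = null;
  for (const day of days) {
    run = prev !== null && diffDays(prev, day) === 1 ? run + 1 : 1;
    if (run > longest) longest = run;
    prev = day;
  }

  // The current streak only counts if it ends today or yesterday (UTC).
  const todayStr = today.toISOString().slice(0, 10);
  const last = days[days.length - 1];
  const current = last !== undefined && diffDays(last, todayStr) <= 1 ? run : 0;

  return { currentStreakDays: current, longestStreakDays: longest };
}
```

Normalising via `toISOString()` (rather than slicing the raw string) matters when `created_at` carries a non-zero timezone offset.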
Keep the aggregation layer agnostic of specific thresholds — it just counts. Consider a Postgres RPC function to push aggregation server-side if payload sizes become large, but start with client-side aggregation for simplicity and optimize if needed.
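A minimal sketch of that threshold-agnostic shape: the layer only counts, and evaluators compare the counts against badge definitions. The field names (`activity_type`, `honorar`) and the `PeerMentorStats` interface are assumptions about the activity schema, not confirmed names.

```typescript
interface ActivityRow {
  activity_type: string; // assumed column name
  honorar: boolean;      // Blindeforbundet honorar flag, per the notes above
}

interface PeerMentorStats {
  activityCountsByType: Record<string, number>;
  totalActivities: number;
  honorarCount: number;
}

// Pure counting pass over fetched rows; no thresholds appear here.
function aggregateStats(rows: ActivityRow[]): PeerMentorStats {
  const activityCountsByType: Record<string, number> = {};
  let honorarCount = 0;
  for (const row of rows) {
    activityCountsByType[row.activity_type] = (activityCountsByType[row.activity_type] ?? 0) + 1;
    if (row.honorar) honorarCount++;
  }
  return { activityCountsByType, totalActivities: rows.length, honorarCount };
}
```

Keeping this pure (rows in, totals out) also makes the later move to a Postgres RPC a drop-in swap: only the fetch changes, not the evaluators.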
Testing Requirements
Unit tests for streak calculation logic are highest priority — test with: (1) activities on consecutive days, (2) gap of 1 day breaking the streak, (3) multiple activities on same day counting as 1 streak day, (4) activities in non-chronological insertion order, (5) single activity (streak = 1), (6) no activities (streak = 0). Integration tests (against a Supabase test project or local Supabase CLI) should verify correct counts for a seeded dataset with known totals. Use `deno test --allow-net` for integration tests gated behind an env flag.
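The six cases can be written as data-driven assertions. The `streaks` helper below is a minimal stand-in (assumed name and return shape) so the cases run in isolation; in the real suite it would be imported from the aggregation module.

```typescript
function streaks(ts: string[], today: string): { current: number; longest: number } {
  const days = [...new Set(ts.map((t) => new Date(t).toISOString().slice(0, 10)))].sort();
  let longest = 0, run = 0, prev = "";
  for (const d of days) {
    run = prev !== "" && Date.parse(d) - Date.parse(prev) === 86_400_000 ? run + 1 : 1;
    if (run > longest) longest = run;
    prev = d;
  }
  const last = days[days.length - 1];
  const current = last !== undefined && Date.parse(today) - Date.parse(last) <= 86_400_000 ? run : 0;
  return { current, longest };
}

// [name, timestamps, "today", expected current, expected longest]
const cases: [string, string[], string, number, number][] = [
  ["consecutive days", ["2024-03-01T09:00:00Z", "2024-03-02T09:00:00Z", "2024-03-03T09:00:00Z"], "2024-03-03", 3, 3],
  ["gap of 1 day breaks streak", ["2024-03-01T09:00:00Z", "2024-03-03T09:00:00Z"], "2024-03-03", 1, 1],
  ["same day counts once", ["2024-03-03T01:00:00Z", "2024-03-03T23:00:00Z"], "2024-03-03", 1, 1],
  ["non-chronological order", ["2024-03-02T09:00:00Z", "2024-03-01T09:00:00Z"], "2024-03-02", 2, 2],
  ["single activity", ["2024-03-03T09:00:00Z"], "2024-03-03", 1, 1],
  ["no activities", [], "2024-03-03", 0, 0],
];

for (const [name, ts, today, current, longest] of cases) {
  const got = streaks(ts, today);
  if (got.current !== current || got.longest !== longest) {
    throw new Error(`case failed: ${name}`);
  }
}
```

In the Deno suite each tuple would become a `Deno.test` case; the table form keeps the six scenarios visible at a glance.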
Supabase Edge Functions may experience cold start latency of 500ms–2s when they have not been invoked recently. If evaluation latency consistently exceeds the 2-second UI expectation, the celebration overlay timing SLA cannot be met without the optimistic UI fallback from the UI epic.
Mitigation & Contingency
Mitigation: Keep the edge function warm by scheduling a lightweight health-check invocation every 5 minutes in production. Optimise the function size to minimise Deno module load time. Implement the optimistic UI path in badge-bloc (from the UI epic) as the primary UX path so cold start only affects server-side reconciliation, not perceived responsiveness.
Contingency: If cold starts remain problematic, migrate badge evaluation to a Supabase database function (PL/pgSQL) fired directly by a database trigger on activity insert, eliminating the Edge Function overhead for the evaluation logic entirely while keeping the Edge Function only for FCM notification dispatch.
Supabase database webhooks can fail silently if the edge function returns a non-2xx response or times out. A missed webhook means a peer mentor does not receive a badge they earned, which is both a functional defect and a trust issue for organisations relying on milestone tracking.
Mitigation & Contingency
Mitigation: Implement idempotent webhook processing: the edge function reads the activity ID from the webhook payload and checks whether evaluation for this activity has already run (via an audit log query) before proceeding. Add Supabase webhook retry configuration (3 retries with exponential backoff). Monitor webhook failure rates via Supabase logs alert.
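The idempotency gate can be isolated as a pure decision so it is unit-testable. In the real edge function `evaluatedIds` would come from the audit-log query; here it is modelled as a set. The payload shape (`record.id`) mirrors Supabase database webhook payloads, but treat the exact fields as an assumption to verify.

```typescript
interface WebhookPayload {
  record: { id: string }; // inserted activity row, per webhook payload (assumed shape)
}

// Returns true only if no evaluation has been logged for this activity yet.
// Retries and duplicate deliveries re-send the same activity id, so a
// second invocation falls through to a no-op instead of double-awarding.
function shouldEvaluate(payload: WebhookPayload, evaluatedIds: Set<string>): boolean {
  return !evaluatedIds.has(payload.record.id);
}
```

The handler would call this after the audit-log read, then write its own audit entry before awarding, so a crash between award and log errs on the side of re-evaluation (which the idempotent award insert must tolerate).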
Contingency: Implement a nightly reconciliation job (Supabase scheduled function) that scans all activities from the past 24 hours, re-evaluates badge criteria for any peer mentor with no corresponding evaluation log entry, and awards any missing badges. Alert operations if reconciliation awards more than 5% of badges, indicating systematic webhook failure.
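The selection step of that reconciliation pass reduces to a set difference plus a rate check, which can be kept pure and tested without a database. Names are illustrative; the 5% threshold uses the miss rate over recent activities as a proxy for "reconciliation awards more than 5% of badges".

```typescript
// Given the last 24h of activity ids and the ids that already have an
// evaluation log entry, return the ids to re-evaluate and whether the
// miss rate suggests systematic webhook failure (> 5%).
function findMissedEvaluations(
  recentActivityIds: string[],
  evaluatedIds: Set<string>,
): { missed: string[]; alert: boolean } {
  const missed = recentActivityIds.filter((id) => !evaluatedIds.has(id));
  const alert =
    recentActivityIds.length > 0 &&
    missed.length / recentActivityIds.length > 0.05;
  return { missed, alert };
}
```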
The evaluation service loads badge definitions per organisation, but a misconfigured RLS policy or incorrect organisation scoping in the edge function could cause one organisation's badge criteria to be evaluated against another organisation's peer mentor activity data, leading to incorrect or cross-contaminated badge awards.
Mitigation & Contingency
Mitigation: The edge function must extract organisation_id from the webhook payload activity record and pass it explicitly to every database query. Write a security test that seeds two organisations with distinct badge definitions and verifies that evaluating a peer mentor in org A never reads or awards org B definitions. Use Supabase service role key only within the edge function, never the anon key.
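The core of that security test is an isolation assertion over badge definitions. Below, definitions are modelled in memory (field names assumed); the real test would seed two organisations in a Supabase test project and run the same assertion against actual query results.

```typescript
interface BadgeDefinition {
  id: string;
  organisation_id: string; // assumed column name, mirroring the webhook scoping
}

// Explicit org filter on every read, mirroring the required query scoping:
// organisation_id comes from the webhook payload, never from ambient state.
function definitionsForOrg(defs: BadgeDefinition[], organisationId: string): BadgeDefinition[] {
  return defs.filter((d) => d.organisation_id === organisationId);
}
```

The two-org test then asserts that evaluating a peer mentor in org A yields only org A definitions, and vice versa, regardless of what RLS would or would not have blocked under the service role key.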
Contingency: If cross-org contamination is detected in audit logs, immediately disable the edge function webhook, run a targeted SQL query to identify and revoke incorrectly awarded badges, notify affected organisations, and perform a full security review of all RLS policies on badge-related tables before re-enabling.