Implement evaluation audit logging in edge function
epic-achievement-badges-evaluation-engine-task-012 — Add structured audit logging to the edge function that records each evaluation run: timestamp, peer_mentor_id, org_id, badges evaluated, criteria outcomes, badges awarded, and any errors encountered. Persist logs to a Supabase badge_evaluation_logs table for debugging and compliance review.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 5 - 253 tasks
Can start after Tier 4 completes
Implementation Notes
Create the badge_evaluation_logs table via a Supabase migration file, not inline SQL. Use `performance.now()` in Deno at the start and end of the evaluation pipeline to compute duration_ms. Structure the log insert as a dedicated `insertEvaluationLog(log: EvaluationLog)` function imported into the edge function so it can be mocked in tests. Call this function after `res.respond()` or after building the response object — do not block the response on the log write.
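A minimal sketch of `insertEvaluationLog`, assuming the field names from the task description (not a confirmed schema); the client is injected as a parameter — a variation on the single-argument signature above — purely so tests can substitute a mock:

```typescript
// Log row shape — field names follow the task description, not a confirmed schema.
export interface EvaluationLog {
  peer_mentor_id: string;
  org_id: string;
  badges_evaluated: string[];
  criteria_outcomes: Record<string, boolean>;
  badges_awarded: string[];
  duration_ms: number;
  error_message: string | null;
}

// Narrow client interface so the real Supabase client can be swapped for a mock.
export interface LogClient {
  from(table: string): {
    insert(row: EvaluationLog): Promise<{ error: { message: string } | null }>;
  };
}

// Failures are reported, never thrown, so a logging problem can never
// propagate into the evaluation response.
export async function insertEvaluationLog(
  client: LogClient,
  log: EvaluationLog,
): Promise<boolean> {
  try {
    const { error } = await client.from("badge_evaluation_logs").insert(log);
    if (error) {
      console.error("badge_evaluation_logs insert failed:", error.message);
      return false;
    }
    return true;
  } catch (e) {
    console.error("badge_evaluation_logs insert threw:", e);
    return false;
  }
}
```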
If `EdgeRuntime.waitUntil()` is available in Supabase's Deno environment, use it for the fire-and-forget pattern; otherwise wrap the write in a void promise with a catch handler. Add database indexes in the migration: `CREATE INDEX idx_eval_logs_org ON badge_evaluation_logs(org_id)` and `CREATE INDEX idx_eval_logs_mentor ON badge_evaluation_logs(peer_mentor_id)`.
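The waitUntil-or-fallback branch can be sketched as below; the probe on `globalThis` is needed because `EdgeRuntime` is not part of Deno's standard type declarations:

```typescript
// Fire-and-forget log write: use EdgeRuntime.waitUntil when the runtime
// provides it (it keeps the instance alive until the promise settles);
// otherwise fall back to a detached promise with a catch handler.
export function scheduleLogWrite(write: () => Promise<unknown>): void {
  const pending = write().catch((e) =>
    console.error("evaluation log write failed:", e)
  );
  const runtime = (globalThis as {
    EdgeRuntime?: { waitUntil?: (p: Promise<unknown>) => void };
  }).EdgeRuntime;
  if (runtime && typeof runtime.waitUntil === "function") {
    runtime.waitUntil(pending);
  }
  // Without waitUntil the promise is simply left running; the response has
  // already been sent, so nothing blocks on the write.
}
```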
Testing Requirements
Unit tests:
(1) Verify log record fields are correctly populated for a successful run: check the badges_evaluated list, the criteria_outcomes map, the badges_awarded list, and that duration_ms is a positive integer.
(2) Verify error_message is set and badges_awarded is empty when evaluation throws.
(3) Verify a log insert failure does not propagate to the function response: mock the Supabase client insert to throw and assert the function still returns 200.
Integration test (real Supabase): run a full evaluation cycle and query badge_evaluation_logs to assert a row was inserted with the correct org_id and peer_mentor_id.
RLS test: attempt to read logs as a peer-mentor-role user and assert zero rows are returned.
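A minimal sketch of the log-failure isolation test, assuming a simplified handler shape (`handleEvaluation` and its injected `insertLog` are hypothetical stand-ins for the real edge function and `insertEvaluationLog`):

```typescript
// Hypothetical handler shape: the real edge function builds a full evaluation
// response; here only the status code matters for the isolation test.
type InsertLog = () => Promise<void>;

export async function handleEvaluation(
  insertLog: InsertLog,
): Promise<{ status: number }> {
  const response = { status: 200 }; // evaluation result would be built here
  // The log write is detached from the response path: a rejection is logged,
  // never rethrown, so a failing insert cannot change the status code.
  insertLog().catch((e) => console.error("audit log insert failed:", e));
  return response;
}
```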
Supabase Edge Functions may experience cold start latency of 500ms–2s when they have not been invoked recently. If evaluation latency consistently exceeds the 2-second UI expectation, the celebration overlay timing SLA cannot be met without the optimistic UI fallback from the UI epic.
Mitigation & Contingency
Mitigation: Keep the edge function warm by scheduling a lightweight health-check invocation every 5 minutes in production. Minimise the function's bundle size to reduce Deno module load time. Implement the optimistic UI path in badge-bloc (from the UI epic) as the primary UX path, so cold start affects only server-side reconciliation, not perceived responsiveness.
Contingency: If cold starts remain problematic, migrate badge evaluation to a Supabase database function (PL/pgSQL) fired by a database trigger on activity insert. This eliminates Edge Function overhead for the evaluation logic entirely, keeping the Edge Function only for FCM notification dispatch.
Supabase database webhooks can fail silently if the edge function returns a non-2xx response or times out. A missed webhook means a peer mentor does not receive a badge they earned, which is both a functional defect and a trust issue for organisations relying on milestone tracking.
Mitigation & Contingency
Mitigation: Implement idempotent webhook processing: the edge function reads the activity ID from the webhook payload and checks whether evaluation for this activity has already run (via an audit log query) before proceeding. Add Supabase webhook retry configuration (3 retries with exponential backoff). Monitor webhook failure rates via Supabase logs alert.
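The idempotency check can be sketched as follows; `hasLogForActivity` is a hypothetical accessor (e.g. a count query on the audit log filtered by an `activity_id` column, which the log table would need to store):

```typescript
// Idempotent webhook processing sketch: skip evaluation when an audit log
// row already exists for this activity, so webhook retries are harmless.
export interface AuditLogReader {
  hasLogForActivity(activityId: string): Promise<boolean>;
}

export async function shouldEvaluate(
  reader: AuditLogReader,
  activityId: string,
): Promise<boolean> {
  // Re-delivered webhooks (Supabase retries) resolve to false here and are dropped.
  return !(await reader.hasLogForActivity(activityId));
}
```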
Contingency: Implement a nightly reconciliation job (Supabase scheduled function) that scans all activities from the past 24 hours, re-evaluates badge criteria for any peer mentor with no corresponding evaluation log entry, and awards any missing badges. Alert operations if reconciliation awards more than 5% of badges, indicating systematic webhook failure.
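The reconciliation scan and the 5% alert threshold can be sketched as pure functions (names are illustrative; the real job would feed these from database queries):

```typescript
// Activities from the past 24h that have no corresponding evaluation log
// entry are candidates for re-evaluation.
export function findUnevaluated(
  activityIds: string[],
  loggedActivityIds: Set<string>,
): string[] {
  return activityIds.filter((id) => !loggedActivityIds.has(id));
}

// Alert operations when reconciliation accounts for more than 5% of the
// badges awarded in the period, indicating systematic webhook failure.
export function isSystematicFailure(reAwarded: number, totalAwarded: number): boolean {
  return totalAwarded > 0 && reAwarded / totalAwarded > 0.05;
}
```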
The evaluation service loads badge definitions per organisation, but a misconfigured RLS policy or incorrect organisation scoping in the edge function could cause one organisation's badge criteria to be evaluated against another organisation's peer mentor activity data, leading to incorrect or cross-contaminated badge awards.
Mitigation & Contingency
Mitigation: The edge function must extract the org_id from the webhook payload's activity record and pass it explicitly to every database query. Write a security test that seeds two organisations with distinct badge definitions and verifies that evaluating a peer mentor in org A never reads or awards org B's definitions. Use the Supabase service role key only within the edge function, never the anon key.
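The explicit-scoping invariant can be sketched as below; in the real function this is an `.eq("org_id", orgId)` filter on the definitions query, but an in-memory filter expresses the same rule (types are illustrative):

```typescript
// Every definition lookup takes the org id as a required argument rather
// than relying on RLS alone — defence in depth against cross-org reads.
export interface BadgeDefinition {
  id: string;
  org_id: string;
}

export function definitionsForOrg(
  all: BadgeDefinition[],
  orgId: string,
): BadgeDefinition[] {
  return all.filter((d) => d.org_id === orgId);
}
```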
Contingency: If cross-org contamination is detected in audit logs, immediately disable the edge function webhook, run a targeted SQL query to identify and revoke incorrectly awarded badges, notify affected organisations, and perform a full security review of all RLS policies on badge-related tables before re-enabling.