Priority: critical | Complexity: high | Area: backend | Status: pending | Assignee: backend specialist | Tier: 3

Acceptance Criteria

Orchestrator accepts `{ peer_mentor_id, org_id }` as input and returns `string[]` of badge IDs newly earned (may be empty)
Orchestrator loads badge definitions via BadgeDefinitionLoader (task-005) before running any evaluations
Orchestrator invokes PeerMentorStatsAggregator (task-006) exactly once per call, sharing the result across all evaluator invocations
Orchestrator selects the correct evaluator class based on `definition.criteria_type` ('threshold' → ThresholdCriteriaEvaluator, 'streak' → StreakCriteriaEvaluator, 'training_completion' → TrainingCompletionCriteriaEvaluator)
Orchestrator queries already-awarded badges for the peer mentor and excludes them from the result — a badge already held is never returned again
If a badge definition has an unknown `criteria_type`, orchestrator logs a warning and skips that definition (does not throw)
Orchestrator returns an empty array when the peer mentor already holds all badges they qualify for
Orchestrator returns an empty array when there are no enabled badge definitions for the org
Orchestrator is stateless between calls — no instance-level mutable state
Full orchestration (load + aggregate + evaluate all definitions) completes within 5 seconds under normal Supabase latency
All sub-component dependencies (loader, aggregator, evaluators) are injected via constructor (not instantiated internally) to enable unit testing

Technical Requirements

Frameworks
Supabase Edge Functions (Deno runtime)
supabase-js v2
APIs
Supabase PostgREST: `GET /rest/v1/awarded_badges?peer_mentor_id=eq.{id}&org_id=eq.{orgId}&select=badge_id` (to filter already-held badges)
Data Models
badge_definitions
awarded_badges (peer_mentor_id, org_id, badge_id, awarded_at)
PeerMentorStats (in-memory)
Performance Requirements
Total wall-clock time for orchestration must not exceed 5 seconds (budget: 1s loader, 3s aggregation, 1s awarded-badge lookup + evaluation)
Evaluations are O(n) over badge definitions — no nested database calls during evaluation loop
Security Requirements
org_id must always be passed to loader and awarded-badge query — never derive org from peer mentor record alone
Returned badge IDs are internal UUIDs — safe to return to calling edge function, which controls downstream award logic
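The awarded-badge lookup above can be sketched as follows. This is a sketch, not the implementation: the minimal client interface is an assumption standing in for the shared supabase-js v2 client (whose `.eq()` calls return a thenable builder rather than a plain Promise), and `fetchAwardedBadgeIds` is an illustrative name.

```typescript
// Assumed minimal shape of the supabase-js v2 query chain used here;
// in the edge function the shared SupabaseClient instance is used instead.
interface AwardedBadgesClient {
  from(table: string): {
    select(cols: string): {
      eq(col: string, val: string): {
        eq(col: string, val: string): Promise<{
          data: { badge_id: string }[] | null;
          error: { message: string } | null;
        }>;
      };
    };
  };
}

// Fetch the badge IDs a peer mentor already holds, always scoped by org_id
// (per the security requirement: never derive org from the mentor record alone).
async function fetchAwardedBadgeIds(
  client: AwardedBadgesClient,
  peerMentorId: string,
  orgId: string,
): Promise<Set<string>> {
  const { data, error } = await client
    .from("awarded_badges")
    .select("badge_id")
    .eq("peer_mentor_id", peerMentorId)
    .eq("org_id", orgId);
  if (error) throw new Error(`awarded_badges lookup failed: ${error.message}`);
  return new Set((data ?? []).map((row) => row.badge_id));
}
```

Returning a `Set` (rather than an array) keeps the later already-held filter an O(1) membership check per badge.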

Execution Context

Execution Tier
Tier 3 (413 tasks). Can start after Tier 2 completes.

Implementation Notes

The orchestrator is the composition root for the evaluation pipeline. Implement it as a `BadgeEvaluationService` class with a single public method `evaluate(peerId: string, orgId: string): Promise<string[]>`. Constructor-inject `BadgeDefinitionLoader`, `PeerMentorStatsAggregator`, and a `Map` keyed by `criteria_type` that maps to the matching evaluator instance. This design makes the criteria_type dispatch table explicit and extensible without if/else chains.
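The shape described above can be sketched as below. The interface names beyond those in this spec (`CriteriaEvaluator`, its `evaluate()` signature, the stats shape, the injected `fetchAwardedIds` function) are illustrative assumptions, not a definitive contract.

```typescript
interface BadgeDefinition { id: string; criteria_type: string; enabled: boolean }
interface PeerMentorStats { [metric: string]: number }

interface BadgeDefinitionLoader {
  load(orgId: string): Promise<BadgeDefinition[]>;
}
interface PeerMentorStatsAggregator {
  aggregate(peerMentorId: string, orgId: string): Promise<PeerMentorStats>;
}
interface CriteriaEvaluator {
  evaluate(definition: BadgeDefinition, stats: PeerMentorStats): boolean;
}

class BadgeEvaluationService {
  // All dependencies are constructor-injected; no instance-level mutable state.
  constructor(
    private readonly loader: BadgeDefinitionLoader,
    private readonly aggregator: PeerMentorStatsAggregator,
    private readonly evaluators: Map<string, CriteriaEvaluator>,
    private readonly fetchAwardedIds: (peerId: string, orgId: string) => Promise<Set<string>>,
    private readonly warn: (msg: string) => void = console.warn,
  ) {}

  async evaluate(peerId: string, orgId: string): Promise<string[]> {
    const definitions = await this.loader.load(orgId);
    if (definitions.length === 0) return [];
    // Aggregate exactly once; share the stats across all evaluator invocations.
    const stats = await this.aggregator.aggregate(peerId, orgId);
    const alreadyAwarded = await this.fetchAwardedIds(peerId, orgId);
    const newlyEarned: string[] = [];
    for (const def of definitions) {
      const evaluator = this.evaluators.get(def.criteria_type);
      if (!evaluator) {
        // Unknown criteria_type: warn and skip, never throw.
        this.warn(`Unknown criteria_type '${def.criteria_type}' for badge ${def.id}; skipping`);
        continue;
      }
      if (evaluator.evaluate(def, stats) && !alreadyAwarded.has(def.id)) {
        newlyEarned.push(def.id);
      }
    }
    return newlyEarned;
  }
}
```

Because the `Map` is injected, registering a fourth criteria type needs no changes to the service itself.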

The awarded-badges query should use the same Supabase client instance passed to all sub-components. Apply the awarded-badge filter as a set difference (`new Set(qualifyingIds).difference(new Set(alreadyAwardedIds))`) for clarity; note that `Set.prototype.difference` requires a recent runtime, and a plain `filter` over a `Set` membership check is equivalent elsewhere. Do not write any badge award records here — this service is read-only and returns candidates only; the caller (edge function handler) handles the write.
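A portable equivalent of that set-difference filter (function name illustrative):

```typescript
// Keep only qualifying IDs the mentor does not already hold. On runtimes with
// ES Set methods (e.g. recent Deno / Node 22+),
// new Set(qualifyingIds).difference(new Set(alreadyAwardedIds)) is equivalent.
function filterNewlyEarned(qualifyingIds: string[], alreadyAwardedIds: string[]): string[] {
  const awarded = new Set(alreadyAwardedIds);
  return qualifyingIds.filter((id) => !awarded.has(id));
}
```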

Testing Requirements

Integration test scenarios required: (1) peer mentor with stats meeting 2 of 3 enabled badge criteria — verify exactly 2 IDs returned, (2) peer mentor already holding all qualifying badges — verify empty array, (3) org with no enabled badges — verify empty array, (4) unknown criteria_type in one definition — verify it is skipped and other badges still evaluated, (5) aggregation layer returns zero-values — verify no badges awarded. Unit tests should mock all sub-dependencies (loader, aggregator, evaluators) and verify orchestrator call sequencing and filtering logic in isolation.
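For the unit-test sequencing checks, a small recording wrapper avoids pulling in a mocking library; this helper and its names are hypothetical, not part of the spec.

```typescript
// Wrap a stub so tests can assert call counts and call ordering:
// every invocation appends its name to a shared log and bumps a counter.
function recorded<A extends unknown[], R>(
  log: string[],
  name: string,
  fn: (...args: A) => R,
): { calls: number; invoke: (...args: A) => R } {
  const rec = {
    calls: 0,
    invoke: (...args: A): R => {
      rec.calls += 1;
      log.push(name);
      return fn(...args);
    },
  };
  return rec;
}
```

Asserting `aggregator.calls === 1` after an `evaluate()` run then verifies the "aggregate exactly once per call" criterion directly.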

Component
Badge Evaluation Service (type: service, priority: high)
Epic Risks (3)
Risk 1: technical (medium impact, medium probability)

Supabase Edge Functions may experience cold start latency of 500ms–2s when they have not been invoked recently. If evaluation latency consistently exceeds the 2-second UI expectation, the celebration overlay timing SLA cannot be met without the optimistic UI fallback from the UI epic.

Mitigation & Contingency

Mitigation: Keep the edge function warm by scheduling a lightweight health-check invocation every 5 minutes in production. Optimise the function size to minimise Deno module load time. Implement the optimistic UI path in badge-bloc (from the UI epic) as the primary UX path so cold start only affects server-side reconciliation, not perceived responsiveness.

Contingency: If cold starts remain problematic, migrate badge evaluation to a Supabase database function (pl/pgsql) triggered directly by a database trigger on activity insert, eliminating the Edge Function overhead entirely for the evaluation logic while keeping Edge Function only for FCM notification dispatch.

Risk 2: integration (high impact, low probability)

Supabase database webhooks can fail silently if the edge function returns a non-2xx response or times out. A missed webhook means a peer mentor does not receive a badge they earned, which is both a functional defect and a trust issue for organisations relying on milestone tracking.

Mitigation & Contingency

Mitigation: Implement idempotent webhook processing: the edge function reads the activity ID from the webhook payload and checks whether evaluation for this activity has already run (via an audit log query) before proceeding. Add Supabase webhook retry configuration (3 retries with exponential backoff). Monitor webhook failure rates via Supabase logs alert.
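The idempotency guard above can be sketched as below; the audit-log accessors are assumed interfaces standing in for the audit log query, not an existing API.

```typescript
// Process a webhook for an activity at most once: a retried or duplicated
// delivery for the same activity ID becomes a no-op.
async function runEvaluationOnce(
  activityId: string,
  auditLog: {
    hasRun: (activityId: string) => Promise<boolean>;
    markRun: (activityId: string) => Promise<void>;
  },
  evaluate: () => Promise<void>,
): Promise<boolean> {
  if (await auditLog.hasRun(activityId)) return false; // already processed
  await evaluate();
  await auditLog.markRun(activityId);
  return true;
}
```

Returning a boolean lets the edge function respond 2xx on duplicates without re-running evaluation, which is what stops the webhook retry loop.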

Contingency: Implement a nightly reconciliation job (Supabase scheduled function) that scans all activities from the past 24 hours, re-evaluates badge criteria for any peer mentor with no corresponding evaluation log entry, and awards any missing badges. Alert operations if reconciliation awards more than 5% of badges, indicating systematic webhook failure.

Risk 3: security (high impact, low probability)

The evaluation service loads badge definitions per organisation, but a misconfigured RLS policy or incorrect organisation scoping in the edge function could cause one organisation's badge criteria to be evaluated against another organisation's peer mentor activity data, leading to incorrect or cross-contaminated badge awards.

Mitigation & Contingency

Mitigation: The edge function must extract organisation_id from the webhook payload activity record and pass it explicitly to every database query. Write a security test that seeds two organisations with distinct badge definitions and verifies that evaluating a peer mentor in org A never reads or awards org B definitions. Use Supabase service role key only within the edge function, never the anon key.

Contingency: If cross-org contamination is detected in audit logs, immediately disable the edge function webhook, run a targeted SQL query to identify and revoke incorrectly awarded badges, notify affected organisations, and perform a full security review of all RLS policies on badge-related tables before re-enabling.