Implement PeerMentorStatsAggregator Supabase queries
epic-achievement-badges-services-task-004 — Implement the core Supabase query layer inside PeerMentorStatsAggregator. Write efficient index-targeted queries for: total assignment count per mentor, current streak length (consecutive weeks with activity), training/course completions, and honorar-milestone counts (3rd and 15th assignment thresholds for Blindeforbundet). Avoid full-table scans by using indexed mentor_id + date filters.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
Streak calculation algorithm: fetch activity_date values (distinct by ISO week), ordered descending, limited to 104 rows (two years of weeks). Iterate from the most recent week backward, incrementing the streak counter while each week falls exactly one ISO week before the previous one; stop at the first gap. Dart's DateTime has no built-in ISO week number, so either use the intl package's DateFormat or implement the calculation manually as an extension method: `extension on DateTime { int get isoWeekNumber { ... } }`.
For assignment counts by period (weekly/monthly), either use server-side date truncation via a Supabase RPC or fetch the dates and group them client-side in Dart; prefer server-side grouping via a Postgres function where available, since it reduces data transfer.
Honorar milestones for Blindeforbundet: the 3rd and 15th assignment thresholds trigger 'kontorhonorar' payments. This data feeds directly into the payment workflow, so accuracy is critical.
For scale, consider caching aggregated stats in a Supabase materialized view or a dedicated stats table updated by database triggers, but implement direct queries first as the baseline.
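The streak logic above can be sketched in Dart as follows. This is a minimal sketch, not the final implementation: the helper names are assumptions, and weeks are compared by their Monday start date (normalized to UTC midnight) rather than by raw ISO week number, which sidesteps both year-boundary wraparound and DST-skewed day arithmetic.

```dart
/// Monday of the ISO week containing [d], normalized to UTC midnight so
/// that day arithmetic is not skewed by DST transitions.
DateTime isoWeekStart(DateTime d) {
  final day = DateTime.utc(d.year, d.month, d.day);
  return day.subtract(Duration(days: day.weekday - DateTime.monday));
}

/// ISO 8601 week number, since DateTime has no built-in equivalent.
extension IsoWeek on DateTime {
  int get isoWeekNumber {
    // The Thursday of the current week decides which ISO year/week this is.
    final thursday = DateTime.utc(year, month, day)
        .add(Duration(days: DateTime.thursday - weekday));
    final jan1 = DateTime.utc(thursday.year, 1, 1);
    return thursday.difference(jan1).inDays ~/ 7 + 1;
  }
}

/// Counts consecutive weeks of activity, newest first; stops at the first gap.
int computeStreak(Iterable<DateTime> activityDates) {
  if (activityDates.isEmpty) return 0;
  final weeks = activityDates.map(isoWeekStart).toSet().toList()
    ..sort((a, b) => b.compareTo(a)); // most recent week first
  var streak = 1;
  for (var i = 1; i < weeks.length; i++) {
    if (weeks[i - 1].difference(weeks[i]).inDays == 7) {
      streak++;
    } else {
      break; // first gap ends the streak
    }
  }
  return streak;
}
```

Keeping this computation as a pure function over a list of dates also keeps it trivially unit-testable, independent of any Supabase wiring.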
Inject the Supabase client via constructor for testability.
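A minimal constructor-injection sketch, assuming the supabase_flutter v2 API surface and a `mentor_activities` table (table and column names here are placeholders, and the exact `.count()` call shape should be verified against the installed client version):

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

class PeerMentorStatsAggregator {
  /// The client is injected so tests can pass a stub instead of reaching
  /// for the global Supabase.instance.client singleton.
  PeerMentorStatsAggregator(this._client);

  final SupabaseClient _client;

  /// Total assignment count for one mentor. An indexed equality filter
  /// plus an exact count means no row data is transferred, only a number.
  Future<int> totalAssignments(String mentorId) async {
    final res = await _client
        .from('mentor_activities') // assumed table name
        .select('id')
        .eq('peer_mentor_id', mentorId) // hits the (peer_mentor_id, ...) index
        .count(CountOption.exact);
    return res.count;
  }
}
```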
Testing Requirements
Two layers of testing required: (1) Unit tests with a mocked Supabase client stub that returns fixture data — assert that aggregateStats correctly maps query results to PeerMentorStats model fields, that streak calculation logic handles edge cases (no activity, single week, gap of exactly one week, gap of two weeks), and that honorar thresholds are evaluated correctly. (2) Integration test against a real Supabase test project (or local Supabase CLI instance) that seeds known data for a test mentor and asserts correct aggregate values are returned. Integration tests should be tagged and excluded from CI unit test runs. Place unit tests in test/features/badges/data/services/ and integration tests in integration_test/badges/.
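For the streak edge cases listed in (1), a unit-test sketch along these lines would apply, assuming the streak logic has been extracted into a pure helper (here called `computeStreak`, a placeholder name) so it can be exercised without any Supabase stubbing:

```dart
import 'package:test/test.dart';

void main() {
  group('streak edge cases', () {
    test('no activity yields zero', () {
      expect(computeStreak(const []), 0);
    });

    test('single week yields one', () {
      expect(computeStreak([DateTime.utc(2024, 5, 8)]), 1);
    });

    test('consecutive weeks accumulate', () {
      expect(
        computeStreak([
          DateTime.utc(2024, 5, 8), // ISO week 19
          DateTime.utc(2024, 5, 1), // ISO week 18
          DateTime.utc(2024, 4, 24), // ISO week 17
        ]),
        3,
      );
    });

    test('a missing week breaks the streak', () {
      expect(
        computeStreak([
          DateTime.utc(2024, 5, 8), // ISO week 19
          DateTime.utc(2024, 4, 24), // ISO week 17; week 18 is missing
        ]),
        1,
      );
    });
  });
}
```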
peer-mentor-stats-aggregator must compute streaks and threshold counts across potentially hundreds of activity records per peer mentor. Naive queries (full table scans or N+1 patterns) will cause slow badge evaluation, especially when triggered on every activity save for all active peer mentors.
Mitigation & Contingency
Mitigation: Design aggregation queries using Supabase RPCs with window functions or materialised views from the start. Add database indexes on (peer_mentor_id, activity_date, activity_type) before writing any service code. Profile all aggregation queries against a dataset of 500+ activities during development.
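The mitigation above might look like the following migration sketch; all table, column, and function names are assumptions to be aligned with the real schema:

```sql
-- Assumed table/column names; align with the actual schema before applying.
create index if not exists idx_activity_mentor_date_type
  on mentor_activities (peer_mentor_id, activity_date, activity_type);

-- Weekly assignment counts computed server-side; callable from Dart via
-- supabase.rpc('mentor_weekly_counts', params: {'p_mentor_id': ...}).
create or replace function mentor_weekly_counts(p_mentor_id uuid)
returns table (week_start date, assignment_count bigint)
language sql
stable
as $$
  select date_trunc('week', activity_date)::date as week_start,
         count(*)                                as assignment_count
  from mentor_activities
  where peer_mentor_id = p_mentor_id
  group by 1
  order by 1 desc
  limit 104;  -- two years of weeks, matching the streak window
$$;
```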
Contingency: If query performance is insufficient at launch, implement incremental stat caching: maintain a peer_mentor_stats snapshot table updated on each activity insert via a database trigger, so the aggregator reads from pre-computed values rather than scanning raw activity rows.
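The contingency's trigger-maintained snapshot could be sketched as below; names and the column set are illustrative (a real version would also maintain streak and milestone fields, and handle deletes/updates):

```sql
-- Hypothetical snapshot table, kept current by a row-level trigger.
create table if not exists peer_mentor_stats (
  peer_mentor_id    uuid primary key,
  total_assignments bigint      not null default 0,
  updated_at        timestamptz not null default now()
);

create or replace function bump_peer_mentor_stats()
returns trigger
language plpgsql
as $$
begin
  insert into peer_mentor_stats as s (peer_mentor_id, total_assignments)
  values (new.peer_mentor_id, 1)
  on conflict (peer_mentor_id)
  do update set total_assignments = s.total_assignments + 1,
                updated_at        = now();
  return new;
end;
$$;

create trigger trg_bump_peer_mentor_stats
after insert on mentor_activities
for each row execute function bump_peer_mentor_stats();
```

With this in place the aggregator reads one pre-computed row per mentor instead of scanning raw activity rows.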
badge-award-service must be idempotent, but if two concurrent edge function invocations evaluate the same peer mentor simultaneously (e.g., from a rapid double-save), both could pass the uniqueness check before either commits, resulting in duplicate badge records.
Mitigation & Contingency
Mitigation: Rely on the database-level uniqueness constraint (peer_mentor_id, badge_definition_id) as the final guard. In the service layer, use an upsert with ON CONFLICT DO NOTHING and return the existing record. Add a Postgres advisory lock or serialisable transaction for the award sequence during the edge function integration epic.
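The database-level guard described above can be sketched as follows (table and column names assumed):

```sql
-- Final guard: unique constraint plus an idempotent insert. Returns the new
-- row on first award and zero rows on a duplicate attempt, in which case the
-- service selects and returns the existing record instead.
insert into peer_mentor_badges (peer_mentor_id, badge_definition_id, earned_at)
values ($1, $2, now())
on conflict (peer_mentor_id, badge_definition_id) do nothing
returning *;

-- Optional stricter serialisation of the award sequence, to be introduced
-- during the edge function integration epic:
-- select pg_advisory_xact_lock(hashtext($1::text || ':' || $2::text));
```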
Contingency: If duplicate records are discovered in production, run a deduplication migration to remove extras (keeping earliest earned_at) and add a unique index if not already present. Alert engineering via Supabase database webhook on constraint violations.
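A deduplication migration in that contingency might look like this sketch (assumes an `id` primary key as the tiebreaker for identical `earned_at` values):

```sql
-- One-off cleanup: keep the earliest earned_at per (mentor, badge) pair.
delete from peer_mentor_badges
where id in (
  select id
  from (
    select id,
           row_number() over (
             partition by peer_mentor_id, badge_definition_id
             order by earned_at asc, id asc
           ) as rn
    from peer_mentor_badges
  ) ranked
  where rn > 1
);

-- Guard against recurrence if the unique index was in fact missing.
create unique index if not exists uq_peer_mentor_badge
  on peer_mentor_badges (peer_mentor_id, badge_definition_id);
```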
The badge-configuration-service must validate org admin-supplied criteria JSON on save, but the full range of valid criteria types (threshold, streak, training-completion, tier-based) may not be fully enumerated during development, leading to either over-permissive or over-restrictive validation that frustrates admins.
Mitigation & Contingency
Mitigation: Define a versioned Dart sealed class hierarchy for CriteriaType before writing the validation logic. Review the hierarchy with product against all known badge types across NHF, Blindeforbundet, and HLF before implementation. Build the validator against the sealed class so new criteria types require an explicit code addition.
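A Dart 3 sealed hierarchy along these lines would support that mitigation; the type names and fields are illustrative, and the real set must come out of the product review:

```dart
/// Versioned criteria model: adding a new subtype forces every switch
/// over CriteriaType to be updated explicitly, or compilation fails.
sealed class CriteriaType {
  const CriteriaType();

  /// Parses admin-supplied criteria JSON; unknown types are rejected.
  factory CriteriaType.fromJson(Map<String, dynamic> json) =>
      switch (json['type'] as String?) {
        'threshold' => ThresholdCriteria(count: json['count'] as int),
        'streak' => StreakCriteria(weeks: json['weeks'] as int),
        'training_completion' =>
          TrainingCompletionCriteria(courseId: json['course_id'] as String),
        'tier' => TierCriteria(tier: json['tier'] as int),
        _ => throw FormatException("Unknown criteria type: ${json['type']}"),
      };
}

final class ThresholdCriteria extends CriteriaType {
  const ThresholdCriteria({required this.count});
  final int count;
}

final class StreakCriteria extends CriteriaType {
  const StreakCriteria({required this.weeks});
  final int weeks;
}

final class TrainingCompletionCriteria extends CriteriaType {
  const TrainingCompletionCriteria({required this.courseId});
  final String courseId;
}

final class TierCriteria extends CriteriaType {
  const TierCriteria({required this.tier});
  final int tier;
}
```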
Contingency: If admins encounter validation rejections for legitimate criteria, expose a 'criteria_raw' escape hatch (JSON passthrough, admin-only) with a product warning, and schedule a sprint to formalise the new criteria type properly.