Priority: critical · Complexity: high · Category: database · Status: pending · Assignee: database specialist · Execution tier: Tier 1

Acceptance Criteria

PeerMentorStatsAggregator class exposes an async method aggregateStats(String mentorId, String orgId) returning Future&lt;PeerMentorStats&gt;
Assignment count query uses a Supabase .select('count') with .eq('mentor_id', mentorId).eq('org_id', orgId) filter — no full-table scan
Streak calculation correctly identifies consecutive ISO week numbers with at least one activity; a gap of one week resets the streak
Streak query fetches only the activity dates for the mentor (not full rows) using .select('activity_date').order('activity_date', ascending: false) with a LIMIT to avoid fetching unbounded history
Training completion query fetches only completed course IDs for the mentor using .eq('completion_status', 'completed') filter
Honorar milestone query counts assignments with type filters relevant to Blindeforbundet's honorar-eligible assignment types
HonorarMilestone.hasReachedThirdAssignment is true when totalAssignmentCount >= 3
HonorarMilestone.hasReachedFifteenthAssignment is true when totalAssignmentCount >= 15
All queries include org_id filter to enforce multi-tenant data isolation at the query level (in addition to RLS)
The aggregator catches Supabase errors and throws a typed AggregationException with the source error attached
Integration test confirms aggregateStats returns correct PeerMentorStats for a seeded test mentor with known assignment history
Query execution time for a mentor with 500 assignments is under 300ms (verified in integration test with timing assertion)
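As a sketch of the server-side count required above, assuming supabase_flutter v2's `.count()` modifier; the `assignments` table name is taken from the criteria, and the helper name is illustrative:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Counts a mentor's assignments without transferring rows: PostgREST
/// performs COUNT server-side and returns only the number.
Future<int> fetchAssignmentCount(
  SupabaseClient client,
  String mentorId,
  String orgId,
) async {
  final res = await client
      .from('assignments') // hypothetical table name
      .select('id')
      .eq('mentor_id', mentorId)
      .eq('org_id', orgId) // explicit tenant filter, in addition to RLS
      .count(CountOption.exact);
  return res.count; // server-side COUNT, not client-side list length
}
```

Because only the count travels over the wire, this query stays flat regardless of how many assignments the mentor has, which is what makes the 300ms timing assertion realistic.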

Technical Requirements

frameworks
Flutter
Supabase Dart client (supabase_flutter)
apis
Supabase PostgREST API
Supabase RLS (Row Level Security)
data models
PeerMentorStats
AssignmentCount
StreakData
TrainingCompletion
HonorarMilestone
performance requirements
All queries must target indexed columns (mentor_id, org_id, activity_date) — no sequential scans
Streak calculation must use a LIMIT on date fetch (e.g., last 104 weeks = 2 years) to bound query size
Assignment count must use COUNT aggregation server-side, not client-side list length
Total aggregateStats call must complete under 500ms for typical mentor history
security requirements
All queries must include explicit org_id filter even if RLS already enforces it — defense in depth
Never expose raw SQL or Supabase credentials in error messages surfaced to UI
Use Supabase service role only in server-side contexts; use anon/user JWT in client context

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Streak calculation algorithm: fetch activity_date values (distinct by ISO week) ordered descending with a limit of 104 (two years of weeks). Iterate from the most recent week backward; increment the streak counter while each consecutive week is exactly one ISO week before the previous. Stop at the first gap. Use the intl package's DateFormat or a manual ISO week calculation; Dart's DateTime has no built-in isoWeekNumber, so implement it as an extension method: `extension on DateTime { int get isoWeekNumber { ... } }`.

For assignment count by period (weekly/monthly), use Supabase's date truncation via an RPC, or fetch dates client-side and group them in Dart; prefer server-side grouping via a Postgres function where available to reduce data transfer. Honorar milestone for Blindeforbundet: the 3rd and 15th assignment thresholds trigger 'kontorhonorar' payments, and this data feeds directly into the payment workflow, so accuracy is critical. Consider caching aggregated stats in a Supabase materialized view or a dedicated stats table updated by database triggers for scale, but implement direct queries first as the baseline.
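A minimal sketch of the extension method and the streak walk described above. The ISO 8601 rule (week 1 is the week containing the year's first Thursday) is standard; `currentStreak` and `isoWeekIndex` are hypothetical helper names, not part of any existing API:

```dart
extension IsoWeek on DateTime {
  /// The Thursday of this date's ISO week, computed in UTC so that
  /// daylight-saving transitions cannot skew the day arithmetic.
  DateTime get _isoThursday => DateTime.utc(year, month, day)
      .add(Duration(days: DateTime.thursday - weekday));

  /// ISO 8601 week number: week 1 is the week containing the year's
  /// first Thursday.
  int get isoWeekNumber =>
      _isoThursday.difference(DateTime.utc(_isoThursday.year, 1, 1)).inDays ~/
          7 +
      1;

  /// Monotonic week index (weeks since the epoch week), which makes
  /// "consecutive week" comparisons trivial across year boundaries.
  int get isoWeekIndex =>
      _isoThursday.difference(DateTime.utc(1970, 1, 1)).inDays ~/ 7;
}

/// Walks descending activity dates and counts consecutive ISO weeks;
/// any gap of one or more weeks stops the count.
int currentStreak(List<DateTime> activityDatesDesc) {
  if (activityDatesDesc.isEmpty) return 0;
  // Deduplicate to one entry per ISO week, newest first.
  final weeks = activityDatesDesc.map((d) => d.isoWeekIndex).toSet().toList()
    ..sort((a, b) => b.compareTo(a));
  var streak = 1;
  for (var i = 1; i < weeks.length; i++) {
    if (weeks[i] == weeks[i - 1] - 1) {
      streak++;
    } else {
      break; // first gap resets the streak
    }
  }
  return streak;
}
```

Comparing week indexes rather than (year, weekNumber) pairs sidesteps the year-boundary case where W52/W53 of one year precedes W01 of the next.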

Inject the Supabase client via constructor for testability.
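A minimal sketch of the injection and error-wrapping pattern, assuming the typed AggregationException from the acceptance criteria; the `training_completions` table and field names are illustrative assumptions:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Typed wrapper so callers never see raw PostgREST details; the source
/// error travels along for server-side logging only.
class AggregationException implements Exception {
  AggregationException(this.message, {this.cause});
  final String message;
  final Object? cause;
  @override
  String toString() => 'AggregationException: $message';
}

class PeerMentorStatsAggregator {
  /// The client is injected so unit tests can substitute a stub.
  PeerMentorStatsAggregator(this._client);
  final SupabaseClient _client;

  /// Fetches only completed course IDs for the mentor, never full rows.
  Future<List<String>> completedCourseIds(String mentorId, String orgId) async {
    try {
      final rows = await _client
          .from('training_completions') // hypothetical table name
          .select('course_id')
          .eq('mentor_id', mentorId)
          .eq('org_id', orgId)
          .eq('completion_status', 'completed');
      return [for (final row in rows) row['course_id'] as String];
    } on PostgrestException catch (e) {
      // Message stays generic: no SQL or credentials leak to the UI.
      throw AggregationException('Failed to load training completions',
          cause: e);
    }
  }
}
```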

Testing Requirements

Two layers of testing required: (1) Unit tests with a mocked Supabase client stub that returns fixture data — assert that aggregateStats correctly maps query results to PeerMentorStats model fields, that streak calculation logic handles edge cases (no activity, single week, gap of exactly one week, gap of two weeks), and that honorar thresholds are evaluated correctly. (2) Integration test against a real Supabase test project (or local Supabase CLI instance) that seeds known data for a test mentor and asserts correct aggregate values are returned. Integration tests should be tagged and excluded from CI unit test runs. Place unit tests in test/features/badges/data/services/ and integration tests in integration_test/badges/.
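A sketch of the layer (1) edge-case tests using package:test; `currentStreak` stands in for whatever streak helper the aggregator actually exposes (a hypothetical name, as is the commented import path):

```dart
import 'package:test/test.dart';

// Hypothetical import of the streak helper under test:
// import 'package:app/features/badges/data/services/streak.dart';

void main() {
  group('streak calculation edge cases', () {
    test('no activity yields a streak of zero', () {
      expect(currentStreak(<DateTime>[]), 0);
    });

    test('a single active week yields a streak of one', () {
      expect(currentStreak([DateTime(2024, 3, 4)]), 1);
    });

    test('consecutive ISO weeks accumulate', () {
      expect(
        currentStreak([DateTime(2024, 3, 11), DateTime(2024, 3, 4)]),
        2,
      );
    });

    test('a gap of exactly one missing week resets the streak', () {
      // Activity in W12 and W10 with nothing in W11:
      // only the most recent week counts.
      expect(
        currentStreak([DateTime(2024, 3, 18), DateTime(2024, 3, 4)]),
        1,
      );
    });
  });
}
```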

Component
Peer Mentor Stats Aggregator
Type: service · Complexity: medium
Epic Risks (3)
Impact: high · Probability: medium · Category: technical

peer-mentor-stats-aggregator must compute streaks and threshold counts across potentially hundreds of activity records per peer mentor. Naive queries (full table scans or N+1 patterns) will cause slow badge evaluation, especially when triggered on every activity save for all active peer mentors.

Mitigation & Contingency

Mitigation: Design aggregation queries using Supabase RPCs with window functions or materialised views from the start. Add database indexes on (peer_mentor_id, activity_date, activity_type) before writing any service code. Profile all aggregation queries against a dataset of 500+ activities during development.

Contingency: If query performance is insufficient at launch, implement incremental stat caching: maintain a peer_mentor_stats snapshot table updated on each activity insert via a database trigger, so the aggregator reads from pre-computed values rather than scanning raw activity rows.

Impact: medium · Probability: low · Category: technical

badge-award-service must be idempotent, but if two concurrent edge function invocations evaluate the same peer mentor simultaneously (e.g., from a rapid double-save), both could pass the uniqueness check before either commits, resulting in duplicate badge records.

Mitigation & Contingency

Mitigation: Rely on the database-level uniqueness constraint (peer_mentor_id, badge_definition_id) as the final guard. In the service layer, use an upsert with ON CONFLICT DO NOTHING and return the existing record. Add a Postgres advisory lock or serialisable transaction for the award sequence during the edge function integration epic.
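The service-layer half of that mitigation could look like the following, assuming supabase_flutter's `upsert` with `onConflict`/`ignoreDuplicates` (which PostgREST translates to ON CONFLICT DO NOTHING); the table and column names are illustrative:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Idempotent badge award: the unique (peer_mentor_id, badge_definition_id)
/// index remains the final guard, so a concurrent duplicate award becomes
/// a silent no-op and the earliest earned_at is kept.
Future<void> awardBadge(
  SupabaseClient client,
  String peerMentorId,
  String badgeDefinitionId,
) async {
  await client.from('peer_mentor_badges').upsert( // hypothetical table name
    {
      'peer_mentor_id': peerMentorId,
      'badge_definition_id': badgeDefinitionId,
      'earned_at': DateTime.now().toUtc().toIso8601String(),
    },
    onConflict: 'peer_mentor_id,badge_definition_id',
    ignoreDuplicates: true, // maps to ON CONFLICT DO NOTHING
  );
}
```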

Contingency: If duplicate records are discovered in production, run a deduplication migration to remove extras (keeping earliest earned_at) and add a unique index if not already present. Alert engineering via Supabase database webhook on constraint violations.

Impact: medium · Probability: medium · Category: scope

The badge-configuration-service must validate org admin-supplied criteria JSON on save, but the full range of valid criteria types (threshold, streak, training-completion, tier-based) may not be fully enumerated during development, leading to either over-permissive or over-restrictive validation that frustrates admins.

Mitigation & Contingency

Mitigation: Define a versioned Dart sealed class hierarchy for CriteriaType before writing the validation logic. Review the hierarchy with product against all known badge types across NHF, Blindeforbundet, and HLF before implementation. Build the validator against the sealed class so new criteria types require an explicit code addition.
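One way the sealed hierarchy could be shaped (Dart 3 class modifiers and switch expressions; the concrete field names are assumptions, not a finalized schema). Because the class is sealed, a switch over it is exhaustively checked, so adding a new criteria type is a compile error until the validator handles it:

```dart
/// Versioned criteria hierarchy for badge definitions.
sealed class CriteriaType {
  const CriteriaType();
}

final class ThresholdCriteria extends CriteriaType {
  const ThresholdCriteria({required this.metric, required this.threshold});
  final String metric;
  final int threshold;
}

final class StreakCriteria extends CriteriaType {
  const StreakCriteria({required this.weeks});
  final int weeks;
}

final class TrainingCompletionCriteria extends CriteriaType {
  const TrainingCompletionCriteria({required this.courseIds});
  final List<String> courseIds;
}

final class TierCriteria extends CriteriaType {
  const TierCriteria({required this.tier});
  final int tier;
}

/// Example exhaustive consumer: omitting a case fails to compile.
String describe(CriteriaType c) => switch (c) {
      ThresholdCriteria(:final metric, :final threshold) =>
        '$metric >= $threshold',
      StreakCriteria(:final weeks) => '$weeks-week streak',
      TrainingCompletionCriteria(:final courseIds) =>
        'completed ${courseIds.length} courses',
      TierCriteria(:final tier) => 'tier $tier',
    };
```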

Contingency: If admins encounter validation rejections for legitimate criteria, expose a 'criteria_raw' escape hatch (JSON passthrough, admin-only) with a product warning, and schedule a sprint to formalise the new criteria type properly.