Priority: critical · Complexity: medium · Type: testing · Status: pending · Owner: testing specialist · Tier: 3

Acceptance Criteria

Test suite covers the 3rd-assignment threshold: a mentor with exactly 2 assignments does NOT trigger the threshold; a mentor with exactly 3 does
Test suite covers the 15th-assignment threshold: a mentor with exactly 14 assignments does NOT trigger; one with exactly 15 does
Streak computation tests cover: 0 sessions (streak=0), 1 consecutive week (streak=1), 3 consecutive weeks (streak=3), break after week 2 then 2 more consecutive weeks (streak=2), and all sessions in the same week (streak=1)
Training completion tests cover: 0 completions, partial completion (modules done but course not finished), and full completion
A test verifies that the Supabase query builder receives a filter on mentor_id and a date range parameter (confirming the query constrains on the indexed columns rather than scanning all rows)
All tests pass with a mocked Supabase client — no real network calls
Test file achieves 90%+ branch coverage on PeerMentorStatsAggregator business-logic methods, as reported by `flutter test --coverage`
Each test has a descriptive name following the pattern: given_<state>_when_<action>_then_<outcome>

Technical Requirements

Frameworks: Flutter, flutter_test
APIs: Supabase (mocked)
Data models: MentorStats, ActivitySession, TrainingCompletion
Performance requirements: full test suite must complete in under 10 seconds
Security requirements: no real Supabase credentials or mentor PII in test fixtures; use anonymised fake data

Execution Context

Execution Tier: Tier 3 (413 tasks)

Can start after Tier 2 completes

Implementation Notes

Structure the test file with one top-level group per public method of PeerMentorStatsAggregator. For streak computation, construct date sequences programmatically (e.g., `DateTime.utc(2025, 1, 6)` for a Monday) to avoid fragile hardcoded dates. For Supabase query verification, capture the query parameters in the mock and assert on the filter keys; do not assert on SQL strings. If PeerMentorStatsAggregator is not currently injectable (i.e., it hardcodes the Supabase client), refactor the constructor to accept a SupabaseClient parameter before writing tests.
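If the aggregator is refactored to depend on a narrow query interface rather than the concrete SupabaseClient, both the injection and the parameter-capture assertion become simple. A sketch under that assumption — all names (ActivityQuery, fetchSessions, RecordingActivityQuery) are hypothetical, not the real API:

```dart
/// Narrow query surface the aggregator could depend on instead of the
/// concrete SupabaseClient. Hypothetical interface for illustration.
abstract interface class ActivityQuery {
  Future<List<Map<String, dynamic>>> fetchSessions({
    required String mentorId,
    required DateTime from,
    required DateTime to,
  });
}

/// Hand-written fake that records the parameters it receives instead of
/// touching the network, so tests can assert on the filter keys.
class RecordingActivityQuery implements ActivityQuery {
  String? lastMentorId;
  DateTime? lastFrom;
  DateTime? lastTo;

  @override
  Future<List<Map<String, dynamic>>> fetchSessions({
    required String mentorId,
    required DateTime from,
    required DateTime to,
  }) async {
    lastMentorId = mentorId;
    lastFrom = from;
    lastTo = to;
    return const []; // parameter-capture tests need no real rows
  }
}

Future<void> main() async {
  final query = RecordingActivityQuery();
  // A real test would construct PeerMentorStatsAggregator(query) and call
  // a stats method; calling the fake directly shows the assertion shape.
  await query.fetchSessions(
    mentorId: 'mentor-1',
    from: DateTime.utc(2025, 1, 1),
    to: DateTime.utc(2025, 2, 1),
  );
  assert(query.lastMentorId == 'mentor-1');
  assert(query.lastFrom!.isBefore(query.lastTo!));
}
```

Asserting on recorded parameters rather than SQL strings keeps the tests stable if the underlying query builder changes.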

Consider extracting streak and threshold logic into package-private pure functions to make them directly testable without any mock setup.
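A minimal sketch of such a pure streak function, assuming Monday-start weeks and that the streak is counted back from the most recent session; the function names and week convention are illustrative, not the aggregator's actual API:

```dart
/// Week index with Monday-start weeks: days since the Unix epoch,
/// shifted by 3 because 1970-01-01 was a Thursday.
int weekIndex(DateTime d) =>
    (d.toUtc().difference(DateTime.utc(1970, 1, 1)).inDays + 3) ~/ 7;

/// Consecutive-week streak ending at the most recent session.
/// Hypothetical name; the real aggregator may expose this differently.
int currentWeeklyStreak(List<DateTime> sessions) {
  if (sessions.isEmpty) return 0;
  final weeks = sessions.map(weekIndex).toSet().toList()..sort();
  var streak = 1;
  for (var i = weeks.length - 1; i > 0; i--) {
    if (weeks[i] - weeks[i - 1] != 1) break;
    streak++;
  }
  return streak;
}

void main() {
  // Programmatic date construction, per the note above: 2025-01-06 is a Monday.
  DateTime monday(int weeksFromBase) =>
      DateTime.utc(2025, 1, 6).add(Duration(days: 7 * weeksFromBase));

  // The five acceptance-criteria scenarios, checked directly with no mocks.
  assert(currentWeeklyStreak([]) == 0);
  assert(currentWeeklyStreak([monday(0)]) == 1);
  assert(currentWeeklyStreak([monday(0), monday(1), monday(2)]) == 3);
  // Break after week 2, then two more consecutive weeks: streak restarts at 2.
  assert(currentWeeklyStreak([monday(0), monday(1), monday(3), monday(4)]) == 2);
  // Several sessions inside the same week count as one.
  assert(currentWeeklyStreak(
          [monday(0), monday(0).add(const Duration(days: 2))]) ==
      1);
}
```

Because the function takes plain DateTimes and returns an int, every streak scenario in the acceptance criteria is testable with zero mock setup.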

Testing Requirements

Pure unit tests using flutter_test. Create a MockSupabaseClient with mockito, or a hand-written fake implementing the Supabase query-builder interface. Use table-driven test patterns (parameterised inputs) for the threshold boundary tests to maximise coverage with minimal boilerplate. Generate the coverage report with `flutter test --coverage` and verify the lcov report shows 90%+ on lib/src/services/peer_mentor_stats_aggregator.dart.

Group tests by method under descriptive group() blocks.
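The table-driven boundary tests might look like the following sketch. `meetsAssignmentThreshold` is a hypothetical stand-in so the example runs standalone; the real tests would exercise the aggregator's own threshold logic:

```dart
import 'package:flutter_test/flutter_test.dart';

/// Stand-in for the aggregator's real threshold check (hypothetical).
bool meetsAssignmentThreshold(int count, int threshold) => count >= threshold;

void main() {
  group('assignment thresholds', () {
    // (assignments, threshold, expected) tuples covering both boundaries
    // from the acceptance criteria.
    const cases = [
      (2, 3, false),
      (3, 3, true),
      (14, 15, false),
      (15, 15, true),
    ];
    for (final (count, threshold, expected) in cases) {
      test(
        'given_${count}_assignments_when_${threshold}th_threshold_checked_'
        'then_${expected ? 'triggers' : 'does_not_trigger'}',
        () => expect(meetsAssignmentThreshold(count, threshold), expected),
      );
    }
  });
}
```

Generating tests in a loop keeps each boundary case visible in the report under its own given/when/then name while avoiding four near-identical test bodies.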

Component: Peer Mentor Stats Aggregator (service, medium complexity)
Epic Risks (3)
Risk 1: technical (high impact, medium probability)

peer-mentor-stats-aggregator must compute streaks and threshold counts across potentially hundreds of activity records per peer mentor. Naive queries (full table scans or N+1 patterns) will cause slow badge evaluation, especially when triggered on every activity save for all active peer mentors.

Mitigation & Contingency

Mitigation: Design aggregation queries using Supabase RPCs with window functions or materialised views from the start. Add database indexes on (peer_mentor_id, activity_date, activity_type) before writing any service code. Profile all aggregation queries against a dataset of 500+ activities during development.

Contingency: If query performance is insufficient at launch, implement incremental stat caching: maintain a peer_mentor_stats snapshot table updated on each activity insert via a database trigger, so the aggregator reads from pre-computed values rather than scanning raw activity rows.

Risk 2: technical (medium impact, low probability)

badge-award-service must be idempotent, but if two concurrent edge function invocations evaluate the same peer mentor simultaneously (e.g., from a rapid double-save), both could pass the uniqueness check before either commits, resulting in duplicate badge records.

Mitigation & Contingency

Mitigation: Rely on the database-level uniqueness constraint (peer_mentor_id, badge_definition_id) as the final guard. In the service layer, use an upsert with ON CONFLICT DO NOTHING and return the existing record. Add a Postgres advisory lock or serialisable transaction for the award sequence during the edge function integration epic.
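With the supabase-dart v2 client, the upsert guard could be sketched as below. Table and column names are illustrative, and the client API should be verified against the pinned package version; this is a sketch of the technique, not the service's actual code:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Idempotent badge award: the DB uniqueness constraint on
/// (peer_mentor_id, badge_definition_id) is the final guard, and
/// ignoreDuplicates maps to ON CONFLICT DO NOTHING.
Future<Map<String, dynamic>?> awardBadge(
  SupabaseClient client,
  String mentorId,
  String badgeDefinitionId,
) async {
  await client.from('mentor_badges').upsert(
    {
      'peer_mentor_id': mentorId,
      'badge_definition_id': badgeDefinitionId,
      'earned_at': DateTime.now().toUtc().toIso8601String(),
    },
    onConflict: 'peer_mentor_id,badge_definition_id',
    ignoreDuplicates: true, // ON CONFLICT DO NOTHING
  );
  // Read back whichever row exists (freshly inserted or pre-existing),
  // so both racing invocations return the same canonical record.
  return client
      .from('mentor_badges')
      .select()
      .eq('peer_mentor_id', mentorId)
      .eq('badge_definition_id', badgeDefinitionId)
      .maybeSingle();
}
```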

Contingency: If duplicate records are discovered in production, run a deduplication migration to remove extras (keeping earliest earned_at) and add a unique index if not already present. Alert engineering via Supabase database webhook on constraint violations.

Risk 3: scope (medium impact, medium probability)

The badge-configuration-service must validate org admin-supplied criteria JSON on save, but the full range of valid criteria types (threshold, streak, training-completion, tier-based) may not be fully enumerated during development, leading to either over-permissive or over-restrictive validation that frustrates admins.

Mitigation & Contingency

Mitigation: Define a versioned Dart sealed class hierarchy for CriteriaType before writing the validation logic. Review the hierarchy with product against all known badge types across NHF, Blindeforbundet, and HLF before implementation. Build the validator against the sealed class so new criteria types require an explicit code addition.
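One possible shape for that sealed hierarchy, sketched with illustrative class names and parameters; the exhaustive switch is what forces an explicit code change for each new criteria type:

```dart
/// Versioned, sealed criteria hierarchy. Names and fields are
/// illustrative; v1 covers the four criteria types known today.
sealed class CriteriaType {
  const CriteriaType();
}

class ThresholdCriteria extends CriteriaType {
  const ThresholdCriteria({required this.count});
  final int count; // e.g. the 3rd-assignment badge would use count: 3
}

class StreakCriteria extends CriteriaType {
  const StreakCriteria({required this.weeks});
  final int weeks;
}

class TrainingCompletionCriteria extends CriteriaType {
  const TrainingCompletionCriteria({required this.courseId});
  final String courseId;
}

class TierBasedCriteria extends CriteriaType {
  const TierBasedCriteria({required this.tier});
  final int tier;
}

/// Returns an error message, or null when the criteria are valid.
/// Because CriteriaType is sealed and there is no wildcard arm, adding
/// a new subclass will not compile until this switch handles it.
String? validateCriteria(CriteriaType c) => switch (c) {
      ThresholdCriteria(:final count) =>
        count > 0 ? null : 'count must be positive',
      StreakCriteria(:final weeks) =>
        weeks > 0 ? null : 'weeks must be positive',
      TrainingCompletionCriteria(:final courseId) =>
        courseId.isNotEmpty ? null : 'courseId is required',
      TierBasedCriteria(:final tier) =>
        tier >= 1 ? null : 'tier must be at least 1',
    };

void main() {
  assert(validateCriteria(const ThresholdCriteria(count: 3)) == null);
  assert(validateCriteria(const StreakCriteria(weeks: 0)) != null);
}
```

The admin-supplied criteria JSON would be parsed into one of these subclasses before validation, so malformed or unknown types are rejected at the parsing boundary rather than deep in badge evaluation.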

Contingency: If admins encounter validation rejections for legitimate criteria, expose a 'criteria_raw' escape hatch (JSON passthrough, admin-only) with a product warning, and schedule a sprint to formalise the new criteria type properly.