Priority: critical · Complexity: medium · Area: backend · Status: pending · Assignee: backend specialist · Execution tier: Tier 2

Acceptance Criteria

A pure function `evaluateHonorarThreshold(previousCount, newCount, threshold)` returns true only when newCount crosses the threshold from below (previousCount < threshold && newCount >= threshold), not on subsequent increments
Milestone thresholds 3 and 15 are defined as named constants (e.g. kHonorarThreshold3, kHonorarThreshold15) and are not hardcoded inline
Assignment count is scoped per mentor per organisation period (orgId + periodId combination), not globally per mentor
Fetching counts for a mentor in one org/period does not leak counts from a different org/period
When a mentor completes their 3rd assignment, exactly one threshold-crossed event is emitted; completing their 4th does not re-emit
When a mentor completes their 15th assignment, the higher-rate threshold is triggered; subsequent assignments beyond 15 do not re-trigger
All threshold logic lives in pure functions (no side effects, no Supabase calls) so they are independently unit-testable
PeerMentorStatsAggregator exposes `getAssignmentCount(mentorId, orgId, periodId)` returning an integer
PeerMentorStatsAggregator exposes `getCrossedThresholds(mentorId, orgId, periodId)` returning a list of crossed threshold integers (e.g. [3, 15])
All methods handle the case where no records exist for the mentor/org/period by returning 0 / empty list without throwing
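The aggregator contract above can be sketched as follows. The abstract class shape, `String` id types, and the in-memory stand-in (`InMemoryStatsAggregator`, its `record` method, and the composite-key scheme) are illustrative assumptions, not the real data-access implementation:

```dart
/// Interface shape implied by the acceptance criteria.
abstract class PeerMentorStatsAggregator {
  Future<int> getAssignmentCount(String mentorId, String orgId, String periodId);
  Future<List<int>> getCrossedThresholds(String mentorId, String orgId, String periodId);
}

/// In-memory stand-in demonstrating two criteria: counts are scoped per
/// mentor/org/period, and missing records yield 0 / empty list, not a throw.
class InMemoryStatsAggregator implements PeerMentorStatsAggregator {
  // Keyed by "mentorId|orgId|periodId" so counts never leak across scopes.
  final Map<String, int> _counts = {};

  void record(String mentorId, String orgId, String periodId) {
    final key = '$mentorId|$orgId|$periodId';
    _counts[key] = (_counts[key] ?? 0) + 1;
  }

  @override
  Future<int> getAssignmentCount(
          String mentorId, String orgId, String periodId) async =>
      _counts['$mentorId|$orgId|$periodId'] ?? 0;

  @override
  Future<List<int>> getCrossedThresholds(
      String mentorId, String orgId, String periodId) async {
    final count = await getAssignmentCount(mentorId, orgId, periodId);
    return [3, 15].where((t) => count >= t).toList();
  }
}
```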

Technical Requirements

Frameworks
Flutter
Riverpod
Dart
APIs
Supabase PostgREST — query earned_badges and activity records filtered by mentor_id, org_id, period_id
Data models
PeerMentorStats
AssignmentRecord
OrgPeriod
HonorarThresholdEvent
Performance requirements
Assignment count query must complete in < 300 ms under normal network conditions
Pure threshold evaluation functions must execute in < 1 ms (no I/O)
Avoid N+1 queries — fetch all relevant assignments for a mentor/period in a single Supabase call
Security requirements
Supabase RLS must ensure a mentor can only read their own assignment counts; coordinators can read counts for mentors within their org
Period scoping must be enforced server-side (RLS or parameterised query), not only client-side
Do not expose raw assignment content or personally identifiable fields in the stats response

Execution Context

Execution Tier
Tier 2 (518 tasks in this tier). Can start after Tier 1 completes.

Implementation Notes

Implement threshold evaluation as a top-level or static pure function, not a method on a stateful class, to maximise testability. Use a `const List<int> kHonorarThresholds = [3, 15]` so adding a future threshold requires only one change. The crossing logic is strictly `previousCount < threshold && newCount >= threshold`; this handles bulk increments (e.g. backdated data entry that jumps a count from 2 to 5) correctly, firing each newly crossed threshold exactly once.
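The notes above translate almost directly into code. A minimal sketch, in which the `newlyCrossedThresholds` helper name is an assumption:

```dart
const List<int> kHonorarThresholds = [3, 15];
const int kHonorarThreshold3 = 3;
const int kHonorarThreshold15 = 15;

/// True only when the count crosses [threshold] from below, so a bulk
/// increment (e.g. backdated entry taking 2 -> 5) still fires exactly once,
/// and subsequent increments past the threshold never re-fire.
bool evaluateHonorarThreshold(int previousCount, int newCount, int threshold) =>
    previousCount < threshold && newCount >= threshold;

/// All thresholds newly crossed by this increment, e.g. 0 -> 16 yields [3, 15].
List<int> newlyCrossedThresholds(int previousCount, int newCount) =>
    kHonorarThresholds
        .where((t) => evaluateHonorarThreshold(previousCount, newCount, t))
        .toList();
```

Because the functions are top-level and side-effect free, they can be unit-tested without any Supabase or Riverpod scaffolding.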

For the Supabase query, request the count server-side (Supabase's exact-count option, with equality filters on mentor_id, org_id and period_id) rather than fetching all records and counting in Dart. Cache the result in the Riverpod provider layer (task-006), not here; this class is responsible only for data access and pure logic. Blindeforbundet's honorar rules distinguish kontorhonorar (3rd assignment) from a higher rate (15th); model these as separate `HonorarRate` enum values so the caller can apply the correct rate without re-implementing threshold logic.
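A sketch of the rate mapping described above. The enum member names (`none`, `kontorhonorar`, `higherRate`) are assumptions; only the two-tier 3/15 structure comes from the spec:

```dart
/// Hypothetical enum distinguishing Blindeforbundet's honorar rates.
enum HonorarRate { none, kontorhonorar, higherRate }

/// Maps an assignment count to the applicable rate so callers never
/// re-implement the threshold comparison themselves.
HonorarRate rateForCount(int count) {
  if (count >= 15) return HonorarRate.higherRate; // 15th assignment and beyond
  if (count >= 3) return HonorarRate.kontorhonorar; // 3rd through 14th
  return HonorarRate.none;
}
```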

Testing Requirements

Unit tests (flutter_test) are mandatory for all pure threshold functions: test every boundary (count=2→3, count=14→15), values just below threshold (count=1→2, count=13→14), values beyond threshold that must not re-trigger (count=15→16), and zero/null input. Write integration tests that mock Supabase responses and verify PeerMentorStatsAggregator returns correct counts and threshold lists. Verify that org/period isolation holds: two mentors in different orgs with identical assignment counts are evaluated independently. Target 100% branch coverage on the pure evaluation functions.
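A sketch of the boundary tests described above, written against `package:test` (flutter_test re-exports the same API). The function body is inlined here so the sketch is self-contained; in the real suite it would be imported from the aggregator module:

```dart
import 'package:test/test.dart';

// Inlined for the sketch; normally imported from the stats module.
bool evaluateHonorarThreshold(int previousCount, int newCount, int threshold) =>
    previousCount < threshold && newCount >= threshold;

void main() {
  test('fires exactly at the 3 boundary, not after', () {
    expect(evaluateHonorarThreshold(2, 3, 3), isTrue);
    expect(evaluateHonorarThreshold(3, 4, 3), isFalse);
  });

  test('fires exactly at the 15 boundary, not after', () {
    expect(evaluateHonorarThreshold(14, 15, 15), isTrue);
    expect(evaluateHonorarThreshold(15, 16, 15), isFalse);
  });

  test('just-below-threshold increments never fire', () {
    expect(evaluateHonorarThreshold(1, 2, 3), isFalse);
    expect(evaluateHonorarThreshold(13, 14, 15), isFalse);
  });

  test('zero input does not fire', () {
    expect(evaluateHonorarThreshold(0, 0, 3), isFalse);
  });
}
```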

Component
Peer Mentor Stats Aggregator
Type: service · Complexity: medium
Epic Risks (3)
Risk 1: high impact, medium probability, technical

peer-mentor-stats-aggregator must compute streaks and threshold counts across potentially hundreds of activity records per peer mentor. Naive queries (full table scans or N+1 patterns) will cause slow badge evaluation, especially when triggered on every activity save for all active peer mentors.

Mitigation & Contingency

Mitigation: Design aggregation queries using Supabase RPCs with window functions or materialised views from the start. Add database indexes on (peer_mentor_id, activity_date, activity_type) before writing any service code. Profile all aggregation queries against a dataset of 500+ activities during development.

Contingency: If query performance is insufficient at launch, implement incremental stat caching: maintain a peer_mentor_stats snapshot table updated on each activity insert via a database trigger, so the aggregator reads from pre-computed values rather than scanning raw activity rows.

Risk 2: medium impact, low probability, technical

badge-award-service must be idempotent, but if two concurrent edge function invocations evaluate the same peer mentor simultaneously (e.g., from a rapid double-save), both could pass the uniqueness check before either commits, resulting in duplicate badge records.

Mitigation & Contingency

Mitigation: Rely on the database-level uniqueness constraint (peer_mentor_id, badge_definition_id) as the final guard. In the service layer, use an upsert with ON CONFLICT DO NOTHING and return the existing record. Add a Postgres advisory lock or serialisable transaction for the award sequence during the edge function integration epic.
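The upsert guard described above might look like the following, assuming the supabase_flutter client; the table and column names (`earned_badges`, `peer_mentor_id`, `badge_definition_id`, `earned_at`) come from this document, but the function name and overall shape are an illustrative sketch, not the real service:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Idempotent badge award: relies on the unique
/// (peer_mentor_id, badge_definition_id) constraint as the final guard.
Future<void> awardBadge(
    SupabaseClient supabase, String mentorId, String badgeId) async {
  await supabase.from('earned_badges').upsert(
    {
      'peer_mentor_id': mentorId,
      'badge_definition_id': badgeId,
      'earned_at': DateTime.now().toUtc().toIso8601String(),
    },
    onConflict: 'peer_mentor_id,badge_definition_id',
    // Equivalent to ON CONFLICT DO NOTHING: a concurrent duplicate insert
    // is silently ignored instead of raising a constraint violation.
    ignoreDuplicates: true,
  );
}
```

Note that this fragment needs a live Supabase backend to run, so it is shown for shape only; the advisory-lock or serialisable-transaction hardening mentioned above would wrap this call during the edge function integration epic.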

Contingency: If duplicate records are discovered in production, run a deduplication migration to remove extras (keeping earliest earned_at) and add a unique index if not already present. Alert engineering via Supabase database webhook on constraint violations.

Risk 3: medium impact, medium probability, scope

The badge-configuration-service must validate org admin-supplied criteria JSON on save, but the full range of valid criteria types (threshold, streak, training-completion, tier-based) may not be fully enumerated during development, leading to either over-permissive or over-restrictive validation that frustrates admins.

Mitigation & Contingency

Mitigation: Define a versioned Dart sealed class hierarchy for CriteriaType before writing the validation logic. Review the hierarchy with product against all known badge types across NHF, Blindeforbundet, and HLF before implementation. Build the validator against the sealed class so new criteria types require an explicit code addition.
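The sealed hierarchy described above could be sketched as follows (Dart 3 sealed classes). The subtype field names are assumptions illustrating the four criteria kinds named in the risk; the exhaustive `switch` is the point, since a new criteria type becomes a compile error until the validator handles it:

```dart
/// Sealed root: the compiler knows every subtype, so switches over
/// CriteriaType must be exhaustive.
sealed class CriteriaType {
  const CriteriaType();
}

class ThresholdCriteria extends CriteriaType {
  final int count;
  const ThresholdCriteria(this.count);
}

class StreakCriteria extends CriteriaType {
  final int weeks;
  const StreakCriteria(this.weeks);
}

class TrainingCompletionCriteria extends CriteriaType {
  final String trainingId;
  const TrainingCompletionCriteria(this.trainingId);
}

class TierCriteria extends CriteriaType {
  final int tier;
  const TierCriteria(this.tier);
}

/// Stand-in for the validator: adding a fifth subtype breaks compilation
/// here until it is handled explicitly.
String describe(CriteriaType c) => switch (c) {
      ThresholdCriteria(:final count) => 'threshold:$count',
      StreakCriteria(:final weeks) => 'streak:$weeks',
      TrainingCompletionCriteria(:final trainingId) => 'training:$trainingId',
      TierCriteria(:final tier) => 'tier:$tier',
    };
```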

Contingency: If admins encounter validation rejections for legitimate criteria, expose a 'criteria_raw' escape hatch (JSON passthrough, admin-only) with a product warning, and schedule a sprint to formalise the new criteria type properly.