Expose BadgeAwardService as Riverpod provider
epic-achievement-badges-services-task-010 — Wrap BadgeAwardService in a Riverpod Provider and expose awardBadge and getAwardedBadgesForMentor(mentorId) methods. Integrate with badge-repository from foundation epic. Ensure awarded badge state is reflected immediately in UI-facing providers by invalidating relevant caches post-award.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 2 - 518 tasks
Can start after Tier 1 completes
Implementation Notes
Use `Provider<BadgeAwardService>` for the wiring — the task is to expose an already-constructed service, so no notifier or async provider is needed at this layer.
Keep this provider file small — its sole responsibility is wiring BadgeAwardService to Riverpod and coordinating post-award invalidation. All business logic stays in BadgeAwardService and the repository.
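A minimal sketch of the wiring described above. Provider and type names (`badgeRepositoryProvider`, `peerMentorStatsProvider`, the `BadgeAwardService` constructor shape) are assumptions taken from this ticket, not final APIs:

```dart
// Sketch only. BadgeAwardService, badgeRepositoryProvider and
// peerMentorStatsProvider are assumed to exist in the foundation epic;
// their project imports are elided here.
import 'package:flutter_riverpod/flutter_riverpod.dart';

final badgeAwardServiceProvider = Provider<BadgeAwardService>((ref) {
  // Sole responsibility: wire the service to its repository dependency.
  // All business logic stays inside BadgeAwardService.
  return BadgeAwardService(repository: ref.watch(badgeRepositoryProvider));
});

/// Thin coordinator exposed to the UI: delegates the award, then
/// invalidates UI-facing caches only after a successful award.
final awardBadgeProvider = Provider((ref) {
  return (String mentorId, String badgeDefinitionId) async {
    final service = ref.read(badgeAwardServiceProvider);
    // A BadgeAwardException thrown here propagates to the caller
    // and leaves peerMentorStatsProvider untouched.
    final earned = await service.awardBadge(mentorId, badgeDefinitionId);
    ref.invalidate(peerMentorStatsProvider);
    return earned;
  };
});
```

Returning a callable from a plain `Provider` keeps the invalidation logic in one place without introducing a notifier for what is otherwise stateless delegation.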
Testing Requirements
Widget/unit tests using `ProviderContainer`:
- Verify `badgeAwardServiceProvider` builds without error when given mock dependencies.
- Verify `awardBadge` delegates to the mock BadgeAwardService and returns an `EarnedBadge`.
- Verify that after a successful `awardBadge`, `peerMentorStatsProvider` is invalidated (spy on `ref.invalidate` or observe a rebuild).
- Verify that on `BadgeAwardException`, no invalidation occurs.
- Verify `getAwardedBadgesForMentor` calls through to `badgeRepositoryProvider` and returns the correct list.
Override every dependency with `overrideWithValue` when constructing the `ProviderContainer` (note: `overrideWithValue` is defined on the provider, not on `ProviderContainer` itself).
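A sketch of one such test. `MockBadgeAwardService` (mocktail-style), `awardBadgeProvider`, and the argument shape of `awardBadge` are assumptions for illustration:

```dart
// Sketch of a ProviderContainer unit test; mock and provider names are
// assumptions, and project imports are elided.
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

void main() {
  test('awardBadge delegates to the service', () async {
    final mock = MockBadgeAwardService();
    // Override the real service so the test exercises only the wiring.
    final container = ProviderContainer(overrides: [
      badgeAwardServiceProvider.overrideWithValue(mock),
    ]);
    addTearDown(container.dispose);

    final earned =
        await container.read(awardBadgeProvider)('mentor-1', 'badge-1');

    expect(earned, isA<EarnedBadge>());
    verify(() => mock.awardBadge('mentor-1', 'badge-1')).called(1);
  });
}
```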
peer-mentor-stats-aggregator must compute streaks and threshold counts across potentially hundreds of activity records per peer mentor. Naive queries (full table scans or N+1 patterns) will cause slow badge evaluation, especially when triggered on every activity save for all active peer mentors.
Mitigation & Contingency
Mitigation: Design aggregation queries using Supabase RPCs with window functions or materialised views from the start. Add database indexes on (peer_mentor_id, activity_date, activity_type) before writing any service code. Profile all aggregation queries against a dataset of 500+ activities during development.
Contingency: If query performance is insufficient at launch, implement incremental stat caching: maintain a peer_mentor_stats snapshot table updated on each activity insert via a database trigger, so the aggregator reads from pre-computed values rather than scanning raw activity rows.
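The index and the contingency snapshot table can be sketched together. Table and column names beyond the `(peer_mentor_id, activity_date, activity_type)` index named above are assumptions:

```sql
-- Mitigation: index the columns the aggregator filters and windows over.
CREATE INDEX IF NOT EXISTS idx_activities_mentor_date_type
  ON peer_mentor_activities (peer_mentor_id, activity_date, activity_type);

-- Contingency: keep a pre-computed snapshot current via a trigger, so the
-- aggregator reads one row instead of scanning raw activities.
CREATE OR REPLACE FUNCTION refresh_peer_mentor_stats() RETURNS trigger AS $$
BEGIN
  INSERT INTO peer_mentor_stats (peer_mentor_id, activity_count, updated_at)
  VALUES (NEW.peer_mentor_id, 1, now())
  ON CONFLICT (peer_mentor_id)
  DO UPDATE SET activity_count = peer_mentor_stats.activity_count + 1,
                updated_at = now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER trg_refresh_stats
  AFTER INSERT ON peer_mentor_activities
  FOR EACH ROW EXECUTE FUNCTION refresh_peer_mentor_stats();
```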
badge-award-service must be idempotent, but if two concurrent edge function invocations evaluate the same peer mentor simultaneously (e.g., from a rapid double-save), both could pass the uniqueness check before either commits, resulting in duplicate badge records.
Mitigation & Contingency
Mitigation: Rely on the database-level uniqueness constraint (peer_mentor_id, badge_definition_id) as the final guard. In the service layer, use an upsert with ON CONFLICT DO NOTHING and return the existing record. Add a Postgres advisory lock or serialisable transaction for the award sequence during the edge function integration epic.
Contingency: If duplicate records are discovered in production, run a deduplication migration to remove extras (keeping earliest earned_at) and add a unique index if not already present. Alert engineering via Supabase database webhook on constraint violations.
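The mitigation above can be sketched as a single statement: the insert becomes a no-op under concurrency, and the follow-up select returns the surviving row either way. Table and column names beyond the `(peer_mentor_id, badge_definition_id)` constraint are assumptions:

```sql
-- Final guard: the database-level uniqueness constraint.
ALTER TABLE earned_badges
  ADD CONSTRAINT uq_mentor_badge UNIQUE (peer_mentor_id, badge_definition_id);

-- Idempotent award: if a concurrent invocation won the race, the INSERT
-- does nothing and the second branch returns the existing record.
WITH ins AS (
  INSERT INTO earned_badges (peer_mentor_id, badge_definition_id, earned_at)
  VALUES ($1, $2, now())
  ON CONFLICT (peer_mentor_id, badge_definition_id) DO NOTHING
  RETURNING *
)
SELECT * FROM ins
UNION ALL
SELECT * FROM earned_badges
 WHERE peer_mentor_id = $1 AND badge_definition_id = $2
LIMIT 1;
```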
The badge-configuration-service must validate org admin-supplied criteria JSON on save, but the full range of valid criteria types (threshold, streak, training-completion, tier-based) may not be fully enumerated during development, leading to either over-permissive or over-restrictive validation that frustrates admins.
Mitigation & Contingency
Mitigation: Define a versioned Dart sealed class hierarchy for CriteriaType before writing the validation logic. Review the hierarchy with product against all known badge types across NHF, Blindeforbundet, and HLF before implementation. Build the validator against the sealed class so new criteria types require an explicit code addition.
Contingency: If admins encounter validation rejections for legitimate criteria, expose a 'criteria_raw' escape hatch (JSON passthrough, admin-only) with a product warning, and schedule a sprint to formalise the new criteria type properly.
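A sketch of the versioned sealed hierarchy for the four criteria types named above (field names are assumptions). The exhaustive `switch` is the point: adding a new criteria type fails compilation until the validator handles it explicitly:

```dart
// Sketch of a versioned sealed hierarchy for badge criteria (Dart 3).
sealed class CriteriaType {
  const CriteriaType({required this.version});
  final int version; // bump when a type's JSON schema changes
}

final class ThresholdCriteria extends CriteriaType {
  const ThresholdCriteria(
      {required super.version, required this.activityType, required this.count});
  final String activityType;
  final int count;
}

final class StreakCriteria extends CriteriaType {
  const StreakCriteria({required super.version, required this.consecutiveDays});
  final int consecutiveDays;
}

final class TrainingCompletionCriteria extends CriteriaType {
  const TrainingCompletionCriteria(
      {required super.version, required this.trainingId});
  final String trainingId;
}

final class TierCriteria extends CriteriaType {
  const TierCriteria({required super.version, required this.tier});
  final String tier;
}

/// Exhaustive over the sealed hierarchy: a new subtype is a compile-time
/// error here until validation for it is written.
bool isValid(CriteriaType c) => switch (c) {
      ThresholdCriteria(:final count) => count > 0,
      StreakCriteria(:final consecutiveDays) => consecutiveDays > 0,
      TrainingCompletionCriteria(:final trainingId) => trainingId.isNotEmpty,
      TierCriteria(:final tier) => tier.isNotEmpty,
    };
```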