Integration test full badge award pipeline
epic-achievement-badges-services-task-017 — Write integration tests that exercise the full pipeline: PeerMentorStatsAggregator computes stats → RecognitionTierService evaluates eligibility → BadgeAwardService writes earned badge atomically. Use a seeded test Supabase schema with known activity records. Verify honorar milestone thresholds trigger correct badge awards and that duplicate pipeline runs do not create duplicate records.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 7 - 84 tasks
Can start after Tier 6 completes
Implementation Notes
Structure tests in three groups: (A) unit-style pipeline stage isolation (mock adjacent services), (B) pairwise integration (Aggregator + TierService, TierService + AwardService), (C) full end-to-end pipeline. For idempotency, use a Supabase upsert backed by a unique constraint on (peer_mentor_id, badge_definition_id), and add an explicit test for the constraint itself. Seed data should be expressed as a SQL migration file committed to the repo under test/fixtures/. Use a testContainer helper that provisions a ProviderContainer with a real Supabase client pointing at the test URL.
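A minimal seed fixture might look like the following. Table and column names (peer_mentor_activities, activity_type, activity_date) are assumptions; adjust to the actual schema. One mentor sits exactly at the 3rd-activity threshold and one at the 15th, so both Blindeforbundet boundaries are covered:

```sql
-- test/fixtures/001_seed_peer_mentor_activities.sql (sketch; names assumed)
-- Mentor 'a1' has exactly 3 activities, mentor 'b2' exactly 15.
INSERT INTO peer_mentor_activities (peer_mentor_id, activity_type, activity_date)
SELECT 'a1', 'session', date '2024-01-01' + n
FROM generate_series(0, 2) AS n;

INSERT INTO peer_mentor_activities (peer_mentor_id, activity_type, activity_date)
SELECT 'b2', 'session', date '2024-01-01' + n
FROM generate_series(0, 14) AS n;
```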
Avoid flutter_riverpod ProviderScope in integration tests — instantiate the container manually for determinism. The honorar counting logic for Blindeforbundet (3rd = office honorar, 15th = higher rate) must be captured as named constants in the production service and referenced by name in test assertions to avoid magic numbers.
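The named-constants rule could be captured in the production service roughly like this (class and file names are illustrative, not prescribed):

```dart
// lib/services/recognition_tier_service.dart (sketch; names are assumptions)
abstract final class HonorarThresholds {
  /// Blindeforbundet: the 3rd completed activity triggers the office honorar.
  static const officeHonorarActivityCount = 3;

  /// Blindeforbundet: the 15th completed activity triggers the higher rate.
  static const higherRateActivityCount = 15;
}
```

Test assertions then reference the constant by name, e.g. `expect(stats.activityCount, HonorarThresholds.officeHonorarActivityCount);`, so a threshold change in product only touches one place.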
Testing Requirements
Integration tests using flutter_test targeting a dedicated Supabase test environment. Seed data must be version-controlled as SQL fixture files. Cover: (1) stats aggregation correctness at each honorar threshold (3, 15 for Blindeforbundet), (2) tier eligibility evaluation returning correct tier at boundary values, (3) BadgeAwardService atomic write on success, (4) idempotency — pipeline re-run produces no duplicate badge records, (5) rollback on partial failure. Use setUp/tearDown hooks to insert and delete fixture rows.
No mocking of Supabase — all calls must hit the real test instance.
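A skeleton for the idempotency case, with a manually instantiated container and setUp/tearDown fixture management, might look like this. The helpers (makeTestContainer, applyFixture, deleteFixtureRows) and the badgeAwardServiceProvider are hypothetical names standing in for the real test utilities:

```dart
// test/integration/badge_pipeline_test.dart (sketch; helper names assumed)
import 'package:flutter_test/flutter_test.dart';
import 'package:riverpod/riverpod.dart';

void main() {
  late ProviderContainer container;

  setUp(() async {
    container = makeTestContainer(); // real Supabase client, test URL
    await applyFixture(container, 'test/fixtures/001_seed_peer_mentor_activities.sql');
  });

  tearDown(() async {
    await deleteFixtureRows(container);
    container.dispose();
  });

  test('re-running the pipeline creates no duplicate badge records', () async {
    final award = container.read(badgeAwardServiceProvider);
    await award.runPipelineFor('a1');
    await award.runPipelineFor('a1'); // second run must be a no-op
    final badges = await award.badgesFor('a1');
    expect(badges.map((b) => b.definitionId).toSet().length, badges.length);
  });
}
```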
peer-mentor-stats-aggregator must compute streaks and threshold counts across potentially hundreds of activity records per peer mentor. Naive queries (full table scans or N+1 patterns) will cause slow badge evaluation, especially when triggered on every activity save for all active peer mentors.
Mitigation & Contingency
Mitigation: Design aggregation queries using Supabase RPCs with window functions or materialised views from the start. Add database indexes on (peer_mentor_id, activity_date, activity_type) before writing any service code. Profile all aggregation queries against a dataset of 500+ activities during development.
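The index and an RPC built on window functions could be sketched as below (table, column, and function names are assumptions). The streak uses the standard gaps-and-islands technique: consecutive dates share the same value of date minus row number:

```sql
-- Sketch; schema names assumed.
CREATE INDEX IF NOT EXISTS idx_pm_activity
  ON peer_mentor_activities (peer_mentor_id, activity_date, activity_type);

CREATE OR REPLACE FUNCTION peer_mentor_stats(mentor uuid)
RETURNS TABLE (activity_count bigint, longest_streak bigint)
LANGUAGE sql STABLE AS $$
  WITH days AS (
    SELECT DISTINCT activity_date::date AS d
    FROM peer_mentor_activities
    WHERE peer_mentor_id = mentor
  ),
  grp AS (
    -- consecutive days collapse to the same group key g
    SELECT d, d - (row_number() OVER (ORDER BY d))::int AS g FROM days
  )
  SELECT (SELECT count(*) FROM peer_mentor_activities
          WHERE peer_mentor_id = mentor),
         COALESCE(max(cnt), 0)
  FROM (SELECT count(*) AS cnt FROM grp GROUP BY g) runs;
$$;
```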
Contingency: If query performance is insufficient at launch, implement incremental stat caching: maintain a peer_mentor_stats snapshot table updated on each activity insert via a database trigger, so the aggregator reads from pre-computed values rather than scanning raw activity rows.
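The contingency could take roughly this shape (snapshot table, trigger, and names are all assumptions, not an implemented design):

```sql
-- Sketch of the incremental snapshot contingency; names assumed.
CREATE TABLE IF NOT EXISTS peer_mentor_stats (
  peer_mentor_id uuid PRIMARY KEY,
  activity_count bigint NOT NULL DEFAULT 0,
  updated_at timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION bump_peer_mentor_stats() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  INSERT INTO peer_mentor_stats (peer_mentor_id, activity_count)
  VALUES (NEW.peer_mentor_id, 1)
  ON CONFLICT (peer_mentor_id)
  DO UPDATE SET activity_count = peer_mentor_stats.activity_count + 1,
                updated_at = now();
  RETURN NEW;
END $$;

CREATE TRIGGER trg_activity_stats
  AFTER INSERT ON peer_mentor_activities
  FOR EACH ROW EXECUTE FUNCTION bump_peer_mentor_stats();
```

The aggregator then reads peer_mentor_stats instead of scanning raw activity rows.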
badge-award-service must be idempotent. However, if two concurrent edge function invocations evaluate the same peer mentor simultaneously (e.g., after a rapid double-save), both could pass the uniqueness check before either commits, producing duplicate badge records.
Mitigation & Contingency
Mitigation: Rely on the database-level uniqueness constraint (peer_mentor_id, badge_definition_id) as the final guard. In the service layer, use an upsert with ON CONFLICT DO NOTHING and return the existing record. Add a Postgres advisory lock or serialisable transaction for the award sequence during the edge function integration epic.
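Combined, the constraint, upsert, and advisory lock might look like this ($1 = peer_mentor_id, $2 = badge_definition_id; table and constraint names are assumptions):

```sql
-- Sketch; names assumed. One-time migration:
ALTER TABLE earned_badges
  ADD CONSTRAINT uq_mentor_badge UNIQUE (peer_mentor_id, badge_definition_id);

-- Inside the award transaction: serialise concurrent evaluations of the
-- same mentor, then let the constraint absorb any remaining race.
SELECT pg_advisory_xact_lock(hashtext($1::text));

INSERT INTO earned_badges (peer_mentor_id, badge_definition_id, earned_at)
VALUES ($1, $2, now())
ON CONFLICT (peer_mentor_id, badge_definition_id) DO NOTHING
RETURNING *;
```

When the insert returns no row, the service fetches and returns the existing record.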
Contingency: If duplicate records are discovered in production, run a deduplication migration to remove extras (keeping earliest earned_at) and add a unique index if not already present. Alert engineering via Supabase database webhook on constraint violations.
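A deduplication migration of that shape could be sketched as follows (names assumed; keeps the earliest earned_at per pair, with id as a tiebreaker):

```sql
-- Sketch; schema names assumed.
DELETE FROM earned_badges e
USING (
  SELECT id, row_number() OVER (
           PARTITION BY peer_mentor_id, badge_definition_id
           ORDER BY earned_at, id
         ) AS rn
  FROM earned_badges
) d
WHERE e.id = d.id AND d.rn > 1;

CREATE UNIQUE INDEX IF NOT EXISTS uq_mentor_badge
  ON earned_badges (peer_mentor_id, badge_definition_id);
```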
The badge-configuration-service must validate org-admin-supplied criteria JSON on save, but the full range of valid criteria types (threshold, streak, training-completion, tier-based) may not be fully enumerated during development. This risks validation that is either over-permissive or over-restrictive, frustrating admins.
Mitigation & Contingency
Mitigation: Define a versioned Dart sealed class hierarchy for CriteriaType before writing the validation logic. Review the hierarchy with product against all known badge types across NHF, Blindeforbundet, and HLF before implementation. Build the validator against the sealed class so new criteria types require an explicit code addition.
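One way the sealed hierarchy and exhaustive validator could be shaped (field names and validation rules are illustrative assumptions):

```dart
// Sketch; class and field names are assumptions, not the final model.
sealed class CriteriaType {
  const CriteriaType({required this.version});
  final int version;
}

final class ThresholdCriteria extends CriteriaType {
  const ThresholdCriteria(
      {required super.version, required this.count, required this.activityType});
  final int count;
  final String activityType;
}

final class StreakCriteria extends CriteriaType {
  const StreakCriteria({required super.version, required this.days});
  final int days;
}

final class TrainingCompletionCriteria extends CriteriaType {
  const TrainingCompletionCriteria(
      {required super.version, required this.moduleId});
  final String moduleId;
}

final class TierBasedCriteria extends CriteriaType {
  const TierBasedCriteria({required super.version, required this.tier});
  final String tier;
}

// Exhaustive switch over the sealed class: adding a new criteria type
// is a compile error until the validator handles it explicitly.
bool validate(CriteriaType c) => switch (c) {
      ThresholdCriteria(:final count) => count > 0,
      StreakCriteria(:final days) => days > 0,
      TrainingCompletionCriteria(:final moduleId) => moduleId.isNotEmpty,
      TierBasedCriteria(:final tier) => tier.isNotEmpty,
    };
```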
Contingency: If admins encounter validation rejections for legitimate criteria, expose a 'criteria_raw' escape hatch (JSON passthrough, admin-only) with a product warning, and schedule a sprint to formalise the new criteria type properly.