Priority: high | Complexity: medium | Area: backend | Status: pending | Assignee: backend specialist | Execution Tier: Tier 5

Acceptance Criteria

assignTier writes a TierAssignment record to Supabase with award_period_start, award_period_end, assigned_at, and tier fields correctly populated
award_period_start and award_period_end are derived from the org's current reporting period, not hardcoded values
Calling assignTier with the same mentorId, tier, and orgId when an active assignment already exists returns the existing record without writing a duplicate row
Assigning a different tier to a mentor who has an active assignment for the same period replaces the old assignment (marks old as superseded, inserts new)
revokeTier sets the active TierAssignment's is_active flag to false and records revoked_at timestamp
revokeTier on a mentor with no active assignment is a no-op and does not throw
Historical TierAssignment records are never deleted — soft delete only
Both operations are wrapped in Supabase transactions to prevent partial writes
assignTier returns the persisted TierAssignment object including server-generated id and timestamps
All timestamps use UTC

Technical Requirements

Frameworks
Flutter
Riverpod

APIs
Supabase REST API
Supabase RPC (for atomic upsert)

Data Models
TierAssignment
OrgReportingPeriod
TierLevel

Performance Requirements
Each operation must complete within 800ms including the Supabase round-trip
Idempotency check must use a database unique constraint (not an application-layer SELECT then INSERT) to prevent race conditions

Security Requirements
Only coordinator or admin roles may call assignTier/revokeTier — enforce via RLS policy on the tier_assignments table
orgId must match the authenticated user's organisation
An audit log entry must be written for every assignment and revocation
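The unique-constraint and RLS requirements above could be sketched in Postgres as follows. The table and column names follow the fields in this spec; the partial-index approach and the JWT claim names (role, org_id) are assumptions, not an existing schema:

```sql
-- One active row per (mentor, org, period, tier): this is the database-level
-- idempotency guard the performance requirements call for.
create unique index if not exists uq_active_tier_assignment
  on tier_assignments (mentor_id, org_id, award_period_start, tier)
  where is_active;

-- Writes restricted to coordinators/admins of the caller's own organisation.
-- Assumes custom JWT claims 'role' and 'org_id' are present on the token.
create policy tier_assignments_insert on tier_assignments
  for insert
  with check (
    (auth.jwt() ->> 'role') in ('coordinator', 'admin')
    and org_id = (auth.jwt() ->> 'org_id')::uuid
  );
```

A partial index (rather than a plain unique constraint) is used here so that superseded or revoked historical rows never collide with a new active assignment.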

Execution Context

Execution Tier
Tier 5

Tier 5 - 253 tasks

Can start after Tier 4 completes

Implementation Notes

Use a Supabase RPC function for the assignment upsert to guarantee atomicity; avoid SELECT + INSERT in Dart code, which is subject to TOCTOU races. The RPC should implement INSERT ... ON CONFLICT (mentor_id, org_id, award_period_start, tier) DO NOTHING and return the existing or newly created row; note that RETURNING produces no row when the conflict fires, so the function needs a follow-up SELECT to fetch the existing record. The same function should mark any active assignment for a different tier in the same period as superseded before inserting, so tier changes replace rather than accumulate. Define OrgReportingPeriod as a value class with start/end computed from org config (e.g., calendar year Jan 1 – Dec 31, or fiscal year).
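A rough shape for that RPC, as a sketch rather than a final implementation. Table and column names come from this spec; the function name, parameter list, and the superseded_at column are assumptions:

```sql
create or replace function assign_tier(
  p_mentor_id uuid,
  p_org_id uuid,
  p_tier text,
  p_period_start date,
  p_period_end date
) returns tier_assignments
language plpgsql
as $$
declare
  result tier_assignments;
begin
  -- Supersede any active assignment for a *different* tier in this period,
  -- so a tier change replaces rather than duplicates.
  update tier_assignments
     set is_active = false, superseded_at = now()
   where mentor_id = p_mentor_id
     and org_id = p_org_id
     and award_period_start = p_period_start
     and tier <> p_tier
     and is_active;

  -- Idempotent insert: the unique constraint absorbs duplicate calls.
  insert into tier_assignments
      (mentor_id, org_id, tier, award_period_start, award_period_end,
       assigned_at, is_active)
  values (p_mentor_id, p_org_id, p_tier, p_period_start, p_period_end,
          now(), true)
  on conflict (mentor_id, org_id, award_period_start, tier) where is_active
  do nothing
  returning * into result;

  -- ON CONFLICT DO NOTHING yields no row via RETURNING, so fetch the
  -- already-persisted record to satisfy the "returns existing" criterion.
  if result.id is null then
    select * into result
      from tier_assignments
     where mentor_id = p_mentor_id
       and org_id = p_org_id
       and award_period_start = p_period_start
       and tier = p_tier
       and is_active;
  end if;

  return result;
end;
$$;
```

Because plpgsql functions run in a single transaction, the supersede and insert steps commit or roll back together, covering the "no partial writes" criterion without explicit transaction handling in Dart.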

Store reporting period boundaries as ISO 8601 date strings in UTC. For revocation, use a Supabase UPDATE with a WHERE is_active=true filter — if 0 rows updated, treat as no-op. Soft delete pattern: never DELETE from tier_assignments; always preserve history for audit and gamification queries.
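The revocation path described above is a single statement; parameter placeholders and the column names are taken from this spec:

```sql
-- Revoke: flip the active flag and stamp revoked_at; never DELETE.
-- Zero rows updated means there was no active assignment: a no-op.
update tier_assignments
   set is_active = false,
       revoked_at = now()
 where mentor_id = :mentor_id
   and org_id = :org_id
   and is_active;
```

From Dart, an affected-row count of zero is simply ignored, which satisfies the no-op acceptance criterion without a prior existence check.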

Testing Requirements

Unit tests (flutter_test) with mocked Supabase client covering: (1) successful assignment returns correct TierAssignment with populated period dates, (2) duplicate assignment is a no-op and returns existing record, (3) assigning a new tier supersedes the previous active assignment, (4) successful revocation sets is_active=false, (5) revocation on mentor with no active assignment completes without error, (6) transaction rollback is triggered on Supabase error during assignment. Integration test: verify unique constraint prevents duplicate active assignments at the database level.
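Case (2) above, sketched with flutter_test and a hand-rolled fake in place of the Supabase client. All type and method names here are placeholders for illustration, not the real API:

```dart
import 'package:flutter_test/flutter_test.dart';

// Minimal stand-in for the persisted model; fields are assumptions.
class TierAssignment {
  final String id;
  final String tier;
  final bool isActive;
  TierAssignment(this.id, this.tier, this.isActive);
}

// Fake data source simulating the idempotent RPC: repeated calls with the
// same arguments return the same persisted row rather than inserting again.
class FakeTierClient {
  final _rows = <String, TierAssignment>{};
  var _nextId = 0;

  Future<TierAssignment> assignTierRpc(String mentorId, String tier) async {
    final key = '$mentorId/$tier';
    return _rows.putIfAbsent(
        key, () => TierAssignment('id-${_nextId++}', tier, true));
  }
}

void main() {
  test('duplicate assignment is a no-op and returns the existing record',
      () async {
    final client = FakeTierClient();
    final first = await client.assignTierRpc('mentor-1', 'gold');
    final second = await client.assignTierRpc('mentor-1', 'gold');
    expect(second.id, first.id); // same persisted row, no duplicate
  });
}
```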

Component
Recognition Tier Service (service, medium complexity)
Epic Risks (3)
Risk 1: technical (high impact, medium probability)

peer-mentor-stats-aggregator must compute streaks and threshold counts across potentially hundreds of activity records per peer mentor. Naive queries (full table scans or N+1 patterns) will cause slow badge evaluation, especially when triggered on every activity save for all active peer mentors.

Mitigation & Contingency

Mitigation: Design aggregation queries using Supabase RPCs with window functions or materialised views from the start. Add database indexes on (peer_mentor_id, activity_date, activity_type) before writing any service code. Profile all aggregation queries against a dataset of 500+ activities during development.
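The index and a streak computation from the mitigation above might be sketched as follows. Table and column names are assumptions, and the gaps-and-islands query returns every streak segment; the segment ending today is the current streak:

```sql
-- Composite index supporting per-mentor, date-ordered scans.
create index if not exists idx_activity_mentor_date_type
  on peer_mentor_activities (peer_mentor_id, activity_date, activity_type);

-- Streak lengths via gaps-and-islands: consecutive days share the same
-- (activity_date - row_number) group value.
select peer_mentor_id, min(activity_date) as streak_start,
       max(activity_date) as streak_end, count(*) as streak_len
from (
  select peer_mentor_id,
         activity_date,
         activity_date
           - (row_number() over (partition by peer_mentor_id
                                 order by activity_date))::int as grp
  from (select distinct peer_mentor_id, activity_date
          from peer_mentor_activities) d
) g
group by peer_mentor_id, grp;
```

Run server-side as an RPC (or baked into a materialised view), this is one indexed scan per mentor rather than an N+1 pattern from the client.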

Contingency: If query performance is insufficient at launch, implement incremental stat caching: maintain a peer_mentor_stats snapshot table updated on each activity insert via a database trigger, so the aggregator reads from pre-computed values rather than scanning raw activity rows.
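The snapshot-table contingency could look roughly like this; every name below is an assumption for illustration:

```sql
-- Pre-computed per-mentor stats, maintained by trigger so the aggregator
-- never scans raw activity rows.
create table if not exists peer_mentor_stats (
  peer_mentor_id uuid primary key,
  activity_count int not null default 0,
  last_activity_date date
);

create or replace function bump_peer_mentor_stats() returns trigger
language plpgsql as $$
begin
  insert into peer_mentor_stats (peer_mentor_id, activity_count,
                                 last_activity_date)
  values (new.peer_mentor_id, 1, new.activity_date)
  on conflict (peer_mentor_id) do update
    set activity_count = peer_mentor_stats.activity_count + 1,
        last_activity_date = greatest(peer_mentor_stats.last_activity_date,
                                      excluded.last_activity_date);
  return new;
end;
$$;

create trigger trg_bump_stats
  after insert on peer_mentor_activities
  for each row execute function bump_peer_mentor_stats();
```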

Risk 2: technical (medium impact, low probability)

badge-award-service must be idempotent, but if two concurrent edge function invocations evaluate the same peer mentor simultaneously (e.g., from a rapid double-save), both could pass the uniqueness check before either commits, resulting in duplicate badge records.

Mitigation & Contingency

Mitigation: Rely on the database-level uniqueness constraint (peer_mentor_id, badge_definition_id) as the final guard. In the service layer, use an upsert with ON CONFLICT DO NOTHING and return the existing record. Add a Postgres advisory lock or serialisable transaction for the award sequence during the edge function integration epic.
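The advisory-lock variant of the award sequence could be sketched as below; function and table names are assumptions, and the uniqueness constraint remains the final guard even if the lock is removed:

```sql
create or replace function award_badge(p_mentor_id uuid, p_badge_id uuid)
returns void language plpgsql as $$
begin
  -- Transaction-scoped advisory lock keyed on the mentor, so two concurrent
  -- invocations for the same mentor serialise instead of racing.
  perform pg_advisory_xact_lock(hashtext(p_mentor_id::text));

  -- Idempotent award: the (peer_mentor_id, badge_definition_id) constraint
  -- absorbs any duplicate that slips through.
  insert into badges (peer_mentor_id, badge_definition_id, earned_at)
  values (p_mentor_id, p_badge_id, now())
  on conflict (peer_mentor_id, badge_definition_id) do nothing;
end;
$$;
```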

Contingency: If duplicate records are discovered in production, run a deduplication migration to remove extras (keeping earliest earned_at) and add a unique index if not already present. Alert engineering via Supabase database webhook on constraint violations.
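The deduplication migration might be sketched as follows; it assumes an id column as a tiebreaker when two duplicates share the same earned_at:

```sql
-- Keep the row with the earliest (earned_at, id) per mentor/badge pair,
-- delete the rest.
delete from badges b
using badges k
where b.peer_mentor_id = k.peer_mentor_id
  and b.badge_definition_id = k.badge_definition_id
  and (b.earned_at, b.id) > (k.earned_at, k.id);

-- Then enforce uniqueness going forward.
create unique index if not exists uq_badge_per_mentor
  on badges (peer_mentor_id, badge_definition_id);
```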

Risk 3: scope (medium impact, medium probability)

The badge-configuration-service must validate org admin-supplied criteria JSON on save, but the full range of valid criteria types (threshold, streak, training-completion, tier-based) may not be fully enumerated during development, leading to either over-permissive or over-restrictive validation that frustrates admins.

Mitigation & Contingency

Mitigation: Define a versioned Dart sealed class hierarchy for CriteriaType before writing the validation logic. Review the hierarchy with product against all known badge types across NHF, Blindeforbundet, and HLF before implementation. Build the validator against the sealed class so new criteria types require an explicit code addition.
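The sealed hierarchy described above might start as the following Dart sketch. The field names are assumptions; the point is the exhaustive switch, which forces an explicit code change whenever a new criteria type is added:

```dart
// Versioned criteria hierarchy for badge configuration validation.
sealed class CriteriaType {
  const CriteriaType();
}

class ThresholdCriteria extends CriteriaType {
  final String activityType;
  final int count;
  const ThresholdCriteria(this.activityType, this.count);
}

class StreakCriteria extends CriteriaType {
  final int consecutiveDays;
  const StreakCriteria(this.consecutiveDays);
}

class TrainingCompletionCriteria extends CriteriaType {
  final String trainingId;
  const TrainingCompletionCriteria(this.trainingId);
}

class TierBasedCriteria extends CriteriaType {
  final String tier;
  const TierBasedCriteria(this.tier);
}

// Exhaustive switch: the compiler rejects this if a CriteriaType subclass
// is missing, so validation can never silently ignore a new type.
String describe(CriteriaType c) => switch (c) {
      ThresholdCriteria(:final activityType, :final count) =>
        '$count x $activityType',
      StreakCriteria(:final consecutiveDays) =>
        '$consecutiveDays-day streak',
      TrainingCompletionCriteria(:final trainingId) =>
        'completed $trainingId',
      TierBasedCriteria(:final tier) => 'reached $tier',
    };
```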

Contingency: If admins encounter validation rejections for legitimate criteria, expose a 'criteria_raw' escape hatch (JSON passthrough, admin-only) with a product warning, and schedule a sprint to formalise the new criteria type properly.