Unit test ThresholdEvaluationService evaluation logic
epic-expense-approval-workflow-foundation-task-013 — Write comprehensive unit tests for ThresholdEvaluationService covering: auto-approval below distance threshold (under 50 km, no receipts), manual approval required above threshold, receipt requirement triggered by amount, org-specific threshold configuration overrides, and edge cases at exact threshold boundaries. Verify outputs match expected Edge Function behaviour for each scenario.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 4 - 323 tasks
Can start after Tier 3 completes
Implementation Notes
ThresholdEvaluationService should be a pure Dart class with no Flutter dependencies — all logic is synchronous value computation. Inject OrgThresholdConfig as a constructor parameter so tests can supply controlled values without touching Supabase. The boundary condition (exactly 50 km) must be explicitly documented in both the test and the service: use `distance >= threshold` (not `>`). For org overrides, test that a config with `distanceThresholdKm: 30` is respected even when global default is 50.
Use a helper factory `makeConfig({distanceThresholdKm, receiptThresholdAmount})` to keep test setup concise. Map each test case back to the HLF requirement: automatic approval under 50 km / no receipts, manual otherwise.
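The notes above can be sketched as a pure Dart service. This is a hedged sketch, not the actual implementation: the class shapes, the `EvaluationResult` type, and the `evaluate` signature are assumptions, and the receipt-amount boundary is assumed inclusive (`>=`) to mirror the documented distance rule.

```dart
/// Org-scoped thresholds, injected via the constructor so tests can supply
/// controlled values without touching Supabase.
class OrgThresholdConfig {
  final double distanceThresholdKm;
  final double receiptThresholdAmount;
  const OrgThresholdConfig({
    this.distanceThresholdKm = 50,
    this.receiptThresholdAmount = 100,
  });
}

/// Outcome of evaluating a single claim (hypothetical result type).
class EvaluationResult {
  final bool autoApproved;
  final bool receiptRequired;
  const EvaluationResult({
    required this.autoApproved,
    required this.receiptRequired,
  });
}

/// Pure Dart, no Flutter dependencies; all logic is synchronous.
class ThresholdEvaluationService {
  final OrgThresholdConfig config;
  const ThresholdEvaluationService(this.config);

  EvaluationResult evaluate({
    required double distanceKm,
    required double amount,
  }) {
    // Receipt requirement is triggered by amount (inclusive boundary is an
    // assumption, mirroring the distance rule).
    final receiptRequired = amount >= config.receiptThresholdAmount;
    // Boundary rule per the notes: exactly at the threshold goes to manual
    // approval, i.e. `distanceKm >= threshold` is manual, auto only strictly under.
    final autoApproved =
        distanceKm < config.distanceThresholdKm && !receiptRequired;
    return EvaluationResult(
      autoApproved: autoApproved,
      receiptRequired: receiptRequired,
    );
  }
}
```

An org override is then just a different injected config: a service built with `OrgThresholdConfig(distanceThresholdKm: 30)` treats a 35 km claim as manual even though the global default is 50.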
Testing Requirements
Unit tests only (no integration or widget tests). Use flutter_test, with mocktail to stub any injected dependencies. Organise tests in a group structure: (1) auto-approval scenarios, (2) manual-approval scenarios, (3) receipt-trigger scenarios, (4) org-override scenarios, (5) boundary/edge cases. Each group must have at least 3 test cases.
Aim for 100% branch coverage on the service class. Use parameterised test tables (forEach) for boundary cases to reduce boilerplate. No golden file tests needed.
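A parameterised boundary table could look like the following sketch. The stand-in classes, `makeConfig`, and the service API are assumptions about the eventual implementation (in the real suite they come from the app package under test), and running it requires the Flutter SDK for flutter_test.

```dart
import 'package:flutter_test/flutter_test.dart';

// Minimal stand-ins so the sketch compiles on its own; in the real suite
// these are imported from the app package under test.
class OrgThresholdConfig {
  final double distanceThresholdKm;
  const OrgThresholdConfig({this.distanceThresholdKm = 50});
}

class ThresholdEvaluationService {
  final OrgThresholdConfig config;
  const ThresholdEvaluationService(this.config);
  bool isAutoApproved(double distanceKm) =>
      distanceKm < config.distanceThresholdKm; // >= threshold goes manual
}

OrgThresholdConfig makeConfig({double distanceThresholdKm = 50}) =>
    OrgThresholdConfig(distanceThresholdKm: distanceThresholdKm);

void main() {
  group('boundary/edge cases', () {
    // Parameterised table: one row per boundary case, one test() per row.
    const cases = [
      (distanceKm: 49.99, autoApproved: true),
      (distanceKm: 50.0, autoApproved: false), // exact threshold is manual
      (distanceKm: 50.01, autoApproved: false),
    ];
    for (final c in cases) {
      test('${c.distanceKm} km => autoApproved=${c.autoApproved}', () {
        final service = ThresholdEvaluationService(makeConfig());
        expect(service.isAutoApproved(c.distanceKm), c.autoApproved);
      });
    }
  });
}
```

Each table row yields its own named test case, so a failing boundary shows up individually in the test report rather than as one opaque loop failure.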
Optimistic locking in ExpenseClaimStatusRepository may produce excessive concurrency exceptions in high-volume coordinator sessions where multiple coordinators process the same queue simultaneously, causing confusing UI errors and coordinator frustration.
Mitigation & Contingency
Mitigation: Design the locking strategy with a short retry window (1-2 automatic retries with 200ms back-off) before surfacing the error to the UI. Document the concurrency model clearly so the UI layer can display a contextual 'claim was already actioned' message rather than a generic error.
Contingency: If contention remains high under load testing, switch to a last-writer-wins update with a conflict notification rather than a hard block, and log all concurrent edits for audit purposes.
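The retry window in the mitigation could be wrapped as a small helper. This is a minimal sketch under stated assumptions: `ConflictException` and the repository callback are hypothetical names standing in for whatever ExpenseClaimStatusRepository actually throws on an optimistic-lock failure.

```dart
/// Hypothetical exception thrown on an optimistic-lock conflict.
class ConflictException implements Exception {
  @override
  String toString() => 'ConflictException: claim was already actioned';
}

/// Runs [action], retrying up to [maxRetries] times on a conflict with a
/// fixed back-off, before surfacing the error to the UI layer.
Future<T> withConflictRetry<T>(
  Future<T> Function() action, {
  int maxRetries = 2,
  Duration backOff = const Duration(milliseconds: 200),
}) async {
  for (var attempt = 0;; attempt++) {
    try {
      return await action();
    } on ConflictException {
      if (attempt >= maxRetries) rethrow; // UI shows a contextual message
      await Future<void>.delayed(backOff);
    }
  }
}
```

After the retries are exhausted, the UI layer catches the rethrown `ConflictException` and displays the contextual 'claim was already actioned' message rather than a generic error.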
FCM device tokens stored for peer mentors may be stale (app reinstalled, token rotated) causing push notifications for claim status changes to silently fail, leaving submitters unaware their claim was approved or rejected.
Mitigation & Contingency
Mitigation: Implement token refresh on every app launch and store updated tokens in Supabase. ApprovalNotificationService should fall back to in-app Realtime delivery when FCM returns an invalid-token error and should queue a token refresh request.
Contingency: If FCM delivery rates fall below acceptable thresholds in production monitoring, add a polling fallback in the peer mentor claim list screen that checks status on foreground resume.
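The fallback path in the mitigation could be structured as below. This is a hedged sketch: `PushSender`, `RealtimeSender`, and `TokenRefreshQueue` are hypothetical stand-ins, not real Firebase or Supabase APIs; the real service would wrap the actual SDK calls behind these seams.

```dart
/// Hypothetical push abstraction; returns false when the provider
/// reports the device token as invalid or unregistered.
abstract class PushSender {
  Future<bool> send(String token, String message);
}

/// Hypothetical in-app Realtime delivery abstraction.
abstract class RealtimeSender {
  Future<void> send(String userId, String message);
}

/// Hypothetical queue of pending token refresh requests.
abstract class TokenRefreshQueue {
  void enqueue(String userId);
}

class ApprovalNotificationService {
  final PushSender push;
  final RealtimeSender realtime;
  final TokenRefreshQueue refreshQueue;
  ApprovalNotificationService(this.push, this.realtime, this.refreshQueue);

  Future<void> notifyStatusChange(
      String userId, String token, String message) async {
    final delivered = await push.send(token, message);
    if (!delivered) {
      refreshQueue.enqueue(userId); // stale token: refresh on next launch
      await realtime.send(userId, message); // in-app fallback delivery
    }
  }
}
```

Keeping the senders behind interfaces also makes the fallback branch trivially unit-testable with fakes, without any Firebase or Supabase setup.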
Supabase Realtime has per-project channel and connection limits. If many coordinators and peer mentors are simultaneously subscribed across multiple screens, the project may hit quota limits causing subscription failures.
Mitigation & Contingency
Mitigation: Design RealtimeApprovalSubscription to use a single shared channel per user session rather than per-screen subscriptions. Implement subscription reference counting so channels are only opened once and reused across screens.
Contingency: Upgrade the Supabase plan tier if limits are reached, and implement graceful degradation to polling with a 30-second interval when Realtime is unavailable.
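The reference-counting scheme in the mitigation can be sketched as follows. The `Channel` type here is a hypothetical stand-in for a Supabase Realtime channel, and the class shape is an assumption about RealtimeApprovalSubscription.

```dart
/// Stand-in for a Supabase Realtime channel.
class Channel {
  final String name;
  bool closed = false;
  Channel(this.name);
  void close() => closed = true;
}

/// One shared channel per user session: opened on the first acquire(),
/// reused by subsequent screens, closed when the last screen releases it.
class RealtimeApprovalSubscription {
  Channel? _channel;
  int _refCount = 0;

  /// Screens call acquire() on init; the channel is only opened once.
  Channel acquire(String sessionChannelName) {
    _refCount++;
    return _channel ??= Channel(sessionChannelName);
  }

  /// Screens call release() on dispose; the channel closes only when
  /// no screen holds a reference any more.
  void release() {
    if (_refCount == 0) return;
    _refCount--;
    if (_refCount == 0) {
      _channel?.close();
      _channel = null;
    }
  }
}
```

With this shape, adding a new screen that listens for approval updates costs no extra Realtime connections, which keeps the project well inside per-project channel quotas.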