Implement ThresholdEvaluationService shared logic
epic-expense-approval-workflow-foundation-task-007 — Implement the ThresholdEvaluationService as a pure Dart class (no Flutter dependencies) with evaluateClaim(claimAmount, distanceKm, hasReceipts, orgThresholdConfig) returning a ThresholdResult (autoApprove/requiresManual/requiresReceipt). This logic must be identical to the Edge Function implementation to prevent client-side bypass. Include org-specific threshold configuration loading.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
Define ThresholdResult as a sealed class with three subclasses to enable exhaustive pattern matching in the BLoC. OrgThresholdConfig should be a simple immutable data class with copyWith. Load config lazily on first call and cache it in a simple in-memory Map.
Place the service in lib/domain/services/ to signal it is domain-layer, not infrastructure. Mirror the same threshold comparison operators (< vs <=) as the Edge Function, and document which boundary belongs to which result to prevent off-by-one divergence.
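To make the boundary convention concrete, here is a minimal sketch of the shared evaluation logic, written in TypeScript since the Dart service and the Edge Function must behave identically. The config field names, threshold values, and the discriminated-union result (the TS analogue of the Dart sealed class) are illustrative assumptions, not the real org schema.

```typescript
// Three-variant result, mirroring the Dart sealed class for exhaustive matching.
type ThresholdResult =
  | { kind: "autoApprove" }
  | { kind: "requiresManual"; reason: string }
  | { kind: "requiresReceipt" };

// Hypothetical org config shape; the real OrgThresholdConfig fields may differ.
interface OrgThresholdConfig {
  autoApproveMaxAmount: number;     // amounts strictly below this auto-approve
  receiptRequiredMinAmount: number; // amounts at or above this need receipts
  maxDistanceKm: number;
}

// Boundary convention (document this to prevent off-by-one divergence):
//   claimAmount <  autoApproveMaxAmount     -> eligible for auto-approve
//   claimAmount >= receiptRequiredMinAmount -> receipt required if missing
//   distanceKm  >  maxDistanceKm            -> manual review
function evaluateClaim(
  claimAmount: number,
  distanceKm: number,
  hasReceipts: boolean,
  config: OrgThresholdConfig,
): ThresholdResult {
  if (claimAmount >= config.receiptRequiredMinAmount && !hasReceipts) {
    return { kind: "requiresReceipt" };
  }
  if (distanceKm > config.maxDistanceKm) {
    return { kind: "requiresManual", reason: "distance over limit" };
  }
  if (claimAmount < config.autoApproveMaxAmount) {
    return { kind: "autoApprove" };
  }
  return { kind: "requiresManual", reason: "amount over auto-approve limit" };
}
```

Checking the rules in this fixed order (receipt first, then distance, then amount) on both sides is part of the parity contract; reordering them on one side could change results for claims that trip multiple thresholds.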
Testing Requirements
Unit tests using flutter_test (package:test compatible). Test suite: (1) pure logic tests: 10+ parameterized cases covering boundary values for each ThresholdResult variant; (2) config loading tests: mock Supabase client returning org config, default fallback when not found, error propagation on network failure; (3) parity tests: a shared JSON fixture file with 20 test vectors, also run against the Edge Function in CI to guarantee identical behavior. Target 100% branch coverage on evaluateClaim and all ThresholdResult constructors.
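The parity fixture could take roughly the following shape. This is a hedged sketch: the vector field names are assumptions, and in practice the vectors would live in a checked-in JSON file loaded by both the Dart unit tests and the Edge Function CI job rather than being inlined.

```typescript
// Hypothetical shared test-vector shape for the client/server parity suite.
interface TestVector {
  claimAmount: number;
  distanceKm: number;
  hasReceipts: boolean;
  expected: "autoApprove" | "requiresManual" | "requiresReceipt";
}

// Illustrative subset of the 20 vectors; boundary values deserve explicit rows.
const vectors: TestVector[] = [
  { claimAmount: 49.99, distanceKm: 10, hasReceipts: true,  expected: "autoApprove" },
  { claimAmount: 50.0,  distanceKm: 10, hasReceipts: true,  expected: "requiresManual" },
  { claimAmount: 100.0, distanceKm: 10, hasReceipts: false, expected: "requiresReceipt" },
];

// Each side (Dart service, Edge Function) evaluates every vector and compares
// against `expected`; any mismatch fails the CI build.
function checkParity(
  evaluate: (a: number, d: number, r: boolean) => string,
): string[] {
  return vectors
    .filter((v) => evaluate(v.claimAmount, v.distanceKm, v.hasReceipts) !== v.expected)
    .map((v) => `mismatch for amount=${v.claimAmount}`);
}
```

Because the fixture is the single source of truth, a divergence in either implementation surfaces as a CI failure rather than a silent client-side bypass.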
Optimistic locking in ExpenseClaimStatusRepository may produce excessive concurrency exceptions in high-volume coordinator sessions where multiple coordinators process the same queue simultaneously, causing confusing UI errors and coordinator frustration.
Mitigation & Contingency
Mitigation: Design the locking strategy with a short retry window (1-2 automatic retries with 200ms back-off) before surfacing the error to the UI. Document the concurrency model clearly so the UI layer can display a contextual 'claim was already actioned' message rather than a generic error.
Contingency: If contention remains high under load testing, switch to a last-writer-wins update with a conflict notification rather than a hard block, and log all concurrent edits for audit purposes.
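The retry window described in the mitigation can be sketched as follows. This is an illustrative sketch in TypeScript, not the real repository API: updateWithRetry, ConflictError, and the attempt callback are hypothetical names.

```typescript
// Raised when the version-checked UPDATE matched zero rows (another
// coordinator actioned the claim first).
class ConflictError extends Error {}

const sleep = (ms: number) => new Promise<void>((res) => setTimeout(res, ms));

// Retry a conflicting optimistic-lock update a small, bounded number of times
// with a short back-off before surfacing the conflict to the UI layer.
async function updateWithRetry<T>(
  attempt: () => Promise<T>, // performs the version-checked UPDATE
  maxRetries = 2,            // 1-2 automatic retries per the mitigation
  backoffMs = 200,
): Promise<T> {
  for (let tries = 0; ; tries++) {
    try {
      return await attempt();
    } catch (e) {
      if (!(e instanceof ConflictError) || tries >= maxRetries) throw e;
      await sleep(backoffMs); // contention may have settled; re-read and retry
    }
  }
}
```

Only ConflictError is retried; any other failure propagates immediately, and after the final retry the UI receives the conflict and can show the contextual "claim was already actioned" message.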
FCM device tokens stored for peer mentors may be stale (app reinstalled, token rotated) causing push notifications for claim status changes to silently fail, leaving submitters unaware their claim was approved or rejected.
Mitigation & Contingency
Mitigation: Implement token refresh on every app launch and store updated tokens in Supabase. ApprovalNotificationService should fall back to in-app Realtime delivery when FCM returns an invalid-token error and should queue a token refresh request.
Contingency: If FCM delivery rates fall below acceptable thresholds in production monitoring, add a polling fallback in the peer mentor claim list screen that checks status on foreground resume.
Supabase Realtime has per-project channel and connection limits. If many coordinators and peer mentors are simultaneously subscribed across multiple screens, the project may hit quota limits, causing subscription failures.
Mitigation & Contingency
Mitigation: Design RealtimeApprovalSubscription to use a single shared channel per user session rather than per-screen subscriptions. Implement subscription reference counting so channels are only opened once and reused across screens.
Contingency: Upgrade the Supabase plan tier if limits are reached, and implement graceful degradation to polling with a 30-second interval when Realtime is unavailable.
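The reference-counting idea reduces to a small acquire/release wrapper around the session channel. A minimal sketch, with the open/close counters exposed only for illustration; the class name and shape are hypothetical, not the real RealtimeApprovalSubscription API.

```typescript
// Screens call acquire() on subscribe and release() on dispose. The underlying
// Realtime channel is opened only on the first acquire and closed only when
// the last screen releases it, keeping one channel per user session.
class SharedChannel {
  private refs = 0;
  public opens = 0;  // illustrative counters standing in for the real
  public closes = 0; // channel.subscribe()/unsubscribe() calls

  acquire(): void {
    if (this.refs === 0) this.opens++; // open the real channel exactly once
    this.refs++;
  }

  release(): void {
    if (this.refs === 0) return;        // guard against double-release
    this.refs--;
    if (this.refs === 0) this.closes++; // close when the last screen leaves
  }
}
```

With this in place, navigating between the coordinator queue and a claim detail screen reuses one connection instead of opening a channel per screen, which is what keeps the project under the per-project quota.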