Priority: critical · Complexity: medium · Area: backend · Status: pending · Assignee: backend specialist · Execution tier: Tier 2

Acceptance Criteria

ThresholdEvaluationService exposes a single method: evaluateClaim(ExpenseClaim claim) → Future<ThresholdEvaluationResult>
ThresholdEvaluationResult contains: outcome (enum: auto_approved, requires_review), failedRules (List<ThresholdRule>), and appliedConfig (ApprovalThresholdConfig snapshot for audit)
Auto-approval outcome is returned only when ALL of the following conditions are met: claim amount ≤ the org's amount_threshold_nok; travel distance ≤ the org's distance_threshold_km (travel claims only); and either the org does not require a receipt, or a receipt is attached (where the org requires receipts above a sub-threshold)
Threshold configuration (ApprovalThresholdConfig) is fetched from Supabase per org_id and cached in memory for the session (TTL: 5 minutes) to avoid repeated DB reads on every claim submission
If threshold configuration is missing for an org, the service defaults to requires_review (fail-safe) and logs a warning
ThresholdRule is a value object identifying which rule was evaluated, its configured value, the claim's actual value, and whether it passed
The service is pure domain logic — it does not trigger any status updates or notifications (those are in tasks 003 and 006)
evaluateClaim is idempotent: calling it twice with the same claim and the same config returns the same result
Unit tests cover: amount exactly at threshold (pass), amount one unit above threshold (fail), distance below threshold with no receipt requirement (auto-approve), missing receipt when required (requires_review), missing org config (requires_review fallback)
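A minimal sketch of the pure domain types and evaluator implied by the criteria above. Everything beyond the names stated in the criteria (the parameter list, rule IDs, the `evaluate` method shape) is an illustrative assumption, not the final API:

```dart
// Sketch only: parameter names and rule IDs are assumptions for illustration.
enum EvaluationOutcome { autoApproved, requiresReview }

/// Value object: which rule was evaluated, its configured value,
/// the claim's actual value, and whether it passed.
class ThresholdRule {
  final String ruleId;
  final num configuredValue;
  final num actualValue;
  final bool passed;
  const ThresholdRule(
      this.ruleId, this.configuredValue, this.actualValue, this.passed);
}

class ThresholdEvaluationResult {
  final EvaluationOutcome outcome;
  final List<ThresholdRule> failedRules;
  const ThresholdEvaluationResult(this.outcome, this.failedRules);
}

/// Pure, synchronous evaluator — no I/O, trivially unit-testable.
class ThresholdEvaluator {
  ThresholdEvaluationResult evaluate({
    required num amountNok,
    required num amountThresholdNok,
    num? distanceKm,
    num? distanceThresholdKm,
    bool receiptRequired = false,
    bool receiptAttached = false,
  }) {
    final failed = <ThresholdRule>[];
    if (amountNok > amountThresholdNok) {
      failed.add(ThresholdRule(
          'amount_threshold_nok', amountThresholdNok, amountNok, false));
    }
    if (distanceKm != null &&
        distanceThresholdKm != null &&
        distanceKm > distanceThresholdKm) {
      failed.add(ThresholdRule(
          'distance_threshold_km', distanceThresholdKm, distanceKm, false));
    }
    if (receiptRequired && !receiptAttached) {
      failed.add(const ThresholdRule('receipt_required', 1, 0, false));
    }
    return ThresholdEvaluationResult(
      failed.isEmpty
          ? EvaluationOutcome.autoApproved
          : EvaluationOutcome.requiresReview,
      failed,
    );
  }
}
```

Because the evaluator takes only values (no repositories, no Supabase client), it is deterministic and idempotent by construction, which satisfies the idempotency criterion directly.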

Technical Requirements

frameworks
Flutter
Dart
supabase_flutter (for config fetch)
Riverpod for caching/DI
apis
Supabase PostgREST (org_approval_thresholds table)
data models
ExpenseClaim
ApprovalThresholdConfig
ThresholdEvaluationResult
ThresholdRule
performance requirements
evaluateClaim must complete synchronously once the config is cached — the evaluation logic itself performs no async work, so the returned Future resolves immediately on a warm cache
Config cache must be invalidated if org settings are updated — listen for config table changes via Realtime or use a short TTL (5 minutes)
Cache miss (first call per session) must resolve within 500ms
security requirements
Threshold config must be fetched server-side with RLS ensuring only the coordinator's own org config is readable
Auto-approval decision must be logged as a ClaimEvent (via ClaimEventsRepository) for full auditability — the caller is responsible for this
Threshold values must be treated as authoritative server values — never allow client-side override

Execution Context

Execution Tier
Tier 2

Tier 2 - 518 tasks

Can start after Tier 1 completes

Implementation Notes

Separate the pure evaluation logic (ThresholdEvaluator) from the service (ThresholdEvaluationService) that handles config fetching and caching. ThresholdEvaluator should be a class with only synchronous, pure methods — this makes it trivially testable without mocks. The service wraps the evaluator and injects the fetched config. For caching, use a simple Dart record {ApprovalThresholdConfig config, DateTime fetchedAt} stored in the provider state; invalidate when fetchedAt + TTL < now.
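The record-based cache described above could be sketched roughly as follows. The fetch callback and `ConfigCache` class are assumed shapes standing in for the Supabase data source and Riverpod provider state:

```dart
// Sketch only: ApprovalThresholdConfig is reduced to one field, and the
// fetch callback stands in for the Supabase query and Riverpod wiring.
class ApprovalThresholdConfig {
  final num amountThresholdNok;
  const ApprovalThresholdConfig(this.amountThresholdNok);
}

// The cache entry is a plain Dart record, as suggested in the notes above.
typedef CacheEntry = ({ApprovalThresholdConfig config, DateTime fetchedAt});

class ConfigCache {
  static const ttl = Duration(minutes: 5);
  final Future<ApprovalThresholdConfig> Function(String orgId) fetch;
  final Map<String, CacheEntry> _entries = {};
  ConfigCache(this.fetch);

  Future<ApprovalThresholdConfig> get(String orgId, {DateTime? now}) async {
    final current = now ?? DateTime.now();
    final entry = _entries[orgId];
    // Invalidate when fetchedAt + TTL < now.
    if (entry != null && entry.fetchedAt.add(ttl).isAfter(current)) {
      return entry.config;
    }
    final config = await fetch(orgId);
    _entries[orgId] = (config: config, fetchedAt: current);
    return config;
  }
}
```

The optional `now` parameter makes TTL expiry testable without clock mocking; in production it defaults to `DateTime.now()`.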

Per the workshop notes, HLF specifically requires that km-based and receipt-based reimbursement be technically impossible to combine on a single claim — represent this as a mutually exclusive rule in the ThresholdRule model. Document the business logic clearly in code comments, since the rules come from Norwegian org-specific requirements.

Testing Requirements

Unit tests (pure Dart, no Flutter framework needed): test all boundary conditions for amount threshold, distance threshold, and receipt requirement combinations. Use a factory method to create test ApprovalThresholdConfig instances. Test cache behavior: verify that a second call within TTL does not trigger a Supabase fetch (mock the data source and assert call count = 1). Test fail-safe: when fetchConfig throws, evaluateClaim returns requires_review.
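The boundary cases listed above might be written as plain Dart tests with package:test. The one-line `autoApprove` helper stands in for the real evaluator, and the factory default of 500 NOK is an invented fixture value, not a real threshold:

```dart
import 'package:test/test.dart';

// Stand-in for the real evaluator: auto-approve iff amount <= threshold.
bool autoApprove(num amountNok, num thresholdNok) => amountNok <= thresholdNok;

// Factory method for test fixtures, as suggested above; 500 is an
// arbitrary default, overridable per test.
num makeThreshold({num amountThresholdNok = 500}) => amountThresholdNok;

void main() {
  test('amount exactly at threshold auto-approves', () {
    expect(autoApprove(500, makeThreshold()), isTrue);
  });

  test('amount one unit above threshold requires review', () {
    expect(autoApprove(501, makeThreshold()), isFalse);
  });

  test('custom threshold from factory is respected', () {
    expect(autoApprove(999, makeThreshold(amountThresholdNok: 1000)), isTrue);
  });
}
```

The cache-miss-count and fail-safe tests would follow the same pattern, injecting a counting or throwing fetch callback in place of the Supabase data source.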

Integration test: fetch real config from a test Supabase project and verify evaluation against a set of fixture claims. Target 95%+ line coverage on the pure evaluation logic (domain layer).

Epic Risks (3)
medium impact medium prob technical

Maintaining multi-select state across paginated list pages is architecturally complex in Flutter with Riverpod/BLoC. If the selection state is stored in the widget tree rather than the state layer, page transitions and list redraws can silently clear selections, causing coordinators to lose their selection and forcing them to re-enter it.

Mitigation & Contingency

Mitigation: Store the selected claim ID set in a dedicated Riverpod StateNotifier outside the paginated list widget tree. The paginated list reads selection state from this provider and does not own it. Selection persists independently of list scroll position or page loads.
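A minimal sketch of the dedicated StateNotifier described in this mitigation. The class and provider names are illustrative assumptions:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

/// Holds the selected claim IDs outside the paginated list widget tree,
/// so page loads and list redraws cannot clear the selection.
class ClaimSelectionNotifier extends StateNotifier<Set<String>> {
  ClaimSelectionNotifier() : super(const {});

  void toggle(String claimId) {
    state = state.contains(claimId)
        ? ({...state}..remove(claimId))
        : {...state, claimId};
  }

  void clear() => state = const {};
}

final claimSelectionProvider =
    StateNotifierProvider<ClaimSelectionNotifier, Set<String>>(
        (ref) => ClaimSelectionNotifier());
```

The list items watch `claimSelectionProvider` for their checked state but never own it, so scrolling past a page boundary or rebuilding the list leaves the selected set untouched.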

Contingency: If cross-page selection proves prohibitively complex, limit bulk selection to the currently visible page (add a clear warning in the UI) and prioritise single-page bulk approval for the initial release.

medium impact medium prob integration

If a coordinator has the queue open while another coordinator approves claims from the same queue (possible in large organisations with shared chapter coverage), the Realtime update may arrive out of order or be missed during a reconnect, leaving the first coordinator's view stale and allowing them to attempt to approve an already-actioned claim.

Mitigation & Contingency

Mitigation: The ApprovalWorkflowService's optimistic locking (from the foundation epic) will catch the concurrent edit at the database level. The CoordinatorReviewQueueScreen should handle the resulting ConcurrencyException by removing the claim from the local list and showing a brief snackbar: 'This claim was already actioned by another coordinator.'
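The suggested handling could look like the sketch below. `ConcurrencyException` comes from the foundation epic; the service stub, the local queue shape, and `approveFromQueue` are assumptions made to keep the example self-contained:

```dart
// Stand-in for the foundation epic's exception type.
class ConcurrencyException implements Exception {}

// Stub service: throws when the claim was already actioned, mimicking
// the optimistic-lock failure at the database level.
class ApprovalWorkflowService {
  final Set<String> _alreadyActioned;
  ApprovalWorkflowService(this._alreadyActioned);

  Future<void> approve(String claimId) async {
    if (_alreadyActioned.contains(claimId)) throw ConcurrencyException();
    _alreadyActioned.add(claimId);
  }
}

/// Approves a claim from the queue; on a concurrent edit, removes the
/// claim locally and returns the snackbar message instead of rethrowing.
Future<String?> approveFromQueue(ApprovalWorkflowService service,
    List<String> localQueue, String claimId) async {
  try {
    await service.approve(claimId);
    localQueue.remove(claimId);
    return null;
  } on ConcurrencyException {
    localQueue.remove(claimId);
    return 'This claim was already actioned by another coordinator.';
  }
}
```

In the real screen, the returned message would be shown via `ScaffoldMessenger` and the removal routed through the queue's state provider rather than a raw list.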

Contingency: Add a queue staleness indicator (a subtle 'last updated X seconds ago' label) and a manual refresh button as a fallback for coordinators who notice inconsistencies.

low impact high prob dependency

The end-to-end test requirement that a peer mentor receives a push notification within 30 seconds of coordinator approval depends on FCM delivery latency, which is outside the application's control and can vary significantly in CI/CD environments.

Mitigation & Contingency

Mitigation: Structure end-to-end tests to verify notification intent (correct FCM payload dispatched, correct Realtime event emitted) rather than actual device delivery timing. Use test doubles for FCM delivery in automated tests and reserve real-device delivery tests for manual pre-release validation.

Contingency: If notification timing requirements must be validated in automation, instrument the ApprovalNotificationService with a test hook that records dispatch timestamps and assert against those rather than actual FCM callbacks.
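The test-double approach above (recording dispatch intent and timestamps instead of waiting on real FCM callbacks) could be sketched as follows; the interface and class names are illustrative, and no real FCM API is involved:

```dart
/// Abstracts push dispatch so tests can substitute a recorder for FCM.
abstract class PushDispatcher {
  Future<void> send(String token, Map<String, Object?> payload);
}

/// Test double: records each payload and its dispatch timestamp,
/// letting tests assert on intent and timing without device delivery.
class RecordingDispatcher implements PushDispatcher {
  final List<({Map<String, Object?> payload, DateTime at})> sent = [];

  @override
  Future<void> send(String token, Map<String, Object?> payload) async {
    sent.add((payload: payload, at: DateTime.now()));
  }
}
```

A test then injects `RecordingDispatcher` into the notification service, triggers an approval, and asserts on `sent` — payload contents and dispatch timestamp — rather than on an actual FCM callback.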