Priority: high · Complexity: medium · Type: testing · Status: pending · Assignee: testing specialist · Tier: 7

Acceptance Criteria

- Integration tests connect to a dedicated Supabase test project (separate from production) with credentials injected via environment variables, never hardcoded
- Test setup creates a clean database state before each test using setUp/tearDown, with delete operations scoped to test-owned rows (via a unique test_run_id or test user ID)
- Scenario 'auto-approval': after submitClaim() with distance below the threshold and no extra expenses, a SELECT on the mileage_claims table confirms status = 'auto_approved' for the inserted row
- Scenario 'pending-review by distance': after submitClaim() with distance above the threshold, a SELECT confirms status = 'pending_review'
- Scenario 'pending-review by extra expense': after submitClaim() with distance below the threshold but extra expenses included, a SELECT confirms status = 'pending_review'
- Scenario 'cache update on success': after a successful submitClaim(), a SELECT on the distance_prefill table (or equivalent) confirms an entry exists for the submitted route with the correct distance value
- Scenario 'no coordinator notification for silent approval': after an auto-approval submitClaim(), a SELECT on coordinator_notifications (or equivalent) confirms zero rows were inserted for this claim
- All integration test scenarios pass reliably when run in CI against the test Supabase instance
- Tests are tagged with `@Tags(['integration'])` so they can be excluded from fast unit test runs
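The notification-absence criterion is the easiest to get wrong, because asserting on a whole-table row count breaks as soon as tests run in parallel. A minimal sketch of the filtered assertion, assuming the Supabase Dart client and a `claimId` captured from the submission (both names are illustrative):

```dart
// Sketch: assumes `client` is a SupabaseClient pointed at the test project
// and `claimId` identifies the claim that was just auto-approved.
final rows = await client
    .from('coordinator_notifications')
    .select()
    .eq('claim_id', claimId);

// Assert zero rows for THIS claim, not for the whole table, so parallel
// test runs cannot interfere with the result.
expect(rows, isEmpty);
```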

Technical Requirements

Frameworks
- Flutter
- Dart
- flutter_test
- Supabase Dart client

APIs
- Supabase REST API (mileage_claims table)
- Supabase REST API (distance_prefill table)
- Supabase REST API (coordinator_notifications table)
- MileageClaimService.submitClaim() (full production implementation, no mocks)

Data models
- MileageClaim
- SubmissionOutcome
- ClaimStatus
- DistancePrefillEntry
- CoordinatorNotification

Performance requirements
- Each integration test scenario must complete within 10 seconds against the test Supabase instance
- Total suite runtime must be under 60 seconds

Security requirements
- Supabase test project URL and anon key must be loaded from environment variables (SUPABASE_TEST_URL, SUPABASE_TEST_ANON_KEY) and never committed to source control
- Test users must use synthetic data with no real personal information
- Test cleanup must delete all rows created during the test to prevent data leakage between runs

Execution Context

Execution Tier: Tier 7 (84 tasks)

Can start after Tier 6 completes.

Implementation Notes

Create a `test/helpers/supabase_test_client.dart` factory that reads environment variables and returns a configured SupabaseClient. Each test should generate a unique `testRunId = const Uuid().v4()` in setUp() and inject it into all created entities (e.g., as an external_ref or metadata field) for reliable cleanup. The integration test should use the real MileageClaimService with real repository implementations — only the Supabase client is pointed at the test project. For the 'no coordinator notification' assertion, query the notifications table and filter by claim_id: expect the result to be empty.
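A possible shape for that factory (a sketch: the function name and error handling are assumptions, and `Platform.environment` assumes the tests run on the host Dart VM, which is the case for `flutter test`):

```dart
// test/helpers/supabase_test_client.dart (sketch)
import 'dart:io';

import 'package:supabase/supabase.dart';

/// Builds a client against the dedicated test project, failing fast in CI
/// when credentials are missing rather than silently hitting a default.
SupabaseClient createTestSupabaseClient() {
  final url = Platform.environment['SUPABASE_TEST_URL'];
  final anonKey = Platform.environment['SUPABASE_TEST_ANON_KEY'];
  if (url == null || anonKey == null) {
    throw StateError(
      'SUPABASE_TEST_URL and SUPABASE_TEST_ANON_KEY must be set',
    );
  }
  return SupabaseClient(url, anonKey);
}
```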

If the coordinator_notifications table does not yet exist, coordinate with the persistence task owners before writing this assertion. Avoid asserting on row count of the entire table — always filter by test-owned IDs.

Testing Requirements

Integration tests use flutter_test with the full Supabase Dart client wired to a test project. Tag tests with `@Tags(['integration'])` to separate them from unit tests. Use a `testSupabaseClient` fixture initialized once per test file in `setUpAll()`. Each test must perform cleanup in `tearDown()` with a delete filtered on the test's `test_run_id` to avoid cross-test contamination.

Do not use transactions that auto-rollback: verify that data was actually written, then clean it up explicitly, since these tests exercise the real persistence path. Run with `flutter test --tags integration`. In CI, set environment variables for the test Supabase credentials and gate these tests on a separate pipeline step from unit tests.
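Putting the requirements together, one possible test-file skeleton for the auto-approval scenario (a sketch: `createTestSupabaseClient`, `service`, and `shortTripClaim` are assumed helpers, and the column names follow the acceptance criteria but may differ in the actual schema):

```dart
@Tags(['integration'])
library;

import 'package:flutter_test/flutter_test.dart';
import 'package:supabase/supabase.dart';
import 'package:uuid/uuid.dart';

void main() {
  late SupabaseClient client;
  late String testRunId;

  setUpAll(() {
    // Hypothetical helper that reads SUPABASE_TEST_URL / SUPABASE_TEST_ANON_KEY.
    client = createTestSupabaseClient();
  });

  setUp(() {
    testRunId = const Uuid().v4(); // tags every row this test creates
  });

  tearDown(() async {
    // Explicit cleanup of test-owned rows only; no transaction rollback,
    // because the real persistence path must actually write the data.
    await client.from('mileage_claims').delete().eq('test_run_id', testRunId);
  });

  test('auto-approves a short mileage-only claim', () async {
    // The real production MileageClaimService is used; the claim builder
    // and service wiring are assumed here.
    await service.submitClaim(shortTripClaim(testRunId));

    final rows = await client
        .from('mileage_claims')
        .select()
        .eq('test_run_id', testRunId);
    expect(rows, hasLength(1));
    expect(rows.first['status'], 'auto_approved');
  });
}
```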

Component
Mileage Claim Service (service, medium complexity)
Epic Risks (2)
Risk 1: integration (high impact, medium probability)

The auto-approval rule requires checking whether any additional expense lines are attached to the claim. The interface between the mileage claim and any co-submitted expense items is not fully defined within this feature's component scope. If the domain model does not include an explicit additionalExpenses collection, the evaluator cannot make a correct determination, which could auto-approve claims that should require manual review.

Mitigation & Contingency

Mitigation: Define the MileageClaim domain object interface with an explicit `additionalExpenses` list field (null or empty for mileage-only claims) before implementing the service. Coordinate with the Expense Type Selection feature team to agree on the shared domain contract.

Contingency: If the cross-feature contract cannot be finalised before implementation, implement the evaluator to treat any non-null additionalExpenses list as requiring manual review and document the assumption for review during integration testing.
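Under the acceptance-criteria rules above, the evaluator could look like this (a sketch: the field names, threshold parameter, and enum values are illustrative, not the agreed cross-feature contract):

```dart
// Illustrative status evaluator; `distanceKm`, `additionalExpenses`, and the
// threshold parameter are assumed names pending the shared domain contract.
ClaimStatus evaluateStatus(
  MileageClaim claim, {
  required double autoApprovalThresholdKm,
}) {
  // Any attached extra expenses force manual review, per the acceptance
  // criteria; the contingency above would tighten this to any non-null list.
  final hasExtraExpenses = claim.additionalExpenses?.isNotEmpty ?? false;
  if (hasExtraExpenses) return ClaimStatus.pendingReview;

  // Long trips also require manual review.
  if (claim.distanceKm > autoApprovalThresholdKm) {
    return ClaimStatus.pendingReview;
  }

  // Short, mileage-only claims are silently approved.
  return ClaimStatus.autoApproved;
}
```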

Risk 2: technical (medium impact, medium probability)

A peer mentor who taps the submit button multiple times rapidly (e.g. due to slow network) could cause MileageClaimService to be invoked concurrently, resulting in duplicate claim records being persisted with the same trip data.

Mitigation & Contingency

Mitigation: Implement a submission-in-progress guard in MileageClaimService using a BLoC/Cubit state flag that prevents re-entrant calls. The UI layer (implemented in Epic 4) will also disable the submit button during processing.

Contingency: Add a Supabase-level unique constraint or idempotency key on (user_id, origin, distance, submitted_at truncated to minute) to prevent duplicate rows reaching the database even if the application guard fails.
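The application-level guard from the mitigation can be as simple as an in-flight flag on the service (a sketch; the service internals and the thrown error type are assumptions):

```dart
class MileageClaimService {
  bool _submitting = false; // re-entrancy guard for rapid double-taps

  Future<SubmissionOutcome> submitClaim(MileageClaim claim) async {
    if (_submitting) {
      // A submission is already in flight; refuse the re-entrant call so
      // duplicate claim rows are never attempted from the app layer.
      throw StateError('submission already in progress');
    }
    _submitting = true;
    try {
      return await _persistClaim(claim);
    } finally {
      _submitting = false; // always release, even when persistence fails
    }
  }

  Future<SubmissionOutcome> _persistClaim(MileageClaim claim) {
    // Real repository/Supabase writes go here; the database-level
    // idempotency key from the contingency remains the backstop if this
    // in-process guard is ever bypassed.
    throw UnimplementedError();
  }
}
```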