Implement receipt upload orchestration in submission service
epic-travel-expense-registration-core-services-task-005 — Build the receipt upload step within the expense submission service. Before persisting the expense record, the service must upload any attached receipt image via the ReceiptStorageAdapter, obtain a storage reference URL, and attach it to the expense payload. Handle upload failures with a typed ReceiptUploadError that allows the caller to retry without re-entering form data.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
Define ReceiptStorageAdapter as an abstract class (interface) in the domain layer. The concrete Supabase implementation lives in the infrastructure layer. Inject via Riverpod so tests can swap it out. Store receipt at path: `receipts/{org_id}/{user_id}/{uuid}.{ext}`.
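The port/adapter split above could be sketched as follows. The method name and parameter list are assumptions (the task does not pin them down), and the provider is shown unbound so startup code or tests must override it:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

/// Domain-layer port. Method name and parameters are illustrative
/// assumptions, not confirmed by the task description.
abstract class ReceiptStorageAdapter {
  /// Uploads the image and returns the storage URL (a signed URL if the
  /// bucket is private). Target path: receipts/{org_id}/{user_id}/{uuid}.{ext}
  Future<String> uploadReceipt({
    required String orgId,
    required String userId,
    required List<int> bytes,
    required String extension,
  });
}

/// Bound to the concrete Supabase implementation at app startup, and to a
/// mock in unit tests, via Riverpod's provider-override mechanism.
final receiptStorageAdapterProvider = Provider<ReceiptStorageAdapter>(
  (ref) => throw UnimplementedError('override with an implementation'),
);
```

Keeping the provider unimplemented by default makes any missing binding fail fast rather than silently uploading nowhere.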
Return the full storage URL (or a signed URL if bucket is private) from the adapter. Use Dart's `path` package to extract file extension for format validation. Do not compress images in this service — compression is a UI-layer concern before the image reaches the service. The method signature should be: `Future
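Since the signature above is truncated in the source notes, here is a hedged sketch of the pre-upload validation step only. The error class names come from the testing requirements; the allowed extensions and size limit are assumptions the real catalogue or config should supply:

```dart
import 'package:path/path.dart' as p;

/// Typed failure hierarchy so the caller can retry without losing form
/// state. Class names are taken from the testing requirements.
sealed class ReceiptUploadError implements Exception {}

class ReceiptTooLarge extends ReceiptUploadError {}

class ReceiptInvalidFormat extends ReceiptUploadError {}

// Assumed limits; the task does not specify concrete values.
const allowedExtensions = {'.jpg', '.jpeg', '.png', '.pdf'};
const maxReceiptBytes = 10 * 1024 * 1024;

/// Runs both checks before any upload attempt, matching test cases (4)
/// and (5). Note p.extension() returns the leading dot, so the storage
/// path becomes receipts/{org_id}/{user_id}/{uuid}{ext}.
String validateReceipt(String filename, int sizeInBytes) {
  final ext = p.extension(filename).toLowerCase();
  if (!allowedExtensions.contains(ext)) throw ReceiptInvalidFormat();
  if (sizeInBytes > maxReceiptBytes) throw ReceiptTooLarge();
  return ext;
}
```

Validating before touching the adapter keeps the size/format failures cheap and offline-safe.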
Testing Requirements
Unit tests using flutter_test with a mock ReceiptStorageAdapter (implement the interface, return controlled responses). Test cases: (1) no receipt → upload not called, payload unchanged; (2) receipt present, upload succeeds → URL attached to payload; (3) upload throws network error → ReceiptUploadError returned, no DB call; (4) file too large → ReceiptTooLarge before upload attempt; (5) invalid format → ReceiptInvalidFormat before upload attempt. Integration test against Supabase local emulator to validate bucket permissions and URL format.
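Test case (3) above might look like the following sketch. The inline `ReceiptStorageAdapter` and `submitExpense` are local stand-ins so the example is self-contained; the real test would import the domain port and the submission service:

```dart
import 'package:flutter_test/flutter_test.dart';

// Local stand-in for the domain port, for illustration only.
abstract class ReceiptStorageAdapter {
  Future<String> uploadReceipt(List<int> bytes, String ext);
}

/// Mock that records calls and always fails with a network-style error.
class ThrowingAdapter implements ReceiptStorageAdapter {
  int calls = 0;
  @override
  Future<String> uploadReceipt(List<int> bytes, String ext) async {
    calls++;
    throw Exception('network down');
  }
}

void main() {
  test('upload failure returns an error and never reaches the DB', () async {
    final adapter = ThrowingAdapter();
    var dbCalled = false;

    // Hypothetical inline submission flow standing in for the service.
    Future<Object?> submitExpense() async {
      try {
        await adapter.uploadReceipt([1, 2, 3], '.jpg');
        dbCalled = true; // persistence would happen here
        return null;
      } catch (e) {
        return e; // the real service maps this to ReceiptUploadError
      }
    }

    final result = await submitExpense();
    expect(result, isNotNull);
    expect(dbCalled, isFalse);
    expect(adapter.calls, 1);
  });
}
```

Cases (1), (2), (4), and (5) follow the same shape with a recording mock that succeeds, or with oversized/misnamed inputs that must fail before `calls` increments.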
Mutual exclusion rules are stored in the expense type catalogue's exclusive_groups field. If the catalogue schema or group definitions differ between HLF and Blindeforbundet, the validation service must handle multiple group configurations without hardcoding organisation-specific logic.
Mitigation & Contingency
Mitigation: Design the validation service to be purely data-driven: it reads exclusive_groups from the cached catalogue and enforces whichever groups are defined, with no hardcoded organisation names. Write parameterised unit tests covering at least 4 different catalogue configurations to verify generality.
Contingency: If an organisation requires non-standard exclusion semantics (e.g. partial exclusion within a group), introduce an exclusion_type field to the catalogue schema and extend the service, treating it as a catalogue configuration change rather than a code fork.
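The data-driven check described in the mitigation reduces to a pure function over the cached catalogue, with no organisation names anywhere in code. Names and the map shape here are assumptions about how `exclusive_groups` is decoded:

```dart
/// Returns the names of exclusive groups from which the user has selected
/// more than one expense type. Purely data-driven: the same code serves
/// HLF, Blindeforbundet, or any future catalogue configuration.
List<String> findExclusionViolations(
  Set<String> selectedExpenseTypes,
  Map<String, Set<String>> exclusiveGroups, // decoded exclusive_groups field
) {
  final violations = <String>[];
  exclusiveGroups.forEach((group, members) {
    final picked = selectedExpenseTypes.intersection(members);
    if (picked.length > 1) violations.add(group);
  });
  return violations;
}
```

Because the function takes the group map as input, the parameterised tests in the mitigation are just calls with different catalogue configurations.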
The attestation service subscribes to Supabase Realtime for live queue updates. On mobile, Realtime WebSocket connections can be dropped during network transitions, causing the coordinator queue to become stale without the user being aware.
Mitigation & Contingency
Mitigation: Implement connection lifecycle management: reconnect on network-change events, show a 'reconnecting' indicator when the subscription is broken, and perform a full queue refresh on reconnect rather than relying solely on delta events.
Contingency: Add a manual pull-to-refresh gesture on the attestation queue screen as a guaranteed fallback. If Realtime proves unreliable in production, switch to periodic polling (30-second interval) as a degraded but functional mode.
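The reconnect policy in the mitigation can be sketched as a small coordinator. The three callbacks are assumptions about the surrounding app (channel resubscription, full queue fetch, and the 'reconnecting' indicator), not a real Supabase API:

```dart
/// Sketch of the reconnect policy: on regaining connectivity, resubscribe
/// first, then do a full refresh rather than trusting missed deltas.
class AttestationQueueSync {
  AttestationQueueSync({
    required this.resubscribe,   // re-open the Realtime channel (assumed)
    required this.refreshQueue,  // full fetch of the coordinator queue
    required this.onStateChange, // drives the 'reconnecting' indicator
  });

  final Future<void> Function() resubscribe;
  final Future<void> Function() refreshQueue;
  final void Function(bool reconnecting) onStateChange;

  /// Call from a connectivity listener on network-change events or when
  /// the WebSocket reports a drop.
  Future<void> handleNetworkChange({required bool online}) async {
    if (!online) {
      onStateChange(true); // queue may be stale: show 'reconnecting'
      return;
    }
    await resubscribe();
    await refreshQueue(); // full refresh closes any delta gap
    onStateChange(false);
  }
}
```

The same `refreshQueue` callback can back the pull-to-refresh contingency and the 30-second polling fallback, so the degraded mode adds no new data path.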
If a peer mentor submits a draft while offline and then submits the same claim again after connectivity is restored (thinking the first attempt failed), duplicate claims may be persisted in Supabase.
Mitigation & Contingency
Mitigation: Assign a client-generated idempotency key (UUID) to each draft at creation time. The submission service uses this key as the conflict target of a Supabase upsert, so a repeated submission of the same draft updates the existing row instead of inserting a duplicate. The draft is marked 'submitted' locally only after the first successful upload is acknowledged.
Contingency: Implement a server-side duplicate detection trigger on the expense_claims table that checks for an existing claim with the same (activity_id, claimant_id) created within the preceding 24 hours, and returns the existing record ID rather than inserting a duplicate.
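The client side of the mitigation can be sketched as below. The UUID generator is a dependency-free stand-in for package:uuid, and the `idempotency_key` column name plus the upsert conflict target are assumptions about the Supabase schema:

```dart
import 'dart:math';

/// Stand-in for package:uuid's v4 generator so the sketch needs no
/// dependencies; real code would use the uuid package.
String newIdempotencyKey({Random? rng}) {
  final r = rng ?? Random.secure();
  String hex(int n) =>
      List.generate(n, (_) => r.nextInt(16).toRadixString(16)).join();
  return '${hex(8)}-${hex(4)}-4${hex(3)}-${hex(4)}-${hex(12)}';
}

/// The draft carries its key from creation onward. Resubmitting the same
/// draft after connectivity returns reuses the same key, so an upsert on
/// the assumed idempotency_key column cannot insert a second row.
class ExpenseDraft {
  ExpenseDraft() : idempotencyKey = newIdempotencyKey();

  final String idempotencyKey;
  bool submitted = false; // set true only after the first confirmed upload
}
```

The submission call would then pass the key along, e.g. an upsert with `onConflict` pointed at the idempotency column, so the 'retry after restored connectivity' path and the happy path share one code route.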