Priority: high · Complexity: high · Area: backend · Status: pending · Assignee: backend specialist · Tier 5

Acceptance Criteria

When connectivity is unavailable, calling submit() saves the draft locally and returns SubmissionOutcome.queuedOffline
Offline drafts are persisted to a local SQLite or Hive store so they survive app restarts
Each offline draft is assigned a locally generated optimistic UUID
On reconnection detection, the sync queue is flushed automatically without user action
Before submitting each queued draft, it is re-validated using ExpenseValidationService; drafts that fail re-validation are moved to a 'validation_failed' state
Successfully synced drafts are removed from the local queue and a server-assigned ID replaces the optimistic ID
The service exposes a `Stream<List<DraftSyncStatus>>` that emits updates for all drafts (pending / syncing / synced / failed)
Concurrent sync is limited to 1 draft at a time to avoid hammering the API on reconnect
A draft in 'syncing' state is not retried by a second flush trigger (idempotent flush)
Draft receipt images are stored as local file paths, not base64-encoded blobs, to limit memory pressure
UI can distinguish 'no connectivity at submission time' (queued_offline) from 'submitted but server error' (submission_failed)
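The queuing behaviour and the queued_offline vs submission_failed distinction above can be sketched as follows. This is a sketch only: `OfflineDraftStore`, `ExpenseRepository`, and the `_isOnline` callback are assumed interfaces, and the `SubmissionOutcome` values mirror the acceptance criteria.

```dart
import 'dart:async';

enum SubmissionOutcome { submitted, queuedOffline, submissionFailed }

class ExpenseDraft {}

// Assumed interfaces; the real persistence and repository layers are
// specified elsewhere in this task.
abstract class OfflineDraftStore {
  Future<void> save(String optimisticId, ExpenseDraft draft);
}

abstract class ExpenseRepository {
  Future<String> submit(ExpenseDraft draft); // returns server-assigned ID
}

class ExpenseSubmissionService {
  ExpenseSubmissionService(this._store, this._repository, this._isOnline);

  final OfflineDraftStore _store;
  final ExpenseRepository _repository;
  final Future<bool> Function() _isOnline;

  Future<SubmissionOutcome> submit(
      ExpenseDraft draft, String optimisticId) async {
    if (!await _isOnline()) {
      // No connectivity at submission time: persist locally and queue.
      await _store.save(optimisticId, draft);
      return SubmissionOutcome.queuedOffline;
    }
    try {
      await _repository.submit(draft);
      return SubmissionOutcome.submitted;
    } catch (_) {
      // Online but the server rejected or errored: a distinct outcome,
      // so the UI can tell the two failure modes apart.
      return SubmissionOutcome.submissionFailed;
    }
  }
}
```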

Technical Requirements

Frameworks
Flutter
Riverpod
Dart

APIs
connectivity_plus (connectivity detection)
Supabase (via ExpenseRepository for sync)

Data Models
ExpenseDraft
OfflineDraftRecord
DraftSyncStatus
SyncQueue

Performance Requirements
Local draft save must complete in under 100 ms (synchronous or near-synchronous local write)
Sync queue flush must not block the UI thread — run in a background Isolate or async compute
Stream must not emit more than one update per 500 ms to prevent UI jank (debounce)

Security Requirements
Locally stored drafts must be encrypted at rest using flutter_secure_storage or equivalent
Receipt image files stored locally must be saved in the app's private documents directory, not accessible to other apps
On user logout, all local drafts and receipt files must be deleted

Execution Context

Execution Tier
Tier 5 (253 tasks)

Can start after Tier 4 completes.

Implementation Notes

Use `connectivity_plus` for network state detection. Persist drafts using `sqflite` (preferred for structured data) or `hive` (simpler but less queryable). The SyncQueue should be a Riverpod StateNotifier or AsyncNotifier that owns the queue state and exposes the status stream. On app start, load any persisted drafts into the queue and listen for connectivity changes.
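A minimal sketch of the queue owner described above, assuming Riverpod 2.x's `Notifier` API and connectivity_plus ≥ 6 (whose `onConnectivityChanged` emits a `List<ConnectivityResult>`); `OfflineDraftRecord` is a placeholder for the real data model, and loading from sqflite/hive is left as a stub:

```dart
import 'dart:async';
import 'package:connectivity_plus/connectivity_plus.dart';
import 'package:riverpod/riverpod.dart';

class OfflineDraftRecord {} // placeholder: draft payload, optimistic ID, status

class SyncQueue extends Notifier<List<OfflineDraftRecord>> {
  StreamSubscription<List<ConnectivityResult>>? _sub;

  @override
  List<OfflineDraftRecord> build() {
    // Flush automatically whenever connectivity returns (no user action).
    _sub = Connectivity().onConnectivityChanged.listen((results) {
      if (!results.contains(ConnectivityResult.none)) flush();
    });
    ref.onDispose(() => _sub?.cancel());
    // TODO: load persisted drafts from sqflite/hive into the queue on start.
    return const [];
  }

  Future<void> flush() async {
    // Sequential, idempotent sync; see the acceptance criteria.
  }
}

final syncQueueProvider =
    NotifierProvider<SyncQueue, List<OfflineDraftRecord>>(SyncQueue.new);
```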

Use a `StreamController<List<DraftSyncStatus>>` internally, debounced with RxDart or a manual timer. The optimistic UUID must be an RFC 4122 v4 UUID (use the `uuid` package). When the server returns the real ID after sync, update any local references (e.g. cached data, navigation state).
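One way to satisfy the 500 ms debounce requirement with a manual timer rather than RxDart (a sketch; `DraftSyncStatus` is a placeholder, and the broadcaster name is illustrative):

```dart
import 'dart:async';
import 'package:uuid/uuid.dart';

class DraftSyncStatus {} // placeholder for the real status model

class DraftStatusBroadcaster {
  final _controller = StreamController<List<DraftSyncStatus>>.broadcast();
  Timer? _debounce;
  List<DraftSyncStatus> _latest = const [];

  Stream<List<DraftSyncStatus>> get statuses => _controller.stream;

  void push(List<DraftSyncStatus> snapshot) {
    _latest = snapshot;
    // Coalesce bursts: at most one emission per 500 ms, always with the
    // most recent snapshot.
    _debounce ??= Timer(const Duration(milliseconds: 500), () {
      _debounce = null;
      _controller.add(_latest);
    });
  }
}

// RFC 4122 v4 optimistic ID, per the acceptance criteria.
final optimisticId = const Uuid().v4();
```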

For encryption of local drafts, use `flutter_secure_storage` for the encryption key and AES-256 for the draft payload. This is especially important given HLF's sensitive financial data requirements.
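A sketch of that key-plus-payload split, using the `encrypt` package as one option for the AES-256 step; the storage key name and function are illustrative, and a production version would also need authenticated encryption (e.g. GCM) and error handling:

```dart
import 'dart:convert';
import 'package:encrypt/encrypt.dart';
import 'package:flutter_secure_storage/flutter_secure_storage.dart';

Future<String> encryptDraftJson(String draftJson) async {
  const storage = FlutterSecureStorage();
  // Key material lives in the platform keystore via flutter_secure_storage;
  // only the ciphertext goes into sqflite/hive.
  var keyB64 = await storage.read(key: 'draft_aes_key');
  if (keyB64 == null) {
    keyB64 = Key.fromSecureRandom(32).base64; // 256-bit key, generated once
    await storage.write(key: 'draft_aes_key', value: keyB64);
  }
  final key = Key.fromBase64(keyB64);
  final iv = IV.fromSecureRandom(16); // fresh IV per draft
  final encrypter = Encrypter(AES(key, mode: AESMode.cbc));
  final cipher = encrypter.encrypt(draftJson, iv: iv);
  // Persist the IV alongside the ciphertext so the draft can be decrypted.
  return jsonEncode({'iv': iv.base64, 'data': cipher.base64});
}
```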

Testing Requirements

Unit tests using flutter_test:
(1) submit() with mocked connectivity = offline → draft saved locally, queuedOffline returned
(2) flush() with one queued draft, validation passes → draft submitted, removed from queue
(3) flush() with draft that fails re-validation → draft moved to validation_failed, not submitted
(4) flush() with two drafts → only one concurrent submission (sequential)
(5) connectivity stream emits online event → flush() is triggered
(6) draft stream emits correct state transitions: pending → syncing → synced

Integration test: simulate app restart with queued draft, verify draft is loaded from local storage and submitted on next online event. Use fake connectivity stream for deterministic tests.
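Test (1) might look like the following sketch; the service, store, and fake are all minimal stand-ins defined inline so the test is deterministic, not the real implementations:

```dart
import 'package:flutter_test/flutter_test.dart';

enum SubmissionOutcome { submitted, queuedOffline }

class ExpenseDraft {}

// In-memory fake standing in for the sqflite/hive-backed store.
class FakeDraftStore {
  final saved = <ExpenseDraft>[];
  Future<void> save(ExpenseDraft d) async => saved.add(d);
}

class ExpenseSubmissionService {
  ExpenseSubmissionService({required this.store, required this.isOnline});
  final FakeDraftStore store;
  final Future<bool> Function() isOnline;

  Future<SubmissionOutcome> submit(ExpenseDraft draft) async {
    if (!await isOnline()) {
      await store.save(draft);
      return SubmissionOutcome.queuedOffline;
    }
    return SubmissionOutcome.submitted;
  }
}

void main() {
  test('submit() while offline queues the draft locally', () async {
    final store = FakeDraftStore();
    final service = ExpenseSubmissionService(
      store: store,
      isOnline: () async => false, // deterministic fake connectivity
    );
    expect(await service.submit(ExpenseDraft()),
        SubmissionOutcome.queuedOffline);
    expect(store.saved, hasLength(1));
  });
}
```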

Component
Expense Submission Service (service, high priority)
Epic Risks (3)

Risk 1: scope (high impact, medium probability)

Mutual exclusion rules are stored in the expense type catalogue's exclusive_groups field. If the catalogue schema or group definitions differ between HLF and Blindeforbundet, the validation service must handle multiple group configurations without hardcoding organisation-specific logic.

Mitigation & Contingency

Mitigation: Design the validation service to be purely data-driven: it reads exclusive_groups from the cached catalogue and enforces whichever groups are defined, with no hardcoded organisation names. Write parameterised unit tests covering at least 4 different catalogue configurations to verify generality.

Contingency: If an organisation requires non-standard exclusion semantics (e.g. partial exclusion within a group), introduce an exclusion_type field to the catalogue schema and extend the service, treating it as a catalogue configuration change rather than a code fork.
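The data-driven check the mitigation describes can be sketched as a pure function over the cached catalogue; the types here are illustrative, not the real `ExpenseValidationService` API:

```dart
class ValidationFailure {
  ValidationFailure(this.group, this.conflicting);
  final String group;
  final Set<String> conflicting;
}

/// Returns one failure for every exclusive group from which the claim
/// selects more than one expense type. The groups come entirely from the
/// catalogue's exclusive_groups data; no organisation names are hardcoded.
List<ValidationFailure> checkExclusiveGroups(
  Map<String, Set<String>> exclusiveGroups, // group name -> expense type IDs
  Set<String> selectedTypes,
) {
  final failures = <ValidationFailure>[];
  exclusiveGroups.forEach((group, members) {
    final chosen = selectedTypes.intersection(members);
    if (chosen.length > 1) failures.add(ValidationFailure(group, chosen));
  });
  return failures;
}
```

Because the function takes the group configuration as input, the parameterised tests the mitigation calls for reduce to calling it with different catalogue maps.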

Risk 2: technical (medium impact, high probability)

The attestation service subscribes to Supabase Realtime for live queue updates. On mobile, Realtime WebSocket connections can be dropped during network transitions, causing the coordinator queue to become stale without the user being aware.

Mitigation & Contingency

Mitigation: Implement connection lifecycle management: reconnect on network-change events, show a 'reconnecting' indicator when the subscription is broken, and perform a full queue refresh on reconnect rather than relying solely on delta events.

Contingency: Add a manual pull-to-refresh gesture on the attestation queue screen as a guaranteed fallback. If Realtime proves unreliable in production, switch to periodic polling (30-second interval) as a degraded but functional mode.

Risk 3: integration (medium impact, medium probability)

If a peer mentor submits a draft while offline and then submits the same claim again after connectivity is restored (thinking the first attempt failed), duplicate claims may be persisted in Supabase.

Mitigation & Contingency

Mitigation: Assign a client-generated idempotency key (UUID) to each draft at creation time. The submission service sends this key as an upsert key to Supabase, preventing duplicate inserts. The draft is marked 'submitted' locally after first successful upload.
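A sketch of that idempotent upsert with supabase_flutter; `client_key` is an assumed unique column on expense_claims holding the client-generated UUID (the column name and function are illustrative):

```dart
import 'package:supabase_flutter/supabase_flutter.dart';
import 'package:uuid/uuid.dart';

Future<String> submitOnce(
  SupabaseClient supabase,
  Map<String, dynamic> claim, {
  String? idempotencyKey,
}) async {
  // The key is generated once at draft creation and reused on every retry.
  final key = idempotencyKey ?? const Uuid().v4();
  final row = await supabase
      .from('expense_claims')
      // Conflict on the idempotency key: a retry updates the existing row
      // instead of inserting a duplicate claim.
      .upsert({...claim, 'client_key': key}, onConflict: 'client_key')
      .select('id')
      .single();
  return row['id'] as String; // server-assigned ID, same row on retry
}
```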

Contingency: Implement a server-side duplicate detection trigger on the expense_claims table checking (activity_id, claimant_id, created_date) within a 24-hour window and returning the existing record ID rather than inserting a duplicate.