Priority: critical · Complexity: medium · Domain: backend · Status: pending · Assignee: backend specialist · Tier 2

Acceptance Criteria

On Supabase insert failure, the service retries the insert up to 3 times with exponential backoff (1s, 2s, 4s) before emitting ReadReceiptError
Transient network errors (socket timeout, SocketException, ClientException) trigger retries; non-retryable errors (RLS 42501, auth error) fail immediately without retry
After 3 failed attempts, ReadReceiptError is emitted with a Norwegian message indicating the audit could not be recorded
Each failed attempt (all 3, not just the final one) is written to a local persistent store (SharedPreferences or SQLite queue) with contactId, fieldKey, revealed_at, and attempt count
A separate background mechanism (connectivity change listener or app-resume hook) reads the local queue and retries pending receipts — this mechanism may be scaffolded in this task and implemented fully as a follow-up
The service remains in ReadReceiptWriting state during all retry attempts — it does not oscillate between states during retries
Unit tests mock the Supabase client to fail N times then succeed, verifying the correct number of retry attempts
The retry mechanism is covered by a unit test that verifies exponential backoff delays using a fake clock or mocked timer
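The states referenced in the criteria can be sketched as a sealed hierarchy. Only ReadReceiptWriting and ReadReceiptError appear in the criteria verbatim; the other names here are illustrative placeholders:

```dart
/// Minimal sketch of the service states named in the acceptance criteria.
sealed class ReadReceiptState {
  const ReadReceiptState();
}

class ReadReceiptIdle extends ReadReceiptState {
  const ReadReceiptIdle();
}

/// Held for the entire retry sequence; per the criteria, the service must
/// not leave this state between attempts.
class ReadReceiptWriting extends ReadReceiptState {
  const ReadReceiptWriting();
}

class ReadReceiptConfirmed extends ReadReceiptState {
  const ReadReceiptConfirmed();
}

class ReadReceiptError extends ReadReceiptState {
  const ReadReceiptError(this.message);

  /// Norwegian, user-facing message stating the audit could not be recorded.
  final String message;
}
```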

Technical Requirements

Frameworks
Flutter
Riverpod
APIs
Supabase client.from('read_receipts').insert()
Connectivity Plus package for network state monitoring (if not already in pubspec.yaml)
SharedPreferences or local SQLite for the deferred retry queue
Data Models
ReadReceiptRecord
PendingReceiptQueueEntry (contactId, fieldKey, revealed_at, attemptCount)
Performance Requirements
Retry backoff must not block the UI thread — backoff delays use non-blocking async waits (e.g. awaited Future.delayed), never synchronous sleeps
Local queue write must complete within 100ms to not delay the error state emission
Security Requirements
Locally queued receipt records must not store the revealed field value — only the field key identifier
Local queue entries must be encrypted at rest if the device storage tier is unencrypted (use flutter_secure_storage for the queue if SharedPreferences is not encrypted on the target platform)
Queued entries must be deleted from local storage immediately after a successful deferred retry
Audit trail: log the timestamp of each retry attempt to the internal verbose logger for compliance investigation

Execution Context

Execution Tier
Tier 2 (518 tasks in this tier); can start after Tier 1 completes.

Implementation Notes

Implement retry with a generic helper: Future<T> withRetry<T>(Future<T> Function() fn, {int maxAttempts = 3}), using a for loop with await Future.delayed(Duration(seconds: pow(2, attempt).toInt())) between attempts. Catch only retryable exceptions (SocketException, TimeoutException, ClientException) inside the loop; rethrow non-retryable PostgrestException codes (42501, auth errors) immediately. For the local queue, define a PendingReceiptQueueEntry model serializable to JSON and store it under a dedicated SharedPreferences key as a JSON list (or in a lightweight SQLite table if the project already uses SQLite). For the deferred retry mechanism, listen to ConnectivityResult changes via connectivity_plus; when connectivity is restored and the app is in the foreground, drain the queue by replaying the inserts.
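A minimal sketch of that helper, assuming 3 total attempts and catching only the transient exception types named above (add an `on http.ClientException` clause alongside these if the project depends on package:http):

```dart
import 'dart:async';
import 'dart:io';
import 'dart:math';

/// Retries [fn] on transient network errors with exponential backoff.
/// Anything not caught below (e.g. a PostgrestException with RLS code
/// 42501) propagates immediately, because no catch clause matches it.
Future<T> withRetry<T>(
  Future<T> Function() fn, {
  int maxAttempts = 3,
}) async {
  for (var attempt = 0; ; attempt++) {
    try {
      return await fn();
    } on SocketException {
      if (attempt >= maxAttempts - 1) rethrow; // final attempt failed
    } on TimeoutException {
      if (attempt >= maxAttempts - 1) rethrow;
    }
    // Exponential backoff: 2^attempt seconds (1s, 2s, 4s, ...) before the
    // next try. The awaited Future.delayed never blocks the UI thread.
    await Future.delayed(Duration(seconds: pow(2, attempt).toInt()));
  }
}
```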

Use a lock/mutex (package:synchronized) to prevent concurrent queue drains. This entire mechanism is critical for GDPR Article 30 compliance — every field reveal must eventually be audited, even if the initial write fails.
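A sketch of the queue entry and the guarded drain, assuming a SharedPreferences JSON-list store and package:synchronized; the preferences key and the injected insertReceipt callback are illustrative names:

```dart
import 'dart:convert';

import 'package:shared_preferences/shared_preferences.dart';
import 'package:synchronized/synchronized.dart';

/// Queue entry from the notes above. Deliberately stores only the field
/// key identifier, never the revealed field value.
class PendingReceiptQueueEntry {
  const PendingReceiptQueueEntry({
    required this.contactId,
    required this.fieldKey,
    required this.revealedAt,
    required this.attemptCount,
  });

  final String contactId;
  final String fieldKey;
  final DateTime revealedAt;
  final int attemptCount;

  Map<String, dynamic> toJson() => {
        'contactId': contactId,
        'fieldKey': fieldKey,
        'revealed_at': revealedAt.toIso8601String(),
        'attemptCount': attemptCount,
      };

  factory PendingReceiptQueueEntry.fromJson(Map<String, dynamic> json) =>
      PendingReceiptQueueEntry(
        contactId: json['contactId'] as String,
        fieldKey: json['fieldKey'] as String,
        revealedAt: DateTime.parse(json['revealed_at'] as String),
        attemptCount: json['attemptCount'] as int,
      );
}

const _queueKey = 'pending_read_receipts'; // assumed preferences key

final _drainLock = Lock();

/// Drains the deferred queue, guarded so concurrent connectivity events
/// cannot trigger overlapping drains. [insertReceipt] stands in for the
/// real Supabase insert call.
Future<void> drainQueue(
  Future<void> Function(PendingReceiptQueueEntry entry) insertReceipt,
) async {
  await _drainLock.synchronized(() async {
    final prefs = await SharedPreferences.getInstance();
    final raw = prefs.getStringList(_queueKey) ?? const [];
    final remaining = <String>[];
    for (final item in raw) {
      final entry = PendingReceiptQueueEntry.fromJson(
          jsonDecode(item) as Map<String, dynamic>);
      try {
        await insertReceipt(entry);
        // Success: the entry is omitted from the rewritten list, which
        // deletes it from local storage as the security requirements demand.
      } catch (_) {
        remaining.add(item); // keep for the next drain
      }
    }
    await prefs.setStringList(_queueKey, remaining);
  });
}
```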

Testing Requirements

Write unit tests covering: (1) first attempt succeeds — no retry, state reaches Confirmed, (2) first two attempts fail (SocketException), third succeeds — state reaches Confirmed, (3) all three attempts fail — state reaches ReadReceiptError with Norwegian message, (4) non-retryable RLS error fails immediately without retry (verify mock called exactly once), (5) each failure is written to the local queue, (6) after a successful deferred retry the queue entry is deleted. Use a fake async timer (package:fake_async or clock injection) to verify exponential backoff delays without real waiting. Write a separate integration test that simulates connectivity restoration and verifies the queue drainer picks up and replays the pending record.
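The backoff-timing test can be sketched with package:fake_async, assuming the withRetry helper from the implementation notes:

```dart
import 'dart:async';
import 'dart:io';

import 'package:fake_async/fake_async.dart';
import 'package:test/test.dart';

void main() {
  test('retries with 1s then 2s backoff and succeeds on the third attempt',
      () {
    fakeAsync((async) {
      var calls = 0;
      String? result;

      // withRetry is the helper from the implementation notes.
      withRetry(() async {
        calls++;
        if (calls < 3) throw const SocketException('transient');
        return 'ok';
      }).then((value) => result = value);

      async.flushMicrotasks();
      expect(calls, 1); // first attempt failed; now waiting 1s

      async.elapse(const Duration(seconds: 1));
      expect(calls, 2); // second attempt failed; now waiting 2s

      async.elapse(const Duration(seconds: 2));
      expect(calls, 3);
      expect(result, 'ok');
    });
  });
}
```

No real time passes: elapse advances the fake clock, so the 1s and 2s delays are verified instantly.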

Component
Read Receipt Service
Type: service · Complexity: medium
Epic Risks (2)
Risk 1 (technical) · Impact: medium · Probability: medium

Parallel fetching of profile, activity history, and assignment status from contact-detail-service may produce race conditions where partial state is emitted to the UI before all fetches complete, resulting in flickering or incorrect loading indicators.

Mitigation & Contingency

Mitigation: Use Future.wait or a single composed BLoC event that only emits a loaded state once all three futures resolve. Define a strict state machine: initial → loading → loaded/error with no intermediate partial-loaded states emitted to the UI.
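The composed load can be sketched as follows; the service, state, and method names are illustrative, and the Dart 3 record `.wait` (from dart:async) stands in for Future.wait to keep the three results typed:

```dart
/// One loading emission, then exactly one loaded (or error) emission once
/// all three parallel fetches resolve — no partial-loaded states.
Future<void> loadContactDetail(String contactId) async {
  emit(const ContactDetailLoading());
  try {
    // The record `.wait` runs the three fetches in parallel and completes
    // only when all of them have resolved (or throws if any fails).
    final (profile, history, assignment) = await (
      profileService.fetch(contactId),
      activityService.fetchHistory(contactId),
      assignmentService.fetchStatus(contactId),
    ).wait;
    emit(ContactDetailLoaded(profile, history, assignment));
  } catch (e) {
    emit(ContactDetailError(e));
  }
}
```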

Contingency: If parallelism proves unreliable in testing, fall back to sequential fetching with a combined loading indicator. The 500ms target may need to be renegotiated with stakeholders if sequential fetching exceeds it on slow connections.

Risk 2 (integration) · Impact: high · Probability: low

The partial-field update pattern in contact-edit-service assumes the contact record has not changed between when the edit screen was loaded and when the save is submitted. Concurrent edits by another coordinator could cause the later save to silently overwrite the earlier coordinator's changes (a classic lost update).

Mitigation & Contingency

Mitigation: Include an updated_at timestamp in the PATCH request and configure Supabase to reject updates where the server-side timestamp differs from the client's version. Return a 409-equivalent error that the service maps to a user-readable conflict message.
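The client side of that optimistic lock can be sketched with the Supabase Dart client; the table name, column names, and the conflict exception are assumptions:

```dart
// The extra .eq on updated_at makes the UPDATE match zero rows if another
// coordinator saved since this record was loaded.
final rows = await supabase
    .from('contacts')
    .update(patch)
    .eq('id', contactId)
    .eq('updated_at', loadedUpdatedAt.toIso8601String())
    .select();

if (rows.isEmpty) {
  // 409-equivalent: the record changed since the edit screen was loaded.
  // Map this to a user-readable conflict message in the service layer.
  throw const ContactConflictException();
}
```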

Contingency: If optimistic locking is too complex for initial delivery, implement a simple 'reload and retry' flow: on save error, reload the contact detail and prompt the coordinator to re-apply their changes manually.