Implement RealtimeApprovalSubscription channel management
epic-expense-approval-workflow-foundation-task-008 — Implement the RealtimeApprovalSubscription class wrapping Supabase Realtime channels for the expense_claim_status table. Manage the full channel lifecycle: connect(), reconnect() with exponential backoff, dispose(). Expose a Stream<ClaimStatusEvent> for downstream BLoC consumption. Centralise this so all screens subscribe through one wrapper rather than duplicating channel setup.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
Use the Supabase Flutter SDK's RealtimeChannel with .onPostgresChanges() filtering on schema='public', table='expense_claim_status', event=PostgresChangeEvent.update. Expose the stream as a broadcast stream so multiple BLoC instances can subscribe without each owning a channel. Implement the exponential backoff using a recursive async function with await Future.delayed() — avoid Timer-based approaches that complicate testing. Use a StreamController<ClaimStatusEvent>.broadcast() internally so listeners can attach and detach independently.
Place the class in lib/infrastructure/realtime/ to clearly indicate it is an infrastructure adapter. Document that the stream never completes normally — consumers must cancel their subscription explicitly.
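A minimal sketch of the wrapper described above, assuming supabase_flutter v2. ClaimStatusEvent is the event type named in this task; its fromJson constructor, the channel name, and the backoff cap of 30 seconds are illustrative assumptions, not confirmed decisions.

```dart
// lib/infrastructure/realtime/realtime_approval_subscription.dart
// Sketch only: ClaimStatusEvent.fromJson, the channel name, and the 30 s
// backoff cap are assumptions for illustration.
import 'dart:async';
import 'package:supabase_flutter/supabase_flutter.dart';

class RealtimeApprovalSubscription {
  RealtimeApprovalSubscription(this._client);

  final SupabaseClient _client;
  final _controller = StreamController<ClaimStatusEvent>.broadcast();
  RealtimeChannel? _channel;
  bool _disposed = false;
  int _attempt = 0;

  /// Never completes normally; consumers must cancel their subscription.
  Stream<ClaimStatusEvent> get events => _controller.stream;

  void connect() {
    if (_disposed || _channel != null) return; // idempotent: no duplicate channels
    _channel = _client
        .channel('expense_claim_status_changes')
        .onPostgresChanges(
          event: PostgresChangeEvent.update,
          schema: 'public',
          table: 'expense_claim_status',
          callback: (payload) {
            // Events after dispose() are silently dropped.
            if (!_disposed) {
              _controller.add(ClaimStatusEvent.fromJson(payload.newRecord));
            }
          },
        )
        .subscribe((status, [error]) {
          if (status == RealtimeSubscribeStatus.subscribed) {
            _attempt = 0; // reset backoff after a successful (re)subscribe
          } else if (status == RealtimeSubscribeStatus.channelError ||
              status == RealtimeSubscribeStatus.timedOut) {
            reconnect();
          }
        });
  }

  /// Exponential backoff via await Future.delayed(): 1s, 2s, 4s, ... capped.
  Future<void> reconnect() async {
    if (_disposed) return;
    final delay = Duration(seconds: (1 << _attempt).clamp(1, 30));
    _attempt++;
    await Future.delayed(delay);
    if (_disposed) return;
    await _channel?.unsubscribe();
    _channel = null;
    connect();
  }

  Future<void> dispose() async {
    _disposed = true;
    await _channel?.unsubscribe();
    _channel = null;
    await _controller.close();
  }
}
```

Because reconnect() awaits Future.delayed() rather than arming a Timer, a fake-async test can step virtual time forward and assert the exact delay sequence.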
Testing Requirements
Unit tests using flutter_test with a mock RealtimeChannel. Test cases: (1) connect() transitions to SUBSCRIBED state and emits first event, (2) simulated disconnect triggers reconnect() with correct backoff intervals verified by fake async timer, (3) dispose() closes stream and subsequent events are silently dropped, (4) duplicate connect() calls do not create additional channels, (5) ClaimStatusEvent fields correctly mapped from Realtime payload JSON. Integration test (optional, requires Supabase test instance): verify cross-org isolation by subscribing with org-A credentials and confirming org-B updates do not appear.
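A sketch of the backoff-interval test (case 2 above). It assumes the delay schedule is extracted into a pure helper so it can be asserted without a real channel; backoffDelay is a hypothetical name.

```dart
// test/infrastructure/realtime_approval_subscription_test.dart
import 'package:flutter_test/flutter_test.dart';

// Hypothetical pure helper extracted from RealtimeApprovalSubscription so
// the schedule is testable in isolation.
Duration backoffDelay(int attempt) =>
    Duration(seconds: (1 << attempt).clamp(1, 30));

void main() {
  test('backoff doubles per attempt and caps at 30 seconds', () {
    expect(backoffDelay(0), const Duration(seconds: 1));
    expect(backoffDelay(1), const Duration(seconds: 2));
    expect(backoffDelay(2), const Duration(seconds: 4));
    expect(backoffDelay(10), const Duration(seconds: 30));
  });

  test('events after dispose are silently dropped', () async {
    // With a mock RealtimeChannel injected: connect(), await dispose(),
    // fire the payload callback, then assert the broadcast stream
    // emitted nothing (test case 3 above).
  });
}
```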
Optimistic locking in ExpenseClaimStatusRepository may produce excessive concurrency exceptions in high-volume sessions where multiple coordinators process the same queue simultaneously, surfacing confusing UI errors and frustrating coordinators.
Mitigation & Contingency
Mitigation: Design the locking strategy with a short retry window (1-2 automatic retries with a 200 ms backoff) before surfacing the error to the UI. Document the concurrency model clearly so the UI layer can display a contextual 'claim was already actioned' message rather than a generic error.
Contingency: If contention remains high under load testing, switch to a last-writer-wins update with a conflict notification rather than a hard block, and log all concurrent edits for audit purposes.
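The retry window described in the mitigation could be a small wrapper like the following sketch. ConcurrencyException and the helper name are hypothetical; the repository would choose its own exception type.

```dart
// Sketch: retry an optimistically-locked update a limited number of times
// before letting the error reach the UI layer.
Future<T> withOptimisticRetry<T>(
  Future<T> Function() action, {
  int maxRetries = 2,
  Duration backoff = const Duration(milliseconds: 200),
}) async {
  for (var attempt = 0; ; attempt++) {
    try {
      return await action();
    } on ConcurrencyException {
      // After the last retry, rethrow so the UI can show a contextual
      // "claim was already actioned" message.
      if (attempt >= maxRetries) rethrow;
      await Future.delayed(backoff * (attempt + 1));
    }
  }
}
```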
FCM device tokens stored for peer mentors may be stale (app reinstalled, token rotated), causing push notifications for claim status changes to fail silently and leaving submitters unaware that their claim was approved or rejected.
Mitigation & Contingency
Mitigation: Implement token refresh on every app launch and store updated tokens in Supabase. ApprovalNotificationService should fall back to in-app Realtime delivery when FCM returns an invalid-token error and should queue a token refresh request.
Contingency: If FCM delivery rates fall below acceptable thresholds in production monitoring, add a polling fallback in the peer mentor claim list screen that checks status on foreground resume.
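The launch-time token refresh in the mitigation could look like this sketch, using firebase_messaging's getToken() and onTokenRefresh. The device_tokens table name and its columns are assumptions, not an agreed schema.

```dart
// Sketch: refresh and persist the FCM token on every app launch, and keep it
// current if FCM rotates it mid-session. Table/column names are illustrative.
import 'package:firebase_messaging/firebase_messaging.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

Future<void> syncFcmToken() async {
  final supabase = Supabase.instance.client;
  final userId = supabase.auth.currentUser?.id;
  if (userId == null) return; // not signed in yet

  Future<void> store(String token) => supabase.from('device_tokens').upsert({
        'user_id': userId,
        'token': token,
        'updated_at': DateTime.now().toUtc().toIso8601String(),
      });

  final token = await FirebaseMessaging.instance.getToken();
  if (token != null) await store(token);

  // FCM can rotate tokens while the app is running.
  FirebaseMessaging.instance.onTokenRefresh.listen(store);
}
```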
Supabase Realtime has per-project channel and connection limits. If many coordinators and peer mentors are simultaneously subscribed across multiple screens, the project may hit quota limits causing subscription failures.
Mitigation & Contingency
Mitigation: Design RealtimeApprovalSubscription to use a single shared channel per user session rather than per-screen subscriptions. Implement subscription reference counting so channels are only opened once and reused across screens.
Contingency: Upgrade the Supabase plan tier if limits are reached, and implement graceful degradation to polling with a 30-second interval when Realtime is unavailable.
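The reference counting in the mitigation could be a thin holder around the wrapper from this task: the channel is opened on the first acquire and disposed when the last screen releases it. Class and method names here are illustrative.

```dart
// Sketch: one shared RealtimeApprovalSubscription per user session, reused
// across screens via reference counting instead of per-screen channels.
class SharedApprovalSubscription {
  SharedApprovalSubscription(this._create);

  final RealtimeApprovalSubscription Function() _create;
  RealtimeApprovalSubscription? _shared;
  int _refCount = 0;

  /// First caller opens the channel; later callers reuse it.
  RealtimeApprovalSubscription acquire() {
    _refCount++;
    return _shared ??= _create()..connect();
  }

  /// Last caller out closes the channel, freeing Realtime quota.
  void release() {
    if (_refCount == 0) return;
    if (--_refCount == 0) {
      _shared?.dispose();
      _shared = null;
    }
  }
}
```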