Implement Bulk Approval Processor Service
epic-expense-approval-workflow-coordinator-ui-task-011 — Develop the BulkApprovalProcessor service that accepts a list of selected claim IDs and a bulk action (approve or reject) and processes them in batched transactions. It delegates per-claim logic to ApprovalWorkflowService, aggregates success and failure counts, emits bulk completion events, and handles partial failures gracefully without rolling back successful items.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 4 - 323 tasks
Can start after Tier 3 completes
Implementation Notes
Use a StreamController to expose bulk completion events as a Stream that consumers can subscribe to.
Use a CancellationToken pattern (a bool flag checked before each batch) for the cancel() functionality. The bulk_approval_events write should happen in a finally block after the loop so it is always recorded even on partial completion. Inject batch size as a constructor parameter with a default of 10 so tests can use batchSize=2 for predictable batching.
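The notes above can be sketched as follows. This is a minimal illustration, not the final implementation: ApprovalWorkflowService's real signature comes from the foundation epic and may differ, and BulkApprovalResult is a hypothetical result type introduced here for the example.

```dart
import 'dart:async';
import 'dart:math' as math;

/// Assumed per-claim interface from the foundation epic; the real
/// ApprovalWorkflowService signature may differ.
abstract class ApprovalWorkflowService {
  Future<void> processClaim(String claimId, String action);
}

/// Hypothetical aggregate result type (name not from the ticket).
class BulkApprovalResult {
  int succeeded = 0;
  final Map<String, Object> failures = {}; // claimId -> error
  bool cancelled = false;
}

class BulkApprovalProcessor {
  BulkApprovalProcessor(this._workflow, {this.batchSize = 10});

  final ApprovalWorkflowService _workflow;
  final int batchSize; // injectable so tests can use batchSize: 2
  final _events = StreamController<BulkApprovalResult>.broadcast();
  bool _cancelRequested = false;

  Stream<BulkApprovalResult> get events => _events.stream;

  /// CancellationToken pattern: a bool flag checked before each batch.
  void cancel() => _cancelRequested = true;

  Future<BulkApprovalResult> process(
      List<String> claimIds, String action) async {
    if (claimIds.isEmpty) {
      throw ArgumentError('claimIds must not be empty');
    }
    final ids = claimIds.toSet().toList(); // deduplicate, keep first-seen order
    final result = BulkApprovalResult();
    try {
      for (var i = 0; i < ids.length; i += batchSize) {
        if (_cancelRequested) {
          result.cancelled = true;
          break; // stop before starting the next batch
        }
        final batch = ids.sublist(i, math.min(i + batchSize, ids.length));
        for (final id in batch) {
          try {
            await _workflow.processClaim(id, action);
            result.succeeded += 1;
          } catch (e) {
            result.failures[id] = e; // record; successes are not rolled back
          }
        }
      }
    } finally {
      // The bulk_approval_events write lives here so it always happens,
      // even on partial completion or cancellation.
      _events.add(result);
    }
    return result;
  }
}
```

In tests, the constructor-injected batchSize makes batching deterministic (e.g. batchSize: 2), and the finally block guarantees a completion event even when individual claims throw.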
Testing Requirements
Unit tests (flutter_test): mock ApprovalWorkflowService. Test happy path: all claims succeed, result has correct counts. Test partial failure: one claim throws, others succeed, failure is recorded in result. Test empty list throws validation exception.
Test duplicate IDs are deduplicated. Test cancel() stops processing after the current batch completes. Test the Stream emits a bulk completion event with the aggregated result.
Test batch boundary: submit 25 claims with batch size 10 and verify 3 batches are executed. Target >= 85% branch coverage.
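The batch-boundary arithmetic the test asserts can be shown with a small illustrative helper (splitIntoBatches is a name invented for this sketch, mirroring the processor's loop): N claims with batch size B yield ceil(N / B) batches.

```dart
import 'dart:math' as math;

/// Illustrative helper mirroring the batching loop: splits a list into
/// contiguous batches of at most [batchSize] items.
List<List<T>> splitIntoBatches<T>(List<T> items, int batchSize) => [
      for (var i = 0; i < items.length; i += batchSize)
        items.sublist(i, math.min(i + batchSize, items.length)),
    ];
```

With 25 claims and batch size 10 this produces 3 batches of sizes 10, 10, and 5, which is exactly what the batch-boundary test verifies.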
Maintaining multi-select state across paginated list pages is architecturally complex in Flutter with Riverpod/BLoC. If the selection state is stored in the widget tree rather than in the state layer, page transitions and list redraws can silently clear selections, forcing coordinators to redo their multi-select from scratch.
Mitigation & Contingency
Mitigation: Store the selected claim ID set in a dedicated Riverpod StateNotifier outside the paginated list widget tree. The paginated list reads selection state from this provider and does not own it. Selection persists independently of list scroll position or page loads.
Contingency: If cross-page selection proves prohibitively complex, limit bulk selection to the currently visible page (add a clear warning in the UI) and prioritise single-page bulk approval for the initial release.
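The mitigation's core pattern can be sketched without the Riverpod dependency. In the app this would be a Riverpod StateNotifier&lt;Set&lt;String&gt;&gt; registered as a provider that the paginated list watches; the class and method names below are illustrative, not from the codebase. The point is that the selection set lives outside the widget tree and is updated immutably.

```dart
/// Dependency-free sketch of the selection-state pattern. In production this
/// role is played by a Riverpod StateNotifier<Set<String>> provider.
class SelectedClaimsNotifier {
  Set<String> _state = const {};
  final List<void Function(Set<String>)> _listeners = [];

  Set<String> get state => _state;

  void _emit(Set<String> next) {
    _state = Set.unmodifiable(next);
    for (final listener in _listeners) {
      listener(_state);
    }
  }

  /// Toggle a claim in or out of the selection. Immutable updates mean the
  /// list widget can rebuild from provider state after any page load without
  /// ever owning (or losing) the selection itself.
  void toggle(String claimId) {
    final next = Set<String>.from(_state);
    if (!next.remove(claimId)) {
      next.add(claimId);
    }
    _emit(next);
  }

  void clear() => _emit(const {});

  void addListener(void Function(Set<String>) listener) =>
      _listeners.add(listener);
}
```

Because the notifier is created outside the list widget, scrolling, page loads, and redraws rebuild the list from this state rather than resetting it.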
If a coordinator has the queue open while another coordinator approves claims from the same queue (possible in large organisations with shared chapter coverage), a Realtime update may arrive out of order or be missed during a reconnect. The first coordinator's view then goes stale, allowing them to attempt to approve an already-actioned claim.
Mitigation & Contingency
Mitigation: The ApprovalWorkflowService's optimistic locking (from the foundation epic) will catch the concurrent edit at the database level. The CoordinatorReviewQueueScreen should handle the resulting ConcurrencyException by removing the claim from the local list and showing a brief snackbar: 'This claim was already actioned by another coordinator.'
Contingency: Add a queue staleness indicator (a subtle 'last updated X seconds ago' label) and a manual refresh button as a fallback for coordinators who notice inconsistencies.
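The mitigation's error path can be sketched as a pure handler. ConcurrencyException here is a hypothetical name for whatever the optimistic-locking layer actually throws, and handleApprovalError is an illustrative function; the real screen would wire this into its state management and ScaffoldMessenger.

```dart
/// Hypothetical exception surfaced by ApprovalWorkflowService's optimistic
/// locking; the real type name and shape may differ.
class ConcurrencyException implements Exception {
  ConcurrencyException(this.claimId);
  final String claimId;
}

/// Sketch of the CoordinatorReviewQueueScreen error handler: drop the stale
/// claim from the local queue and return the snackbar text to display (the
/// widget layer would pass this to ScaffoldMessenger.showSnackBar).
String? handleApprovalError(List<String> localQueue, Object error) {
  if (error is ConcurrencyException) {
    localQueue.remove(error.claimId);
    return 'This claim was already actioned by another coordinator.';
  }
  return null; // other errors go through the generic error path
}
```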
The end-to-end test requirement that a peer mentor receives a push notification within 30 seconds of coordinator approval depends on FCM delivery latency, which is outside the application's control and can vary significantly in CI/CD environments.
Mitigation & Contingency
Mitigation: Structure end-to-end tests to verify notification intent (correct FCM payload dispatched, correct Realtime event emitted) rather than actual device delivery timing. Use test doubles for FCM delivery in automated tests and reserve real-device delivery tests for manual pre-release validation.
Contingency: If notification timing requirements must be validated in automation, instrument the ApprovalNotificationService with a test hook that records dispatch timestamps and assert against those rather than actual FCM callbacks.
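The contingency's test hook could look like the sketch below. DispatchRecorder and its members are illustrative names: the idea is that ApprovalNotificationService calls record() at the moment a payload is handed to the FCM transport, so automated tests assert on dispatch latency rather than on device delivery.

```dart
/// Timestamped record of one FCM payload being handed to the transport.
class DispatchRecord {
  DispatchRecord(this.claimId, this.at);
  final String claimId;
  final DateTime at;
}

/// Hypothetical test hook injected into ApprovalNotificationService so
/// tests can assert on dispatch timestamps instead of FCM callbacks.
class DispatchRecorder {
  final List<DispatchRecord> dispatches = [];

  void record(String claimId) =>
      dispatches.add(DispatchRecord(claimId, DateTime.now()));
}
```

A test would capture the approval timestamp, trigger the approval, and assert that dispatches.first.at minus the approval time stays under the 30-second budget, independent of real FCM delivery.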