Priority: high · Complexity: medium · Area: backend · Status: pending · Assignee: backend specialist · Execution tier: Tier 4

Acceptance Criteria

BulkApprovalProcessor.process(claimIds, action, coordinatorId) accepts a non-empty list of claim IDs and a BulkAction enum (approve/reject)
Claims are processed in configurable batches (default batch size: 10) to avoid overwhelming the database
Each claim is delegated to ApprovalWorkflowService — BulkApprovalProcessor does not duplicate approval logic
If an individual claim fails (e.g., already approved, permission denied), the failure is recorded but processing continues for remaining claims
Returns a BulkApprovalResult with: total_submitted, success_count, failure_count, and a List<BulkClaimFailure> (claim_id, error_code, message)
A bulk_approval_events record is emitted after all batches complete with the aggregate result and coordinator_id
Claims are processed sequentially within a batch, and batches themselves run sequentially (not concurrently) to avoid DB connection exhaustion
Throws BulkApprovalValidationException if claimIds is empty or exceeds the 500-claim cap; duplicate IDs are deduplicated before processing (see security requirements) rather than rejected
Processing is cancellable: exposes a cancel() method that halts after the current batch completes
The state layer (Riverpod/BLoC) can observe progress via a Stream<BulkApprovalProgress> that emits after each batch

Technical Requirements

Frameworks
Flutter (Dart)
Riverpod
Supabase
APIs
ApprovalWorkflowService (internal)
Supabase REST API (bulk_approval_events table)
Data Models
BulkApprovalResult (total_submitted, success_count, failure_count, failures)
BulkClaimFailure (claim_id, error_code, message)
BulkApprovalProgress (processed, total, current_batch)
bulk_approval_events (id, coordinator_id, action, total, success_count, failure_count, completed_at)
Performance Requirements
Batch size of 10 keeps per-batch latency under 5 seconds
Stream emits progress updates at least once per batch — UI can show X/N processed
No memory accumulation — process and discard each batch result before moving to next
Security Requirements
Re-validate coordinator permission inside BulkApprovalProcessor before starting (not just at UI layer)
Deduplicate claimIds before processing to prevent double-approval
Cap maximum bulk size at 500 claims per invocation to prevent abuse
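The data models above can be sketched as immutable Dart classes. Constructor shapes and the camelCase field names are illustrative choices, not a finalised API:

```dart
// Immutable sketches of the data models listed above.
class BulkClaimFailure {
  final String claimId;
  final String errorCode;
  final String message;
  const BulkClaimFailure(this.claimId, this.errorCode, this.message);
}

class BulkApprovalResult {
  final int totalSubmitted;
  final int successCount;
  final int failureCount;
  final List<BulkClaimFailure> failures;
  const BulkApprovalResult({
    required this.totalSubmitted,
    required this.successCount,
    required this.failureCount,
    required this.failures,
  });
}

class BulkApprovalProgress {
  final int processed;
  final int total;
  final int currentBatch;
  const BulkApprovalProgress(this.processed, this.total, this.currentBatch);
}
```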

Execution Context

Execution Tier
Tier 4

Tier 4 - 323 tasks

Can start after Tier 3 completes

Implementation Notes

Use a StreamController internally and expose it as a read-only Stream — close the controller after the final batch or on cancel. Implement batching with a simple loop: for (var i = 0; i < claimIds.length; i += batchSize). Within each batch, use Future.wait only if concurrency within the batch is actually wanted — given the DB load concerns, a sequential await in a for loop is safer and easier to reason about. Model BulkApprovalResult and BulkClaimFailure as immutable Dart classes.
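A minimal sketch of that batching loop and progress stream, assuming a stand-in approveOne callback in place of ApprovalWorkflowService and a simplified record in place of BulkApprovalProgress:

```dart
import 'dart:async';
import 'dart:math' as math;

// Sketch only: processes claim IDs in sequential batches and emits
// (processed, total, batchIndex) after each batch completes.
Stream<(int processed, int total, int batch)> runBatches(
  List<String> claimIds,
  Future<void> Function(String claimId) approveOne, {
  int batchSize = 10,
}) {
  final controller = StreamController<(int, int, int)>();
  () async {
    var processed = 0;
    var batchIndex = 0;
    try {
      for (var i = 0; i < claimIds.length; i += batchSize) {
        final end = math.min(i + batchSize, claimIds.length);
        batchIndex++;
        for (final id in claimIds.sublist(i, end)) {
          // Sequential await keeps DB load predictable.
          await approveOne(id);
          processed++;
        }
        controller.add((processed, claimIds.length, batchIndex));
      }
    } finally {
      await controller.close(); // close after the final batch
    }
  }();
  return controller.stream;
}
```

With 25 IDs and the default batch size of 10, the stream emits three events, matching the batch-boundary test below in the testing requirements.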

Use a CancellationToken pattern (a bool flag checked before each batch) for the cancel() functionality. The bulk_approval_events write should happen in a finally block after the loop so it is always recorded even on partial completion. Inject batch size as a constructor parameter with a default of 10 so tests can use batchSize=2 for predictable batching.
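The cancel flag and finally-block pattern might look like the following; writeEvent is a stand-in for the Supabase insert into bulk_approval_events, and approveOne for the per-claim delegation:

```dart
// Sketch of the CancellationToken pattern described above.
class CancellationToken {
  bool _cancelled = false;
  void cancel() => _cancelled = true;
  bool get isCancelled => _cancelled;
}

Future<void> processWithCancel(
  List<List<String>> batches,
  Future<void> Function(String id) approveOne,
  Future<void> Function(int successCount, int failureCount) writeEvent,
  CancellationToken token,
) async {
  var success = 0, failure = 0;
  try {
    for (final batch in batches) {
      if (token.isCancelled) break; // halt before starting the next batch
      for (final id in batch) {
        try {
          await approveOne(id);
          success++;
        } catch (_) {
          failure++; // record and continue with remaining claims
        }
      }
    }
  } finally {
    // Always record the aggregate event, even on partial completion.
    await writeEvent(success, failure);
  }
}
```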

Testing Requirements

Unit tests (flutter_test): mock ApprovalWorkflowService. Test happy path: all claims succeed, result has correct counts. Test partial failure: one claim throws, others succeed, failure is recorded in result. Test empty list throws validation exception.

Test duplicate IDs are deduplicated. Test cancel() stops after current batch. Test Stream emits one event per batch. Integration test: process 3 real claims against local Supabase and verify bulk_approval_events row is created.

Test batch boundary: submit 25 claims with batch size 10 and verify 3 batches are executed. Target >= 85% branch coverage.
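The batch-boundary case can be sketched as a plain Dart test. package:test is shown here in place of flutter_test so the sketch runs headless; the inline loop mirrors the processor's batching contract rather than calling the real class:

```dart
import 'dart:math' as math;
import 'package:test/test.dart';

void main() {
  test('25 claims with batch size 10 execute as 3 batches', () {
    final claimIds = List.generate(25, (i) => 'claim-$i');
    const batchSize = 10;
    var batches = 0;
    for (var i = 0; i < claimIds.length; i += batchSize) {
      final end = math.min(i + batchSize, claimIds.length);
      batches++;
      // No batch may exceed the configured size.
      expect(end - i, lessThanOrEqualTo(batchSize));
    }
    expect(batches, 3); // 10 + 10 + 5
  });
}
```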

Epic Risks (3)
Impact: medium · Probability: medium · Category: technical

Maintaining multi-select state across paginated list pages is architecturally complex in Flutter with Riverpod/BLoC. If the selection state is stored in the widget tree rather than the state layer, page transitions and list redraws can silently clear selections, causing coordinators to lose their multi-select and re-enter it.

Mitigation & Contingency

Mitigation: Store the selected claim ID set in a dedicated Riverpod StateNotifier outside the paginated list widget tree. The paginated list reads selection state from this provider and does not own it. Selection persists independently of list scroll position or page loads.
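A sketch of that selection notifier, using package:riverpod so it stays headless; the provider and class names are illustrative, not final:

```dart
import 'package:riverpod/riverpod.dart';

// Selection state lives in a StateNotifier outside the widget tree, so
// pagination and list redraws cannot clear it.
class ClaimSelectionNotifier extends StateNotifier<Set<String>> {
  ClaimSelectionNotifier() : super(const {});

  void toggle(String claimId) {
    // Replace (never mutate) state so listeners are notified.
    state = state.contains(claimId)
        ? ({...state}..remove(claimId))
        : {...state, claimId};
  }

  void clear() => state = const {};
}

final claimSelectionProvider =
    StateNotifierProvider<ClaimSelectionNotifier, Set<String>>(
        (ref) => ClaimSelectionNotifier());
```

The paginated list widget watches claimSelectionProvider for checkbox state but never owns the set, so scrolling or loading another page leaves the selection intact.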

Contingency: If cross-page selection proves prohibitively complex, limit bulk selection to the currently visible page (add a clear warning in the UI) and prioritise single-page bulk approval for the initial release.

Impact: medium · Probability: medium · Category: integration

If a coordinator has the queue open while another coordinator approves claims from the same queue (possible in large organisations with shared chapter coverage), the Realtime update may arrive out of order or be missed during a reconnect, leaving the first coordinator's view stale and allowing them to attempt to approve an already-actioned claim.

Mitigation & Contingency

Mitigation: The ApprovalWorkflowService's optimistic locking (from the foundation epic) will catch the concurrent edit at the database level. The CoordinatorReviewQueueScreen should handle the resulting ConcurrencyException by removing the claim from the local list and showing a brief snackbar: 'This claim was already actioned by another coordinator.'
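A sketch of that handler; ConcurrencyException is declared locally only to keep the snippet self-contained (the real type comes from the foundation epic), and the callback parameters stand in for the real service and queue state:

```dart
import 'package:flutter/material.dart';

// Stand-in for the foundation epic's optimistic-locking exception.
class ConcurrencyException implements Exception {}

Future<void> onApproveTap(
  BuildContext context,
  String claimId,
  Future<void> Function(String id) approve,
  void Function(String id) removeFromLocalList,
) async {
  try {
    await approve(claimId);
  } on ConcurrencyException {
    removeFromLocalList(claimId); // drop the stale row from the queue view
    ScaffoldMessenger.of(context).showSnackBar(const SnackBar(
      content:
          Text('This claim was already actioned by another coordinator.'),
    ));
  }
}
```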

Contingency: Add a queue staleness indicator (a subtle 'last updated X seconds ago' label) and a manual refresh button as a fallback for coordinators who notice inconsistencies.

Impact: low · Probability: high · Category: dependency

The end-to-end test requirement that a peer mentor receives a push notification within 30 seconds of coordinator approval depends on FCM delivery latency, which is outside the application's control and can vary significantly in CI/CD environments.

Mitigation & Contingency

Mitigation: Structure end-to-end tests to verify notification intent (correct FCM payload dispatched, correct Realtime event emitted) rather than actual device delivery timing. Use test doubles for FCM delivery in automated tests and reserve real-device delivery tests for manual pre-release validation.

Contingency: If notification timing requirements must be validated in automation, instrument the ApprovalNotificationService with a test hook that records dispatch timestamps and assert against those rather than actual FCM callbacks.
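One possible shape for that test hook; the class and helper names are hypothetical, not part of ApprovalNotificationService:

```dart
// Hypothetical hook: record dispatch timestamps at the point the FCM
// payload is handed off, then assert on those rather than on delivery.
class DispatchRecorder {
  final Map<String, DateTime> dispatchedAt = {};
  void onDispatch(String claimId) => dispatchedAt[claimId] = DateTime.now();
}

bool dispatchedWithin(
  DispatchRecorder recorder,
  String claimId,
  DateTime approvedAt, {
  Duration limit = const Duration(seconds: 30),
}) {
  final t = recorder.dispatchedAt[claimId];
  return t != null && t.difference(approvedAt) <= limit;
}
```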