Implement Bulk Registration Orchestrator Service
epic-coordinator-proxy-registration-bulk-orchestration-task-011 — Build the BulkRegistrationOrchestrator that fans out a single bulk submission request into individual per-mentor proxy activity records. Use Future.wait with error catching per record so that partial failures do not roll back successful submissions. Return a BulkSubmissionResult containing per-mentor success/failure status with error reasons. Integrate with ProxyRegistrationService for individual record submission and ProxyAuditLogger for bulk audit events. Support progress streaming so the UI can show incremental completion status.
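The core fan-out can stay small. Below is a minimal sketch, assuming hypothetical shapes for BulkSubmissionResult, MentorOutcome, and the service callback (the real models and the ProxyRegistrationService integration are deliverables of this task). Each future catches its own error, so Future.wait gathers every outcome instead of failing fast:

```dart
// Hypothetical result models for illustration; the real shapes are a
// deliverable of this task.
enum BulkStatus { allSuccess, partialSuccess, totalFailure }

class EmptySelectionError implements Exception {}

class MentorOutcome {
  const MentorOutcome(this.mentorId, {required this.succeeded, this.errorReason});
  final String mentorId;
  final bool succeeded;
  final String? errorReason;
}

class BulkSubmissionResult {
  const BulkSubmissionResult(this.status, this.outcomes);
  final BulkStatus status;
  final List<MentorOutcome> outcomes;
}

class BulkRegistrationOrchestrator {
  BulkRegistrationOrchestrator(this._submitOne);

  // Stand-in for ProxyRegistrationService's per-record submit; signature assumed.
  final Future<void> Function(String mentorId) _submitOne;

  Future<BulkSubmissionResult> submitAll(List<String> mentorIds) async {
    if (mentorIds.isEmpty) throw EmptySelectionError();
    // Each future catches its own error, so Future.wait collects every
    // outcome instead of failing fast; successes are never rolled back.
    final outcomes = await Future.wait(mentorIds.map((id) async {
      try {
        await _submitOne(id);
        return MentorOutcome(id, succeeded: true);
      } catch (e) {
        return MentorOutcome(id, succeeded: false, errorReason: '$e');
      }
    }));
    final failed = outcomes.where((o) => !o.succeeded).length;
    final status = failed == 0
        ? BulkStatus.allSuccess
        : failed == outcomes.length
            ? BulkStatus.totalFailure
            : BulkStatus.partialSuccess;
    return BulkSubmissionResult(status, outcomes);
  }
}
```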
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
Implement concurrency capping using a simple semaphore pattern with a Completer-based queue, or use the pool package (if it is already in pubspec) to limit concurrent futures. Model the progress stream using a StreamController that emits one per-mentor result as each submission settles; a sketch follows below.
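A sketch of the capped fan-out with progress streaming, using the pool package's Pool/withResource API and reusing the MentorOutcome shape from the sketch above; the class and parameter names are illustrative:

```dart
import 'dart:async';

import 'package:pool/pool.dart';

// Caps in-flight submissions and emits one progress event per mentor as
// each submission settles. Reuses the MentorOutcome sketch above.
class ProgressingOrchestrator {
  ProgressingOrchestrator(this._submitOne, {int maxConcurrent = 5})
      : _pool = Pool(maxConcurrent);

  final Future<void> Function(String mentorId) _submitOne;
  final Pool _pool;

  // Broadcast so the UI can subscribe and unsubscribe freely; close it
  // when the orchestrator is disposed.
  final _progress = StreamController<MentorOutcome>.broadcast();
  Stream<MentorOutcome> get progress => _progress.stream;

  Future<List<MentorOutcome>> submitAll(List<String> mentorIds) {
    return Future.wait(mentorIds.map((id) {
      // withResource queues the callback until a slot frees up, so at
      // most maxConcurrent submissions run at once.
      return _pool.withResource(() async {
        MentorOutcome outcome;
        try {
          await _submitOne(id);
          outcome = MentorOutcome(id, succeeded: true);
        } catch (e) {
          outcome = MentorOutcome(id, succeeded: false, errorReason: '$e');
        }
        _progress.add(outcome); // one update per mentor
        return outcome;
      });
    }));
  }
}
```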
Ensure that the audit log call uses fire-and-forget semantics (unawaited(ProxyAuditLogger.logBulkSubmission(...))) so an audit failure never surfaces to the caller. Note that unawaited only discards the future; attach an error handler as well so a failed audit write does not escalate to the zone's unhandled-error handler.
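A minimal sketch of that call site; logBulkSubmission's signature is assumed:

```dart
import 'dart:async';

// Called exactly once, after all per-mentor futures have settled.
void _auditBulkOutcome(BulkSubmissionResult result) {
  // unawaited: the caller never waits on the audit write; catchError:
  // a failed write is swallowed instead of reaching the zone's
  // unhandled-error handler.
  unawaited(
    ProxyAuditLogger.logBulkSubmission(result).catchError((Object _) {}),
  );
}
```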
Testing Requirements
Write unit tests using flutter_test with a mock ProxyRegistrationService that simulates configurable success/failure per mentor ID and configurable latency. Cover:
- all 10 mentors succeed
- 3 of 10 fail (verify BulkStatus.partialSuccess and the correct failure reasons in the outcomes)
- all fail (BulkStatus.totalFailure)
- empty list (EmptySelectionError, no service calls made)
- concurrency cap (verify, with a counter mock, that at most N service calls are in flight simultaneously)
- the progress Stream emits one update per mentor
- ProxyAuditLogger is called exactly once after all futures settle, regardless of outcome mix
A test sketch follows after the coverage requirement below.
Minimum 90% branch coverage on orchestrator logic.
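A sketch of two of these tests against the hypothetical orchestrator shapes from the earlier sketches; the fake service fails for a configurable set of mentor IDs:

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  // Fake ProxyRegistrationService: fails for the configured mentor IDs.
  Future<void> Function(String) fakeSubmit(Set<String> failing) =>
      (String mentorId) async {
        if (failing.contains(mentorId)) {
          throw StateError('simulated failure for $mentorId');
        }
      };

  test('3 of 10 failures yield partialSuccess with reasons', () async {
    final ids = List.generate(10, (i) => 'mentor-$i');
    final failing = {'mentor-1', 'mentor-4', 'mentor-7'};
    final orchestrator = BulkRegistrationOrchestrator(fakeSubmit(failing));

    final result = await orchestrator.submitAll(ids);

    expect(result.status, BulkStatus.partialSuccess);
    final failed = result.outcomes.where((o) => !o.succeeded);
    expect(failed.map((o) => o.mentorId).toSet(), failing);
    expect(failed.every((o) => o.errorReason != null), isTrue);
  });

  test('at most maxConcurrent submissions are in flight', () async {
    var inFlight = 0;
    var peak = 0;
    // Counter mock: tracks the peak number of concurrent calls.
    Future<void> counting(String _) async {
      inFlight++;
      if (inFlight > peak) peak = inFlight;
      await Future<void>.delayed(const Duration(milliseconds: 10));
      inFlight--;
    }

    final orchestrator = ProgressingOrchestrator(counting, maxConcurrent: 3);
    await orchestrator.submitAll(List.generate(10, (i) => 'm$i'));

    expect(peak, lessThanOrEqualTo(3));
  });
}
```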
Partial failures in bulk registration — where some mentors succeed and others fail — create a complex UX state that is easy to mishandle. If the UI does not clearly communicate which records succeeded and which failed, coordinators may re-submit already-saved records (creating duplicates) or miss failed records entirely (creating underreporting).
Mitigation & Contingency
Mitigation: Design the per-mentor result screen as a primary deliverable of this epic, not an afterthought. Use a clear list view with success/failure indicators per mentor name, and offer a 'Retry failed' action that pre-selects only the failed mentors for resubmission.
Contingency: If partial failure UX proves too complex to deliver within scope, implement a simpler all-or-nothing submission mode for the initial release with a clear error message listing which mentors failed, and defer the partial-retry UI to a follow-up sprint.
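The 'Retry failed' pre-selection described in the mitigation reduces to a filter over the bulk result; a sketch against the hypothetical BulkSubmissionResult shape used earlier:

```dart
// Pre-select only the mentors whose submissions failed, for the
// 'Retry failed' action.
List<String> retrySelection(BulkSubmissionResult result) => [
      for (final o in result.outcomes)
        if (!o.succeeded) o.mentorId,
    ];
```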
Submitting proxy records for a large group (e.g., 30+ mentors) as individual Supabase inserts may cause latency issues or hit rate limits, degrading the coordinator experience and potentially causing timeout failures that leave data in an inconsistent state.
Mitigation & Contingency
Mitigation: Implement the BulkRegistrationOrchestrator to batch inserts using a Supabase RPC call that accepts an array of proxy records, reducing round-trips to a single network call. Add progress indication using a stream of per-record results if the RPC supports it.
Contingency: If the RPC approach is blocked by Supabase limitations, fall back to chunked parallel inserts (5 records per batch) with retry logic, capping total submission time and surfacing a progress bar to manage coordinator expectations.
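A sketch of the RPC-based batching from the mitigation above; supabase.rpc is the Supabase Dart client's call, while the bulk_register_proxies function and its contract (jsonb array in, one row per record out) are assumptions to be defined with the database migration:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Hypothetical RPC: `bulk_register_proxies(records jsonb)` inserts all
/// rows in one statement and returns one outcome row per record.
Future<List<Map<String, dynamic>>> submitBulk(
  SupabaseClient supabase,
  List<Map<String, dynamic>> records,
) async {
  // One network round-trip instead of N individual inserts.
  final rows = await supabase.rpc(
    'bulk_register_proxies',
    params: {'records': records},
  );
  return (rows as List).cast<Map<String, dynamic>>();
}
```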
Unifying state management for both single and bulk proxy flows in a single BLoC risks state leakage between flows — for example, a previously selected mentor list persisting when a coordinator switches from bulk to single mode — causing confusing UI states or incorrect submissions.
Mitigation & Contingency
Mitigation: Define separate, named state subtrees within the BLoC for single-proxy state and bulk-proxy state, with explicit reset events triggered on flow entry. Write unit tests for state isolation scenarios using the bloc_test package.
Contingency: If unified BLoC state becomes unmanageable, split into two separate BLoCs (ProxySingleRegistrationBLoC and ProxyBulkRegistrationBLoC) sharing only common events via a parent coordinator Cubit.
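A sketch of the mitigation's explicit-reset pattern in a unified BLoC; all names are hypothetical and each state subtree is reduced to a single field:

```dart
import 'package:bloc/bloc.dart';

// Hypothetical unified state: the single-flow and bulk-flow subtrees sit
// side by side so one flow can be reset without touching the other.
class ProxyRegistrationState {
  const ProxyRegistrationState({this.singleMentorId, this.bulkSelection = const []});
  final String? singleMentorId;     // single-proxy subtree
  final List<String> bulkSelection; // bulk-proxy subtree

  ProxyRegistrationState resetBulk() =>
      ProxyRegistrationState(singleMentorId: singleMentorId);
  ProxyRegistrationState resetSingle() =>
      ProxyRegistrationState(bulkSelection: bulkSelection);
}

sealed class ProxyRegistrationEvent {}

class EnteredSingleFlow extends ProxyRegistrationEvent {}

class EnteredBulkFlow extends ProxyRegistrationEvent {}

class ProxyRegistrationBloc
    extends Bloc<ProxyRegistrationEvent, ProxyRegistrationState> {
  ProxyRegistrationBloc() : super(const ProxyRegistrationState()) {
    // Explicit reset on flow entry: entering single mode clears any
    // lingering bulk selection, and vice versa.
    on<EnteredSingleFlow>((event, emit) => emit(state.resetBulk()));
    on<EnteredBulkFlow>((event, emit) => emit(state.resetSingle()));
  }
}
```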