Priority: critical · Complexity: high · Area: backend · Status: pending · Assignee: backend specialist · Tier: 3

Acceptance Criteria

BulkRegistrationOrchestrator accepts a BulkSubmissionRequest containing a list of mentor IDs and a shared ProxyActivityTemplate, and returns a BulkSubmissionResult with per-mentor outcomes
Individual mentor submissions are executed concurrently using Future.wait; each future is independently error-caught so one failure does not cancel other in-flight submissions
BulkSubmissionResult contains a list of MentorSubmissionOutcome objects, each with mentorId, status (success/failure), and optional error reason
A Stream<BulkProgressUpdate> is exposed alongside the Future-based result, emitting one update per completed mentor (success or failure) so the UI can render incremental progress
ProxyAuditLogger.logBulkSubmission() is called once after all futures settle, capturing total count, success count, failure count, and the acting coordinator ID
If ALL individual submissions fail, BulkSubmissionResult.overallStatus is BulkStatus.totalFailure; if some succeed, it is BulkStatus.partialSuccess; if all succeed, BulkStatus.success
Concurrency is capped at a configurable maximum (default: 5 simultaneous Supabase writes) to avoid rate-limiting; excess requests are queued
The orchestrator is stateless and injectable via Riverpod; it does not cache results between calls
Empty mentor list returns BulkSubmissionResult with overallStatus totalFailure and a typed EmptySelectionError without making any network calls
Unit tests cover: all-success, partial failure, all-failure, empty list, concurrency cap enforcement (mock delay), audit log call verification
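The status-derivation rules above can be sketched in Dart; the model shapes below are illustrative stand-ins for the real BulkStatus and MentorSubmissionOutcome types, not the final API:

```dart
// Sketch of the overall-status derivation described in the acceptance
// criteria. Field names and constructors here are assumptions.
enum BulkStatus { success, partialSuccess, totalFailure }

class MentorSubmissionOutcome {
  final String mentorId;
  final bool succeeded;
  final String? errorReason;
  const MentorSubmissionOutcome(this.mentorId, this.succeeded,
      [this.errorReason]);
}

BulkStatus deriveOverallStatus(List<MentorSubmissionOutcome> outcomes) {
  // An empty selection is treated as totalFailure per the criteria above.
  if (outcomes.isEmpty) return BulkStatus.totalFailure;
  final successes = outcomes.where((o) => o.succeeded).length;
  if (successes == 0) return BulkStatus.totalFailure;
  if (successes == outcomes.length) return BulkStatus.success;
  return BulkStatus.partialSuccess;
}
```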

Technical Requirements

frameworks
Flutter
Riverpod
Dart async (Future.wait, StreamController)
apis
ProxyRegistrationService.submit() per mentor
ProxyAuditLogger.logBulkSubmission()
Supabase (via ProxyRegistrationService, no direct calls from orchestrator)
data models
BulkSubmissionRequest
BulkSubmissionResult
MentorSubmissionOutcome
BulkProgressUpdate
BulkStatus (enum)
ProxyActivityTemplate
performance requirements
With default concurrency cap of 5, a 20-mentor bulk submission should complete in ≤4× the time of a single submission (i.e. 4 batches of 5)
Progress stream must emit within 50 ms of each individual future settling to keep UI feedback responsive
Orchestrator must not accumulate memory proportional to mentor list size; use streaming result pattern rather than buffering all outcomes before returning
security requirements
acting_coordinator_id must be injected from the authenticated session at orchestrator initialisation, not passed in BulkSubmissionRequest
Concurrency cap prevents the orchestrator from being used as a DoS vector against Supabase; cap must be enforced even if caller increases batch size
Each individual ProxyRegistrationService call enforces its own RLS — orchestrator trusts service-level security and does not re-validate

Execution Context

Execution Tier
Tier 3 (413 tasks)

Can start after Tier 2 completes.

Implementation Notes

Implement concurrency capping using a simple semaphore pattern with a Completer-based queue, or use the pool package (if already in pubspec) to limit concurrent futures. Model the progress stream using a StreamController; close it after all futures settle and before returning the BulkSubmissionResult Future. The stream and the Future should both be returned in a BulkSubmissionHandle wrapper record (Dart 3 records syntax: (Future, Stream)) to allow the BLoC to subscribe to both. Avoid using Isolates — the concurrency here is I/O-bound (Supabase HTTP), not CPU-bound, so async/await with a semaphore is sufficient and simpler.
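The semaphore-plus-handle approach described above could look roughly like this; the outcome and update types are deliberately simplified to strings, and `submit` stands in for ProxyRegistrationService.submit(), so treat this as a shape sketch rather than the final implementation:

```dart
import 'dart:async';

// Completer-based semaphore capping concurrent in-flight futures,
// per the note above. A sketch, not a production implementation.
class Semaphore {
  Semaphore(this._slots);
  int _slots;
  final _queue = <Completer<void>>[];

  Future<void> acquire() {
    if (_slots > 0) {
      _slots--;
      return Future.value();
    }
    final waiter = Completer<void>();
    _queue.add(waiter);
    return waiter.future;
  }

  void release() {
    if (_queue.isNotEmpty) {
      _queue.removeAt(0).complete(); // hand the slot to the next waiter
    } else {
      _slots++;
    }
  }
}

// Hypothetical stand-in for BulkSubmissionHandle, using Dart 3
// named-record syntax; field names are illustrative.
typedef Handle<R, P> = ({Future<R> result, Stream<P> progress});

Handle<List<String>, String> submitBulk(
  List<String> mentorIds,
  Future<void> Function(String mentorId) submit, {
  int maxConcurrent = 5,
}) {
  final progress = StreamController<String>();
  final sem = Semaphore(maxConcurrent);

  Future<List<String>> run() async {
    // Each future is independently error-caught, so one failure
    // never cancels the other in-flight submissions.
    final outcomes = await Future.wait(mentorIds.map((id) async {
      await sem.acquire();
      try {
        await submit(id);
        progress.add('$id:success');
        return '$id:success';
      } catch (e) {
        progress.add('$id:failure');
        return '$id:failure';
      } finally {
        sem.release();
      }
    }));
    await progress.close(); // close only after all futures settle
    return outcomes;
  }

  return (result: run(), progress: progress.stream);
}
```

Note that the StreamController buffers events until the BLoC subscribes, so the handle can be returned before a listener attaches.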

Ensure that the audit log call uses fire-and-forget semantics (unawaited(ProxyAuditLogger.logBulkSubmission(...))) so an audit failure never surfaces to the caller.
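A minimal sketch of those fire-and-forget semantics, assuming `logBulkSubmission` is passed in as a closure over the real ProxyAuditLogger call:

```dart
import 'dart:async' show unawaited;

// Fire-and-forget audit call: the future is deliberately not awaited,
// and any audit error is swallowed so it never surfaces to the caller.
void auditBulk(Future<void> Function() logBulkSubmission) {
  unawaited(
    logBulkSubmission().catchError((Object e) {
      // Intentionally ignore audit failures; optionally log locally.
    }),
  );
}
```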

Testing Requirements

Write unit tests using flutter_test with a mock ProxyRegistrationService that simulates configurable success/failure per mentor ID and configurable latency. Test: all 10 mentors succeed, 3 of 10 fail (verify BulkStatus.partialSuccess and correct failure reasons in outcomes), all fail (BulkStatus.totalFailure), empty list (EmptySelectionError, no service calls made), concurrency cap (verify at most N service calls are in-flight simultaneously using a counter mock). Test that the progress Stream emits one update per mentor. Test that ProxyAuditLogger is called exactly once after all futures settle regardless of outcome mix.
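The concurrency-cap assertion can be driven by a simple in-flight counter in the mock service; a sketch (the mock name and latency are hypothetical):

```dart
import 'dart:async';

// Counter mock for the concurrency-cap test described above: after the
// run, maxObserved must not exceed the configured cap.
class CountingMockService {
  int inFlight = 0;
  int maxObserved = 0;

  Future<void> submit(String mentorId) async {
    inFlight++;
    if (inFlight > maxObserved) maxObserved = inFlight;
    // Simulated latency so calls overlap and the cap is exercised.
    await Future<void>.delayed(const Duration(milliseconds: 10));
    inFlight--;
  }
}
```

In the test, wire this mock into the orchestrator with a cap of 5 and 20 mentor IDs, then assert `mock.maxObserved <= 5` after the result future completes.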

Minimum 90% branch coverage on orchestrator logic.

Component
Bulk Registration Orchestrator — service · high
Epic Risks (3)

Risk 1 — impact: high · probability: high · category: scope

Partial failures in bulk registration — where some mentors succeed and others fail — create a complex UX state that is easy to mishandle. If the UI does not clearly communicate which records succeeded and which failed, coordinators may re-submit already-saved records (creating duplicates) or miss failed records entirely (creating underreporting).

Mitigation & Contingency

Mitigation: Design the per-mentor result screen as a primary deliverable of this epic, not an afterthought. Use a clear list view with success/failure indicators per mentor name, and offer a 'Retry failed' action that pre-selects only the failed mentors for resubmission.

Contingency: If partial failure UX proves too complex to deliver within scope, implement a simpler all-or-nothing submission mode for the initial release with a clear error message listing which mentors failed, and defer the partial-retry UI to a follow-up sprint.

Risk 2 — impact: medium · probability: medium · category: technical

Submitting proxy records for a large group (e.g., 30+ mentors) as individual Supabase inserts may cause latency issues or hit rate limits, degrading the coordinator experience and potentially causing timeout failures that leave data in an inconsistent state.

Mitigation & Contingency

Mitigation: Implement the BulkRegistrationOrchestrator to batch inserts using a Supabase RPC call that accepts an array of proxy records, reducing round-trips to a single network call. Add progress indication using a stream of per-record results if the RPC supports it.

Contingency: If the RPC approach is blocked by Supabase limitations, fall back to chunked parallel inserts (5 records per batch) with retry logic, capping total submission time and surfacing a progress bar to manage coordinator expectations.

Risk 3 — impact: medium · probability: medium · category: technical

Unifying state management for both single and bulk proxy flows in a single BLoC risks state leakage between flows — for example, a previously selected mentor list persisting when a coordinator switches from bulk to single mode — causing confusing UI states or incorrect submissions.

Mitigation & Contingency

Mitigation: Define separate, named state subtrees within the BLoC for single-proxy state and bulk-proxy state, with explicit reset events triggered on flow entry. Write unit tests for state isolation scenarios using the bloc_test package.

Contingency: If unified BLoC state becomes unmanageable, split into two separate BLoCs (ProxySingleRegistrationBLoC and ProxyBulkRegistrationBLoC) sharing only common events via a parent coordinator Cubit.