Priority: high · Complexity: medium · Area: backend · Status: pending · Owner: backend specialist · Tier 1

Acceptance Criteria

CoordinatorRecruitmentDashboardBloc exposes a LoadDashboard event that triggers parallel fetching of ReferralStats for all mentors in the coordinator's scope
The BLoC emits DashboardLoading state immediately after LoadDashboard is dispatched
On successful fetch, the BLoC emits DashboardLoaded(mentorStats: List<MentorReferralStats>, activeFilter: DateRangeFilter) state
On service error, the BLoC emits DashboardError(message: String) state with a user-readable error message (not a stack trace)
A FilterChanged(DateRangeFilter) event causes the BLoC to re-fetch all mentor stats with the new date range, emitting DashboardLoading then DashboardLoaded/DashboardError
Concurrent FilterChanged events cancel the previous in-flight request (use RxDart's switchMap or, with the bloc library, the restartable() transformer from bloc_concurrency)
The BLoC correctly passes the DateRangeFilter to ReferralAttributionService.getAggregatedStatsForMentor(mentorId, filter) for every mentor
If the coordinator has zero mentors in scope, the BLoC emits DashboardLoaded with an empty list (not an error)
The BLoC is closed (dispose/close called) when the widget using it is removed from the tree — no memory leaks
The BLoC does not expose the Supabase client directly — all data access goes through ReferralAttributionService
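The criteria above imply the event/state contract sketched below. This is a minimal, non-authoritative sketch: event and state names come from the acceptance criteria, while the MentorReferralStats constructor shape and the error copy are assumptions.

```dart
import 'package:bloc/bloc.dart';
import 'package:bloc_concurrency/bloc_concurrency.dart';

// Events (names taken from the acceptance criteria).
sealed class DashboardEvent {}

final class LoadDashboard extends DashboardEvent {}

final class FilterChanged extends DashboardEvent {
  FilterChanged(this.filter);
  final DateRangeFilter filter;
}

// States (loading / loaded / error, per the criteria).
sealed class DashboardState {}

final class DashboardLoading extends DashboardState {}

final class DashboardLoaded extends DashboardState {
  DashboardLoaded({required this.mentorStats, required this.activeFilter});
  final List<MentorReferralStats> mentorStats;
  final DateRangeFilter activeFilter;
}

final class DashboardError extends DashboardState {
  DashboardError(this.message);
  final String message; // user-readable, never a stack trace
}

class CoordinatorRecruitmentDashboardBloc
    extends Bloc<DashboardEvent, DashboardState> {
  CoordinatorRecruitmentDashboardBloc(this._service, this._coordinatorId)
      : super(DashboardLoading()) {
    on<LoadDashboard>(_onLoadDashboard);
    // restartable() cancels the previous in-flight handler when a new
    // FilterChanged arrives, satisfying the cancellation criterion.
    on<FilterChanged>(_onFilterChanged, transformer: restartable());
  }

  final ReferralAttributionService _service; // all data access goes here
  final String _coordinatorId; // from the authenticated session

  Future<void> _onLoadDashboard(
          LoadDashboard event, Emitter<DashboardState> emit) =>
      _fetch(DateRangeFilter.lastThirtyDays(), emit);

  Future<void> _onFilterChanged(
          FilterChanged event, Emitter<DashboardState> emit) =>
      _fetch(event.filter, emit);

  Future<void> _fetch(
      DateRangeFilter filter, Emitter<DashboardState> emit) async {
    emit(DashboardLoading());
    try {
      final mentorIds =
          await _service.getMentorsForCoordinator(_coordinatorId);
      final stats = await Future.wait(mentorIds.map((id) async =>
          MentorReferralStats(
              mentorId: id,
              stats:
                  await _service.getAggregatedStatsForMentor(id, filter))));
      // An empty mentor list still yields DashboardLoaded with an empty list.
      emit(DashboardLoaded(mentorStats: stats, activeFilter: filter));
    } catch (_) {
      emit(DashboardError('Could not load recruitment stats. Please retry.'));
    }
  }
}
```

Storing the active filter on DashboardLoaded lets the UI reflect the current range without another service call, as the Implementation Notes require.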

Technical Requirements

Frameworks
Flutter
BLoC
APIs
ReferralAttributionService.getAggregatedStatsForMentor(mentorId: String, filter: DateRangeFilter) → Future<ReferralStats>
ReferralAttributionService.getMentorsForCoordinator(coordinatorId: String) → Future<List<String>>
Supabase (accessed via service layer only)
Data models
ReferralStats
DateRangeFilter
MentorReferralStats
CoordinatorRecruitmentDashboardState (loading, loaded, error)
Performance requirements
Mentor stats must be fetched in parallel using Future.wait() — not sequentially — to minimise total load time
For coordinators with more than 20 mentors, batch the parallel calls into groups of 10 to avoid Supabase connection pool exhaustion
FilterChanged should use the restartable() transformer from the bloc_concurrency package to cancel in-flight requests on rapid filter changes
Security requirements
The coordinator's scope (list of mentor IDs) must be validated server-side via Supabase RLS — the BLoC must not rely solely on client-side scope filtering
The coordinatorId used for fetching must come from the authenticated session, not from a URL parameter or user input
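The batching requirement above can be sketched as a helper that chunks the parallel calls so at most ten requests are in flight at once. A sketch under stated assumptions: the batch size of 10 comes from the requirement, while the MentorReferralStats constructor is assumed.

```dart
import 'dart:math' as math;

/// Fetches stats for all mentors in parallel, in batches of [batchSize],
/// to avoid exhausting the Supabase connection pool.
Future<List<MentorReferralStats>> fetchStatsInBatches(
  ReferralAttributionService service,
  List<String> mentorIds,
  DateRangeFilter filter, {
  int batchSize = 10, // per the performance requirement
}) async {
  final results = <MentorReferralStats>[];
  for (var i = 0; i < mentorIds.length; i += batchSize) {
    final batch =
        mentorIds.sublist(i, math.min(i + batchSize, mentorIds.length));
    // Each batch runs in parallel via Future.wait; batches themselves run
    // sequentially, bounding concurrent connections at [batchSize].
    results.addAll(await Future.wait(batch.map((id) async =>
        MentorReferralStats(
            mentorId: id,
            stats: await service.getAggregatedStatsForMentor(id, filter)))));
  }
  return results;
}
```

For coordinators with 20 or fewer mentors this degrades gracefully to at most two batches, so a single code path covers both cases.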

Execution Context

Execution Tier
Tier 1 (540 tasks)

Can start after Tier 0 completes.

Implementation Notes

Use the `bloc_concurrency` package's `restartable()` transformer on the FilterChanged event handler to implement cancellation: `on<FilterChanged>(_onFilterChanged, transformer: restartable())`. For parallel fetching, use `Future.wait(mentorIds.map((id) => _service.getAggregatedStatsForMentor(id, event.filter)).toList())`. Wrap the Future.wait in a try/catch and emit DashboardError with a localized message on failure; do not rethrow. Store the current DateRangeFilter in the BLoC state so the UI can reflect the active filter without additional service calls.

The initial filter should default to `DateRangeFilter.lastThirtyDays()` (defined in task-001). Consider using a Cubit instead of a full BLoC if the event set remains small (LoadDashboard + FilterChanged): Cubit's method-based API means less boilerplate for this pattern.
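If the Cubit route is taken, the shape would look roughly like the sketch below. One trade-off worth noting: Cubit has no event transformers, so restartable-style cancellation needs a manual stand-in (a request counter here), which is a point in favour of keeping the full BLoC. Names beyond those in the acceptance criteria are assumptions.

```dart
import 'package:bloc/bloc.dart';

class CoordinatorRecruitmentDashboardCubit extends Cubit<DashboardState> {
  CoordinatorRecruitmentDashboardCubit(this._service, this._coordinatorId)
      : super(DashboardLoading());

  final ReferralAttributionService _service;
  final String _coordinatorId;
  int _requestId = 0; // manual stand-in for restartable() cancellation

  Future<void> loadDashboard() =>
      changeFilter(DateRangeFilter.lastThirtyDays());

  Future<void> changeFilter(DateRangeFilter filter) async {
    final requestId = ++_requestId;
    emit(DashboardLoading());
    try {
      final mentorIds =
          await _service.getMentorsForCoordinator(_coordinatorId);
      final stats = await Future.wait(mentorIds.map((id) async =>
          MentorReferralStats(
              mentorId: id,
              stats:
                  await _service.getAggregatedStatsForMentor(id, filter))));
      // A newer request superseded this one; drop the stale result.
      if (requestId != _requestId) return;
      emit(DashboardLoaded(mentorStats: stats, activeFilter: filter));
    } catch (_) {
      if (requestId != _requestId) return;
      emit(DashboardError('Could not load recruitment stats. Please retry.'));
    }
  }
}
```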

Testing Requirements

Write bloc_test unit tests covering: (1) LoadDashboard emits [DashboardLoading, DashboardLoaded] on successful service response, (2) LoadDashboard emits [DashboardLoading, DashboardError] when service throws, (3) FilterChanged re-emits [DashboardLoading, DashboardLoaded] with updated stats, (4) rapid FilterChanged events result in only one final DashboardLoaded (restartable transformer cancels previous), (5) empty mentor list emits DashboardLoaded with empty mentorStats, (6) parallel fetch correctness — verify all mentor IDs are passed to the service. Mock ReferralAttributionService with mocktail. Do not test Supabase directly in BLoC tests — that is the service layer's concern.
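A sketch of cases (1) and (6) with bloc_test and mocktail, assuming the BLoC takes the service and coordinator ID via its constructor; `fakeReferralStats` is a hypothetical test fixture.

```dart
import 'package:bloc_test/bloc_test.dart';
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

class MockReferralAttributionService extends Mock
    implements ReferralAttributionService {}

void main() {
  late MockReferralAttributionService service;

  setUpAll(() {
    // mocktail needs a fallback instance before any() can match a
    // custom type like DateRangeFilter.
    registerFallbackValue(DateRangeFilter.lastThirtyDays());
  });

  setUp(() => service = MockReferralAttributionService());

  blocTest<CoordinatorRecruitmentDashboardBloc, DashboardState>(
    'LoadDashboard emits [Loading, Loaded] on success',
    build: () {
      when(() => service.getMentorsForCoordinator(any()))
          .thenAnswer((_) async => ['mentor-1', 'mentor-2']);
      when(() => service.getAggregatedStatsForMentor(any(), any()))
          .thenAnswer((_) async => fakeReferralStats); // hypothetical fixture
      return CoordinatorRecruitmentDashboardBloc(service, 'coord-1');
    },
    act: (bloc) => bloc.add(LoadDashboard()),
    expect: () => [isA<DashboardLoading>(), isA<DashboardLoaded>()],
    verify: (_) {
      // Case (6): every mentor ID in scope reached the service exactly once.
      verify(() => service.getAggregatedStatsForMentor('mentor-1', any()))
          .called(1);
      verify(() => service.getAggregatedStatsForMentor('mentor-2', any()))
          .called(1);
    },
  );
}
```

The remaining cases follow the same pattern: stub the service to throw for case (2), add two rapid FilterChanged events in `act` for case (4), and return an empty mentor list for case (5).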

Component
Coordinator Recruitment Dashboard
Area: UI · Complexity: medium
Epic Risks (3)
Impact: medium · Probability: high · Category: dependency

BadgeCriteriaIntegration must reference specific badge definition IDs from the badge-definition-repository for recruitment badges. If those badge definitions have not been created in the database when this epic is implemented, the integration will silently fail to award badges.

Mitigation & Contingency

Mitigation: As the first task of this epic, create the four recruitment badge definitions (seed data migration) with known, stable IDs. BadgeCriteriaIntegration hardcodes these IDs as constants. Include an assertion in the integration tests that verifies the badge definition records exist in the test database.

Contingency: If the badge definitions system does not support seeding at migration time, store the badge definition IDs in a feature-flag-style config table and look them up at runtime, falling back to a no-op with a warning log if they are absent.
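The contingency's runtime lookup with a no-op fallback could look like the sketch below. The table and column names (`badge_definition_config`, `config_key`, `badge_definition_id`) are assumptions, not an existing schema.

```dart
import 'dart:developer' as developer;
import 'package:supabase_flutter/supabase_flutter.dart';

/// Resolves a recruitment badge definition ID from a config table at
/// runtime. Returns null (a "skip the award" signal) with a warning log
/// when the definition has not been seeded.
Future<String?> resolveBadgeDefinitionId(
    SupabaseClient supabase, String configKey) async {
  final row = await supabase
      .from('badge_definition_config') // hypothetical config table
      .select('badge_definition_id')
      .eq('config_key', configKey)
      .maybeSingle();
  if (row == null) {
    developer.log(
      'Badge definition "$configKey" not configured; skipping award.',
      level: 900, // WARNING
    );
    return null;
  }
  return row['badge_definition_id'] as String;
}
```

Callers in BadgeCriteriaIntegration would treat a null return as a no-op, matching the contingency's fail-soft behaviour.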

Impact: medium · Probability: medium · Category: technical

The coordinator dashboard aggregates referral stats across all peer mentors in an organisation. For large organisations (HLF has many peer mentors nationally), the aggregation query may be slow, causing the dashboard to feel unresponsive.

Mitigation & Contingency

Mitigation: Implement the aggregation as a Supabase database view or RPC that runs server-side with appropriate indexes on (mentor_id, org_id, created_at, event_type). Add a composite index on referral_events during the foundation epic's migration. Cache the result in the Riverpod provider with a 5-minute TTL.

Contingency: If query performance remains unacceptable at scale, materialise the aggregation in a nightly pg_cron job into a stats_cache table, and serve the dashboard from the cache with a 'last updated' timestamp shown to the coordinator.

Impact: medium · Probability: low · Category: integration

The existing badge award service is implemented by the achievement-badges feature. If that feature's public API (BadgeAwardService interface) changes while this epic is in progress, the BadgeCriteriaIntegration will break at compile time or behave incorrectly at runtime.

Mitigation & Contingency

Mitigation: Confirm the BadgeAwardService interface is stable and document the exact method signatures this integration depends on. Write a narrow integration test that constructs the real BadgeAwardService against a test database to detect breaking changes immediately.

Contingency: If the badge service interface changes, adapt the BadgeCriteriaIntegration adapter class to match the new contract. The adapter pattern used here isolates the change to a single class.