Priority: critical · Complexity: high · Area: backend · Status: pending · Assignee: backend specialist · Tier 3

Acceptance Criteria

Concrete ReminderSchedulerServiceImpl implements the abstract ReminderSchedulerService interface
All four dependencies (ReminderEvaluationService, ReminderDispatchService, AssignmentContactTrackingRepository, ReminderConfigRepository) are injected via constructor and declared as final fields
runDailyEvaluation() fetches only open assignments (status = open or equivalent) using AssignmentContactTrackingRepository
Assignments are processed in configurable batches (batch size read from ReminderConfigRepository at run start)
ReminderEvaluationService is called once per assignment and its result determines the dispatch path
EvaluationResult.remind routes to ReminderDispatchService.dispatchReminder()
EvaluationResult.escalate routes to ReminderDispatchService.dispatchEscalation()
EvaluationResult.noAction increments the skipped counter without calling dispatch
SchedulerRunResult counters are correctly accumulated across all batches and returned at the end of the run
A failure dispatching one assignment does not abort processing of subsequent assignments — errors are caught per-assignment and tallied
runDailyEvaluation() still returns a SchedulerRunResult (SchedulerRunResult.empty() plus any counts accumulated before the failures) rather than throwing, even if every assignment fails individually
Service compiles with zero dart analyze errors
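The routing rules in the criteria above can be sketched with a Dart 3 sealed class, so the compiler enforces that every EvaluationResult subtype is handled. The subtype names and dispatch method names follow the spec; the dispatcher interface shape and the string return value are assumptions for illustration.

```dart
// Sketch of the evaluation-result routing. EvaluationResult subtypes and
// dispatch method names come from the acceptance criteria; everything else
// is a simplified assumption.
sealed class EvaluationResult {
  const EvaluationResult();
}

class Remind extends EvaluationResult {
  const Remind();
}

class Escalate extends EvaluationResult {
  const Escalate();
}

class NoAction extends EvaluationResult {
  const NoAction();
}

abstract class ReminderDispatchService {
  Future<void> dispatchReminder(String assignmentId);
  Future<void> dispatchEscalation(String assignmentId);
}

/// Routes one evaluation result and reports which counter to bump.
/// The switch is exhaustive: adding a fourth subtype is a compile error.
Future<String> route(
  EvaluationResult result,
  String assignmentId,
  ReminderDispatchService dispatch,
) async {
  switch (result) {
    case Remind():
      await dispatch.dispatchReminder(assignmentId);
      return 'reminded';
    case Escalate():
      await dispatch.dispatchEscalation(assignmentId);
      return 'escalated';
    case NoAction():
      return 'skipped'; // no dispatch call at all
  }
}
```

Because the class is sealed in the same library, `dart analyze` flags any non-exhaustive switch, which directly supports the "zero analyze errors" criterion.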

Technical Requirements

Frameworks
Flutter
Dart
Riverpod
APIs
Supabase PostgREST REST API (via repository abstractions)
Data models
Assignment
ReminderContactTracking
ReminderConfig
EvaluationResult
SchedulerRunResult
Performance requirements
Batch processing must use Future.wait() within each batch for parallel dispatch, not sequential await per item
Total memory footprint must not grow unbounded — process one batch at a time and release before fetching the next
Supabase assignment fetch must use .range() pagination rather than loading all records into memory at once
Security requirements
Service must operate under service-role Supabase credentials injected via the repository layer — never expose credentials directly in the service class
No raw SQL in the service class — all data access via repository interfaces
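The pagination and memory requirements above can be sketched as an offset-based loop over a repository abstraction. `fetchOpenAssignments` is an assumed method name; in the real repository it would translate to a PostgREST `.range(from, from + limit - 1)` call so the full table is never loaded at once.

```dart
// One batch at a time: fetch, process, release, then fetch the next page.
// The repository hides the Supabase .range() call behind this interface.
abstract class AssignmentContactTrackingRepository {
  Future<List<String>> fetchOpenAssignments({
    required int offset,
    required int limit,
  });
}

Future<int> processAllOpen(
  AssignmentContactTrackingRepository repo,
  int batchSize,
  Future<void> Function(String assignmentId) process,
) async {
  var offset = 0;
  var total = 0;
  var hasMore = true;
  while (hasMore) {
    final batch =
        await repo.fetchOpenAssignments(offset: offset, limit: batchSize);
    for (final id in batch) {
      await process(id);
    }
    total += batch.length;
    offset += batch.length;
    // A short (or empty) page means the table is drained.
    hasMore = batch.length == batchSize;
  }
  return total;
}
```

Only one page of assignments is referenced at any time, so memory stays bounded regardless of how many open assignments exist.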

Execution Context

Execution Tier
Tier 3

Tier 3 - 413 tasks

Can start after Tier 2 completes

Implementation Notes

Use a Stream<List<Assignment>> from the repository to enable lazy batch loading, or use offset-based pagination with a while loop and a hasMore flag; the latter is simpler and easier to test. Model EvaluationResult as a sealed class with three subtypes (Remind, Escalate, NoAction) so the routing switch is exhaustive and the compiler enforces all cases. Accumulate run results in a mutable _RunAccumulator helper class (private to the implementation file) that exposes increment methods, which keeps the main method readable. Wrap each per-assignment dispatch in a try/catch that catches Object and logs the error with the assignment ID before continuing.
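The accumulator and per-assignment error isolation described above might look like the following. The counter names on SchedulerRunResult are assumptions; only the _RunAccumulator pattern and the catch-Object rule come from the notes.

```dart
// Immutable snapshot returned to the caller; field names are assumed.
class SchedulerRunResult {
  final int remindersDispatched;
  final int escalationsDispatched;
  final int skipped;
  final int failed;
  const SchedulerRunResult(this.remindersDispatched,
      this.escalationsDispatched, this.skipped, this.failed);
  const SchedulerRunResult.empty() : this(0, 0, 0, 0);
}

// Mutable accumulator, private to the implementation file as the notes
// suggest, exposing increment methods instead of raw fields.
class _RunAccumulator {
  int _reminders = 0, _escalations = 0, _skipped = 0, _failed = 0;
  void reminderDispatched() => _reminders++;
  void escalationDispatched() => _escalations++;
  void skippedAssignment() => _skipped++;
  void failure() => _failed++;
  SchedulerRunResult toResult() =>
      SchedulerRunResult(_reminders, _escalations, _skipped, _failed);
}

/// Per-assignment error isolation: a bare catch clause catches any thrown
/// Object, logs it with the assignment ID, tallies it, and lets the run
/// continue with the next assignment.
Future<void> guarded(String assignmentId, _RunAccumulator acc,
    Future<void> Function() body) async {
  try {
    await body();
  } catch (error) {
    // The real service would use a structured logger here.
    print('dispatch failed for assignment $assignmentId: $error');
    acc.failure();
  }
}
```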

Do not use Future.wait() for the entire assignment list at once — only within each batch — to bound concurrency and memory.
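A minimal sketch of that constraint: Future.wait is applied to at most one batch of futures at a time, so concurrency is bounded by the batch size and the next batch does not start until the current one completes.

```dart
import 'dart:math' as math;

// Parallel dispatch inside a batch, sequential across batches.
Future<void> processInBatches<T>(
  List<T> items,
  int batchSize,
  Future<void> Function(T item) process,
) async {
  for (var start = 0; start < items.length; start += batchSize) {
    final end = math.min(start + batchSize, items.length);
    final batch = items.sublist(start, end);
    // All items in this batch run concurrently...
    await Future.wait(batch.map(process));
    // ...but no item from the next batch starts until this await resolves.
  }
}
```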

Testing Requirements

Unit tests using flutter_test with all dependencies mocked via mocktail. Required test scenarios: (1) empty assignment list returns SchedulerRunResult.empty(); (2) all assignments return EvaluationResult.remind → remindersDispatched count equals assignment count; (3) all assignments return EvaluationResult.escalate → escalationsDispatched count correct; (4) mixed results accumulate correctly across remind/escalate/noAction buckets; (5) one dispatch failure does not abort remaining assignments and failed assignment does not increment success counters; (6) batch size respected — verify repository paginated calls match expected batch boundaries. Integration test (optional, against Supabase local emulator): full run with 20 seeded open assignments across multiple batches.
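As a shape for scenario (5), the sketch below uses hand-rolled fakes and plain checks standing in for mocktail mocks and flutter_test's expect, so it runs with the Dart SDK alone; runScheduler is a stripped-down stand-in for the real service under test.

```dart
// Scenario 5: one dispatch failure must not abort remaining assignments,
// and the failed assignment must not increment the success counter.
class Counts {
  int reminded = 0, failed = 0;
}

Future<Counts> runScheduler(
  List<String> assignments,
  Future<void> Function(String id) dispatchReminder,
) async {
  final counts = Counts();
  for (final id in assignments) {
    try {
      await dispatchReminder(id);
      counts.reminded++;
    } catch (_) {
      counts.failed++; // failure tallied, loop continues
    }
  }
  return counts;
}

// Stand-in for expect(); the real suite would use flutter_test matchers.
void check(bool cond, String msg) {
  if (!cond) throw StateError(msg);
}
```

In the real suite the fake dispatcher would be a mocktail mock with `when(...).thenThrow(...)` stubbed for one assignment ID, and `verify` calls would confirm the remaining assignments were still dispatched.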

Component
Reminder Scheduler Service
Type: service · Priority: high
Epic Risks (3)
Impact: medium · Probability: high · Category: scope

The idempotency window (how long after a reminder is sent before another may be sent for the same assignment) is not explicitly specified. If the window is too short, duplicate reminders appear; if it is too long, a resolved and re-opened situation is never re-notified. This ambiguity could result in user-visible bugs post-launch.

Mitigation & Contingency

Mitigation: Before implementation, define the idempotency window explicitly with stakeholders: a reminder is suppressed if a same-type notification record exists with sent_at within the last (reminder_days - 1) days. Document this rule as a named constant in the service with a comment referencing the decision.
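The suppression rule above can be captured as a named constant plus a pure function, which also makes the contingency (a single constant change) concrete. Names and the DateTime-based signature are assumptions; the (reminder_days - 1) rule is the documented decision.

```dart
// The one-day slack from the agreed rule: suppress if a same-type
// notification was sent within the last (reminderDays - 1) days.
const int idempotencySlackDays = 1;

/// True when a reminder should be suppressed. `lastSentAt` is the sent_at
/// of the most recent same-type notification_log row, or null if none.
bool isSuppressed({
  required DateTime now,
  required DateTime? lastSentAt,
  required int reminderDays,
}) {
  if (lastSentAt == null) return false; // never notified before
  final window = Duration(days: reminderDays - idempotencySlackDays);
  return now.difference(lastSentAt) < window;
}
```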

Contingency: If the window is wrong in production, it is a single constant change with a hotfix deployment. The notification_log table allows re-processing without data migration.

Impact: high · Probability: medium · Category: technical

For organisations with thousands of open assignments (e.g., NHF with 1,400 chapters), the daily scheduler query over all open assignments could time out or consume excessive Supabase compute units, especially if the contact tracking query lacks proper indexing.

Mitigation & Contingency

Mitigation: Add a composite index on assignments(status, last_contact_date) before running performance tests. Use cursor-based pagination in the scheduler (query 500 rows at a time). Run a load test with 10,000 synthetic assignments as described in the feature documentation before merging.

Contingency: If the query is too slow for synchronous execution, move the evaluation to the Edge Function (cron trigger epic) and use Supabase's built-in parallelism. The service interface does not change, only the execution context.

Impact: medium · Probability: medium · Category: integration

If the push notification service fails during dispatch (FCM outage, invalid device token), the in-app notification may already be persisted while the push is silently lost. This inconsistent state makes it impossible to report accurate delivery status.

Mitigation & Contingency

Mitigation: Implement push dispatch and in-app persistence as separate operations with independent error handling. Record delivery_status as 'pending', 'delivered', or 'failed' on the notification_log row. Retry failed push deliveries up to 3 times with exponential backoff.
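The retry-with-backoff part of the mitigation might be sketched as below. The delivery-status strings match the mitigation; the delay values, the function name, and the callback signature are assumptions (in the real service the status would be written to the notification_log row).

```dart
// Up to 3 attempts with exponential backoff, returning the delivery status.
Future<String> dispatchWithRetry(
  Future<void> Function() sendPush, {
  int maxAttempts = 3,
  Duration baseDelay = const Duration(milliseconds: 200),
}) async {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await sendPush();
      return 'delivered';
    } catch (_) {
      if (attempt == maxAttempts) break; // out of attempts
      // Backoff doubles each retry: 200 ms, then 400 ms with the defaults.
      await Future.delayed(baseDelay * (1 << (attempt - 1)));
    }
  }
  return 'failed';
}
```

Keeping push dispatch behind this helper, separate from the in-app persistence, gives the two channels the independent error handling the mitigation calls for.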

Contingency: If FCM is consistently unavailable, the in-app notification is still visible to the user, providing a degraded but functional fallback. Alert on consecutive push failures via the cron trigger's error logging.