Priority: critical · Complexity: medium · Area: backend · Status: pending · Owner: backend specialist · Tier 1

Acceptance Criteria

Concrete class `ReminderEvaluationServiceImpl` implements `ReminderEvaluationService` and is registered via the dependency injection framework
When last contact date exists and days elapsed < remindAfterDays, `evaluate()` returns `ReminderEvaluationResultNone`
When last contact date exists and days elapsed >= remindAfterDays but < escalateAfterDays, `evaluate()` returns `ReminderEvaluationResultRemind`
When last contact date exists and days elapsed >= escalateAfterDays, `evaluate()` returns `ReminderEvaluationResultEscalate`
When no contact has ever been made (null last contact date), `evaluate()` treats days elapsed as a maximal sentinel value (Dart has no built-in `int.max` constant) and returns `ReminderEvaluationResultEscalate`
Days are computed using UTC dates only to avoid DST-induced off-by-one errors
The method is `async` and properly awaits all repository calls without blocking the UI thread
Repository errors propagate as typed exceptions; the service does not swallow errors silently
Thresholds are passed in as `OrgReminderThresholds` — the service does not fetch config internally
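The threshold rules above can be sketched as a pure function. This is a minimal illustration, not the project's actual code: the real result types are the `ReminderEvaluationResult*` classes named in the criteria, and the enum and function names here are stand-ins.

```dart
/// Illustrative result type; the real code uses the
/// ReminderEvaluationResult* classes named in the acceptance criteria.
enum ReminderEvaluationResult { none, remind, escalate }

ReminderEvaluationResult evaluateDays({
  int? daysSinceContact, // null = no contact has ever been made
  required int remindAfterDays,
  required int escalateAfterDays,
}) {
  // Documented precondition from the implementation notes.
  assert(escalateAfterDays >= remindAfterDays);
  // No-contact-ever is treated as effectively infinite elapsed days
  // (named sentinel; assumes native 64-bit ints).
  final days = daysSinceContact ?? (1 << 62);
  if (days >= escalateAfterDays) return ReminderEvaluationResult.escalate;
  if (days >= remindAfterDays) return ReminderEvaluationResult.remind;
  return ReminderEvaluationResult.none;
}
```

Note the ordering: checking the escalate threshold first means the remind branch is only reached for the between-thresholds band, matching the `>= remindAfterDays but < escalateAfterDays` criterion.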

Technical Requirements

Frameworks
Flutter
Riverpod (for DI/provider registration)
Dart async/await
APIs
AssignmentContactTrackingRepository (internal)
ReminderConfigRepository (internal)
Data Models
Assignment
OrgReminderThresholds
ContactTrackingRecord
Performance Requirements
Single repository read per evaluate() call — no N+1 queries
Date arithmetic must operate on UTC `DateTime` values exclusively (via `toUtc()` or the `DateTime.utc` constructor) to prevent timezone bugs
evaluate() must complete in under 500ms under normal Supabase latency
Security Requirements
Assignment ID must be validated as non-empty before issuing repository queries
Service must not log full contact records — only assignment ID and computed day count
Row-level security (RLS) on Supabase tables must be respected; service must not bypass org scoping
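The non-empty ID check above might look like the following minimal guard, run before any repository query; the function name is illustrative:

```dart
/// Rejects empty or whitespace-only assignment IDs before any
/// repository query is issued. Throws [ArgumentError] on failure.
void validateAssignmentId(String assignmentId) {
  if (assignmentId.trim().isEmpty) {
    throw ArgumentError.value(
        assignmentId, 'assignmentId', 'must be non-empty');
  }
}
```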

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Inject a clock parameter (`DateTime Function() clock = DateTime.now`) to make date arithmetic fully testable without real-time dependencies. Compute `daysSinceContact = clock().toUtc().difference(lastContactDate.toUtc()).inDays`. Use a named sentinel constant for no-contact-ever (Dart has no `int.max`; e.g. `const int kNeverContactedDays = 1 << 62;`) rather than a magic number. Register via a Riverpod `Provider` (or `riverpod_annotation`) in the reminders feature module.
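A minimal sketch of the injectable clock, assuming the constructor shape is otherwise free to choose:

```dart
/// Sketch: service fragment showing clock injection and UTC day arithmetic.
class ReminderEvaluationServiceImpl {
  ReminderEvaluationServiceImpl({DateTime Function()? clock})
      : _clock = clock ?? DateTime.now;

  final DateTime Function() _clock;

  /// Whole days elapsed since [lastContactDate], computed entirely in UTC
  /// so DST transitions cannot shift the count by one.
  int daysSinceContact(DateTime lastContactDate) =>
      _clock().toUtc().difference(lastContactDate.toUtc()).inDays;
}
```

In production the default `DateTime.now` is used; in tests a frozen closure such as `() => DateTime.utc(2024, 3, 15)` pins every computation.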

The thresholds are passed in from the caller (not fetched inside the service) to keep this service stateless and composable. Document the precondition that `escalateAfterDays >= remindAfterDays`; add a debug-mode `assert` inside the method.

Testing Requirements

Unit tests (flutter_test) covering all five threshold scenarios: below-remind, at-remind-boundary, between-remind-and-escalate, at-escalate-boundary, and no-contact-ever. Mock `AssignmentContactTrackingRepository` using Mockito or manual fakes — do not hit real Supabase. Verify UTC date arithmetic with a fixed clock (inject `DateTime Function()` as a clock dependency). Verify that repository exceptions propagate correctly.
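The five scenarios can be pinned down with a fixed clock. The sketch below exercises a stand-in pure function rather than the real service (so all names are illustrative), and uses `package:test` syntax, which `flutter_test` shares:

```dart
import 'package:test/test.dart';

enum Result { none, remind, escalate }

// Stand-in for the service's evaluation logic, with "now" injected.
Result evaluate(DateTime now, DateTime? lastContact,
    {required int remindAfterDays, required int escalateAfterDays}) {
  final days = lastContact == null
      ? (1 << 62) // no-contact-ever sentinel
      : now.toUtc().difference(lastContact.toUtc()).inDays;
  if (days >= escalateAfterDays) return Result.escalate;
  if (days >= remindAfterDays) return Result.remind;
  return Result.none;
}

void main() {
  final now = DateTime.utc(2024, 6, 15); // fixed clock: no real-time flake
  Result run(DateTime? last) =>
      evaluate(now, last, remindAfterDays: 7, escalateAfterDays: 14);

  test('below-remind', () =>
      expect(run(DateTime.utc(2024, 6, 12)), Result.none)); // 3 days
  test('at-remind-boundary', () =>
      expect(run(DateTime.utc(2024, 6, 8)), Result.remind)); // 7 days
  test('between-remind-and-escalate', () =>
      expect(run(DateTime.utc(2024, 6, 5)), Result.remind)); // 10 days
  test('at-escalate-boundary', () =>
      expect(run(DateTime.utc(2024, 6, 1)), Result.escalate)); // 14 days
  test('no-contact-ever', () =>
      expect(run(null), Result.escalate));
}
```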

Aim for 100% branch coverage on the evaluation logic.

Component
Reminder Evaluation Service
Type: service · Complexity: medium
Epic Risks (3)
Risk 1: scope (medium impact, high probability)

The idempotency window (how long after a reminder is sent before another can be sent for the same assignment) is not explicitly specified. If the window is too short, duplicate reminders appear; if too long, a resolved and then re-opened situation is never re-notified. This ambiguity could result in user-visible bugs post-launch.

Mitigation & Contingency

Mitigation: Before implementation, define the idempotency window explicitly with stakeholders: a reminder is suppressed if a same-type notification record exists with sent_at within the last (reminder_days - 1) days. Document this rule as a named constant in the service with a comment referencing the decision.
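The suppression rule proposed in the mitigation can be captured as one documented helper; the function name and parameter shapes here are assumptions, not existing code:

```dart
/// Rule (per stakeholder decision): a reminder is suppressed if a
/// same-type notification was sent within the last
/// (reminderDays - 1) days.
bool shouldSuppressReminder({
  required DateTime now,
  required DateTime? lastSameTypeSentAt, // null = none ever sent
  required int reminderDays,
}) {
  if (lastSameTypeSentAt == null) return false;
  final windowDays = reminderDays - 1; // the named (reminder_days - 1) window
  return now.toUtc().difference(lastSameTypeSentAt.toUtc()).inDays <
      windowDays;
}
```

Keeping the `- 1` in exactly one place makes the contingency below literally a one-line change.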

Contingency: If the window is wrong in production, it is a single constant change with a hotfix deployment. The notification_log table allows re-processing without data migration.

Risk 2: technical (high impact, medium probability)

For organisations with thousands of open assignments (e.g., NHF with 1,400 chapters), the daily scheduler query over all open assignments could time out or consume excessive Supabase compute units, especially if the contact tracking query lacks proper indexing.

Mitigation & Contingency

Mitigation: Add a composite index on assignments(status, last_contact_date) before running performance tests. Use cursor-based pagination in the scheduler (query 500 rows at a time). Run a load test with 10,000 synthetic assignments as described in the feature documentation before merging.
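Keyset (cursor) pagination for the scheduler scan could look like the following sketch; `fetchPage` stands in for an assumed repository method returning ids strictly after the cursor, in ascending order:

```dart
/// Scans all open assignments in fixed-size pages so no single query
/// touches more than [pageSize] rows. `fetchPage(afterId, limit)` must
/// return ids > afterId in ascending order (keyset pagination).
Future<void> scanOpenAssignments(
  Future<List<String>> Function(String? afterId, int limit) fetchPage,
  Future<void> Function(String assignmentId) evaluateOne,
) async {
  const pageSize = 500; // matches the mitigation: 500 rows per query
  String? cursor;
  while (true) {
    final page = await fetchPage(cursor, pageSize);
    if (page.isEmpty) break;
    for (final id in page) {
      await evaluateOne(id);
    }
    cursor = page.last; // next page starts after the last id seen
  }
}
```

Keyset pagination stays fast regardless of how deep the scan goes, unlike OFFSET-based paging, which degrades linearly with offset.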

Contingency: If the query is too slow for synchronous execution, move the evaluation to the Edge Function (cron trigger epic) and use Supabase's built-in parallelism. The service interface does not change, only the execution context.

Risk 3: integration (medium impact, medium probability)

If the push notification service fails (FCM outage, invalid device token) during dispatch, the in-app notification may already be persisted while the push is silently lost. This inconsistent state makes it impossible to report an accurate delivery status.

Mitigation & Contingency

Mitigation: Implement push dispatch and in-app persistence as separate operations with independent error handling. Record delivery_status as 'pending', 'delivered', or 'failed' on the notification_log row. Retry failed push deliveries up to 3 times with exponential backoff.
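Bounded retry with exponential backoff might be sketched as follows; `sendPush` is a placeholder for the real FCM call, and the returned strings match the proposed `delivery_status` values (a real implementation would also persist 'pending' before the first attempt):

```dart
/// Attempts [sendPush] up to [maxAttempts] times with exponential
/// backoff (baseDelay, 2x, 4x, ...). Returns the resulting
/// delivery_status value: 'delivered' or 'failed'.
Future<String> dispatchWithRetry(
  Future<void> Function() sendPush, {
  int maxAttempts = 3,
  Duration baseDelay = const Duration(seconds: 1),
}) async {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await sendPush();
      return 'delivered';
    } catch (_) {
      if (attempt == maxAttempts) return 'failed';
      // Backoff doubles each attempt: base, 2*base, 4*base, ...
      await Future<void>.delayed(baseDelay * (1 << (attempt - 1)));
    }
  }
  return 'failed'; // unreachable; satisfies the analyzer
}
```

Because in-app persistence happens independently, a 'failed' status here degrades to the in-app-only fallback described in the contingency rather than losing the notification entirely.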

Contingency: If FCM is consistently unavailable, the in-app notification is still visible to the user, providing a degraded but functional fallback. Alert on consecutive push failures via the cron trigger's error logging.