High priority · Low complexity · Backend · Pending · Backend specialist · Tier 5

Acceptance Criteria

A structured log entry is emitted at the start of each scheduler run containing: ISO-8601 UTC timestamp, total open assignment count, and run identifier
Per-batch log entries include: batch index (0-based), batch size, and the distribution of evaluation outcomes (remind_count, escalate_count, none_count) for that batch
Each dispatcher call (reminder or escalation) produces a log entry recording: assignment_id, dispatcher type invoked, and success/failure result
When the duplicate-run idempotency guard fires, a single WARN-level log entry is emitted with the active run identifier and skipped timestamp — no further processing logs appear
A summary log entry is emitted at run completion containing the full SchedulerRunResult fields: total_evaluated, total_reminded, total_escalated, total_skipped, duration_ms
All log entries use a consistent structured format (key=value pairs or JSON) compatible with Supabase/server-side log aggregation
No personally identifiable information (PII) such as peer mentor names or contact details appears in any log entry — only IDs and counts
Log level conventions: INFO for normal flow, WARN for idempotency skips, ERROR for dispatcher failures
Logging does not alter the observable behaviour or return values of ReminderSchedulerService
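The criteria above can be sketched as a JSON-lines formatter. This is a minimal illustration, not a mandated API: field names beyond those listed in the criteria (`event`, `run_id`) are assumptions.

```dart
import 'dart:convert';

/// Sketch of the JSON-lines log shape the acceptance criteria describe.
/// Field names other than those the criteria list are illustrative.
String formatEntry(String level, String event, Map<String, Object?> fields) =>
    jsonEncode({'level': level, 'event': event, ...fields});

void main() {
  // Run-start entry: ISO-8601 UTC timestamp, open count, run identifier.
  print(formatEntry('INFO', 'scheduler_run_started', {
    'timestamp': DateTime.utc(2024, 1, 10, 6).toIso8601String(),
    'run_id': 'run-0001', // opaque identifier only, no PII
    'open_assignment_count': 42,
  }));
  // Run-completion summary carrying the SchedulerRunResult fields.
  print(formatEntry('INFO', 'scheduler_run_completed', {
    'run_id': 'run-0001',
    'total_evaluated': 42,
    'total_reminded': 5,
    'total_escalated': 1,
    'total_skipped': 0,
    'duration_ms': 87,
  }));
}
```

JSON-lines output like this can be ingested by Supabase's log aggregation without a custom parser.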

Technical Requirements

Frameworks
Dart
Supabase (Edge Functions or server-side Dart runtime)
APIs
Supabase logging / stdout structured log sink
Data models
Assignment
SchedulerRunResult
ReminderEvaluationOutcome
Performance requirements
Log emission must add < 1 ms of overhead per log entry
Logging must not block the scheduler's main path (synchronous string formatting is acceptable; the I/O sink should be asynchronous)
Security requirements
Strip all PII (names, email, phone) from log payloads — log only opaque IDs
Log entries must not contain assignment content or contact notes
Ensure the log sink is not publicly accessible; restrict it to internal monitoring roles in Supabase

Execution Context

Execution Tier
Tier 5

Tier 5 - 253 tasks

Can start after Tier 4 completes

Implementation Notes

Introduce a thin `SchedulerLogger` abstraction (interface + default implementation) injected into ReminderSchedulerService so tests can swap in a capturing stub without touching production I/O. Prefer structured key=value or JSON-lines format over interpolated strings — this makes log aggregation and alerting rule creation straightforward on the Supabase side. Place all log-level constants in one file to keep conventions consistent. Avoid calling `toString()` on domain objects directly in log statements; instead, project only the fields you need (e.g., `assignment.id`, `result.totalEvaluated`) to prevent accidental PII leakage if the domain object's `toString` is ever updated to include sensitive fields.
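A minimal sketch of that abstraction follows. The class and method names (`SchedulerLogger`, `StdoutSchedulerLogger`, `formatLine`) are illustrative, not an existing API.

```dart
/// Builds a key=value line; callers pass only opaque IDs and counts,
/// never domain objects, which enforces the no-PII rule at the call site.
String formatLine(String level, String event, Map<String, Object?> fields) {
  final pairs = fields.entries.map((e) => '${e.key}=${e.value}').join(' ');
  return 'level=$level event=$event $pairs'.trimRight();
}

/// Thin logging interface injected into ReminderSchedulerService.
abstract class SchedulerLogger {
  void info(String event, Map<String, Object?> fields);
  void warn(String event, Map<String, Object?> fields);
  void error(String event, Map<String, Object?> fields);
}

/// Default implementation: writes structured key=value lines to stdout.
class StdoutSchedulerLogger implements SchedulerLogger {
  @override
  void info(String event, Map<String, Object?> fields) =>
      print(formatLine('INFO', event, fields));
  @override
  void warn(String event, Map<String, Object?> fields) =>
      print(formatLine('WARN', event, fields));
  @override
  void error(String event, Map<String, Object?> fields) =>
      print(formatLine('ERROR', event, fields));
}
```

With the interface in the service's constructor, unit tests can inject a capturing implementation and production code keeps a single stdout sink.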

Testing Requirements

Unit tests (flutter_test / dart test) must verify: (1) log entries are emitted in the correct order for a standard run, (2) the idempotency-skip path emits exactly one WARN entry and no further logs, (3) a dispatcher failure produces an ERROR entry with the correct assignment_id, (4) no PII fields appear in any captured log string. Use a mock/stub log sink to capture entries in-memory. Do not require a live Supabase connection. Aim for 100% branch coverage of logging call sites.
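The capturing stub these tests rely on can be as small as the sketch below (names are illustrative); the assertions mirror the idempotency-skip and PII checks listed above.

```dart
/// One captured log entry: level plus the formatted line.
class CapturedEntry {
  final String level;
  final String line;
  CapturedEntry(this.level, this.line);
}

/// In-memory sink the tests swap in for the production stdout sink.
class CapturingLogSink {
  final entries = <CapturedEntry>[];
  void write(String level, String line) => entries.add(CapturedEntry(level, line));
}

void main() {
  final sink = CapturingLogSink();
  // Simulate the idempotency-skip path: exactly one WARN, nothing after it.
  sink.write('WARN', 'event=run_skipped run_id=run-0001');

  assert(sink.entries.length == 1);
  assert(sink.entries.single.level == 'WARN');
  // PII check (illustrative): no email-like strings in any captured line.
  assert(!sink.entries.any((e) => e.line.contains('@')));
  print('ok');
}
```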

Component
Reminder Scheduler Service
Service · High
Epic Risks (3)
Medium impact · High probability · Scope

The idempotency window (how long after a reminder is sent before another can be sent for the same assignment) is not explicitly specified. If the window is too short, duplicate reminders appear; if it is too long, a resolved and re-opened assignment is never re-notified. This ambiguity could result in user-visible bugs post-launch.

Mitigation & Contingency

Mitigation: Before implementation, define the idempotency window explicitly with stakeholders: a reminder is suppressed if a same-type notification record exists with sent_at within the last (reminder_days - 1) days. Document this rule as a named constant in the service with a comment referencing the decision.
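The rule reduces to a single named constant, sketched below; the `reminderDays` value is illustrative, pending the stakeholder decision.

```dart
/// Illustrative value; the real number comes from the stakeholder decision.
const int reminderDays = 7;

/// A reminder is suppressed if a same-type notification record has a
/// sent_at within the last (reminderDays - 1) days.
const Duration idempotencyWindow = Duration(days: reminderDays - 1);

bool shouldSuppress(DateTime lastSentAt, DateTime now) =>
    now.difference(lastSentAt) < idempotencyWindow;

void main() {
  final now = DateTime.utc(2024, 1, 10);
  print(shouldSuppress(DateTime.utc(2024, 1, 9), now)); // within window: true
  print(shouldSuppress(DateTime.utc(2024, 1, 1), now)); // outside window: false
}
```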

Contingency: If the window is wrong in production, it is a single constant change with a hotfix deployment. The notification_log table allows re-processing without data migration.

High impact · Medium probability · Technical

For organisations with thousands of open assignments (e.g., NHF with 1,400 chapters), the daily scheduler query over all open assignments could time out or consume excessive Supabase compute units, especially if the contact tracking query lacks proper indexing.

Mitigation & Contingency

Mitigation: Add a composite index on assignments(status, last_contact_date) before running performance tests. Use cursor-based pagination in the scheduler (query 500 rows at a time). Run a load test with 10,000 synthetic assignments as described in the feature documentation before merging.
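The cursor-based pagination loop can be sketched in pure Dart, with `fetchPage` standing in for a Supabase query along the lines of `from('assignments').select().eq('status', 'open').gt('id', cursor).order('id').limit(batchSize)` (the table and column names are this epic's, the query shape an assumption).

```dart
/// Stand-in for one page of the paginated Supabase query; returns the ids
/// of up to [limit] open assignments with id greater than [afterId].
typedef FetchPage = Future<List<int>> Function(int? afterId, int limit);

/// Walks all open assignments in fixed-size batches so no single query
/// touches more than [batchSize] rows; returns the count processed.
Future<int> processAllOpenAssignments(FetchPage fetchPage,
    {int batchSize = 500}) async {
  var processed = 0;
  int? cursor;
  while (true) {
    final page = await fetchPage(cursor, batchSize);
    if (page.isEmpty) break;
    processed += page.length; // evaluate each assignment here
    cursor = page.last; // advance the cursor past this batch
  }
  return processed;
}

Future<void> main() async {
  // Synthetic dataset standing in for 1,234 open assignments.
  final ids = List<int>.generate(1234, (i) => i + 1);
  Future<List<int>> fetchPage(int? afterId, int limit) async =>
      ids.where((id) => afterId == null || id > afterId).take(limit).toList();
  print(await processAllOpenAssignments(fetchPage)); // → 1234
}
```

Keyset (cursor) pagination avoids the deep-offset cost that `OFFSET`-based paging incurs on large tables, which matters at the 10,000-assignment load-test scale.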

Contingency: If the query is too slow for synchronous execution, move the evaluation to the Edge Function (cron trigger epic) and use Supabase's built-in parallelism. The service interface does not change, only the execution context.

Medium impact · Medium probability · Integration

If the push notification service fails (FCM outage, invalid device token) during dispatch, the in-app notification may already be persisted while the push is silently lost. The resulting inconsistent state makes it impossible to report accurate delivery status.

Mitigation & Contingency

Mitigation: Implement push dispatch and in-app persistence as separate operations with independent error handling. Record delivery_status as 'pending', 'delivered', or 'failed' on the notification_log row. Retry failed push deliveries up to 3 times with exponential backoff.
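The retry policy can be sketched as below; `sendPush` is a stand-in for the real FCM dispatch call, and the returned string maps onto the `delivery_status` values named above.

```dart
/// Attempts the push up to [maxAttempts] times with exponential backoff
/// (baseDelay, 2x baseDelay, 4x baseDelay, ...); returns the resulting
/// delivery_status value for the notification_log row.
Future<String> dispatchWithRetry(Future<void> Function() sendPush,
    {int maxAttempts = 3,
    Duration baseDelay = const Duration(milliseconds: 200)}) async {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await sendPush();
      return 'delivered';
    } catch (_) {
      if (attempt == maxAttempts) return 'failed';
      // Double the wait after each failed attempt.
      await Future<void>.delayed(baseDelay * (1 << (attempt - 1)));
    }
  }
  return 'failed'; // unreachable; satisfies the analyzer
}

Future<void> main() async {
  var calls = 0;
  // Fails twice, then succeeds: delivery_status ends up 'delivered'.
  Future<void> flaky() async {
    calls++;
    if (calls < 3) throw Exception('FCM unavailable');
  }
  print(await dispatchWithRetry(flaky)); // → delivered
}
```

Keeping the retry inside the dispatch step, separate from in-app persistence, is what lets the notification_log row carry an honest per-channel status.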

Contingency: If FCM is consistently unavailable, the in-app notification is still visible to the user, providing a degraded but functional fallback. Alert on consecutive push failures via the cron trigger's error logging.