Implement activity look-back window query in Scheduler
epic-scenario-based-follow-up-prompts-scheduler-and-ui-task-002 — Implement the fetchActivitiesInWindow method in the Scheduler Service that queries Supabase for all activities within the configured look-back window (default 7 days) for a given chapter. Apply RLS-safe queries using the authenticated client and return typed ActivityRecord models.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
Use a Riverpod provider or a plain Dart class injected via Provider — do not couple SchedulerService to BuildContext. The method signature should be: Future<List<ActivityRecord>> fetchActivitiesInWindow(String chapterId, {int windowDays = 7}). Compute windowStart as DateTime.now().toUtc().subtract(Duration(days: windowDays)) before the query. Use supabase.from('activities').select('id, peer_mentor_id, activity_type_id, date, organization_id, duration_minutes, location').eq('organization_unit_id', chapterId).gte('date', windowStart.toIso8601String()).order('date', ascending: false).
Map each element of the response with ActivityRecord.fromJson(). Wrap the query in a try/catch and rethrow failures as a SchedulerException. Avoid calling DateTime.now() inside a loop — compute it once and pass it down.
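The notes above can be sketched as follows. This is a sketch only: ActivityRecord, its fromJson() factory, and the SchedulerException constructor shape are assumed to exist elsewhere in the codebase, and the injected SupabaseClient comes from supabase_flutter.

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

class SchedulerService {
  SchedulerService(this._client);

  final SupabaseClient _client;

  Future<List<ActivityRecord>> fetchActivitiesInWindow(
    String chapterId, {
    int windowDays = 7,
  }) async {
    // Compute once, before the query, so every comparison uses the same instant.
    final windowStart =
        DateTime.now().toUtc().subtract(Duration(days: windowDays));
    try {
      final rows = await _client
          .from('activities')
          .select(
              'id, peer_mentor_id, activity_type_id, date, organization_id, duration_minutes, location')
          .eq('organization_unit_id', chapterId)
          .gte('date', windowStart.toIso8601String())
          .order('date', ascending: false);
      return (rows as List)
          .map((row) => ActivityRecord.fromJson(row as Map<String, dynamic>))
          .toList();
    } catch (e) {
      // 'cause' is a hypothetical named parameter on SchedulerException.
      throw SchedulerException('fetch_failed', cause: e);
    }
  }
}
```

Because the method takes the client via the constructor rather than reading a global, tests can substitute a client pointed at a local dev instance without touching widget code.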
Testing Requirements
Unit tests using flutter_test with a mocked SupabaseClient (mockito or a manual stub). Test cases: (1) returns a correctly typed ActivityRecord list when Supabase responds with a valid JSON array; (2) returns an empty list when Supabase returns an empty array; (3) applies the correct date filter — assert the .gte call received a windowStart argument within 1 second of the expected value; (4) throws SchedulerException with code 'fetch_failed' on a Supabase error; (5) boundary condition — an activity dated exactly at windowStart is included. Integration test: connect to a local Supabase dev instance, seed 3 activities (2 in the window, 1 outside), and assert only 2 are returned.
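A sketch of the integration test described above. It assumes a local `supabase start` instance; the URL, key, SchedulerService constructor, and the seed rows' column values are placeholders, not project values.

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:supabase/supabase.dart';

void main() {
  test('returns only the 2 activities inside the 7-day window', () async {
    // Local-dev URL; a service-role key lets seeding bypass RLS.
    final client =
        SupabaseClient('http://localhost:54321', 'local-service-role-key');
    final now = DateTime.now().toUtc();

    Map<String, dynamic> seed(int daysAgo) => {
          'organization_unit_id': 'chapter-test',
          'date': now.subtract(Duration(days: daysAgo)).toIso8601String(),
          // Remaining NOT NULL columns would be filled in here.
        };

    // 2 activities in the window, 1 outside it.
    await client.from('activities').insert([seed(1), seed(6), seed(10)]);

    final result =
        await SchedulerService(client).fetchActivitiesInWindow('chapter-test');
    expect(result, hasLength(2));
  });
}
```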
If the scheduler runs concurrently (e.g., two overlapping cron invocations due to edge function retry), duplicate prompts could be dispatched before the first run's history records are committed, breaking the deduplication guarantee.
Mitigation & Contingency
Mitigation: Use a Postgres advisory lock or unique constraint on (user_id, scenario_id, activity_ref) in the prompt history table to make concurrent writes idempotent; design the scheduler to check history inside a transaction.
Contingency: If concurrency issues persist in production, add a distributed lock via Supabase Edge Function concurrency limit (max_instances=1) for the evaluation function as a hard guard.
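A sketch of the unique-constraint mitigation as a migration plus an idempotent write; the table and column names are assumptions taken from the risk description, not the actual schema.

```sql
-- Dedup key: one prompt per (user, scenario, triggering activity).
ALTER TABLE prompt_history
  ADD CONSTRAINT prompt_history_dedup_key
  UNIQUE (user_id, scenario_id, activity_ref);

-- Concurrent scheduler runs insert with ON CONFLICT so the second writer
-- becomes a no-op instead of dispatching a duplicate prompt.
INSERT INTO prompt_history (user_id, scenario_id, activity_ref, sent_at)
VALUES ($1, $2, $3, now())
ON CONFLICT (user_id, scenario_id, activity_ref) DO NOTHING
RETURNING id;  -- returns no row when the prompt was already recorded
```

Dispatching only when the INSERT returns a row keeps the check and the write in a single statement, which is what makes overlapping invocations safe without an advisory lock.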
Coordinators may find scenario configuration unclear if trigger conditions are expressed as raw JSON or in technical terminology, leading to misconfiguration and irrelevant prompts being sent to peer mentors.
Mitigation & Contingency
Mitigation: Design the ScenarioConfigurationScreen to display human-readable descriptions of each template's trigger condition (e.g., 'Send 3 days after first contact if wellbeing concern was flagged') rather than raw rule properties; validate with an HLF coordinator in a design review before implementation.
Contingency: If coordinators still misconfigure rules after launch, add a preview mode that shows a simulated prompt based on a test activity before the rule is enabled.
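One way the human-readable description could be derived is a pure mapping from the stored rule to a sentence; the field names below (delay_days, after_event, condition) are hypothetical, not the actual rule schema.

```dart
// Renders a trigger rule as the sentence shown on ScenarioConfigurationScreen.
String describeTrigger(Map<String, dynamic> rule) {
  final days = rule['delay_days'] as int;
  final event = rule['after_event'] as String;
  final condition = rule['condition'] as String?;
  final base = 'Send $days day${days == 1 ? '' : 's'} after $event';
  return condition == null ? base : '$base if $condition';
}

void main() {
  print(describeTrigger({
    'delay_days': 3,
    'after_event': 'first contact',
    'condition': 'wellbeing concern was flagged',
  }));
  // → Send 3 days after first contact if wellbeing concern was flagged
}
```

Keeping this as a pure function makes it trivial to unit test against every shipped template, which is a cheap guard against the misconfiguration risk described above.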