Priority: Critical · Complexity: Medium · Area: Backend · Status: Pending · Owner: Backend Specialist · Tier 1

Acceptance Criteria

fetchActivitiesInWindow(chapterId, windowDays) method exists on SchedulerService and is callable with typed parameters
Default look-back window of 7 days is used when windowDays is not explicitly provided
Query filters activities by organization_unit_id (chapter) and date >= (now - windowDays)
Query uses the authenticated Supabase client so RLS policies are enforced — no service-role bypass on mobile
Returned list is typed as List<ActivityRecord> with all required fields mapped (id, peer_mentor_id, activity_type_id, date, organization_id)
Empty result returns an empty List<ActivityRecord> without throwing
Network or Supabase error is caught, logged, and rethrown as a typed SchedulerException with a meaningful message
Query performance: indexed on date and organization_unit_id — no full table scans on the activities table
Unit test passes for: normal window with results, empty window, boundary date (exactly 7 days ago inclusive), and Supabase error simulation
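One way to shape the typed exception named in the criteria above — field names here are illustrative, not prescribed by this task:

```dart
/// Typed error surfaced by SchedulerService, per the acceptance criteria.
/// The exact field set is an assumption; adjust to the project's error style.
class SchedulerException implements Exception {
  SchedulerException(this.code, this.message, [this.cause]);

  final String code;    // machine-readable code, e.g. 'fetch_failed'
  final String message; // meaningful human-readable context for logs
  final Object? cause;  // the original Supabase or network error

  @override
  String toString() => 'SchedulerException($code): $message';
}
```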

Technical Requirements

frameworks
Flutter
Riverpod
supabase_flutter
apis
Supabase PostgreSQL 15 REST/PostgREST — activities table
Supabase Auth (authenticated client)
data models
activity
assignment
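A minimal sketch of the activity model, mapping the required fields listed in the acceptance criteria; the real model may carry additional columns such as duration_minutes and location:

```dart
/// Typed row from the activities table. Column names follow the
/// acceptance criteria; this is a sketch, not the canonical model.
class ActivityRecord {
  const ActivityRecord({
    required this.id,
    required this.peerMentorId,
    required this.activityTypeId,
    required this.date,
    required this.organizationId,
  });

  final String id;
  final String peerMentorId;
  final String activityTypeId;
  final DateTime date;
  final String organizationId;

  factory ActivityRecord.fromJson(Map<String, dynamic> json) =>
      ActivityRecord(
        id: json['id'] as String,
        peerMentorId: json['peer_mentor_id'] as String,
        activityTypeId: json['activity_type_id'] as String,
        date: DateTime.parse(json['date'] as String),
        organizationId: json['organization_id'] as String,
      );
}
```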
performance requirements
Query must complete in under 800ms for chapters with up to 500 activities in the window
Use .select() with explicit column list — avoid SELECT * to reduce payload size
Apply .gte('date', windowStart.toIso8601String()) filter server-side, not client-side
security requirements
Always use the user-scoped Supabase client (supabase.auth.currentSession) — never the service role key on mobile
RLS on activities table must restrict results to the authenticated user's organization
Do not log full activity records — log only counts and IDs in verbose mode
Validate chapterId is a valid UUID before issuing the query to prevent injection
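A hypothetical guard for the chapterId validation requirement; the regex accepts the canonical 8-4-4-4-12 UUID form, case-insensitive:

```dart
// Canonical UUID shape: 8-4-4-4-12 hex groups.
final _uuidPattern = RegExp(
  r'^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$',
  caseSensitive: false,
);

/// Throws before any query is issued if chapterId is not a UUID.
void assertValidChapterId(String chapterId) {
  if (!_uuidPattern.hasMatch(chapterId)) {
    throw ArgumentError.value(chapterId, 'chapterId', 'must be a valid UUID');
  }
}
```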

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Use a Riverpod Provider or a plain Dart class injected via Provider — do not couple SchedulerService to BuildContext. The method signature should be: Future<List<ActivityRecord>> fetchActivitiesInWindow(String chapterId, {int windowDays = 7}). Compute windowStart as DateTime.now().toUtc().subtract(Duration(days: windowDays)) before the query. Use supabase.from('activities').select('id, peer_mentor_id, activity_type_id, date, organization_id, duration_minutes, location').eq('organization_unit_id', chapterId).gte('date', windowStart.toIso8601String()).order('date', ascending: false).

Map response with ActivityRecord.fromJson(). Wrap in try/catch and rethrow as SchedulerException. Avoid calling DateTime.now() inside a loop — compute once and pass down.
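The notes above can be sketched end to end as follows, assuming ActivityRecord and SchedulerException exist elsewhere with the shapes this task describes:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

class SchedulerService {
  SchedulerService(this._client);

  // User-scoped client, so RLS policies are enforced (no service-role key).
  final SupabaseClient _client;

  Future<List<ActivityRecord>> fetchActivitiesInWindow(
    String chapterId, {
    int windowDays = 7, // default 7-day look-back window
  }) async {
    // Computed once, before the query — never inside a loop.
    final windowStart =
        DateTime.now().toUtc().subtract(Duration(days: windowDays));
    try {
      final rows = await _client
          .from('activities')
          .select('id, peer_mentor_id, activity_type_id, date, organization_id')
          .eq('organization_unit_id', chapterId)
          .gte('date', windowStart.toIso8601String())
          .order('date', ascending: false);
      // An empty response maps to an empty list; nothing is thrown.
      return rows.map(ActivityRecord.fromJson).toList();
    } catch (e) {
      // Real implementation should log counts/IDs only, never full records.
      throw SchedulerException('fetch_failed', 'could not fetch activities', e);
    }
  }
}
```

The select column list here is trimmed to the required fields from the acceptance criteria; extend it with duration_minutes and location as the implementation notes suggest.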

Testing Requirements

Unit tests using flutter_test with a mocked SupabaseClient (mockito or manual stub). Test cases: (1) returns correctly typed ActivityRecord list when Supabase responds with valid JSON array; (2) returns empty list when Supabase returns empty array; (3) applies correct date filter — assert .gte call received windowStart argument within 1 second of expected; (4) throws SchedulerException with code 'fetch_failed' on Supabase error; (5) boundary condition — activity dated exactly at windowStart is included. Integration test: connect to local Supabase dev instance, seed 3 activities (2 in window, 1 outside), assert only 2 returned.
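The mocked-client cases above need mockito stubs of the query builder; the boundary and parsing rules, however, can be pinned down with plain flutter_test assertions like this sketch:

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('gte boundary: activity dated exactly at windowStart is included', () {
    final windowStart = DateTime.utc(2024, 5, 1);
    final boundaryActivity = DateTime.utc(2024, 5, 1);
    // .gte('date', ...) keeps rows where date >= windowStart,
    // so the boundary instant itself passes the filter.
    expect(!boundaryActivity.isBefore(windowStart), isTrue);
  });

  test('ISO-8601 date column parses to a UTC DateTime', () {
    final parsed = DateTime.parse('2024-05-01T10:00:00Z');
    expect(parsed.isUtc, isTrue);
  });
}
```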

Component
Scenario Prompt Scheduler Service (service, high priority)
Epic Risks (2)
Risk 1: Technical (high impact, medium probability)

If the scheduler runs concurrently (e.g., two overlapping cron invocations due to edge function retry), duplicate prompts could be dispatched before the first run's history records are committed, breaking the deduplication guarantee.

Mitigation & Contingency

Mitigation: Use a Postgres advisory lock or unique constraint on (user_id, scenario_id, activity_ref) in the prompt history table to make concurrent writes idempotent; design the scheduler to check history inside a transaction.
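Assuming the unique constraint on (user_id, scenario_id, activity_ref) is in place, the history write can be made idempotent from Dart with an upsert; table and column names below are taken from the mitigation text, the helper itself is hypothetical:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Records a prompt dispatch; a concurrent duplicate write becomes a no-op
/// because the row conflicts on the unique (user_id, scenario_id,
/// activity_ref) constraint instead of inserting twice.
Future<void> recordPromptDispatch(
  SupabaseClient client, {
  required String userId,
  required String scenarioId,
  required String activityRef,
}) async {
  await client.from('prompt_history').upsert(
    {
      'user_id': userId,
      'scenario_id': scenarioId,
      'activity_ref': activityRef,
    },
    onConflict: 'user_id,scenario_id,activity_ref',
    ignoreDuplicates: true, // keep the first run's row, drop the retry's
  );
}
```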

Contingency: If concurrency issues persist in production, add a distributed lock via Supabase Edge Function concurrency limit (max_instances=1) for the evaluation function as a hard guard.

Risk 2: Scope (medium impact, medium probability)

Coordinators may find scenario configuration unclear if trigger conditions are expressed as raw JSON or technical terminology, leading to misconfiguration and irrelevant prompts being sent to peer mentors.

Mitigation & Contingency

Mitigation: Design the ScenarioConfigurationScreen to display human-readable descriptions of each template's trigger condition (e.g., 'Send 3 days after first contact if wellbeing concern was flagged') rather than raw rule properties; validate with an HLF coordinator in a design review before implementation.

Contingency: If coordinators still misconfigure rules after launch, add a preview mode that shows a simulated prompt based on a test activity before the rule is enabled.