Priority: critical | Complexity: medium | Type: integration | Status: pending | Assignee: integration specialist | Tier 2

Acceptance Criteria

SchedulerService.runPipeline() calls ruleEngine.evaluate(activity) for every ActivityRecord returned by fetchActivitiesInWindow
evaluate() returns either a ScenarioPrompt or null — null values are silently filtered from the candidates list
Final candidates list contains only non-null ScenarioPrompt objects
If fetchActivitiesInWindow returns an empty list, the Rule Engine is not called and candidates is an empty list
In verbose/debug mode, each evaluation outcome (matched scenario ID or 'no match') is logged with the activity ID
Rule Engine exceptions are caught per-activity — one failing evaluation does not abort the entire pipeline
Candidates list is deduplicated by (peer_mentor_id, scenario_id) before being returned from this stage
Unit test asserts Rule Engine is called once per activity in the input list
Unit test asserts that activities returning null from Rule Engine do not appear in candidates output

Technical Requirements

Frameworks: Flutter, Riverpod

Data models: activity, activity_type

Performance requirements:
Rule Engine evaluation is synchronous and pure (no I/O) and must complete in under 5 ms per activity
Batch evaluation uses the List.map + whereNotNull pattern to avoid imperative loops

Security requirements:
The Rule Engine must not make outbound network calls; evaluation is local logic only
Do not expose internal scenario rule definitions in logs; log only scenario IDs

Execution Context

Execution Tier: Tier 2 (518 tasks)

Can start after Tier 1 completes.

Implementation Notes

Inject ScenarioRuleEngine as a constructor dependency of SchedulerService for testability. The pipeline step should look like: final candidates = activities.map((a) { try { return ruleEngine.evaluate(a); } catch (e) { _log.warning('Rule eval failed for ${a.id}: $e'); return null; } }).whereNotNull().toList(). After mapping, deduplicate using a LinkedHashMap keyed on '${prompt.peerId}:${prompt.scenarioId}'. The ScenarioRuleEngine.evaluate interface should be: ScenarioPrompt? evaluate(ActivityRecord activity). Note that whereNotNull() comes from package:collection; on Dart 3, the built-in nonNulls getter is an equivalent alternative. Keep this stage stateless: do not read from the database here; the Rule Engine should operate on the ActivityRecord fields alone.
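The notes above can be sketched as a self-contained pipeline step. This is a minimal illustration, not the real implementation: the model classes are stand-ins, buildCandidates is a hypothetical helper name, and print stands in for the injected _log.

```dart
import 'dart:collection';

// Stand-in model types; the real ActivityRecord/ScenarioPrompt live in
// the app's data models (assumed shapes, for illustration only).
class ActivityRecord {
  final String id;
  final String peerMentorId;
  ActivityRecord(this.id, this.peerMentorId);
}

class ScenarioPrompt {
  final String peerId;
  final String scenarioId;
  ScenarioPrompt(this.peerId, this.scenarioId);
}

abstract class ScenarioRuleEngine {
  // Pure, synchronous evaluation: a matched prompt, or null for no match.
  ScenarioPrompt? evaluate(ActivityRecord activity);
}

List<ScenarioPrompt> buildCandidates(
    List<ActivityRecord> activities, ScenarioRuleEngine ruleEngine) {
  // Per-activity try/catch: one failing evaluation logs a warning and
  // becomes a null, so the rest of the batch still runs.
  final mapped = activities.map((a) {
    try {
      return ruleEngine.evaluate(a);
    } catch (e) {
      print('Rule eval failed for ${a.id}: $e'); // _log.warning in real code
      return null;
    }
  }).whereType<ScenarioPrompt>().toList(); // drops the nulls

  // Deduplicate by (peer id, scenario id); LinkedHashMap keeps the first
  // prompt seen for each key and preserves input order.
  final deduped = LinkedHashMap<String, ScenarioPrompt>();
  for (final p in mapped) {
    deduped.putIfAbsent('${p.peerId}:${p.scenarioId}', () => p);
  }
  return deduped.values.toList();
}
```

whereType<ScenarioPrompt>() is used here as the dart:core equivalent of whereNotNull() from package:collection, so the sketch needs no external dependency.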

Testing Requirements

Unit tests with flutter_test. Use a mock ScenarioRuleEngine injected into the SchedulerService constructor.

Test cases:
1. Three activities in; rule engine returns [ScenarioPrompt, null, ScenarioPrompt]; assert candidates has 2 items.
2. Zero activities in; assert ruleEngine.evaluate is never called and candidates is an empty list.
3. Rule engine throws on the second activity; assert the first and third are still processed, the exception is logged, and the pipeline continues.
4. Two activities trigger the same (peer_mentor_id, scenario_id); assert deduplication yields 1 candidate.

Verify that verbose logging output contains the activity ID for each evaluation, using a log capture helper.
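Cases 1 and 4 above can be sketched with a hand-rolled fake engine; cases 2 and 3 follow the same pattern. All type names and the buildCandidates helper are stand-ins so the sketch compiles on its own; the real suite would use the actual models and a generated mock (e.g. mockito or mocktail). For pure-Dart tests like these, package:test/test.dart is a drop-in for flutter_test.

```dart
import 'dart:collection';
import 'package:flutter_test/flutter_test.dart';

// Stand-ins so the sketch is self-contained; names are assumptions.
class ActivityRecord {
  final String id;
  ActivityRecord(this.id);
}

class ScenarioPrompt {
  final String peerId;
  final String scenarioId;
  ScenarioPrompt(this.peerId, this.scenarioId);
}

// Hand-rolled fake: scripted responses plus a call counter.
class FakeRuleEngine {
  final List<ScenarioPrompt?> responses;
  int calls = 0;
  FakeRuleEngine(this.responses);
  ScenarioPrompt? evaluate(ActivityRecord a) => responses[calls++];
}

// Hypothetical pipeline helper mirroring the Implementation Notes.
List<ScenarioPrompt> buildCandidates(
    List<ActivityRecord> activities, FakeRuleEngine engine) {
  final mapped =
      activities.map((a) => engine.evaluate(a)).whereType<ScenarioPrompt>();
  final deduped = LinkedHashMap<String, ScenarioPrompt>();
  for (final p in mapped) {
    deduped.putIfAbsent('${p.peerId}:${p.scenarioId}', () => p);
  }
  return deduped.values.toList();
}

void main() {
  test('case 1: null results are filtered from candidates', () {
    final engine = FakeRuleEngine(
        [ScenarioPrompt('p1', 's1'), null, ScenarioPrompt('p2', 's2')]);
    final acts =
        [ActivityRecord('a1'), ActivityRecord('a2'), ActivityRecord('a3')];
    expect(buildCandidates(acts, engine).length, 2);
    expect(engine.calls, 3); // engine called once per activity
  });

  test('case 4: same (peer, scenario) pair deduplicates to one', () {
    final engine = FakeRuleEngine(
        [ScenarioPrompt('p1', 's1'), ScenarioPrompt('p1', 's1')]);
    expect(
        buildCandidates([ActivityRecord('a1'), ActivityRecord('a2')], engine)
            .length,
        1);
  });
}
```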

Component
Scenario Prompt Scheduler Service (service, high priority)
Epic Risks (2)

Risk 1 (technical): high impact, medium probability

If the scheduler runs concurrently (e.g., two overlapping cron invocations due to edge function retry), duplicate prompts could be dispatched before the first run's history records are committed, breaking the deduplication guarantee.

Mitigation & Contingency

Mitigation: Use a Postgres advisory lock or unique constraint on (user_id, scenario_id, activity_ref) in the prompt history table to make concurrent writes idempotent; design the scheduler to check history inside a transaction.

Contingency: If concurrency issues persist in production, add a distributed lock via Supabase Edge Function concurrency limit (max_instances=1) for the evaluation function as a hard guard.

Risk 2 (scope): medium impact, medium probability

Coordinators may find scenario configuration unclear if trigger conditions are expressed as raw JSON or technical terminology, leading to misconfiguration and irrelevant prompts being sent to peer mentors.

Mitigation & Contingency

Mitigation: Design the ScenarioConfigurationScreen to display human-readable descriptions of each template's trigger condition (e.g., 'Send 3 days after first contact if wellbeing concern was flagged') rather than raw rule properties; validate with an HLF coordinator in a design review before implementation.

Contingency: If coordinators still misconfigure rules after launch, add a preview mode that shows a simulated prompt based on a test activity before the rule is enabled.