Priority: critical · Complexity: high · Type: testing · Status: pending · Assignee: testing specialist · Execution Tier: Tier 4

Acceptance Criteria

All test cases pass with 0 failures and 0 errors on a clean flutter test run
Code coverage report shows 100% branch coverage on ScenarioRuleEngine.evaluate() and ScenarioRuleEngine.getPrioritizedMatch()
Test file is organized into clearly named test groups (group() blocks) matching each condition dimension: ContactType, WellbeingFlags, DurationRange, DelayWindow, CooldownGuard, PriorityResolution, EdgeCases
Each test case has a descriptive name following the pattern 'given [precondition], when [action], then [expected outcome]'
Boundary value tests cover: durationMinutes == minDuration (pass), durationMinutes == minDuration - 1 (fail), durationMinutes == maxDuration (pass), durationMinutes == maxDuration + 1 (fail)
Wellbeing 'any' semantics: at least one test where exactly one of the three flags matches and the rule triggers; at least one test where none of the flags match and the rule does not trigger
Wellbeing 'all' semantics: at least one test where all required flags match and the rule triggers; at least one test where only n-1 of the n required flags match and the rule does not trigger
Cooldown guard tests use an injected clock mock so tests do not depend on wall clock time
Priority resolution tests include a case where three rules all match and the correct winner is selected based on priority, then specificity, then insertion order
Empty rule list test asserts reasonCode == EvaluationReasonCode.NO_RULES_CONFIGURED
No test uses sleep() from dart:io, Future.delayed, or any other real-time delay — all timing is controlled via the mocked clock and mocked repositories
Test fixtures (ActivityMetadata, ScenarioRule builders) are extracted into a shared test_helpers/scenario_fixtures.dart file for reuse across test files
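The shared fixture file called for above might look like the following sketch. The constructor and field names (contactId, contactType, durationMinutes, wellbeingFlags, minDuration, maxDuration, priority) are assumptions inferred from the condition dimensions listed in the criteria; align them with the real model classes.

```dart
// test_helpers/scenario_fixtures.dart — sketch only; field names are
// assumptions, adjust to the real ActivityMetadata / ScenarioRule models.

ActivityMetadata buildActivityMetadata({
  String contactId = '00000000-0000-0000-0000-000000000001', // synthetic UUID
  String contactType = 'peer_mentor',
  int durationMinutes = 30,
  Set<String> wellbeingFlags = const {},
}) =>
    ActivityMetadata(
      contactId: contactId,
      contactType: contactType,
      durationMinutes: durationMinutes,
      wellbeingFlags: wellbeingFlags,
    );

ScenarioRule buildScenarioRule({
  String id = 'rule-001',
  String contactType = 'peer_mentor',
  int minDuration = 10,
  int maxDuration = 60,
  int priority = 1,
}) =>
    ScenarioRule(
      id: id,
      contactType: contactType,
      minDuration: minDuration,
      maxDuration: maxDuration,
      priority: priority,
    );
```

Because every parameter has a default, individual tests override only the fields relevant to their assertion, which is the builder-pattern intent described in the Implementation Notes.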

Technical Requirements

frameworks
flutter_test
Mockito (or manual test doubles)
apis
PromptHistoryRepository (mocked)
Clock abstraction (mocked)
data models
ActivityMetadata
ScenarioRule
PromptHistory
EvaluationResult
EvaluationReasonCode
performance requirements
Full test suite for this file must complete in under 10 seconds
No individual test case should take longer than 500ms
security requirements
Test fixtures must not use real user IDs, real contact IDs, or real activity data — use clearly synthetic UUIDs (e.g., '00000000-0000-0000-0000-000000000001')

Execution Context

Execution Tier
Tier 4 (323 tasks). Can start after Tier 3 completes.

Implementation Notes

Structure the test file as: (1) imports and fixture declarations at the top, (2) setUp() building a fresh ScenarioRuleEngine with mock dependencies for each test, (3) test groups in dependency order matching the condition evaluation chain. For boundary value testing, consider a parameterized helper: void runBoundaryTest(int duration, int min, int max, bool expectMatch) — this prevents copy-paste drift when the boundary logic changes. Extract ActivityMetadataBuilder and ScenarioRuleBuilder factory helpers into test_helpers/scenario_fixtures.dart using the builder pattern (return a base object and allow named parameter overrides) so individual tests only specify the fields relevant to their assertion. Document in a top-of-file comment which tasks are under test and what the coverage target is.
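The parameterized boundary helper named above might be sketched as follows. The engine variable, the fixture builders, the evaluate() call shape, and the EvaluationResult.matched field name are all assumptions to be aligned with the real API.

```dart
// Sketch of the runBoundaryTest helper from the note above; signatures and
// field names are assumptions, not the confirmed engine API.
void runBoundaryTest(int duration, int min, int max, bool expectMatch) {
  test(
      'given duration $duration against range [$min, $max], '
      'when evaluated, then match == $expectMatch', () {
    final rule = buildScenarioRule(minDuration: min, maxDuration: max);
    final metadata = buildActivityMetadata(durationMinutes: duration);
    final result = engine.evaluate(metadata, [rule]);
    expect(result.matched, expectMatch);
  });
}
```

Centralizing the assertion here means that if the boundary semantics change (say, exclusive rather than inclusive bounds), only one helper needs updating rather than four copy-pasted test bodies.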

If Mockito is not already a dev dependency, prefer hand-written test doubles over adding a new package dependency — the interfaces are simple enough.

Testing Requirements

This task IS the testing task. Organize tests using flutter_test's group() and test() functions. Use a TestPromptHistoryRepository (hand-written test double implementing PromptHistoryRepository) that returns configurable last-prompt timestamps. Use a TestClock (hand-written or Mockito mock) injected into ScenarioRuleEngine for deterministic time control.
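Hand-written doubles for the two dependencies could be as small as the sketch below. The Clock and PromptHistoryRepository interface shapes (now(), lastPromptAt()) are assumptions; adjust the overridden members to the real abstractions.

```dart
// Minimal hand-written test doubles; interface member names are assumed.
class TestClock implements Clock {
  TestClock(this.current);
  DateTime current;

  @override
  DateTime now() => current;

  /// Moves the fake clock forward deterministically, e.g. past a cooldown.
  void advance(Duration d) => current = current.add(d);
}

class TestPromptHistoryRepository implements PromptHistoryRepository {
  /// Configurable last-prompt timestamps keyed by rule id.
  final Map<String, DateTime> lastPromptByRuleId = {};

  @override
  Future<DateTime?> lastPromptAt(String ruleId) async =>
      lastPromptByRuleId[ruleId];
}
```

A setUp() block can then construct a fresh TestClock and repository per test, satisfying the deterministic-time acceptance criterion without Mockito.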

Apply table-driven tests using a List of test cases for boundary value scenarios to maximize coverage with minimal boilerplate. Include a 'golden path' integration-style test at the bottom of the file that chains task-005 evaluate() and task-006 getPrioritizedMatch() together on a realistic scenario pulled from the HLF peer mentor follow-up use case described in the requirements documentation.
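The table-driven style might look like the sketch below, feeding the four boundary cases from the acceptance criteria through the runBoundaryTest helper described in the Implementation Notes. The example range [10, 60] is illustrative only.

```dart
// Table-driven boundary cases; the range values are illustrative and
// runBoundaryTest is the parameterized helper from the implementation notes.
const boundaryCases = [
  (duration: 10, min: 10, max: 60, matches: true), // durationMinutes == minDuration
  (duration: 9, min: 10, max: 60, matches: false), // minDuration - 1
  (duration: 60, min: 10, max: 60, matches: true), // durationMinutes == maxDuration
  (duration: 61, min: 10, max: 60, matches: false), // maxDuration + 1
];

void main() {
  group('DurationRange boundaries', () {
    for (final c in boundaryCases) {
      runBoundaryTest(c.duration, c.min, c.max, c.matches);
    }
  });
}
```

Adding a new boundary scenario is then a one-line change to the table rather than a new test body.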

Component
Scenario Rule Engine
Type: service · Priority: high
Epic Risks (2)

Risk 1: high impact, medium probability (scope)

The Rule Engine must support a flexible JSON rule schema that can express compound conditions (e.g., contact_type AND wellbeing_flag AND delay_days). Underestimating schema expressiveness may require breaking changes to the rule format after coordinators have already configured rules.

Mitigation & Contingency

Mitigation: Define and freeze the rule JSON schema (trigger_type enum, metadata_conditions structure, delay logic) before any implementation begins; validate schema against all known HLF scenarios documented in the feature spec.

Contingency: If schema changes are needed after deployment, implement a schema version field and a migration utility that upgrades stored rules to the new format without coordinator intervention.
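The contingency could be sketched as a forward-only migration step keyed off a schema_version field. The field names below (schema_version, metadata_conditions) follow the terms used in this risk entry, but the exact v1-to-v2 transformation is a placeholder assumption.

```dart
// Hypothetical versioned-schema migration; the v1 -> v2 body is
// illustrative, not a real rule-format change.
Map<String, dynamic> migrateRule(Map<String, dynamic> stored) {
  final rule = Map<String, dynamic>.from(stored);
  final version = (rule['schema_version'] as int?) ?? 1; // v1 rules predate the field
  if (version < 2) {
    // Ensure the compound metadata_conditions structure exists before any
    // new condition keys are introduced by the upgraded schema.
    rule['metadata_conditions'] ??= <String, dynamic>{};
    rule['schema_version'] = 2;
  }
  return rule;
}
```

Running this on read (or in a one-shot migration pass) upgrades stored rules without coordinator intervention, as the contingency requires.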

Risk 2: medium impact, medium probability (technical)

Deep-link navigation to the activity wizard with pre-filled arguments may fail if the user's session has expired or if the wizard route is not yet mounted in the navigator stack, causing unhandled navigation exceptions.

Mitigation & Contingency

Mitigation: Implement session state check before navigation; if session is expired, redirect to biometric/login screen and store the pending deep-link URI for post-auth redirect using go_router's redirect mechanism.

Contingency: If post-auth redirect proves unreliable, fall back to navigating to the home screen with a visible action banner that re-triggers the wizard with pre-filled arguments.
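The mitigation above could be sketched with go_router's redirect callback as follows. The session check, the route paths, and the module-level pending-URI storage are assumptions; go_router's GoRouterState.uri is used here, which assumes a reasonably recent package version.

```dart
// Sketch: park an expired-session deep link, replay it after authentication.
// sessionService and the route paths are hypothetical placeholders.
String? _pendingDeepLink;

final router = GoRouter(
  routes: <RouteBase>[
    // ... app routes, including /login and /activity-wizard ...
  ],
  redirect: (context, state) {
    final loggedIn = sessionService.isActive; // hypothetical session check
    final toWizard = state.uri.path.startsWith('/activity-wizard');
    if (!loggedIn && toWizard) {
      _pendingDeepLink = state.uri.toString(); // store for post-auth replay
      return '/login';
    }
    if (loggedIn && _pendingDeepLink != null) {
      final target = _pendingDeepLink!;
      _pendingDeepLink = null;
      return target; // replay the parked deep link with its query arguments
    }
    return null; // no redirect needed
  },
);
```

Because the pre-filled wizard arguments travel in the stored URI's query string, the replayed navigation lands on the wizard with its arguments intact, avoiding the unhandled-navigation failure mode described above.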