Priority: high · Complexity: low · Type: testing · Status: pending · Assignee: testing specialist · Tier: 3

Acceptance Criteria

All tests live in `test/`, mirroring the production layout under `lib/` (never inside `lib/` itself); file names mirror the production files: `way_forward_item_repository_test.dart`, `report_schema_cache_test.dart`
WayForwardItemRepository — create: verifies Supabase insert is called with correct payload; returns the created entity on success; throws RepositoryException on Supabase error
WayForwardItemRepository — read: getById returns the matching entity; returns null when the row does not exist (or throws NotFoundException — pick one contract and test it consistently); getByActivityId returns a correctly filtered list
WayForwardItemRepository — update: verifies update is called with correct id and diff payload; returns updated entity; throws on optimistic concurrency conflict if implemented
WayForwardItemRepository — delete: verifies delete is called with correct id; returns void on success; throws on not-found
WayForwardItemRepository — RLS rejection: when Supabase returns PGRST301 or 403, repository wraps this in a PermissionException (not a raw SupabaseException)
WayForwardItemRepository — network error: when Supabase client throws a network exception, repository wraps it in a RepositoryException with meaningful message
ReportSchemaCache — cache miss: first call fetches from source and stores result; returns correct schema object
ReportSchemaCache — cache hit: second call within TTL does NOT call the source again; returns same object reference (or equal value)
ReportSchemaCache — TTL expiry: after TTL has elapsed (advance clock via fake clock or injectable DateTime), next call fetches fresh data
ReportSchemaCache — explicit invalidation: after `invalidate(orgId)` call, next access for that orgId fetches fresh data regardless of TTL
ReportSchemaCache — concurrent access: two simultaneous calls for the same key result in exactly one source fetch (deduplicated in-flight requests)
Line coverage ≥ 90%, verified by running `flutter test --coverage` and inspecting `lcov.info`; the CI pipeline fails below the threshold
No real Supabase network calls in any test — all external calls are intercepted by mocks
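The RLS-rejection and network-error criteria above imply a specific error-wrapping contract in the repository. A minimal sketch of a `create` method satisfying it — the table name, `toJson`/`fromJson` methods, and the assumption that `PostgrestException` carries the relevant `code` are all assumptions, not confirmed implementation details:

```dart
Future<WayForwardItem> create(WayForwardItem item) async {
  try {
    final row = await _client
        .from('way_forward_items') // table name assumed
        .insert(item.toJson())
        .select()
        .single();
    return WayForwardItem.fromJson(row);
  } on PostgrestException catch (e) {
    if (e.code == 'PGRST301' || e.code == '403') {
      // RLS rejection → domain-level permission error, not a raw Supabase error
      throw PermissionException(e.message);
    }
    throw RepositoryException('create failed: ${e.message}');
  } on SocketException catch (e) {
    // Network failure wrapped with a meaningful message
    throw RepositoryException('network error during create: $e');
  }
}
```

Tests then assert on the domain exception types only, never on `PostgrestException` leaking through.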

Technical Requirements

Frameworks
flutter_test
mockito (code-generated mocks via build_runner)
fake_async (for TTL and timer-dependent tests)
APIs
Supabase Dart client (mocked — SupabaseClient, PostgrestFilterBuilder)
Data models
WayForwardItem entity
ReportSchema / FieldConfig
RepositoryException
PermissionException
Performance requirements
Full test suite for both components must complete in under 5 seconds
No real timers — use fake_async to advance time deterministically in TTL tests
Security requirements
Test fixtures must not contain real personal data — use synthetic UUIDs and placeholder strings

Execution Context

Execution Tier
Tier 3 (413 tasks); can start after Tier 2 completes

Implementation Notes

Inject the Supabase client (and, for the cache, a clock abstraction) via constructor parameters — do not use global singletons or `Supabase.instance` directly in repository classes, as this makes mocking impossible. If the repository currently uses `Supabase.instance`, refactor it to accept a `SupabaseClient` constructor argument before writing tests.

For the cache concurrency test, the simplest approach is to store an in-flight `Future` per key so that a second caller awaits the same Future — check whether this pattern is already implemented; if not, add it as part of this task.

Use typed matchers such as `throwsA(isA<RepositoryException>())` for exception-type assertions rather than the untyped `throws`.
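The in-flight-Future pattern described above can be sketched as follows — the constructor shape (a loader function plus an injectable clock) and the `ReportSchema` type are assumptions based on this spec, not the existing implementation:

```dart
class ReportSchemaCache {
  ReportSchemaCache(this._loader, this._now,
      {this.ttl = const Duration(minutes: 15)});

  final Future<ReportSchema> Function(String orgId) _loader;
  final DateTime Function() _now; // injectable clock for deterministic TTL tests
  final Duration ttl;

  final _entries = <String, ({ReportSchema schema, DateTime fetchedAt})>{};
  final _inFlight = <String, Future<ReportSchema>>{};

  Future<ReportSchema> get(String orgId) {
    final cached = _entries[orgId];
    if (cached != null && _now().difference(cached.fetchedAt) < ttl) {
      return Future.value(cached.schema);
    }
    // Deduplicate concurrent callers: a second get() for the same key
    // awaits the same in-flight Future instead of triggering a new fetch.
    return _inFlight.putIfAbsent(orgId, () async {
      try {
        final schema = await _loader(orgId);
        _entries[orgId] = (schema: schema, fetchedAt: _now());
        return schema;
      } finally {
        _inFlight.remove(orgId);
      }
    });
  }

  // Explicit invalidation: next get() for this org fetches fresh regardless of TTL.
  void invalidate(String orgId) => _entries.remove(orgId);
}
```

The `finally` block matters: without it, a failed fetch would pin the broken Future in `_inFlight` forever.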

Keep test fixtures in a shared `test/fixtures/` folder so they can be reused by tasks 010 and 011.
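A fixture file in that folder might look like the sketch below — field names are hypothetical and must be aligned with the real `WayForwardItem` entity; note the synthetic UUIDs, per the security requirement:

```dart
// test/fixtures/way_forward_fixtures.dart — synthetic data only, no real PII.
final testWayForwardItem = WayForwardItem(
  id: '00000000-0000-4000-8000-000000000001', // synthetic UUID
  activityId: '00000000-0000-4000-8000-000000000002',
  description: 'placeholder action item',
);
```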

Testing Requirements

These ARE the tests. Use `@GenerateMocks([SupabaseClient, ...])` with mockito's build_runner code generation. Group tests with `group()` blocks per method, and use `setUp()` to construct the repository with its injected mocks.
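A skeleton of that structure — the `.mocks.dart` file is produced by `dart run build_runner build`, and the repository constructor signature plus the `testWayForwardItem` fixture are assumptions:

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mockito/annotations.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

import 'way_forward_item_repository_test.mocks.dart'; // generated by build_runner

@GenerateMocks([SupabaseClient])
void main() {
  late MockSupabaseClient client;
  late WayForwardItemRepository repo;

  setUp(() {
    client = MockSupabaseClient();
    repo = WayForwardItemRepository(client); // constructor-injected mock
  });

  group('create', () {
    test('throws RepositoryException on network error', () async {
      // Stub the builder chain here; each link (from → insert → select)
      // needs its own generated mock, or a thin query-layer abstraction
      // so tests stub one seam instead of the whole Postgrest chain.
      await expectLater(
        () => repo.create(testWayForwardItem),
        throwsA(isA<RepositoryException>()),
      );
    });
  });
}
```

If stubbing the fluent Postgrest chain proves too noisy, wrapping it behind a small query interface is a common workaround.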

For the cache concurrency test, call `Future.wait([cache.get(id), cache.get(id)])` inside `fakeAsync`, flush microtasks so both futures complete, and verify the underlying loader was called exactly once via `verify(...).called(1)`. Run coverage with `flutter test --coverage` and add a CI step that fails the build if line coverage drops below 90%.
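A sketch of that concurrency test, using a plain call counter instead of a mockito `verify` (either works) — the cache constructor shape and `testSchema` fixture are assumptions:

```dart
import 'package:fake_async/fake_async.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('two simultaneous gets cause exactly one loader fetch', () {
    fakeAsync((async) {
      var loaderCalls = 0;
      final cache = ReportSchemaCache(
        (orgId) async {
          loaderCalls++;
          return testSchema;
        },
        () => DateTime(2024), // fixed injected clock
      );

      ReportSchema? a, b;
      cache.get('org-1').then((s) => a = s);
      cache.get('org-1').then((s) => b = s);
      async.flushMicrotasks(); // drive both futures to completion

      expect(loaderCalls, 1); // in-flight deduplication held
      expect(identical(a, b), isTrue); // both callers got the same instance
    });
  });
}
```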

Component
Way Forward Item Repository
Category: data · Complexity: low
Epic Risks (3)
Impact: high · Probability: medium · Category: security

Supabase RLS policies for multi-org report access may be more complex than anticipated — coordinators need cross-peer-mentor access within their org but not across orgs, and draft reports should be invisible to coordinators until submitted. Misconfigured RLS could expose sensitive health data or block legitimate access.

Mitigation & Contingency

Mitigation: Define and test RLS policies in isolation before writing repository code. Create a dedicated SQL migration file with policy definitions and an automated integration test suite that verifies each role's access boundaries using real Supabase auth tokens.

Contingency: If RLS proves too complex to express declaratively, implement application-level access control in the repository layer with explicit org and role checks, and add a security audit task before the feature goes to production.

Impact: high · Probability: medium · Category: integration

The org field config JSON stored in Supabase may lack a stable, versioned schema contract. If different organisations have drifted to different field-definition formats, org-field-config-loader will fail silently or crash, breaking form rendering for those orgs.

Mitigation & Contingency

Mitigation: Define a canonical JSON Schema for field config and validate all existing org configs against it before implementation begins. Store a schema version field in every config record and handle version migrations explicitly in the loader.

Contingency: If existing configs are too heterogeneous, implement a config normalisation pass in org-field-config-loader that coerces known variants to the canonical format, logging warnings for fields that cannot be normalised so operations can fix them in the admin console.

Impact: medium · Probability: low · Category: technical

TTL-based schema cache invalidation may cause peer mentors to use stale field definitions for up to the TTL window after an admin updates the org config, potentially collecting data against outdated field structures.

Mitigation & Contingency

Mitigation: Set a conservative TTL (e.g. 15 minutes) and expose a manual cache-bust mechanism triggered on app foreground-resume. Document the maximum staleness window in the admin console so org admins know to plan config changes outside active reporting windows.

Contingency: If stale schema causes a data quality incident, add a Supabase Realtime subscription to the org config table that invalidates the cache immediately on any config update.