Priority: critical | Complexity: medium | Type: testing | Status: pending | Role: testing specialist | Execution tier: Tier 4

Acceptance Criteria

Test case: claims with an existing completed export run reference are excluded from the filtered batch — verified with a seeded test database containing 5 previously exported and 3 new claims
Test case: after a successful export, all claims in the batch are marked with the new export run reference — verified by querying claim state post-mark
Test case: if the mark database call throws a simulated error mid-batch, zero claims are marked (full rollback) — no partial state in the database
Test case: two concurrent export attempts for the same organisation and period — only one proceeds, the other receives an appropriate conflict error or empty batch
Test case: when the approved-claims query returns an empty list, the guard returns an empty batch without error and does not create an export run record
Test case: guard is organisation-scoped — seeding claims for org-A does not affect guard output for org-B
All unit tests use flutter_test with mocked Supabase client — no real network calls
Integration tests targeting a local Supabase instance verify atomicity at the database level (transaction rollback)
Test file follows the arrange-act-assert pattern with descriptive test names in English
Test suite achieves 100% statement coverage on the DoubleExportGuard class

Technical Requirements

Frameworks
flutter_test
mocktail or mockito for Dart mocking

APIs
Supabase PostgreSQL 15 (local test instance for integration tests)

Data models
activity
annual_summary
bufdir_export_audit_log
claim_event

Performance requirements
Unit test suite must complete in under 10 seconds
Integration test suite must complete in under 60 seconds against local Supabase

Security requirements
Test database must use isolated test organisation IDs — never real production UUIDs
Test credentials for local Supabase stored in .env.test, never committed to version control

Execution Context

Execution Tier
Tier 4 (323 tasks in this tier)

Can start after Tier 3 completes

Implementation Notes

The critical test to get right is the atomicity test. To simulate a mid-transaction failure, inject a mock Supabase client that succeeds for the first N mark calls and throws on the (N+1)th — then assert zero rows are updated in the database. This requires the guard's mark implementation to use a database transaction (Supabase RPC or batch operation with rollback). For the concurrency test, use Dart isolates or async futures launched concurrently with Future.wait and assert that only one export run record is created.
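
The failing-mock idea above can be sketched as follows. This is a minimal sketch: `ClaimMarker` and `FlakyMarker` are hypothetical names, and the real guard would need a transactional mark path (e.g. a single Supabase RPC) for the zero-rows assertion to hold.

```dart
// Hypothetical interface the guard uses to mark claims. In the real
// system this would wrap a Supabase RPC that marks the whole batch
// inside one database transaction.
abstract class ClaimMarker {
  Future<void> markClaim(String claimId, String exportRunId);
}

/// Test double that succeeds for the first [failAfter] calls, then
/// throws, simulating a mid-batch database failure.
class FlakyMarker implements ClaimMarker {
  FlakyMarker(this.failAfter);

  final int failAfter;
  int _calls = 0;

  @override
  Future<void> markClaim(String claimId, String exportRunId) async {
    _calls++;
    if (_calls > failAfter) {
      throw Exception('simulated mid-batch failure on call $_calls');
    }
  }
}
```

After driving the guard with a `FlakyMarker`, the integration test would query the database directly and assert that zero rows carry the new export run reference.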

Use a test fixture builder (a helper function that creates seeded claim records) to avoid repetitive setup across test cases. Name tests descriptively: 'given 3 previously exported claims and 2 new claims, filterBatch returns only the 2 new claims'. Avoid test interdependence — each test must set up and tear down its own database state.
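
A fixture builder along these lines keeps setup terse. The field names (`organisation_id`, `export_run_id`, `status`) mirror an assumed claim schema, not a confirmed one.

```dart
int _claimCounter = 0;

/// Hypothetical fixture builder: returns a seedable claim record.
/// A null [exportRunId] means the claim has never been exported.
Map<String, dynamic> buildClaim({
  required String orgId,
  String? exportRunId,
  String status = 'approved',
}) =>
    {
      'id': 'claim-${++_claimCounter}',
      'organisation_id': orgId,
      'status': status,
      'export_run_id': exportRunId,
    };
```

Seeding then reads naturally in each test, e.g. `List.generate(3, (_) => buildClaim(orgId: 'org-a', exportRunId: 'run-1'))` for previously exported claims.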

Testing Requirements

Three-layer test strategy: (1) Pure unit tests with fully mocked dependencies test the guard's filtering logic and return types in isolation. (2) Repository-level integration tests with a real local Supabase PostgreSQL instance test atomicity and transaction rollback. (3) Concurrency tests using Dart's Future.wait to simulate simultaneous export attempts. Use flutter_test as the test runner.
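
The concurrency layer can be sketched with `Future.wait` as below. `FakeGuard` is a deliberately minimal in-memory stand-in for `DoubleExportGuard`, included only so the pattern is runnable; the real test would hit the local Supabase instance and also count export run records.

```dart
import 'package:flutter_test/flutter_test.dart';

// Minimal stand-in for DoubleExportGuard: an in-memory "lock" per
// (org, period) key, so only one concurrent export can proceed.
class FakeGuard {
  final _inFlight = <String>{};

  Future<List<String>> startExport(String orgId, String period) async {
    final key = '$orgId/$period';
    if (!_inFlight.add(key)) {
      throw StateError('export already in progress for $key');
    }
    await Future<void>.delayed(const Duration(milliseconds: 10));
    return ['claim-1', 'claim-2']; // pretend batch
  }
}

void main() {
  test('two concurrent exports for the same org and period: one wins',
      () async {
    final guard = FakeGuard();

    // Capture each outcome (batch or error) instead of letting
    // Future.wait fail fast on the first error.
    Future<Object> attempt() => guard
        .startExport('org-test-a', '2024-01')
        .then<Object>((batch) => batch)
        .catchError((Object e) => e);

    final results = await Future.wait([attempt(), attempt()]);

    expect(results.whereType<List<String>>(), hasLength(1));
    expect(results.whereType<StateError>(), hasLength(1));
  });
}
```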

Use mocktail (preferred over mockito for null-safety ergonomics) for mocking the Supabase client in unit tests. Each test file maps 1:1 to the source class under test. 100% coverage required on DoubleExportGuard.
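
Mocking the deeply chained Supabase query builder directly is awkward; a thin repository interface in front of it is easier to stub. The interface and method names below are assumptions, not the project's real API.

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

// Hypothetical repository boundary between the guard and Supabase.
abstract class ClaimRepository {
  Future<List<String>> approvedClaimIds(String orgId);
}

class MockClaimRepository extends Mock implements ClaimRepository {}

void main() {
  test('empty approved-claims query yields an empty batch', () async {
    final repo = MockClaimRepository();
    when(() => repo.approvedClaimIds('org-test-b'))
        .thenAnswer((_) async => <String>[]);

    // DoubleExportGuard would consume `repo`; here we only exercise
    // the stub to show the mocktail wiring.
    expect(await repo.approvedClaimIds('org-test-b'), isEmpty);
    verify(() => repo.approvedClaimIds('org-test-b')).called(1);
  });
}
```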

Component
Double-Export Guard (service, medium complexity)

Epic Risks (3)

Risk 1: dependency (high impact, medium probability)

The Xledger CSV/JSON import specification may not be available in full detail at implementation time. If the field format, column ordering, encoding requirements, or required fields differ from assumptions, the generated file will be rejected by Xledger on first production use.

Mitigation & Contingency

Mitigation: Obtain the official Xledger import specification document from Blindeforbundet before starting XledgerExporter implementation. Build a dedicated acceptance test that validates a sample export file against all documented constraints.

Contingency: If the spec arrives late, implement a configurable column-mapping layer so that field order and names can be adjusted via configuration without code changes. Ship a file-based export that coordinators can manually verify before connecting to Xledger import.
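
One shape such a mapping layer could take is sketched below. `XledgerColumnMap` and the semicolon separator are assumptions made for illustration, since the official import specification is not yet available.

```dart
/// Hypothetical column-mapping layer: header names and column order
/// come from configuration, so a late spec change needs no code change.
class XledgerColumnMap {
  XledgerColumnMap(this.columns);

  /// Ordered pairs of (output header, source field name).
  final List<MapEntry<String, String>> columns;

  String headerRow() => columns.map((c) => c.key).join(';');

  String row(Map<String, dynamic> claim) =>
      columns.map((c) => '${claim[c.value] ?? ''}').join(';');
}
```

When the official spec arrives, only the configured list of pairs needs to change, and the acceptance test can validate a sample file against the documented constraints.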

Risk 2: technical (high impact, low probability)

The atomic claim-marking transaction in Double-Export Guard could fail under high concurrency if two coordinators trigger an export for overlapping date ranges simultaneously, potentially allowing duplicate exports to proceed past the guard.

Mitigation & Contingency

Mitigation: Use a database-level advisory lock or a SELECT FOR UPDATE on the relevant claim rows within the export transaction to serialize concurrent exports per organisation. Add an integration test that simulates concurrent export triggers.

Contingency: If locking proves problematic at the database level, implement an application-level distributed lock using a Supabase row in a dedicated export_locks table with an expiry timestamp and automatic cleanup on failure.
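
A sketch of that application-level lock, assuming a `LockStore` abstraction over the export_locks table; `tryAcquire` would map to an atomic insert-if-absent (e.g. an insert guarded by the expiry timestamp), which is an assumption about the eventual schema.

```dart
// Hypothetical store over the export_locks table.
abstract class LockStore {
  /// Atomically inserts a lock row; returns false if a live
  /// (non-expired) lock already exists for this org and period.
  Future<bool> tryAcquire(String orgId, String period, DateTime expiry);

  Future<void> release(String orgId, String period);
}

/// Runs [body] while holding the export lock for (orgId, period).
Future<T> withExportLock<T>(
  LockStore store,
  String orgId,
  String period,
  Future<T> Function() body,
) async {
  final acquired = await store.tryAcquire(
      orgId, period, DateTime.now().add(const Duration(minutes: 5)));
  if (!acquired) {
    throw StateError('export already in progress for $orgId/$period');
  }
  try {
    return await body();
  } finally {
    // The expiry timestamp covers holders that crash before release.
    await store.release(orgId, period);
  }
}
```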

Risk 3: integration (medium impact, high probability)

HLF's Dynamics portal API endpoint may not be available or documented in time for Phase 1, leaving DynamicsExporter unable to be validated against a real system and potentially shipping with an incorrect field schema.

Mitigation & Contingency

Mitigation: Design DynamicsExporter for file-based export first (CSV/JSON download), with the API push implemented behind a feature flag. Request a Dynamics test environment or sandbox from HLF as early as possible.

Contingency: Ship DynamicsExporter as a file export only for Phase 1. Phase the API push integration into a follow-on task once the Dynamics sandbox is available, using the same AccountingExporter interface with no breaking changes.