Priority: critical · Complexity: medium · Area: backend · Status: pending · Owner: backend specialist · Execution tier: Tier 1

Acceptance Criteria

A `DoubleExportGuardService` class is implemented with two public methods: `filterAlreadyExported(List<String> candidateClaimIds)` and `markClaimsAsExported(String exportRunId, List<String> claimIds)`
`filterAlreadyExported` queries the `export_run_claims` join table (or equivalent) in Supabase and returns only the claim IDs that have NOT yet been associated with a completed export run
`markClaimsAsExported` inserts rows into `export_run_claims` inside a Supabase database transaction — if any insert fails, the entire transaction is rolled back and no partial marks are written
If `filterAlreadyExported` returns an empty list (all candidates already exported), the caller receives an empty list and no export run is started — this path is explicitly handled and logged
The service throws a typed `DoubleExportGuardError` (not a raw exception) if the Supabase query fails
The `markClaimsAsExported` operation is idempotent: calling it twice with the same `exportRunId` and `claimIds` does not create duplicate rows (use `upsert` or a unique constraint on `(export_run_id, claim_id)`)
An `export_runs` row with status `in_progress` is created by the caller before filtering, and the caller transitions it to `completed` only after `markClaimsAsExported` succeeds — this service does not manage status transitions itself, but the `exportRunId` it accepts must reference an existing `export_runs` row
All Supabase calls are performed using the project's existing `SupabaseClient` injection pattern
Service is covered by integration tests hitting a Supabase test instance or mock, verifying the concurrency invariant
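A minimal sketch of the service surface implied by these criteria, assuming the supabase_flutter v2 client API. Note that issuing filter and mark as two separate REST calls (as shown here) leaves a race window between them; the Implementation Notes section addresses that with a single atomic RPC.

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Typed error surfaced when a Supabase call fails (per acceptance criteria).
class DoubleExportGuardError implements Exception {
  DoubleExportGuardError(this.message, [this.cause]);
  final String message;
  final Object? cause;
  @override
  String toString() => 'DoubleExportGuardError: $message';
}

class DoubleExportGuardService {
  DoubleExportGuardService(this._client);
  final SupabaseClient _client;

  /// Returns only the candidate IDs not yet recorded in export_run_claims.
  Future<List<String>> filterAlreadyExported(
      List<String> candidateClaimIds) async {
    if (candidateClaimIds.isEmpty) return const [];
    try {
      final rows = await _client
          .from('export_run_claims')
          .select('claim_id')
          .inFilter('claim_id', candidateClaimIds);
      final exported = {for (final r in rows) r['claim_id'] as String};
      return candidateClaimIds.where((id) => !exported.contains(id)).toList();
    } on PostgrestException catch (e) {
      throw DoubleExportGuardError('filter query failed', e);
    }
  }

  /// Idempotent bulk mark: one upsert call, relying on the unique
  /// constraint on (export_run_id, claim_id) to swallow repeats.
  Future<void> markClaimsAsExported(
      String exportRunId, List<String> claimIds) async {
    try {
      await _client.from('export_run_claims').upsert(
        [
          for (final id in claimIds)
            {'export_run_id': exportRunId, 'claim_id': id}
        ],
        onConflict: 'export_run_id,claim_id',
        ignoreDuplicates: true,
      );
    } on PostgrestException catch (e) {
      throw DoubleExportGuardError('mark operation failed', e);
    }
  }
}
```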

Technical Requirements

Frameworks
Flutter
Dart
Riverpod
APIs
Supabase PostgREST API
Supabase RPC (for transaction wrapping if needed)
Data models
ApprovedClaim
ExportRun
ExportRunClaim
Performance requirements
The filter query must use an indexed `claim_id` column on `export_run_claims` — bulk filtering of 380+ claims (HLF high-volume user) must complete in under 500ms
The mark operation must use a single bulk insert, not N individual inserts
The service must not hold a database lock longer than the duration of the mark transaction
Security requirements
Row-level security (RLS) on `export_run_claims` must restrict access to users with the `accounting_export` permission — verify RLS policy exists before shipping
The `exportRunId` must be validated as a well-formed UUID before use in queries to prevent injection
No claim amounts or personal data are written to `export_run_claims` — only IDs and the run reference
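The UUID check above could be a small guard helper like the following sketch; the regex-based approach is an assumption, and any strict UUID parser would satisfy the requirement equally well.

```dart
// Hypothetical validation helper — not part of the spec.
final _uuidPattern = RegExp(
    r'^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-'
    r'[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$');

/// Throws if [exportRunId] is not a well-formed UUID; returns it otherwise.
String requireValidUuid(String exportRunId) {
  if (!_uuidPattern.hasMatch(exportRunId)) {
    throw ArgumentError.value(exportRunId, 'exportRunId', 'not a valid UUID');
  }
  return exportRunId;
}
```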

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

The atomicity requirement is critical. Supabase's PostgREST does not support multi-statement transactions natively via the REST API. Use a Supabase PostgreSQL RPC function (a `plpgsql` stored procedure) that wraps the filter-and-mark logic in a `BEGIN/COMMIT` block, and call it via `supabase.rpc()`. This is the only reliable way to get true atomicity without a server-side middleware layer.
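The stored procedure might look like the following sketch. Table and column names are assumptions based on the data models listed above; note also that the `(export_run_id, claim_id)` constraint alone does not stop two *different* runs from reserving the same claim concurrently — closing that gap needs a unique index on `claim_id` alone or the advisory-lock mitigation listed under Epic Risks.

```sql
-- Sketch only: schema names are assumed, adjust to the actual migration.
-- Reserves all not-yet-exported candidates for this run and returns them,
-- in a single atomic statement.
create or replace function filter_and_reserve_claims_for_export(
  p_export_run_id uuid,
  p_candidate_claim_ids uuid[]
) returns setof uuid
language plpgsql
as $$
begin
  return query
    insert into export_run_claims (export_run_id, claim_id)
    select p_export_run_id, c.id
    from unnest(p_candidate_claim_ids) as c(id)
    where not exists (
      -- Excludes claims reserved by ANY run, including in-progress ones.
      -- If only completed runs should block, join against export_runs.status.
      select 1 from export_run_claims erc where erc.claim_id = c.id
    )
    on conflict (export_run_id, claim_id) do nothing
    returning claim_id;
end;
$$;
```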

The RPC should accept: `export_run_id UUID`, `candidate_claim_ids UUID[]` and return the filtered (unexported) claim IDs so the caller knows which claims to pass to the exporter. This collapses filter + mark into a single round trip, eliminating the TOCTOU (time-of-check-time-of-use) race condition that would exist if filter and mark were separate REST calls. Inject `SupabaseClient` via Riverpod provider. Name the RPC `filter_and_reserve_claims_for_export`.
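The client-side call could then be a single round trip, along these lines (a sketch assuming the supabase_flutter v2 `rpc` API; the free function form is illustrative and would live inside the service in practice):

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Filters out already-exported claims and reserves the rest for
/// [exportRunId] atomically, in one RPC round trip.
Future<List<String>> filterAndReserveClaims(
  SupabaseClient client,
  String exportRunId,
  List<String> candidateClaimIds,
) async {
  final result = await client.rpc(
    'filter_and_reserve_claims_for_export',
    params: {
      'p_export_run_id': exportRunId,
      'p_candidate_claim_ids': candidateClaimIds,
    },
  );
  // PostgREST serializes the setof uuid as a JSON array of strings.
  return (result as List).map((e) => e as String).toList();
}
```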

Testing Requirements

Integration tests using `flutter_test` with a Supabase test project or a local Supabase instance. Required test scenarios:

(1) Happy path — 10 candidate claims, 3 already exported; filter returns 7 and mark succeeds.
(2) All candidates already exported — filter returns an empty list and no mark call is made.
(3) Mark transaction failure (simulated via an RPC error mock) — verify no partial rows exist after the failure.
(4) Idempotent mark — call `markClaimsAsExported` twice with the same arguments and verify no duplicate rows.
(5) Concurrent filter + mark race simulation — two parallel calls with overlapping claim IDs; verify exactly one call proceeds to mark (requires a DB unique constraint, not just app logic).

Unit tests cover input validation (null IDs, empty list, malformed UUIDs).
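Scenarios (2) and (4) might be sketched as follows; `buildService`, `seedExportedClaims`, `countExportRunClaimRows`, and `testRunId` are hypothetical helpers wired to a local Supabase instance, not part of the spec.

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('all candidates already exported: empty result, no mark', () async {
    final service = buildService();
    await seedExportedClaims(['claim-a', 'claim-b']);
    final result = await service.filterAlreadyExported(['claim-a', 'claim-b']);
    // Scenario 2: the guard returns empty and the caller skips the export.
    expect(result, isEmpty);
  });

  test('idempotent mark: second call adds no duplicate rows', () async {
    final service = buildService();
    await service.markClaimsAsExported(testRunId, ['claim-c', 'claim-d']);
    await service.markClaimsAsExported(testRunId, ['claim-c', 'claim-d']);
    // Scenario 4: exactly one row per (run, claim) pair survives.
    expect(await countExportRunClaimRows(testRunId), 2);
  });
}
```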

Component
Double-Export Guard
Type: service · Size: medium
Epic Risks (3)
Risk 1 (dependency): high impact, medium probability

The Xledger CSV/JSON import specification may not be available in full detail at implementation time. If the field format, column ordering, encoding requirements, or required fields differ from assumptions, the generated file will be rejected by Xledger on first production use.

Mitigation & Contingency

Mitigation: Obtain the official Xledger import specification document from Blindeforbundet before starting XledgerExporter implementation. Build a dedicated acceptance test that validates a sample export file against all documented constraints.

Contingency: If the spec arrives late, implement a configurable column-mapping layer so that field order and names can be adjusted via configuration without code changes. Ship a file-based export that coordinators can manually verify before connecting to Xledger import.

Risk 2 (technical): high impact, low probability

The atomic claim-marking transaction in Double-Export Guard could fail under high concurrency if two coordinators trigger an export for overlapping date ranges simultaneously, potentially allowing duplicate exports to proceed past the guard.

Mitigation & Contingency

Mitigation: Use a database-level advisory lock or a SELECT FOR UPDATE on the relevant claim rows within the export transaction to serialize concurrent exports per organization. Add an integration test that simulates concurrent export triggers.

Contingency: If locking proves problematic at the database level, implement an application-level distributed lock using a Supabase row in a dedicated export_locks table with an expiry timestamp and automatic cleanup on failure.
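The advisory-lock mitigation could be a one-line addition inside the export RPC, along these lines; the `p_organization_id` parameter is an assumption about how exports are scoped.

```sql
-- Transaction-scoped advisory lock: concurrent exports for the same
-- organization serialize instead of racing. hashtext() maps the org
-- UUID onto the bigint key pg_advisory_xact_lock expects; the lock is
-- released automatically at commit or rollback, so no cleanup is needed.
perform pg_advisory_xact_lock(hashtext(p_organization_id::text));
```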

Risk 3 (integration): medium impact, high probability

HLF's Dynamics portal API endpoint may not be available or documented in time for Phase 1, leaving DynamicsExporter unable to be validated against a real system and potentially shipping with an incorrect field schema.

Mitigation & Contingency

Mitigation: Design DynamicsExporter for file-based export first (CSV/JSON download), with the API push implemented behind a feature flag. Request a Dynamics test environment or sandbox from HLF as early as possible.

Contingency: Ship DynamicsExporter as a file export only for Phase 1. Phase the API push integration into a follow-on task once the Dynamics sandbox is available, using the same AccountingExporter interface with no breaking changes.