Priority: critical · Complexity: medium · Area: backend · Status: pending · Owner: backend specialist · Tier: 1

Acceptance Criteria

ExportRun Dart model class exists with all fields matching the database schema, using fromJson/toJson with snake_case ↔ camelCase mapping
ExportRunStatus is a Dart enum with values: pending, running, completed, failed — with a fromString() factory for safe deserialization
createRun(orgId, initiatedBy, dateRangeStart, dateRangeEnd, targetSystem) inserts a new row and returns the created ExportRun with server-generated run_id and created_at
updateRunStatus(runId, status, {recordCount, fileUrl, completedAt}) performs a targeted UPDATE and throws ExportRunNotFoundException if the row is not found
getRunById(runId) returns ExportRun? (nullable) — returns null if not found, does not throw
getRunsByOrg(orgId, {limit = 20, offset = 0}) returns paginated List<ExportRun> ordered by created_at DESC
markClaimsExported(List<String> claimIds) bulk-updates exported_at = now() on expense_claims rows using an IN clause — runs as a single SQL statement, not N individual updates
getLastExportDate(orgId, targetSystem) returns DateTime? representing the most recent completed_at for that org+system combination
All methods handle Supabase PostgrestException and wrap it in a domain-specific ExportRunException with a human-readable message
Repository is exposed via a Riverpod Provider<ExportRunRepository> using ref.watch(supabaseClientProvider)
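The model and enum criteria above could be sketched as follows. Field names mirror the columns listed in the criteria; the fallback value for unknown statuses is a design choice, not something the criteria specify.

```dart
/// Status of an export run, mirroring the database enum.
enum ExportRunStatus {
  pending,
  running,
  completed,
  failed;

  /// Safe deserialization: unknown or null values fall back to `pending`
  /// (the fallback choice is an assumption; pick what suits the domain).
  factory ExportRunStatus.fromString(String? value) =>
      ExportRunStatus.values.firstWhere(
        (s) => s.name == value,
        orElse: () => ExportRunStatus.pending,
      );
}

class ExportRun {
  final String runId;
  final String orgId;
  final String initiatedBy;
  final DateTime createdAt;
  final ExportRunStatus status;
  final int? recordCount;
  final String? fileUrl;
  final DateTime? completedAt;

  const ExportRun({
    required this.runId,
    required this.orgId,
    required this.initiatedBy,
    required this.createdAt,
    required this.status,
    this.recordCount,
    this.fileUrl,
    this.completedAt,
  });

  /// snake_case (database) -> camelCase (Dart) mapping.
  factory ExportRun.fromJson(Map<String, dynamic> json) => ExportRun(
        runId: json['run_id'] as String,
        orgId: json['org_id'] as String,
        initiatedBy: json['initiated_by'] as String,
        createdAt: DateTime.parse(json['created_at'] as String),
        status: ExportRunStatus.fromString(json['status'] as String?),
        recordCount: json['record_count'] as int?,
        fileUrl: json['file_url'] as String?,
        completedAt: json['completed_at'] == null
            ? null
            : DateTime.parse(json['completed_at'] as String),
      );
}
```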

Technical Requirements

Frameworks: Flutter, Riverpod, supabase_flutter
APIs: Supabase PostgREST (via the supabase_flutter client)
Data models: ExportRun, ExportRunStatus, expense_claims

Performance requirements
markClaimsExported must issue a single bulk UPDATE, not a loop; this is critical for runs with 100+ claims
getRunsByOrg must use server-side pagination (range()), not client-side filtering

Security requirements
Never pass org_id from the client as a filter parameter; rely on RLS to enforce tenant isolation
Do not expose the Supabase service role key anywhere in Flutter code
Validate that the run belongs to the current user's org before returning the sensitive file_url

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Place the repository in lib/features/accounting/data/repositories/export_run_repository.dart, following the existing feature-slice folder structure. Define the ExportRun model in lib/features/accounting/domain/models/export_run.dart. Use Riverpod's @riverpod code generation if the project already uses it; otherwise use a plain Provider. For markClaimsExported, use supabase.from('expense_claims').update({'exported_at': DateTime.now().toUtc().toIso8601String()}).inFilter('id', claimIds) — note the IN filter is named in_ in supabase_flutter v1 and inFilter in v2 — and confirm the column name matches the migration.
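A minimal sketch of the two performance-sensitive methods, assuming supabase_flutter v2 and an export_runs table name (verify both against the project):

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

class ExportRunRepository {
  ExportRunRepository(this._client);
  final SupabaseClient _client;

  /// One bulk UPDATE with an IN clause: a single round trip, not N updates.
  Future<void> markClaimsExported(List<String> claimIds) async {
    if (claimIds.isEmpty) return;
    await _client
        .from('expense_claims')
        .update({'exported_at': DateTime.now().toUtc().toIso8601String()})
        .inFilter('id', claimIds); // `.in_(...)` on supabase_flutter v1
  }

  /// Server-side pagination via range(); no org_id filter is applied here
  /// because RLS already scopes rows to the caller's org.
  Future<List<ExportRun>> getRunsByOrg(
    String orgId, {
    int limit = 20,
    int offset = 0,
  }) async {
    final rows = await _client
        .from('export_runs')
        .select()
        .order('created_at', ascending: false)
        .range(offset, offset + limit - 1);
    return rows
        .map((r) => ExportRun.fromJson(r as Map<String, dynamic>))
        .toList();
  }
}
```

Note that range() takes inclusive start and end offsets, hence offset + limit - 1 for a page of exactly limit rows.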

Treat PostgrestException with code '42501' (insufficient privilege) as a permissions error with a user-facing message. Do not store repository state in a StateNotifier; this is a pure data-access layer with no local state.
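The error-wrapping requirement could look like the sketch below; the _guard helper name and the ExportRunException shape are illustrative, not prescribed:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Domain-specific exception with a human-readable message (illustrative).
class ExportRunException implements Exception {
  ExportRunException(this.message);
  final String message;

  @override
  String toString() => 'ExportRunException: $message';
}

/// Wrap every repository call so Supabase errors never leak to callers.
Future<T> _guard<T>(Future<T> Function() op) async {
  try {
    return await op();
  } on PostgrestException catch (e) {
    if (e.code == '42501') {
      // insufficient_privilege: surface as a permissions error.
      throw ExportRunException(
          'You do not have permission to perform this export action.');
    }
    throw ExportRunException('Export run operation failed: ${e.message}');
  }
}
```

Each public method then becomes a one-liner, e.g. `getRunById(id) => _guard(() => ...)`, so the PostgrestException-to-domain-exception mapping lives in exactly one place.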

Testing Requirements

Write unit tests using flutter_test with a mocked SupabaseClient (mockito or a manual mock). Test createRun: verify the correct payload shape is sent to the Supabase insert. Test updateRunStatus: verify the correct fields are sent on update, and that ExportRunNotFoundException is thrown on an empty response. Test markClaimsExported: verify a single bulk update call (one .update() with an IN filter) is made, not N calls.

Test getLastExportDate: verify correct filter on target_system and ordering. Test ExportRun.fromJson: cover null fields (record_count, file_url, completed_at), valid timestamps, and unknown status values. Aim for 90%+ line coverage on the repository class.
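A sketch of the fromJson null-field and unknown-status cases, assuming the ExportRun model is imported from its actual location:

```dart
import 'package:flutter_test/flutter_test.dart';
// import the ExportRun model from lib/features/accounting/domain/models/export_run.dart

void main() {
  test('fromJson tolerates null optionals and an unknown status', () {
    final run = ExportRun.fromJson({
      'run_id': 'r1',
      'org_id': 'o1',
      'initiated_by': 'u1',
      'created_at': '2024-01-01T00:00:00Z',
      'status': 'archived', // not a known ExportRunStatus value
      'record_count': null,
      'file_url': null,
      'completed_at': null,
    });

    expect(run.recordCount, isNull);
    expect(run.fileUrl, isNull);
    expect(run.completedAt, isNull);
    // Assert the unknown-status fallback matches whatever
    // ExportRunStatus.fromString is implemented to return.
  });
}
```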

Component
Export Run Repository
Layer: data · Complexity: medium
Epic Risks (3)
Impact: high · Probability: medium · Category: technical

Adding exported_at and export_run_id columns to expense_claims requires a live migration on a table shared with the approval workflow. A poorly timed migration could lock the table and block claim submissions or approvals.

Mitigation & Contingency

Mitigation: Use non-blocking ADD COLUMN with a DEFAULT of NULL (no backfill needed) executed during a low-traffic window. Test migration rollback on a staging replica before production deployment.

Contingency: If migration causes table lock contention, roll back and reschedule for a maintenance window. Use a feature flag to gate the export UI until the migration completes successfully.

Impact: medium · Probability: high · Category: scope

Chart of accounts mapping configurations for Xledger and Dynamics may not be fully specified by stakeholders at development time, leaving the mapper with incomplete data and causing validation failures for unmapped expense categories.

Mitigation & Contingency

Mitigation: Implement the mapper to return a structured validation error (not a crash) for any unmapped field, and surface these errors clearly in the export confirmation dialog. Request full mapping tables from Blindeforbundet and HLF stakeholders as a pre-condition for this epic.

Contingency: If mappings arrive incomplete, ship the mapper with the available subset and mark unmapped categories as excluded (skipped with reason). Coordinators see which categories are skipped and can manually submit those records.

Impact: medium · Probability: medium · Category: dependency

Supabase Vault configuration for storing per-org accounting credentials may require infra permissions or environment secrets not yet provisioned in staging or production, blocking development and testing of credential retrieval.

Mitigation & Contingency

Mitigation: Provision Vault secrets and environment configuration in staging as the first task of this epic. Document the exact secret naming convention and rotation procedure before implementation begins.

Contingency: If Vault is unavailable, use environment variables scoped to the Edge Function as a temporary fallback for development. Block production deployment until Vault-based storage is confirmed operational.