Priority: critical · Complexity: high · Category: backend · Status: pending · Owner: backend specialist · Tier 3

Acceptance Criteria

Calling runExport(ExportRequest) creates an audit record with status 'initialised' before any downstream service is called
After querying activities, the audit record status transitions to 'querying_activities'; after mapping, to 'mapping_columns'; after file generation, to 'generating_file'; after storage, to 'storing_file'; and on success, to 'completed'
For a successful non-empty export, the returned ExportResult contains a non-null downloadUrl (a Supabase signed URL), the auditId, the row count, and the file size in bytes
If BufdirActivityQueryService returns zero rows, the orchestrator marks the audit record as 'completed_empty' and returns an ExportResult with rowCount = 0 and a null downloadUrl, rather than generating an empty file
Each pipeline step is wrapped so that an unhandled exception transitions the audit record to 'failed' with an errorMessage before re-throwing
The orchestrator does not catch exceptions beyond its status-update responsibility; error classification is delegated to task-009
Attachment bundling is only invoked when ExportRequest.includeAttachments is true; the step is skipped otherwise
All status transitions are written to Supabase atomically via BufdirExportAuditService before the next step begins
ExportResult is a sealed/union type with ExportSuccess and ExportFailure variants
The orchestrator is stateless: all mutable state lives in the Supabase audit record, not in memory
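The sealed result type required by the criteria above could be sketched as follows; field names beyond those listed in the criteria (e.g. fileSizeBytes) are assumptions, not the final API:

```dart
/// Sealed result type (Dart 3+) forcing exhaustive handling at the call site.
sealed class ExportResult {
  const ExportResult();
}

class ExportSuccess extends ExportResult {
  const ExportSuccess({
    required this.auditId,
    required this.rowCount,
    this.downloadUrl, // null for the zero-row 'completed_empty' path
    this.fileSizeBytes, // null when no file was generated
  });

  final String auditId;
  final int rowCount;
  final String? downloadUrl;
  final int? fileSizeBytes;
}

class ExportFailure extends ExportResult {
  const ExportFailure({required this.auditId, required this.errorMessage});

  final String auditId;
  final String errorMessage;
}
```

At the call site, a `switch (result) { ExportSuccess s => ..., ExportFailure f => ... }` expression is then checked for exhaustiveness by the compiler.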

Technical Requirements

Frameworks
Flutter
Riverpod
Dart
BLoC (for UI state consumption)
APIs
Supabase Database REST API
Supabase Storage API (signed URL generation)
BufdirActivityQueryService internal API
BufdirColumnMapper internal API
Excel/CSV generator internal API
Data Models
ExportRequest
ExportResult
ExportSuccess
ExportFailure
BufdirExportAuditRecord
BufdirActivityRow
MappedColumnRow
Performance Requirements
Full pipeline for a 500-row export must complete within 30 seconds on a stable connection
Activity query and file generation steps must run in a background isolate (compute()) to avoid blocking the UI thread
Supabase signed URL generated for downloadUrl must have a minimum TTL of 24 hours
Security Requirements
ExportRequest must carry the authenticated user's organisationId; the orchestrator must verify the requesting user belongs to that organisation before querying activities
Signed download URLs must be scoped to the specific export file path and not expose other files in the bucket
Audit records must never be deleted by the orchestrator; only status updates are permitted
All Supabase calls must use the authenticated client (not the anon key)
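The organisation check in the requirements above might look like the following sketch; the `organisation_members` table and its column names are assumptions about the schema, not confirmed by this spec:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Verifies the authenticated user belongs to the organisation named in the
/// request before any activity query runs. Throws if membership is missing.
Future<void> _verifyOrganisation(
    SupabaseClient supabase, ExportRequest request) async {
  final userId = supabase.auth.currentUser?.id;
  if (userId == null) {
    throw StateError('Export requires an authenticated user');
  }
  // Assumed membership table; adjust to the real schema.
  final membership = await supabase
      .from('organisation_members')
      .select('organisation_id')
      .eq('user_id', userId)
      .eq('organisation_id', request.organisationId)
      .maybeSingle();
  if (membership == null) {
    throw StateError(
        'User is not a member of organisation ${request.organisationId}');
  }
}
```

Because the authenticated client is used (never the anon key), row-level security policies on the membership table also apply to this check.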

Execution Context

Execution Tier
Tier 3

Tier 3 - 413 tasks

Can start after Tier 2 completes

Implementation Notes

Structure the orchestrator as a sequence of private async methods (_createAudit, _queryActivities, _mapColumns, _generateFile, _bundleAttachments, _storeAndFinalize), each called in order from a single public runExport method. Wrap each private method call in a try/catch that calls _markFailed(auditId, e.toString()) before rethrowing; this keeps the happy-path code linear and readable. Use Dart's sealed classes (Dart 3+) for ExportResult to force exhaustive handling at the call site. Run _generateFile inside compute(), passing a plain data record (avoid passing closures).
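The sequencing described above might be sketched as follows. Dependency names (_audit) and helper signatures are assumptions, and a single outer try/catch is shown as a compact equivalent of wrapping each step individually:

```dart
// Sketch only: private helpers update the audit status (via the injected
// BufdirExportAuditService) before performing their step.
Future<ExportResult> runExport(ExportRequest request) async {
  final auditId = await _createAudit(request); // status 'initialised'
  try {
    final rows = await _queryActivities(request, auditId); // 'querying_activities'
    if (rows.isEmpty) {
      // Zero-row path: no file is generated.
      await _audit.updateStatus(auditId, 'completed_empty');
      return ExportSuccess(auditId: auditId, rowCount: 0, downloadUrl: null);
    }
    final mapped = await _mapColumns(rows, auditId); // 'mapping_columns'
    final file = await _generateFile(mapped, auditId); // 'generating_file', in compute()
    if (request.includeAttachments) {
      await _bundleAttachments(file, auditId); // skipped when flag is false
    }
    return await _storeAndFinalize(file, auditId, rows.length); // 'storing_file' then 'completed'
  } catch (e) {
    await _markFailed(auditId, e.toString()); // status 'failed' + errorMessage
    rethrow;
  }
}
```

The orchestrator itself holds no mutable state; every transition is persisted through the audit service before the next step begins.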

The BufdirExportOrchestratorService should be registered as a Riverpod AsyncNotifierProvider so the UI can watch export progress via audit record polling. Keep the orchestrator thin: it should contain sequencing logic only, never business rules about columns or activity filtering.
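The Riverpod wiring could look like this sketch (provider names and the notifier's state type are assumptions; Riverpod 2.x AsyncNotifier API):

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

/// Holds the latest export result; null means no export has been started.
class BufdirExportNotifier extends AsyncNotifier<ExportResult?> {
  @override
  Future<ExportResult?> build() async => null;

  Future<void> start(ExportRequest request) async {
    state = const AsyncLoading();
    // AsyncValue.guard captures thrown errors into AsyncError for the UI.
    state = await AsyncValue.guard(
      () => ref.read(bufdirExportOrchestratorProvider).runExport(request),
    );
  }
}

final bufdirExportProvider =
    AsyncNotifierProvider<BufdirExportNotifier, ExportResult?>(
        BufdirExportNotifier.new);
```

The UI watches `bufdirExportProvider` for the terminal result and polls the audit record separately for intermediate status updates, keeping the orchestrator itself free of UI concerns.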

Testing Requirements

Unit tests (flutter_test): use mock implementations of BufdirActivityQueryService, BufdirColumnMapper, the file generator, and BufdirExportAuditService to verify correct step sequencing via call-order assertions. Test that the zero-row path returns ExportSuccess with rowCount = 0 and a null downloadUrl. Test that audit status transitions occur in the correct order using a recording fake. Test that includeAttachments = false skips the bundling step entirely.
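A call-order assertion can be driven by fakes that append to a shared log; the fake class names and fixture variables below are assumptions for illustration:

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('pipeline steps run in order and bundling is skipped', () async {
    final calls = <String>[];
    // Each fake records its name into `calls` when invoked (assumed fakes).
    final orchestrator = BufdirExportOrchestratorService(
      auditService: RecordingAuditService(calls),
      queryService: FakeQueryService(calls, rows: threeSampleRows),
      columnMapper: FakeColumnMapper(calls),
      fileGenerator: FakeFileGenerator(calls),
    );

    await orchestrator.runExport(requestWithoutAttachments);

    // Step sequencing: audit creation first, then the pipeline in order.
    expect(
      calls,
      containsAllInOrder(['createAudit', 'query', 'map', 'generate', 'store']),
    );
    // includeAttachments = false must skip bundling entirely.
    expect(calls, isNot(contains('bundleAttachments')));
  });
}
```

The same recording fake can assert the audit status sequence ('initialised' → 'querying_activities' → ... → 'completed') by logging each updateStatus call.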

Integration test (Supabase local emulator): run a full pipeline against real Supabase tables with seed data for NHF, Blindeforbundet, and HLF to confirm the audit record reaches 'completed' status and the signed URL is accessible. Target ≥85% line coverage on the orchestrator class.

Component
Bufdir Export Orchestrator Service
Type: service · Priority: high
Epic Risks (2)
Risk 1 (scope): medium impact, medium probability

Bufdir's column schema may have per-field business rules (conditional required fields, cross-field validation, organisation-specific category taxonomies) that cannot be expressed in a simple key-value mapping configuration. If the configuration model is too simple, supporting NHF's specific requirements will require hardcoded organisation logic, undermining the configuration-driven design.

Mitigation & Contingency

Mitigation: Design the column configuration schema as a full JSON document supporting field-level transformation rules, conditional expressions, and org-specific value enumerations. Validate the design against a real NHF Bufdir Excel template before implementation begins.

Contingency: If the configuration model cannot express all required rules, implement a thin transformation plugin interface where org-specific logic can be added as a named Dart class registered against the organisation ID, with the JSON config covering only the common cases.
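The contingency's plugin interface might be sketched as follows; all names here are hypothetical, since the spec only requires "a named Dart class registered against the organisation ID":

```dart
/// Org-specific transformation hook, applied after the JSON-config mapping.
abstract interface class BufdirTransformPlugin {
  MappedColumnRow transform(MappedColumnRow row);
}

/// Hypothetical NHF plugin carrying rules the JSON config cannot express
/// (conditional required fields, cross-field validation, etc.).
class NhfTransformPlugin implements BufdirTransformPlugin {
  @override
  MappedColumnRow transform(MappedColumnRow row) {
    // NHF-specific conditional rules would be applied here.
    return row;
  }
}

/// Registry keyed by organisation ID; organisations without an entry fall
/// back to the common JSON-config path.
final transformPlugins = <String, BufdirTransformPlugin>{
  'nhf': NhfTransformPlugin(),
};
```

This keeps the configuration-driven design intact: the JSON document covers common cases, and hardcoded logic is isolated behind a narrow, per-organisation interface.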

Risk 2 (technical): high impact, medium probability

For large organisations like NHF with potentially tens of thousands of activity records, the full export pipeline (query + map + generate + bundle + upload) may exceed Supabase Edge Function execution time limits (typically 150s), causing silent timeouts that leave audit records in a pending state indefinitely.

Mitigation & Contingency

Mitigation: Implement the orchestrator as a background Dart isolate with progress streaming rather than a synchronous Edge Function call. Use chunked processing for the query and mapping phases to reduce peak memory usage. Profile against realistic NHF data volumes in a staging environment.

Contingency: If processing time cannot be reduced below the timeout threshold, implement an asynchronous job model where the export is queued, processed in the background, and the user is notified via push notification when the download is ready — treating it as an eventual rather than synchronous operation.