Priority: critical · Complexity: medium · Type: integration · Status: pending · Assignee: integration specialist · Tier: 3

Acceptance Criteria

ReportReexportCoordinator has a method `Future<ReportExportResult> reexport(ReportPeriodParameters params)` that invokes the existing Bufdir export pipeline service
The export pipeline is called via its existing public interface — no internal pipeline logic is duplicated or copied into the coordinator
The same `ReportPeriodParameters` fields (start date, end date, reporting period ID, aggregation scope) are mapped 1:1 to the export pipeline's expected input type
Given identical period parameters, re-export produces a file with the same content as the original export (byte-for-byte equivalence test on a known deterministic dataset)
If the export pipeline throws any exception, the coordinator propagates a typed `ReexportPipelineException` wrapping the original cause — it does not swallow the error silently
The coordinator does not perform any aggregation, CSV formatting, or file writing itself — all such logic remains in the existing pipeline
Unit tests verify that the pipeline's invoke method is called exactly once with the correctly mapped parameters
Unit tests verify that a pipeline exception is wrapped and rethrown as `ReexportPipelineException`
An integration test on staging data confirms the re-exported file matches the original for a known period
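The criteria above imply a small public surface for the coordinator. A minimal sketch in Dart, where the field types on `ReportPeriodParameters` and the single `storagePath` field on `ReportExportResult` are assumptions beyond what the spec states:

```dart
/// Parameters for a re-export, mapped 1:1 onto the pipeline's input type.
/// Field types here are assumptions; the spec names only the fields.
class ReportPeriodParameters {
  final DateTime startDate;
  final DateTime endDate;
  final String reportingPeriodId;
  final String aggregationScope;

  const ReportPeriodParameters({
    required this.startDate,
    required this.endDate,
    required this.reportingPeriodId,
    required this.aggregationScope,
  });
}

/// Result returned unmodified from the export pipeline.
/// The [storagePath] shape is an assumption for illustration.
class ReportExportResult {
  final String storagePath;
  const ReportExportResult(this.storagePath);
}

abstract class ReportReexportCoordinator {
  /// Delegates to the existing Bufdir export pipeline; performs no
  /// aggregation, CSV formatting, or file writing itself.
  Future<ReportExportResult> reexport(ReportPeriodParameters params);
}
```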

Technical Requirements

Frameworks
Flutter
Riverpod
BLoC
APIs
Supabase Edge Functions (Deno) — if export pipeline runs server-side
Supabase PostgreSQL 15 (aggregation queries)
Bufdir Reporting API (if the pipeline submits directly)
Data models
bufdir_export_audit_log
bufdir_column_schema
activity
assignment
Performance requirements
Re-export pipeline invocation must complete within the same time budget as the original export
No redundant data fetches — period parameters are passed in, not re-fetched by the coordinator
Export pipeline is not called more than once per coordinator invocation
Security requirements
Coordinator must verify the acting user has the 'coordinator' or 'org_admin' role before invoking the pipeline — not just RLS
Bufdir Reporting API credentials must not be passed through the mobile client — pipeline invocation must be server-side via Edge Function if it touches the Bufdir API
Organisation context from the authenticated JWT must be propagated to the pipeline to prevent cross-organisation report generation
The re-exported file must be stored in the private Supabase Storage bucket — never in a public bucket

Execution Context

Execution Tier
Tier 3 (413 tasks)

Can start after Tier 2 completes.

Implementation Notes

The key design constraint is zero duplication: the coordinator is a thin orchestrator that calls the pipeline, not a reimplementation of it. Map `ReportPeriodParameters` to the pipeline input DTO in a dedicated mapper method or extension — keep the mapping logic out of the coordinator's main method to make it independently testable. If the existing export pipeline is implemented as a Supabase Edge Function, the coordinator calls it via Supabase's `functions.invoke()` method. If it is a Dart service class, inject it via Riverpod and call its public method.
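The mapper might look like the sketch below, assuming a hypothetical pipeline input DTO named `ExportPipelineInput` (the real DTO name and shape come from the existing export pipeline and must be confirmed with its owner):

```dart
// Hypothetical pipeline input DTO; the real type lives in the existing
// export pipeline and may differ in name and shape.
class ExportPipelineInput {
  final DateTime start;
  final DateTime end;
  final String periodId;
  final String scope;
  const ExportPipelineInput(this.start, this.end, this.periodId, this.scope);
}

class ReportPeriodParameters {
  final DateTime startDate;
  final DateTime endDate;
  final String reportingPeriodId;
  final String aggregationScope;
  const ReportPeriodParameters(
      this.startDate, this.endDate, this.reportingPeriodId, this.aggregationScope);
}

// The 1:1 mapping lives in an extension so it can be tested apart from
// the coordinator's main method, per the note above.
extension ReportPeriodParametersMapper on ReportPeriodParameters {
  ExportPipelineInput toPipelineInput() =>
      ExportPipelineInput(startDate, endDate, reportingPeriodId, aggregationScope);
}
```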

Use `try-catch` around the pipeline invocation and rethrow as `ReexportPipelineException(cause: e)`. Byte-for-byte equivalence requires that the pipeline does not include timestamps or UUIDs generated at invocation time in the CSV body — verify this with the export pipeline owner before implementation.
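The wrapping pattern can be sketched as follows; the guard helper name is an assumption, and the spec fixes only the exception type and its `cause` field:

```dart
/// Typed wrapper thrown when the export pipeline fails. The original
/// error is preserved as [cause] so nothing is swallowed silently.
class ReexportPipelineException implements Exception {
  final Object cause;
  ReexportPipelineException({required this.cause});

  @override
  String toString() => 'ReexportPipelineException(cause: $cause)';
}

/// Guard placed around the pipeline invocation: any thrown error is
/// rethrown wrapped, never suppressed. Helper name is illustrative.
Future<T> invokePipelineGuarded<T>(Future<T> Function() invoke) async {
  try {
    return await invoke();
  } catch (e) {
    throw ReexportPipelineException(cause: e);
  }
}
```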

Testing Requirements

Unit tests (flutter_test) with mock export pipeline: (1) verify `invoke` is called once with correctly mapped parameters, (2) verify `ReexportPipelineException` is thrown when the pipeline throws, (3) verify the coordinator returns the pipeline's result object unmodified. Integration test: for a known seeded dataset with a fixed period range, run the original export, then run re-export with the same parameters and assert the resulting CSV content matches line-by-line. Contract test: assert the `ReportPeriodParameters` to export pipeline input mapping handles all field types without loss (dates, UUIDs, enums).
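The first two unit tests can be driven by a hand-rolled fake like the sketch below (a mocking package such as mocktail would work equally well; the fake's name and return type are assumptions):

```dart
/// Hand-rolled fake standing in for the export pipeline in unit tests.
/// Records how often it was invoked and with what input, so tests can
/// assert "called exactly once with correctly mapped parameters".
class FakeExportPipeline {
  int invocations = 0;
  Object? lastInput;

  Future<String> invoke(Object input) async {
    invocations += 1;
    lastInput = input;
    return 'reexport.csv'; // assumed result shape for illustration
  }
}
```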

Component
Report Re-export Coordinator
Type: service · Complexity: medium
Epic Risks (3)
Risk 1: dependency (high impact, medium probability)

The ReportReexportCoordinator must invoke the Bufdir export pipeline defined in the bufdir-report-export feature. If that feature's internal API changes (renamed services, altered parameters), the re-export coordinator will break silently at runtime.

Mitigation & Contingency

Mitigation: Define a stable, versioned interface (abstract class or Dart interface) for the export pipeline entry point. The re-export coordinator depends only on this interface, not on concrete export service internals. Document the contract in both features.
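The versioned entry point the mitigation calls for might look like this sketch; the interface and type names are assumptions, and the real contract should be agreed with the export pipeline owner:

```dart
// Hypothetical mapped input type; stands in for the pipeline's real DTO.
class ExportPipelineInput {
  final String periodId;
  const ExportPipelineInput(this.periodId);
}

/// Stable, versioned entry point for the Bufdir export pipeline.
/// Both bufdir-report-export and the re-export coordinator depend on
/// this abstraction, never on concrete pipeline internals.
abstract class BufdirExportPipelineV1 {
  Future<String> export(ExportPipelineInput input);
}

/// Trivial stub showing that callers program against the interface.
class StubExportPipeline implements BufdirExportPipelineV1 {
  @override
  Future<String> export(ExportPipelineInput input) async =>
      'exports/${input.periodId}.csv';
}
```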

Contingency: If a pipeline change breaks the re-export coordinator, surface a clear 'regeneration unavailable' message to the coordinator, with instructions to use the primary export screen for the same period as a workaround while the interface mismatch is fixed.

Risk 2: security (high impact, low probability)

The audit trail must be immutable — coordinators must not be able to edit or delete past events. If the RLS policies allow UPDATE or DELETE on audit event rows, a coordinator could suppress evidence of a re-export or failed submission.

Mitigation & Contingency

Mitigation: Apply INSERT-only RLS policies to the audit events table (no UPDATE, no DELETE for any non-service-role user). Use a separate service-role key for writing audit events, never the user's JWT. Validate this in integration tests by asserting that UPDATE and DELETE calls from coordinator-role sessions are rejected with RLS errors.

Contingency: If immutability is compromised before detection, run a database audit comparing the audit log against the main history table timestamps to identify tampered records, restore from backup if needed, and issue a patch RLS migration immediately.

Risk 3: technical (low impact, low probability)

The user stories require filter state (year, period type, status) to persist within a session so coordinators do not lose context when navigating away. Implementing this with Riverpod state management could cause stale filter state if the provider is not properly scoped to the session lifecycle.

Mitigation & Contingency

Mitigation: Scope the filter state provider to the reporting route managed by the router, not globally. Use autoDispose with a keepAlive flag tied to the session so filters reset on logout but persist on tab switches within the same session.

Contingency: If filter state becomes stale or leaks between sessions, add an explicit reset in the logout handler that disposes all scoped providers. This is a UX degradation (coordinator must re-apply filters) rather than a data integrity issue.