Priority: high | Complexity: medium | Area: backend | Status: pending | Assignee: backend specialist | Tier: 4

Acceptance Criteria

After a successful re-export, the coordinator calls the repository to update the `file_reference` field of the corresponding `bufdir_export_audit_log` record in a single database operation
The database update and the file reference assignment happen atomically — either both succeed or neither takes effect (use a Supabase database transaction or a single UPDATE with RETURNING)
If the database update fails after the file was generated, the coordinator throws a `FileReferenceUpdateException` and does not attempt to resolve a signed URL
After a successful database update, the coordinator calls `resolveDownloadUrl` (from task-003) to generate and return a signed URL
If signed URL resolution fails after a successful database update, the coordinator returns null for the URL but does NOT roll back the file reference update — the file reference is authoritative, the signed URL is ephemeral
The returned value is a `ReexportResult` object containing both the new file reference (String) and the nullable signed URL (String?)
The `updated_at` timestamp on the history record is updated as part of the same atomic operation
Unit tests cover: successful update + successful URL, successful update + failed URL (returns null URL with non-null file reference), failed update (exception thrown), concurrent update guard (optimistic lock or last-write-wins semantics documented)
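Taken together, the criteria above suggest a coordinator method shaped roughly like this (a sketch only; the class name, constructor dependencies, and `completeReexport` signature are assumptions, not part of the spec):

```dart
// Sketch of the acceptance-criteria flow. ReportHistoryRepository,
// ReportHistoryService, FileReferenceUpdateException and ReexportResult
// are assumed to be defined elsewhere in the feature.
class ReportReexportCoordinator {
  ReportReexportCoordinator(this._repository, this._historyService);

  final ReportHistoryRepository _repository;
  final ReportHistoryService _historyService;

  Future<ReexportResult> completeReexport({
    required String historyEntryId,
    required String newFileReference,
  }) async {
    // Atomic step: file_reference and updated_at in one UPDATE ... RETURNING.
    try {
      await _repository.updateFileReference(historyEntryId, newFileReference);
    } catch (e) {
      // No signed URL is resolved if the update failed.
      throw FileReferenceUpdateException('file_reference update failed: $e');
    }

    // Best-effort step: a URL failure is swallowed because the persisted
    // file_reference is authoritative and the signed URL is ephemeral.
    String? signedUrl;
    try {
      signedUrl = await _historyService.resolveDownloadUrl(newFileReference);
    } catch (_) {
      signedUrl = null; // The UI can request a fresh signed URL on demand.
    }

    return ReexportResult(
      fileReference: newFileReference,
      signedUrl: signedUrl,
    );
  }
}
```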

Technical Requirements

Frameworks
Flutter
Riverpod
BLoC
APIs
Supabase PostgreSQL 15 (UPDATE ... RETURNING)
Supabase Storage SDK (createSignedUrl)
Data models
`bufdir_export_audit_log`
Performance requirements
Database update must be a single round-trip (UPDATE with RETURNING, not a separate UPDATE followed by a SELECT)
The full operation (update + URL resolution) must complete in under 3 seconds under normal conditions
No polling or retry loops — fail fast on error
Security requirements
Only the organisation's own history records may be updated — RLS on bufdir_export_audit_log enforces this at the database level
The new file reference path must follow the validated pattern from task-003 before being written to the database
Signed URL returned must never be persisted in the database — it is ephemeral and returned only to the requesting coordinator call
The acting user must hold the 'coordinator' or 'org_admin' role — enforce at service level before the update call
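The role requirement in the last point could be enforced with a small guard before the update call (a sketch; how the acting user's role is obtained, and the exception type, are assumptions):

```dart
// Illustrative service-level role guard. The role values come from the
// spec; UnauthorizedReexportException is a hypothetical exception type.
const allowedReexportRoles = {'coordinator', 'org_admin'};

void assertCanReexport(String actingUserRole) {
  if (!allowedReexportRoles.contains(actingUserRole)) {
    throw UnauthorizedReexportException(
      "Re-export requires the 'coordinator' or 'org_admin' role",
    );
  }
}
```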

Execution Context

Execution Tier
Tier 4

Tier 4 - 323 tasks

Can start after Tier 3 completes

Implementation Notes

Use Supabase's `.update({...}).eq('id', historyEntryId).select().single()` to perform the update and retrieve the updated row in one round-trip. Wrap the repository call in try-catch and rethrow as `FileReferenceUpdateException`. After a successful update, call `resolveDownloadUrl` (already implemented in task-003 on the ReportHistoryService — inject that service or reuse the same storage client method) and wrap the return in the `ReexportResult` value object. The signed URL resolution failure must be swallowed (return null) because the source of truth is the file_reference in the database — the UI can always generate a new signed URL on demand.
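The single-round-trip repository call described above might look like this (a sketch against the Supabase Dart SDK; the exception message wording is an assumption):

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

// FileReferenceUpdateException is assumed to be defined in the feature.
Future<Map<String, dynamic>> updateFileReference(
  SupabaseClient supabase,
  String historyEntryId,
  String newFileReference,
) async {
  try {
    // UPDATE ... RETURNING in one round-trip: update() + select() + single().
    // single() also throws if RLS filtered the row out (zero rows returned).
    return await supabase
        .from('bufdir_export_audit_log')
        .update({
          'file_reference': newFileReference,
          'updated_at': DateTime.now().toUtc().toIso8601String(),
        })
        .eq('id', historyEntryId)
        .select()
        .single();
  } on PostgrestException catch (e) {
    throw FileReferenceUpdateException(
      'file_reference update failed for $historyEntryId: ${e.message}',
    );
  }
}
```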

Define `ReexportResult` as an immutable value object with `final String fileReference` and `final String? signedUrl`. Do not add retry logic — log the URL failure at warning level and return.
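A minimal shape for the value object, per the description above (the `==`/`hashCode` overrides are a convenience, not a stated requirement):

```dart
/// Immutable result of a re-export: the authoritative file reference
/// plus an ephemeral, possibly-null signed URL.
class ReexportResult {
  const ReexportResult({required this.fileReference, this.signedUrl});

  final String fileReference;
  final String? signedUrl;

  @override
  bool operator ==(Object other) =>
      other is ReexportResult &&
      other.fileReference == fileReference &&
      other.signedUrl == signedUrl;

  @override
  int get hashCode => Object.hash(fileReference, signedUrl);
}
```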

Testing Requirements

Unit tests (flutter_test) with mock repository and mock storage client:
(1) successful update + successful URL — assert `ReexportResult` contains both values
(2) successful update + storage client throws — assert `ReexportResult.signedUrl` is null, `ReexportResult.fileReference` is set, and no exception is thrown
(3) repository update throws `PostgrestException` — assert `FileReferenceUpdateException` is thrown and the storage client is never called
(4) assert the `updated_at` field is included in the update payload

Integration test on staging: perform a re-export cycle end-to-end, assert the database record holds the new `file_reference` value after completion, and assert the returned signed URL is valid and accessible.
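Case (2), the swallowed URL failure, is the subtlest of the four; a mocktail-style sketch (mock wiring, variable names, and the example path are illustrative):

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

// repository, historyService and coordinator are assumed to be set up
// in the test file (MockRepository / MockHistoryService instances
// injected into the coordinator under test).
void main() {
  test('URL failure yields null signedUrl but keeps fileReference', () async {
    when(() => repository.updateFileReference(any(), any()))
        .thenAnswer((_) async {});
    when(() => historyService.resolveDownloadUrl(any()))
        .thenThrow(Exception('storage unavailable'));

    final result = await coordinator.completeReexport(
      historyEntryId: 'entry-1',
      newFileReference: 'org-1/reports/2024-h1.pdf',
    );

    expect(result.fileReference, 'org-1/reports/2024-h1.pdf');
    expect(result.signedUrl, isNull);
  });
}
```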

Component
Report Re-export Coordinator
Type: service | Complexity: medium
Epic Risks (3)
Risk 1: dependency (high impact, medium probability)

The ReportReexportCoordinator must invoke the Bufdir export pipeline defined in the bufdir-report-export feature. If that feature's internal API changes (renamed services, altered parameters), the re-export coordinator will break silently at runtime.

Mitigation & Contingency

Mitigation: Define a stable, versioned interface (abstract class or Dart interface) for the export pipeline entry point. The re-export coordinator depends only on this interface, not on concrete export service internals. Document the contract in both features.
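One possible shape for that versioned entry-point interface (method and type names are illustrative, not taken from the bufdir-report-export feature):

```dart
/// Stable contract between the export pipeline and the re-export
/// coordinator. Concrete export services implement this; the
/// coordinator depends only on the abstraction, never on internals.
abstract class BufdirExportPipeline {
  /// Regenerates the export file for an existing history entry and
  /// returns the storage path (file reference) of the new file.
  Future<String> regenerateExport({
    required String historyEntryId,
    required String organisationId,
  });
}
```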

Contingency: If the export pipeline breaks the re-export coordinator, fall back to surfacing a clear 'regeneration unavailable' message to the coordinator with instructions to use the primary export screen for the same period as a workaround, while the interface mismatch is fixed.

Risk 2: security (high impact, low probability)

The audit trail must be immutable — coordinators must not be able to edit or delete past events. If the RLS policies allow UPDATE or DELETE on audit event rows, a coordinator could suppress evidence of a re-export or failed submission.

Mitigation & Contingency

Mitigation: Apply INSERT-only RLS policies to the audit events table (no UPDATE, no DELETE for any non-service-role user). Use a separate service-role key for writing audit events, never the user's JWT. Validate this in integration tests by asserting that UPDATE and DELETE calls from coordinator-role sessions are rejected with RLS errors.
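The integration-test assertion in the mitigation could be sketched as follows. Note that PostgREST applies RLS as a filter, so a blocked UPDATE typically matches zero rows rather than raising an error; asserting on an empty RETURNING set is the safer check (the `audit_events` table name, `event_type` column, and client wiring are illustrative):

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

// coordinatorClient is a SupabaseClient authenticated with a
// coordinator-role user JWT (not the service-role key).
Future<void> assertAuditRowIsImmutable(
  SupabaseClient coordinatorClient,
  String existingEventId,
) async {
  final rows = await coordinatorClient
      .from('audit_events')
      .update({'event_type': 'tampered'})
      .eq('id', existingEventId)
      .select();

  // With INSERT-only RLS the row is not visible to UPDATE, so the
  // RETURNING set is empty and no data changed.
  expect(rows, isEmpty);
}
```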

Contingency: If immutability is compromised before detection, run a database audit comparing the audit log against the main history table timestamps to identify tampered records, restore from backup if needed, and issue a patch RLS migration immediately.

Risk 3: technical (low impact, low probability)

The user stories require filter state (year, period type, status) to persist within a session so coordinators do not lose context when navigating away. Implementing this with Riverpod state management could cause stale filter state if the provider is not properly scoped to the session lifecycle.

Mitigation & Contingency

Mitigation: Scope the filter state provider to the router's history route scope, not globally. Use autoDispose with a keepAlive flag tied to the session so filters reset on logout but persist on tab switches within the same session.
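A Riverpod sketch of that scoping (the session provider and the filter record shape are assumptions):

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

// Stand-in for the app's real auth/session provider: non-null while
// a user is logged in.
final sessionProvider = StateProvider<Object?>((ref) => null);

typedef HistoryFilter = ({int year, String periodType, String status});

final historyFilterProvider =
    StateProvider.autoDispose<HistoryFilter?>((ref) {
  // keepAlive preserves the filters across in-session navigation
  // (tab switches, detail screens) despite autoDispose.
  final link = ref.keepAlive();

  // Closing the link when the session ends lets autoDispose reclaim
  // the state, so filters reset on logout.
  ref.listen(sessionProvider, (_, session) {
    if (session == null) link.close();
  });

  return null; // null = no filters applied yet.
});
```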

Contingency: If filter state becomes stale or leaks between sessions, add an explicit reset in the logout handler that disposes all scoped providers. This is a UX degradation (coordinator must re-apply filters) rather than a data integrity issue.