Store new file reference and return signed URL
epic-bufdir-report-history-services-task-007 — After successful re-export, have the ReportReexportCoordinator persist the newly generated file reference in the report history table via the repository, then resolve and return a signed download URL via the storage client. Ensure the history record is updated atomically with the new file path so partial failures leave the database in a consistent state.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 4 - 323 tasks
Can start after Tier 3 completes
Implementation Notes
Use Supabase's `.update({...}).eq('id', historyEntryId).select().single()` to perform the update and retrieve the updated row in a single round-trip. Wrap the repository call in a try-catch and rethrow failures as `FileReferenceUpdateException`. After a successful update, call `resolveDownloadUrl` (already implemented in task-003 on the ReportHistoryService; inject that service or reuse the same storage client method) and wrap the result in the `ReexportResult` value object. Signed URL resolution failures must be swallowed (return null): the source of truth is the file_reference in the database, and the UI can always generate a fresh signed URL on demand.
Define `ReexportResult` as an immutable value object with `final String fileReference` and `final String? signedUrl`. Do not add retry logic — log the URL failure at warning level and return.
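A minimal Dart sketch of the flow described above. The `ReportHistoryService` interface, the `report_history` table name, and the column names beyond `file_reference`/`updated_at` are assumptions for illustration; only the Supabase call chain and the exception/result types come from the task notes.

```dart
import 'dart:developer' as dev;
import 'package:supabase_flutter/supabase_flutter.dart';

/// Immutable result of the persist-and-resolve step.
class ReexportResult {
  final String fileReference;
  final String? signedUrl;
  const ReexportResult({required this.fileReference, this.signedUrl});
}

/// Thrown when the history row update fails.
class FileReferenceUpdateException implements Exception {
  final Object cause;
  const FileReferenceUpdateException(this.cause);
}

/// Assumed shape of the task-003 service that resolves signed URLs.
abstract class ReportHistoryService {
  Future<String> resolveDownloadUrl(String fileReference);
}

class ReportReexportCoordinator {
  final SupabaseClient _db;
  final ReportHistoryService _history;
  ReportReexportCoordinator(this._db, this._history);

  Future<ReexportResult> persistAndResolve(
      String historyEntryId, String newFileReference) async {
    try {
      // Update and read back the updated row in one round-trip.
      await _db
          .from('report_history') // assumed table name
          .update({
            'file_reference': newFileReference,
            'updated_at': DateTime.now().toUtc().toIso8601String(),
          })
          .eq('id', historyEntryId)
          .select()
          .single();
    } on PostgrestException catch (e) {
      throw FileReferenceUpdateException(e);
    }

    String? signedUrl;
    try {
      signedUrl = await _history.resolveDownloadUrl(newFileReference);
    } catch (e) {
      // Non-fatal: the DB file_reference is the source of truth, and the
      // UI can request a fresh signed URL on demand. Log at warning level.
      dev.log('Signed URL resolution failed: $e', level: 900);
      signedUrl = null;
    }
    return ReexportResult(
        fileReference: newFileReference, signedUrl: signedUrl);
  }
}
```

Note the ordering: the exception path fires before any storage call, which is exactly what test case (3) below asserts.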
Testing Requirements
Unit tests (flutter_test) with a mock repository and mock storage client:
(1) successful update + successful URL resolution — assert ReexportResult contains both values.
(2) successful update + storage client throws — assert ReexportResult.signedUrl is null, ReexportResult.fileReference is set, and no exception is thrown.
(3) repository update throws PostgrestException — assert FileReferenceUpdateException is thrown and the storage client is never called.
(4) assert the `updated_at` field is included in the update payload.
Integration test on staging: perform a re-export cycle end-to-end, assert the database record has the new file_reference value after completion, and assert the returned signed URL is valid and accessible.
The ReportReexportCoordinator must invoke the Bufdir export pipeline defined in the bufdir-report-export feature. If that feature's internal API changes (renamed services, altered parameters), the re-export coordinator will break silently at runtime.
Mitigation & Contingency
Mitigation: Define a stable, versioned interface (abstract class or Dart interface) for the export pipeline entry point. The re-export coordinator depends only on this interface, not on concrete export service internals. Document the contract in both features.
Contingency: If a pipeline change breaks the re-export coordinator, fall back to surfacing a clear 'regeneration unavailable' message to the coordinator, directing them to use the primary export screen for the same period as a workaround while the interface mismatch is fixed.
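The mitigation's stable entry point could look like the following; every name here (`BufdirExportPipelineV1`, `ExportedFile`, `ReportPeriod`, the parameter list) is illustrative and would need to match the actual bufdir-report-export contract.

```dart
/// Versioned entry point for the Bufdir export pipeline. Both the export
/// feature (implementor) and the re-export coordinator (consumer) depend
/// on this interface only, never on concrete export service classes.
abstract class BufdirExportPipelineV1 {
  /// Runs a full export for [period] and returns the generated file's
  /// storage reference.
  Future<ExportedFile> export({
    required String organizationId,
    required ReportPeriod period,
  });
}
```

Injecting this interface via the coordinator's constructor means a renamed internal service or altered parameter list in the export feature becomes a compile-time error at the interface boundary rather than a silent runtime break.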
The audit trail must be immutable — coordinators must not be able to edit or delete past events. If the RLS policies allow UPDATE or DELETE on audit event rows, a coordinator could suppress evidence of a re-export or failed submission.
Mitigation & Contingency
Mitigation: Apply INSERT-only RLS policies to the audit events table (no UPDATE, no DELETE for any non-service-role user). Use a separate service-role key for writing audit events, never the user's JWT. Validate this in integration tests by asserting that UPDATE and DELETE calls from coordinator-role sessions are rejected with RLS errors.
Contingency: If immutability is compromised before detection, run a database audit comparing the audit log against the main history table timestamps to identify tampered records, restore from backup if needed, and issue a patch RLS migration immediately.
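The INSERT-only policy from the mitigation can be sketched as a Postgres migration fragment; the table name `audit_events` and role names are assumptions, and the exact `with check` condition should be tightened to the project's tenancy model.

```sql
-- Sketch: enable RLS and grant INSERT only. Because no UPDATE or DELETE
-- policies exist, RLS denies those commands for all non-service-role users.
alter table audit_events enable row level security;

create policy audit_events_insert_only
  on audit_events
  for insert
  to authenticated
  with check (true);  -- tighten to tenant/user checks as appropriate
```

The service-role key bypasses RLS by design, which is why audit writes must go through it and never through a user's JWT; the integration tests described above should confirm that coordinator-role sessions receive RLS errors on UPDATE and DELETE.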
The user stories require filter state (year, period type, status) to persist within a session so coordinators do not lose context when navigating away. Implementing this with Riverpod state management could cause stale filter state if the provider is not properly scoped to the session lifecycle.
Mitigation & Contingency
Mitigation: Scope the filter state provider to the router's history route scope, not globally. Use autoDispose with a keepAlive flag tied to the session so filters reset on logout but persist on tab switches within the same session.
Contingency: If filter state becomes stale or leaks between sessions, add an explicit reset in the logout handler that disposes all scoped providers. This is a UX degradation (coordinator must re-apply filters) rather than a data integrity issue.
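The session-scoped keepAlive pattern from the mitigation can be sketched with Riverpod 2.x; `sessionProvider` and the `HistoryFilter` fields are assumptions standing in for the app's real auth state and filter model.

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

/// Immutable filter state (year, period type, status).
class HistoryFilter {
  final int? year;
  final String? periodType;
  final String? status;
  const HistoryFilter({this.year, this.periodType, this.status});
}

final historyFilterProvider = NotifierProvider.autoDispose<
    HistoryFilterNotifier, HistoryFilter>(HistoryFilterNotifier.new);

class HistoryFilterNotifier extends AutoDisposeNotifier<HistoryFilter> {
  @override
  HistoryFilter build() {
    // keepAlive preserves filters across tab switches within the session.
    final link = ref.keepAlive();
    // When the session ends (logout), release the link so the provider is
    // disposed and filters reset to defaults on the next login.
    ref.listen(sessionProvider, (_, session) {
      if (session == null) link.close();
    });
    return const HistoryFilter();
  }

  void setYear(int? year) => state = HistoryFilter(
      year: year, periodType: state.periodType, status: state.status);
}
```

Closing the `KeepAliveLink` on logout gives the contingency's explicit reset for free: once no listeners remain, autoDispose tears the state down, so nothing can leak into the next session.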