Priority: high · Complexity: low · Area: backend · Status: pending · Assignee: backend specialist · Tier: 5

Acceptance Criteria

Every invocation of the re-export flow results in exactly one audit event written to the bufdir_export_audit_log table (or a dedicated reexport audit sub-table), regardless of success or failure
The audit event captures: acting_user_id (UUID from JWT), organization_id (UUID from JWT claim), history_entry_id (UUID), reexport_timestamp (UTC datetime), outcome ('success' | 'failure'), and failure_reason (nullable string for failure cases)
Audit logging is placed in a `finally` block (or equivalent pattern) so it executes even when the re-export pipeline throws an exception
When the audit log INSERT itself fails, the failure is logged to the application logger at error level, but the original re-export exception (if any) is not suppressed — the audit failure is secondary
Audit records are immutable after insertion — no UPDATE or DELETE operations are permitted on the audit table from application code (enforce via RLS)
The failure_reason field captures the exception type and a sanitised message — no stack traces, no PII, no raw database error messages
Unit tests verify: audit INSERT is called on success, audit INSERT is called on pipeline failure, original exception is still thrown after audit INSERT on failure, audit INSERT failure does not mask the original exception
Audit records created by the test suite are distinguishable from production records (use a test organisation ID or a flag)
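A minimal Dart sketch of the audit event as an immutable value object, matching the fields listed in the criteria above (the class and field names are illustrative, not a confirmed API):

```dart
/// Immutable value object for one re-export audit event.
/// Field names mirror the acceptance criteria; the class name is illustrative.
class ReexportAuditEvent {
  final String actingUserId;    // UUID from the server-side JWT
  final String organizationId;  // UUID from the JWT claim
  final String historyEntryId;  // UUID of the re-exported history entry
  final String outcome;         // 'success' | 'failure'
  final String? failureReason;  // sanitised message, null on success
  // reexport_timestamp is deliberately absent: the server generates it.

  const ReexportAuditEvent({
    required this.actingUserId,
    required this.organizationId,
    required this.historyEntryId,
    required this.outcome,
    this.failureReason,
  });
}
```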

Technical Requirements

Frameworks
Flutter
Riverpod
BLoC
APIs
Supabase PostgreSQL 15 (INSERT into audit table)
Data models
bufdir_export_audit_log
Performance requirements
Audit INSERT must be awaited to completion before the coordinator method returns; no async fire-and-forget that could be lost
Audit INSERT is a single-row insert and must complete in under 500 ms
Audit logging must not add perceptible latency to the re-export result returned to the UI; the 500 ms budget keeps the awaited write cheap
Security requirements
Audit table RLS must permit INSERT for authenticated users but deny UPDATE and DELETE for all roles including service role (append-only)
acting_user_id and organization_id must be extracted from the server-side JWT claim, not from client-supplied parameters — prevents spoofing
failure_reason must be sanitised: strip stack traces, file paths, and any string that matches UUID or email patterns before storage to prevent accidental PII leakage in audit logs
Audit records must never be deleted or modified — GDPR compliance requires retaining audit trails for government reporting actions
Timestamp must be server-generated (use `now()` in the INSERT or Supabase's server_default) — not client-supplied
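One way to express the append-only rules above in Postgres DDL; the policy and function names are assumptions, as is the `organization_id` JWT claim. Note that Supabase's service role bypasses RLS entirely, so a trigger is needed to make the table immutable even for it:

```sql
alter table bufdir_export_audit_log enable row level security;

-- Server-generated timestamp via the column default.
alter table bufdir_export_audit_log
  alter column reexport_timestamp set default now();

-- Authenticated users may insert rows only for their own organisation.
create policy reexport_audit_insert on bufdir_export_audit_log
  for insert to authenticated
  with check (organization_id = (auth.jwt() ->> 'organization_id')::uuid);

-- No UPDATE/DELETE policies exist, so RLS denies both by default.
-- The service role bypasses RLS, so enforce append-only with a trigger too.
create or replace function forbid_audit_mutation() returns trigger
language plpgsql as $$
begin
  raise exception 'bufdir_export_audit_log is append-only';
end;
$$;

create trigger reexport_audit_append_only
  before update or delete on bufdir_export_audit_log
  for each row execute function forbid_audit_mutation();
```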

Execution Context

Execution Tier
Tier 5

Tier 5 - 253 tasks

Can start after Tier 4 completes

Implementation Notes

Wrap the entire re-export pipeline invocation in a `try-catch-finally` block. In the `try` block, run the pipeline. In the `catch` block, capture the exception type and a sanitised message. In the `finally` block, build the audit event and call `auditRepository.insertReexportAuditEvent(event)` — wrap this call in its own `try-catch` that only logs the failure and never rethrows.
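A sketch of that shape in Dart. The repository, session, logger, and result types follow the notes above but are assumptions, not a confirmed API:

```dart
// Illustrative coordinator method: the pipeline runs in try, the audit
// event is built and written in finally, and an audit failure never
// masks the pipeline's own exception.
Future<ReexportResult> reexport(String historyEntryId) async {
  Object? pipelineError;
  try {
    return await _exportPipeline.run(historyEntryId);
  } catch (e) {
    pipelineError = e;
    rethrow; // finally still runs (and is awaited) before this propagates
  } finally {
    final event = ReexportAuditEvent(
      actingUserId: _session.userId,            // from server-side JWT
      organizationId: _session.organizationId,  // from server-side JWT
      historyEntryId: historyEntryId,
      outcome: pipelineError == null ? 'success' : 'failure',
      failureReason:
          pipelineError == null ? null : sanitise(pipelineError.toString()),
    );
    try {
      await _auditRepository.insertReexportAuditEvent(event);
    } catch (auditError) {
      // Audit failure is secondary: log it, never rethrow.
      _logger.severe('re-export audit INSERT failed', auditError);
    }
  }
}
```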

A sanitisation helper should strip UUIDs (regex), email addresses (regex), and truncate the message to 500 characters. Extract `userId` and `organizationId` from the injected `AuthSession` — never accept them as method parameters. Define the audit event as an immutable `ReexportAuditEvent` value object. Consider a dedicated `ReexportAuditRepository` rather than reusing the general report history repository, to keep the audit write path isolated.
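A possible shape for that helper in Dart; the regex patterns and placeholder text are illustrative:

```dart
// Replaces UUIDs and email addresses, drops stack-trace lines, and
// truncates to 500 characters before the message is stored.
final _uuidPattern = RegExp(
    r'[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}'
    r'-[0-9a-fA-F]{4}-[0-9a-fA-F]{12}');
final _emailPattern = RegExp(r'[\w.+-]+@[\w-]+\.[\w.-]+');

String sanitiseFailureReason(String raw) {
  var s = raw
      .replaceAll(_uuidPattern, '<uuid>')
      .replaceAll(_emailPattern, '<email>');
  // Keep only the first line; stack traces live below the first newline.
  final newline = s.indexOf('\n');
  if (newline != -1) s = s.substring(0, newline);
  return s.length <= 500 ? s : s.substring(0, 500);
}
```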

Do not use `unawaited()` for the audit INSERT — await it to ensure it completes before the coordinator returns.

Testing Requirements

Unit tests (flutter_test) with a mock audit repository:
(1) on successful re-export: assert the audit INSERT is called once with outcome='success' and a null failure_reason
(2) on pipeline exception: assert the audit INSERT is called once with outcome='failure' and a sanitised failure_reason, and that the original exception is rethrown after the INSERT
(3) on audit INSERT exception: assert the original re-export result or exception is unaffected and the application logger receives an error-level message
(4) assert acting_user_id and organization_id come from the injected auth session, not from method parameters
Integration test on staging: trigger a successful re-export and a failed re-export (by passing an invalid history entry ID), then query the audit table and assert both records exist with the correct outcome values.
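Test (1) could look like the following with mocktail. `ReportReexportCoordinator`, `ReexportAuditRepository`, and `ReexportAuditEvent` follow the implementation notes; `buildCoordinator` is a hypothetical test helper that wires the mock into the coordinator's dependencies:

```dart
class MockAuditRepo extends Mock implements ReexportAuditRepository {}

class FakeEvent extends Fake implements ReexportAuditEvent {}

void main() {
  setUpAll(() => registerFallbackValue(FakeEvent()));

  test('successful re-export writes exactly one success audit event',
      () async {
    final audit = MockAuditRepo();
    when(() => audit.insertReexportAuditEvent(any()))
        .thenAnswer((_) async {});
    final coordinator = buildCoordinator(auditRepository: audit);

    await coordinator.reexport('valid-history-entry-id');

    final event = verify(() => audit.insertReexportAuditEvent(captureAny()))
        .captured
        .single as ReexportAuditEvent;
    expect(event.outcome, 'success');
    expect(event.failureReason, isNull);
  });
}
```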

Component
Report Re-export Coordinator
Type: service · Complexity: medium
Epic Risks (3)
Dependency risk: high impact, medium probability

The ReportReexportCoordinator must invoke the Bufdir export pipeline defined in the bufdir-report-export feature. If that feature's internal API changes (renamed services, altered parameters), the re-export coordinator will break silently at runtime.

Mitigation & Contingency

Mitigation: Define a stable, versioned interface (abstract class or Dart interface) for the export pipeline entry point. The re-export coordinator depends only on this interface, not on concrete export service internals. Document the contract in both features.

Contingency: If a pipeline change breaks the re-export coordinator, surface a clear 'regeneration unavailable' message to the user, with instructions to use the primary export screen for the same period as a workaround while the interface mismatch is fixed.

Security risk: high impact, low probability

The audit trail must be immutable — coordinators must not be able to edit or delete past events. If the RLS policies allow UPDATE or DELETE on audit event rows, a coordinator could suppress evidence of a re-export or failed submission.

Mitigation & Contingency

Mitigation: Apply INSERT-only RLS policies to the audit events table (no UPDATE, no DELETE for any role, including the service role, per the acceptance criteria). Write audit events through the authenticated session so that acting_user_id and organization_id come from the server-side JWT, never from client-supplied values. Validate this in integration tests by asserting that UPDATE and DELETE calls from coordinator-role sessions are rejected with RLS errors.

Contingency: If immutability is compromised before detection, run a database audit comparing the audit log against the main history table timestamps to identify tampered records, restore from backup if needed, and issue a patch RLS migration immediately.

Technical risk: low impact, low probability

The user stories require filter state (year, period type, status) to persist within a session so coordinators do not lose context when navigating away. Implementing this with Riverpod state management could cause stale filter state if the provider is not properly scoped to the session lifecycle.

Mitigation & Contingency

Mitigation: Scope the filter state provider to the router's history route scope, not globally. Use autoDispose with a keepAlive flag tied to the session so filters reset on logout but persist on tab switches within the same session.

Contingency: If filter state becomes stale or leaks between sessions, add an explicit reset in the logout handler that disposes all scoped providers. This is a UX degradation (coordinator must re-apply filters) rather than a data integrity issue.