High priority · Low complexity · Backend · Pending · Backend specialist · Tier 1

Acceptance Criteria

ReportHistoryService exposes a method `Future<String?> resolveDownloadUrl(String fileReference)` that accepts the stored file path/reference from a bufdir_export_audit_log record
Method delegates to Supabase Storage client to generate a signed URL with a configurable expiry (default 1 hour, matching the integration security requirement)
Returns null when the storage client throws any exception (file not found, network error, RLS denial) — never rethrows to the caller
Returns null when fileReference is empty or does not match the expected path format (with a non-nullable String parameter, a null reference is excluded at the type level)
Signed URL expiry is read from configuration/constants and not hardcoded inline
Error details are logged at warning level with the file reference and error type for debugging without exposing PII
Unit tests confirm null is returned for all error scenarios: StorageException, network timeout, malformed reference
Unit tests confirm a valid signed URL string is returned when the storage client succeeds
The method is organisation-scoped — the caller must be authenticated and the RLS policy on the storage bucket prevents cross-organisation access

Technical Requirements

Frameworks
Flutter
Riverpod
BLoC
APIs
Supabase Storage SDK (createSignedUrl)
Supabase Auth (session validation)
Data models
bufdir_export_audit_log
Performance requirements
Method must complete in under 2 seconds under normal network conditions
No caching of signed URLs; always generate a fresh URL to respect expiry semantics
Single network round-trip to the storage client per call
Security requirements
Signed URLs must expire after 1 hour (configurable) per Supabase Storage security policy
Storage bucket RLS policy enforces per-organisation isolation; the method must not bypass RLS
File references must be validated against the expected path pattern before calling the storage client, to prevent path traversal
Signed URLs must never be logged, even at verbose level; only the file reference key may appear in logs
The service role key must not be used; rely on the authenticated user's session token

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Inject the Supabase Storage client via constructor or Riverpod provider — do not access `Supabase.instance` directly inside the service method to keep it testable. Use `try-catch` around the `createSignedUrl` call and catch the broadest `Object` type (or at minimum `StorageException` and `Exception`) so unexpected runtime errors also result in null return rather than an uncaught exception propagating to the UI. Validate file reference format with a simple regex (e.g. `^bufdir-reports/[a-zA-Z0-9_\-/]+\.csv$`) before the storage call to provide an early-exit path.
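The notes above can be sketched as follows. This is a sketch, not the implementation: the bucket name, the assumption that stored references carry a `bufdir-reports/` bucket prefix, and the `print`-based logging placeholder are all illustrative.

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

class ReportHistoryService {
  ReportHistoryService(
    this._storage, {
    // In practice, read from the Bufdir feature constants file.
    this.expiry = const Duration(hours: 1),
  });

  // Injected via constructor (or a Riverpod provider); never Supabase.instance.
  final SupabaseStorageClient _storage;
  final Duration expiry;

  // Assumed bucket name, matching the path prefix in the validation regex.
  static const _bucket = 'bufdir-reports';

  // Early-exit guard against malformed references and path traversal.
  static final _pathPattern = RegExp(r'^bufdir-reports/[a-zA-Z0-9_\-/]+\.csv$');

  Future<String?> resolveDownloadUrl(String fileReference) async {
    if (fileReference.isEmpty || !_pathPattern.hasMatch(fileReference)) {
      return null;
    }
    // Assumes stored references include the bucket prefix; strip it so only
    // the object path is passed to the storage API.
    final objectPath = fileReference.substring(_bucket.length + 1);
    try {
      // createSignedUrl(path, expiresIn) takes the expiry in seconds.
      return await _storage
          .from(_bucket)
          .createSignedUrl(objectPath, expiry.inSeconds);
    } catch (e) {
      // Warning-level log: reference and error type only, never the signed URL.
      // Placeholder; substitute the project's logger.
      print('WARN: resolveDownloadUrl failed for $fileReference (${e.runtimeType})');
      return null;
    }
  }
}
```

The bare `catch (e)` catches any thrown `Object` in Dart, which satisfies the "never rethrows to the caller" criterion even for non-`Exception` errors.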

Define the signed URL expiry duration as a named constant in the Bufdir feature constants file — do not inline the integer. Avoid storing or caching the resulting URL anywhere in application state; it is ephemeral.
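A minimal shape for that constant; the file location and constant name are assumptions, only the "named constant, not an inline literal" rule comes from the requirements:

```dart
// bufdir_constants.dart (hypothetical feature constants file)

// Signed download URLs expire after this duration, per the Supabase Storage
// security policy. Keep in one place so the expiry stays configurable.
const Duration bufdirSignedUrlExpiry = Duration(hours: 1);
```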

Testing Requirements

Unit tests (flutter_test) are required covering: (1) successful signed URL generation with mock Supabase Storage client returning a valid URL, (2) null return on StorageException, (3) null return on empty file reference, (4) null return on malformed file reference format, (5) null return on network timeout simulation. Integration test on a Supabase staging environment verifying a real signed URL is generated and is accessible. No e2e UI tests required for this service method — UI rendering of disabled state is tested at widget level separately.
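A minimal shape for those unit tests. Purely for self-containment, this sketch inlines a simplified service with a signing-callback seam in place of the real Supabase client; everything except the `ReportHistoryService.resolveDownloadUrl` name is an assumption.

```dart
import 'package:flutter_test/flutter_test.dart';

// Hypothetical seam so the storage call can be faked without a mocking package.
typedef SignUrl = Future<String> Function(String objectPath, int expiresInSeconds);

class ReportHistoryService {
  ReportHistoryService(this._sign);
  final SignUrl _sign;
  static final _pattern = RegExp(r'^bufdir-reports/[a-zA-Z0-9_\-/]+\.csv$');

  Future<String?> resolveDownloadUrl(String ref) async {
    if (ref.isEmpty || !_pattern.hasMatch(ref)) return null;
    try {
      return await _sign(ref, 3600);
    } catch (_) {
      return null;
    }
  }
}

void main() {
  const ref = 'bufdir-reports/2024/q1.csv';

  test('returns signed URL when storage client succeeds', () async {
    final service = ReportHistoryService((_, __) async => 'https://signed.example');
    expect(await service.resolveDownloadUrl(ref), 'https://signed.example');
  });

  test('returns null when storage client throws', () async {
    final service = ReportHistoryService((_, __) => throw Exception('RLS denied'));
    expect(await service.resolveDownloadUrl(ref), isNull);
  });

  test('returns null on empty or malformed reference', () async {
    final service = ReportHistoryService((_, __) async => 'unused');
    expect(await service.resolveDownloadUrl(''), isNull);
    expect(await service.resolveDownloadUrl('../etc/passwd'), isNull);
  });

  test('returns null on network timeout', () async {
    final service = ReportHistoryService(
      (_, __) => Future<String>.delayed(const Duration(milliseconds: 50), () => 'late')
          .timeout(Duration.zero),
    );
    expect(await service.resolveDownloadUrl(ref), isNull);
  });
}
```

In the real suite, case (2) would throw `StorageException` from the Supabase SDK rather than a generic `Exception`; the service's broad catch handles both.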

Component
Report History Service
Service · Low
Epic Risks (3)
High impact · Medium probability · Dependency risk

The ReportReexportCoordinator must invoke the Bufdir export pipeline defined in the bufdir-report-export feature. If that feature's internal API changes (renamed services, altered parameters), the re-export coordinator will break silently at runtime.

Mitigation & Contingency

Mitigation: Define a stable, versioned interface (abstract class or Dart interface) for the export pipeline entry point. The re-export coordinator depends only on this interface, not on concrete export service internals. Document the contract in both features.

Contingency: If the export pipeline breaks the re-export coordinator, fall back to surfacing a clear 'regeneration unavailable' message to the coordinator with instructions to use the primary export screen for the same period as a workaround, while the interface mismatch is fixed.

High impact · Low probability · Security risk

The audit trail must be immutable — coordinators must not be able to edit or delete past events. If the RLS policies allow UPDATE or DELETE on audit event rows, a coordinator could suppress evidence of a re-export or failed submission.

Mitigation & Contingency

Mitigation: Apply INSERT-only RLS policies to the audit events table (no UPDATE, no DELETE for any non-service-role user). Use a separate service-role key for writing audit events, never the user's JWT. Validate this in integration tests by asserting that UPDATE and DELETE calls from coordinator-role sessions are rejected with RLS errors.
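The integration-test assertion could look like the sketch below. One caveat worth encoding: with typical RLS policies, PostgREST silently filters hidden rows, so a blocked UPDATE or DELETE often affects zero rows rather than raising an error; asserting on an empty returned row set covers both behaviours. The table and column names are assumptions.

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

/// Run against staging with [client] authenticated as a coordinator-role
/// user and [eventId] pointing at an existing audit row.
Future<void> expectAuditEventImmutable(SupabaseClient client, String eventId) async {
  // UPDATE from a coordinator session must not touch any rows.
  final updated = await client
      .from('bufdir_export_audit_log') // assumed audit table name
      .update({'status': 'tampered'})  // assumed column
      .eq('id', eventId)
      .select(); // return the affected rows so we can assert on them
  expect(updated, isEmpty, reason: 'RLS must block UPDATE on audit events');

  // DELETE must likewise affect nothing.
  final deleted = await client
      .from('bufdir_export_audit_log')
      .delete()
      .eq('id', eventId)
      .select();
  expect(deleted, isEmpty, reason: 'RLS must block DELETE on audit events');
}
```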

Contingency: If immutability is compromised before detection, run a database audit comparing the audit log against the main history table timestamps to identify tampered records, restore from backup if needed, and issue a patch RLS migration immediately.

Low impact · Low probability · Technical risk

The user stories require filter state (year, period type, status) to persist within a session so coordinators do not lose context when navigating away. Implementing this with Riverpod state management could cause stale filter state if the provider is not properly scoped to the session lifecycle.

Mitigation & Contingency

Mitigation: Scope the filter state provider to the router's history route scope, not globally. Use autoDispose with a keepAlive flag tied to the session so filters reset on logout but persist on tab switches within the same session.
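A sketch of this scoping with Riverpod 2.x `autoDispose` plus `keepAlive`. The filter fields and the auth-state provider are assumptions; the placeholder `authStateProvider` stands in for whatever exposes the Supabase session.

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

// Illustrative filter model mirroring the user stories.
class HistoryFilters {
  const HistoryFilters({this.year, this.periodType, this.status});
  final int? year;
  final String? periodType;
  final String? status;
}

// Placeholder; the real app would expose the Supabase session here,
// emitting null after logout.
final authStateProvider = StateProvider<Object?>((ref) => null);

final historyFiltersProvider =
    StateProvider.autoDispose<HistoryFilters>((ref) {
  // keepAlive() preserves the state across tab switches and navigation
  // within the same session.
  final link = ref.keepAlive();
  // Closing the link on logout lets autoDispose reclaim the state, so
  // filters reset for the next session.
  ref.listen(authStateProvider, (_, next) {
    if (next == null) link.close();
  });
  return const HistoryFilters();
});
```

Calling `ref.invalidate(historyFiltersProvider)` from the logout handler is an equivalent explicit reset, matching the contingency.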

Contingency: If filter state becomes stale or leaks between sessions, add an explicit reset in the logout handler that disposes all scoped providers. This is a UX degradation (coordinator must re-apply filters) rather than a data integrity issue.