Priority: critical | Complexity: medium | Area: backend | Status: pending | Owner: backend specialist | Execution Tier: Tier 2

Acceptance Criteria

BufdirExportFileStorage is a Dart class accepting a SupabaseClient and a StorageConfig (bucketName, defaultTtl, maxRetries) as constructor dependencies
uploadExportFile constructs the storage path as '{orgId}/{exportId}.{format.extension}' — no other path format is accepted
uploadExportFile throws PathValidationException if orgId or exportId contains any character outside [A-Za-z0-9_-] (a UUID-safe set; hyphen placed last so it is a literal, not a range)
uploadExportFile retries up to maxRetries times (default 3) on StorageException with exponential backoff (1s, 2s, 4s)
uploadExportFile throws ExportUploadException after exhausting all retries, including the last StorageException as cause
uploadExportFile returns the constructed file path (String) on success
generateSignedUrl returns a valid signed URL string for the given filePath with expiry equal to ttl.inSeconds
generateSignedUrl throws ArgumentError if ttl is zero or negative
generateSignedUrl throws SignedUrlException wrapping StorageException on Supabase failure
deleteExpiredFiles lists all files under the '{orgId}/' prefix and deletes those whose created_at metadata (from the Supabase list response) is older than olderThan; only if metadata is unavailable may a timestamp encoded in the file name be used instead
deleteExpiredFiles returns the count of deleted files as int
deleteExpiredFiles never deletes files from a different org prefix — path is always scoped to the provided orgId
ExportFormat is an enum with values csv, xlsx, json each having an extension getter
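The path and format criteria above can be sketched in Dart as follows (the PathValidationException shape and the buildExportPath helper name are assumptions; only the behaviour is taken from the criteria):

```dart
enum ExportFormat { csv, xlsx, json }

extension ExportFormatExt on ExportFormat {
  // File extension for each format, per the acceptance criteria.
  String get extension => const {
        ExportFormat.csv: 'csv',
        ExportFormat.xlsx: 'xlsx',
        ExportFormat.json: 'json',
      }[this]!;
}

class PathValidationException implements Exception {
  PathValidationException(this.message);
  final String message;
}

// Path segments must consist only of UUID-safe characters.
final RegExp _safeSegment = RegExp(r'^[A-Za-z0-9_-]+$');

// Builds '{orgId}/{exportId}.{extension}', rejecting unsafe segments.
String buildExportPath(String orgId, String exportId, ExportFormat format) {
  for (final segment in [orgId, exportId]) {
    if (!_safeSegment.hasMatch(segment)) {
      throw PathValidationException('Invalid path segment: $segment');
    }
  }
  return '$orgId/$exportId.${format.extension}';
}
```

For example, buildExportPath('org-1', 'exp-42', ExportFormat.csv) yields 'org-1/exp-42.csv', while an orgId containing '/' throws before any path string is formed.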

Technical Requirements

frameworks
Dart (latest)
supabase_flutter (SupabaseClient, SupabaseStorageClient)
apis
Supabase Storage: storage.from('bufdir-exports').uploadBinary(path, bytes, fileOptions)
Supabase Storage: storage.from('bufdir-exports').createSignedUrl(path, expiresIn)
Supabase Storage: storage.from('bufdir-exports').list(path: prefix)
Supabase Storage: storage.from('bufdir-exports').remove([paths])
data models
ExportFormat enum (csv, xlsx, json)
StorageConfig (bucketName, defaultTtl, maxRetries)
FileObject (Supabase Storage list response model)
performance requirements
Upload of files up to 50MB must not timeout — set Supabase HTTP client timeout to 120 seconds for upload operations
Exponential backoff must not block the Dart event loop — use Future.delayed for retry delays
deleteExpiredFiles must batch delete calls (max 100 paths per Supabase remove call) to avoid request size limits
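The batching constraint above could be met with a simple chunking helper (a sketch; the helper name is an assumption):

```dart
// Splits a list of storage paths into batches of at most [size] entries,
// so each Supabase remove() call stays under the request-size limit.
Iterable<List<String>> chunkPaths(List<String> paths, {int size = 100}) sync* {
  for (var i = 0; i < paths.length; i += size) {
    yield paths.sublist(i, i + size > paths.length ? paths.length : i + size);
  }
}
```

deleteExpiredFiles would then await one remove(batch) call per yielded batch and sum the batch sizes for the returned count.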
security requirements
Path construction must sanitise orgId and exportId via the regex [A-Za-z0-9_-]+ (anchored to the full segment) before constructing the path string — throw PathValidationException on invalid input
The adapter must never accept a pre-constructed path string for upload — always derive path from (orgId, exportId, format) to prevent path traversal
Signed URLs must not be stored in app state or logs — return immediately to caller for one-time use
deleteExpiredFiles must always prepend orgId to the list prefix — no wildcard listing across all orgs
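One way to make the path-traversal guard structural, as required above, is to expose only structured parameters in the upload signature (a sketch; it assumes the StorageConfig, ExportFormat, and path-building helper described elsewhere in this spec):

```dart
import 'dart:typed_data';

import 'package:supabase_flutter/supabase_flutter.dart';

class BufdirExportFileStorage {
  BufdirExportFileStorage(this._client, this._config);

  final SupabaseClient _client;
  final StorageConfig _config;

  // No overload accepts a raw path: the storage path is always derived
  // from (orgId, exportId, format), so '../' segments cannot be injected
  // by any caller.
  Future<String> uploadExportFile({
    required String orgId,
    required String exportId,
    required ExportFormat format,
    required Uint8List bytes,
  }) async {
    final path = buildExportPath(orgId, exportId, format); // validates segments
    await _client.storage.from(_config.bucketName).uploadBinary(path, bytes);
    return path;
  }
}
```

Because the method signature carries no path parameter, the cross-org guard in the testing requirements ("structurally impossible") falls out of the type system rather than runtime checks.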

Execution Context

Execution Tier
Tier 2

Tier 2 - 518 tasks

Can start after Tier 1 completes

Implementation Notes

Place the adapter at lib/features/bufdir_reporting/data/adapters/bufdir_export_file_storage.dart. Implement the retry logic as a private _withRetry(Future Function() operation, {int maxAttempts}) helper: a for-loop that awaits Future.delayed(Duration(seconds: pow(2, attempt).toInt())) between attempts. For ExportFormat, define the extension getter via a const map: extension ExportFormatExt on ExportFormat { String get extension => const { ExportFormat.csv: 'csv', ExportFormat.xlsx: 'xlsx', ExportFormat.json: 'json' }[this]!; }. For deleteExpiredFiles, use the created_at field that Supabase Storage's FileObject exposes in its metadata for the age comparison.

Batch the remove calls: chunk the list into groups of 100 using a simple for-loop with sublist. Register as a Riverpod Provider reading supabaseClientProvider and storageConfigProvider.
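The retry helper described above might take the following shape (a sketch; the ExportUploadException constructor is an assumption, its behaviour taken from the acceptance criteria):

```dart
import 'dart:math';

import 'package:supabase_flutter/supabase_flutter.dart';

class ExportUploadException implements Exception {
  ExportUploadException(this.message, {this.cause});
  final String message;
  final StorageException? cause; // last failure, per the acceptance criteria
}

// Retries [operation] on StorageException with exponential backoff
// (1s, 2s, 4s for the default three attempts). Future.delayed yields to
// the event loop between attempts, so nothing blocks.
Future<T> _withRetry<T>(
  Future<T> Function() operation, {
  int maxAttempts = 3,
}) async {
  StorageException? last;
  for (var attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await operation();
    } on StorageException catch (e) {
      last = e;
      if (attempt < maxAttempts - 1) {
        await Future.delayed(Duration(seconds: pow(2, attempt).toInt()));
      }
    }
  }
  throw ExportUploadException('Failed after $maxAttempts attempts', cause: last);
}
```

Note that pow(2, attempt) with attempt starting at 0 produces exactly the 1s, 2s, 4s schedule the acceptance criteria require.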

Testing Requirements

Unit tests with mocktail mocking SupabaseStorageClient: (1) uploadExportFile constructs correct path '{orgId}/{exportId}.csv' for ExportFormat.csv; (2) uploadExportFile retries exactly maxRetries times on StorageException before throwing ExportUploadException; (3) uploadExportFile succeeds on second attempt after one StorageException (retry logic working); (4) uploadExportFile throws PathValidationException for orgId containing '/'; (5) generateSignedUrl passes correct expiresIn to Supabase; (6) generateSignedUrl throws ArgumentError for Duration.zero; (7) deleteExpiredFiles calls list with correct prefix and remove with correct path list; (8) deleteExpiredFiles returns correct count of deleted files. Integration tests against Supabase test bucket: verify real upload, signed URL access, and deletion. Test the cross-org path guard: constructing a path with another org's ID must be structurally impossible given the method signature.
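Tests (1) and (4) above can be expressed against the path construction alone, with no Supabase mock (a sketch assuming a buildExportPath helper and the PathValidationException named in the acceptance criteria):

```dart
import 'package:test/test.dart';

void main() {
  test('constructs {orgId}/{exportId}.csv for ExportFormat.csv', () {
    expect(
      buildExportPath('org-1', 'exp-42', ExportFormat.csv),
      'org-1/exp-42.csv',
    );
  });

  test('throws PathValidationException for orgId containing "/"', () {
    expect(
      () => buildExportPath('org/evil', 'exp-42', ExportFormat.csv),
      throwsA(isA<PathValidationException>()),
    );
  });
}
```

The retry tests (2) and (3) would then mock StorageFileApi with mocktail and count invocations of uploadBinary.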

Component
Export File Storage Adapter
Category: infrastructure | Risk: low
Epic Risks (2)
Impact: high | Probability: medium | Category: security

RLS policies for the audit log and schema config tables must correctly handle multi-chapter membership hierarchies (up to 1,400 local chapters for NHF). Incorrect policies could either over-expose data across organisations or prevent legitimate coordinator access, both of which are serious compliance failures.

Mitigation & Contingency

Mitigation: Design RLS policies using the existing org hierarchy resolver pattern. Write integration tests that verify cross-organisation isolation with representative fixture data covering NHF's multi-level hierarchy before merging.

Contingency: If RLS policies prove too complex to express safely in Postgres, implement a Supabase Edge Function as a data access proxy that enforces isolation in application code, with RLS serving as a secondary defence layer.

Impact: medium | Probability: medium | Category: scope

Bufdir's column schema is expected to evolve as Norse Digital Products negotiates a simplified digital reporting format. If the schema config versioning model is too rigid, applying Bufdir schema updates without a code deployment could fail, forcing emergency releases.

Mitigation & Contingency

Mitigation: Design the schema config table to store the full JSON column mapping as a JSONB field with a version number. Provide an admin API to upsert new versions without any schema migration required.

Contingency: If the versioning model is insufficient for a Bufdir schema change, fall back to a code deployment with the updated default schema, using the database config only for org-specific overrides.