Priority: high · Complexity: low · Area: backend · Status: pending · Assignee: backend specialist · Tier 1

Acceptance Criteria

- `ExportStorageBucket` class exists in `lib/data/services/export_storage_bucket.dart` and is registered as a Riverpod provider
- `uploadReport(String orgId, String reportId, Uint8List bytes, ExportFormat format)` uploads to path `{orgId}/{reportId}.{ext}`, makes up to three attempts with exponential backoff starting at 2 seconds on network failure, and returns a typed `StorageUploadResult` containing the storage path on success or an error detail on failure
- `getSignedDownloadUrl(String storagePath, {int expiresIn = 604800})` returns a signed URL string valid for the specified duration (default 604800 seconds = 7 days) or throws `StorageException` if the path does not exist
- `deleteReport(String storagePath)` removes the file from the bucket and returns void; it throws `StorageException` if deletion fails, except on 404, which is treated as success
- `listOrgReports(String orgId)` returns a list of `StorageFileInfo` objects (path, size, lastModified) for all files under the `{orgId}/` prefix
- An `ExportFormat` enum exists with values `csv` and `pdf` and a `toExtension()` method returning the file extension string
- A `StorageUploadResult` sealed class exists with `StorageUploadSuccess(storagePath)` and `StorageUploadFailure(error)` variants
- All methods throw a typed `StorageException` (never raw Supabase errors) carrying a human-readable message and the original cause
- Unit tests cover all methods, including retry logic and error paths, using a mocked Supabase Storage client
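One way the supporting types named above could be sketched. Field names and constructor shapes are illustrative; note also that the `supabase` packages export their own `StorageException`, so either reuse that type or use an import prefix to avoid a name clash:

```dart
/// Supported export formats for generated reports.
enum ExportFormat {
  csv,
  pdf;

  /// File extension used in the storage path, without the leading dot.
  String toExtension() => switch (this) {
        ExportFormat.csv => 'csv',
        ExportFormat.pdf => 'pdf',
      };
}

/// Typed upload result, so callers never handle raw Supabase errors.
sealed class StorageUploadResult {
  const StorageUploadResult();
}

class StorageUploadSuccess extends StorageUploadResult {
  const StorageUploadSuccess(this.storagePath);
  final String storagePath;
}

class StorageUploadFailure extends StorageUploadResult {
  const StorageUploadFailure(this.error);
  final String error;
}

/// Wraps any underlying storage error with a readable message and its cause.
class StorageException implements Exception {
  const StorageException(this.message, {this.cause});
  final String message;
  final Object? cause;

  @override
  String toString() => 'StorageException: $message';
}
```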

Technical Requirements

Frameworks
Flutter (Dart)
Riverpod (provider registration)
supabase_flutter package (StorageFileApi)
APIs
Supabase Storage API via supabase_flutter (upload, createSignedUrl, remove, list)
Data models
`ExportFormat` enum
`StorageUploadResult` sealed class
`StorageFileInfo` model
`StorageException`
Performance requirements
uploadReport must retry with exponential backoff: attempt 1 immediately, attempt 2 after 2 s, attempt 3 after 4 s
getSignedDownloadUrl must complete within 500 ms under normal conditions
uploadReport must handle files up to 50 MB in a single, non-chunked upload
Security requirements
The adapter must use the authenticated user's Supabase client for getSignedDownloadUrl and listOrgReports (RLS enforces org scoping)
uploadReport and deleteReport are only called server-side from the edge function; the Flutter adapter for these methods should either accept an explicit service-role client or be clearly documented as server-only
Never log file bytes or signed URLs to the console or crash reporters
The storagePath parameter must be validated against the `{orgId}/{reportId}.{ext}` pattern before any API call
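The path check can be a single regular-expression guard run before any storage call. A minimal sketch; the allowed character class is an assumption and should be confirmed against the real orgId/reportId scheme:

```dart
final RegExp _storagePathPattern = RegExp(
  // One segment for orgId, one for reportId, then a .csv or .pdf extension.
  // The [A-Za-z0-9_-] character class is an assumption about the ID scheme.
  r'^[A-Za-z0-9_-]+/[A-Za-z0-9_-]+\.(csv|pdf)$',
);

/// Throws [StorageException] if [storagePath] does not match the
/// `{orgId}/{reportId}.{ext}` convention.
void _validateStoragePath(String storagePath) {
  if (!_storagePathPattern.hasMatch(storagePath)) {
    throw StorageException(
      'Invalid storage path "$storagePath"; expected {orgId}/{reportId}.{ext}',
    );
  }
}
```

Calling this at the top of `getSignedDownloadUrl` and `deleteReport` keeps the validation in one place.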

Execution Context

Execution Tier
Tier 1

Tier 1 contains 540 tasks and can start after Tier 0 completes.

Implementation Notes

Use `supabase.storage.from('bufdir-exports')` as the base for all operations. Centralise the path convention in a private method `_buildPath(String orgId, String reportId, ExportFormat format)` returning `'$orgId/$reportId.${format.toExtension()}'`; this is the single place to change if the convention ever needs to update. For retry logic, implement a simple loop with `Future.delayed` rather than pulling in a retry package, since there is only one use case. Set the `uploadReport` content type via the `fileOptions` parameter of the Supabase upload call: `'text/csv'` for CSV, `'application/pdf'` for PDF.
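The notes above can be sketched as follows. The bucket name, path builder, and backoff schedule come from this ticket; what counts as a retryable failure (here, any thrown error) is a simplification to refine:

```dart
import 'dart:typed_data';

import 'package:supabase_flutter/supabase_flutter.dart' as supa;

class ExportStorageBucket {
  ExportStorageBucket(this._client);
  final supa.SupabaseClient _client;

  /// Single place to change if the path convention ever updates.
  String _buildPath(String orgId, String reportId, ExportFormat format) =>
      '$orgId/$reportId.${format.toExtension()}';

  Future<StorageUploadResult> uploadReport(
    String orgId,
    String reportId,
    Uint8List bytes,
    ExportFormat format,
  ) async {
    final path = _buildPath(orgId, reportId, format);
    final contentType =
        format == ExportFormat.csv ? 'text/csv' : 'application/pdf';

    for (var attempt = 1; attempt <= 3; attempt++) {
      try {
        await _client.storage.from('bufdir-exports').uploadBinary(
              path,
              bytes,
              fileOptions: supa.FileOptions(contentType: contentType),
            );
        return StorageUploadSuccess(path);
      } catch (e) {
        if (attempt == 3) return StorageUploadFailure(e.toString());
        // Backoff: 2 s before attempt 2, 4 s before attempt 3.
        await Future<void>.delayed(Duration(seconds: 2 * attempt));
      }
    }
    throw StateError('unreachable');
  }
}
```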

The `listOrgReports` method uses the Supabase Storage `list` API with a prefix filter. Note that Supabase Storage `list` returns at most 100 items by default, so pass `limit: 1000` and document the limitation. Since this adapter is described as 'used exclusively by the export edge function and the file download handler', add a class-level doc comment stating this constraint to prevent misuse.
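The listing call could then look like the sketch below. `StorageFileInfo` is the model named in the acceptance criteria; how size and timestamp are read from Supabase's `FileObject` metadata is an assumption to verify against the installed supabase_flutter version:

```dart
Future<List<StorageFileInfo>> listOrgReports(String orgId) async {
  try {
    final objects = await _client.storage.from('bufdir-exports').list(
          path: orgId,
          // Supabase returns at most 100 items by default; raise the cap.
          searchOptions: const supa.SearchOptions(limit: 1000),
        );
    return [
      for (final obj in objects)
        StorageFileInfo(
          path: '$orgId/${obj.name}',
          // Metadata keys and updatedAt format should be verified against
          // the actual API response shape for the installed package version.
          size: (obj.metadata?['size'] as num?)?.toInt() ?? 0,
          lastModified: DateTime.tryParse(obj.updatedAt ?? ''),
        ),
    ];
  } catch (e) {
    throw StorageException('Failed to list reports for org $orgId', cause: e);
  }
}
```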

Testing Requirements

Unit tests use flutter_test with a mocked Supabase StorageFileApi.

- uploadReport: verify the correct bucket name and path are used; verify a retry is attempted after the first failure; verify `StorageUploadSuccess` is returned after a successful upload; verify `StorageUploadFailure` is returned after three consecutive failures
- getSignedDownloadUrl: verify the `expiresIn` parameter is passed through correctly; verify `StorageException` is thrown on API error
- deleteReport: verify 404 is treated as success; verify `StorageException` on other errors

- listOrgReports: verify the prefix filter is set to `{orgId}/` and the response is correctly mapped to a `StorageFileInfo` list

Place tests in `test/data/services/export_storage_bucket_test.dart`. Target >90% line coverage for this class.
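A sketch of the 404-is-success case, assuming mocktail for mocking (the ticket does not name a mocking package) and a hypothetical `forTesting` constructor for injecting the mocked API:

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';
import 'package:supabase_flutter/supabase_flutter.dart' as supa;

class MockStorageFileApi extends Mock implements supa.StorageFileApi {}

void main() {
  test('deleteReport treats a 404 as success', () async {
    final api = MockStorageFileApi();
    when(() => api.remove(['org1/report1.csv'])).thenThrow(
      supa.StorageException('Object not found', statusCode: '404'),
    );

    // `forTesting` is a hypothetical constructor that injects the mocked
    // StorageFileApi; adapt to however the adapter actually takes its client.
    final bucket = ExportStorageBucket.forTesting(api);

    await expectLater(bucket.deleteReport('org1/report1.csv'), completes);
  });
}
```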

Component
Export Storage Bucket
infrastructure · low
Epic Risks (3)
Risk 1 (technical): high impact, medium probability

NHF's three-level hierarchy (national / region / chapter) with 1,400 chapters may have edge cases such as chapters belonging to multiple regions, orphaned nodes, or missing parent links in the database. Incorrect scope expansion would silently under- or over-report activities, which could invalidate a Bufdir submission.

Mitigation & Contingency

Mitigation: Obtain a full hierarchy fixture export from NHF before implementation begins. Write exhaustive unit tests covering boundary cases: single chapter, full national roll-up, chapters with no activities, and chapters assigned to multiple regions. Validate resolver output against a known-good manual count.

Contingency: If hierarchy data quality is too poor for automated resolution at launch, implement a manual scope override in the coordinator UI that allows the coordinator to explicitly select org units from a tree picker, bypassing the resolver.

Risk 2 (dependency): medium impact, high probability

The `activity_type_configuration` table may not cover all activity types currently in use, leaving a subset unmapped at launch. Bufdir submissions with unmapped categories will be incomplete and may be rejected.

Mitigation & Contingency

Mitigation: Run a query against production activity data before implementation to enumerate all distinct activity type IDs. Cross-reference with Bufdir's published category schema (request from Norse Digital Products). Flag every gap as a known issue and build the warning surface into the preview panel.

Contingency: Implement a fallback 'Other' category bucket for unmapped types and surface a prominent warning in the export preview requiring coordinator acknowledgement before proceeding. Log unmapped types for post-launch cleanup.

Risk 3 (security): high impact, low probability

Supabase RLS policies on `generated_reports` and the storage bucket must enforce strict org isolation. A misconfigured policy could allow a coordinator from one organisation to read another organisation's export files, creating a serious data breach with GDPR implications.

Mitigation & Contingency

Mitigation: Write RLS integration tests that attempt cross-org reads with explicitly different JWT tokens and assert that all attempts return empty sets or 403 errors. Include RLS policy review in the pull request checklist. Use Supabase's built-in policy tester during development.
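A cross-org RLS check along these lines could live in the integration suite. The table and bucket names come from this epic; the `signInAsCoordinator` helper and the org IDs are hypothetical test fixtures, and a correct policy may surface as either an empty result or a 403:

```dart
test('coordinator from org A cannot read org B exports', () async {
  // Hypothetical helper returning a client authenticated with a JWT
  // scoped to the given organisation's coordinator role.
  final clientA = await signInAsCoordinator(org: 'org-a');

  // Cross-org table read: RLS should filter everything out.
  final rows = await clientA
      .from('generated_reports')
      .select()
      .eq('org_id', 'org-b');
  expect(rows, isEmpty);

  // Cross-org storage listing: the policy should hide org B's prefix.
  final files =
      await clientA.storage.from('bufdir-exports').list(path: 'org-b');
  expect(files, isEmpty);
});
```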

Contingency: If a policy gap is discovered post-deployment, immediately revoke all signed URLs for affected exports, audit the access log for unauthorised reads, and issue a coordinated disclosure to affected organisations per GDPR breach notification requirements.