Priority: critical · Complexity: low · Area: infrastructure · Status: pending · Assignee: infrastructure specialist · Tier 1

Acceptance Criteria

uploadFile(orgId, activityId, fileName, bytes, mimeType) constructs the storage path as '{orgId}/{activityId}/{fileName}' and uploads to the 'activity-attachments' bucket
uploadFile returns the full storage path string (e.g., 'org-uuid/activity-uuid/report.pdf') on success
uploadFile throws/returns a typed StorageUploadError (not a raw Supabase exception) on failure, wrapping the underlying cause
deleteFile(storagePath) removes the object from the 'activity-attachments' bucket using the exact path provided
deleteFile throws/returns a typed StorageDeleteError on failure
File paths are strictly in the format '{orgId}/{activityId}/{fileName}' — no additional nesting or prefix variation
The adapter is registered behind an abstract class, allowing dependency injection and mocking in tests
Unit tests verify that the correct bucket name and path are passed to the Supabase storage client for both upload and delete
Supabase storage errors (network, permission, size limit) are caught and re-thrown as typed domain exceptions
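
The surface implied by these criteria is small. A minimal Dart sketch of the abstraction and the typed errors follows; only the names come from the criteria above — the exact constructor shapes and the `cause` field are assumptions:

```dart
import 'dart:typed_data';

// Typed domain exception for upload failures; wraps the underlying SDK
// error so higher layers never depend on Supabase types directly.
class StorageUploadError implements Exception {
  StorageUploadError(this.message, [this.cause]);
  final String message;
  final Object? cause; // the original SDK exception, if any
  @override
  String toString() => 'StorageUploadError: $message';
}

// Typed domain exception for delete failures.
class StorageDeleteError implements Exception {
  StorageDeleteError(this.message, [this.cause]);
  final String message;
  final Object? cause;
  @override
  String toString() => 'StorageDeleteError: $message';
}

/// Abstraction the rest of the app depends on; the Supabase-backed
/// implementation is registered behind this interface.
abstract class StorageAdapter {
  /// Uploads [bytes] to '{orgId}/{activityId}/{fileName}' in the
  /// 'activity-attachments' bucket and returns the full storage path.
  Future<String> uploadFile(
    String orgId,
    String activityId,
    String fileName,
    Uint8List bytes,
    String mimeType,
  );

  /// Removes the object at [storagePath] from the bucket.
  Future<void> deleteFile(String storagePath);
}
```

Keeping the `cause` as `Object?` rather than the SDK's `StorageException` type preserves the decoupling the criteria ask for.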

Technical Requirements

Frameworks
Flutter
Supabase Dart SDK (supabase_flutter)
APIs
Supabase Storage upload API (storage.from(bucket).uploadBinary)
Supabase Storage remove API (storage.from(bucket).remove)
Data Models
StoragePath (string alias: '{orgId}/{activityId}/{fileName}')
ActivityAttachment (file_size_bytes, mime_type fields)
Performance Requirements
Upload must pass bytes directly from the in-memory Uint8List to uploadBinary without creating a temporary File on disk
For files under 6 MB (activity doc attachments), single-part upload is sufficient — no multipart needed
Security Requirements
Path must be constructed server-side-verifiable: '{orgId}/{activityId}/{fileName}' — the orgId segment allows RLS policies to restrict access to the owning org
fileName must be sanitised before use in the path: strip path separators, limit to alphanumeric + dash/underscore/dot, max 255 chars
MIME type must be explicitly set in the upload options (contentType) to prevent browser MIME sniffing
The adapter must NOT expose bucket credentials or signed URLs in error messages

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Create an abstract class `StorageAdapter` (or `AttachmentStorageAdapter`) with the two method signatures, then `SupabaseStorageAdapter implements StorageAdapter`. Inject `SupabaseClient` via constructor to enable mocking. Use `supabaseClient.storage.from('activity-attachments').uploadBinary(path, bytes, fileOptions: FileOptions(contentType: mimeType))` for upload. For the path, construct as: `final path = '$orgId/$activityId/${_sanitizeFileName(fileName)}';`.

Implement `_sanitizeFileName` as a private static helper: replace spaces with underscores, remove characters outside `[a-zA-Z0-9._-]`, and truncate to 255 chars to match the security requirement above. Catch `StorageException` from the Supabase SDK and wrap it in the typed domain exceptions (`StorageUploadError`, `StorageDeleteError`) so higher layers remain decoupled from the SDK. Register the adapter with Riverpod as a `Provider<StorageAdapter>` that returns the concrete implementation.
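
A self-contained sketch of the sanitiser described above (shown as a top-level function for testability; the exact regexes and replacement order are assumptions consistent with the security requirements):

```dart
/// Sanitises a user-supplied file name for use as the final path segment:
/// strips path separators (blocking traversal), replaces spaces with
/// underscores, drops anything outside [a-zA-Z0-9._-], and truncates to
/// the 255-char cap from the security requirements.
String sanitizeFileName(String fileName) {
  var name = fileName
      .replaceAll(RegExp(r'[/\\]'), '') // no path separators
      .replaceAll(' ', '_') // spaces -> underscores
      .replaceAll(RegExp(r'[^a-zA-Z0-9._-]'), ''); // whitelist the rest
  if (name.length > 255) name = name.substring(0, 255);
  return name;
}
```

For example, `sanitizeFileName('my report (final).pdf')` yields `'my_report_final.pdf'`, and a traversal attempt like `'../../etc/passwd'` collapses into a single harmless segment with no separators.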

Testing Requirements

Unit tests using mockito/mocktail to mock the SDK's `StorageFileApi` (the object returned by `storage.from(bucket)`). Test cases:

uploadFile happy path — verify bucket name 'activity-attachments', correct path construction, and the returned path string
uploadFile with special characters in fileName — verify sanitisation
uploadFile storage error — Supabase throws StorageException; the adapter wraps it in StorageUploadError
deleteFile happy path — verify remove is called with the exact storagePath
deleteFile not found — Supabase returns an error; the adapter wraps it in StorageDeleteError

No real network calls. Target 100% branch coverage on error-handling paths.
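
The path-verification cases can also be sketched without mockito/mocktail by hand-rolling a recording fake for the bucket API. `RecordingBucket` and `buildPath` below are illustrative stand-ins, not SDK types:

```dart
import 'dart:typed_data';

/// Minimal stand-in for the per-bucket API the adapter calls; records
/// each call so a test can assert on the path and arguments.
class RecordingBucket {
  final calls = <Map<String, Object>>[];

  Future<void> uploadBinary(String path, Uint8List bytes) async {
    calls.add({'op': 'upload', 'path': path, 'size': bytes.length});
  }

  Future<void> remove(List<String> paths) async {
    calls.add({'op': 'remove', 'paths': paths});
  }
}

/// Path construction under test (mirrors the implementation notes).
String buildPath(String orgId, String activityId, String fileName) =>
    '$orgId/$activityId/$fileName';

Future<void> main() async {
  final bucket = RecordingBucket();

  // Upload happy path: the exact '{orgId}/{activityId}/{fileName}'
  // string reaches the bucket API.
  final path = buildPath('org-1', 'act-2', 'report.pdf');
  await bucket.uploadBinary(path, Uint8List.fromList([1, 2, 3]));
  assert(bucket.calls.single['path'] == 'org-1/act-2/report.pdf');

  // Delete happy path: remove is called with the exact storage path.
  await bucket.remove([path]);
  assert((bucket.calls.last['paths'] as List).single == path);
}
```

A hand-rolled fake keeps these tests free of codegen; mocktail remains the better fit once the error-wrapping cases need stubbed `StorageException` throws.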

Component
Supabase Storage Adapter
Area: infrastructure · Complexity: low
Epic Risks (3)
Security (high impact, medium probability)

Supabase RLS policies may not cover all query paths (e.g., service-role key usage in edge functions), potentially exposing attachment metadata or objects from another organisation to an unauthorised actor, breaching GDPR requirements.

Mitigation & Contingency

Mitigation: Add org_id scoping as an explicit WHERE clause at the Dart repository level as a second line of defence. Document which queries use the anon key versus service-role key, and audit all edge function calls that touch the storage bucket.

Contingency: If a bypass is discovered post-deployment, immediately revoke the affected signed URLs, rotate the service-role key, add the missing org_id filter, and deploy a patch. Notify affected organisations per GDPR breach protocol.

Dependency (medium impact, low probability)

Supabase free/pro tier storage quotas may be exceeded earlier than expected if organisations upload large PDFs frequently, causing upload failures with no graceful degradation for users.

Mitigation & Contingency

Mitigation: Configure a 10 MB per-file cap enforced in the upload service (Epic 2), and add a storage usage monitoring alert at 80% of the allocated quota. Document the upgrade path in runbooks.

Contingency: If the quota is hit, temporarily disable new uploads via the org-level feature flag (attachments_enabled) and upgrade the Supabase plan. Communicate clearly to affected coordinators with an estimated restoration time.

Integration (high impact, low probability)

The feature documentation specifies a migration order dependency: the activity_attachments table must be created after the activities table and before the Bufdir export join query is updated. Running migrations out of order will cause foreign-key or join failures.

Mitigation & Contingency

Mitigation: Add the migration to the numbered Supabase migration sequence immediately after the activities table migration. Add a CI check that runs migrations in order against a clean schema.

Contingency: If a deployment runs migrations out of order, roll back via the Supabase migration rollback script, reorder, and redeploy. No data loss occurs as attachments do not exist yet at that point.