Implement SupabaseStorageAdapter upload and delete
epic-document-attachments-foundation-task-006 — Implement the SupabaseStorageAdapter Dart class with two typed methods. uploadFile(orgId, activityId, fileName, bytes, mimeType) uploads the file to the activity-attachments bucket under the path {orgId}/{activityId}/{fileName} and returns the storage path; deleteFile(storagePath) removes the object from the bucket. Wrap Supabase storage errors in typed exceptions, and enforce the org-scoped path convention established in the bucket RLS task.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
Create an abstract class `StorageAdapter` (or `AttachmentStorageAdapter`) with the two method signatures, then `SupabaseStorageAdapter implements StorageAdapter`. Inject `SupabaseClient` via constructor to enable mocking. Use `supabaseClient.storage.from('activity-attachments').uploadBinary(path, bytes, fileOptions: FileOptions(contentType: mimeType))` for upload. For the path, construct as: `final path = '$orgId/$activityId/${_sanitizeFileName(fileName)}';`.
Implement `_sanitizeFileName` as a private static helper: replace spaces with underscores, remove characters outside `[a-zA-Z0-9._-]`, and truncate to 200 characters. Catch `StorageException` from the Supabase SDK and wrap it in your own typed exceptions so higher layers remain decoupled from the SDK. Register the adapter with Riverpod as a `Provider<StorageAdapter>` so consumers depend on the abstraction rather than the concrete Supabase implementation.
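The notes above can be sketched as follows. The exception names match the testing section; the Riverpod wiring and the `supabaseClientProvider` name are assumptions, not part of the task spec.

```dart
import 'dart:typed_data';

import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

/// Typed storage failures so higher layers stay decoupled from the SDK.
class StorageUploadError implements Exception {
  const StorageUploadError(this.message);
  final String message;
}

class StorageDeleteError implements Exception {
  const StorageDeleteError(this.message);
  final String message;
}

abstract class StorageAdapter {
  Future<String> uploadFile({
    required String orgId,
    required String activityId,
    required String fileName,
    required Uint8List bytes,
    required String mimeType,
  });

  Future<void> deleteFile(String storagePath);
}

class SupabaseStorageAdapter implements StorageAdapter {
  const SupabaseStorageAdapter(this._client);

  final SupabaseClient _client;
  static const _bucket = 'activity-attachments';

  @override
  Future<String> uploadFile({
    required String orgId,
    required String activityId,
    required String fileName,
    required Uint8List bytes,
    required String mimeType,
  }) async {
    // Org-scoped path convention from the bucket RLS task.
    final path = '$orgId/$activityId/${_sanitizeFileName(fileName)}';
    try {
      await _client.storage.from(_bucket).uploadBinary(
            path,
            bytes,
            fileOptions: FileOptions(contentType: mimeType),
          );
      return path;
    } on StorageException catch (e) {
      throw StorageUploadError(e.message);
    }
  }

  @override
  Future<void> deleteFile(String storagePath) async {
    try {
      await _client.storage.from(_bucket).remove([storagePath]);
    } on StorageException catch (e) {
      throw StorageDeleteError(e.message);
    }
  }

  /// Spaces -> underscores, strip anything outside [a-zA-Z0-9._-], cap at 200.
  static String _sanitizeFileName(String fileName) {
    final cleaned = fileName
        .replaceAll(' ', '_')
        .replaceAll(RegExp(r'[^a-zA-Z0-9._-]'), '');
    return cleaned.length > 200 ? cleaned.substring(0, 200) : cleaned;
  }
}

// Riverpod wiring; `supabaseClientProvider` is an assumed convention.
final supabaseClientProvider =
    Provider<SupabaseClient>((_) => Supabase.instance.client);

final storageAdapterProvider = Provider<StorageAdapter>(
  (ref) => SupabaseStorageAdapter(ref.watch(supabaseClientProvider)),
);
```

Exposing only `StorageAdapter` through the provider keeps the Supabase dependency swappable in tests and future backends.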
Testing Requirements
Unit tests using mockito/mocktail to mock `SupabaseStorageFileApi`. Test cases: (1) uploadFile happy path — verify bucket name 'activity-attachments', correct path construction, and returned path string; (2) uploadFile with special characters in fileName — verify sanitisation; (3) uploadFile storage error — Supabase throws StorageException, adapter wraps in StorageUploadError; (4) deleteFile happy path — verify remove is called with exact storagePath; (5) deleteFile not found — Supabase returns error, adapter wraps in StorageDeleteError. No real network calls. Target 100% branch coverage on error handling paths.
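A mocktail sketch of cases (1), (2), (3), and (4) might look like the following. It assumes the adapter from the implementation notes, and assumes `client.storage.from(...)` returns the `StorageFileApi` class from the Dart storage client (the class the task text refers to as `SupabaseStorageFileApi`).

```dart
import 'dart:typed_data';

import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

class _MockSupabaseClient extends Mock implements SupabaseClient {}

class _MockStorageClient extends Mock implements SupabaseStorageClient {}

class _MockFileApi extends Mock implements StorageFileApi {}

void main() {
  late _MockSupabaseClient client;
  late _MockStorageClient storage;
  late _MockFileApi fileApi;
  late SupabaseStorageAdapter adapter;

  setUpAll(() {
    // mocktail needs fallback values for non-primitive `any()` matchers.
    registerFallbackValue(Uint8List(0));
    registerFallbackValue(const FileOptions());
  });

  setUp(() {
    client = _MockSupabaseClient();
    storage = _MockStorageClient();
    fileApi = _MockFileApi();
    when(() => client.storage).thenReturn(storage);
    // Stubbing only this bucket name makes a wrong bucket fail fast.
    when(() => storage.from('activity-attachments')).thenReturn(fileApi);
    adapter = SupabaseStorageAdapter(client);
  });

  test('uploadFile sanitises the name and returns the org-scoped path', () async {
    when(() => fileApi.uploadBinary(any(), any(),
        fileOptions: any(named: 'fileOptions'))).thenAnswer((_) async => '');

    final path = await adapter.uploadFile(
      orgId: 'org-1',
      activityId: 'act-9',
      fileName: 'årsrapport 2024.pdf', // space and non-ASCII char
      bytes: Uint8List(4),
      mimeType: 'application/pdf',
    );

    expect(path, 'org-1/act-9/rsrapport_2024.pdf');
  });

  test('uploadFile wraps StorageException in StorageUploadError', () async {
    when(() => fileApi.uploadBinary(any(), any(),
            fileOptions: any(named: 'fileOptions')))
        .thenThrow(StorageException('denied'));

    expect(
      () => adapter.uploadFile(
        orgId: 'org-1',
        activityId: 'act-9',
        fileName: 'a.pdf',
        bytes: Uint8List(0),
        mimeType: 'application/pdf',
      ),
      throwsA(isA<StorageUploadError>()),
    );
  });

  test('deleteFile passes the exact storage path to remove', () async {
    when(() => fileApi.remove(any())).thenAnswer((_) async => <FileObject>[]);

    await adapter.deleteFile('org-1/act-9/a.pdf');

    verify(() => fileApi.remove(['org-1/act-9/a.pdf'])).called(1);
  });
}
```

The remaining case (deleteFile not found wrapped in StorageDeleteError) follows the same `thenThrow` pattern as case (3).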
Supabase RLS policies may not cover all query paths (e.g., service-role key usage in edge functions), potentially exposing attachment metadata or objects from another organisation to an unauthorised actor, breaching GDPR requirements.
Mitigation & Contingency
Mitigation: Add org_id scoping as an explicit WHERE clause at the Dart repository level as a second line of defence. Document which queries use the anon key versus service-role key, and audit all edge function calls that touch the storage bucket.
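The repository-level second line of defence could be sketched as below; the table and column names (`activity_attachments`, `org_id`, `activity_id`) are illustrative assumptions based on the migration task, not confirmed schema.

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Defence-in-depth: even though bucket/table RLS should enforce org
/// isolation, every query also filters by org_id explicitly, so a missed
/// or bypassed policy (e.g. a service-role path) cannot leak cross-org rows.
class AttachmentRepository {
  const AttachmentRepository(this._client);
  final SupabaseClient _client;

  Future<List<Map<String, dynamic>>> listForActivity({
    required String orgId,
    required String activityId,
  }) async {
    final rows = await _client
        .from('activity_attachments')
        .select()
        .eq('org_id', orgId) // explicit second line of defence
        .eq('activity_id', activityId);
    return List<Map<String, dynamic>>.from(rows);
  }
}
```

Keeping the filter in one repository class makes the "every query is org-scoped" invariant auditable in a single file.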
Contingency: If a bypass is discovered post-deployment, immediately revoke the affected signed URLs, rotate the service-role key, add the missing org_id filter, and deploy a patch. Notify affected organisations per GDPR breach protocol.
Supabase free/pro tier storage quotas may be exceeded earlier than expected if organisations upload large PDFs frequently, causing upload failures with no graceful degradation for users.
Mitigation & Contingency
Mitigation: Configure a 10 MB per-file cap enforced in the upload service (Epic 2), and add a storage usage monitoring alert at 80% of the allocated quota. Document the upgrade path in runbooks.
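The per-file cap can be enforced with a small guard before any bytes reach Supabase; the names below are a sketch, since the upload service itself belongs to Epic 2.

```dart
/// 10 MB per-file cap, checked client-side before upload.
const int maxAttachmentBytes = 10 * 1024 * 1024;

class FileTooLargeError implements Exception {
  const FileTooLargeError(this.actualBytes);
  final int actualBytes;

  @override
  String toString() =>
      'FileTooLargeError: $actualBytes bytes exceeds cap of $maxAttachmentBytes';
}

/// Throws before upload when the payload exceeds the cap, so the user gets
/// an immediate, specific error instead of a quota-related upload failure.
void enforceAttachmentSizeCap(int byteLength) {
  if (byteLength > maxAttachmentBytes) {
    throw FileTooLargeError(byteLength);
  }
}
```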
Contingency: If the quota is hit, temporarily disable new uploads via the org-level feature flag (attachments_enabled) and upgrade the Supabase plan. Communicate clearly to affected coordinators with an estimated restoration time.
The feature documentation specifies a migration order dependency: the activity_attachments table must be created after the activities table and before the Bufdir export join query is updated. Running migrations out of order will cause foreign-key or join failures.
Mitigation & Contingency
Mitigation: Add the migration to the numbered Supabase migration sequence immediately after the activities table migration. Add a CI check that runs migrations in order against a clean schema.
Contingency: If a deployment runs migrations out of order, roll back via the Supabase migration rollback script, reorder, and redeploy. No data loss occurs as attachments do not exist yet at that point.
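One way to implement the CI ordering check is a small script over the numbered migration directory; the file-name fragments below are assumptions and should be replaced with the real migration names.

```dart
import 'dart:io';

/// CI guard sketch: fails the build if the attachments migration does not
/// sort after the activities migration in supabase/migrations.
void main() {
  final names = Directory('supabase/migrations')
      .listSync()
      .whereType<File>()
      .map((f) => f.uri.pathSegments.last)
      .toList()
    ..sort(); // Supabase applies migrations in lexicographic order.

  // Hypothetical name fragments; adjust to the real numbered file names.
  final activities = names.indexWhere((n) => n.contains('create_activities'));
  final attachments =
      names.indexWhere((n) => n.contains('create_activity_attachments'));

  if (activities == -1 || attachments == -1 || attachments < activities) {
    stderr.writeln(
        'Migration order violation: activity_attachments must follow activities.');
    exit(1);
  }
}
```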