Write unit tests for upload validation and rollback logic
epic-document-attachments-services-task-009 — Write flutter_test unit tests for AttachmentUploadService covering: file_too_large rejection at exactly 10 MB + 1 byte, invalid_mime_type rejection for disallowed types, successful upload path with metadata persistence, and rollback behaviour when metadata write fails (verify StorageAdapter.delete is called with the correct path and the error propagates as upload_failed). Use mockito or mocktail to mock StorageAdapter and ActivityAttachmentRepository.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
Use a fake Uint8List of the required byte length to simulate file size without allocating real memory where possible, or pass a length parameter to a stub. For MIME type tests, construct a minimal XFile wrapper with the desired MIME type set explicitly rather than relying on file-extension sniffing. For the rollback test, configure the mock ActivityAttachmentRepository to throw a RepositoryException, then verify the StorageAdapter.delete call using mocktail's verify() with a captured argument matcher. Ensure the test does NOT catch the propagated error: use expectLater with a throwsA matcher so the failure is asserted rather than swallowed.
Group tests with Dart's group() function for readable output. Follow Arrange-Act-Assert structure in every test case.
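The rollback scenario above can be sketched as a mocktail test. The StorageAdapter and ActivityAttachmentRepository interfaces, the AttachmentUploadService constructor, and the uploadAttachment method name are assumptions standing in for the real app code; a generic Exception stands in for RepositoryException:

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

// Hypothetical shapes; the real interfaces live in the app code.
abstract class StorageAdapter {
  Future<void> upload(String path, List<int> bytes);
  Future<void> delete(String path);
}

abstract class ActivityAttachmentRepository {
  Future<void> insertMetadata(String path);
}

class MockStorageAdapter extends Mock implements StorageAdapter {}

class MockAttachmentRepository extends Mock
    implements ActivityAttachmentRepository {}

void main() {
  setUpAll(() => registerFallbackValue(<int>[]));

  group('AttachmentUploadService rollback', () {
    test('deletes the uploaded object when the metadata insert fails',
        () async {
      // Arrange: upload succeeds, metadata write throws.
      final storage = MockStorageAdapter();
      final repo = MockAttachmentRepository();
      when(() => storage.upload(any(), any())).thenAnswer((_) async {});
      when(() => storage.delete(any())).thenAnswer((_) async {});
      when(() => repo.insertMetadata(any()))
          .thenThrow(Exception('repository failure'));
      final service = AttachmentUploadService(storage, repo); // hypothetical

      // Act + Assert: the error must propagate, not be swallowed.
      await expectLater(
        service.uploadAttachment('attachments/a.png', <int>[1, 2, 3]),
        throwsA(isA<Exception>()),
      );

      // Assert: rollback delete was issued with the same storage path.
      final captured = verify(() => storage.delete(captureAny())).captured;
      expect(captured.single, 'attachments/a.png');
    });
  });
}
```

The captured-argument check is what distinguishes this test from a bare verify(): it proves the delete targeted the exact object just uploaded, not merely that some delete occurred.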
Testing Requirements
Unit tests only using flutter_test. Use mocktail (preferred) or mockito to mock StorageAdapter and ActivityAttachmentRepository. Cover boundary conditions at the 10 MB limit (exact boundary, boundary+1), all allowed and at least two disallowed MIME types, the happy path, and the rollback path. Verify mock interactions with verify()/verifyNever().
Aim for 100% branch coverage of AttachmentUploadService. No integration or widget tests in this task.
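The boundary cases at the 10 MB limit can be sketched without any file I/O, since Uint8List is zero-filled on allocation. The maxUploadBytes constant and the isWithinSizeLimit helper are hypothetical stand-ins for the service's real size check:

```dart
import 'dart:typed_data';
import 'package:flutter_test/flutter_test.dart';

const int maxUploadBytes = 10 * 1024 * 1024; // assumed 10 MB limit

// Hypothetical stand-in for AttachmentUploadService's size validation.
bool isWithinSizeLimit(int lengthInBytes) => lengthInBytes <= maxUploadBytes;

void main() {
  group('file size validation', () {
    test('accepts a file at exactly 10 MB', () {
      // Exact boundary: 10 * 1024 * 1024 bytes is still allowed.
      expect(isWithinSizeLimit(Uint8List(maxUploadBytes).length), isTrue);
    });

    test('rejects a file at 10 MB + 1 byte', () {
      // Boundary + 1: passing the length directly avoids allocation.
      expect(isWithinSizeLimit(maxUploadBytes + 1), isFalse);
    });
  });
}
```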
The storage upload succeeds but the subsequent metadata insert fails. The rollback delete call to Supabase Storage could itself fail (network error, transient timeout), leaving an orphaned object in the bucket with no database record pointing to it — a cost and compliance risk that also breaks delete-on-cascade logic.
Mitigation & Contingency
Mitigation: Wrap the rollback delete in a retry loop (3 attempts with exponential back-off). Log any orphaned-object incident to a dedicated structured log stream for periodic audit. Consider a scheduled Supabase Edge Function that reconciles storage objects against database records and flags orphans.
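The retry-with-back-off mitigation can be sketched as below; the StorageAdapter interface and the logOrphanedObject logger are assumptions, not existing APIs:

```dart
// Hypothetical adapter shape; the real interface lives in the app code.
abstract class StorageAdapter {
  Future<void> delete(String path);
}

// Hypothetical hook into the structured orphan-audit log stream.
void logOrphanedObject(String path, Object error) {
  print('ORPHANED_OBJECT path=$path error=$error');
}

/// Attempts the rollback delete up to [maxAttempts] times with
/// exponential back-off (200 ms, 400 ms, ...). On final failure the
/// orphan is logged for the reconciliation audit and the error rethrown.
Future<void> deleteWithRetry(
  StorageAdapter storage,
  String path, {
  int maxAttempts = 3,
}) async {
  var delay = const Duration(milliseconds: 200);
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await storage.delete(path);
      return;
    } catch (e) {
      if (attempt == maxAttempts) {
        logOrphanedObject(path, e);
        rethrow;
      }
      await Future<void>.delayed(delay);
      delay *= 2; // double the wait before the next attempt
    }
  }
}
```

Rethrowing on the final attempt keeps the caller's upload_failed propagation intact while still leaving an audit trail for the reconciliation job.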
Contingency: If orphaned objects accumulate, run the reconciliation edge function manually to identify and purge them. Add a monitoring alert for metadata insert failures after successful uploads so the issue is caught within minutes.
If the signed URL TTL is set too short, users browsing the attachment preview modal on slow connections will receive expired URLs before the content loads, causing a broken experience. If set too long, a URL shared outside the app (e.g., pasted into a chat) remains valid beyond the intended access window.
Mitigation & Contingency
Mitigation: Default the signed-URL TTL to 60 minutes, configurable via a named constant. Set the in-memory cache TTL to the signed-URL TTL minus 5 minutes so cached URLs are refreshed before they expire. Document the trade-off in code comments.
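A minimal sketch of the named constants; the identifiers are assumptions, chosen only to illustrate deriving the cache TTL from the URL TTL rather than hard-coding both:

```dart
/// Signed-URL lifetime. 60 minutes is long enough for slow connections
/// to finish loading previews, short enough to limit out-of-app sharing.
const Duration signedUrlTtl = Duration(minutes: 60);

/// Safety margin subtracted so a cached URL is never handed out
/// moments before it expires.
const Duration cacheRefreshMargin = Duration(minutes: 5);

/// In-memory cache TTL, derived so the two values cannot drift apart.
final Duration signedUrlCacheTtl = signedUrlTtl - cacheRefreshMargin;
```

Deriving the cache TTL at one site means a future change to the URL TTL automatically preserves the refresh margin.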
Contingency: If users report broken previews, ship a hotfix that shortens the in-memory cache TTL. If a URL leak is reported, rotate the Supabase Storage signing secret to invalidate all outstanding signed URLs immediately.
The multi-attachment user story requires parallel uploads with individual progress indicators. Managing concurrent BLoC events for 3–5 simultaneous uploads risks state collisions, progress-indicator mix-ups, or partial rollbacks that are difficult to reason about.
Mitigation & Contingency
Mitigation: Design the BLoC to maintain a per-attachment upload state map keyed by a client-generated UUID. Each upload runs as an isolated Future with its own result emitted as a typed event. Write integration tests for 3-concurrent-upload scenarios.
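The per-attachment state map can be sketched as immutable state classes; the names are assumptions illustrating the keyed-by-UUID design rather than the app's actual BLoC types:

```dart
enum UploadPhase { inProgress, succeeded, failed }

/// State of a single attachment upload.
class AttachmentUploadState {
  const AttachmentUploadState(this.phase, this.progress);
  final UploadPhase phase;
  final double progress; // 0.0–1.0, drives one progress indicator
}

/// BLoC state: one entry per in-flight upload, keyed by a
/// client-generated UUID so concurrent results can never collide.
class UploadsState {
  const UploadsState(this.uploads);
  final Map<String, AttachmentUploadState> uploads;

  /// Returns a new state with one upload's entry replaced; the map is
  /// copied so each emitted state is immutable.
  UploadsState withUpload(String id, AttachmentUploadState state) =>
      UploadsState({...uploads, id: state});
}
```

Because every event carries its UUID, a late result from upload A can never overwrite the progress of upload B, which is the collision the sequential fallback would otherwise be needed to avoid.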
Contingency: If state collisions occur in production, fall back to sequential upload processing (one at a time) gated behind a feature flag until the concurrent model is stabilised.