Implement cache invalidation on attachment deletion
epic-document-attachments-services-task-006 — Add an `invalidateCacheEntry(storagePath)` method to AttachmentSignedUrlService that removes the cache entry for the given path. Wire this method so it is called by AttachmentUploadService (or the BLoC layer) when a delete operation succeeds, ensuring stale signed URLs are not served after the underlying object is removed.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 2 - 518 tasks
Can start after Tier 1 completes
Implementation Notes
The `invalidateCacheEntry(String storagePath)` method is a one-liner: `_cache.remove(storagePath)`. The non-trivial part is the wiring: decide at which layer to call it. Recommended: the BLoC delete event handler calls `AttachmentSignedUrlService.invalidateCacheEntry(path)` after receiving a successful `Right` from the delete use case — this keeps the service layer cohesive and the BLoC as the coordinator. Avoid having `AttachmentUploadService` depend on `AttachmentSignedUrlService` directly to prevent a circular dependency.
Add a `// CACHE INVALIDATION: remove stale signed URL entry after successful delete` comment at the call site for maintainability. This task is intentionally small — do not add batch invalidation, pattern-based invalidation, or prefetching as scope creep.
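A minimal sketch of the shape this could take, assuming a Map-backed cache and a dartz-style `Either` from the delete use case. `SignedUrlGateway`, `AttachmentDeleteCoordinator`, and all field and parameter names are illustrative stand-ins, not the real codebase; in the app the coordinator role is played by the BLoC delete event handler:

```dart
import 'package:dartz/dartz.dart';

// Hypothetical seam over the Supabase SDK so the service stays unit-testable.
abstract class SignedUrlGateway {
  Future<String> createSignedUrl(String path, int expiresInSeconds);
}

class AttachmentSignedUrlService {
  AttachmentSignedUrlService(this._gateway);

  final SignedUrlGateway _gateway;
  final Map<String, String> _cache = {}; // storagePath -> URL (TTL elided)

  Future<String> getSignedUrl(String storagePath) async =>
      _cache[storagePath] ??=
          await _gateway.createSignedUrl(storagePath, 3600);

  // The method this task adds — a one-liner, as noted above.
  void invalidateCacheEntry(String storagePath) => _cache.remove(storagePath);
}

// Stand-in for the BLoC delete event handler: invalidate only after the
// delete use case returns Right.
class AttachmentDeleteCoordinator {
  AttachmentDeleteCoordinator(this._deleteAttachment, this._signedUrlService);

  final Future<Either<String, Unit>> Function(String attachmentId)
      _deleteAttachment;
  final AttachmentSignedUrlService _signedUrlService;

  Future<void> onDeleteRequested(
      String attachmentId, String storagePath) async {
    final result = await _deleteAttachment(attachmentId);
    result.fold(
      (failure) {
        // emit a delete-failure state; the cache entry stays valid
      },
      (_) {
        // CACHE INVALIDATION: remove stale signed URL entry after successful delete
        _signedUrlService.invalidateCacheEntry(storagePath);
        // emit a delete-success state
      },
    );
  }
}
```

Injecting the gateway rather than the Supabase client directly keeps the service a pure unit under test, which the test sketch below relies on.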
Testing Requirements
Unit tests (flutter_test):
(1) Invalidate a cached path — a subsequent `getSignedUrl` call hits the Supabase mock again.
(2) Invalidate a non-existent path — no exception is thrown and service state is unchanged.
(3) Invalidating path A does not affect cached path B — with both paths in the cache, remove A and verify B is still cached.
(4) End-to-end flow — cache a URL, delete the attachment, then verify the cache is empty for that path and the mock storage SDK is called to regenerate the URL on next access.
All tests are pure unit tests — no real Supabase calls.
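Building on the sketch above, tests (1)–(3) might look like this with mocktail (a hypothetical test-double choice; the task does not mandate a mocking library):

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

class MockSignedUrlGateway extends Mock implements SignedUrlGateway {}

void main() {
  late MockSignedUrlGateway gateway;
  late AttachmentSignedUrlService service;

  setUp(() {
    gateway = MockSignedUrlGateway();
    service = AttachmentSignedUrlService(gateway);
    when(() => gateway.createSignedUrl(any(), any())).thenAnswer((inv) async =>
        'https://signed.example/${inv.positionalArguments.first}');
  });

  test('(1) invalidated path hits the mock again on next access', () async {
    await service.getSignedUrl('docs/a.pdf'); // populates the cache
    await service.getSignedUrl('docs/a.pdf'); // served from cache
    verify(() => gateway.createSignedUrl('docs/a.pdf', any())).called(1);

    service.invalidateCacheEntry('docs/a.pdf');
    await service.getSignedUrl('docs/a.pdf'); // cache miss -> new SDK call
    verify(() => gateway.createSignedUrl('docs/a.pdf', any())).called(1);
  });

  test('(2) invalidating a non-existent path does not throw', () {
    expect(() => service.invalidateCacheEntry('missing.pdf'), returnsNormally);
  });

  test('(3) invalidating path A leaves path B cached', () async {
    await service.getSignedUrl('docs/a.pdf');
    await service.getSignedUrl('docs/b.pdf');
    service.invalidateCacheEntry('docs/a.pdf');
    await service.getSignedUrl('docs/b.pdf'); // no extra call for B
    verify(() => gateway.createSignedUrl('docs/b.pdf', any())).called(1);
  });
}
```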
The storage upload succeeds but the subsequent metadata insert fails. The rollback delete call to Supabase Storage could itself fail (network error, transient timeout), leaving an orphaned object in the bucket with no database record pointing to it — a cost and compliance risk that also breaks on-delete-cascade logic.
Mitigation & Contingency
Mitigation: Wrap the rollback delete in a retry loop (3 attempts, exponential back-off). Log orphaned-object incidents to a dedicated structured log stream for periodic audit. Consider a scheduled Supabase Edge Function that reconciles storage objects against database records and flags orphans.
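A sketch of the retry wrapper, assuming the rollback delete is passed in as an async closure; `rollbackDeleteWithRetry` and the `onOrphaned` hook are illustrative names:

```dart
/// Retries the rollback delete up to [maxAttempts] times with exponential
/// back-off, reporting a final failure as an orphaned-object incident.
Future<void> rollbackDeleteWithRetry(
  Future<void> Function() deleteObject, {
  int maxAttempts = 3,
  void Function(Object error)? onOrphaned, // hook into the structured log stream
}) async {
  for (var attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      await deleteObject();
      return;
    } catch (error) {
      if (attempt == maxAttempts) {
        onOrphaned?.call(error); // flag the orphan for the reconciliation audit
        rethrow;
      }
      // Back-off: 1s, 2s, 4s, ... between attempts.
      await Future<void>.delayed(Duration(seconds: 1 << (attempt - 1)));
    }
  }
}
```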
Contingency: If orphaned objects accumulate, run the reconciliation edge function manually to identify and purge them. Add a monitoring alert for metadata insert failures after successful uploads so the issue is caught within minutes.
If the signed URL TTL is set too short, users browsing the attachment preview modal on slow connections will receive expired URLs before the content loads, causing a broken experience. If set too long, a URL shared outside the app (e.g., pasted into a chat) remains valid beyond the intended access window.
Mitigation & Contingency
Mitigation: Default TTL to 60 minutes, configurable via a named constant. The in-memory cache TTL should be set to TTL minus 5 minutes to ensure cached URLs are refreshed before they expire. Document the trade-off in code comments.
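As a sketch, the named constants might look like this (names are illustrative):

```dart
/// Lifetime requested for each Supabase signed URL.
const Duration kSignedUrlTtl = Duration(minutes: 60);

/// In-memory cache lifetime: the URL TTL minus a 5-minute safety margin,
/// so a cached URL is always refreshed before the underlying URL expires.
const Duration kSignedUrlCacheTtl = Duration(minutes: 55);
```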
Contingency: If users report broken previews, ship a hotfix that shortens the cache TTL. If a URL leak is reported, rotate the Supabase storage signing secret to invalidate all outstanding signed URLs immediately.
The multi-attachment user story requires parallel uploads with individual progress indicators. Managing concurrent BLoC events for 3–5 simultaneous uploads risks state collisions, progress-indicator mix-ups, or partial rollbacks that are difficult to reason about.
Mitigation & Contingency
Mitigation: Design the BLoC to maintain a per-attachment upload state map keyed by a client-generated UUID. Each upload runs as an isolated Future with its own result emitted as a typed event. Write integration tests for 3-concurrent-upload scenarios.
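A sketch of the per-attachment state shape, keyed by a client-generated UUID (all names are illustrative):

```dart
enum UploadStatus { inProgress, succeeded, failed }

/// State of a single upload; updated independently of all other uploads.
class AttachmentUploadState {
  const AttachmentUploadState({
    required this.status,
    this.progress = 0.0, // 0.0–1.0, drives the per-attachment indicator
    this.error,
  });

  final UploadStatus status;
  final double progress;
  final Object? error;
}

/// BLoC state: UUID -> isolated upload state. Emitting a new map with one
/// entry changed cannot collide with or overwrite a sibling upload's state.
class AttachmentsState {
  const AttachmentsState({this.uploads = const {}});

  final Map<String, AttachmentUploadState> uploads;

  AttachmentsState copyWith({Map<String, AttachmentUploadState>? uploads}) =>
      AttachmentsState(uploads: uploads ?? this.uploads);
}
```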
Contingency: If state collisions occur in production, fall back to sequential upload processing (one at a time) gated behind a feature flag until the concurrent model is stabilised.