Priority: high · Complexity: low · Area: backend · Status: pending · Specialist: backend · Tier 4

Acceptance Criteria

An `attachmentUploadServiceProvider` is defined and returns a correctly constructed `AttachmentUploadService` with injected `StorageAdapter` and `ActivityAttachmentRepository` dependencies
An `attachmentSignedUrlServiceProvider` is defined and returns a correctly constructed `AttachmentSignedUrlService` with an injected `StorageAdapter` and a configurable TTL sourced from app configuration
Both providers are scoped to the active org session — when the user switches organisations, both service instances (including the signed URL cache) are disposed and recreated for the new org context
The `AttachmentSignedUrlService` cache is never shared across two different org sessions — validated by spinning up two provider scopes with different org IDs and asserting independent cache state
Providers are defined in a dedicated `attachment_providers.dart` file and exported from the feature's barrel export
The `AttachmentBloc` (task-007) reads `attachmentUploadServiceProvider` and `attachmentSignedUrlServiceProvider` via Riverpod — no manual instantiation in widgets
Provider disposal is verified: when a scoped `ProviderContainer` is disposed, the underlying service's in-memory cache is also released (GC-eligible)
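As a sketch of the basic provider wiring the criteria describe (org scoping aside), the two providers might look like the following. The constructor parameter names and the `storageAdapterProvider`, `activityAttachmentRepositoryProvider`, and `appConfigProvider` providers are assumptions about the surrounding codebase:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

// Sketch only — dependency providers are assumed to exist elsewhere.
final attachmentUploadServiceProvider = Provider<AttachmentUploadService>((ref) {
  return AttachmentUploadService(
    storage: ref.watch(storageAdapterProvider),
    repository: ref.watch(activityAttachmentRepositoryProvider),
  );
});

final attachmentSignedUrlServiceProvider =
    Provider<AttachmentSignedUrlService>((ref) {
  return AttachmentSignedUrlService(
    storage: ref.watch(storageAdapterProvider),
    ttl: ref.watch(appConfigProvider).signedUrlTtl, // TTL from app config
  );
});
```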

Technical Requirements

Frameworks
Flutter
Riverpod
APIs
Supabase Storage SDK (injected via `StorageAdapter` provider)
Supabase PostgreSQL 15 (injected via `ActivityAttachmentRepository` provider)
Data models
`activity`
Performance requirements
Provider instantiation is lazy — services are not created until first accessed
Provider reads in BLoC constructors must not trigger unnecessary rebuilds — use `ref.read` (not `ref.watch`) for services injected into BLoC
Security requirements
Per-org provider scoping ensures the signed URL cache (which contains time-limited private file access URLs) is never accessible to a different org's session
The Supabase client used by `StorageAdapter` must be the authenticated instance tied to the current org session — not a shared unauthenticated client
The service role key is never injected into client-side providers — only the user JWT-authenticated Supabase client is used
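The `ref.read` injection rule above can be sketched as follows; the `AttachmentBloc` constructor parameters are illustrative, not the actual task-007 signature:

```dart
// Inside a ConsumerWidget's build (or a ConsumerState) — sketch only.
// Services are read once with ref.read, so the widget does not subscribe
// and rebuild when provider internals change; ref.watch would.
final bloc = AttachmentBloc(
  uploadService: ref.read(attachmentUploadServiceProvider),
  signedUrlService: ref.read(attachmentSignedUrlServiceProvider),
);
```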

Execution Context

Execution Tier
Tier 4

Tier 4 - 323 tasks

Can start after Tier 3 completes

Implementation Notes

Use `riverpod` 2.x with code generation (`@riverpod` annotation) or manual `Provider`/`NotifierProvider` depending on the project's existing convention — match the existing pattern. For org-scoped providers, use `ProviderScope` overrides at the org-session level. The recommended pattern: define a `currentOrgIdProvider` (a `StateProvider`) that changes on org switch; make the service providers `family` providers parameterized by `orgId`, reading from `currentOrgIdProvider`. This ensures automatic re-creation on org change without manual disposal logic.
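Under the manual-provider convention, the org-scoped variant of the signed URL provider could be sketched like this. `storageAdapterProvider` and `appConfigProvider` are assumed to exist; the names are illustrative:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

// Changes on org switch; initial value is a placeholder for this sketch.
final currentOrgIdProvider = StateProvider<String>((ref) => 'org-default');

// Family keyed by orgId: switching orgs resolves a fresh instance, and
// autoDispose releases the old instance (and its cache) once unused.
final attachmentSignedUrlServiceProvider = Provider.autoDispose
    .family<AttachmentSignedUrlService, String>((ref, orgId) {
  return AttachmentSignedUrlService(
    storage: ref.watch(storageAdapterProvider),
    ttl: ref.watch(appConfigProvider).signedUrlTtl,
  );
});

// Call sites resolve the instance for the active org, e.g.:
// ref.watch(attachmentSignedUrlServiceProvider(ref.watch(currentOrgIdProvider)))
```

Keying the family on `orgId` (rather than manually disposing services in an org-switch handler) is what makes re-creation automatic: the old family member simply loses its last listener and is auto-disposed.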

Place all attachment providers in `lib/features/attachments/providers/attachment_providers.dart`. Export via the feature barrel `lib/features/attachments/attachments.dart`. Keep providers thin — no business logic in provider bodies, only construction and dependency wiring.
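The barrel wiring described above is a one-liner:

```dart
// lib/features/attachments/attachments.dart — feature barrel export.
export 'providers/attachment_providers.dart';
```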

Testing Requirements

Unit/integration tests (flutter_test with `ProviderContainer`):
(1) `attachmentUploadServiceProvider` resolves to a non-null `AttachmentUploadService` instance with the correct dependencies injected — verify via type check and mock dependency tracing.
(2) `attachmentSignedUrlServiceProvider` resolves with a default TTL of 3300 seconds unless overridden.
(3) Override test: supply a mock `StorageAdapter` via `ProviderContainer(overrides: [...])` and verify `AttachmentUploadService` uses the mock.
(4) Scope isolation test: two `ProviderContainer`s with different org scopes — adding a cache entry to one does not appear in the other.
(5) Disposal test: dispose a `ProviderContainer` and verify the service's cache map is cleared.
Use Riverpod's `ProviderContainer` directly in tests — do not rely on widget tests for provider scoping verification.
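Tests (3) and (4) might be sketched like this; `MockStorageAdapter` and the provider names are assumptions about the surrounding codebase:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('override injects a mock StorageAdapter', () {
    final container = ProviderContainer(overrides: [
      storageAdapterProvider.overrideWithValue(MockStorageAdapter()),
    ]);
    addTearDown(container.dispose);
    expect(container.read(attachmentUploadServiceProvider),
        isA<AttachmentUploadService>());
  });

  test('service instances are isolated across org scopes', () {
    // Each container models one org session's provider scope.
    final orgA = ProviderContainer();
    final orgB = ProviderContainer();
    addTearDown(orgA.dispose);
    addTearDown(orgB.dispose);
    final a = orgA.read(attachmentSignedUrlServiceProvider);
    final b = orgB.read(attachmentSignedUrlServiceProvider);
    expect(identical(a, b), isFalse); // separate instances, separate caches
  });
}
```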

Component
Signed URL Service
Type: service · Risk: low
Epic Risks (3)
Impact: medium · Probability: medium · Category: technical

The storage upload succeeds but the subsequent metadata insert fails. The rollback delete call to Supabase Storage could itself fail (network error, transient timeout), leaving an orphaned object in the bucket with no database record pointing to it — a cost and compliance risk that also breaks delete-on-cascade logic.

Mitigation & Contingency

Mitigation: Wrap the rollback delete in a retry loop (3 attempts, exponential back-off). Log orphaned-object incidents to a dedicated structured log stream for periodic audit. Consider a scheduled Supabase Edge Function that reconciles storage objects against database records and flags orphans.
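The retry loop in the mitigation could be sketched as follows; `StorageAdapter.delete` and the `logOrphan` callback are illustrative names, not confirmed APIs:

```dart
// Sketch: best-effort rollback delete with 3 attempts and exponential
// back-off, logging the orphan for the reconciliation audit if all fail.
Future<void> rollbackDelete(
  StorageAdapter storage,
  String objectPath, {
  void Function(String path, Object error)? logOrphan,
}) async {
  for (var attempt = 1; attempt <= 3; attempt++) {
    try {
      await storage.delete(objectPath);
      return; // rollback succeeded
    } catch (e) {
      if (attempt == 3) {
        // Give up: record the orphaned object for the periodic audit.
        logOrphan?.call(objectPath, e);
        return;
      }
      // Back off 400 ms, then 800 ms, between attempts.
      await Future.delayed(Duration(milliseconds: 200 * (1 << attempt)));
    }
  }
}
```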

Contingency: If orphaned objects accumulate, run the reconciliation edge function manually to identify and purge them. Add a monitoring alert for metadata insert failures after successful uploads so the issue is caught within minutes.

Impact: medium · Probability: medium · Category: scope

If the signed URL TTL is set too short, users browsing the attachment preview modal on slow connections will receive expired URLs before the content loads, causing a broken experience. If set too long, a URL shared outside the app (e.g., pasted into a chat) remains valid beyond the intended access window.

Mitigation & Contingency

Mitigation: Default TTL to 60 minutes, configurable via a named constant. The in-memory cache TTL should be set to TTL minus 5 minutes to ensure cached URLs are refreshed before they expire. Document the trade-off in code comments.
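The TTL relationship above could be encoded as named constants (values are from the mitigation; the names are illustrative):

```dart
/// How long a signed URL remains valid (the access window).
const Duration signedUrlTtl = Duration(minutes: 60);

/// Cache entries are refreshed 5 minutes before the URL expires, so a
/// user on a slow connection never receives a near-expiry URL.
const Duration signedUrlCacheTtl = Duration(minutes: 55); // ttl - 5 min
```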

Contingency: If users report broken previews, ship a hotfix that shortens the cache TTL so cached URLs are refreshed sooner. If a URL leak is reported, rotate the Supabase storage signing secret to invalidate all outstanding signed URLs immediately.

Impact: medium · Probability: medium · Category: technical

The multi-attachment user story requires parallel uploads with individual progress indicators. Managing concurrent BLoC events for 3–5 simultaneous uploads risks state collisions, progress indicator mixups, or partial rollbacks that are difficult to reason about.

Mitigation & Contingency

Mitigation: Design the BLoC to maintain a per-attachment upload state map keyed by a client-generated UUID. Each upload runs as an isolated Future with its own result emitted as a typed event. Write integration tests for 3-concurrent-upload scenarios.
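The per-attachment state map from the mitigation could be sketched as plain Dart; all type and field names here are illustrative, not the actual task-007 state classes:

```dart
// Sketch: upload state keyed by a client-generated UUID, so 3–5
// concurrent uploads each track progress independently and never collide.
enum UploadPhase { queued, uploading, done, failed }

class UploadEntry {
  const UploadEntry({required this.phase, this.progress = 0});
  final UploadPhase phase;
  final double progress; // 0.0–1.0 for the progress indicator
}

class AttachmentUploadsState {
  const AttachmentUploadsState({this.uploads = const {}});
  final Map<String, UploadEntry> uploads; // key: client-generated UUID

  // Immutable update for a single attachment, leaving siblings untouched.
  AttachmentUploadsState withEntry(String id, UploadEntry entry) =>
      AttachmentUploadsState(uploads: {...uploads, id: entry});
}
```

Each completed or failed upload then emits a typed event carrying its UUID, and the reducer only touches that one map entry, which is what keeps partial rollbacks reasoned about per attachment rather than globally.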

Contingency: If state collisions occur in production, fall back to sequential upload processing (one at a time) gated behind a feature flag until the concurrent model is stabilised.