Priority: critical · Complexity: medium · Area: backend · Status: pending · Assignee: backend specialist · Execution tier: Tier 2

Acceptance Criteria

ExpenseRepository class exposes typed async methods:
createClaim(CreateExpenseClaimDto) → Future<ExpenseClaim>
getClaimById(String id) → Future<ExpenseClaim?>
listClaimsForUser(String userId, {ExpenseClaimStatus? statusFilter}) → Future<List<ExpenseClaim>>
listClaimsForOrg(String orgId, {DateRange? period}) → Future<List<ExpenseClaim>>
updateClaimStatus(String id, ExpenseClaimStatus status) → Future<ExpenseClaim>
deleteDraftClaim(String id) → Future<void>
addLineItem(String claimId, CreateLineItemDto) → Future<ExpenseLineItem>
removeLineItem(String lineItemId) → Future<void>
Offline write queue persists pending createClaim and addLineItem operations to local storage (Hive box or SQLite table) when Supabase is unreachable
On connectivity restoration (detected via connectivity_plus), offline queue drains automatically in FIFO order
If sync fails after 3 retries with exponential backoff, the pending item remains in the queue and the user is notified via a stream event
claimStatusStream(String claimId) → Stream<ExpenseClaimStatus> uses Supabase Realtime channel subscription, filtered by claim ID
All returned Dart objects are immutable value types (freezed or manual copyWith) — no mutable model classes
Repository is injectable (Riverpod provider or constructor injection) and mockable for unit tests
Repository correctly maps Supabase Postgres errors (unique violation, FK violation, RLS denial) to typed domain exceptions
Offline-created claims are assigned a local UUID immediately so UI can reference them before server confirmation
When the same claim arrives via both a Realtime event and a direct fetch, the repository deduplicates by claim ID and emits a single up-to-date record
listClaimsForUser returns results sorted by created_at DESC by default
Realtime subscription is cancelled when the repository is disposed to prevent memory leaks
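The criteria above imply an interface roughly like the following sketch. The method signatures are taken verbatim from the criteria; the exception class names are one illustrative way to type the Postgres error mapping, not a confirmed design.

```dart
/// Typed domain exceptions the repository maps Supabase Postgres errors
/// onto (unique violation, FK violation, RLS denial). Names are illustrative.
sealed class ExpenseRepositoryException implements Exception {
  const ExpenseRepositoryException(this.message);
  final String message;
}

class DuplicateClaimException extends ExpenseRepositoryException {
  const DuplicateClaimException(super.message); // unique violation
}

class InvalidReferenceException extends ExpenseRepositoryException {
  const InvalidReferenceException(super.message); // FK violation
}

class AccessDeniedException extends ExpenseRepositoryException {
  const AccessDeniedException(super.message); // RLS denial
}

/// Abstract interface so the repository is mockable in unit tests.
abstract class IExpenseRepository {
  Future<ExpenseClaim> createClaim(CreateExpenseClaimDto dto);
  Future<ExpenseClaim?> getClaimById(String id);
  Future<List<ExpenseClaim>> listClaimsForUser(String userId,
      {ExpenseClaimStatus? statusFilter});
  Future<List<ExpenseClaim>> listClaimsForOrg(String orgId,
      {DateRange? period});
  Future<ExpenseClaim> updateClaimStatus(String id, ExpenseClaimStatus status);
  Future<void> deleteDraftClaim(String id);
  Future<ExpenseLineItem> addLineItem(String claimId, CreateLineItemDto dto);
  Future<void> removeLineItem(String lineItemId);
  Stream<ExpenseClaimStatus> claimStatusStream(String claimId);

  /// Cancels Realtime subscriptions to prevent memory leaks.
  void dispose();
}
```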

Technical Requirements

frameworks
Flutter
Riverpod
supabase_flutter
hive_flutter or sqflite (offline queue)
connectivity_plus
freezed (code generation)
apis
Supabase PostgreSQL 15 (CRUD queries)
Supabase Realtime (claim status stream)
Supabase Auth (JWT passed automatically by client)
data models
claim_event
annual_summary
activity
performance requirements
listClaimsForUser must return cached results within 50ms while refreshing in the background
Offline queue drain must not block the UI thread — run it in a background isolate, or as async work that yields to the event loop between operations
Realtime reconnection after network loss must occur within 5 seconds of connectivity restoration
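The 50 ms cached-read budget suggests a stale-while-revalidate shape: serve the local cache immediately and refresh from Supabase without awaiting. A minimal sketch, assuming hypothetical `_cache` and `_fetchRemote` helpers:

```dart
import 'dart:async';

Future<List<ExpenseClaim>> listClaimsForUser(
  String userId, {
  ExpenseClaimStatus? statusFilter,
}) {
  // Start the network refresh once, up front.
  final refresh = _fetchRemote(userId, statusFilter).then((fresh) {
    _cache.write(userId, fresh); // hypothetical local cache
    return fresh;
  });

  final cached = _cache.read(userId, statusFilter);
  if (cached != null) {
    // Serve the cache immediately (well under the 50 ms budget);
    // let the refresh finish in the background and swallow its errors.
    unawaited(refresh.catchError((_) => cached));
    return Future.value(cached);
  }
  // Cold cache: the caller has to wait for the network.
  return refresh;
}
```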
security requirements
Repository never constructs raw SQL — use Supabase typed query builder only
JWT is automatically attached by supabase_flutter client — repository must not manually handle tokens
Offline queue stored in Hive must encrypt sensitive fields (amount, description) using hive_flutter AES encryption with key from flutter_secure_storage
Offline queue must be cleared on user logout to prevent cross-user data leakage on shared devices
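Opening the offline-queue box with AES encryption, keyed from flutter_secure_storage, might look like this sketch (box and key names are illustrative):

```dart
import 'dart:convert';

import 'package:flutter_secure_storage/flutter_secure_storage.dart';
import 'package:hive_flutter/hive_flutter.dart';

Future<Box<String>> openOfflineQueueBox() async {
  const storage = FlutterSecureStorage();
  // Reuse the AES key if one exists; otherwise generate and persist it.
  var encoded = await storage.read(key: 'offline_queue_aes_key');
  if (encoded == null) {
    final key = Hive.generateSecureKey(); // 256-bit key
    encoded = base64UrlEncode(key);
    await storage.write(key: 'offline_queue_aes_key', value: encoded);
  }
  return Hive.openBox<String>(
    'offline_queue', // pending operations serialized as JSON strings
    encryptionCipher: HiveAesCipher(base64Url.decode(encoded)),
  );
}
```

On logout, `box.clear()` (and optionally deleting the stored key) satisfies the cross-user leakage requirement above.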

Execution Context

Execution Tier
Tier 2 (518 tasks)

Can start after Tier 1 completes.

Implementation Notes

Use the Repository pattern with a clear interface (abstract class IExpenseRepository) to allow easy mocking in unit tests of the Riverpod-based state layer. For the offline queue, prefer Hive over SQLite to avoid schema migration complexity — the queue only needs a simple list of pending operations serialized as JSON. Use the connectivity_plus package's Stream to trigger sync. The Supabase Realtime channel for claim status should use the postgres_changes event type on the expense_claims table with filter: `id=eq.{claimId}`.
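The connectivity-triggered drain described above might be sketched like this, assuming hypothetical `_queue`, `_sendToSupabase`, and `_syncStatusController` members; the retry count and backoff match the acceptance criteria:

```dart
import 'dart:async';

import 'package:connectivity_plus/connectivity_plus.dart';

StreamSubscription<List<ConnectivityResult>>? _connectivitySub;

void startSyncListener() {
  // connectivity_plus ≥ 6 emits List<ConnectivityResult>; older versions
  // emit a single ConnectivityResult — adjust the listen type accordingly.
  _connectivitySub = Connectivity().onConnectivityChanged.listen((results) {
    if (!results.contains(ConnectivityResult.none)) {
      unawaited(_drainQueue());
    }
  });
}

Future<void> _drainQueue() async {
  while (_queue.isNotEmpty) {
    final op = _queue.first; // FIFO: oldest pending operation first
    var attempt = 0;
    while (true) {
      try {
        await _sendToSupabase(op); // hypothetical network write
        _queue.removeFirst();
        break;
      } catch (_) {
        attempt++;
        if (attempt >= 3) {
          _syncStatusController.add(Failed(op)); // stream event for the UI
          return; // item stays queued for the next drain attempt
        }
        // Exponential backoff: 1 s, then 2 s, before retries 2 and 3.
        await Future<void>.delayed(Duration(seconds: 1 << (attempt - 1)));
      }
    }
  }
}
```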

Be careful with Realtime — subscribe only to claims the current user owns or is authorised to observe (leverage RLS on the Realtime channel). Implement a SyncStatus sealed class (Pending / Syncing / Synced / Failed) and expose it as a stream so the UI can show offline indicators. Use Riverpod's keepAlive on the repository provider to preserve Realtime subscriptions across widget rebuilds.
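The SyncStatus sealed class and the keepAlive provider suggested above could look like this sketch (Dart 3 sealed-class syntax; `SupabaseExpenseRepository` is a hypothetical concrete implementation):

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

/// Sealed hierarchy so UI switch statements are exhaustively checked.
sealed class SyncStatus {
  const SyncStatus();
}

class Pending extends SyncStatus { const Pending(); }
class Syncing extends SyncStatus { const Syncing(); }
class Synced extends SyncStatus { const Synced(); }

class Failed extends SyncStatus {
  const Failed(this.failedOperation);
  final Object failedOperation;
}

/// keepAlive preserves the Realtime subscription across widget rebuilds;
/// onDispose still cancels it when the provider is truly torn down.
final expenseRepositoryProvider =
    Provider.autoDispose<IExpenseRepository>((ref) {
  ref.keepAlive();
  final repo = SupabaseExpenseRepository(Supabase.instance.client);
  ref.onDispose(repo.dispose);
  return repo;
});
```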

Testing Requirements

Unit tests (flutter_test + mocktail): mock the Supabase client, test each repository method for happy path and error cases including RLS denial (403), network timeout, and FK violation. Integration tests: spin up local Supabase instance via Docker, run actual CRUD operations against the expense_claims schema from task-003, verify RLS enforcement. Offline tests: simulate network unavailability using a mock connectivity stream, create a claim, verify it lands in the offline queue, restore connectivity, verify it syncs. Realtime tests: mock the Supabase channel and verify the stream emits correct status values.
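A unit test for the RLS-denial error case might be shaped like this sketch. Mocking the interface keeps state-layer tests independent of Supabase; repository-level tests would instead mock the Supabase client or query builder. `AccessDeniedException` is a hypothetical typed domain exception.

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

class MockExpenseRepository extends Mock implements IExpenseRepository {}

void main() {
  test('listClaimsForUser surfaces RLS denial as AccessDeniedException', () {
    final repo = MockExpenseRepository();
    when(() => repo.listClaimsForUser('user-1'))
        .thenThrow(AccessDeniedException('RLS denied (403)'));

    expect(
      () => repo.listClaimsForUser('user-1'),
      throwsA(isA<AccessDeniedException>()),
    );
  });
}
```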

Minimum 80% line coverage on repository class.

Component
Expense Repository
Domain: data · Complexity: medium
Epic Risks (3)
Security — high impact, medium probability

Row-level security policies for expense claims must correctly scope data to organisation, role (peer mentor sees own claims only, coordinator sees org-wide queue), and claim status. Incorrect RLS can expose claims cross-organisation or prevent coordinators from accessing the attestation queue.

Mitigation & Contingency

Mitigation: Define RLS policies in code-reviewed migration files. Write integration tests that attempt cross-org reads with different JWT roles and assert access denial. Review with a second engineer before merging migrations.

Contingency: If RLS is misconfigured post-deployment, disable the affected policy temporarily and apply a hotfix migration within the same release window. No claim data is exposed publicly due to Supabase project-level auth requirement.

Technical — medium impact, medium probability

The auto-approval Edge Function is triggered server-side on expense insert. Cold-start latency or Edge Function failures can block the submission response and degrade UX, especially on mobile networks.

Mitigation & Contingency

Mitigation: Implement the auto-approval Edge Function client with a timeout and graceful fallback: if no result is received within 5 seconds, treat the claim as 'pending' and poll for the status update via Supabase Realtime. Keep the Edge Function warm with a periodic ping.
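The timeout-and-fallback client call described in the mitigation can be sketched with Dart's built-in `Future.timeout`; `invokeAutoApproval` and the `pending` enum value are assumptions:

```dart
/// Treat a slow auto-approval Edge Function as 'pending' after 5 s and let
/// the Realtime status stream deliver the eventual outcome.
Future<ExpenseClaimStatus> submitWithFallback(ExpenseClaim claim) {
  return invokeAutoApproval(claim).timeout(
    const Duration(seconds: 5),
    // On timeout, fall back to 'pending'; the claimStatusStream
    // subscription will emit the real status when it lands.
    onTimeout: () => ExpenseClaimStatus.pending,
  );
}
```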

Contingency: If Edge Function reliability is unacceptable, move auto-approval evaluation to a database trigger or Postgres function as an interim measure, accepting that threshold configuration changes require a migration rather than a settings update.

Scope — medium impact, low probability

The expense type catalogue and threshold configuration are cached locally for offline use. If an organisation updates their catalogue exclusion rules or thresholds while a peer mentor is offline, the local cache may allow submissions that violate the new policy.

Mitigation & Contingency

Mitigation: Cache entries include a TTL (24 hours). On connectivity restore, refresh cache before allowing new submissions. Server-side validation in the Edge Function and save functions provides a second enforcement layer.
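The TTL check on connectivity restore can be as simple as the following sketch; `_catalogueCache` and `refreshCatalogue` are hypothetical names:

```dart
const catalogueTtl = Duration(hours: 24);

/// Called on connectivity restore, before new submissions are allowed.
Future<void> ensureFreshCatalogue() async {
  final age = DateTime.now().difference(_catalogueCache.fetchedAt);
  if (age > catalogueTtl) {
    await refreshCatalogue(); // hold submissions until the refresh completes
  }
}
```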

Contingency: If a stale-cache submission passes client validation but fails server validation, surface a clear error message explaining that the expense type rules have been updated and prompt the user to review their selection with the refreshed catalogue.