Priority: critical · Complexity: low · Area: backend · Status: pending · Assignee: backend specialist · Tier 1

Acceptance Criteria

ExpenseTypeCatalogueRepository exposes getExpenseTypes() → Stream<List<ExpenseType>> that emits the current catalogue on subscribe and re-emits on any upstream change
Each ExpenseType model includes: id, name, description, isTravelExpenseEligible, requiresReceiptAboveNok (nullable int), and exclusiveGroupIds (List<String>)
getMutualExclusionGroups() method returns a Map<String, List<String>> where the key is the group ID and the value is the list of expense type IDs in that group, allowing O(1) conflict checking
isConflicting(List<String> selectedTypeIds) method returns true if any two selected type IDs share an exclusive group, false otherwise
Catalogue data is cached locally: after first successful fetch, the data is available offline without any network call on subsequent app launches
Cache is invalidated and re-fetched when the app comes to foreground after more than 24 hours since the last fetch
If Supabase is unreachable on first launch and no cache exists, getExpenseTypes() emits an error state that the caller can surface as 'Catalogue unavailable — check connection'
Switching organization (user logs into a different org) clears the previous org's cached catalogue and fetches the new org's data
Repository is injectable via Riverpod provider (expenseTypeCatalogueRepositoryProvider) so it can be overridden in tests
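A minimal sketch of the model and repository interface implied by these criteria (field and method names are taken from the criteria above; everything else, such as the commented-out provider wiring, is illustrative):

```dart
import 'dart:async';

/// Expense type model as specified in the acceptance criteria.
class ExpenseType {
  final String id;
  final String name;
  final String description;
  final bool isTravelExpenseEligible;
  final int? requiresReceiptAboveNok; // nullable, per the criteria
  final List<String> exclusiveGroupIds;

  const ExpenseType({
    required this.id,
    required this.name,
    required this.description,
    required this.isTravelExpenseEligible,
    this.requiresReceiptAboveNok,
    required this.exclusiveGroupIds,
  });
}

abstract class ExpenseTypeCatalogueRepository {
  /// Emits the current catalogue on subscribe; re-emits on upstream change.
  Stream<List<ExpenseType>> getExpenseTypes();

  /// Group ID -> expense type IDs in that group. Synchronous, computed
  /// from already-loaded data.
  Map<String, List<String>> getMutualExclusionGroups();

  /// True if any two selected type IDs share an exclusive group.
  bool isConflicting(List<String> selectedTypeIds);
}

// With flutter_riverpod, the overridable provider could look like:
// final expenseTypeCatalogueRepositoryProvider =
//     Provider<ExpenseTypeCatalogueRepository>(
//         (ref) => throw UnimplementedError('override in app setup / tests'));
```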

Technical Requirements

Frameworks
Flutter
Riverpod
Dart
APIs
Supabase PostgreSQL 15 REST/PostgREST (SELECT on expense_types and expense_type_exclusive_group_members)
Data Models
activity_type
Performance Requirements
First load from Supabase completes in under 1.5 seconds on 4G — the full catalogue for one org is typically under 20 rows
Offline cache read completes in under 100ms — use shared_preferences JSON string for simplicity (Hive adds unnecessary complexity for this data size)
getMutualExclusionGroups() is synchronous and computed from already-loaded data — no async call
Security Requirements
Supabase RLS enforces org isolation server-side — repository does not add client-side org filtering as a second check (trust the RLS)
Cached catalogue stored as JSON in shared_preferences (non-sensitive configuration data — no PII in expense type names or group rules)
Repository must not expose the raw Supabase client — callers use only the typed Dart interface
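A sketch of the shared_preferences cache read/write described above. The `catalogue_fetched_at` key is named in the implementation notes; the catalogue key name is an assumption:

```dart
import 'dart:convert';
import 'package:shared_preferences/shared_preferences.dart';

const _catalogueKey = 'expense_type_catalogue'; // key name is an assumption
const _fetchedAtKey = 'catalogue_fetched_at'; // named in the implementation notes

/// Returns cached rows, or null when there is no cache or it is older
/// than 24 hours (forcing a re-fetch). shared_preferences reads are
/// synchronous once the instance is loaded, keeping this well under 100 ms.
List<Map<String, dynamic>>? readCachedCatalogue(SharedPreferences prefs) {
  final raw = prefs.getString(_catalogueKey);
  final fetchedAtMs = prefs.getInt(_fetchedAtKey);
  if (raw == null || fetchedAtMs == null) return null;
  final age = DateTime.now()
      .difference(DateTime.fromMillisecondsSinceEpoch(fetchedAtMs));
  if (age > const Duration(hours: 24)) return null; // stale cache
  return (jsonDecode(raw) as List).cast<Map<String, dynamic>>();
}

Future<void> writeCachedCatalogue(
    SharedPreferences prefs, List<Map<String, dynamic>> rows) async {
  await prefs.setString(_catalogueKey, jsonEncode(rows));
  await prefs.setInt(_fetchedAtKey, DateTime.now().millisecondsSinceEpoch);
}
```

Clearing the cache on organization switch amounts to removing both keys before the new org's first fetch.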

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Fetch expense types and exclusive group members in a single Supabase query using a join: supabase.from('expense_types').select('*, expense_type_exclusive_group_members(exclusive_group_id)'). This returns each expense type with a nested array of its group memberships — map this to exclusiveGroupIds in the Dart model. For the reactive stream, use a StreamController<List<ExpenseType>> in the repository; on init, load from cache (emit immediately), then fetch from Supabase (emit updated data), then set a timer for the 24-hour refresh. Do not use Supabase Realtime for this data — expense type catalogues change very infrequently, and the added complexity of a Realtime subscription is not justified.
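The fetch-and-map step above could look like the following. Table names and the nested-select syntax come from the notes; the snake_case column names are assumptions about the schema, and `ExpenseType` is the model from the acceptance criteria:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Single joined query: each row carries a nested array of group
/// memberships, mapped here onto exclusiveGroupIds.
Future<List<ExpenseType>> fetchCatalogue(SupabaseClient supabase) async {
  final rows = await supabase
      .from('expense_types')
      .select('*, expense_type_exclusive_group_members(exclusive_group_id)');
  return rows.map((row) {
    final memberships =
        (row['expense_type_exclusive_group_members'] as List? ?? const []);
    return ExpenseType(
      id: row['id'] as String,
      name: row['name'] as String,
      description: row['description'] as String? ?? '',
      isTravelExpenseEligible:
          row['is_travel_expense_eligible'] as bool? ?? false,
      requiresReceiptAboveNok: row['requires_receipt_above_nok'] as int?,
      exclusiveGroupIds: memberships
          .map((m) => m['exclusive_group_id'] as String)
          .toList(),
    );
  }).toList();
}
```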

Use shared_preferences with a JSON-encoded list for the cache and store a 'catalogue_fetched_at' timestamp key alongside it. The isConflicting helper should be a pure function — extract it to a standalone function in a separate file so it can be tested without the repository.
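A sketch of the standalone pure function. Note that the repository's isConflicting(selectedTypeIds) takes one argument; passing the groups map explicitly here is an assumption that keeps the helper testable without the repository, as the note intends:

```dart
/// Pure conflict check: true if any two of the selected expense type IDs
/// belong to the same exclusive group. `groups` maps group ID -> member
/// expense type IDs, as returned by getMutualExclusionGroups().
bool isConflicting(
    List<String> selectedTypeIds, Map<String, List<String>> groups) {
  if (selectedTypeIds.length < 2) return false; // single item never conflicts
  final selected = selectedTypeIds.toSet();
  for (final members in groups.values) {
    if (members.where(selected.contains).length >= 2) return true;
  }
  return false;
}
```

The repository method can then delegate: isConflicting(ids, getMutualExclusionGroups()).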

Testing Requirements

Unit tests with flutter_test: (1) getExpenseTypes() returns correctly mapped ExpenseType list when Supabase returns valid rows; (2) isConflicting(['km-godtgjoerelse-id', 'kollektiv-id']) returns true (shared exclusive group); (3) isConflicting(['km-godtgjoerelse-id', 'parkering-id']) returns false (different groups); (4) isConflicting with single item always returns false; (5) cache hit path: Supabase client never called when valid cache exists within 24 hours; (6) cache miss after 24 hours: Supabase called and cache refreshed; (7) Supabase throws on first launch with no cache: emits error state. Use a FakeSupabaseClient and a FakeSharedPreferences. Verify the Riverpod provider override works correctly in a ProviderContainer.
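Test cases (2)–(4) above could be sketched as follows, assuming the standalone isConflicting(selectedTypeIds, groups) helper from the implementation notes; the group names in the fixture are hypothetical:

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  // Hypothetical fixture mirroring the cases in the requirements.
  final groups = <String, List<String>>{
    'transport-mode': ['km-godtgjoerelse-id', 'kollektiv-id'],
    'parking': ['parkering-id'],
  };

  test('types sharing an exclusive group conflict', () {
    expect(
        isConflicting(['km-godtgjoerelse-id', 'kollektiv-id'], groups), isTrue);
  });

  test('types in different groups do not conflict', () {
    expect(
        isConflicting(['km-godtgjoerelse-id', 'parkering-id'], groups), isFalse);
  });

  test('a single selected type never conflicts', () {
    expect(isConflicting(['km-godtgjoerelse-id'], groups), isFalse);
  });
}
```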

Epic Risks (3)
high impact medium prob security

Row-level security policies for expense claims must correctly scope data to organisation, role (peer mentor sees own claims only, coordinator sees org-wide queue), and claim status. Incorrect RLS can expose claims cross-organisation or prevent coordinators from accessing the attestation queue.

Mitigation & Contingency

Mitigation: Define RLS policies in code-reviewed migration files. Write integration tests that attempt cross-org reads with different JWT roles and assert access denial. Review with a second engineer before merging migrations.

Contingency: If RLS is misconfigured post-deployment, disable the affected policy temporarily and apply a hotfix migration within the same release window. No claim data is exposed publicly due to Supabase project-level auth requirement.

medium impact medium prob technical

The auto-approval Edge Function is triggered server-side on expense insert. Cold-start latency or Edge Function failures can block the submission response and degrade UX, especially on mobile networks.

Mitigation & Contingency

Mitigation: Implement the auto-approval Edge Function client with a timeout and graceful fallback: if no result is received within 5 seconds, treat the claim as 'pending' and poll for the status update via Supabase Realtime. Keep the Edge Function warm with a periodic ping.

Contingency: If Edge Function reliability is unacceptable, move auto-approval evaluation to a database trigger or Postgres function as an interim measure, accepting that threshold configuration changes require a migration rather than a settings update.
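The 5-second timeout fallback from the mitigation could be sketched like this; the Edge Function name, payload shape, and response fields are assumptions:

```dart
import 'dart:async';
import 'package:supabase_flutter/supabase_flutter.dart';

/// Invoke the auto-approval Edge Function and wait briefly for a result.
/// On timeout, treat the claim as 'pending'; the caller then polls for
/// the status update via Supabase Realtime.
Future<String> resolveApprovalStatus(
    SupabaseClient supabase, Map<String, dynamic> claim) async {
  try {
    final response = await supabase.functions
        .invoke('auto-approval', body: claim) // function name is an assumption
        .timeout(const Duration(seconds: 5));
    return (response.data as Map<String, dynamic>)['status'] as String;
  } on TimeoutException {
    return 'pending'; // no result within 5 s: graceful fallback
  }
}
```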

medium impact low prob scope

The expense type catalogue and threshold configuration are cached locally for offline use. If an organisation updates their catalogue exclusion rules or thresholds while a peer mentor is offline, the local cache may allow submissions that violate the new policy.

Mitigation & Contingency

Mitigation: Cache entries include a TTL (24 hours). On connectivity restore, refresh cache before allowing new submissions. Server-side validation in the Edge Function and save functions provides a second enforcement layer.

Contingency: If a stale-cache submission passes client validation but fails server validation, surface a clear error message explaining that the expense type rules have been updated and prompt the user to review their selection with the refreshed catalogue.