Priority: critical · Complexity: medium · Domain: backend · Status: pending · Assignee: backend specialist · Execution tier: Tier 1

Acceptance Criteria

ApprovedClaim Dart model contains: claimId, orgId, claimantUserId, claimantName, amount (double), expenseType (enum), receiptUrl (nullable), activityId, submittedAt, approvedAt, approvedByUserId — all correctly typed
fetchApprovedClaims(orgId, dateRangeStart, dateRangeEnd, {claimType, page = 0, pageSize = 100}) returns List<ApprovedClaim>
Query filters status = 'approved' AND exported_at IS NULL AND submitted_at >= dateRangeStart AND submitted_at <= dateRangeEnd
Optional claimType filter, when provided, adds AND expense_type = claimType to the query
Pagination uses Supabase range(from, to) and returns an empty list (not an error) when no results exist
fetchTotalApprovedCount(orgId, dateRangeStart, dateRangeEnd, {claimType}) returns int for pagination UI without fetching all records
Service handles the case where claimant user profile data must be joined — either via a Supabase view or a two-step fetch — and the approach is documented
All returned ApprovedClaim objects have non-null claimantName even if the user profile join fails (fallback to claimantUserId string)
Service is injectable via Riverpod and depends on ExportRunRepository so that the exported_at exclusion filter stays consistent with recorded export runs
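The acceptance criteria above can be sketched as a Dart model. This is a minimal, hedged sketch: the snake_case JSON keys are assumed to mirror the expense_claims column names used elsewhere in this task, and the ExpenseType values are hypothetical placeholders to be replaced with the real categories.

```dart
// Hedged sketch of the ApprovedClaim model. JSON keys assume snake_case
// columns matching this task's field list; ExpenseType values are placeholders.
enum ExpenseType { travel, accommodation, other }

class ApprovedClaim {
  final String claimId;
  final String orgId;
  final String claimantUserId;
  final String claimantName; // never null: falls back to claimantUserId
  final double amount;
  final ExpenseType expenseType;
  final String? receiptUrl; // nullable per the acceptance criteria
  final String activityId;
  final DateTime submittedAt;
  final DateTime approvedAt;
  final String approvedByUserId;

  const ApprovedClaim({
    required this.claimId,
    required this.orgId,
    required this.claimantUserId,
    required this.claimantName,
    required this.amount,
    required this.expenseType,
    this.receiptUrl,
    required this.activityId,
    required this.submittedAt,
    required this.approvedAt,
    required this.approvedByUserId,
  });

  factory ApprovedClaim.fromJson(Map<String, dynamic> json) {
    final rawName = json['claimant_name'] as String?;
    return ApprovedClaim(
      claimId: json['claim_id'] as String,
      orgId: json['org_id'] as String,
      claimantUserId: json['claimant_user_id'] as String,
      // Acceptance criterion: non-null name even if the user-profile
      // join fails; fall back to the claimantUserId string.
      claimantName: (rawName == null || rawName.isEmpty)
          ? json['claimant_user_id'] as String
          : rawName,
      amount: (json['amount'] as num).toDouble(),
      expenseType: ExpenseType.values.asNameMap()[json['expense_type']] ??
          ExpenseType.other,
      receiptUrl: json['receipt_url'] as String?,
      activityId: json['activity_id'] as String,
      submittedAt: DateTime.parse(json['submitted_at'] as String),
      approvedAt: DateTime.parse(json['approved_at'] as String),
      approvedByUserId: json['approved_by_user_id'] as String,
    );
  }
}
```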

Technical Requirements

frameworks
Flutter
Riverpod
supabase_flutter
apis
Supabase PostgREST with select() joins or RPC for complex queries
data models
expense_claims
ApprovedClaim
users (for claimant name join)
performance requirements
Query must use server-side filtering — never fetch all claims and filter in Dart
For large orgs with 500+ claims per period, page size of 100 keeps response under 500ms
Consider a Supabase database view (approved_claims_view) joining expense_claims + users to avoid multi-round-trip joins in Flutter
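A possible shape for that view, assuming the users table exposes id and full_name columns (both names must be verified against the real schema):

```sql
-- Hedged sketch of approved_claims_view; column names are assumptions.
create view approved_claims_view as
select
  c.id  as claim_id,
  c.org_id,
  c.claimant_user_id,
  coalesce(u.full_name, c.claimant_user_id::text) as claimant_name,
  c.amount,
  c.expense_type,
  c.receipt_url,
  c.activity_id,
  c.submitted_at,
  c.approved_at,
  c.approved_by_user_id,
  c.exported_at
from expense_claims c
left join users u on u.id = c.claimant_user_id
where c.status = 'approved';
```

Note that a plain Postgres view executes with the view owner's privileges, so it can bypass RLS on the base tables; on Postgres 15+ the view can be created with `security_invoker = true` so the caller's RLS still applies. Decide which behavior is intended before exposing it.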
security requirements
RLS on expense_claims must restrict rows to the authenticated user's org_id — do not rely on an explicit org_id filter in Dart as a security measure (it is a performance hint only)
receipt_url must only be returned for authenticated coordinators — verify RLS policy covers this column
Do not return PII (full claimant name) to peer mentor role — if role check is needed, do it server-side via RLS or a parameterized RPC
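An org-scoping policy along these lines would satisfy the first requirement; the `auth.uid()` lookup against a users table is an assumed pattern and must be adapted to wherever this schema actually stores org membership:

```sql
-- Hedged sketch of an org-scoped RLS policy; the users-table lookup
-- is an assumption about where org membership lives.
alter table expense_claims enable row level security;

create policy claims_select_own_org on expense_claims
  for select to authenticated
  using (org_id = (select org_id from users where id = auth.uid()));
```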

Execution Context

Execution Tier

Tier 1 (540 tasks); can start after Tier 0 completes.

Implementation Notes

Place in lib/features/accounting/data/services/approved_claims_query_service.dart. The key challenge is joining claimant name: if the users table is not directly queryable by coordinators (common RLS pattern), use a Supabase RPC function (SECURITY DEFINER) that performs the join server-side and returns a typed result set. Alternatively, define a database view with appropriate RLS. Avoid N+1 fetches (fetching each claimant profile individually).
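If the RPC route is taken, it could look roughly like this. Everything here is a hedged sketch: function name, parameters, and the returned column subset are assumptions, and the real function should return the full ApprovedClaim column set.

```sql
-- Hedged sketch of a SECURITY DEFINER RPC doing the join server-side.
-- Names are assumptions; the returned columns are a subset for brevity.
create or replace function get_approved_claims(
  p_org_id uuid,
  p_start  timestamptz,
  p_end    timestamptz
)
returns table (claim_id uuid, claimant_name text, amount numeric, submitted_at timestamptz)
language sql
security definer
set search_path = public  -- always pin search_path on SECURITY DEFINER
as $$
  select c.id,
         coalesce(u.full_name, c.claimant_user_id::text),
         c.amount,
         c.submitted_at
  from expense_claims c
  left join users u on u.id = c.claimant_user_id
  where c.org_id = p_org_id
    and c.status = 'approved'
    and c.exported_at is null
    and c.submitted_at between p_start and p_end;
$$;
```

Because SECURITY DEFINER bypasses RLS, the function body must enforce its own org check (here via p_org_id, which should itself be derived from the caller, not trusted blindly).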

The exported_at IS NULL filter is critical for idempotency — claims exported in a previous run must never appear in a new run. Confirm with the team what 'approved' status value looks like in the existing expense_claims table before implementing.
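The filter chain described above might be sketched as follows, assuming supabase_flutter v2 (where `is_` became `isFilter`); method names should be verified against the pinned package version:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Inclusive from/to indices for Supabase range(), which is inclusive
/// on both ends (page 0 with pageSize 100 -> rows 0..99).
(int, int) pageRange(int page, int pageSize) =>
    (page * pageSize, (page + 1) * pageSize - 1);

/// Hedged sketch of the core query chain; not the final service code.
Future<List<Map<String, dynamic>>> fetchApprovedClaimRows(
  SupabaseClient supabase, {
  required String orgId,
  required DateTime start,
  required DateTime end,
  String? claimType,
  int page = 0,
  int pageSize = 100,
}) async {
  var query = supabase
      .from('expense_claims')
      .select()
      .eq('org_id', orgId) // performance hint only; RLS is the security boundary
      .eq('status', 'approved') // confirm the real status value first
      .isFilter('exported_at', null) // idempotency: exclude exported claims
      .gte('submitted_at', start.toIso8601String())
      .lte('submitted_at', end.toIso8601String());
  if (claimType != null) {
    query = query.eq('expense_type', claimType);
  }
  final (from, to) = pageRange(page, pageSize);
  return query.range(from, to);
}
```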

Testing Requirements

Unit tests with mocked Supabase client:
Test fetchApprovedClaims with no claimType filter — verify the correct query chain (eq status, is exported_at null, gte/lte date).
Test with claimType filter — verify the additional eq clause is appended.
Test the empty result set — verify an empty list is returned without an exception.
Test fetchTotalApprovedCount — verify the count() call is made.

Test claimantName fallback — mock a missing user profile and verify fallback string is used. Integration test (optional, local Supabase): seed 10 claims (5 approved+unexported, 2 approved+exported, 3 pending) and verify fetchApprovedClaims returns exactly 5.
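The claimantName-fallback test reduces to pure logic and needs no mock. The resolveClaimantName helper below is hypothetical; in the real service the fallback lives wherever rows are mapped to ApprovedClaim.

```dart
// Hypothetical helper isolating the fallback rule for unit testing.
String resolveClaimantName(String? profileName, String claimantUserId) =>
    (profileName == null || profileName.isEmpty)
        ? claimantUserId
        : profileName;

void main() {
  // Profile present: use the profile name.
  assert(resolveClaimantName('Kari Nordmann', 'user-42') == 'Kari Nordmann');
  // Profile join failed: fall back to the claimantUserId string.
  assert(resolveClaimantName(null, 'user-42') == 'user-42');
  assert(resolveClaimantName('', 'user-42') == 'user-42');
}
```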

Component
Approved Claims Query Service
Layer: data · Complexity: medium
Epic Risks (3)
Risk 1 (technical): high impact, medium probability

Adding exported_at and export_run_id columns to expense_claims requires a live migration on a table shared with the approval workflow. A poorly timed migration could lock the table and block claim submissions or approvals.

Mitigation & Contingency

Mitigation: Use non-blocking ADD COLUMN with a DEFAULT of NULL (no backfill needed) executed during a low-traffic window. Test migration rollback on a staging replica before production deployment.

Contingency: If migration causes table lock contention, roll back and reschedule for a maintenance window. Use a feature flag to gate the export UI until the migration completes successfully.

Risk 2 (scope): medium impact, high probability

Chart of accounts mapping configurations for Xledger and Dynamics may not be fully specified by stakeholders at development time, leaving the mapper with incomplete data and causing validation failures for unmapped expense categories.

Mitigation & Contingency

Mitigation: Implement the mapper to return a structured validation error (not a crash) for any unmapped field, and surface these errors clearly in the export confirmation dialog. Request full mapping tables from Blindeforbundet and HLF stakeholders as a pre-condition for this epic.

Contingency: If mappings arrive incomplete, ship the mapper with the available subset and mark unmapped categories as excluded (skipped with reason). Coordinators see which categories are skipped and can manually submit those records.

Risk 3 (dependency): medium impact, medium probability

Supabase Vault configuration for storing per-org accounting credentials may require infra permissions or environment secrets not yet provisioned in staging or production, blocking development and testing of credential retrieval.

Mitigation & Contingency

Mitigation: Provision Vault secrets and environment configuration in staging as the first task of this epic. Document the exact secret naming convention and rotation procedure before implementation begins.

Contingency: If Vault is unavailable, use environment variables scoped to the Edge Function as a temporary fallback for development. Block production deployment until Vault-based storage is confirmed operational.