Priority: critical · Complexity: medium · Category: database · Status: pending · Assignee: database specialist · Execution tier: Tier 1

Acceptance Criteria

Migration script creates expense_claims table with columns: id (uuid PK), organization_id (uuid FK → organizations, NOT NULL), claimant_id (uuid FK → users, NOT NULL), status (enum: draft/submitted/under_review/approved/rejected/auto_approved, NOT NULL DEFAULT 'draft'), total_amount (numeric(12,2) NOT NULL), currency (varchar(3) NOT NULL DEFAULT 'NOK'), auto_approved (boolean NOT NULL DEFAULT false), threshold_snapshot (jsonb), created_at (timestamptz NOT NULL DEFAULT now()), updated_at (timestamptz NOT NULL DEFAULT now())
Migration script creates expense_line_items table with columns: id (uuid PK), expense_claim_id (uuid FK → expense_claims CASCADE DELETE), expense_type_id (uuid FK → activity_types), amount (numeric(10,2) NOT NULL), description (text), receipt_url (text), distance_km (numeric(6,1)), created_at (timestamptz NOT NULL DEFAULT now())
Composite index on expense_claims(organization_id, status) for coordinator list queries
Index on expense_claims(claimant_id, created_at DESC) for peer mentor history queries
Index on expense_claims(organization_id, created_at DESC) for period-based reporting queries
RLS policy: claimants can SELECT all of their own claims where claimant_id = auth.uid(); INSERT and UPDATE are additionally restricted to status IN ('draft', 'submitted') so that claim history remains readable after approval
RLS policy: coordinators can SELECT all claims within their organization_id and UPDATE status field only
RLS policy: service role bypasses RLS for edge function auto-approval writes
All FK constraints use ON DELETE RESTRICT unless explicitly specified as CASCADE
Migration is idempotent (safe to re-run) using IF NOT EXISTS and DO $$ blocks
Enum type for claim status defined as a Postgres TYPE, not a CHECK constraint, to allow future extension
threshold_snapshot jsonb column stores a copy of the org thresholds at submission time, preventing retroactive threshold changes from affecting historical claims
updated_at column has a trigger that auto-updates its value on any row modification
Migration includes rollback script that cleanly drops all created objects in reverse order
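The criteria above can be sketched as a single migration file. This is a minimal sketch, not the final script: column lists follow the criteria, but object names (indexes, trigger, function) and the NOT NULL on expense_line_items.expense_claim_id are assumptions.

```sql
-- Sketch of supabase/migrations/<timestamp>_expense_claims_schema.sql

-- Status enum as a Postgres TYPE so it can later be extended
-- with ALTER TYPE ... ADD VALUE; idempotent via exception handler
do $$ begin
  create type claim_status as enum
    ('draft','submitted','under_review','approved','rejected','auto_approved');
exception when duplicate_object then null;
end $$;

create table if not exists expense_claims (
  id                 uuid primary key default gen_random_uuid(),
  organization_id    uuid not null references organizations(id) on delete restrict,
  claimant_id        uuid not null references users(id) on delete restrict,
  status             claim_status not null default 'draft',
  total_amount       numeric(12,2) not null,
  currency           varchar(3) not null default 'NOK',
  auto_approved      boolean not null default false,
  threshold_snapshot jsonb,
  created_at         timestamptz not null default now(),
  updated_at         timestamptz not null default now()
);

create table if not exists expense_line_items (
  id               uuid primary key default gen_random_uuid(),
  expense_claim_id uuid not null references expense_claims(id) on delete cascade,
  expense_type_id  uuid references activity_types(id) on delete restrict,
  amount           numeric(10,2) not null,
  description      text,
  receipt_url      text,
  distance_km      numeric(6,1),
  created_at       timestamptz not null default now()
);

-- The three query-pattern indexes from the criteria
create index if not exists idx_claims_org_status
  on expense_claims (organization_id, status);
create index if not exists idx_claims_claimant_created
  on expense_claims (claimant_id, created_at desc);
create index if not exists idx_claims_org_created
  on expense_claims (organization_id, created_at desc);

-- updated_at maintenance trigger
create or replace function set_updated_at() returns trigger
language plpgsql as $$
begin
  new.updated_at := now();
  return new;
end $$;

drop trigger if exists trg_expense_claims_updated_at on expense_claims;
create trigger trg_expense_claims_updated_at
  before update on expense_claims
  for each row execute function set_updated_at();
```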

Technical Requirements

Frameworks
Supabase CLI migrations
PostgreSQL 15
APIs
Supabase PostgreSQL 15
Supabase Auth (auth.uid() in RLS)
Data models
activity
assignment
claim_event
annual_summary
Performance requirements
Index on (organization_id, status) must keep coordinator list query under 100ms for orgs with 10,000 claims
Claimant history query (last 50 claims) must complete under 50ms
Migration must complete in under 30 seconds on production Supabase instance
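Index usage for the coordinator list query can be verified with EXPLAIN ANALYZE along these lines (a sketch; the UUID is a dummy value and the expected plan shape is a target, not guaranteed output):

```sql
explain analyze
select id, claimant_id, total_amount, status, created_at
from expense_claims
where organization_id = '00000000-0000-0000-0000-000000000001'::uuid
  and status = 'submitted'
order by created_at desc
limit 50;
-- Expectation: an Index Scan (or Bitmap Index Scan) on the
-- (organization_id, status) composite index, not a Seq Scan
```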
Security requirements
RLS enforced on all tables — no public access policy
Service role key never exposed to mobile client — only edge functions use service role
organization_id must be verified against JWT claims in RLS policies, not trusted from client input
threshold_snapshot stores copy at submission time to prevent retroactive manipulation
All PII fields (claimant personal data) subject to Supabase column-level encryption if available
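The claimant/coordinator scoping above can be sketched as RLS policies. A minimal sketch: policy names and the user_roles table are assumptions; the point is that the coordinator's organization is derived from a server-side role table, never trusted from client input.

```sql
alter table expense_claims enable row level security;
alter table expense_line_items enable row level security;
-- With RLS enabled and no anon policy defined, public access is
-- denied by default; the service role bypasses RLS entirely.

-- Claimants: read all of their own claims (history stays visible)
create policy claimant_select on expense_claims
  for select using (claimant_id = auth.uid());

-- Claimants: create/modify only while the claim is still editable
create policy claimant_insert on expense_claims
  for insert with check (
    claimant_id = auth.uid()
    and status in ('draft','submitted')
  );
create policy claimant_update on expense_claims
  for update using (
    claimant_id = auth.uid()
    and status in ('draft','submitted')
  );

-- Coordinators: org-wide read, scoped via a role table (assumed
-- here as user_roles) rather than a hardcoded JWT role claim
create policy coordinator_select on expense_claims
  for select using (
    exists (
      select 1 from user_roles r
      where r.user_id = auth.uid()
        and r.organization_id = expense_claims.organization_id
        and r.role = 'coordinator'
    )
  );
```

Restricting the coordinator UPDATE to the status field only is not expressible in a bare policy; a common approach is a trigger or a SECURITY DEFINER function that rejects changes to other columns.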

Execution Context

Execution Tier

Tier 1 (540 tasks in this tier). Can start after Tier 0 completes.

Implementation Notes

Use Supabase CLI `supabase migration new expense_claims_schema` to create the migration file. Store threshold_snapshot as jsonb rather than normalized columns — this avoids JOIN complexity when rendering historical claims and protects against org config changes invalidating past records. For the RLS coordinator policy, join against an org_members or user_roles table to verify coordinator role rather than using a hardcoded role claim in JWT — this allows role changes to take effect without token re-issue. Define the claim status enum as a Postgres TYPE so Dart code generation (via supabase_flutter) can map it to a sealed class.

Ensure the migration file is committed to version control under supabase/migrations/. Do not use Supabase Studio UI to create tables — all schema must be in migration files for reproducibility.
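The rollback requirement can be sketched as a down script kept alongside the migration, dropping objects in reverse dependency order. Object names here are illustrative and must match whatever the up migration actually creates:

```sql
-- Down migration sketch: reverse order of creation
drop trigger if exists trg_expense_claims_updated_at on expense_claims;
drop function if exists set_updated_at();
drop index if exists idx_claims_org_created;
drop index if exists idx_claims_claimant_created;
drop index if exists idx_claims_org_status;
drop table if exists expense_line_items;  -- child first (FK)
drop table if exists expense_claims;
drop type if exists claim_status;
```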

Testing Requirements

Write PostgreSQL unit tests using pgTAP or Supabase's built-in test runner. Test coverage must include:

1. RLS policy tests — verify claimant A cannot read claimant B's claims in the same org, that a coordinator can read all org claims, and that a coordinator cannot modify claim amounts.
2. FK constraint tests — verify that cascade delete on expense_claim deletion removes all line items.
3. Enum transition tests — verify invalid status transitions are blocked at the DB level if triggers enforce the state machine.
4. Index usage verification using EXPLAIN ANALYZE on the three defined query patterns.
5. Migration rollback test — apply the migration, roll back, and re-apply cleanly.

Run the migration against a local Supabase instance in CI before merging.
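The RLS isolation test can be sketched in pgTAP roughly as follows. The UUIDs are dummies and the JWT-claim simulation is an assumption: Supabase's auth.uid() reads the 'sub' claim from request.jwt.claims, but the exact mechanism can vary by Supabase version, so verify against the target instance.

```sql
begin;
select plan(2);

-- Simulate an authenticated request as claimant A
select set_config('role', 'authenticated', true);
select set_config('request.jwt.claims',
  '{"sub":"00000000-0000-0000-0000-0000000000a1"}', true);

-- (1) Cross-claimant isolation: claimant A must see no foreign rows
select is(
  (select count(*)::int from expense_claims
   where claimant_id <> '00000000-0000-0000-0000-0000000000a1'),
  0, 'claimant A sees no rows belonging to other claimants');

-- (2) Cascade integrity: no orphaned line items after claim deletion
-- (the delete itself runs as a privileged role in the real test)
select is(
  (select count(*)::int from expense_line_items li
   where not exists (select 1 from expense_claims c
                     where c.id = li.expense_claim_id)),
  0, 'no orphaned line items exist');

select * from finish();
rollback;
```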

Component
Expense Repository (data, medium complexity)

Epic Risks (3)

Risk 1: Security (high impact, medium probability)

Row-level security policies for expense claims must correctly scope data to organisation, role (peer mentor sees own claims only, coordinator sees org-wide queue), and claim status. Incorrect RLS can expose claims cross-organisation or prevent coordinators from accessing the attestation queue.

Mitigation & Contingency

Mitigation: Define RLS policies in code-reviewed migration files. Write integration tests that attempt cross-org reads with different JWT roles and assert access denial. Review with a second engineer before merging migrations.

Contingency: If RLS is misconfigured post-deployment, disable the affected policy temporarily and apply a hotfix migration within the same release window. No claim data is exposed publicly due to Supabase project-level auth requirement.

Risk 2: Technical (medium impact, medium probability)

The auto-approval Edge Function is triggered server-side on expense insert. Cold-start latency or Edge Function failures can block the submission response and degrade UX, especially on mobile networks.

Mitigation & Contingency

Mitigation: Call the auto-approval Edge Function from the client with a timeout and graceful fallback: if no result is received within 5 seconds, treat the claim as 'pending' and poll for the status update via Supabase Realtime. Keep the Edge Function warm with a periodic ping.

Contingency: If Edge Function reliability is unacceptable, move auto-approval evaluation to a database trigger or Postgres function as an interim measure, accepting that threshold configuration changes require a migration rather than a settings update.
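The database-trigger interim measure could be sketched as below. The NOK 500 cutoff is a purely illustrative value, and the hardcoding is exactly the accepted trade-off: changing it requires a migration.

```sql
-- Interim fallback sketch: auto-approve at insert time inside Postgres,
-- removing the Edge Function from the submission critical path
create or replace function auto_approve_claim() returns trigger
language plpgsql as $$
begin
  -- 500.00 NOK is an assumed example threshold, hardcoded by design
  if new.status = 'submitted' and new.total_amount <= 500.00 then
    new.status := 'auto_approved';
    new.auto_approved := true;
  end if;
  return new;
end $$;

create trigger trg_auto_approve
  before insert or update of status on expense_claims
  for each row execute function auto_approve_claim();
```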

Risk 3: Scope (medium impact, low probability)

The expense type catalogue and threshold configuration are cached locally for offline use. If an organisation updates their catalogue exclusion rules or thresholds while a peer mentor is offline, the local cache may allow submissions that violate the new policy.

Mitigation & Contingency

Mitigation: Cache entries include a TTL (24 hours). On connectivity restore, refresh cache before allowing new submissions. Server-side validation in the Edge Function and save functions provides a second enforcement layer.

Contingency: If a stale-cache submission passes client validation but fails server validation, surface a clear error message explaining that the expense type rules have been updated and prompt the user to review their selection with the refreshed catalogue.