Medium priority · High complexity · Backend · Pending · Backend specialist · Tier 2

Acceptance Criteria

Exponential backoff retry logic implemented with configurable parameters: initial_delay_ms (default 500), backoff_multiplier (default 2), max_attempts (default 3), max_delay_ms (default 10000)
Retry is triggered on HTTP 429, 500, 502, 503, 504 responses only — not on 4xx client errors (400, 401, 403, 422)
Each retry attempt is logged to export_runs with attempt_number, http_status, error_message, and timestamp
export_runs table schema: id (uuid), organisation_id (uuid), integration_type (enum), claim_ids (jsonb array), attempt_number (int), http_status (int), error_message (text, nullable), exported_at (timestamptz), triggered_by_user_id (uuid)
Double-export guard: before submitting a batch, query export_runs for any successful export (http_status 2xx) containing any of the same claim_ids for the same organisation — if found, skip those claims and return them as already_exported in the result
Double-export check uses a Postgres function or index to efficiently query jsonb claim_ids array (GIN index on claim_ids column)
getExportStatus(organisationId, limit) Edge Function returns the last N export run records for the organisation, including per-claim success/failure details
Export status records are accessible to coordinator role users via RLS policy on export_runs
A failed final attempt (all retries exhausted) records a terminal failure entry in export_runs and returns a typed ExportFailureResult with all failed claim IDs
Retry delay does not block the Edge Function beyond its timeout — if retries would exceed the function timeout, fail fast and record a timeout error
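The retry-trigger and delay rules above can be sketched as two small pure helpers. The names (`isRetryable`, `backoffDelayMs`) are hypothetical; the defaults mirror the configurable parameters listed in the acceptance criteria:

```typescript
// Retry only on throttling and server errors, never on 4xx client errors.
const RETRYABLE_STATUSES = new Set([429, 500, 502, 503, 504]);

function isRetryable(status: number): boolean {
  return RETRYABLE_STATUSES.has(status);
}

interface BackoffOptions {
  initialDelayMs?: number;    // default 500
  backoffMultiplier?: number; // default 2
  maxDelayMs?: number;        // default 10000
}

// Delay before retry `attempt` (1-based): initial * multiplier^(attempt-1),
// capped at maxDelayMs.
function backoffDelayMs(attempt: number, opts: BackoffOptions = {}): number {
  const { initialDelayMs = 500, backoffMultiplier = 2, maxDelayMs = 10_000 } = opts;
  return Math.min(initialDelayMs * backoffMultiplier ** (attempt - 1), maxDelayMs);
}
```

With the defaults this yields the schedule 500ms, 1000ms, 2000ms, ... capped at 10s, which also gives the fail-fast check a cheap way to predict whether the next delay would exceed the function timeout.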

Technical Requirements

Frameworks
Supabase Edge Functions (Deno)
Supabase PostgreSQL 15
APIs
Xledger REST API
Microsoft Dynamics 365 REST API
Performance Requirements
GIN index on export_runs.claim_ids must keep double-export guard query under 50ms for tables with up to 10,000 rows
getExportStatus must return within 200ms for up to 100 recent records
Security Requirements
export_runs RLS: service role can insert; coordinator role can read rows scoped to their organisation_id; peer mentors cannot read
claim_ids in export_runs must not include sensitive PII fields — only UUIDs
Error messages stored in export_runs must be sanitised to remove any credential fragments before storage
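The credential-stripping requirement can be sketched as a small sanitiser. The function name and the exact regex patterns are assumptions; any real implementation should be reviewed against the actual token formats the integrations emit:

```typescript
// Hypothetical sanitiser: redacts bearer tokens and key=value style
// credentials from error text before it is stored in export_runs.
function sanitiseErrorMessage(message: string): string {
  return message
    // "Authorization: Bearer <token>" fragments
    .replace(/Bearer\s+[A-Za-z0-9._\-]+/gi, "Bearer [REDACTED]")
    // api_key=..., token: ..., secret=... style fragments
    .replace(/\b(api[_-]?key|token|secret)\s*[=:]\s*[^\s&"']+/gi, "$1=[REDACTED]");
}
```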

Execution Context

Execution Tier
Tier 2 (518 tasks). Can start after Tier 1 completes.

Implementation Notes

Implement retry as a reusable withRetry(fn, options) higher-order function in the Edge Function codebase. For the delay, await a Promise that wraps setTimeout (setTimeout itself returns a timer ID, not a promise), and ensure the total delay stays within the Edge Function wall-clock limit (typically 150s for Supabase). For the double-export guard, use a Postgres query: SELECT id FROM export_runs WHERE organisation_id = $1 AND http_status BETWEEN 200 AND 299 AND claim_ids ?| $2::text[]; this uses the GIN index via the ?| (any-key-exists) operator. Sanitise error messages before insert: strip any Bearer/API-key patterns using a regex replace.
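A minimal sketch of withRetry under the assumptions above. The option shape is hypothetical; `deadlineMs` is a simplified stand-in for the Edge Function wall-clock check, and `sleep` is injectable so unit tests need not wait on real timers:

```typescript
interface RetryOptions {
  maxAttempts?: number;       // default 3
  initialDelayMs?: number;    // default 500
  backoffMultiplier?: number; // default 2
  maxDelayMs?: number;        // default 10000
  deadlineMs?: number;        // fail fast if the next delay would pass this
  isRetryable?: (err: unknown) => boolean;
  sleep?: (ms: number) => Promise<void>;
}

async function withRetry<T>(fn: () => Promise<T>, opts: RetryOptions = {}): Promise<T> {
  const {
    maxAttempts = 3,
    initialDelayMs = 500,
    backoffMultiplier = 2,
    maxDelayMs = 10_000,
    deadlineMs = Infinity,
    isRetryable = () => true,
    sleep = (ms) => new Promise<void>((resolve) => setTimeout(resolve, ms)),
  } = opts;
  const start = Date.now();
  let lastError: unknown;
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Stop immediately on non-retryable errors or the final attempt.
      if (attempt === maxAttempts || !isRetryable(err)) break;
      const delay = Math.min(initialDelayMs * backoffMultiplier ** (attempt - 1), maxDelayMs);
      // Fail fast rather than blow past the function timeout mid-sleep.
      if (Date.now() - start + delay > deadlineMs) {
        throw new Error("retry deadline exceeded"); // record as timeout error
      }
      await sleep(delay);
    }
  }
  throw lastError;
}
```

The injectable `sleep` also makes the backoff-doubling unit test in the Testing Requirements straightforward: record the delays the function requests instead of actually waiting.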

The export_runs table should be append-only — never update or delete rows, to preserve audit integrity. For getExportStatus, return a summary object per run including { runId, exportedAt, totalClaims, successCount, failureCount, integrationStatus }.
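The per-run summary can be derived from the per-claim results. The field names beyond the brief above are assumptions, as is the three-state `integrationStatus` (success/partial/failed):

```typescript
interface ClaimResult { claimId: string; success: boolean; }

interface ExportRunSummary {
  runId: string;
  exportedAt: string; // ISO timestamptz from export_runs.exported_at
  totalClaims: number;
  successCount: number;
  failureCount: number;
  integrationStatus: "success" | "partial" | "failed";
}

function summariseExportRun(runId: string, exportedAt: string, results: ClaimResult[]): ExportRunSummary {
  const successCount = results.filter((r) => r.success).length;
  const failureCount = results.length - successCount;
  return {
    runId,
    exportedAt,
    totalClaims: results.length,
    successCount,
    failureCount,
    integrationStatus:
      failureCount === 0 ? "success" : successCount === 0 ? "failed" : "partial",
  };
}
```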

Testing Requirements

Unit tests (Deno): (1) retry fires on 500 and not on 400, (2) backoff delay doubles on each attempt up to max_delay_ms, (3) max_attempts respected — no more retries after limit reached. Integration tests: (4) successful export creates export_runs record with http_status=200 and all claim_ids, (5) double-export guard returns already_exported for claims present in a prior successful run, (6) double-export guard does not block claims absent from prior runs. Edge Function test: getExportStatus returns correct records scoped to requesting organisation. Database test: GIN index exists on export_runs.claim_ids and query plan uses index scan.

All tests use mock HTTP adapters and a Supabase local dev instance.
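A scripted mock HTTP adapter covers the unit-test cases deterministically: each call pops the next status from a script, so a test can simulate "500, 500, 200" or a single "400" without network access. The shape below is an assumption, not an existing test utility:

```typescript
interface MockResponse { status: number; body?: unknown; }

// Returns a fake adapter whose send() replays the scripted responses in
// order, repeating the last one if called more times than scripted.
function makeMockAdapter(script: MockResponse[]) {
  let i = 0;
  const calls: MockResponse[] = [];
  return {
    async send(_payload: unknown): Promise<MockResponse> {
      const res = script[Math.min(i++, script.length - 1)];
      calls.push(res);
      return res;
    },
    get callCount() { return calls.length; },
  };
}
```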

Component
Accounting Integration Client
Infrastructure · High priority
Epic Risks (3)
High impact · High probability · Integration

The Dynamics portal (HLF) and Xledger (Blindeforbundet) APIs have organisation-managed API contracts that may change without notice. Field mapping requirements, authentication flows, and export formats are not fully documented and may only be clarified during integration testing.

Mitigation & Contingency

Mitigation: Engage HLF and Blindeforbundet technical contacts early to obtain API documentation, sandbox credentials, and example payloads before implementation starts. Design the accounting integration client as a thin adapter layer with organisation-specific mappers so that field mapping changes require only mapper updates, not core client changes.

Contingency: If API documentation is unavailable or the API is unstable during Phase 3, implement a CSV/JSON file export as an interim deliverable. Coordinators can manually upload the file to their respective accounting systems until the live API integration is completed.

High impact · Medium probability · Scope

The confidentiality declaration for Blindeforbundet drivers may have specific legal requirements around content, format, wording, and record-keeping that are not yet specified. Implementing the wrong declaration flow could expose Blindeforbundet to compliance risk.

Mitigation & Contingency

Mitigation: Treat the declaration content and acknowledgement flow as a Blindeforbundet-controlled configuration, not hardcoded text. Implement the declaration as a templated document fetched from Supabase and reviewed by Blindeforbundet before any production deployment. Obtain written sign-off on the declaration text and acknowledgement mechanism before the epic is considered complete.

Contingency: If legal requirements cannot be confirmed in time for the sprint, deliver the driver honorarium form without the confidentiality declaration and gate the entire driver feature behind its feature flag. The declaration can be added in a follow-up sprint once requirements are confirmed, without blocking other feature delivery.

High impact · Medium probability · Integration

If the accounting export can be triggered multiple times for the same approved claims batch, duplicate records may be created in Dynamics or Xledger, causing accounting reconciliation problems that are difficult to reverse.

Mitigation & Contingency

Mitigation: Implement idempotent export runs: each export batch is assigned a unique run ID stored in the database. The accounting integration client checks for an existing successful export run for the same claim IDs before submitting. Approved claims that have been exported are marked with exported_at timestamp to prevent re-export.
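The client-side half of that idempotency check can be sketched as a pure partition step: given the set of claim IDs already found in a successful prior run (from the guard query in the implementation notes), split the batch into claims to skip and claims to send. Names are hypothetical:

```typescript
// Split a batch of claim UUIDs against the set of already-exported IDs.
// The already_exported key matches the field name in the acceptance criteria.
function partitionClaims(
  batch: string[],
  alreadyExported: Set<string>,
): { toExport: string[]; already_exported: string[] } {
  const toExport: string[] = [];
  const skipped: string[] = [];
  for (const id of batch) {
    (alreadyExported.has(id) ? skipped : toExport).push(id);
  }
  return { toExport, already_exported: skipped };
}
```

Keeping this step pure makes integration tests (5) and (6) simple: only the set lookup changes between "blocked" and "not blocked" cases.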

Contingency: If duplicate exports occur despite idempotency checks (e.g. network failure after API success but before local confirmation), provide coordinators with an export history panel showing run IDs and timestamps. Implement a reconciliation endpoint that can query the accounting system for existing records before re-submitting flagged claims.