Priority: critical · Complexity: high · Area: backend · Status: pending · Assignee: backend specialist · Tier: 1

Acceptance Criteria

CredentialProvider.getCredential(integrationId: string): Promise<string> retrieves the secret value for the given integration from Supabase Vault using the service role client
Credentials retrieved are stored in a per-invocation Map keyed by integrationId — no second Vault call is made for the same integrationId within one function invocation
If a secret is not found in Vault (null result), CredentialProvider throws a typed CredentialNotFoundError with the integrationId — never a generic error
If the Vault call fails due to network or permissions error, CredentialProvider throws a typed CredentialAccessError with the original cause — never swallows the error silently
Credential values never appear in: console.log output, response body, Supabase logs (structured log fields), or error messages returned to callers
Unit tests cover: successful retrieval, cache hit (second call does not make a second DB call), missing secret error, Vault access failure error
The module is exported from supabase/functions/_shared/credential-provider.ts and is importable by all functions via the import map alias
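The criteria above can be sketched as a minimal TypeScript provider. This is an illustrative sketch, not the implementation: the VaultClient interface is a stand-in for the service-role Supabase client, and constructing one provider per invocation is one way to satisfy the per-invocation cache requirement.

```typescript
export class CredentialNotFoundError extends Error {
  constructor(public readonly integrationId: string) {
    // Message references the integrationId only — never the secret name or value.
    super(`No credential found for integration ${integrationId}`);
    this.name = "CredentialNotFoundError";
  }
}

export class CredentialAccessError extends Error {
  constructor(
    public readonly integrationId: string,
    public readonly cause: unknown,
  ) {
    super(`Vault access failed for integration ${integrationId}`);
    this.name = "CredentialAccessError";
  }
}

// Minimal stand-in for the service-role Supabase client's Vault access.
export interface VaultClient {
  // Resolves to the decrypted secret, or null when no secret exists.
  getSecret(secretName: string): Promise<string | null>;
}

export class CredentialProvider {
  // Per-provider cache: construct one provider per invocation so cached
  // credentials never leak across requests.
  private cache = new Map<string, string>();

  constructor(private readonly vault: VaultClient) {}

  async getCredential(integrationId: string): Promise<string> {
    const cached = this.cache.get(integrationId);
    if (cached !== undefined) return cached;

    let secret: string | null;
    try {
      secret = await this.vault.getSecret(`integration_${integrationId}_api_key`);
    } catch (err) {
      // Wrap and rethrow — never swallow the underlying failure.
      throw new CredentialAccessError(integrationId, err);
    }
    if (secret === null) throw new CredentialNotFoundError(integrationId);

    this.cache.set(integrationId, secret);
    return secret;
  }
}
```

Making the error classes set a distinct `name` in the constructor keeps `instanceof` checks and structured log fields consistent downstream.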

Technical Requirements

frameworks
Supabase Edge Functions (Deno)
Supabase JS client v2 (service role)
apis
Supabase Vault API — the vault.decrypted_secrets view, read with the service role client (either directly, if the vault schema is exposed to the API, or through a SQL function wrapper called via supabase.rpc(); note that supabase.rpc() takes a function name, not a view name)
data models
VaultSecret (id, name, decrypted_secret)
performance requirements
Vault lookup must complete in under 200ms on a warm Supabase instance
Per-invocation cache eliminates redundant Vault calls for the same credential within a single dispatch
security requirements
Service role client must be initialised with SUPABASE_SERVICE_ROLE_KEY from environment — never from a user-supplied value
Credential values must be treated as opaque strings — never inspected, parsed, or logged
CredentialNotFoundError and CredentialAccessError messages must reference the integrationId only, never the secret name or partial secret value
The cache Map must not persist across invocations — note that Supabase Edge Function isolates are reused across warm invocations, so a module-scope Map would survive between requests; create the cache per request (or clear it at the start of each invocation) rather than relying on isolate teardown

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Integration Task

Handles integration between different epics or system components. Requires coordination across multiple development streams.

Implementation Notes

Access Vault secrets via the Supabase JS client by querying the vault.decrypted_secrets view with the service role client. In supabase-js v2 the schema is selected explicitly — supabase.schema('vault').from('decrypted_secrets').select('decrypted_secret').eq('name', secretName).maybeSingle() — and this requires the vault schema to be exposed to the API; if it is not, wrap the query in a SECURITY DEFINER SQL function and call it via supabase.rpc(). (Prefer maybeSingle() over single(), which returns an error rather than a null result when no row matches.) The secret naming convention should be integration_{integrationId}_api_key (e.g., integration_xledger_api_key) — document this convention in the shared README. Define the custom error classes as plain classes extending Error, each setting a distinct name property so instanceof checks work downstream. Do not use try/catch inside CredentialProvider to suppress errors — let them propagate to the calling router, which will translate them into HTTP 500/403 responses.
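A hedged sketch of the underlying Vault read, typed against minimal structural interfaces so it can be exercised without the real supabase-js client. The chained call shape mirrors supabase-js v2; maybeSingle() yields data: null instead of an error when no row matches, which maps cleanly onto the not-found case.

```typescript
// Structural stand-ins for the slice of the supabase-js v2 query builder used here.
interface MaybeSingleResult {
  data: { decrypted_secret: string } | null;
  error: { message: string } | null;
}
interface QueryChain {
  select(columns: string): QueryChain;
  eq(column: string, value: string): QueryChain;
  maybeSingle(): Promise<MaybeSingleResult>;
}
interface SchemaClient {
  schema(name: string): { from(table: string): QueryChain };
}

// Reads one decrypted secret; returns null when the secret does not exist.
export async function readVaultSecret(
  client: SchemaClient,
  secretName: string,
): Promise<string | null> {
  const { data, error } = await client
    .schema("vault")
    .from("decrypted_secrets")
    .select("decrypted_secret")
    .eq("name", secretName)
    .maybeSingle();
  // Propagate failures without echoing the secret name into the message.
  if (error) throw new Error("Vault read failed");
  return data?.decrypted_secret ?? null;
}
```

If the vault schema cannot be exposed, the same query can live in a SECURITY DEFINER SQL function and be invoked with supabase.rpc() instead.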

Ensure the Supabase client is instantiated once per module (not per call) to reuse the HTTP connection pool within an invocation.
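The once-per-module pattern can be sketched as a lazy memoised factory; the stand-in factory below is a placeholder for the real createClient(SUPABASE_URL, SUPABASE_SERVICE_ROLE_KEY) call.

```typescript
// Lazy module-scope singleton: the factory runs at most once per isolate,
// so every call within an invocation reuses the same client and its
// HTTP connection pool.
function once<T>(factory: () => T): () => T {
  let instance: T | undefined;
  return () => (instance ??= factory());
}

// Stand-in for createClient(...) — counts constructions to show the pattern.
let constructions = 0;
export const getClient = once(() => {
  constructions += 1;
  return { id: constructions }; // placeholder client object
});
export const constructionCount = () => constructions;
```

Declaring `getClient` at module scope (rather than inside the handler) is what makes the memoisation effective across calls within the isolate.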

Testing Requirements

Write Deno unit tests (using Deno.test) for CredentialProvider. Mock the Supabase client using a stub object — do not make real Vault calls in unit tests. Test scenarios: (1) getCredential returns the correct decrypted secret for a known integrationId; (2) a second call for the same integrationId uses the cache and does not call the Supabase client again; (3) when Vault returns null, CredentialNotFoundError is thrown with the correct integrationId; (4) when the Supabase call rejects, CredentialAccessError is thrown wrapping the original error. All tests must pass with deno test --allow-env.
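Scenario (2) hinges on a call-counting stub. A sketch of that pattern follows, with a one-method stand-in provider declared locally so the snippet runs on its own; the real suite would import CredentialProvider from the shared module and wrap each scenario in Deno.test.

```typescript
// Call-counting stub pattern for the cache-hit test. MiniProvider is a local
// stand-in for the shared CredentialProvider.
interface Vault {
  getSecret(name: string): Promise<string | null>;
}

class MiniProvider {
  private cache = new Map<string, string>();
  constructor(private vault: Vault) {}
  async get(id: string): Promise<string> {
    const hit = this.cache.get(id);
    if (hit !== undefined) return hit;
    const secret = await this.vault.getSecret(`integration_${id}_api_key`);
    if (secret === null) throw new Error(`not found: ${id}`);
    this.cache.set(id, secret);
    return secret;
  }
}

export async function cacheHitScenario(): Promise<number> {
  let vaultCalls = 0;
  const stub: Vault = {
    async getSecret() {
      vaultCalls += 1;
      return "s3cret";
    },
  };
  const provider = new MiniProvider(stub);
  await provider.get("xledger");
  await provider.get("xledger"); // second call must be served from the cache
  return vaultCalls; // expected: 1
}
```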

Component
Integration Edge Functions
Category: infrastructure · Priority: high
Epic Risks (3)
Risk 1 — Impact: medium · Probability: medium · Type: technical

Supabase Edge Functions have cold start latency that can cause the first sync invocation after idle periods to fail or timeout when the external API has a short connection window, leading to missed scheduled syncs that go undetected.

Mitigation & Contingency

Mitigation: Configure adequate Edge Function memory and implement a warm-up ping mechanism before heavy sync invocations. Set generous timeout values on the external API calls. Log all cold-start incidents for monitoring.

Contingency: If cold starts cause consistent sync failures, migrate the sync scheduler to a persistent Supabase cron job that pre-warms the function 30 seconds before the scheduled sync time.

Risk 2 — Impact: high · Probability: low · Type: technical

The sync scheduler must execute jobs at predictable times for financial reporting accuracy. Drift in cron execution timing (due to Supabase infrastructure delays) could cause syncs to run at wrong times, leading to missing data in accounting exports or duplicate exports across reporting periods.

Mitigation & Contingency

Mitigation: Implement idempotency keys based on integration ID + scheduled period, so re-runs of a delayed sync cannot create duplicate exports. Log actual execution timestamps vs scheduled timestamps and alert on drift exceeding 5 minutes.

Contingency: If scheduler reliability is insufficient, move scheduling to a dedicated cron service (e.g., pg_cron on Supabase) for reliable database-backed scheduling, replacing the application-level scheduler. (Note that pg_cron operates at minute-level granularity — not millisecond precision — so the idempotency keys above remain the primary defence against duplicates.)
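The idempotency key in the mitigation can be sketched as a pure function. The daily-period normalisation below is an assumption — the real reporting period may be monthly — but the principle holds: a delayed re-run of the same period maps to the same key and therefore cannot create a duplicate export.

```typescript
// Hypothetical idempotency key: integration ID + scheduled reporting period.
export function idempotencyKey(integrationId: string, periodStart: Date): string {
  // Normalise to the UTC day so execution-time drift within the period
  // still yields the same key.
  const period = periodStart.toISOString().slice(0, 10); // YYYY-MM-DD
  return `${integrationId}:${period}`;
}
```

The export table would enforce a unique constraint on this key, turning a duplicate run into a no-op rather than a duplicate row.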

Risk 3 — Impact: high · Probability: medium · Type: integration

Aggressive health monitoring ping frequency could trigger rate limiting on external APIs (especially Xledger and Dynamics), causing legitimate export calls to fail after the monitor exhausts the API's request quota.

Mitigation & Contingency

Mitigation: Use lightweight health check endpoints (HEAD requests or vendor-specific ping/status endpoints) rather than data requests. Set health check frequency to once per 15 minutes minimum. Implement exponential backoff after consecutive failures.

Contingency: If rate limiting occurs, disable active health monitoring for the affected integration type and switch to passive health detection (mark unhealthy only when a scheduled sync fails).
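The backoff policy in the mitigation can be sketched as a pure delay function. Only the 15-minute floor comes from the text above; the doubling factor and the 4-hour ceiling are assumptions.

```typescript
// Capped exponential backoff for health-check scheduling.
const BASE_MS = 15 * 60 * 1000;    // 15-minute minimum interval (from the mitigation)
const MAX_MS = 4 * 60 * 60 * 1000; // assumed 4-hour ceiling

export function nextCheckDelay(consecutiveFailures: number): number {
  // 0 failures → base interval; each consecutive failure doubles the delay,
  // up to the cap, so a flapping integration cannot exhaust the API quota.
  return Math.min(BASE_MS * 2 ** consecutiveFailures, MAX_MS);
}
```

Resetting the failure counter on the first successful check restores the base 15-minute cadence.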