Sync job status tracking and run log storage
epic-external-system-integration-configuration-backend-infrastructure-task-010 — Design and implement the sync_run_log database table and associated repository layer for the Sync Scheduler. The table tracks job start time, end time, status, records processed, records failed, error message, and triggered_by (scheduler vs manual). Implement write operations for status transitions and read operations for the admin dashboard to display last run results per integration.
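A minimal sketch of the table shape described above, assuming uuid keys, text columns for the enumerated fields, and the org_id/integration_type scoping columns implied by the later notes (all names, types, and status values are illustrative, not finalized):

```sql
-- Sketch only: column names beyond those listed in the task description
-- (org_id, integration_type, the specific status values) are assumptions.
CREATE TABLE IF NOT EXISTS sync_run_log (
    id                uuid PRIMARY KEY DEFAULT gen_random_uuid(),
    org_id            uuid NOT NULL,
    integration_type  text NOT NULL,
    status            text NOT NULL DEFAULT 'running'
                      CHECK (status IN ('running', 'succeeded', 'failed', 'cancelled')),
    started_at        timestamptz NOT NULL DEFAULT now(),
    ended_at          timestamptz,
    records_processed integer NOT NULL DEFAULT 0,
    records_failed    integer NOT NULL DEFAULT 0,
    error_message     text,
    triggered_by      text NOT NULL CHECK (triggered_by IN ('scheduler', 'manual'))
);
```

`CREATE TABLE IF NOT EXISTS` keeps the migration idempotent, which the Testing Requirements below exercise by running the migration twice.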
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Handles integration between different epics or system components. Requires coordination across multiple development streams.
Implementation Notes
Use a Postgres enum or a CHECK constraint for status; prefer CHECK, since widening a CHECK constraint is a simple transactional ALTER TABLE, whereas enum values cannot be removed once added and are harder to evolve. The unique partial index on (org_id, integration_type) WHERE status = 'running' needed by task-011 should be added here as a prerequisite. Implement updateRunLogStatus as an atomic UPDATE ... WHERE id = $runId AND status = 'running' so a late or duplicate writer cannot overwrite an already-completed run.
Keep error_message truncated to 2000 characters maximum in the repository layer to avoid unbounded storage. Use Supabase Edge Function wrappers rather than direct PostgREST calls so server-side validation is always enforced. Consider adding a generated column duration_seconds = EXTRACT(EPOCH FROM ended_at - started_at) for dashboard display convenience.
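The notes above might translate into migration and repository SQL along these lines (status values and the $-parameter ordering are assumptions carried over from the task description):

```sql
-- Unique partial index: at most one 'running' run per org + integration,
-- the prerequisite for task-011.
CREATE UNIQUE INDEX IF NOT EXISTS one_running_sync_per_integration
    ON sync_run_log (org_id, integration_type)
    WHERE status = 'running';

-- Optional convenience column for the dashboard; Postgres generated
-- columns must be STORED.
ALTER TABLE sync_run_log
    ADD COLUMN IF NOT EXISTS duration_seconds numeric
    GENERATED ALWAYS AS (EXTRACT(EPOCH FROM ended_at - started_at)) STORED;

-- Atomic status transition: only rows still 'running' can be finalized,
-- so an already-completed run is never overwritten. left(..., 2000)
-- mirrors the repository-layer truncation rule as defense in depth.
UPDATE sync_run_log
   SET status = $2,
       ended_at = now(),
       error_message = left($3, 2000)
 WHERE id = $1 AND status = 'running';
```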
Testing Requirements
Unit tests using flutter_test with a mocked Supabase client covering:
(1) insertRunLog creates the correct initial row with status='running';
(2) updateRunLogStatus transitions through all valid statuses;
(3) updateRunLogStatus with a null errorMessage clears any previous error;
(4) getLastRunPerIntegration returns only the latest row per integration_type;
(5) getRunHistory respects its limit parameter;
(6) RLS rejects cross-org reads (integration test against a local Supabase instance).
Migration SQL is tested for idempotency by running it twice. At least 80% line coverage on the repository layer.
Supabase Edge Functions have cold-start latency that can cause the first sync invocation after an idle period to fail or time out when the external API has a short connection window, leading to missed scheduled syncs that go undetected.
Mitigation & Contingency
Mitigation: Minimize cold-start time (small function bundle, lean import graph) and implement a warm-up ping before heavy sync invocations. Set generous timeout values on the external API calls. Log all cold-start incidents for monitoring.
Contingency: If cold starts cause consistent sync failures, migrate the sync scheduler to a persistent Supabase cron job that pre-warms the function 30 seconds before the scheduled sync time.
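A hedged sketch of that contingency using pg_cron plus the pg_net extension to ping the function ahead of the sync (job name, URL, and schedule are placeholders; standard pg_cron schedules are minute-granular, so the exact 30-second lead would need pg_cron's seconds-interval schedule syntax):

```sql
-- Assumes pg_cron and pg_net are enabled on the Supabase project.
-- Warm-up ping one minute before an illustrative 02:00 nightly sync.
SELECT cron.schedule(
    'prewarm-sync-fn',
    '59 1 * * *',
    $$ SELECT net.http_get(
         'https://<project-ref>.supabase.co/functions/v1/sync-scheduler?warmup=true') $$
);
```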
The sync scheduler must execute jobs at predictable times for financial reporting accuracy. Drift in cron execution timing (due to Supabase infrastructure delays) could cause syncs to run at wrong times, leading to missing data in accounting exports or duplicate exports across reporting periods.
Mitigation & Contingency
Mitigation: Implement idempotency keys based on integration ID + scheduled period, so re-runs of a delayed sync cannot create duplicate exports. Log actual execution timestamps vs scheduled timestamps and alert on drift exceeding 5 minutes.
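One way to realize the idempotency keys, assuming a scheduled_period column (name illustrative) recorded on each run:

```sql
-- Hypothetical period column, e.g. '2025-06' for a monthly export.
ALTER TABLE sync_run_log
    ADD COLUMN IF NOT EXISTS scheduled_period text;

-- At most one live-or-successful run per org + integration + period;
-- failed runs stay retryable because they fall outside the index.
CREATE UNIQUE INDEX IF NOT EXISTS uniq_export_per_period
    ON sync_run_log (org_id, integration_type, scheduled_period)
    WHERE status IN ('running', 'succeeded');

-- A delayed re-run of the same period becomes a no-op instead of a
-- duplicate export.
INSERT INTO sync_run_log (org_id, integration_type, scheduled_period, triggered_by)
VALUES ($1, $2, $3, 'scheduler')
ON CONFLICT DO NOTHING;
```

The targetless ON CONFLICT DO NOTHING form is used deliberately, since Postgres cannot infer a partial unique index as a named conflict target without repeating its WHERE clause.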
Contingency: If scheduler reliability is insufficient, move scheduling into the database with pg_cron on Supabase, replacing the application-level scheduler. Note that pg_cron runs at minute granularity (second granularity in recent versions), which is sufficient for reporting-period accuracy even though it is not millisecond-precise.
Aggressive health monitoring ping frequency could trigger rate limiting on external APIs (especially Xledger and Dynamics), causing legitimate export calls to fail after the monitor exhausts the API's request quota.
Mitigation & Contingency
Mitigation: Use lightweight health check endpoints (HEAD requests or vendor-specific ping/status endpoints) rather than data requests. Check no more than once per 15 minutes per integration. Apply exponential backoff after consecutive failures.
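The backoff rule above could be kept in the database, assuming a hypothetical integration_health table tracking consecutive failures (table, columns, and the 4-hour cap are all illustrative):

```sql
-- On each failed health check: bump the failure counter and push the
-- next check out exponentially from the 15-minute baseline, capped.
UPDATE integration_health
   SET consecutive_failures = consecutive_failures + 1,
       next_check_at = now() + LEAST(
           interval '15 minutes' * power(2, consecutive_failures),
           interval '4 hours')
 WHERE integration_type = $1;
```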
Contingency: If rate limiting occurs, disable active health monitoring for the affected integration type and switch to passive health detection (mark unhealthy only when a scheduled sync fails).