Priority: critical. Complexity: medium. Category: database. Status: pending. Assignee: database specialist. Tier 1.

Acceptance Criteria

Migration file is created in supabase/migrations/ with a timestamped filename following the existing convention
Table organization_integrations exists with columns:
    id: UUID primary key, default gen_random_uuid()
    organization_id: UUID NOT NULL, FK to organizations.id ON DELETE CASCADE
    integration_type: integration_type_enum NOT NULL
    status: integration_status_enum NOT NULL, default 'inactive'
    field_mappings: JSONB, default '{}'
    sync_schedule: TEXT, nullable, cron-format string
    excluded_features: JSONB, default '[]'
    created_at: TIMESTAMPTZ NOT NULL, default now()
    updated_at: TIMESTAMPTZ NOT NULL, default now()
PostgreSQL enum type integration_type_enum is created with values: 'xledger', 'dynamics', 'cornerstone', 'consio', 'bufdir'
PostgreSQL enum type integration_status_enum is created with values: 'active', 'inactive', 'error', 'pending_setup'
Unique constraint exists on (organization_id, integration_type) — one row per org per system
Index exists on organization_id for efficient org-scoped queries
GIN index exists on field_mappings JSONB column for future key-path queries
updated_at trigger is applied to automatically update the timestamp on every row modification
Migration is idempotent and can be run twice without error. Use CREATE TABLE IF NOT EXISTS and CREATE INDEX IF NOT EXISTS; note that PostgreSQL has no CREATE TYPE IF NOT EXISTS, so guard enum creation with a DO block that catches duplicate_object
Migration rollback (down) file removes the table and enums cleanly
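Taken together, the criteria above can be sketched as a single up migration. This is a sketch, not the final file: the filename timestamp, index names, and comment layout are illustrative, and the enum values come from the task-001 registry listed above.

```sql
-- 20240101000000_create_organization_integrations.sql (timestamp illustrative)
-- Enum values must match task-001's Dart registry:
-- xledger, dynamics, cornerstone, consio, bufdir.

-- PostgreSQL has no CREATE TYPE IF NOT EXISTS, so guard each CREATE TYPE
-- in a DO block to keep the migration idempotent.
DO $$ BEGIN
  CREATE TYPE integration_type_enum AS ENUM
    ('xledger', 'dynamics', 'cornerstone', 'consio', 'bufdir');
EXCEPTION WHEN duplicate_object THEN NULL;
END $$;

DO $$ BEGIN
  CREATE TYPE integration_status_enum AS ENUM
    ('active', 'inactive', 'error', 'pending_setup');
EXCEPTION WHEN duplicate_object THEN NULL;
END $$;

CREATE TABLE IF NOT EXISTS organization_integrations (
  id                UUID PRIMARY KEY DEFAULT gen_random_uuid(),
  organization_id   UUID NOT NULL REFERENCES organizations(id) ON DELETE CASCADE,
  integration_type  integration_type_enum NOT NULL,
  status            integration_status_enum NOT NULL DEFAULT 'inactive',
  field_mappings    JSONB DEFAULT '{}',
  sync_schedule     TEXT,            -- cron expression, validated in app layer
  excluded_features JSONB DEFAULT '[]',
  created_at        TIMESTAMPTZ NOT NULL DEFAULT now(),
  updated_at        TIMESTAMPTZ NOT NULL DEFAULT now(),
  UNIQUE (organization_id, integration_type)  -- one row per org per system
);

CREATE INDEX IF NOT EXISTS idx_org_integrations_org_id
  ON organization_integrations (organization_id);
CREATE INDEX IF NOT EXISTS idx_org_integrations_field_mappings
  ON organization_integrations USING GIN (field_mappings);

-- down migration (separate file):
--   DROP TABLE IF EXISTS organization_integrations;
--   DROP TYPE IF EXISTS integration_status_enum, integration_type_enum;
```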

Technical Requirements

Frameworks: Supabase, PostgreSQL
APIs: Supabase Management API
Data models: OrganizationIntegration, IntegrationTypeEnum, IntegrationStatusEnum
Performance requirements
All org-scoped SELECT queries (WHERE organization_id = $1) must use the organization_id index
JSONB field_mappings must support GIN-indexed key lookups for mapping resolution at query time
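The default GIN operator class for JSONB (jsonb_ops) supports key-existence (?) and containment (@>) operators, so mapping lookups can stay index-backed. A sketch, with the mapping key 'employee_id' and its value purely illustrative:

```sql
-- Key-existence lookup: which integrations map an employee_id field?
SELECT id, integration_type
FROM organization_integrations
WHERE organization_id = $1
  AND field_mappings ? 'employee_id';

-- Containment lookup: match a specific mapping value.
SELECT id
FROM organization_integrations
WHERE field_mappings @> '{"employee_id": "ansattnr"}';
```

Both predicates can use the GIN index on field_mappings; plain ->> extraction in the WHERE clause would not.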
Security requirements
No plaintext credentials may be stored in any column of this table — credential storage is handled exclusively by the Vault (task-004)
organization_id FK with ON DELETE CASCADE ensures orphaned integration records are cleaned up when an organization is removed
The table must have RLS enabled (ALTER TABLE ... ENABLE ROW LEVEL SECURITY) even before policies are written (task-003)
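Enabling RLS before any policies exist makes the table deny-by-default for non-privileged roles, which is the safe interim state until task-003 lands. A minimal sketch:

```sql
-- With RLS enabled and no policies defined, ordinary roles see no rows;
-- task-003 adds the actual per-organization policies.
ALTER TABLE organization_integrations ENABLE ROW LEVEL SECURITY;

-- Optionally also subject the table owner to policies:
ALTER TABLE organization_integrations FORCE ROW LEVEL SECURITY;
```

The FORCE variant is an optional hardening step, not something the acceptance criteria require.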

Execution Context

Execution Tier
Tier 1 (540 tasks); can start after Tier 0 completes.

Integration Task

Handles integration between different epics or system components. Requires coordination across multiple development streams.

Implementation Notes

Use supabase/migrations/ directory for the SQL file. Name it with format: {timestamp}_create_organization_integrations.sql. The integration_type_enum values must exactly match the string constants defined in task-001's Dart registry (xledger, dynamics, cornerstone, consio, bufdir) — keep a cross-reference comment in the migration. For sync_schedule, store as a cron expression string (e.g., '0 2 * * *' for 2am daily) and validate format in the application layer, not the database.

For excluded_features, store as a JSONB array of feature identifier strings (e.g., ['payroll_sync', 'leave_tracking']) allowing organizations to disable specific sync modules without deleting the integration. The updated_at trigger should reuse a shared moddatetime() function if already present in the Supabase project, otherwise define it in a prerequisite migration.
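If the shared function is not already present, a conventional plpgsql version can be defined in a prerequisite migration. The function and trigger names below are illustrative; Supabase projects often instead enable the moddatetime extension and point the trigger at moddatetime(updated_at):

```sql
-- Prerequisite migration: shared updated_at helper.
CREATE OR REPLACE FUNCTION set_updated_at()
RETURNS trigger AS $$
BEGIN
  NEW.updated_at := now();
  RETURN NEW;
END;
$$ LANGUAGE plpgsql;

-- In the organization_integrations migration:
DROP TRIGGER IF EXISTS trg_org_integrations_updated_at
  ON organization_integrations;
CREATE TRIGGER trg_org_integrations_updated_at
  BEFORE UPDATE ON organization_integrations
  FOR EACH ROW EXECUTE FUNCTION set_updated_at();
```

The DROP TRIGGER IF EXISTS guard keeps this part of the migration re-runnable, matching the idempotency criterion.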

Testing Requirements

Database migration tests using the Supabase local development environment (supabase start). Verify:
(1) Migration applies cleanly from scratch.
(2) Migration rollback removes the table and types.
(3) Unique constraint on (organization_id, integration_type) rejects duplicate inserts.
(4) FK cascade deletes integration rows when the parent organization is deleted.
(5) status column rejects values not in the enum.
(6) updated_at trigger fires on UPDATE.
Run via supabase db reset in CI to validate migration idempotency.
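Checks (3) and (4) can be exercised directly in SQL against the local stack. The UUIDs below are placeholders, and the script assumes a seeded organizations row with that id:

```sql
-- (3) Unique constraint: second insert for the same (org, type) must fail.
INSERT INTO organization_integrations (organization_id, integration_type)
VALUES ('00000000-0000-0000-0000-000000000001', 'xledger');
INSERT INTO organization_integrations (organization_id, integration_type)
VALUES ('00000000-0000-0000-0000-000000000001', 'xledger');
-- expected: ERROR: duplicate key value violates unique constraint

-- (4) FK cascade: deleting the organization removes its integration rows.
DELETE FROM organizations
WHERE id = '00000000-0000-0000-0000-000000000001';
SELECT count(*) FROM organization_integrations
WHERE organization_id = '00000000-0000-0000-0000-000000000001';
-- expected: 0
```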

Component
Organization Integration Repository (data, medium)

Epic Risks (3)

Risk 1: high impact, medium probability, technical

Supabase Vault API has limited documentation for Dart/Flutter clients; wrapping it correctly for credential rotation and secret reference management may require significant trial and error, delaying the vault component and blocking all downstream credential-dependent work.

Mitigation & Contingency

Mitigation: Spike the Vault integration in the first sprint using a minimal proof-of-concept (store, retrieve, rotate one secret). Document the API surface before building the full vault client. Identify any missing Dart SDK bindings early.

Contingency: If Supabase Vault is too complex, fall back to Supabase's encrypted column approach (pgcrypto) for credential storage as a temporary measure, with a planned migration path to Vault once the API is understood.
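The pgcrypto fallback could look roughly like the following. This is hypothetical: the table, the app.enc_key setting, and the parameter placeholders are all illustrative, and per the security requirements the encrypted data would live in a separate table, never in organization_integrations. The symmetric key must be supplied from application configuration, not stored in the database.

```sql
CREATE EXTENSION IF NOT EXISTS pgcrypto;

-- Hypothetical side table for the temporary fallback (not part of this
-- migration); rows are removed when the integration is deleted.
CREATE TABLE IF NOT EXISTS integration_credentials_fallback (
  integration_id UUID PRIMARY KEY
    REFERENCES organization_integrations(id) ON DELETE CASCADE,
  secret BYTEA NOT NULL  -- output of pgp_sym_encrypt
);

-- Encrypt on write; $1 = integration id, $2 = credential payload.
INSERT INTO integration_credentials_fallback (integration_id, secret)
VALUES ($1, pgp_sym_encrypt($2, current_setting('app.enc_key')));

-- Decrypt on read.
SELECT pgp_sym_decrypt(secret, current_setting('app.enc_key'))
FROM integration_credentials_fallback
WHERE integration_id = $1;
```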

Risk 2: high impact, low probability, security

Incorrect RLS policy configuration on organization_integrations could allow org admins of one organization to read or modify another organization's integration credentials, creating a serious data breach and compliance violation.

Mitigation & Contingency

Mitigation: Write integration tests that explicitly attempt cross-org data access using different JWT tokens and assert 0 rows returned. Include RLS policy review in PR checklist. Use Supabase's local development stack for policy validation before deployment.

Contingency: If a breach is discovered post-deployment, immediately revoke all integration credentials, rotate vault secrets, notify affected organizations, and apply emergency RLS patches.
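The cross-org assertion from the mitigation can be prototyped in psql against the local stack by impersonating an authenticated user; Supabase policies typically resolve auth.uid() from request.jwt.claims. The UUIDs are placeholders, and the test assumes the task-003 policies key off the user's organization membership:

```sql
BEGIN;
SET LOCAL ROLE authenticated;
SELECT set_config('request.jwt.claims',
  '{"sub": "11111111-1111-1111-1111-111111111111"}', true);

-- User belongs to org A; querying org B's rows must return 0.
SELECT count(*) FROM organization_integrations
WHERE organization_id = '22222222-2222-2222-2222-222222222222';
-- expected: 0
ROLLBACK;
```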

Risk 3: medium impact, medium probability, technical

The JSONB columns (field_mappings, excluded_features) lack database-level schema enforcement, and sync_schedule is a free-form TEXT cron string validated only in the application layer; AI-generated or malformed values could silently corrupt integration configurations, causing export failures that are hard to diagnose.

Mitigation & Contingency

Mitigation: Define TypeScript/Dart model classes with strict deserialization and validation. Add database check constraints or triggers that validate JSONB structure at write time. Version the JSONB schema to enable forward-compatible migrations.
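Minimal type-level checks can be added as constraints without committing to a full JSONB schema. The constraint names are illustrative; jsonb_typeof is a built-in PostgreSQL function:

```sql
-- Reject a field_mappings value that is not a JSON object.
ALTER TABLE organization_integrations
  ADD CONSTRAINT field_mappings_is_object
  CHECK (jsonb_typeof(field_mappings) = 'object');

-- Reject an excluded_features value that is not a JSON array.
ALTER TABLE organization_integrations
  ADD CONSTRAINT excluded_features_is_array
  CHECK (jsonb_typeof(excluded_features) = 'array');
```

Deeper structural validation (key names, value shapes, schema versions) is better left to the application-layer models, as the mitigation suggests.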

Contingency: Build a repair script that scans organization_integrations for invalid JSONB and resets corrupted records to a safe default state, alerting the admin of the affected organization.