Priority: critical · Complexity: low · Category: database · Status: pending · Assignee: database specialist · Execution tier: Tier 0

Acceptance Criteria

ClaimEventsRepository class is created in the data layer with a well-defined abstract interface (ClaimEventsRepositoryInterface or similar) for testability
Repository exposes: createEvent(ClaimEvent), getEventsByClaimId(String claimId), getRecentEventsByChapter(String chapterId, {int limit}), and deleteEvent(String eventId)
Supabase RLS policy is defined on the claim_events table: coordinators can only SELECT/INSERT events where the claim's chapter_id matches their own chapter_id (stored in auth.users metadata or a coordinator_profiles table)
ClaimEvent model includes: id, claim_id, event_type (enum: submitted, approved, rejected, escalated, more_info_requested), coordinator_id (nullable), created_at, metadata (JSONB)
createEvent returns a typed Either<Failure, ClaimEvent> (or throws domain exceptions) — never returns raw Supabase maps
All Supabase errors (network, auth, constraint violations) are caught and mapped to domain-level failure types (NetworkFailure, PermissionFailure, ValidationFailure)
Repository is registered in the dependency injection container (Riverpod provider or BLoC service locator)
A coordinator in chapter A cannot read or write events belonging to chapter B — verified by an integration test
Repository methods are documented with expected inputs, outputs, and thrown exceptions
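The criteria above could map onto domain types like the following sketch. The interface and enum names come from the criteria; the inlined Either (the codebase may already use a package such as dartz or fpdart) and the default limit value are assumptions.

```dart
/// Event types stored as strings in the claim_events table.
enum ClaimEventType { submitted, approved, rejected, escalated, moreInfoRequested }

/// Base class for domain-level failures (NetworkFailure, PermissionFailure, ...).
sealed class Failure {
  const Failure();
}

/// Illustrative Either: Left carries a Failure, Right the success value.
sealed class Either<L, R> {
  const Either();
}

class Left<L, R> extends Either<L, R> {
  const Left(this.value);
  final L value;
}

class Right<L, R> extends Either<L, R> {
  const Right(this.value);
  final R value;
}

class ClaimEvent {
  const ClaimEvent({
    required this.id,
    required this.claimId,
    required this.eventType,
    this.coordinatorId, // nullable: system-generated events carry no coordinator
    required this.createdAt,
    this.metadata = const {},
  });

  final String id;
  final String claimId;
  final ClaimEventType eventType;
  final String? coordinatorId;
  final DateTime createdAt;
  final Map<String, dynamic> metadata; // mirrors the JSONB column
}

abstract class ClaimEventsRepositoryInterface {
  /// Persists [event]; returns the stored row as a [ClaimEvent] or a
  /// domain [Failure] — never a raw Supabase map.
  Future<Either<Failure, ClaimEvent>> createEvent(ClaimEvent event);

  Future<Either<Failure, List<ClaimEvent>>> getEventsByClaimId(String claimId);

  Future<Either<Failure, List<ClaimEvent>>> getRecentEventsByChapter(
    String chapterId, {
    int limit = 20, // default is an assumption; tune per screen needs
  });

  Future<Either<Failure, void>> deleteEvent(String eventId);
}
```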

Technical Requirements

frameworks
Flutter
Dart
supabase_flutter
Riverpod or BLoC for DI
apis
Supabase PostgREST REST API
Supabase Auth (for RLS context)
data models
ClaimEvent
ExpenseClaim
CoordinatorProfile
performance requirements
getEventsByClaimId must complete within 300ms on a standard connection
Queries must use indexed columns (claim_id, chapter_id) — confirm indexes exist in migration
Avoid N+1 patterns: fetch all events for a claim in a single query
security requirements
RLS policy must be enforced at the database level — never rely on application-layer filtering alone
coordinator_id in events must be set from the authenticated JWT (auth.uid()), not from client-supplied input
No raw SQL string interpolation — use Supabase SDK parameterized queries only
Sensitive claim metadata fields must not be logged in production
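One way to satisfy the failure-mapping criterion is to translate PostgREST error codes at the repository boundary. This is a sketch, assuming the repository inspects the code field of the SDK's exception: Postgres code 42501 is insufficient_privilege (an INSERT that fails an RLS with-check policy surfaces this way; a SELECT filtered by RLS simply returns no rows), and the 23xxx class covers integrity-constraint violations.

```dart
/// Illustrative failure hierarchy matching the types named above.
sealed class Failure {
  const Failure();
}

class NetworkFailure extends Failure { const NetworkFailure(); }
class PermissionFailure extends Failure { const PermissionFailure(); }
class ValidationFailure extends Failure {
  const ValidationFailure(this.message);
  final String message;
}
class UnexpectedFailure extends Failure { const UnexpectedFailure(); }

/// Maps a PostgREST/Postgres error code to a domain failure.
/// 42501 = insufficient_privilege (RLS with-check denial);
/// codes starting with 23 = integrity-constraint violations.
Failure mapPostgrestCode(String? code, String message) {
  if (code == '42501') return const PermissionFailure();
  if (code != null && code.startsWith('23')) return ValidationFailure(message);
  return const UnexpectedFailure();
}
```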

Execution Context

Execution Tier

Tier 0 (440 tasks in this tier)

Implementation Notes

Define a ClaimEventType enum in the domain layer so the database string values are mapped at the repository boundary — do not leak raw strings into BLoC or UI. Use Supabase's `.from('claim_events').select()` with `.eq('claim_id', id)` and `.order('created_at')`. For RLS: create a Postgres policy referencing a join to `expense_claims` where `chapter_id = (SELECT chapter_id FROM coordinator_profiles WHERE user_id = auth.uid())`. Place the RLS migration SQL in a versioned migration file alongside the repository.
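The policy described above could look roughly like this migration. Table and column names follow the notes; policy names and the exact subquery shape are assumptions until the schema is finalised.

```sql
-- Versioned migration: enable RLS and scope coordinators to their chapter.
alter table claim_events enable row level security;

create policy coordinator_select_own_chapter on claim_events
  for select using (
    (select chapter_id from expense_claims
      where id = claim_events.claim_id)
    = (select chapter_id from coordinator_profiles
        where user_id = auth.uid())
  );

create policy coordinator_insert_own_chapter on claim_events
  for insert with check (
    -- coordinator_id must come from the JWT, never client input
    coordinator_id = auth.uid()
    and (select chapter_id from expense_claims
          where id = claim_events.claim_id)
        = (select chapter_id from coordinator_profiles
            where user_id = auth.uid())
  );
```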

Follow the existing repository pattern in the codebase (abstract interface + concrete Supabase implementation) to keep the architecture consistent and allow easy mocking in tests.
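A hedged sketch of the concrete implementation plus its Riverpod registration, assuming Riverpod is the chosen DI mechanism. The domain types (ClaimEventsRepositoryInterface, ClaimEvent, Failure, Left/Right) and the helpers ClaimEvent.fromJson and mapPostgrestCode are assumed to exist in the data layer; none of these names are confirmed by the codebase.

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

class SupabaseClaimEventsRepository implements ClaimEventsRepositoryInterface {
  SupabaseClaimEventsRepository(this._client);

  final SupabaseClient _client;

  @override
  Future<Either<Failure, List<ClaimEvent>>> getEventsByClaimId(
      String claimId) async {
    try {
      // Single parameterized query on the indexed claim_id column: no N+1.
      final rows = await _client
          .from('claim_events')
          .select()
          .eq('claim_id', claimId)
          .order('created_at');
      return Right(rows.map((r) => ClaimEvent.fromJson(r)).toList());
    } on PostgrestException catch (e) {
      // Domain failure out, never a raw Supabase map or exception.
      return Left(mapPostgrestCode(e.code, e.message));
    }
  }

  // createEvent, getRecentEventsByChapter and deleteEvent follow the same
  // try / map / catch shape and are elided here.
}

final claimEventsRepositoryProvider = Provider<ClaimEventsRepositoryInterface>(
  (ref) => SupabaseClaimEventsRepository(Supabase.instance.client),
);
```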

Testing Requirements

Unit tests: mock the Supabase client and verify that each repository method calls the correct table/filter/order chain. Test the success path and each failure type (network error, permission denied, not found).

Integration tests: spin up a local Supabase instance (or use a test project) and verify RLS — a coordinator JWT for chapter A must receive an empty result set when querying events in chapter B. Verify that event creation sets coordinator_id from the JWT, not from the payload.

Use flutter_test with a fake/stub SupabaseClient for unit tests. Target 90%+ line coverage on the repository class.
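A unit test along these lines would pin the failure mapping. FakeSupabaseClient and SupabaseClaimEventsRepository are hypothetical names: the fake is configured to raise a permission error (PostgREST code 42501) for every query, and the assertion checks that the repository surfaces it as a PermissionFailure rather than a raw exception.

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('permission errors are mapped to PermissionFailure', () async {
    // Hypothetical fake: every query throws PostgrestException(code: '42501').
    final repo = SupabaseClaimEventsRepository(FakeSupabaseClient.denyAll());

    final result = await repo.getEventsByClaimId('claim-123');

    expect(result, isA<Left<Failure, List<ClaimEvent>>>());
    expect(
      (result as Left<Failure, List<ClaimEvent>>).value,
      isA<PermissionFailure>(),
    );
  });
}
```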

Epic Risks (3)
Risk 1: technical (medium impact, medium probability)

Maintaining multi-select state across paginated list pages is architecturally complex in Flutter with Riverpod/BLoC. If the selection state is stored in the widget tree rather than the state layer, page transitions and list redraws can silently clear selections, forcing coordinators to re-select every claim.

Mitigation & Contingency

Mitigation: Store the selected claim ID set in a dedicated Riverpod StateNotifier outside the paginated list widget tree. The paginated list reads selection state from this provider and does not own it. Selection persists independently of list scroll position or page loads.

Contingency: If cross-page selection proves prohibitively complex, limit bulk selection to the currently visible page (add a clear warning in the UI) and prioritise single-page bulk approval for the initial release.
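The mitigation above could take roughly this shape with Riverpod. The provider and notifier names are assumptions; the key point is that the selected-ID set lives outside the paginated list's widget tree.

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

/// Holds the selected claim IDs independently of the list widgets, so
/// page loads and list redraws cannot clear the selection.
class ClaimSelectionNotifier extends StateNotifier<Set<String>> {
  ClaimSelectionNotifier() : super(const {});

  void toggle(String claimId) {
    final next = Set<String>.from(state);
    next.contains(claimId) ? next.remove(claimId) : next.add(claimId);
    state = next; // assign a new instance so listeners rebuild
  }

  void clear() => state = const {};
}

final claimSelectionProvider =
    StateNotifierProvider<ClaimSelectionNotifier, Set<String>>(
  (ref) => ClaimSelectionNotifier(),
);
```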

Risk 2: integration (medium impact, medium probability)

If a coordinator has the queue open while another coordinator approves claims from the same queue (possible in large organisations with shared chapter coverage), the Realtime update may arrive out of order or be missed during a reconnect, leaving the first coordinator's view stale and allowing them to attempt to approve an already-actioned claim.

Mitigation & Contingency

Mitigation: The ApprovalWorkflowService's optimistic locking (from the foundation epic) will catch the concurrent edit at the database level. The CoordinatorReviewQueueScreen should handle the resulting ConcurrencyException by removing the claim from the local list and showing a brief snackbar: 'This claim was already actioned by another coordinator.'

Contingency: Add a queue staleness indicator (a subtle 'last updated X seconds ago' label) and a manual refresh button as a fallback for coordinators who notice inconsistencies.
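The handling described in the mitigation could be sketched as follows. ConcurrencyException is the exception named above from the foundation epic's optimistic locking; approvalWorkflowService and queueController are hypothetical collaborators, not confirmed names.

```dart
import 'package:flutter/material.dart';

Future<void> approveClaim(BuildContext context, String claimId) async {
  try {
    await approvalWorkflowService.approve(claimId);
    queueController.removeClaim(claimId); // approved: drop from the queue
  } on ConcurrencyException {
    // Another coordinator got there first: remove the stale entry and inform.
    queueController.removeClaim(claimId);
    ScaffoldMessenger.of(context).showSnackBar(
      const SnackBar(
        content: Text(
          'This claim was already actioned by another coordinator.',
        ),
      ),
    );
  }
}
```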

Risk 3: dependency (low impact, high probability)

The end-to-end test requirement that a peer mentor receives a push notification within 30 seconds of coordinator approval depends on FCM delivery latency, which is outside the application's control and can vary significantly in CI/CD environments.

Mitigation & Contingency

Mitigation: Structure end-to-end tests to verify notification intent (correct FCM payload dispatched, correct Realtime event emitted) rather than actual device delivery timing. Use test doubles for FCM delivery in automated tests and reserve real-device delivery tests for manual pre-release validation.

Contingency: If notification timing requirements must be validated in automation, instrument the ApprovalNotificationService with a test hook that records dispatch timestamps and assert against those rather than actual FCM callbacks.
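The contingency's test hook could be as small as an optional callback on the service. All names here are assumptions; production wiring leaves the hook null so it has no runtime cost, while automated tests pass a recorder and assert on dispatch timestamps instead of real FCM callbacks.

```dart
class ApprovalNotificationService {
  ApprovalNotificationService({this.onDispatch});

  /// Test hook: receives the timestamp of each payload handed to FCM.
  final void Function(DateTime dispatchedAt)? onDispatch;

  Future<void> notifyApproval(String claimId) async {
    // ... build the FCM payload and dispatch it here ...
    onDispatch?.call(DateTime.now());
  }
}
```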