Implement paginated history fetch with org scoping
epic-bufdir-report-history-services-task-001 — Build the core fetch method in ReportHistoryService that retrieves paginated report history records scoped to the authenticated user's organization. Integrate with the report history repository to query records, apply cursor-based or offset pagination, and return typed result objects including total count and page metadata.
Acceptance Criteria
Technical Requirements
Implementation Notes
Define a `PaginatedResult<T>` type carrying the page's items, the total count, and page metadata (e.g. `hasNextPage`).
Resolve the organization through an injected auth session provider exposing `get currentOrganizationId` — do not call `Supabase.instance.client.auth.currentUser` directly in the service (it breaks testability). Decide between offset pagination (simpler, acceptable for this use case — report history is append-only and low-volume) and cursor pagination (more correct for real-time data). Given the Bufdir reporting context, offset with `page`/`pageSize` is sufficient and matches the repository's `.range()` implementation. Document this decision in the method's Dart doc comment.
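The result type and fetch method described above could be sketched as follows. This is a sketch only: `ReportHistoryRecord`, `ReportHistoryRepository.fetchPage`, `AuthSessionProvider`, and `AuthorizationException` are assumed names from this feature's other tasks, not confirmed APIs.

```dart
/// Typed page of results with total count and page metadata.
class PaginatedResult<T> {
  const PaginatedResult({
    required this.items,
    required this.totalCount,
    required this.page,
    required this.pageSize,
  });

  final List<T> items;
  final int totalCount;
  final int page;
  final int pageSize;

  bool get hasNextPage => page * pageSize < totalCount;
}

class ReportHistoryService {
  ReportHistoryService(this._repository, this._session);

  final ReportHistoryRepository _repository;
  final AuthSessionProvider _session;

  /// Fetches one page of report history for the current organization.
  ///
  /// Uses offset pagination (`page`/`pageSize`): report history is
  /// append-only and low-volume, and offsets map directly onto the
  /// repository's `.range()` implementation.
  Future<PaginatedResult<ReportHistoryRecord>> fetchHistory({
    int page = 1,
    int pageSize = 20,
  }) async {
    if (pageSize < 1 || pageSize > 100) {
      throw ArgumentError.value(pageSize, 'pageSize', 'must be 1–100');
    }
    // Organization comes from the injected session provider, never
    // from a global Supabase client, so tests can substitute a mock.
    final orgId = _session.currentOrganizationId;
    if (orgId == null) {
      throw AuthorizationException('No organization in session');
    }
    final offset = (page - 1) * pageSize;
    final (records, totalCount) = await _repository.fetchPage(
      orgId,
      offset: offset,
      limit: pageSize,
    );
    return PaginatedResult(
      items: records,
      totalCount: totalCount,
      page: page,
      pageSize: pageSize,
    );
  }
}
```

The repository is assumed here to return both the records and the total count in one call (a Dart 3 record), so the service can compute `hasNextPage` without a second query.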
Testing Requirements
Unit tests using Mockito mocks for `ReportHistoryRepository` and the auth session provider. Test cases: (1) happy path returns correctly structured `PaginatedResult`; (2) page beyond total count returns empty list with `hasNextPage: false`; (3) organization_id from session is passed to repository — never a hardcoded value; (4) pageSize outside 1–100 throws `ArgumentError`; (5) missing organization_id in session throws `AuthorizationException`; (6) repository exception propagates correctly (service does not swallow it). Use `ProviderContainer` from Riverpod test utilities to inject mock dependencies if service is a Riverpod provider.
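Two of the test cases above could be sketched as follows, assuming `MockReportHistoryRepository` and `MockAuthSessionProvider` are generated by Mockito's `@GenerateMocks` and that the service has the constructor and `fetchHistory` signature assumed in this task:

```dart
void main() {
  late MockReportHistoryRepository repository;
  late MockAuthSessionProvider session;
  late ReportHistoryService service;

  setUp(() {
    repository = MockReportHistoryRepository();
    session = MockAuthSessionProvider();
    service = ReportHistoryService(repository, session);
  });

  test('pageSize outside 1–100 throws ArgumentError', () {
    when(session.currentOrganizationId).thenReturn('org-1');
    expect(() => service.fetchHistory(pageSize: 0), throwsArgumentError);
    expect(() => service.fetchHistory(pageSize: 101), throwsArgumentError);
  });

  test('organization_id from session is passed to repository', () async {
    when(session.currentOrganizationId).thenReturn('org-1');
    when(repository.fetchPage('org-1', offset: 0, limit: 20))
        .thenAnswer((_) async => (<ReportHistoryRecord>[], 0));

    await service.fetchHistory();

    // Verifies the session's org id reaches the repository —
    // never a hardcoded value.
    verify(repository.fetchPage('org-1', offset: 0, limit: 20)).called(1);
  });
}
```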
The ReportReexportCoordinator must invoke the Bufdir export pipeline defined in the bufdir-report-export feature. If that feature's internal API changes (renamed services, altered parameters), the re-export coordinator will break silently at runtime.
Mitigation & Contingency
Mitigation: Define a stable, versioned interface (abstract class or Dart interface) for the export pipeline entry point. The re-export coordinator depends only on this interface, not on concrete export service internals. Document the contract in both features.
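The stable interface could look like the sketch below. All names (`BufdirExportPipeline`, `ExportResult`, `ReportPeriod`) are illustrative; the concrete export service in bufdir-report-export would implement this interface, and the re-export coordinator would depend only on it.

```dart
/// Versioned contract for the Bufdir export pipeline entry point.
///
/// Bump [contractVersion] on any breaking change so the re-export
/// coordinator can fail fast at startup instead of silently at runtime.
abstract class BufdirExportPipeline {
  static const int contractVersion = 1;

  /// Regenerates the export for [period], scoped to [organizationId].
  Future<ExportResult> exportPeriod({
    required String organizationId,
    required ReportPeriod period,
  });
}
```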
Contingency: If a change in the export pipeline breaks the re-export coordinator, fall back to surfacing a clear 'regeneration unavailable' message to the coordinator, with instructions to use the primary export screen for the same period as a workaround while the interface mismatch is fixed.
The audit trail must be immutable — coordinators must not be able to edit or delete past events. If the RLS policies allow UPDATE or DELETE on audit event rows, a coordinator could suppress evidence of a re-export or failed submission.
Mitigation & Contingency
Mitigation: Apply INSERT-only RLS policies to the audit events table (no UPDATE, no DELETE for any non-service-role user). Use a separate service-role key for writing audit events, never the user's JWT. Validate this in integration tests by asserting that UPDATE and DELETE calls from coordinator-role sessions are rejected with RLS errors.
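A minimal sketch of the policy setup, assuming the table is named `audit_events` and the org id is carried in the JWT claim `organization_id` (both illustrative). With RLS enabled and no matching policy, Postgres rejects the statement, so omitting UPDATE/DELETE policies is itself the enforcement; the service-role key bypasses RLS and performs the writes.

```sql
-- Enable RLS; from here on, any statement without a matching policy
-- is rejected for non-service-role sessions.
alter table audit_events enable row level security;

-- Coordinators may only read their own organization's events.
create policy audit_events_select
  on audit_events
  for select
  to authenticated
  using (organization_id = (auth.jwt() ->> 'organization_id'));

-- Intentionally no INSERT, UPDATE, or DELETE policies for
-- authenticated users: inserts go through the service-role key
-- (which bypasses RLS), and UPDATE/DELETE from coordinator-role
-- sessions fail with RLS errors, as the integration tests assert.
```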
Contingency: If immutability is compromised before detection, run a database audit comparing the audit log against the main history table timestamps to identify tampered records, restore from backup if needed, and issue a patch RLS migration immediately.
The user stories require filter state (year, period type, status) to persist within a session so coordinators do not lose context when navigating away. Implementing this with Riverpod state management could cause stale filter state if the provider is not properly scoped to the session lifecycle.
Mitigation & Contingency
Mitigation: Scope the filter state provider to the router's history route scope, not globally. Use autoDispose with a keepAlive flag tied to the session so filters reset on logout but persist on tab switches within the same session.
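One way to tie keepAlive to the session lifecycle in Riverpod, sketched with assumed names (`HistoryFilter`, `sessionProvider`): the provider is `autoDispose`, but a `KeepAliveLink` holds it alive while a session exists; on logout the link is closed and the filter state resets.

```dart
final historyFilterProvider =
    NotifierProvider.autoDispose<HistoryFilterNotifier, HistoryFilter>(
        HistoryFilterNotifier.new);

class HistoryFilterNotifier extends AutoDisposeNotifier<HistoryFilter> {
  @override
  HistoryFilter build() {
    // Persist across tab switches within the session...
    final link = ref.keepAlive();
    // ...but release the link when the session ends, so the state is
    // disposed and filters reset on logout.
    ref.listen(sessionProvider, (_, session) {
      if (session == null) link.close();
    });
    return const HistoryFilter.initial();
  }

  void setYear(int year) => state = state.copyWith(year: year);
}
```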
Contingency: If filter state becomes stale or leaks between sessions, add an explicit reset in the logout handler that disposes all scoped providers. This is a UX degradation (coordinator must re-apply filters) rather than a data integrity issue.