User list data layer and Supabase RLS queries
epic-admin-portal-user-management-task-001 — Implement the data access layer for the User Account Management Screen, including Supabase queries that respect RLS-enforced views scoped to the admin's organisation. Build user fetching with cursor-based pagination so that organisations with hundreds or thousands of users can be listed without degraded performance.
Acceptance Criteria
Technical Requirements
Implementation Notes
Use cursor-based pagination over offset pagination — NHF has up to 1,400 local chapters with many users, and offset pagination degrades at scale. The cursor should encode the last row's (created_at, id) tuple as a base64 JSON string. Note that chaining `.gt('created_at', cursor.createdAt)` with a separate `.or(...)` ANDs the two filters together and silently drops rows whose created_at is later than the cursor but whose id is not; the keyset predicate must instead be expressed as a single disjunction, `.or('created_at.gt.{cursor.createdAt},and(created_at.eq.{cursor.createdAt},id.gt.{cursor.id})')`, paired with `ORDER BY created_at, id` for stable ordering. Create a `UserRepository` class in `lib/data/repositories/admin/user_repository.dart`.
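A minimal sketch of the cursor encoding and the keyset filter string (TypeScript is used here for illustration; the task itself targets Dart, and the supabase-js builder syntax is analogous to the Dart client's). `UserCursor`, `encodeCursor`, and `keysetFilter` are illustrative names, not part of the spec:

```typescript
interface UserCursor {
  createdAt: string; // ISO-8601 timestamp of the last row on the previous page
  id: string;        // id of that row — tie-breaker for equal timestamps
}

// Serialise the (created_at, id) tuple as a base64 JSON string, as specified.
function encodeCursor(c: UserCursor): string {
  return Buffer.from(JSON.stringify(c)).toString("base64");
}

function decodeCursor(s: string): UserCursor {
  return JSON.parse(Buffer.from(s, "base64").toString("utf8"));
}

// Build the PostgREST `or` filter implementing the keyset predicate:
//   created_at > c.createdAt OR (created_at = c.createdAt AND id > c.id)
function keysetFilter(c: UserCursor): string {
  return `created_at.gt.${c.createdAt},and(created_at.eq.${c.createdAt},id.gt.${c.id})`;
}
```

With supabase-js this would be applied as roughly `.or(keysetFilter(cursor)).order('created_at').order('id').limit(pageSize)`. One caveat: real ISO timestamps contain PostgREST-reserved characters (`:`), so in practice the filter value may need to be wrapped in double quotes per PostgREST's reserved-character rules.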
Define a `PaginatedResult<T>` wrapper that carries the fetched items and a nullable `nextCursor` (null when no further pages exist).
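A hedged sketch of that wrapper's contract, again in TypeScript rather than the task's Dart; the `toPage` helper and the fetch-one-extra-row convention are assumptions, not spec:

```typescript
interface PaginatedResult<T> {
  items: T[];
  nextCursor: string | null; // null when the last page has been reached
}

// Build a page from raw rows, assuming the query fetched pageSize + 1 rows
// so a trailing extra row signals that another page exists.
function toPage<T extends { created_at: string; id: string }>(
  rows: T[],
  pageSize: number,
): PaginatedResult<T> {
  const hasMore = rows.length > pageSize;
  const items = hasMore ? rows.slice(0, pageSize) : rows;
  if (!hasMore) return { items, nextCursor: null };
  const last = items[items.length - 1];
  return {
    items,
    nextCursor: Buffer.from(
      JSON.stringify({ createdAt: last.created_at, id: last.id }),
    ).toString("base64"),
  };
}
```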
Testing Requirements
Write both unit tests (mocked SupabaseClient) and integration tests (local Supabase). Unit tests: (1) fetchUsers returns a correctly mapped AppUser list from a mocked JSON response; (2) the cursor is correctly appended to subsequent requests; (3) the role filter is correctly serialised into the PostgREST query string; (4) an empty result set returns an empty list with a null next_cursor. Integration tests against local Supabase seeded with 3 orgs and 50 users in org_a: (1) RLS isolation — an org_b admin fetches org_b users only; (2) pagination — 3 pages of 20 return all 50 users with no duplicates or gaps; (3) name search — a prefix search for 'An' returns only matching users. Run `EXPLAIN ANALYZE` on the pagination query and assert the plan uses an Index Scan rather than a Seq Scan.
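Unit tests (1) and (4) above reduce to assertions over a pure row mapper, which a minimal sketch can show without a mocked client (TypeScript for illustration; AppUser's real fields come from the schema, so the ones shown here — id, email, full_name, role — are assumptions):

```typescript
interface AppUser {
  id: string;
  email: string;
  fullName: string;
  role: string;
}

// Map one raw PostgREST JSON row onto the typed model.
function mapRow(json: Record<string, unknown>): AppUser {
  return {
    id: String(json["id"]),
    email: String(json["email"]),
    fullName: String(json["full_name"]),
    role: String(json["role"]),
  };
}
```

In a test, feeding the mocked response body through `rows.map(mapRow)` covers case (1), and an empty `rows` array trivially yields an empty list for case (4).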
Displaying NHF users with membership in up to 5 local chapters in a flat list view without duplicating entries requires a non-trivial aggregation query. Incorrect query design could result in duplicated user rows or missing chapter affiliations, confusing admins and causing incorrect role assignments.
Mitigation & Contingency
Mitigation: Design the user list query to GROUP BY user_id and aggregate chapter affiliations as an array field. Use AdminRepository's typed models to surface this aggregated structure to the UI. Validate with a test dataset containing users in 5 chapters.
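The GROUP BY shape described above can be sketched as follows (TypeScript for illustration; table and column names in the SQL comment are assumptions). The pure function mirrors the server-side aggregation over flat join rows and is useful in tests for asserting "one row per user, all chapters attached":

```typescript
// Server-side, the query would be roughly:
//   SELECT u.id, u.full_name, array_agg(c.name) AS chapters
//   FROM users u
//   LEFT JOIN chapter_memberships m ON m.user_id = u.id
//   LEFT JOIN chapters c ON c.id = m.chapter_id
//   GROUP BY u.id, u.full_name;
interface JoinRow { userId: string; fullName: string; chapter: string | null; }
interface UserWithChapters { userId: string; fullName: string; chapters: string[]; }

// Collapse one row per (user, chapter) pair into one row per user,
// collecting chapter affiliations into an array — no duplicated users,
// no dropped affiliations.
function aggregateChapters(rows: JoinRow[]): UserWithChapters[] {
  const byUser = new Map<string, UserWithChapters>();
  for (const r of rows) {
    let u = byUser.get(r.userId);
    if (!u) {
      u = { userId: r.userId, fullName: r.fullName, chapters: [] };
      byUser.set(r.userId, u);
    }
    if (r.chapter !== null) u.chapters.push(r.chapter);
  }
  return [...byUser.values()];
}
```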
Contingency: If aggregation query complexity proves too high for real-time filtering, implement a separate multi-chapter affiliation fetch triggered only when a specific user row is expanded, reducing query complexity for the base list.
Composable multi-dimensional filters (role + chapter + status + certification state) applied server-side against an org with 2,000+ users may produce slow queries, particularly when filtering by certification state requires joining an additional table.
Mitigation & Contingency
Mitigation: Ensure the relevant filter columns (role, status, chapter_id, certification_expiry) are indexed in Supabase. Use cursor-based pagination rather than OFFSET to maintain consistent performance at high page numbers. Profile filter query combinations against a large dataset during development.
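One way to keep the four filter dimensions composable is to reduce them to (column, operator, value) tuples that the repository applies to the query builder one at a time — a sketch under assumed column names (TypeScript for illustration; the task targets Dart):

```typescript
interface UserFilters {
  role?: string;
  status?: string;
  chapterId?: string;
  certificationExpiresBefore?: string; // ISO date, for certification-state filtering
}

type Filter = [column: string, op: string, value: string];

// Translate the optional filter dimensions into PostgREST filter tuples;
// absent dimensions simply contribute nothing.
function buildFilters(f: UserFilters): Filter[] {
  const out: Filter[] = [];
  if (f.role) out.push(["role", "eq", f.role]);
  if (f.status) out.push(["status", "eq", f.status]);
  if (f.chapterId) out.push(["chapter_id", "eq", f.chapterId]);
  if (f.certificationExpiresBefore)
    out.push(["certification_expiry", "lt", f.certificationExpiresBefore]);
  return out;
}
```

With supabase-js the tuples would be applied as roughly `filters.reduce((q, [col, op, v]) => q.filter(col, op, v), query)`, keeping every combination server-side so the indexed columns can do the work.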
Contingency: If multi-filter performance degrades in production, introduce a denormalised search index table updated on user status changes, allowing the list query to filter from a single table.
Deactivating a user account that has ongoing activity assignments, open expense claims, or active chapter affiliations may leave orphaned records or break downstream workflows if the deactivation does not trigger correct cascade handling.
Mitigation & Contingency
Mitigation: Define and document the expected state of each dependent record type on user deactivation before implementing the toggle. Implement deactivation as a UserManagementService operation that checks for and warns about open dependencies before persisting. Write integration tests covering each dependency type.
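The dependency check at the heart of that mitigation can be sketched as a pure decision function over counts the service gathers first (TypeScript for illustration; the field names mirror the dependency types named in the risk and are otherwise assumptions):

```typescript
interface DependencyCounts {
  activityAssignments: number;
  openExpenseClaims: number;
  activeChapterAffiliations: number;
}

interface DeactivationCheck {
  ok: boolean;        // true only when no open dependencies remain
  warnings: string[]; // one human-readable warning per blocked dependency type
}

// Inspect the dependent-record counts and produce the warnings the admin
// must acknowledge before the deactivation is persisted.
function checkDeactivation(d: DependencyCounts): DeactivationCheck {
  const warnings: string[] = [];
  if (d.activityAssignments > 0)
    warnings.push(`${d.activityAssignments} ongoing activity assignment(s)`);
  if (d.openExpenseClaims > 0)
    warnings.push(`${d.openExpenseClaims} open expense claim(s)`);
  if (d.activeChapterAffiliations > 0)
    warnings.push(`${d.activeChapterAffiliations} active chapter affiliation(s)`);
  return { ok: warnings.length === 0, warnings };
}
```

Keeping the decision pure means each dependency type's integration test only needs to seed one kind of record and assert on the returned warnings.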
Contingency: If orphaned record issues are discovered post-launch, provide an admin-accessible reconciliation view that surfaces users with inconsistent dependency states and allows manual resolution without requiring a code deploy.