Implement HierarchyService Supabase read queries
epic-organizational-hierarchy-management-core-services-task-002 — Implement the read-side Supabase queries in HierarchyService: fetching a single unit by ID, listing all units under a parent, and fetching the full recursive adjacency-list tree using a Supabase RPC or a recursive CTE. Wire results through the hierarchy cache.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 1 - 540 tasks
Can start after Tier 0 completes
Implementation Notes
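A minimal Dart sketch of the read-side surface this task covers; the OrganizationUnit field set and constructor shapes are illustrative assumptions, not fixed by this task:

```dart
// Read-side surface for this task. Field names beyond id/parent_id/name
// are assumptions for illustration.
class OrganizationUnit {
  final String id;
  final String? parentId;
  final String name;
  const OrganizationUnit({required this.id, this.parentId, required this.name});

  factory OrganizationUnit.fromRow(Map<String, dynamic> row) => OrganizationUnit(
        id: row['id'] as String,
        parentId: row['parent_id'] as String?,
        name: row['name'] as String,
      );
}

class HierarchyNode {
  final OrganizationUnit unit;
  final List<HierarchyNode> children = [];
  HierarchyNode(this.unit);
}

abstract class HierarchyService {
  /// Fetch a single unit by ID; throws UnitNotFound if absent.
  Future<OrganizationUnit> getUnit(String id);

  /// List the direct children of [parentId], sorted by name.
  Future<List<OrganizationUnit>> listChildren(String parentId);

  /// Fetch the full tree rooted at [rootId], served from the hierarchy
  /// cache when available.
  Future<HierarchyNode> getFullTree(String rootId);
}
```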
For getFullTree, prefer a Supabase RPC call to a get_full_hierarchy(root_id uuid) PostgreSQL function over client-side recursive fetching; this avoids N+1 queries and handles NHF's deep tree safely. The RPC function should return a flat list of {id, parent_id, name, depth} rows, which the Dart code assembles into a HierarchyNode tree using a Map keyed by id (see the assembly sketch below).
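A sketch of that assembly, assuming the supabase_flutter client and the OrganizationUnit/HierarchyNode types from the sketch above; buildTree is a hypothetical helper, factored out so the assembly can be unit-tested without Supabase:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

// Pure assembly step over the flat RPC rows.
HierarchyNode buildTree(List<Map<String, dynamic>> rows, String rootId) {
  // First pass: one node per row, keyed by id for O(1) parent lookup.
  final nodesById = {
    for (final row in rows)
      row['id'] as String: HierarchyNode(OrganizationUnit.fromRow(row)),
  };
  // Second pass: attach each node to its parent. Rows whose parent falls
  // outside the result set are skipped rather than crashing the build.
  for (final node in nodesById.values) {
    final parentId = node.unit.parentId;
    if (node.unit.id != rootId && parentId != null) {
      nodesById[parentId]?.children.add(node);
    }
  }
  final root = nodesById[rootId];
  if (root == null) throw StateError('RPC returned no row for root $rootId');
  return root;
}

// Thin wrapper: one RPC round-trip, then client-side assembly.
Future<HierarchyNode> getFullTree(SupabaseClient supabase, String rootId) async {
  final rows = await supabase
      .rpc('get_full_hierarchy', params: {'root_id': rootId});
  return buildTree((rows as List).cast<Map<String, dynamic>>(), rootId);
}
```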
Inject HierarchyService as a singleton via Riverpod. Avoid holding a reference to the full tree directly in BLoC state; keep the tree in the cache service and expose it via a Riverpod provider that BLoCs read from. This prevents unnecessary state rebuilds on unrelated BLoC events (see the wiring sketch below).
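A wiring sketch under those constraints, assuming flutter_riverpod 2.x; SupabaseHierarchyService and HierarchyCache are hypothetical names for the concrete implementation and the cache service:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

/// Minimal cache holder; a real implementation would add invalidation hooks
/// for the write side.
class HierarchyCache {
  HierarchyNode? _tree;
  HierarchyNode? get tree => _tree;
  void store(HierarchyNode tree) => _tree = tree;
  void invalidate() => _tree = null;
}

// Singleton wiring: one cache, one service, both app-lifetime.
final hierarchyCacheProvider =
    Provider<HierarchyCache>((ref) => HierarchyCache());

final hierarchyServiceProvider = Provider<HierarchyService>(
  (ref) => SupabaseHierarchyService( // hypothetical concrete implementation
    client: Supabase.instance.client,
    cache: ref.watch(hierarchyCacheProvider),
  ),
);

// BLoCs read the tree through this provider instead of holding it in their
// own state, so unrelated BLoC events never rebuild the tree.
final fullTreeProvider = FutureProvider.family<HierarchyNode, String>(
  (ref, rootId) => ref.watch(hierarchyServiceProvider).getFullTree(rootId),
);
```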
Testing Requirements
Write unit tests using mockito or manual fakes to mock the Supabase client. Test cases:
(1) getUnit returns the correct OrganizationUnit when Supabase returns a matching row;
(2) getUnit throws UnitNotFound when Supabase returns an empty list;
(3) listChildren returns a correctly mapped list sorted by name;
(4) getFullTree builds the correct HierarchyNode tree from a flat list of units (test with a 3-level fixture: root → 2 children → 4 grandchildren; see the sketch below);
(5) getFullTree returns the cached result on a second call without hitting Supabase;
(6) a Supabase PostgrestException is caught and rethrown as HierarchyServiceError.
Also write an integration test (marked @Tags(['integration'])) that calls a local Supabase instance seeded with fixture data. Run unit tests in CI; run integration tests only in the staging pipeline.
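A sketch of test case (4), assuming package:test and the hypothetical buildTree helper from the Implementation Notes sketch; the fixture names are invented:

```dart
import 'package:test/test.dart';

void main() {
  // Test case (4): a 3-level flat fixture assembled into the right shape.
  // buildTree is the pure assembly helper sketched under Implementation Notes,
  // so no Supabase client needs to be mocked here.
  test('getFullTree assembly builds a 3-level tree from flat rows', () {
    final rows = <Map<String, dynamic>>[
      {'id': 'root', 'parent_id': null, 'name': 'NHF National', 'depth': 0},
      {'id': 'c1', 'parent_id': 'root', 'name': 'Region A', 'depth': 1},
      {'id': 'c2', 'parent_id': 'root', 'name': 'Region B', 'depth': 1},
      {'id': 'g1', 'parent_id': 'c1', 'name': 'Chapter A1', 'depth': 2},
      {'id': 'g2', 'parent_id': 'c1', 'name': 'Chapter A2', 'depth': 2},
      {'id': 'g3', 'parent_id': 'c2', 'name': 'Chapter B1', 'depth': 2},
      {'id': 'g4', 'parent_id': 'c2', 'name': 'Chapter B2', 'depth': 2},
    ];

    final tree = buildTree(rows, 'root');

    expect(tree.unit.name, 'NHF National');
    expect(tree.children, hasLength(2));
    expect(tree.children.expand((c) => c.children), hasLength(4));
  });
}
```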
Risk: Injecting all unit assignment IDs into JWT claims for users assigned to many units (up to 5 for NHF peer mentors, many more for national coordinators) may exceed JWT size limits, causing authentication failures.
Mitigation & Contingency
Mitigation: Resolve unit IDs through a Supabase session variable or a dedicated Postgres helper function rather than embedding them directly in the JWT payload; for example, populate set_config('app.unit_ids', ...) per session, or have RLS helper functions query the assignments table at policy evaluation time.
Contingency: Fall back to querying the unit_assignments table directly within RLS policies using the authenticated user ID, accepting a small per-query overhead in exchange for removing the JWT size constraint.
Risk: Rendering 1,400+ nodes in a recursive Flutter tree widget may cause jank or memory pressure on lower-end devices used by field peer mentors, degrading the admin experience.
Mitigation & Contingency
Mitigation: Implement lazy tree expansion: only the root level is rendered on initial load, and child nodes are rendered on demand when the parent is expanded (see the widget sketch below). Use const constructors and ListView.builder for all node lists to minimize rebuild scope.
Contingency: Add a search/filter bar that scopes the visible tree to matching nodes, reducing the visible node count. Provide a 'flat list' fallback view for administrators who prefer searching over browsing the tree.
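One way the lazy-expansion mitigation could look, assuming Flutter's ExpansionTile and the HierarchyNode type from the Implementation Notes sketch; child tiles are only constructed once a parent is first expanded:

```dart
import 'package:flutter/material.dart';

class HierarchyTreeView extends StatelessWidget {
  const HierarchyTreeView({super.key, required this.roots});
  final List<HierarchyNode> roots;

  @override
  Widget build(BuildContext context) {
    // ListView.builder keeps off-screen top-level tiles unbuilt.
    return ListView.builder(
      itemCount: roots.length,
      itemBuilder: (context, i) => _NodeTile(node: roots[i]),
    );
  }
}

class _NodeTile extends StatefulWidget {
  const _NodeTile({required this.node});
  final HierarchyNode node;

  @override
  State<_NodeTile> createState() => _NodeTileState();
}

class _NodeTileState extends State<_NodeTile> {
  bool _expanded = false;

  @override
  Widget build(BuildContext context) {
    final node = widget.node;
    if (node.children.isEmpty) {
      return ListTile(title: Text(node.unit.name));
    }
    return ExpansionTile(
      title: Text(node.unit.name),
      onExpansionChanged: (open) => setState(() => _expanded = open),
      // Child tiles are only constructed after first expansion, so the
      // initial build cost stays proportional to the visible level.
      children: _expanded
          ? [for (final c in node.children) _NodeTile(node: c)]
          : const <Widget>[],
    );
  }
}
```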
Risk: Requirements for what constitutes a valid hierarchy structure may expand during NHF sign-off (e.g., mandatory coordinator assignments per chapter, minimum member counts per region), requiring repeated validator redesign.
Mitigation & Contingency
Mitigation: Design the validator as a pluggable rule engine where each check is a discrete, independently testable function (see the sketch below). New rules can then be added without changing the core validation orchestration. Surface all rules in a per-organization configuration table.
Contingency: Defer non-blocking validation rules to warning-level feedback rather than hard blocks, allowing structural changes to proceed while flagging potential issues for admin review.
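A sketch of that pluggable-rule shape, assuming the HierarchyNode type from the Implementation Notes sketch; the Severity split mirrors the warning-versus-blocking contingency above, and the example rule is illustrative:

```dart
enum Severity { warning, blocking }

class ValidationIssue {
  const ValidationIssue(this.rule, this.severity, this.message);
  final String rule;
  final Severity severity;
  final String message;
}

// Each rule is a standalone function over the tree; the orchestrator just
// folds over whichever rules are enabled for the organization.
typedef HierarchyRule = List<ValidationIssue> Function(HierarchyNode root);

List<ValidationIssue> validate(HierarchyNode root, List<HierarchyRule> rules) =>
    [for (final rule in rules) ...rule(root)];

// Example rule, independently testable. NHF-specific rules (mandatory
// coordinator per chapter, minimum member counts) would slot in the same way.
List<ValidationIssue> nonEmptyNames(HierarchyNode root) {
  final issues = <ValidationIssue>[];
  void walk(HierarchyNode n) {
    if (n.unit.name.trim().isEmpty) {
      issues.add(ValidationIssue('non_empty_names', Severity.blocking,
          'Unit ${n.unit.id} has an empty name'));
    }
    n.children.forEach(walk);
  }

  walk(root);
  return issues;
}
```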
Risk: Deploying RLS policy migrations to a shared Supabase project used by multiple organizations simultaneously could lock tables or interrupt active sessions, causing downtime during production migration.
Mitigation & Contingency
Mitigation: Write all RLS policy migrations idempotently; since Postgres does not support CREATE POLICY IF NOT EXISTS, pair each CREATE POLICY with a preceding DROP POLICY IF EXISTS (or guard it in a DO block). Schedule migrations during off-peak hours. Use Supabase's migration preview environment to validate policies against production data shapes before applying.
Contingency: Prepare rollback migration scripts for every RLS policy. If a migration causes issues, execute the rollback immediately and re-test the policy logic in staging before reattempting.