Priority: high | Complexity: medium | Area: backend | Status: pending | Assignee: backend specialist | Tier: 2

Acceptance Criteria

HierarchyService exposes getAncestorChain(String unitId) returning List<OrganizationalUnit> ordered from root to the direct parent of unitId (not including unitId itself)
HierarchyService exposes getDescendantSubtree(String unitId) returning List<OrganizationalUnit> containing all units in the subtree rooted at unitId (not including unitId itself)
Both methods read from the in-memory adjacency map and do NOT issue Supabase queries if the cache is warm
Results for both methods are cached per unitId using a Riverpod provider with autoDispose and a configurable TTL (default 5 minutes)
Cache is invalidated for affected unit IDs when a HierarchyChangedEvent is received (integration with task-006)
Calling getAncestorChain on the root node returns an empty list without error
Calling getDescendantSubtree on a leaf node returns an empty list without error
Both methods throw HierarchyUnitNotFoundError if the given unitId does not exist in the adjacency map
AccessScopeService can call getDescendantSubtree to obtain all unit IDs under a coordinator's assigned scope
HierarchyTreeView can call getDescendantSubtree to expand all children without additional Supabase queries
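Read together, the criteria above imply a service contract along these lines. This is a Dart sketch: the method signatures and error type come from the criteria, the doc comments restate the edge-case behavior, and the `parentId` field on the model stub is an assumption.

```dart
/// Minimal model stub; the real OrganizationalUnit carries more fields.
class OrganizationalUnit {
  final String id;
  final String? parentId; // null for the organization root (assumption)
  const OrganizationalUnit(this.id, this.parentId);
}

/// Thrown when a unitId is absent from the adjacency map.
class HierarchyUnitNotFoundError extends Error {
  final String unitId;
  HierarchyUnitNotFoundError(this.unitId);
}

abstract class HierarchyService {
  /// Ancestors of [unitId] ordered root -> direct parent, excluding
  /// [unitId] itself. Empty for the root; throws
  /// [HierarchyUnitNotFoundError] for an unknown ID.
  Future<List<OrganizationalUnit>> getAncestorChain(String unitId);

  /// All units strictly below [unitId]. Empty for a leaf; throws
  /// [HierarchyUnitNotFoundError] for an unknown ID.
  Future<List<OrganizationalUnit>> getDescendantSubtree(String unitId);
}
```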

Technical Requirements

Frameworks
Flutter
Riverpod
APIs
Supabase, for initial adjacency-map hydration when the cache is cold
Data Models
OrganizationalUnit
HierarchyAdjacencyMap
HierarchyUnitNotFoundError
Performance Requirements
getAncestorChain must complete in O(depth) time using the parent-pointer map
getDescendantSubtree must complete in O(subtree size) using BFS on the children-list map
Cache hit must return in under 1ms; cache miss (including Supabase hydration) must complete in under 200ms for organizations with up to 2000 units
Security Requirements
Results must be filtered to units within the authenticated user's organization
Cached results must be scoped per organization ID to prevent cross-organization data leakage
UI Components
HierarchyTreeView (consumer of getDescendantSubtree)

Execution Context

Execution Tier
Tier 2

Tier 2 - 518 tasks

Can start after Tier 1 completes

Implementation Notes

Maintain two complementary maps derived from the same adjacency data: (1) a parent-pointer map (Map<String, String?>) for O(1) parent lookup used in ancestor-chain traversal, and (2) a children-list map (Map<String, List<String>>) for BFS in descendant traversal. Both can be built in a single O(N) pass over the Supabase units result. Use Riverpod's family provider to cache per unitId: final ancestorChainProvider = FutureProvider.family<List<OrganizationalUnit>, String>(...). For getDescendantSubtree, implement iterative BFS rather than recursion to keep stack and memory behavior predictable on large org charts (NHF: 1,400 local chapters potentially under one region).
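The two derived maps and both traversals can be sketched in plain Dart. The Riverpod and Supabase wiring is omitted; `HierarchyIndex` and the row shape (child ID paired with a nullable parent ID) are illustrative names, not part of the task's data models.

```dart
import 'dart:collection';

/// Thrown when a unitId is absent from the adjacency map.
class HierarchyUnitNotFoundError extends Error {
  final String unitId;
  HierarchyUnitNotFoundError(this.unitId);
}

class HierarchyIndex {
  final Map<String, String?> parentOf;        // O(1) parent lookup
  final Map<String, List<String>> childrenOf; // adjacency for BFS

  HierarchyIndex._(this.parentOf, this.childrenOf);

  /// Single O(N) pass over (unitId, parentId) rows; a null parent
  /// marks the root.
  factory HierarchyIndex.fromRows(List<MapEntry<String, String?>> rows) {
    final parentOf = <String, String?>{};
    final childrenOf = <String, List<String>>{};
    for (final row in rows) {
      parentOf[row.key] = row.value;
      childrenOf.putIfAbsent(row.key, () => []);
      final parent = row.value;
      if (parent != null) {
        childrenOf.putIfAbsent(parent, () => []).add(row.key);
      }
    }
    return HierarchyIndex._(parentOf, childrenOf);
  }

  /// Ancestors of [unitId], ordered root -> direct parent. O(depth).
  List<String> ancestorChain(String unitId) {
    if (!parentOf.containsKey(unitId)) {
      throw HierarchyUnitNotFoundError(unitId);
    }
    final chain = <String>[];
    for (var p = parentOf[unitId]; p != null; p = parentOf[p]) {
      chain.add(p);
    }
    return chain.reversed.toList();
  }

  /// All strict descendants of [unitId] via iterative BFS. O(subtree size).
  List<String> descendantSubtree(String unitId) {
    if (!childrenOf.containsKey(unitId)) {
      throw HierarchyUnitNotFoundError(unitId);
    }
    final result = <String>[];
    final queue = Queue<String>.of(childrenOf[unitId]!);
    while (queue.isNotEmpty) {
      final id = queue.removeFirst();
      result.add(id);
      queue.addAll(childrenOf[id] ?? const []);
    }
    return result;
  }
}
```

In the real service these methods would resolve IDs to `OrganizationalUnit` instances and sit behind the family provider; the traversal logic is unchanged.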

The cache TTL should be configurable via a HierarchyServiceConfig injectable so tests can set TTL to 0 for immediate invalidation. Coordinate with task-006 (event emission) to wire cache invalidation: HierarchyChangedEvent should carry the affected unitId(s) so only the relevant cache entries are invalidated rather than flushing the entire cache.
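One way the TTL config and targeted invalidation could fit together, in plain Dart: `SubtreeCache` and its fields are illustrative names (the real cache would live behind the Riverpod provider), while `HierarchyServiceConfig` and `HierarchyChangedEvent` come from this task and task-006.

```dart
/// Injectable config; tests set cacheTtl to Duration.zero to force
/// immediate expiry.
class HierarchyServiceConfig {
  final Duration cacheTtl;
  const HierarchyServiceConfig({this.cacheTtl = const Duration(minutes: 5)});
}

/// Emitted by task-006; carries the affected IDs so only the relevant
/// entries are flushed.
class HierarchyChangedEvent {
  final Set<String> affectedUnitIds;
  const HierarchyChangedEvent(this.affectedUnitIds);
}

class SubtreeCache {
  final HierarchyServiceConfig config;
  final _entries = <String, ({List<String> value, DateTime storedAt})>{};
  SubtreeCache(this.config);

  List<String>? lookup(String unitId, DateTime now) {
    final e = _entries[unitId];
    if (e == null) return null;
    // `>=` makes a TTL of zero expire entries immediately.
    if (now.difference(e.storedAt) >= config.cacheTtl) {
      _entries.remove(unitId);
      return null;
    }
    return e.value;
  }

  void store(String unitId, List<String> value, DateTime now) =>
      _entries[unitId] = (value: value, storedAt: now);

  /// task-006 integration point: invalidate only the affected entries
  /// rather than flushing the whole cache.
  void onHierarchyChanged(HierarchyChangedEvent event) =>
      event.affectedUnitIds.forEach(_entries.remove);
}
```

Passing `now` explicitly keeps expiry deterministic under test; production callers would pass `DateTime.now()`.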

Testing Requirements

Unit tests (flutter_test): getAncestorChain on root (empty list), on a mid-level node (correct ordered chain), on a leaf (all ancestors). getDescendantSubtree on root (all units), on a leaf (empty list), on a mid-level node (correct subtree). Both methods on unknown unitId throw HierarchyUnitNotFoundError. Cache test: second call returns cached result without invoking Supabase mock.

Cache invalidation test: after HierarchyChangedEvent emission, subsequent call re-computes. Performance test: 2000-node tree, assert both methods complete under 5ms on warm cache.
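The "second call must not invoke the Supabase mock" criterion reduces to counting backend calls. A plain-Dart sketch of that test shape (the real suite would use flutter_test with a mocked Supabase client; `CountingBackend` and `CachedHierarchy` are illustrative):

```dart
/// Stands in for the mocked Supabase client.
class CountingBackend {
  int calls = 0;
  List<String> fetchSubtree(String unitId) {
    calls++; // each increment represents one Supabase round trip
    return const ['region', 'chapterA'];
  }
}

/// Minimal cached wrapper: first lookup hits the backend, repeats
/// are served from the in-memory map.
class CachedHierarchy {
  final CountingBackend backend;
  final _cache = <String, List<String>>{};
  CachedHierarchy(this.backend);

  List<String> getDescendantSubtree(String unitId) =>
      _cache.putIfAbsent(unitId, () => backend.fetchSubtree(unitId));
}
```

Asserting `backend.calls == 1` after two identical lookups expresses the cache test; the invalidation test then fires a change event and asserts the count rises to 2.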

Component
Hierarchy Service (service, high)
Epic Risks (4)
Risk: security (high impact, medium probability)

Injecting all unit assignment IDs into JWT claims for users assigned to many units (up to 5 for NHF peer mentors, many more for national coordinators) may exceed JWT size limits, causing authentication failures.

Mitigation & Contingency

Mitigation: Store unit IDs in a Supabase session variable or resolve them through a dedicated Postgres function rather than embedding them directly in the JWT payload. For example, expose them via set_config('app.unit_ids', ...), or have RLS helper functions query the assignments table at policy evaluation time.

Contingency: Fall back to querying the unit_assignments table directly within RLS policies using the authenticated user ID, accepting a small per-query overhead in exchange for removing the JWT size constraint.

Risk: technical (medium impact, medium probability)

Rendering 1,400+ nodes in a recursive Flutter tree widget may cause jank or memory pressure on lower-end devices used by field peer mentors, degrading the admin experience.

Mitigation & Contingency

Mitigation: Implement lazy tree expansion — only the root level is rendered on initial load. Child nodes are rendered on demand when the parent is expanded. Use const constructors and ListView.builder for all node lists to minimize rebuild scope.

Contingency: Add a search/filter bar that scopes the visible tree to matching nodes, reducing the visible node count. Provide a 'flat list' fallback view for administrators who prefer searching over browsing the tree.
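The lazy-expansion mitigation can be sketched as a pure-Dart view model: only expanded nodes contribute rows, so a ListView.builder rendering `visibleRows` builds just the visible slice regardless of total tree size. `LazyTreeModel` and `TreeRow` are illustrative names, not part of the task's data models.

```dart
/// One visible row in the flattened tree; depth drives indentation.
class TreeRow {
  final String unitId;
  final int depth;
  TreeRow(this.unitId, this.depth);
}

class LazyTreeModel {
  final Map<String, List<String>> childrenOf;
  final Set<String> _expanded = {};
  LazyTreeModel(this.childrenOf);

  void toggle(String unitId) => _expanded.contains(unitId)
      ? _expanded.remove(unitId)
      : _expanded.add(unitId);

  /// Rows currently visible, depth-first; collapsed subtrees are
  /// skipped entirely, so initial load shows only the root.
  List<TreeRow> visibleRows(String rootId) {
    final rows = <TreeRow>[];
    void walk(String id, int depth) {
      rows.add(TreeRow(id, depth));
      if (_expanded.contains(id)) {
        for (final child in childrenOf[id] ?? const <String>[]) {
          walk(child, depth + 1);
        }
      }
    }

    walk(rootId, 0);
    return rows;
  }
}
```

Because `visibleRows` is an ordinary list, the same model also backs the 'flat list' fallback view by expanding nothing and feeding search hits straight into the builder.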

Risk: scope (medium impact, medium probability)

Requirements for what constitutes a valid hierarchy structure may expand during NHF sign-off (e.g., mandatory coordinator assignments per chapter, minimum member counts per region), requiring repeated validator redesign.

Mitigation & Contingency

Mitigation: Design the validator as a pluggable rule engine where each check is a discrete, independently testable function. New rules can be added without changing the core validation orchestration. Surface all rules in a configuration table per organization.

Contingency: Defer non-blocking validation rules to warning-level feedback rather than hard blocks, allowing structural changes to proceed while flagging potential issues for admin review.

Risk: integration (high impact, low probability)

Deploying RLS policy migrations to a shared Supabase project used by multiple organizations simultaneously could lock tables or interrupt active sessions, causing downtime during production migration.

Mitigation & Contingency

Mitigation: Write RLS migrations idempotently; Postgres does not support CREATE POLICY IF NOT EXISTS, so pair each CREATE POLICY with a preceding DROP POLICY IF EXISTS inside a transaction. Schedule migrations during off-peak hours. Use Supabase's migration preview environment to validate policies against production data shapes before applying.

Contingency: Prepare rollback migration scripts for every RLS policy. If a migration causes issues, execute the rollback immediately and re-test the policy logic in staging before reattempting.