Priority: critical · Complexity: high · Area: backend · Status: pending · Assignee: backend specialist · Tier 2

Acceptance Criteria

HierarchyService.reparentUnit() calls cycle detection before any Supabase write and throws CycleDetectedError if a cycle would result
CycleDetectedError is a typed Dart sealed class/exception containing the proposed parentId, targetUnitId, and the cycle path as a List<String>
DFS or BFS traversal operates on the in-memory or cached adjacency map (Map<String, List<String>>) and does NOT issue Supabase queries per node
Cycle detection completes in O(V+E) time where V = number of units and E = edges in the adjacency list
Attempting to set a unit as its own parent is detected and rejected with CycleDetectedError
Attempting to reparent a unit to one of its own descendants is detected and rejected
If the adjacency list cache is stale or missing, the service rebuilds it from Supabase before running the traversal
All existing reparent operations that do NOT create a cycle pass through without error
CycleDetectedError message is human-readable and suitable for display to a coordinator in the Flutter UI
Unit tests cover: direct self-parent, single-step cycle (A→B→A), multi-step cycle (A→B→C→A), and valid reparent with no cycle
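A minimal Dart sketch of the error type these criteria describe (field names come from the criteria, the message wording follows the Implementation Notes example; the plain-Exception shape is an assumption, and a sealed hierarchy could wrap it if other hierarchy errors share a base):

```dart
/// Raised when a proposed reparent would create a loop.
/// Shown as a plain Exception here; the criteria also allow a
/// sealed-class hierarchy if other hierarchy errors share a base.
class CycleDetectedError implements Exception {
  CycleDetectedError({
    required this.proposedParentId,
    required this.targetUnitId,
    required this.cyclePath,
  });

  /// The parent the caller tried to assign.
  final String proposedParentId;

  /// The unit being moved.
  final String targetUnitId;

  /// The full loop, e.g. [A, B, A], for display in the UI.
  final List<String> cyclePath;

  @override
  String toString() =>
      'Cannot move $targetUnitId under $proposedParentId: '
      'this would create a loop: ${cyclePath.join(' → ')}';
}
```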

Technical Requirements

Frameworks

Flutter
Riverpod
BLoC

APIs

Supabase REST/Realtime for adjacency-list hydration

Data models

OrganizationalUnit
HierarchyAdjacencyMap
CycleDetectedError

Performance requirements

Cycle detection must complete in under 50 ms for hierarchies of up to 2,000 nodes (NHF has 1,400+ local chapters)
The adjacency map must be loaded from cache; no per-node Supabase round trip during traversal
Cache rebuild is triggered at most once per mutation batch

Security requirements

Cycle detection runs server-side-equivalent logic in Dart before any Supabase write to prevent race conditions
The adjacency map must contain only units visible within the authenticated user's organization scope
CycleDetectedError must not expose internal node IDs outside the user's permitted scope

Execution Context

Execution Tier
Tier 2 (518 tasks)

Can start after Tier 1 completes

Implementation Notes

Use an iterative DFS with an explicit stack (avoid recursion stack overflow on deep hierarchies common in NHF's 1400-chapter structure). Maintain a visited Set and a recursionStack Set; if a node is encountered that is already in the recursionStack, a cycle exists. The adjacency map should be provided as a parameter to the pure detection function to keep it testable in isolation — HierarchyService is responsible for hydrating and passing the map. The CycleDetectedError should record the full cycle path so the UI can display a meaningful message like 'Cannot move Region Oslo under Chapter Grünerløkka — this would create a loop: Region Oslo → Chapter Grünerløkka → Region Oslo'.
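A sketch of the pure detection function under the notes above. Rather than tracking a recursionStack over the whole graph, it uses the equivalent observation that adding the proposed edge creates a cycle exactly when the new parent is already a descendant of the unit being moved, so an iterative DFS from that unit suffices; function and parameter names are illustrative:

```dart
/// Pure, iterative cycle check over an adjacency map
/// (parentId -> childIds). Returns the cycle path that reparenting
/// [unitId] under [newParentId] would create, or null if the move
/// is safe. Runs in O(V+E) with an explicit stack, so deep
/// hierarchies cannot overflow the call stack.
List<String>? findCyclePath(
  Map<String, List<String>> childrenOf,
  String unitId,
  String newParentId,
) {
  // Direct self-parenting is the trivial cycle.
  if (unitId == newParentId) return [unitId, unitId];

  final visited = <String>{};
  // Each stack entry is the path from unitId to the current node.
  final stack = <List<String>>[
    [unitId]
  ];
  while (stack.isNotEmpty) {
    final path = stack.removeLast();
    final node = path.last;
    if (!visited.add(node)) continue; // already explored
    for (final child in childrenOf[node] ?? const <String>[]) {
      if (child == newParentId) {
        // unitId -> ... -> newParentId, plus the proposed edge
        // newParentId -> unitId, closes the loop.
        return [...path, child, unitId];
      }
      stack.add([...path, child]);
    }
  }
  return null; // newParentId is not a descendant: no cycle
}
```

HierarchyService would call this before any Supabase write and throw CycleDetectedError with the returned path when it is non-null.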

Store the adjacency map as a Riverpod StateProvider<Map<String, List<String>>> so it is shared with the ancestor/descendant computation task (task-005) without duplication. Do not call HierarchyStructureValidator from within HierarchyService for the cycle check — the validator is a separate pre-gate (task-007); the service-level check is a defense-in-depth layer.
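Sketched as a provider declaration (the provider name is an assumption; flutter_riverpod is assumed available per the Frameworks list):

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

/// Shared adjacency map (parentId -> childIds). Hydrated by
/// HierarchyService and read by both the cycle check and the
/// ancestor/descendant computation (task-005).
final hierarchyAdjacencyMapProvider =
    StateProvider<Map<String, List<String>>>(
  (ref) => const <String, List<String>>{},
);
```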

Testing Requirements

Unit tests (flutter_test): cover all cycle permutations (self-loop, 2-node cycle, N-node cycle, valid reparent, empty graph, single-node graph). Integration test: verify that a reparent call that would create a cycle is intercepted before any Supabase mutation is issued (mock Supabase client and assert zero write calls on cycle detection). Performance test: generate a synthetic 2000-node adjacency map and assert traversal completes under 50ms. Golden path test: valid reparent succeeds end-to-end with updated adjacency cache.

Minimum 90% branch coverage on the cycle-detection method.

Component
Hierarchy Service
Type: service · Priority: high
Epic Risks (4)
Security risk: high impact, medium probability

Injecting all unit assignment IDs into JWT claims for users assigned to many units (up to 5 for NHF peer mentors, many more for national coordinators) may exceed JWT size limits, causing authentication failures.

Mitigation & Contingency

Mitigation: Store unit IDs in a Supabase session variable or a dedicated Postgres function rather than embedding them directly in the JWT payload. Use set_config('app.unit_ids', ...) within RLS helper functions querying the assignments table at policy evaluation time.

Contingency: Fall back to querying the unit_assignments table directly within RLS policies using the authenticated user ID, accepting a small per-query overhead in exchange for removing the JWT size constraint.

Technical risk: medium impact, medium probability

Rendering 1,400+ nodes in a recursive Flutter tree widget may cause jank or memory pressure on lower-end devices used by field peer mentors, degrading the admin experience.

Mitigation & Contingency

Mitigation: Implement lazy tree expansion — only the root level is rendered on initial load. Child nodes are rendered on demand when the parent is expanded. Use const constructors and ListView.builder for all node lists to minimize rebuild scope.

Contingency: Add a search/filter bar that scopes the visible tree to matching nodes, reducing the visible node count. Provide a 'flat list' fallback view for administrators who prefer searching over browsing the tree.
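The lazy-expansion mitigation can be sketched as follows; widget and field names are illustrative, and it relies on ExpansionTile (with its default maintainState: false) keeping collapsed children out of the element tree, so a collapsed subtree never runs build():

```dart
import 'package:flutter/material.dart';

/// Sketch of lazy tree expansion. Constructing a UnitTreeNode is
/// cheap; its build() (which constructs the next level) only runs
/// once an ancestor is expanded, so the initial load renders the
/// root level only.
class UnitTreeNode extends StatelessWidget {
  const UnitTreeNode({
    super.key,
    required this.unitId,
    required this.labelOf,
    required this.childrenOf,
  });

  final String unitId;
  final Map<String, String> labelOf; // unitId -> display name
  final Map<String, List<String>> childrenOf; // adjacency map

  @override
  Widget build(BuildContext context) {
    final children = childrenOf[unitId] ?? const <String>[];
    final title = Text(labelOf[unitId] ?? unitId);
    if (children.isEmpty) return ListTile(title: title);
    return ExpansionTile(
      title: title,
      // With maintainState: false (the default), collapsed
      // children are removed from the element tree entirely.
      children: [
        for (final id in children)
          UnitTreeNode(unitId: id, labelOf: labelOf, childrenOf: childrenOf),
      ],
    );
  }
}
```

At the top level, a ListView.builder over the root unit IDs keeps the visible list virtualized, matching the rebuild-scope goal above.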

Scope risk: medium impact, medium probability

Requirements for what constitutes a valid hierarchy structure may expand during NHF sign-off (e.g., mandatory coordinator assignments per chapter, minimum member counts per region), requiring repeated validator redesign.

Mitigation & Contingency

Mitigation: Design the validator as a pluggable rule engine where each check is a discrete, independently testable function. New rules can be added without changing the core validation orchestration. Surface all rules in a configuration table per organization.

Contingency: Defer non-blocking validation rules to warning-level feedback rather than hard blocks, allowing structural changes to proceed while flagging potential issues for admin review.
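One way to sketch the pluggable rule engine (all names are illustrative; the source only specifies that rules are discrete, independently testable functions and that non-blocking rules can be downgraded to warnings):

```dart
/// Severity lets the contingency above downgrade a rule from a
/// hard block to warning-level feedback without code changes.
enum Severity { warning, blocking }

class ValidationIssue {
  const ValidationIssue(this.rule, this.severity, this.message);
  final String rule;
  final Severity severity;
  final String message;
}

/// A rule inspects the hierarchy (parentId -> childIds) and
/// reports zero or more issues. New rules are added to the list
/// without touching the orchestration below.
typedef HierarchyRule = List<ValidationIssue> Function(
    Map<String, List<String>> childrenOf);

/// Orchestrator: fold every enabled rule over the hierarchy.
List<ValidationIssue> validateHierarchy(
  Map<String, List<String>> childrenOf,
  List<HierarchyRule> rules,
) =>
    [for (final rule in rules) ...rule(childrenOf)];

/// Example rule: every listed child must exist as a node of its own.
List<ValidationIssue> danglingChildRule(
        Map<String, List<String>> childrenOf) =>
    [
      for (final entry in childrenOf.entries)
        for (final child in entry.value)
          if (!childrenOf.containsKey(child))
            ValidationIssue('dangling-child', Severity.warning,
                'Unit $child (under ${entry.key}) has no node of its own'),
    ];
```

Per the mitigation, the enabled rule set (and each rule's severity) would be driven by a per-organization configuration table.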

Integration risk: high impact, low probability

Deploying RLS policy migrations to a shared Supabase project used by multiple organizations simultaneously could lock tables or interrupt active sessions, causing downtime during production migration.

Mitigation & Contingency

Mitigation: Write all RLS policy migrations idempotently; Postgres does not support CREATE POLICY IF NOT EXISTS, so use DROP POLICY IF EXISTS followed by CREATE POLICY, or a DO block that checks pg_policies before creating. Schedule migrations during off-peak hours. Use Supabase's migration preview environment to validate policies against production data shapes before applying.

Contingency: Prepare rollback migration scripts for every RLS policy. If a migration causes issues, execute the rollback immediately and re-test the policy logic in staging before reattempting.