Priority: critical · Complexity: high · Area: backend · Status: pending · Owner: backend specialist · Tier 1

Acceptance Criteria

HierarchyStructureValidator exposes a pure method validateNoCycle(String unitId, String proposedParentId, Map<String, List<String>> adjacencyList) returning ValidationResult
ValidationResult is a sealed class with subtypes ValidationSuccess and ValidationFailure; ValidationFailure contains a human-readable English message and a typed error code CycleDetected
The method has no side effects and does not access Supabase or any external state
Cycle detection logic uses DFS/BFS and completes in O(V+E) time on the provided adjacency list
A direct self-parent assignment (unitId == proposedParentId) is caught and returns ValidationFailure
A reparent that would make a unit a child of one of its own descendants is caught and returns ValidationFailure
A valid reparent that creates no cycle returns ValidationSuccess
ValidationFailure message identifies the full cycle path for display to the coordinator (e.g., 'Moving this unit would create a cycle: A → B → C → A')
HierarchyService calls this method before every reparent write and propagates the ValidationFailure as a CycleDetectedError to the caller
The method is tested independently of HierarchyService — no service wiring required for validator unit tests
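The sealed result type described in these criteria can be sketched as follows. The ValidationErrorCode enum and the describe helper are illustrative assumptions; the criteria only require a typed CycleDetected code and exhaustive matching at the call site:

```dart
// Sketch of the ValidationResult hierarchy from the acceptance criteria.
// The ValidationErrorCode enum is an assumption; the spec only requires
// a typed CycleDetected error code on failures.
enum ValidationErrorCode { cycleDetected }

sealed class ValidationResult {
  const ValidationResult();
}

class ValidationSuccess extends ValidationResult {
  const ValidationSuccess();
}

class ValidationFailure extends ValidationResult {
  final ValidationErrorCode code;
  final String message; // human-readable English message for the coordinator
  const ValidationFailure(this.code, this.message);
}

// Exhaustive pattern matching at the call site: because ValidationResult
// is sealed, the compiler flags any unhandled subtype.
String describe(ValidationResult result) => switch (result) {
      ValidationSuccess() => 'ok',
      ValidationFailure(:final message) => 'rejected: $message',
    };
```

Because the class is sealed, adding a new result subtype later forces every call-site switch to be updated, which is the property HierarchyService relies on.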

Technical Requirements

Frameworks
Flutter
Data Models
ValidationResult
ValidationSuccess
ValidationFailure
HierarchyAdjacencyMap
Performance Requirements
Pure function with no I/O — must complete in under 10ms for graphs up to 2000 nodes
No heap allocations beyond the visited set and traversal stack
Security Requirements
Pure function with no external dependencies — no injection attack surface
Adjacency list passed as parameter must be pre-filtered to the user's organization before being passed to the validator

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

This validator is the authoritative, pure-function implementation of cycle detection logic. The HierarchyService-level check (task-004) is defense-in-depth and should delegate to this validator rather than re-implementing the algorithm. Keep the method signature purely functional: (String, String, Map<String, List<String>>) → ValidationResult — no class state, no dependency injection needed. Implement iterative DFS: push proposedParentId onto the stack; at each step, pop a node and check whether it equals unitId — if yes, a cycle is detected. Note that this traversal is correct only if the adjacency list's edges point from child to parent, so that walking from proposedParentId ascends through its ancestors; with a parent-to-children map, start the traversal at unitId and search for proposedParentId instead.

Accumulate the traversal path in a List to build the cycle message. If traversal exhausts the graph without finding unitId, return ValidationSuccess. Use Dart 3 sealed classes for ValidationResult to enable exhaustive pattern matching at the call site. Document the method with a /// doc comment explaining the algorithm and the O(V+E) complexity guarantee, as this is a critical correctness boundary that will be reviewed by multiple developers.
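The notes above can be sketched as a self-contained Dart implementation. Two assumptions: the adjacency list maps each unit to its parent(s), so the walk from proposedParentId ascends through ancestors; and a back-pointer map (one allocation beyond the visited set and traversal stack, a tradeoff against the stated allocation budget) is used to rebuild the cycle path for the failure message. The result types are abbreviated to keep the sketch short:

```dart
// Abbreviated result types; see the acceptance criteria for the full shape.
sealed class ValidationResult {
  const ValidationResult();
}

class ValidationSuccess extends ValidationResult {
  const ValidationSuccess();
}

class ValidationFailure extends ValidationResult {
  final String message;
  const ValidationFailure(this.message);
}

/// Pure cycle check for a proposed reparent of [unitId] under
/// [proposedParentId]. Assumes [adjacencyList] maps each unit to its
/// parent(s) (edges point child -> parent), so the iterative DFS walks
/// ancestors of the proposed parent. O(V+E) time, no I/O.
ValidationResult validateNoCycle(
  String unitId,
  String proposedParentId,
  Map<String, List<String>> adjacencyList,
) {
  if (unitId == proposedParentId) {
    return const ValidationFailure('A unit cannot be its own parent.');
  }
  final visited = <String>{};
  final stack = <String>[proposedParentId];
  final cameFrom = <String, String>{}; // back-pointers to rebuild the path

  while (stack.isNotEmpty) {
    final node = stack.removeLast();
    if (!visited.add(node)) continue; // already explored
    if (node == unitId) {
      // Rebuild unitId -> ... -> proposedParentId, then close the loop
      // with the proposed edge to produce e.g. 'A → B → C → A'.
      final path = <String>[node];
      var cur = node;
      while (cameFrom.containsKey(cur)) {
        cur = cameFrom[cur]!;
        path.add(cur);
      }
      return ValidationFailure(
          'Moving this unit would create a cycle: '
          '${[...path, unitId].join(' → ')}');
    }
    for (final next in adjacencyList[node] ?? const <String>[]) {
      if (!visited.contains(next)) {
        cameFrom[next] = node;
        stack.add(next);
      }
    }
  }
  // Traversal exhausted without reaching unitId: the reparent is safe.
  return const ValidationSuccess();
}
```

If the adjacency map is parent-to-children instead, swap the roles: start the DFS at unitId and report a cycle when proposedParentId is reached.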

Testing Requirements

Unit tests (flutter_test) — all tests operate on in-memory adjacency maps with no mocking required:
Self-parent returns ValidationFailure with CycleDetected code
2-node cycle (A→B, propose B as parent of A) returns ValidationFailure
3-node cycle returns ValidationFailure with full path in message
Valid reparent across branches returns ValidationSuccess
Reparent to existing parent (no change) returns ValidationSuccess
Empty adjacency list (new root unit) returns ValidationSuccess
Disconnected graph with multiple roots returns correct result
Target 100% branch coverage on validateNoCycle. Fuzz test: generate random DAGs of 100–500 nodes, insert one cycle, assert the detector catches it.
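The fuzz-test requirement can be sketched as below. To keep the example standalone, a minimal ancestor-reachability check stands in for the full validateNoCycle, and a child-to-parent adjacency map is assumed; the seed, trial count, and helper names are all illustrative:

```dart
import 'dart:math';

// Stand-in detector: true if reparenting [unitId] under [proposedParent]
// would close a cycle, i.e. walking child -> parent edges from
// proposedParent reaches unitId.
bool createsCycle(String unitId, String proposedParent,
    Map<String, List<String>> parentsOf) {
  if (unitId == proposedParent) return true;
  final visited = <String>{};
  final stack = <String>[proposedParent];
  while (stack.isNotEmpty) {
    final n = stack.removeLast();
    if (!visited.add(n)) continue;
    if (n == unitId) return true;
    stack.addAll(parentsOf[n] ?? const <String>[]);
  }
  return false;
}

void main() {
  final rng = Random(42); // fixed seed so failures are reproducible
  for (var trial = 0; trial < 50; trial++) {
    final n = 100 + rng.nextInt(401); // 100–500 nodes
    final parent = List<int>.filled(n, -1);
    final parentsOf = <String, List<String>>{};
    for (var i = 1; i < n; i++) {
      parent[i] = rng.nextInt(i); // random tree: parent precedes child
      parentsOf['$i'] = ['${parent[i]}'];
    }
    // Pick a non-root node d and one of its strict ancestors u;
    // reparenting u under its descendant d must be flagged.
    final d = 1 + rng.nextInt(n - 1);
    var u = parent[d];
    while (parent[u] != -1 && rng.nextBool()) {
      u = parent[u];
    }
    assert(createsCycle('$u', '$d', parentsOf),
        'missed cycle: unit $u, proposed parent $d');
  }
  print('fuzz ok');
}
```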

Component
Hierarchy Structure Validator
infrastructure medium
Epic Risks (4)
Security risk: high impact, medium probability

Injecting all unit assignment IDs into JWT claims for users assigned to many units (up to 5 for NHF peer mentors, many more for national coordinators) may exceed JWT size limits, causing authentication failures.

Mitigation & Contingency

Mitigation: Store unit IDs in a Supabase session variable or resolve them via a dedicated Postgres function rather than embedding them directly in the JWT payload. For example, set them with set_config('app.unit_ids', ...) and have RLS helper functions read that setting, or query the assignments table directly at policy evaluation time.

Contingency: Fall back to querying the unit_assignments table directly within RLS policies using the authenticated user ID, accepting a small per-query overhead in exchange for removing the JWT size constraint.

Technical risk: medium impact, medium probability

Rendering 1,400+ nodes in a recursive Flutter tree widget may cause jank or memory pressure on lower-end devices used by field peer mentors, degrading the admin experience.

Mitigation & Contingency

Mitigation: Implement lazy tree expansion — only the root level is rendered on initial load. Child nodes are rendered on demand when the parent is expanded. Use const constructors and ListView.builder for all node lists to minimize rebuild scope.

Contingency: Add a search/filter bar that scopes the visible tree to matching nodes, reducing the visible node count. Provide a 'flat list' fallback view for administrators who prefer searching over browsing the tree.

Scope risk: medium impact, medium probability

Requirements for what constitutes a valid hierarchy structure may expand during NHF sign-off (e.g., mandatory coordinator assignments per chapter, minimum member counts per region), requiring repeated validator redesign.

Mitigation & Contingency

Mitigation: Design the validator as a pluggable rule engine where each check is a discrete, independently testable function. New rules can be added without changing the core validation orchestration. Surface all rules in a configuration table per organization.

Contingency: Defer non-blocking validation rules to warning-level feedback rather than hard blocks, allowing structural changes to proceed while flagging potential issues for admin review.
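One possible shape for the pluggable rule engine described in the mitigation is sketched below. All names here (RuleSeverity, RuleViolation, noEmptyUnits) are illustrative assumptions; real rules would be loaded from the per-organization configuration table, and warn-level violations map onto the contingency's warning-level feedback:

```dart
// Severity distinguishes hard blocks from warning-level feedback.
enum RuleSeverity { block, warn }

class RuleViolation {
  final String rule;
  final RuleSeverity severity;
  final String message;
  const RuleViolation(this.rule, this.severity, this.message);
}

// A rule is a discrete, independently testable function over the
// proposed hierarchy (here: a parent -> children map).
typedef HierarchyRule = List<RuleViolation> Function(
    Map<String, List<String>> childrenOf);

// The orchestrator never changes when rules are added or removed:
// it simply folds every enabled rule over the hierarchy.
List<RuleViolation> validateStructure(
        Map<String, List<String>> childrenOf, List<HierarchyRule> rules) =>
    [for (final rule in rules) ...rule(childrenOf)];

// Example rule (hypothetical): flag units with no members as warnings.
List<RuleViolation> noEmptyUnits(Map<String, List<String>> childrenOf) => [
      for (final entry in childrenOf.entries)
        if (entry.value.isEmpty)
          RuleViolation('no-empty-units', RuleSeverity.warn,
              'Unit ${entry.key} has no members.'),
    ];
```

New NHF sign-off requirements then become new HierarchyRule functions plus configuration rows, with no change to the orchestration or existing rules.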

Integration risk: high impact, low probability

Deploying RLS policy migrations to a shared Supabase project used by multiple organizations simultaneously could lock tables or interrupt active sessions, causing downtime during production migration.

Mitigation & Contingency

Mitigation: Write all RLS policy migrations idempotently — Postgres has no CREATE POLICY IF NOT EXISTS, so use DROP POLICY IF EXISTS followed by CREATE POLICY (or guard the statement in a DO block). Schedule migrations during off-peak hours. Use Supabase's migration preview environment to validate policies against production data shapes before applying.

Contingency: Prepare rollback migration scripts for every RLS policy. If a migration causes issues, execute the rollback immediately and re-test the policy logic in staging before reattempting.