High priority · Medium complexity · Backend · Pending · Backend specialist · Tier 2

Acceptance Criteria

HierarchyStructureValidator exposes validateDeletion(String unitId, Map<String, List<String>> childrenMap) returning ValidationResult
If the unit has no children in childrenMap, return ValidationSuccess
If the unit has one or more direct children, return ValidationFailure with error code OrphanDetected and a List<String> of direct child IDs in the payload
The error message clearly states that deletion would leave N units without a parent and lists the affected child IDs
The method is a pure function with no Supabase or external dependencies
HierarchyService calls validateDeletion before every delete operation and surfaces the OrphanDetectedError to the UI layer
The Flutter UI receiving OrphanDetectedError presents the coordinator with two explicit choices: (1) reassign children to a new parent, or (2) cascade-delete the entire subtree
Cascade-delete is a separate explicit API call — validateDeletion does not itself perform or authorize cascade operations
ValidationFailure payload includes both the count and IDs of direct children to support UI display without additional queries
Unit tests cover: leaf node deletion (success), node with one child, node with multiple children, node that is itself the root with children

Technical Requirements

Frameworks
Flutter
Data Models
ValidationResult
ValidationFailure
OrphanDetectedError
OrganizationalUnit
Performance Requirements
O(1) lookup for direct children using the provided childrenMap
No graph traversal required — only direct children check (caller decides whether cascade is needed)
Security Requirements
OrphanDetectedError child ID list must only contain IDs within the user's organization scope
The method must not expose child units from other organizations even if the adjacency map is incorrectly scoped
UI Components
DeleteUnitDialog (consumer of OrphanDetectedError for reassign/cascade choice)

Execution Context

Execution Tier
Tier 2

Tier 2 - 518 tasks

Can start after Tier 1 completes

Implementation Notes

Keep the method signature simple: validateDeletion takes only the unitId and a Map<String, List<String>> where keys are parent IDs and values are lists of direct child IDs. This makes it trivially testable without any service setup. The childrenMap is a subset of the full adjacency map — callers should pass only the children-list direction (parent→children), not the full bidirectional map. In the OrphanDetectedError, include the direct child IDs only (not the full subtree) — the decision to cascade or reassign belongs to the user and the HierarchyService, not the validator.
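Under that signature, a minimal pure-Dart sketch might look like the following. Field names on ValidationFailure (such as childIds and message) are assumptions for illustration, not part of the task spec:

```dart
/// Result types and validator sketch based on the acceptance criteria.
/// Field names such as `childIds` and `message` are assumptions.
sealed class ValidationResult {
  const ValidationResult();
}

class ValidationSuccess extends ValidationResult {
  const ValidationSuccess();
}

class ValidationFailure extends ValidationResult {
  final String errorCode;      // 'OrphanDetected'
  final List<String> childIds; // direct child IDs, so the UI needs no extra query
  final String message;
  const ValidationFailure(this.errorCode, this.childIds, this.message);
}

/// Pure function: one O(1) map lookup, no traversal, no Supabase access.
ValidationResult validateDeletion(
  String unitId,
  Map<String, List<String>> childrenMap,
) {
  // A missing key or an empty list both mean the unit is a leaf.
  final children = childrenMap[unitId] ?? const [];
  if (children.isEmpty) return const ValidationSuccess();
  return ValidationFailure(
    'OrphanDetected',
    List.unmodifiable(children),
    'Deletion would leave ${children.length} units without a parent: '
    '${children.join(', ')}',
  );
}
```

Because the function takes the map as a plain argument, every test case in the Testing Requirements section reduces to constructing a literal map.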

For the Flutter UI, the DeleteUnitDialog should use a BottomSheet or AlertDialog presenting two clear action buttons: 'Reassign [N] child units' and 'Delete entire subtree ([M] total units)'. The cascade-delete path should call getDescendantSubtree (task-005) to compute the total count M for the confirmation message. Never silently cascade — always require explicit user confirmation per WCAG 2.2 AA cognitive accessibility requirements relevant to the target user base (NHF users include people with cognitive challenges).
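As a rough illustration of that dialog, one shape it could take is below. The fields (childCount, subtreeCount, onReassign, onCascade) are hypothetical; the real widget would derive N from OrphanDetectedError and M from getDescendantSubtree (task-005):

```dart
import 'package:flutter/material.dart';

// Illustrative sketch of DeleteUnitDialog: two explicit, labeled choices,
// never a silent cascade. Field names are assumptions.
class DeleteUnitDialog extends StatelessWidget {
  final int childCount;        // N: direct children from OrphanDetectedError
  final int subtreeCount;      // M: total units from getDescendantSubtree
  final VoidCallback onReassign;
  final VoidCallback onCascade;

  const DeleteUnitDialog({
    super.key,
    required this.childCount,
    required this.subtreeCount,
    required this.onReassign,
    required this.onCascade,
  });

  @override
  Widget build(BuildContext context) {
    return AlertDialog(
      title: const Text('Unit has child units'),
      content: Text(
        'Deleting this unit would leave $childCount units without a parent.',
      ),
      actions: [
        TextButton(
          onPressed: onReassign,
          child: Text('Reassign $childCount child units'),
        ),
        TextButton(
          onPressed: onCascade,
          child: Text('Delete entire subtree ($subtreeCount total units)'),
        ),
      ],
    );
  }
}
```

The dialog would typically be shown via showDialog when HierarchyService surfaces OrphanDetectedError to the UI layer.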

Testing Requirements

Unit tests (flutter_test): leaf node returns ValidationSuccess; single-child node returns ValidationFailure with exactly one child ID; multi-child node returns ValidationFailure with all child IDs listed; root node with children returns ValidationFailure; empty childrenMap returns ValidationSuccess; a unitId with no entry in the map is treated as a leaf.

Widget test: DeleteUnitDialog receives OrphanDetectedError and renders both 'Reassign children' and 'Cascade delete' options.

Integration test: HierarchyService.deleteUnit() calls validateDeletion before the Supabase delete; assert the Supabase delete is NOT called when ValidationFailure is returned.
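The unit-test cases could start out as follows in flutter_test. A minimal validator is inlined so the sketch stands alone; in the real suite it would be imported from the validator source file:

```dart
import 'package:flutter_test/flutter_test.dart';

// Validator inlined for a self-contained sketch; the real tests would
// import HierarchyStructureValidator instead.
sealed class ValidationResult {}
class ValidationSuccess extends ValidationResult {}
class ValidationFailure extends ValidationResult {
  final List<String> childIds;
  ValidationFailure(this.childIds);
}

ValidationResult validateDeletion(String unitId, Map<String, List<String>> m) {
  final children = m[unitId] ?? const [];
  return children.isEmpty ? ValidationSuccess() : ValidationFailure(children);
}

void main() {
  test('leaf node returns ValidationSuccess', () {
    expect(validateDeletion('leaf', {'root': ['leaf']}),
        isA<ValidationSuccess>());
  });

  test('multi-child node lists all child IDs', () {
    final r = validateDeletion('root', {'root': ['a', 'b', 'c']});
    expect((r as ValidationFailure).childIds, ['a', 'b', 'c']);
  });

  test('empty childrenMap is treated as a leaf', () {
    expect(validateDeletion('u1', {}), isA<ValidationSuccess>());
  });
}
```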

Component
Hierarchy Structure Validator
Infrastructure · Medium
Epic Risks (4)
High impact · Medium probability · Security

Injecting all unit assignment IDs into JWT claims for users assigned to many units (up to 5 for NHF peer mentors, many more for national coordinators) may exceed JWT size limits, causing authentication failures.

Mitigation & Contingency

Mitigation: Store unit IDs in a Supabase session variable or a dedicated Postgres function rather than embedding them directly in the JWT payload. Use set_config('app.unit_ids', ...) within RLS helper functions querying the assignments table at policy evaluation time.

Contingency: Fall back to querying the unit_assignments table directly within RLS policies using the authenticated user ID, accepting a small per-query overhead in exchange for removing the JWT size constraint.

Medium impact · Medium probability · Technical

Rendering 1,400+ nodes in a recursive Flutter tree widget may cause jank or memory pressure on lower-end devices used by field peer mentors, degrading the admin experience.

Mitigation & Contingency

Mitigation: Implement lazy tree expansion — only the root level is rendered on initial load. Child nodes are rendered on demand when the parent is expanded. Use const constructors and ListView.builder for all node lists to minimize rebuild scope.
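One possible shape for the lazy-expansion mitigation, assuming the same parent→children map used by the validator. LazyUnitTile and its fields are illustrative; long sibling lists would additionally be backed by ListView.builder as the note says:

```dart
import 'package:flutter/material.dart';

// Illustrative lazy tree node: a node instantiates its children only
// after the user expands it, so the initial build cost is one level.
class LazyUnitTile extends StatefulWidget {
  final String unitId;
  final Map<String, List<String>> childrenMap; // parent -> direct child IDs

  const LazyUnitTile(
      {super.key, required this.unitId, required this.childrenMap});

  @override
  State<LazyUnitTile> createState() => _LazyUnitTileState();
}

class _LazyUnitTileState extends State<LazyUnitTile> {
  bool _expanded = false;

  @override
  Widget build(BuildContext context) {
    final children = widget.childrenMap[widget.unitId] ?? const <String>[];
    return Column(
      mainAxisSize: MainAxisSize.min,
      children: [
        ListTile(
          title: Text(widget.unitId),
          trailing: children.isEmpty
              ? null
              : Icon(_expanded ? Icons.expand_less : Icons.expand_more),
          onTap: children.isEmpty
              ? null
              : () => setState(() => _expanded = !_expanded),
        ),
        // Child widgets are only built after expansion.
        if (_expanded)
          Padding(
            padding: const EdgeInsets.only(left: 16),
            child: Column(
              mainAxisSize: MainAxisSize.min,
              children: [
                for (final id in children)
                  LazyUnitTile(unitId: id, childrenMap: widget.childrenMap),
              ],
            ),
          ),
      ],
    );
  }
}
```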

Contingency: Add a search/filter bar that scopes the visible tree to matching nodes, reducing the visible node count. Provide a 'flat list' fallback view for administrators who prefer searching over browsing the tree.

Medium impact · Medium probability · Scope

Requirements for what constitutes a valid hierarchy structure may expand during NHF sign-off (e.g., mandatory coordinator assignments per chapter, minimum member counts per region), requiring repeated validator redesign.

Mitigation & Contingency

Mitigation: Design the validator as a pluggable rule engine where each check is a discrete, independently testable function. New rules can be added without changing the core validation orchestration. Surface all rules in a configuration table per organization.
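The pluggable-rule idea can be sketched as below, where each check is an independently testable function and the orchestrator simply runs the registered list. All names here (RuleResult, HierarchyRule, runRules, noOrphansRule) are assumptions:

```dart
// Pluggable rule engine sketch: each structural check is a discrete,
// independently testable function; adding a rule never touches the runner.
typedef RuleResult = String?; // null = rule passed, otherwise an error message

typedef HierarchyRule = RuleResult Function(
  String unitId,
  Map<String, List<String>> childrenMap,
);

// Example rule mirroring the orphan check from this task.
RuleResult noOrphansRule(String unitId, Map<String, List<String>> m) {
  final children = m[unitId] ?? const <String>[];
  return children.isEmpty
      ? null
      : 'Unit $unitId has ${children.length} direct children';
}

// Orchestrator: run every registered rule and collect the failures.
List<String> runRules(
  String unitId,
  Map<String, List<String>> childrenMap,
  List<HierarchyRule> rules,
) =>
    rules.map((rule) => rule(unitId, childrenMap)).whereType<String>().toList();
```

New per-organization rules would then just be new entries in the rule list, which fits the configuration-table idea above.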

Contingency: Defer non-blocking validation rules to warning-level feedback rather than hard blocks, allowing structural changes to proceed while flagging potential issues for admin review.

High impact · Low probability · Integration

Deploying RLS policy migrations to a shared Supabase project used by multiple organizations simultaneously could lock tables or interrupt active sessions, causing downtime during production migration.

Mitigation & Contingency

Mitigation: Write all RLS policy migrations idempotently; Postgres does not support CREATE POLICY IF NOT EXISTS, so pair each CREATE POLICY with a preceding DROP POLICY IF EXISTS. Schedule migrations during off-peak hours. Use Supabase's preview branch environment to validate policies against production data shapes before applying.

Contingency: Prepare rollback migration scripts for every RLS policy. If a migration causes issues, execute the rollback immediately and re-test the policy logic in staging before reattempting.