Implement depth-limit and level-type ordering validation
epic-organizational-hierarchy-management-core-services-task-009 — Add two validation rules to HierarchyStructureValidator: (1) depth-limit enforcement that rejects assignments exceeding the configured maximum hierarchy depth per organization type, and (2) level-type ordering that ensures unit types (e.g., national → region → chapter → local) only appear at their permitted depth levels. Both rules return typed validation errors.
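A minimal sketch of what the typed validation results could look like, using the ValidationSuccess, ValidationFailure, DepthLimitExceeded, and InvalidLevelType names that appear in the testing requirements below; the exact class hierarchy is an assumption, not the final design.

```dart
// Sketch only: typed results for the two rules. The sealed-class shape and
// constructor signatures are assumptions.
sealed class ValidationResult {
  const ValidationResult();
}

class ValidationSuccess extends ValidationResult {
  const ValidationSuccess();
}

sealed class ValidationFailure extends ValidationResult {
  const ValidationFailure(this.message);
  final String message;
}

class DepthLimitExceeded extends ValidationFailure {
  const DepthLimitExceeded(super.message);
}

class InvalidLevelType extends ValidationFailure {
  const InvalidLevelType(super.message);
}
```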
Acceptance Criteria
Technical Requirements
Execution Context
Tier 2 - 518 tasks
Can start after Tier 1 completes
Implementation Notes
Define OrganizationHierarchyConfig as an immutable Dart class loaded from a Supabase table (e.g., organization_hierarchy_configs) keyed by organizationId. Fields: maxDepth (int) and allowedDepthsByUnitType (Map<String, List<int>>), mapping each unit type to the depth levels at which it may appear.
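A sketch of that config class under the assumptions above; the fromJson column names mirror the field names but are not confirmed by any schema.

```dart
// Sketch only: immutable config loaded from organization_hierarchy_configs.
// Column names in fromJson are assumptions.
class OrganizationHierarchyConfig {
  const OrganizationHierarchyConfig({
    required this.organizationId,
    required this.maxDepth,
    required this.allowedDepthsByUnitType,
  });

  final String organizationId;
  final int maxDepth;

  /// Unit type (e.g., 'national', 'region') -> depths at which it may appear.
  final Map<String, List<int>> allowedDepthsByUnitType;

  factory OrganizationHierarchyConfig.fromJson(Map<String, dynamic> json) {
    return OrganizationHierarchyConfig(
      organizationId: json['organization_id'] as String,
      maxDepth: json['max_depth'] as int,
      allowedDepthsByUnitType:
          (json['allowed_depths_by_unit_type'] as Map<String, dynamic>).map(
        (type, depths) => MapEntry(type, (depths as List).cast<int>()),
      ),
    );
  }
}
```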
Keep validator methods free of the config object (accept raw int/map parameters) so they remain purely testable without Supabase setup. For the level-type validator, consider that some organizations may permit a unit type at multiple depths (e.g., a 'local' chapter could exist at depth 2 or 3 in a flatter org), so the List<int> of allowed depths per unit type must support multiple values.
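A sketch of the two methods as pure functions over raw parameters, building on the result types sketched earlier; parameter names are assumptions and depths are treated as zero-based from the root.

```dart
// Sketch only: both rules take raw values, so tests need no Supabase setup.
class HierarchyStructureValidator {
  const HierarchyStructureValidator();

  /// Rejects assignments deeper than the organization's configured maximum.
  ValidationResult validateDepthLimit({
    required int depth,
    required int maxDepth,
  }) {
    if (depth > maxDepth) {
      return DepthLimitExceeded(
        'Depth $depth exceeds the configured maximum of $maxDepth.',
      );
    }
    return const ValidationSuccess();
  }

  /// Rejects a unit type placed at a depth it is not permitted to occupy.
  ValidationResult validateLevelTypeOrdering({
    required String unitType,
    required int depth,
    required Map<String, List<int>> allowedDepthsByType,
  }) {
    final allowedDepths = allowedDepthsByType[unitType];
    if (allowedDepths == null || !allowedDepths.contains(depth)) {
      return InvalidLevelType(
        "Unit type '$unitType' is not permitted at depth $depth.",
      );
    }
    return const ValidationSuccess();
  }
}
```

Keeping the methods side-effect free also makes it straightforward for HierarchyService to short-circuit on the first ValidationFailure before any Supabase write.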
Testing Requirements
Unit tests (flutter_test) for validateDepthLimit: depth == maxDepth returns ValidationSuccess; depth == maxDepth + 1 returns ValidationFailure with DepthLimitExceeded; depth == 0 returns ValidationSuccess; maxDepth == 0 edge case handled. Unit tests for validateLevelTypeOrdering: correct type at correct depth returns ValidationSuccess; correct type at wrong depth returns ValidationFailure with InvalidLevelType; unknown type not in allowedDepthsByType returns ValidationFailure; empty allowedDepthsByType map handled gracefully. Integration test: create an OrganizationHierarchyConfig for NHF (4-level national→region→chapter→local) and verify that attempting to create a 'local' unit at depth 1 (under national) fails with InvalidLevelType. Integration test: verify HierarchyService invokes both validators and does not write to Supabase on any ValidationFailure.
Minimum 95% branch coverage on both methods.
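A subset of these cases written against the validator sketch above (flutter_test); depth numbering assumes national = 0, as implied by the 'depth == 0' case and the 'local at depth 1 under national' integration test.

```dart
// Sketch only: a few of the required unit tests.
// (Import of the validator file omitted; path depends on project layout.)
import 'package:flutter_test/flutter_test.dart';

void main() {
  const validator = HierarchyStructureValidator();

  group('validateDepthLimit', () {
    test('depth == maxDepth passes', () {
      expect(validator.validateDepthLimit(depth: 3, maxDepth: 3),
          isA<ValidationSuccess>());
    });

    test('depth == maxDepth + 1 fails with DepthLimitExceeded', () {
      expect(validator.validateDepthLimit(depth: 4, maxDepth: 3),
          isA<DepthLimitExceeded>());
    });
  });

  group('validateLevelTypeOrdering', () {
    // NHF-style 4-level hierarchy, depths assumed zero-based from national.
    const allowedDepthsByType = {
      'national': [0],
      'region': [1],
      'chapter': [2],
      'local': [3],
    };

    test("'local' directly under national fails with InvalidLevelType", () {
      expect(
        validator.validateLevelTypeOrdering(
          unitType: 'local',
          depth: 1,
          allowedDepthsByType: allowedDepthsByType,
        ),
        isA<InvalidLevelType>(),
      );
    });

    test('empty allowedDepthsByType map is handled gracefully', () {
      expect(
        validator.validateLevelTypeOrdering(
          unitType: 'region',
          depth: 1,
          allowedDepthsByType: const {},
        ),
        isA<ValidationFailure>(),
      );
    });
  });
}
```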
Injecting all unit assignment IDs into JWT claims for users assigned to many units (up to 5 for NHF peer mentors, many more for national coordinators) may exceed JWT size limits, causing authentication failures.
Mitigation & Contingency
Mitigation: Store unit IDs in a Postgres session setting or resolve them through a dedicated Postgres helper function, rather than embedding them directly in the JWT payload. Use set_config('app.unit_ids', ...) within RLS helper functions that query the assignments table at policy evaluation time.
Contingency: Fall back to querying the unit_assignments table directly within RLS policies using the authenticated user ID, accepting a small per-query overhead in exchange for removing the JWT size constraint.
Rendering 1,400+ nodes in a recursive Flutter tree widget may cause jank or memory pressure on lower-end devices used by field peer mentors, degrading the admin experience.
Mitigation & Contingency
Mitigation: Implement lazy tree expansion — only the root level is rendered on initial load. Child nodes are rendered on demand when the parent is expanded. Use const constructors and ListView.builder for all node lists to minimize rebuild scope.
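A minimal sketch of the lazy-expansion idea, assuming a UnitNode model and a loadChildren callback (both hypothetical, not existing project types): the root level is a ListView.builder, and children are fetched and built only after their parent is expanded.

```dart
// Sketch only: lazy tree expansion for the admin hierarchy view.
import 'package:flutter/material.dart';

class UnitNode {
  const UnitNode(
      {required this.id, required this.name, this.hasChildren = false});
  final String id;
  final String name;
  final bool hasChildren;
}

/// Root level: only top-level units are built initially.
class HierarchyTreeView extends StatelessWidget {
  const HierarchyTreeView(
      {super.key, required this.roots, required this.loadChildren});

  final List<UnitNode> roots;
  final Future<List<UnitNode>> Function(String parentId) loadChildren;

  @override
  Widget build(BuildContext context) {
    return ListView.builder(
      itemCount: roots.length,
      itemBuilder: (context, i) =>
          LazyUnitTile(node: roots[i], loadChildren: loadChildren),
    );
  }
}

/// A node that loads and builds its children only on first expansion.
class LazyUnitTile extends StatefulWidget {
  const LazyUnitTile(
      {super.key, required this.node, required this.loadChildren});

  final UnitNode node;
  final Future<List<UnitNode>> Function(String parentId) loadChildren;

  @override
  State<LazyUnitTile> createState() => _LazyUnitTileState();
}

class _LazyUnitTileState extends State<LazyUnitTile> {
  List<UnitNode>? _children; // null until the node is first expanded

  @override
  Widget build(BuildContext context) {
    if (!widget.node.hasChildren) {
      return ListTile(title: Text(widget.node.name));
    }
    return ExpansionTile(
      title: Text(widget.node.name),
      onExpansionChanged: (expanded) async {
        if (expanded && _children == null) {
          final children = await widget.loadChildren(widget.node.id);
          if (mounted) setState(() => _children = children);
        }
      },
      children: [
        for (final child in _children ?? const <UnitNode>[])
          LazyUnitTile(node: child, loadChildren: widget.loadChildren),
      ],
    );
  }
}
```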
Contingency: Add a search/filter bar that scopes the visible tree to matching nodes, reducing the visible node count. Provide a 'flat list' fallback view for administrators who prefer searching over browsing the tree.
Requirements for what constitutes a valid hierarchy structure may expand during NHF sign-off (e.g., mandatory coordinator assignments per chapter, minimum member counts per region), requiring repeated validator redesign.
Mitigation & Contingency
Mitigation: Design the validator as a pluggable rule engine where each check is a discrete, independently testable function. New rules can be added without changing the core validation orchestration. Surface all rules in a configuration table per organization.
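One possible shape for the pluggable rules, assuming a shared context object; the names here are illustrative only. Each rule is a plain function with a common signature, and the orchestrator simply runs whatever rules are registered for the organization.

```dart
// Sketch only: each validation check is an independent function, so new
// NHF rules can be registered without touching the orchestration code.
typedef HierarchyRule = ValidationResult Function(UnitAssignmentContext context);

/// Everything a rule may need; hypothetical shape.
class UnitAssignmentContext {
  const UnitAssignmentContext({
    required this.unitType,
    required this.depth,
    required this.config,
  });

  final String unitType;
  final int depth;
  final OrganizationHierarchyConfig config;
}

/// Runs every registered rule and collects the failures.
List<ValidationFailure> runRules(
  UnitAssignmentContext context,
  List<HierarchyRule> rules,
) {
  final failures = <ValidationFailure>[];
  for (final rule in rules) {
    final result = rule(context);
    if (result is ValidationFailure) failures.add(result);
  }
  return failures;
}
```

Warning-level rules (per the contingency below) could be modeled the same way, with the caller deciding whether a given rule's failure blocks the write or is only surfaced to the admin.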
Contingency: Defer non-blocking validation rules to warning-level feedback rather than hard blocks, allowing structural changes to proceed while flagging potential issues for admin review.
Deploying RLS policy migrations to a shared Supabase project used by multiple organizations simultaneously could lock tables or interrupt active sessions, causing downtime during production migration.
Mitigation & Contingency
Mitigation: Make every RLS migration idempotent; since PostgreSQL's CREATE POLICY has no IF NOT EXISTS clause, pair each CREATE POLICY with a preceding DROP POLICY IF EXISTS (or guard it in a DO block). Schedule migrations during off-peak hours. Use a Supabase preview branch to validate policies against production data shapes before applying.
Contingency: Prepare rollback migration scripts for every RLS policy. If a migration causes issues, execute the rollback immediately and re-test the policy logic in staging before reattempting.