Implement HierarchyService CRUD mutation operations
epic-organizational-hierarchy-management-core-services-task-003 — Implement create, update, and delete operations for organization units in HierarchyService. Each mutation must call HierarchyStructureValidator as a pre-commit gate, wrap the Supabase upsert/delete in an error-safe transaction, and return meaningful error types on constraint failures.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
The pre-validation pattern should use a consistent guard clause: await _validator.validate(operation, unit); — throw immediately on failure, proceed only on success. This makes the flow obvious in code review. For Supabase, .insert() returns the inserted row if you chain .select() — always chain .select() on mutations to get the server-assigned values back. For deleteUnit, implement as a soft delete (UPDATE SET is_active = false, updated_at = now()) unless the team explicitly decides on hard deletes.
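The guard-clause and .select() chaining notes above can be sketched as follows. This is an illustrative TypeScript sketch, not the project's actual code: OrgUnit, the validator interface, and the narrow UnitsTable slice are assumptions introduced so the flow is self-contained and testable; the chained .insert(...).select().single() mirrors the Supabase client, which only returns the inserted row when .select() is chained.

```typescript
// Hypothetical shapes standing in for the project's real types.
interface OrgUnit { id?: string; name: string; parentId: string | null; }

interface HierarchyValidator {
  // Throws on an invalid structure; resolves on success.
  validate(operation: 'create' | 'update' | 'delete', unit: OrgUnit): Promise<void>;
}

// Minimal slice of the Supabase query surface this sketch needs, injected
// so unit tests can substitute a recording fake.
interface UnitsTable {
  insert(row: OrgUnit): {
    select(): {
      single(): Promise<{ data: OrgUnit | null; error: { message: string } | null }>;
    };
  };
}

class HierarchyService {
  constructor(private validator: HierarchyValidator, private units: UnitsTable) {}

  async createUnit(unit: OrgUnit): Promise<OrgUnit> {
    // Guard clause: validate first, throw immediately on failure.
    await this.validator.validate('create', unit);

    // Chain .select() so server-assigned values (id, timestamps) come back.
    const { data, error } = await this.units.insert(unit).select().single();
    if (error || !data) {
      throw new Error(`createUnit failed: ${error?.message ?? 'no row returned'}`);
    }
    return data;
  }
}
```

Injecting the client slice rather than constructing it inside the service is what makes the ordering assertions in the test plan below straightforward.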
Soft delete preserves referential integrity for historical activity records that reference the unit. Cache invalidation: after any mutation, call _cache.invalidate(affectedRootId) where affectedRootId is found by traversing up to the tree root using the cached tree — this avoids a full cache clear on every write. Document the cache invalidation strategy in a comment above the method. For the UnitHasChildren exception, include the count of active children in the exception message to give coordinators actionable feedback in the UI.
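A minimal sketch of that deleteUnit flow, assuming illustrative names throughout: countActiveChildren and softDelete stand in for the real Supabase queries (a count query and the UPDATE ... SET is_active = false, updated_at = now() described above), and HierarchyCache shows the traverse-to-root invalidation.

```typescript
class UnitHasChildrenError extends Error {
  constructor(unitId: string, activeChildren: number) {
    // Include the active-child count so coordinators get actionable UI feedback.
    super(`Cannot delete unit ${unitId}: it has ${activeChildren} active child unit(s)`);
  }
}

interface CachedNode { id: string; parentId: string | null; }

class HierarchyCache {
  invalidated: string[] = [];
  constructor(private nodes: Map<string, CachedNode>) {}

  // Traverse up the cached tree to the root, so a write invalidates only
  // the affected subtree instead of clearing the whole cache.
  rootOf(id: string): string {
    let cur = this.nodes.get(id);
    while (cur && cur.parentId !== null) cur = this.nodes.get(cur.parentId);
    return cur ? cur.id : id;
  }

  invalidate(rootId: string): void { this.invalidated.push(rootId); }
}

async function deleteUnit(
  id: string,
  cache: HierarchyCache,
  countActiveChildren: (id: string) => Promise<number>, // e.g. a Supabase count query
  softDelete: (id: string) => Promise<void>,            // UPDATE ... SET is_active = false, updated_at = now()
): Promise<void> {
  const children = await countActiveChildren(id);
  if (children > 0) throw new UnitHasChildrenError(id, children);

  await softDelete(id); // soft delete keeps historical activity references valid

  // Cache invalidation strategy: invalidate only the affected root's subtree.
  cache.invalidate(cache.rootOf(id));
}
```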
Testing Requirements
Write unit tests with a mocked Supabase client and a mocked HierarchyStructureValidator. Test cases:
1. createUnit calls the validator first, then the Supabase insert, then cache invalidation, in that order (verify call order with the mocks).
2. createUnit with a validator rejection never calls Supabase.
3. updateUnit on a non-existent id throws UnitNotFound.
4. deleteUnit on a unit with children throws UnitHasChildren.
5. deleteUnit on a leaf node succeeds and invalidates the cache.
6. A Supabase foreign key violation on createUnit is caught and rethrown as the correct exception type.
7. updateUnit with a changed parentId that would create a cycle is rejected by the validator.
Write a separate integration test file for staging-only tests that verify the end-to-end create → read → update → delete lifecycle against a real Supabase instance seeded with fixture data.
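One way to verify call order in test case (1) is a small recording wrapper. This is an illustrative sketch without a mocking framework; the real suite would likely use the project's test tooling, which typically offers order verification built in.

```typescript
// Records the name of each wrapped call so tests can assert ordering
// (e.g. validate → insert → invalidate) across otherwise unrelated mocks.
class CallRecorder {
  calls: string[] = [];

  // Wrap any function so invoking it logs its name before delegating.
  wrap<A extends unknown[], R>(name: string, fn: (...args: A) => R) {
    return (...args: A): R => {
      this.calls.push(name);
      return fn(...args);
    };
  }

  assertOrder(expected: string[]): void {
    if (this.calls.join(' -> ') !== expected.join(' -> ')) {
      throw new Error(`expected ${expected.join(' -> ')}, got ${this.calls.join(' -> ')}`);
    }
  }
}
```

In the createUnit test, the validator's validate, the client's insert, and the cache's invalidate would each be wrapped with the same recorder, and the test would end with assertOrder(['validate', 'insert', 'invalidate']).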
Injecting all unit assignment IDs into JWT claims for users assigned to many units (up to 5 for NHF peer mentors, many more for national coordinators) may exceed practical JWT size limits (tokens are typically carried in HTTP headers or cookies with hard size caps), causing authentication failures.
Mitigation & Contingency
Mitigation: Resolve unit IDs server-side rather than embedding them directly in the JWT payload. For example, set them per session with set_config('app.unit_ids', ...), or have dedicated RLS helper functions query the assignments table at policy evaluation time.
Contingency: Fall back to querying the unit_assignments table directly within RLS policies using the authenticated user ID, accepting a small per-query overhead in exchange for removing the JWT size constraint.
Rendering 1,400+ nodes in a recursive Flutter tree widget may cause jank or memory pressure on lower-end devices used by field peer mentors, degrading the admin experience.
Mitigation & Contingency
Mitigation: Implement lazy tree expansion — only the root level is rendered on initial load. Child nodes are rendered on demand when the parent is expanded. Use const constructors and ListView.builder for all node lists to minimize rebuild scope.
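The lazy-expansion idea above reduces to computing a flat list of visible rows from the set of expanded node IDs. A sketch under assumed types (in the Flutter app this list would back ListView.builder; TypeScript is used here only for illustration):

```typescript
// Hypothetical node shape for illustration.
interface TreeNode { id: string; children: TreeNode[]; }

// Only expanded nodes contribute their children to the visible row list, so a
// 1,400-node tree renders a handful of rows until the admin drills in.
function visibleRows(roots: TreeNode[], expanded: Set<string>): { id: string; depth: number }[] {
  const rows: { id: string; depth: number }[] = [];
  const walk = (node: TreeNode, depth: number): void => {
    rows.push({ id: node.id, depth });
    // Children are only visited (and therefore rendered) when expanded.
    if (expanded.has(node.id)) node.children.forEach(c => walk(c, depth + 1));
  };
  roots.forEach(r => walk(r, 0));
  return rows;
}
```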
Contingency: Add a search/filter bar that scopes the visible tree to matching nodes, reducing the visible node count. Provide a 'flat list' fallback view for administrators who prefer searching over browsing the tree.
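Scoping the visible tree to matching nodes can be sketched as a recursive filter that keeps a node if its own name matches or any descendant matches, so ancestors of a match stay visible as context (node shape and matching rule are illustrative assumptions):

```typescript
// Hypothetical node shape for illustration.
interface NamedNode { id: string; name: string; children: NamedNode[]; }

// Returns a pruned copy of the subtree, or null if nothing in it matches.
function filterTree(node: NamedNode, query: string): NamedNode | null {
  const kept = node.children
    .map(c => filterTree(c, query))
    .filter((c): c is NamedNode => c !== null);
  const selfMatches = node.name.toLowerCase().includes(query.toLowerCase());
  // Keep this node if it matches, or if it is an ancestor of a match.
  if (!selfMatches && kept.length === 0) return null;
  return { ...node, children: kept };
}
```

The same pruned tree could also feed the 'flat list' fallback view by flattening it instead of rendering it recursively.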
Requirements for what constitutes a valid hierarchy structure may expand during NHF sign-off (e.g., mandatory coordinator assignments per chapter, minimum member counts per region), requiring repeated validator redesign.
Mitigation & Contingency
Mitigation: Design the validator as a pluggable rule engine where each check is a discrete, independently testable function. New rules can be added without changing the core validation orchestration. Surface all rules in a configuration table per organization.
Contingency: Defer non-blocking validation rules to warning-level feedback rather than hard blocks, allowing structural changes to proceed while flagging potential issues for admin review.
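The pluggable rule engine and the warning-level contingency combine naturally if each rule carries a severity. A sketch with illustrative rule names (the real rule set and the per-organization configuration table would determine which rules run and at which severity):

```typescript
type Severity = 'blocking' | 'warning';
interface RuleResult { rule: string; severity: Severity; message: string; }

// A rule is a discrete, independently testable function: null means it passed.
type Rule<T> = (unit: T) => RuleResult | null;

// Hypothetical unit shape for the example rules below.
interface OrgUnitDraft { name: string; coordinatorId: string | null; }

const requireName: Rule<OrgUnitDraft> = u =>
  u.name.trim() ? null
    : { rule: 'require-name', severity: 'blocking', message: 'Unit name is required' };

const warnNoCoordinator: Rule<OrgUnitDraft> = u =>
  u.coordinatorId ? null
    : { rule: 'coordinator-assigned', severity: 'warning', message: 'No coordinator assigned' };

// The orchestration never changes as rules are added: run every rule, then
// partition failures into hard blocks and review-level warnings.
function runRules<T>(unit: T, rules: Rule<T>[]): { blocking: RuleResult[]; warnings: RuleResult[] } {
  const results = rules.map(r => r(unit)).filter((r): r is RuleResult => r !== null);
  return {
    blocking: results.filter(r => r.severity === 'blocking'),
    warnings: results.filter(r => r.severity === 'warning'),
  };
}
```

New NHF sign-off requirements then become new entries in the rule list (or rows in the configuration table) rather than changes to the validator core.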
Deploying RLS policy migrations to a shared Supabase project used by multiple organizations simultaneously could lock tables or interrupt active sessions, causing downtime during production migration.
Mitigation & Contingency
Mitigation: Postgres does not support CREATE POLICY IF NOT EXISTS, so make policy migrations idempotent with DROP POLICY IF EXISTS followed by CREATE POLICY (or a DO block that checks pg_policies before creating). Schedule migrations during off-peak hours. Use Supabase's migration preview environment to validate policies against production data shapes before applying.
Contingency: Prepare rollback migration scripts for every RLS policy. If a migration causes issues, execute the rollback immediately and re-test the policy logic in staging before reattempting.