Implement Hierarchy Service core business logic
epic-organizational-hierarchy-management-assignment-aggregation-task-007 — Build the domain service encapsulating all hierarchy tree operations: create node, move node, delete node with cascade checks, get subtree, get ancestors, and resolve level types. Orchestrate calls to Organization Unit Repository and Hierarchy Cache, enforce business invariants (no orphaned nodes, valid level sequences), and expose a clean API consumed by higher-layer services.
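A possible shape for that API, sketched in Dart for orientation; every type name below (HierarchyNode, LevelType, HierarchyFailure, Result) is an assumption standing in for whatever the epic's shared domain models define.

```dart
/// Sketch only: illustrative domain types, not part of this task's contract.
enum LevelType { national, region, local }

class HierarchyNode {
  const HierarchyNode({required this.id, this.parentId, required this.name, required this.levelType});
  final String id;
  final String? parentId;
  final String name;
  final LevelType levelType;
}

class HierarchyFailure {
  const HierarchyFailure(this.message);
  final String message;
}

sealed class Result<T> {
  const Result();
}

class Ok<T> extends Result<T> {
  const Ok(this.value);
  final T value;
}

class Err<T> extends Result<T> {
  const Err(this.failure);
  final HierarchyFailure failure;
}

/// The clean API consumed by higher-layer services.
abstract class HierarchyService {
  /// Creates a node under [parentId], rejecting invalid level sequences.
  Future<Result<HierarchyNode>> createNode({required String name, String? parentId, required LevelType levelType});

  /// Re-parents [nodeId]; must refuse moves that would create a cycle.
  Future<Result<void>> moveNode(String nodeId, String newParentId);

  /// Fails if [nodeId] still has children (cascade check, no orphaned nodes).
  Future<Result<void>> deleteNode(String nodeId);

  /// Returns [nodeId] and all of its descendants.
  Future<Result<List<HierarchyNode>>> getSubtree(String nodeId);

  /// Returns the chain from [nodeId]'s parent up to the root (empty for roots).
  Future<Result<List<HierarchyNode>>> getAncestors(String nodeId);

  /// Resolves which level types are valid for children of [nodeId].
  Future<Result<List<LevelType>>> resolveLevelTypes(String nodeId);
}
```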
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
Define the service as class HierarchyService with constructor injection of the OrganizationUnitRepository and HierarchyCache interfaces; never instantiate dependencies inside the class. For moveNode, compute the new path prefix and issue a single Supabase RPC call (update_subtree_paths(nodeId, newParentPath)) implemented as a PL/pgSQL function that updates all descendants in one transaction, which avoids issuing N individual updates from Dart. Represent the tree in memory as a map keyed by node ID (for example, Map<String, HierarchyNode>) for fast lookup during traversal and validation.
Use Result-style return types rather than thrown exceptions for all public service methods, so invariant violations (cycles, invalid level sequences, deleting a node that still has children) surface as typed failures that callers must handle.
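A minimal sketch of the injection pattern and the moveNode flow described above, assuming the supabase_flutter v2 client; the RPC name update_subtree_paths comes from these notes, while the interface methods, table name organization_units, and parameter names are illustrative.

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

// Illustrative slices of the dependency interfaces; the real ones are
// defined by earlier tasks in the epic.
abstract class OrganizationUnitRepository {
  Future<String> getPath(String nodeId); // materialized path, e.g. 'root.r1.l3'
  Future<void> updateSubtreePaths(String nodeId, String newParentPath);
}

abstract class HierarchyCache {
  Future<void> invalidateSubtree(String nodeId);
}

class HierarchyFailure {
  const HierarchyFailure(this.message);
  final String message;
}

class HierarchyService {
  // Dependencies are injected; the service never constructs them itself.
  HierarchyService(this._repository, this._cache);

  final OrganizationUnitRepository _repository;
  final HierarchyCache _cache;

  /// Returns null on success, or a failure describing the violated invariant.
  /// (The full service would wrap this in the Result style from the notes.)
  Future<HierarchyFailure?> moveNode(String nodeId, String newParentId) async {
    final nodePath = await _repository.getPath(nodeId);
    final newParentPath = await _repository.getPath(newParentId);

    // Cycle prevention: the new parent must not lie inside the moved subtree.
    if (newParentPath == nodePath || newParentPath.startsWith('$nodePath.')) {
      return const HierarchyFailure('Cannot move a node under its own subtree');
    }

    // One RPC, backed by a PL/pgSQL function, rewrites the paths of the node
    // and all of its descendants in a single transaction.
    await _repository.updateSubtreePaths(nodeId, newParentPath);

    await _cache.invalidateSubtree(nodeId);
    return null;
  }
}

// Repository side of the same call, assuming the supabase_flutter v2 client.
class SupabaseOrganizationUnitRepository implements OrganizationUnitRepository {
  SupabaseOrganizationUnitRepository(this._client);
  final SupabaseClient _client;

  @override
  Future<String> getPath(String nodeId) async {
    final row = await _client
        .from('organization_units') // table name is an assumption
        .select('path')
        .eq('id', nodeId)
        .single();
    return row['path'] as String;
  }

  @override
  Future<void> updateSubtreePaths(String nodeId, String newParentPath) async {
    await _client.rpc('update_subtree_paths', params: {
      'node_id': nodeId,
      'new_parent_path': newParentPath,
    });
  }
}
```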
Testing Requirements
Unit tests (flutter_test) with mocked OrganizationUnitRepository and HierarchyCache using mockito or mocktail. Test suite must include: createNode happy path, createNode cycle prevention, createNode level type violation, moveNode success with path update assertion, moveNode cycle prevention, deleteNode with children (should fail), deleteNode with no children (should succeed), getSubtree from cache hit, getSubtree from cache miss triggering repository fetch, getAncestors empty (root node). Integration test: run against a local Supabase instance and assert moveNode correctly updates materialized paths for all descendants. Target 85% branch coverage on HierarchyService.
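One of the listed cases (moveNode cycle prevention) sketched with mocktail, assuming the service and repository shapes from the implementation notes above; names should be adapted to the real interfaces.

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';
// import 'hierarchy_service.dart'; // the HierarchyService sketch and its interfaces

class MockOrganizationUnitRepository extends Mock implements OrganizationUnitRepository {}

class MockHierarchyCache extends Mock implements HierarchyCache {}

void main() {
  late MockOrganizationUnitRepository repository;
  late MockHierarchyCache cache;
  late HierarchyService service;

  setUp(() {
    repository = MockOrganizationUnitRepository();
    cache = MockHierarchyCache();
    service = HierarchyService(repository, cache);
  });

  test('moveNode rejects moving a node under its own descendant', () async {
    // The target parent's materialized path lies inside the moved node's subtree.
    when(() => repository.getPath('region-1')).thenAnswer((_) async => 'root.region-1');
    when(() => repository.getPath('local-3')).thenAnswer((_) async => 'root.region-1.local-3');

    final failure = await service.moveNode('region-1', 'local-3');

    expect(failure, isNotNull);
    // No path rewrite and no cache invalidation may happen on a rejected move.
    verifyNever(() => repository.updateSubtreePaths(any(), any()));
    verifyNever(() => cache.invalidateSubtree(any()));
  });
}
```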
Recursive aggregation queries across four hierarchy levels (national → region → local) with 1,400 leaf nodes may be too slow for real-time dashboard requests, exceeding the 200ms target and causing spinner timeouts.
Mitigation & Contingency
Mitigation: Implement aggregation as a Supabase RPC using a single recursive CTE rather than multiple round-trip queries. Pre-compute aggregations nightly via a scheduled Edge Function and cache results. For real-time needs, aggregate only the immediate subtree on demand.
Contingency: Surface a 'Refreshing...' indicator and serve stale cached aggregations immediately. Queue an async recalculation and push the updated data via Supabase Realtime when it is ready, so the admin dashboard is never blocked.
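The stale-while-revalidate contingency could look roughly like this on the Dart side, assuming the supabase_flutter v2 Realtime API; the unit_aggregations table, channel naming, and column names are assumptions for this sketch.

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Illustrative stale-while-revalidate source for dashboard aggregations.
class AggregationDashboardSource {
  AggregationDashboardSource(this._client);
  final SupabaseClient _client;

  final Map<String, Map<String, dynamic>> _cache = {}; // nodeId -> latest totals

  /// Returns whatever is cached (possibly stale) so the dashboard renders
  /// immediately while the 'Refreshing...' indicator is shown.
  Map<String, dynamic>? staleTotalsFor(String nodeId) => _cache[nodeId];

  /// Listens for the async recalculation (written by the scheduled Edge
  /// Function or queued job) landing in the pre-computed aggregations table,
  /// and invokes [onFresh] so the UI can drop its indicator.
  void listenForFreshTotals(String nodeId, void Function(Map<String, dynamic>) onFresh) {
    _client
        .channel('aggregations-$nodeId')
        .onPostgresChanges(
          event: PostgresChangeEvent.update,
          schema: 'public',
          table: 'unit_aggregations', // table name is an assumption
          filter: PostgresChangeFilter(
            type: PostgresChangeFilterType.eq,
            column: 'unit_id',
            value: nodeId,
          ),
          callback: (payload) {
            final totals = payload.newRecord;
            _cache[nodeId] = totals;
            onFresh(totals);
          },
        )
        .subscribe();
  }
}
```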
The 5-chapter limit and primary-assignment constraint are NHF-specific. Applying these rules globally may break HLF and Blindeforbundet configurations where different limits apply, requiring per-organization configuration that was not initially scoped.
Mitigation & Contingency
Mitigation: Make the maximum assignment count a configurable value stored in the organization's feature-flag or settings table rather than a hardcoded constant. Design the assignment service to read this limit at runtime per organization.
Contingency: Default the limit to a high value (e.g., 100) for organizations other than NHF, effectively making it non-restrictive, while keeping the enforcement logic intact for when per-org configuration is fully implemented.
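A sketch of the runtime lookup with the non-restrictive fallback; the organization_settings table and max_assignments column are assumptions, only the default of 100 comes from the contingency above.

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Reads the per-organization assignment limit at runtime instead of relying
/// on a hardcoded NHF constant. Table and column names are illustrative.
class AssignmentLimitProvider {
  AssignmentLimitProvider(this._client);
  final SupabaseClient _client;

  /// Organizations without an explicit limit (e.g. HLF, Blindeforbundet before
  /// per-org configuration is fully implemented) get a non-restrictive default.
  static const int _nonRestrictiveDefault = 100;

  Future<int> maxAssignmentsFor(String organizationId) async {
    final row = await _client
        .from('organization_settings')
        .select('max_assignments')
        .eq('organization_id', organizationId)
        .maybeSingle();

    return (row?['max_assignments'] as int?) ?? _nonRestrictiveDefault;
  }
}
```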
The searchable parent dropdown in HierarchyNodeEditor must search across up to 1,400 units efficiently. Client-side filtering of the full hierarchy may be slow; server-side search adds complexity and latency.
Mitigation & Contingency
Mitigation: Use the in-memory hierarchy cache as the search corpus — since the cache already holds the flat unit list, client-side filtering with a debounced input is sufficient and avoids extra Supabase calls. Pre-build a search index on cache load.
Contingency: Cap the dropdown to showing the 50 most recently accessed units by default, with a 'search all' option that triggers a server-side full-text query. This keeps the common case fast while supporting edge cases.
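A sketch of the debounced client-side filter over the cached unit list; the OrgUnit model, the 250 ms debounce, and the result cap are illustrative choices, not requirements.

```dart
import 'dart:async';

/// Minimal unit model for the sketch; the real one comes from the domain layer.
class OrgUnit {
  const OrgUnit(this.id, this.name);
  final String id;
  final String name;
}

/// Debounced search over the in-memory hierarchy cache, so the parent dropdown
/// avoids extra Supabase calls in the common case.
class ParentSearchController {
  ParentSearchController(List<OrgUnit> cachedUnits)
      // Pre-build a lowercase index once at cache load instead of lowercasing
      // ~1,400 names on every keystroke.
      : _index = [for (final u in cachedUnits) (unit: u, normalized: u.name.toLowerCase())];

  final List<({OrgUnit unit, String normalized})> _index;
  Timer? _debounce;

  /// Debounces keystrokes and returns at most [limit] matches to keep the
  /// dropdown rendering cheap.
  void onQueryChanged(String query, void Function(List<OrgUnit>) onResults, {int limit = 50}) {
    _debounce?.cancel();
    _debounce = Timer(const Duration(milliseconds: 250), () {
      final q = query.trim().toLowerCase();
      final matches = <OrgUnit>[];
      for (final entry in _index) {
        if (entry.normalized.contains(q)) {
          matches.add(entry.unit);
          if (matches.length >= limit) break;
        }
      }
      onResults(matches);
    });
  }

  void dispose() => _debounce?.cancel();
}
```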