Priority: Critical · Complexity: High · Area: Backend · Status: Pending · Owner: Backend Specialist · Tier 3

Acceptance Criteria

createNode(parentId, name, levelType) validates via HierarchyStructureValidator before writing, persists via OrganizationUnitRepository, updates Hierarchy Cache, and returns the created OrganizationUnit
moveNode(nodeId, newParentId) prevents moves that would create cycles or violate level-type ordering; on success, updates the materialized path for the node and all its descendants atomically
deleteNode(nodeId) raises a HierarchyConstraintException if the node has active children or active user assignments; on success removes the node and triggers cache invalidation
getSubtree(rootId, {int? maxDepth}) returns a HierarchyTreeSnapshot containing all descendant nodes up to maxDepth levels, sourced from cache when available
getAncestors(nodeId) returns an ordered List<OrganizationUnit> from the node up to the national root, sourced from cache when available
resolveLevelType(nodeId) returns the correct LevelType enum value based on the node's depth in the tree
All write operations (create, move, delete) are wrapped in logical transactions — partial failures leave the database in the pre-operation state
The service has no direct Supabase client references — it depends only on OrganizationUnitRepository and HierarchyCache abstractions (dependency inversion)
Unit tests cover all six public methods with at least 3 cases each (happy path, validation failure, cache miss)
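The criteria above imply a public surface for the service. A minimal Dart sketch of that surface follows; the method names come from this spec, while the fields on OrganizationUnit and the exact LevelType values are assumptions for illustration:

```dart
// Assumed level ordering; the real enum may differ.
enum LevelType { national, region, local, chapter }

class OrganizationUnit {
  final String id;
  final String? parentId;
  final String name;
  final LevelType levelType;
  final String path; // materialized path, e.g. "root.r1.l3"
  const OrganizationUnit(
      this.id, this.parentId, this.name, this.levelType, this.path);
}

class HierarchyTreeSnapshot {
  final OrganizationUnit root;
  final List<OrganizationUnit> descendants;
  const HierarchyTreeSnapshot(this.root, this.descendants);
}

/// The six public methods named in the acceptance criteria.
abstract class HierarchyService {
  Future<OrganizationUnit> createNode(
      String parentId, String name, LevelType levelType);
  Future<void> moveNode(String nodeId, String newParentId);
  Future<void> deleteNode(String nodeId);
  Future<HierarchyTreeSnapshot> getSubtree(String rootId, {int? maxDepth});
  Future<List<OrganizationUnit>> getAncestors(String nodeId);
  LevelType resolveLevelType(String nodeId);
}
```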

Technical Requirements

Frameworks
Flutter
Dart
BLoC (for upstream consumers)
APIs
Supabase REST via OrganizationUnitRepository abstraction
Data Models
OrganizationUnit
HierarchyNode
HierarchyTreeSnapshot
LevelType
HierarchyConstraintException
Performance Requirements
getSubtree and getAncestors must return from cache within 10ms when cache is warm
moveNode path update for a subtree of 500 nodes must complete in under 2 seconds (use batch update via Supabase RPC)
No single service method may issue more than 3 sequential Supabase calls — use batch queries or RPCs for operations requiring multiple reads
Security Requirements
Service methods must not accept raw SQL fragments as parameters — all inputs are typed Dart values
deleteNode must check for active user assignments before deletion to prevent data integrity violations
All exceptions thrown by the service must be typed domain exceptions (HierarchyConstraintException, HierarchyNotFoundException) — never leak raw Supabase/PostgreSQL errors to callers
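One way to enforce the no-leaked-errors rule is a small guard that wraps every repository call and rethrows failures as domain exceptions. The exception names below come from this spec; the error-string matching is a placeholder assumption, since the real Supabase error shapes are not specified here:

```dart
class HierarchyConstraintException implements Exception {
  final String message;
  HierarchyConstraintException(this.message);
  @override
  String toString() => 'HierarchyConstraintException: $message';
}

class HierarchyNotFoundException implements Exception {
  final String nodeId;
  HierarchyNotFoundException(this.nodeId);
  @override
  String toString() => 'HierarchyNotFoundException: $nodeId';
}

/// Runs [action]; any non-domain failure is converted to a typed domain
/// exception so raw Supabase/PostgreSQL errors never reach callers.
Future<T> guard<T>(Future<T> Function() action, String nodeId) async {
  try {
    return await action();
  } on HierarchyConstraintException {
    rethrow; // already a domain exception
  } catch (e) {
    // Hypothetical mapping: "PGRST116"/"0 rows" is how PostgREST-style
    // not-found errors are assumed to surface here.
    final text = e.toString();
    if (text.contains('PGRST116') || text.contains('0 rows')) {
      throw HierarchyNotFoundException(nodeId);
    }
    throw HierarchyConstraintException('operation failed: $text');
  }
}
```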

Execution Context

Execution Tier
Tier 3

Tier 3 - 413 tasks

Can start after Tier 2 completes

Implementation Notes

Define the service as class HierarchyService with constructor injection of OrganizationUnitRepository and HierarchyCache interfaces — never instantiate dependencies inside the class.

For moveNode, compute the new path prefix and issue a single Supabase RPC call (update_subtree_paths(nodeId, newParentPath)), implemented as a PL/pgSQL function that updates all descendants in one transaction — this avoids issuing N individual updates from Dart.

Represent the tree in memory as a Map indexed by nodeId for O(1) lookups when building subtree responses from cache.

For deleteNode, check the children count first and the assignment count second — fail fast on the cheaper check.
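The moveNode flow in the notes above can be sketched as follows: a cycle check against the materialized path, then a single RPC that moves the whole subtree. The repository method names and signatures here are assumptions; only the RPC name update_subtree_paths comes from the notes:

```dart
// Assumed repository abstraction; updateSubtreePaths is presumed to wrap
// the update_subtree_paths PL/pgSQL RPC mentioned in the notes.
abstract class OrganizationUnitRepository {
  Future<String?> pathOf(String nodeId);
  Future<void> updateSubtreePaths(String nodeId, String newParentPath);
}

class HierarchyConstraintException implements Exception {
  final String message;
  HierarchyConstraintException(this.message);
}

Future<void> moveNode(
    OrganizationUnitRepository repo, String nodeId, String newParentId) async {
  final nodePath = await repo.pathOf(nodeId);
  final parentPath = await repo.pathOf(newParentId);
  if (nodePath == null || parentPath == null) {
    throw HierarchyConstraintException('node or parent not found');
  }
  // Cycle check: the new parent must not live inside the moved subtree,
  // which with materialized paths is a simple prefix test.
  if (parentPath == nodePath || parentPath.startsWith('$nodePath.')) {
    throw HierarchyConstraintException('move would create a cycle');
  }
  // One RPC updates the node and all descendants in a single transaction,
  // instead of N round trips from Dart.
  await repo.updateSubtreePaths(nodeId, parentPath);
}
```

Note this issues exactly two reads plus one RPC, which also satisfies the three-sequential-calls ceiling in the performance requirements.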

Use a Result<T, E> type as the return value for all public methods rather than throwing exceptions, so callers are forced to handle both success and failure paths explicitly; the typed domain exceptions then travel as the error value instead of being thrown.
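A minimal sketch of such a Result type, using Dart 3 sealed classes (the exact shape is an assumption; any exhaustively matchable success/failure pair works):

```dart
// Sealed base: the compiler knows Ok and Err are the only subtypes,
// so switch expressions over Result must cover both.
sealed class Result<T, E> {
  const Result();
}

final class Ok<T, E> extends Result<T, E> {
  final T value;
  const Ok(this.value);
}

final class Err<T, E> extends Result<T, E> {
  final E error;
  const Err(this.error);
}

// Callers cannot ignore the failure arm: omitting Err here is a
// compile-time error thanks to exhaustiveness checking.
String describe(Result<int, String> r) => switch (r) {
      Ok(value: final v) => 'ok: $v',
      Err(error: final e) => 'error: $e',
    };
```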

Testing Requirements

Unit tests (flutter_test) with mocked OrganizationUnitRepository and HierarchyCache using mockito or mocktail. Test suite must include: createNode happy path, createNode cycle prevention, createNode level type violation, moveNode success with path update assertion, moveNode cycle prevention, deleteNode with children (should fail), deleteNode with no children (should succeed), getSubtree from cache hit, getSubtree from cache miss triggering repository fetch, getAncestors empty (root node). Integration test: run against a local Supabase instance and assert moveNode correctly updates materialized paths for all descendants. Target 85% branch coverage on HierarchyService.
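One of the listed cases — getSubtree cache miss triggering a repository fetch — might look like the sketch below with mocktail. It assumes a project with flutter_test and mocktail as dev dependencies; the cache and repository shapes are simplified stand-ins, not the real abstractions:

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

// Simplified stand-ins for the real abstractions (assumed shapes).
abstract class HierarchyCache {
  List<String>? subtreeIds(String rootId);
  void storeSubtree(String rootId, List<String> ids);
}

abstract class OrganizationUnitRepository {
  Future<List<String>> fetchSubtreeIds(String rootId);
}

class MockCache extends Mock implements HierarchyCache {}

class MockRepo extends Mock implements OrganizationUnitRepository {}

// The logic under test: cache-first read with repository fallback.
Future<List<String>> getSubtreeIds(HierarchyCache cache,
    OrganizationUnitRepository repo, String rootId) async {
  final cached = cache.subtreeIds(rootId);
  if (cached != null) return cached;
  final fetched = await repo.fetchSubtreeIds(rootId);
  cache.storeSubtree(rootId, fetched); // warm the cache for next time
  return fetched;
}

void main() {
  test('cache miss falls back to repository and warms the cache', () async {
    final cache = MockCache();
    final repo = MockRepo();
    when(() => cache.subtreeIds('r1')).thenReturn(null); // cache miss
    when(() => repo.fetchSubtreeIds('r1')).thenAnswer((_) async => ['a', 'b']);

    final ids = await getSubtreeIds(cache, repo, 'r1');

    expect(ids, ['a', 'b']);
    verify(() => cache.storeSubtree('r1', ['a', 'b'])).called(1);
  });
}
```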

Component
Hierarchy Aggregation Service
Type: Service · Priority: High
Epic Risks (3)
Risk 1 · Impact: High · Probability: Medium · Category: Technical

Recursive aggregation queries spanning the four hierarchy levels (from the national root down through regions to local leaf units) over 1,400 leaf nodes may be too slow for real-time dashboard requests, exceeding the 200ms target and causing spinner timeouts.

Mitigation & Contingency

Mitigation: Implement aggregation as a Supabase RPC using a single recursive CTE rather than multiple round-trip queries. Pre-compute aggregations nightly via a scheduled Edge Function and cache results. For real-time needs, aggregate only the immediate subtree on demand.

Contingency: Surface a 'Refreshing...' indicator and serve stale cached aggregations immediately. Queue an async recalculation and push updated data via Supabase Realtime when ready, avoiding blocking the admin dashboard.

Risk 2 · Impact: Medium · Probability: Medium · Category: Scope

The 5-chapter limit and primary-assignment constraint are NHF-specific. Applying these rules globally may break HLF and Blindeforbundet configurations where different limits apply, requiring per-organization configuration that was not initially scoped.

Mitigation & Contingency

Mitigation: Make the maximum assignment count a configurable value stored in the organization's feature-flag or settings table rather than a hardcoded constant. Design the assignment service to read this limit at runtime per organization.

Contingency: Default the limit to a high value (e.g., 100) for organizations other than NHF, effectively making it non-restrictive, while keeping the enforcement logic intact for when per-org configuration is fully implemented.
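The mitigation and contingency above combine into a small resolver: read the per-organization limit at runtime and fall back to a non-restrictive default. The class and settings-map shape are illustrative assumptions; only the NHF limit of 5 and the default of 100 come from this document:

```dart
/// Resolves the maximum assignment count per organization.
/// In production the map would be loaded from the organization's
/// feature-flag/settings table (assumed storage, not specified here).
class AssignmentLimitResolver {
  final Map<String, int> _limits; // orgId -> max assignments
  static const int _nonRestrictiveDefault = 100;

  AssignmentLimitResolver(this._limits);

  /// Organizations without an explicit limit get the high default,
  /// so the enforcement logic stays in place but never bites.
  int maxAssignmentsFor(String orgId) =>
      _limits[orgId] ?? _nonRestrictiveDefault;
}
```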

Risk 3 · Impact: Medium · Probability: Low · Category: Technical

The searchable parent dropdown in HierarchyNodeEditor must search across up to 1,400 units efficiently. Client-side filtering of the full hierarchy may be slow; server-side search adds complexity and latency.

Mitigation & Contingency

Mitigation: Use the in-memory hierarchy cache as the search corpus — since the cache already holds the flat unit list, client-side filtering with a debounced input is sufficient and avoids extra Supabase calls. Pre-build a search index on cache load.

Contingency: Cap the dropdown to showing the 50 most recently accessed units by default, with a 'search all' option that triggers a server-side full-text query. This keeps the common case fast while supporting edge cases.
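The debounced client-side filtering proposed in the mitigation can be sketched as below. The class name, 250 ms delay, and substring matching are illustrative assumptions; the cached flat unit list is the corpus named in the mitigation:

```dart
import 'dart:async';

class UnitSearch {
  final List<String> names; // flat unit list from the hierarchy cache
  Timer? _debounce;

  UnitSearch(this.names);

  /// Pure filter over the cached corpus; a linear scan is cheap
  /// for ~1,400 entries.
  List<String> filter(String query) {
    final q = query.toLowerCase();
    return names.where((n) => n.toLowerCase().contains(q)).toList();
  }

  /// Debounced entry point wired to the dropdown's onChanged: waits
  /// 250 ms after the last keystroke before filtering, so typing stays
  /// smooth and no per-keystroke Supabase calls are made.
  void onQueryChanged(String query, void Function(List<String>) onResults) {
    _debounce?.cancel();
    _debounce = Timer(
        const Duration(milliseconds: 250), () => onResults(filter(query)));
  }
}
```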