Priority: Critical · Complexity: High · Domain: Database · Status: Pending · Owner: Database Specialist · Execution Tier: 1

Acceptance Criteria

An `OrganizationUnitRepository` abstract class and a `SupabaseOrganizationUnitRepository` implementation are created
`getChildren(parentId)` returns immediate children of a node, sorted by name
`getAncestors(nodeId)` returns the ordered path from the given node up to and including the root node
`getSubtree(rootId, maxDepth)` returns all descendants of a node up to `maxDepth` levels deep, using a Supabase RPC backed by a PostgreSQL recursive CTE
`getSubtree` supports depth-aware pagination: given `page` and `pageSize` parameters it returns a slice of nodes at a specific depth level
Repository correctly handles NHF's three-level structure: national associations (level 1) → regions (level 2) → chapters (level 3, up to ~1,400)
`getSubtree` for the root node of a 1,400-chapter tree returns all nodes within 2 seconds
All tree query results are served from the `HierarchyCacheRepository` on cache hits — Supabase is only called on cache misses
A `createUnit(name, parentId, level)` method inserts a new node and invalidates the cache for the parent and all its ancestors
A `moveUnit(unitId, newParentId)` method updates the parent reference and invalidates stale cache entries for both old and new parent subtrees
A `deleteUnit(unitId)` method rejects deletion if the node has children, returning a `NonEmptyNodeException` through the `Result` error channel
Repository methods use typed `Result<T, E>` error handling — no uncaught exceptions propagate to callers
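The criteria above imply a repository contract roughly like the following sketch. The `Result`, `RepositoryError`, and constructor shapes here are illustrative assumptions, not an existing API; the model types are the ones listed under Technical Requirements.

```dart
// Sketch of the repository contract implied by the acceptance criteria.
// Result<T, E> is an assumed helper type, not part of any SDK.
sealed class Result<T, E> {
  const Result();
}

class Ok<T, E> extends Result<T, E> {
  final T value;
  const Ok(this.value);
}

class Err<T, E> extends Result<T, E> {
  final E error;
  const Err(this.error);
}

// Assumed error types for illustration.
class RepositoryError {
  final String message;
  const RepositoryError(this.message);
}

class NonEmptyNodeException {
  final String unitId;
  const NonEmptyNodeException(this.unitId);
}

abstract class OrganizationUnitRepository {
  Future<Result<List<OrganizationalUnit>, RepositoryError>> getChildren(
      String parentId);
  Future<Result<AncestorPath, RepositoryError>> getAncestors(String nodeId);
  Future<Result<SubtreePage, RepositoryError>> getSubtree(String rootId,
      {int maxDepth = 3, int? page, int? pageSize});
  Future<Result<OrganizationalUnit, RepositoryError>> createUnit(
      String name, String? parentId, int level);
  Future<Result<void, RepositoryError>> moveUnit(
      String unitId, String newParentId);
  Future<Result<void, NonEmptyNodeException>> deleteUnit(String unitId);
}
```

Keeping `deleteUnit`'s error type narrowed to `NonEmptyNodeException` lets callers handle the non-empty case exhaustively without catching anything.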

Technical Requirements

frameworks
Flutter
Supabase Dart SDK
flutter_test + mocktail
BLoC or Riverpod (for DI)
apis
Supabase RPC: `get_subtree(root_id UUID, max_depth INT)` — PostgreSQL recursive CTE
Supabase RPC: `get_ancestors(node_id UUID)` — PostgreSQL recursive CTE
Supabase REST API — table: `organization_units`
data models
OrganizationalUnit
HierarchyNode
AncestorPath
SubtreePage
HierarchyLevel
performance requirements
`getSubtree` for root of a 1,400-node tree completes in under 2 seconds (network + DB)
`getChildren` completes in under 150ms for nodes with up to 200 direct children
`getAncestors` completes in under 100ms for trees up to 10 levels deep
Depth-aware pagination returns the first page within the same latency budget as an unpaginated `getChildren` call
security requirements
All queries respect Supabase RLS — no service-role key bypass on client
`getSubtree` RPC must validate that the requesting user has read access to the root node before executing the recursive CTE
Recursive CTE must include a depth limit guard (e.g., `WHERE depth <= 20`) to prevent infinite loops from circular references in corrupted data
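The two security requirements can live inside the RPC itself. A sketch of that shape is below; `has_unit_read_access` is a hypothetical helper standing in for the project's actual permission check, and the function deliberately runs as the caller (default `SECURITY INVOKER`) so table RLS still applies to every row the CTE touches.

```sql
-- Sketch: access check before recursion, plus a hard depth cap.
-- has_unit_read_access(...) is a hypothetical permission helper.
create or replace function get_subtree(root_id uuid, max_depth int)
returns setof organization_units
language plpgsql
as $$
begin
  -- Validate read access on the root before doing any recursive work.
  if not has_unit_read_access(auth.uid(), root_id) then
    raise exception 'access denied to unit %', root_id;
  end if;

  return query
  with recursive subtree as (
    select ou.*, 0 as depth
    from organization_units ou
    where ou.id = root_id
    union all
    select ou.*, s.depth + 1
    from organization_units ou
    join subtree s on ou.parent_id = s.id
    -- least(..., 20) guards against infinite loops from circular
    -- parent_id references in corrupted data.
    where s.depth < least(max_depth, 20)
  )
  select s.id, s.name, s.parent_id, s.level, s.created_at
  from subtree s
  order by s.depth, s.name;
end;
$$;
```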

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Model the `organization_units` table with an adjacency-list schema: `(id UUID, name TEXT, parent_id UUID NULLABLE, level INT, created_at TIMESTAMPTZ)`. Use a PostgreSQL recursive CTE for `get_subtree` and `get_ancestors` — these are implemented as Supabase RPC functions called via `supabase.rpc('get_subtree', params: {'root_id': id, 'max_depth': depth})`. The recursive CTE for subtree should be: `WITH RECURSIVE subtree AS (SELECT *, 0 AS depth FROM organization_units WHERE id = root_id UNION ALL SELECT ou.*, s.depth + 1 FROM organization_units ou JOIN subtree s ON ou.parent_id = s.id WHERE s.depth < max_depth) SELECT * FROM subtree ORDER BY depth, name`. Cache integration: inject `HierarchyCacheRepository` and check it before every read; invalidate on every write.
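The cache-first read path described above might look like the following inside `SupabaseOrganizationUnitRepository`. The `HierarchyCacheRepository` method names, `SubtreePage.fromRows`, and the `Result` helpers are assumptions for illustration.

```dart
// Sketch of the cache-first read: cache hit short-circuits, cache miss
// falls through to the Supabase RPC and repopulates the cache.
Future<Result<SubtreePage, RepositoryError>> getSubtree(String rootId,
    {int maxDepth = 3}) async {
  final cached = await _cache.getSubtree(rootId, maxDepth);
  if (cached != null) return Ok(cached); // cache hit: Supabase never called

  try {
    final rows = await _supabase.rpc('get_subtree',
        params: {'root_id': rootId, 'max_depth': maxDepth});
    final page = SubtreePage.fromRows(rows as List); // assumed factory
    await _cache.putSubtree(rootId, maxDepth, page);
    return Ok(page);
  } on PostgrestException catch (e) {
    return Err(RepositoryError(e.message));
  }
}
```

Catching `PostgrestException` at the repository boundary is what keeps the "no uncaught exceptions propagate to callers" criterion honest.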

For depth-aware pagination, add `LIMIT pageSize OFFSET page * pageSize` to the CTE's final SELECT, filtered by `depth = targetDepth`. Wrap `moveUnit`'s parent update in a Supabase RPC so the database change is atomic, and invalidate the cache entries for both the old and new parent subtrees only after the RPC succeeds; two sequential REST calls risk leaving the cache in an inconsistent state if the second call fails.
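The paginated variant only needs a different tail on the same CTE. Parameter names (`target_depth`, `page`, `page_size`) are illustrative; note the recursion can also stop at the target level rather than at `max_depth`, since deeper rows would be discarded anyway.

```sql
-- Sketch: depth-aware pagination over the subtree CTE.
with recursive subtree as (
  select ou.*, 0 as depth
  from organization_units ou
  where ou.id = root_id
  union all
  select ou.*, s.depth + 1
  from organization_units ou
  join subtree s on ou.parent_id = s.id
  -- No need to recurse past the level being paginated.
  where s.depth < target_depth
)
select *
from subtree
where depth = target_depth
order by name
limit page_size offset page * page_size;
```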

Testing Requirements

Write unit tests with a mocked Supabase client covering: (1) getChildren with 0, 1, and N children; (2) getAncestors returning correct ordered path from a leaf to root; (3) getSubtree with maxDepth=1 (children only) and maxDepth=3 (full NHF depth); (4) cache hit path where Supabase is never called; (5) createUnit triggers cache invalidation for parent; (6) moveUnit triggers cache invalidation for both old and new parent subtrees; (7) deleteUnit with children returns NonEmptyNodeException. Write one integration test against a Supabase test instance with a seeded 50-node tree verifying the full subtree is returned correctly. Include a performance test seeding 1,400 nodes and asserting `getSubtree` completes within the 2-second SLA.
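Test (4), the cache-hit path, is the one most worth pinning down precisely, since it encodes the "Supabase only on cache misses" criterion. A mocktail sketch, with mock class names and constructor shape assumed:

```dart
// Sketch of the cache-hit unit test: Supabase must never be called.
class MockSupabaseClient extends Mock implements SupabaseClient {}

class MockHierarchyCache extends Mock implements HierarchyCacheRepository {}

void main() {
  test('getSubtree serves a cache hit without touching Supabase', () async {
    final supabase = MockSupabaseClient();
    final cache = MockHierarchyCache();
    final repo = SupabaseOrganizationUnitRepository(supabase, cache);
    final cachedPage = SubtreePage(nodes: const []); // assumed constructor

    when(() => cache.getSubtree('root-1', 3))
        .thenAnswer((_) async => cachedPage);

    final result = await repo.getSubtree('root-1', maxDepth: 3);

    expect(result, isA<Ok<SubtreePage, RepositoryError>>());
    // The core assertion: the RPC was never invoked on a cache hit.
    verifyNever(() => supabase.rpc(any(), params: any(named: 'params')));
  });
}
```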

Component
Hierarchy Aggregation Service
Type: Service · Priority: High
Epic Risks (3)
Impact: High · Probability: Medium · Category: Technical

Recursive aggregation queries across the three hierarchy levels (national → region → local) with 1,400 leaf nodes may be too slow for real-time dashboard requests, exceeding the 200ms target and causing spinner timeouts.

Mitigation & Contingency

Mitigation: Implement aggregation as a Supabase RPC using a single recursive CTE rather than multiple round-trip queries. Pre-compute aggregations nightly via a scheduled Edge Function and cache results. For real-time needs, aggregate only the immediate subtree on demand.
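The single-round-trip shape the mitigation describes could look like this: one recursive CTE that carries each national association's id down through its subtree, then aggregates per root. Column choices beyond the `organization_units` schema are assumptions.

```sql
-- Sketch: roll up descendant counts per national association in one query,
-- instead of one round trip per node.
with recursive subtree as (
  -- Anchor: each level-1 unit is its own aggregation root.
  select ou.id, ou.id as root_id
  from organization_units ou
  where ou.level = 1
  union all
  -- Propagate root_id down to every descendant.
  select ou.id, s.root_id
  from organization_units ou
  join subtree s on ou.parent_id = s.id
)
select root_id, count(*) - 1 as descendant_count  -- exclude the root itself
from subtree
group by root_id;
```

The same pattern extends to summing member counts or other per-unit metrics by joining the metric table before the `group by`.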

Contingency: Surface a 'Refreshing...' indicator and serve stale cached aggregations immediately. Queue an async recalculation and push updated data via Supabase Realtime when ready, avoiding blocking the admin dashboard.

Impact: Medium · Probability: Medium · Category: Scope

The 5-chapter limit and primary-assignment constraint are NHF-specific. Applying these rules globally may break HLF and Blindeforbundet configurations where different limits apply, requiring per-organization configuration that was not initially scoped.

Mitigation & Contingency

Mitigation: Make the maximum assignment count a configurable value stored in the organization's feature-flag or settings table rather than a hardcoded constant. Design the assignment service to read this limit at runtime per organization.
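Reading the limit at runtime per organization might look like the sketch below. The `organization_settings` table and `max_chapter_assignments` key are assumed names, not an existing schema.

```dart
// Sketch: fetch the per-organization assignment limit at runtime,
// falling back to a non-restrictive default when none is configured.
Future<int> maxAssignmentsFor(String orgId) async {
  final row = await _supabase
      .from('organization_settings') // assumed table
      .select('max_chapter_assignments')
      .eq('organization_id', orgId)
      .maybeSingle();
  return (row?['max_chapter_assignments'] as int?) ?? 100;
}
```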

Contingency: Default the limit to a high value (e.g., 100) for organizations other than NHF, effectively making it non-restrictive, while keeping the enforcement logic intact for when per-org configuration is fully implemented.

Impact: Medium · Probability: Low · Category: Technical

The searchable parent dropdown in HierarchyNodeEditor must search across up to 1,400 units efficiently. Client-side filtering of the full hierarchy may be slow; server-side search adds complexity and latency.

Mitigation & Contingency

Mitigation: Use the in-memory hierarchy cache as the search corpus — since the cache already holds the flat unit list, client-side filtering with a debounced input is sufficient and avoids extra Supabase calls. Pre-build a search index on cache load.
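The debounced client-side filter over the cached flat list is small enough to sketch directly; `_allUnits` is assumed to be the flat unit list already held by the hierarchy cache.

```dart
// Sketch: debounce keystrokes, then filter the cached unit list in memory.
Timer? _debounce;

void onSearchChanged(
    String query, void Function(List<OrganizationalUnit>) emit) {
  _debounce?.cancel();
  _debounce = Timer(const Duration(milliseconds: 250), () {
    final q = query.toLowerCase();
    emit(_allUnits
        .where((u) => u.name.toLowerCase().contains(q))
        .toList());
  });
}
```

At 1,400 units a linear `contains` scan per debounced keystroke is well under a millisecond, so a pre-built index is an optimization, not a requirement.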

Contingency: Cap the dropdown to showing the 50 most recently accessed units by default, with a 'search all' option that triggers a server-side full-text query. This keeps the common case fast while supporting edge cases.