Priority: critical | Complexity: high | Area: backend | Status: pending | Assignee: backend specialist | Tier: 0

Acceptance Criteria

OrgHierarchyService.buildTree() accepts a flat List<OrgNodeRow> and returns a typed OrgNode tree with parent-child relationships correctly resolved
Tree construction handles NHF's 4-level hierarchy (national → region → county → chapter) without special-casing any level
Tree construction handles flat 2-level structures (national → chapter) identically through the same interface
Orphan nodes (parent_id references a non-existent node) are logged with a warning and attached to root rather than throwing
Circular reference detection throws OrgHierarchyException with the offending node IDs included in the message
In-memory cache stores the built tree keyed by organisation_id with configurable TTL (default 5 minutes, minimum 30 seconds)
Cache hit returns the cached tree without a Supabase call; cache miss fetches from DB and populates cache
getTree(orgId) is the public interface; callers do not need to know about cache internals
Service is injectable via Riverpod and exposes a typed OrgNode model (id, parentId, name, code, level, children)
Performance: tree construction for 1400 nodes completes in under 100ms on a mid-range device
Unit tests cover: empty input, single-node tree, NHF-scale deep tree, flat 2-level tree, orphan handling, circular reference detection, cache hit/miss
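As a shared reference point for these criteria, the data shapes they imply could be sketched in Dart as follows. Field names come from the ticket's data-model list; the constructor shapes and the `fromRow` helper are illustrative assumptions, not a final design:

```dart
import 'dart:collection';

/// Flat DB row DTO mirroring the organisations table columns.
class OrgNodeRow {
  final String id;
  final String? parentId; // null for the root node
  final String name;
  final String code;
  final int level;
  final String orgType;

  const OrgNodeRow({
    required this.id,
    this.parentId,
    required this.name,
    required this.code,
    required this.level,
    required this.orgType,
  });
}

/// Typed tree node exposed to callers of getTree().
class OrgNode {
  final String id;
  final String? parentId;
  final String name;
  final String code;
  final int level;
  final String orgType;
  final List<OrgNode> _children = [];

  OrgNode.fromRow(OrgNodeRow row)
      : id = row.id,
        parentId = row.parentId,
        name = row.name,
        code = row.code,
        level = row.level,
        orgType = row.orgType;

  /// Read-only view so callers cannot mutate the cached tree.
  UnmodifiableListView<OrgNode> get children =>
      UnmodifiableListView(_children);
}
```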

Technical Requirements

frameworks
Flutter
Riverpod
supabase_flutter
apis
Supabase PostgREST — organisations table with id, parent_id, name, code, level, org_type columns
data models
OrgNode (id, parentId, name, code, level, orgType, children)
OrgNodeRow (flat DB row DTO)
OrgHierarchyCache (Map<String, CachedTree> with TTL metadata)
performance requirements
Tree build from 1400 flat rows: < 100ms
Cache lookup: O(1)
Memory footprint for NHF full tree: < 2MB
security requirements
Service must not expose any node data beyond what the authenticated user's RLS policy permits — raw Supabase queries must run as authenticated user, not service role
No org data stored to disk or shared outside the service boundary

Execution Context

Execution Tier
Tier 0

Tier 0 - 440 tasks

Implementation Notes

Use a two-pass algorithm: pass 1 builds a Map by id; pass 2 iterates again to wire parent→child references. This is O(n) and avoids recursive DB queries. Do NOT use recursive Supabase CTEs for the initial fetch — fetch all rows for the org in one query and build the tree in Dart for performance and offline resilience. Cache implementation: a plain Map with a DateTime expiry field per entry is sufficient — no external cache library needed.
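The two-pass build described above could look like the following sketch. `OrgNode.fromRow`, an internal `addChild` mutator, and the logger call are assumptions for illustration; the cycle check uses reachability from the root, which is one possible detection strategy, not a mandate of this ticket:

```dart
OrgNode buildTree(List<OrgNodeRow> rows) {
  if (rows.isEmpty) {
    throw OrgHierarchyException('cannot build a tree from an empty row list');
  }
  // Pass 1: index every node by id (O(n)).
  final byId = {for (final row in rows) row.id: OrgNode.fromRow(row)};

  // Pass 2: wire parent->child references (O(n)). No recursion, no DB calls,
  // and no level-specific branching: 2- and 4-level hierarchies take the
  // same path.
  OrgNode? root;
  final orphans = <OrgNode>[];
  for (final node in byId.values) {
    if (node.parentId == null) {
      root = node;
      continue;
    }
    final parent = byId[node.parentId];
    if (parent == null) {
      orphans.add(node); // parent_id references a node that does not exist
    } else {
      parent.addChild(node); // assumed internal mutator on OrgNode
    }
  }
  if (root == null) {
    throw OrgHierarchyException('no root row: every node has a parent_id');
  }
  for (final orphan in orphans) {
    // log.warning('orphan node ${orphan.id}, attaching to root');
    root.addChild(orphan); // attach to root instead of throwing, per criteria
  }

  // Cycle detection: nodes in a detached cycle are wired to each other but
  // never reachable from the root, so a reachability sweep exposes them.
  final reachable = <String>{};
  final stack = [root];
  while (stack.isNotEmpty) {
    final node = stack.removeLast();
    reachable.add(node.id);
    stack.addAll(node.children);
  }
  if (reachable.length != byId.length) {
    final cyclic = byId.keys.where((id) => !reachable.contains(id));
    throw OrgHierarchyException(
        'circular reference among nodes: ${cyclic.join(', ')}');
  }
  return root;
}
```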

Expose the service via a Riverpod Provider; use AsyncNotifier if the initial load should be observable. OrgNode.children should be an UnmodifiableListView to prevent accidental mutation of the cached tree. For the flat 2-level case, the algorithm naturally handles it β€” ensure no level-specific branching is introduced.
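The provider wiring and the plain-Map cache could be shaped like this. The provider names, the `supabaseClientProvider` accessor, the `OrgNodeRow.fromJson` helper, and the `organisation_id` filter column are all assumptions (the ticket lists the table columns but not the org filter column); the query uses the supabase_flutter v2 builder style:

```dart
class _CachedTree {
  final OrgNode tree;
  final DateTime expiresAt;
  _CachedTree(this.tree, this.expiresAt);
}

class OrgHierarchyService {
  OrgHierarchyService(this._client, {this.ttl = const Duration(minutes: 5)})
      : assert(ttl >= const Duration(seconds: 30)); // enforce minimum TTL

  final SupabaseClient _client;
  final Duration ttl;
  final _cache = <String, _CachedTree>{};

  Future<OrgNode> getTree(String orgId) async {
    final hit = _cache[orgId];
    if (hit != null && DateTime.now().isBefore(hit.expiresAt)) {
      return hit.tree; // cache hit: no Supabase call
    }
    // Cache miss: one query for all rows, tree built in Dart.
    final rows = await _client
        .from('organisations')
        .select()
        .eq('organisation_id', orgId); // filter column is an assumption
    final tree = buildTree(
        rows.map((json) => OrgNodeRow.fromJson(json)).toList());
    _cache[orgId] = _CachedTree(tree, DateTime.now().add(ttl));
    return tree;
  }
}

final orgHierarchyServiceProvider = Provider<OrgHierarchyService>(
  (ref) => OrgHierarchyService(ref.watch(supabaseClientProvider)),
);
```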

Testing Requirements

Unit tests (flutter_test) covering all acceptance criteria scenarios. Use a factory helper to generate synthetic flat OrgNodeRow lists at scale (1400 nodes, 4 levels). Test cache: inject a mock clock to simulate TTL expiry without real delays. Integration test: verify getTree() against a Supabase test project seeded with NHF-like structure.
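The factory helper and the mock-clock cache test could be sketched as below. This assumes the service accepts an injectable row fetcher and a `now` clock function; those constructor parameters, like all names here, are illustrative rather than required by the ticket:

```dart
/// Factory helper: roughly [count] synthetic flat rows; the root gets four
/// children and depth grows as count grows, so count = 1400 exercises an
/// NHF-scale tree.
List<OrgNodeRow> syntheticRows(int count) {
  final rows = <OrgNodeRow>[
    const OrgNodeRow(
        id: 'n0', name: 'National', code: 'NAT', level: 0,
        orgType: 'national'),
  ];
  for (var i = 1; i < count; i++) {
    final parent = rows[(i - 1) ~/ 4]; // ~4 children per node
    rows.add(OrgNodeRow(
      id: 'n$i', parentId: parent.id, name: 'Node $i', code: 'C$i',
      level: parent.level + 1, orgType: 'chapter',
    ));
  }
  return rows;
}

void main() {
  test('cache expires after TTL without real delays', () async {
    var fakeNow = DateTime(2025, 1, 1);
    var fetchCount = 0;
    final service = OrgHierarchyService.forTest(
      fetchRows: (_) async { fetchCount++; return syntheticRows(10); },
      now: () => fakeNow, // injected clock, per the mock-clock requirement
      ttl: const Duration(minutes: 5),
    );
    await service.getTree('org-1'); // miss: populates cache
    await service.getTree('org-1'); // hit: no fetch
    expect(fetchCount, 1);
    fakeNow = fakeNow.add(const Duration(minutes: 6));
    await service.getTree('org-1'); // expired: refetches
    expect(fetchCount, 2);
  });
}
```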

No widget tests required for this service.

Component
Organisation Hierarchy Service
Type: service | Priority: high
Epic Risks (4)
Impact: medium | Probability: high | Category: technical

OrgHierarchyNavigator rendering NHF's full 1,400-chapter tree in a single widget may cause Flutter frame-rate drops below 60 fps on mid-range devices, making the navigator unusable for NHF national admins.

Mitigation & Contingency

Mitigation: Implement lazy expansion: only load immediate children on node expand rather than the full tree upfront. Use virtual scrolling for long sibling lists. Test with a synthetic 1,400-node dataset on a low-end Android device during development.

Contingency: If lazy expansion is insufficient, replace the tree widget with a paginated drill-down navigator (select level → select child) that avoids rendering more than 50 nodes at a time.

Impact: medium | Probability: medium | Category: dependency

Bufdir may update their required export column structure or file format during or after development. If the AdminExportService hardcodes the current Bufdir schema, any format change requires a code release rather than a config update.

Mitigation & Contingency

Mitigation: Drive the Bufdir column mapping from a configuration repository rather than hardcoded constants. Abstract column definitions into a named schema config so that format changes require only a config update and re-deployment without service logic changes.

Contingency: If Bufdir format changes post-launch, release a config update within one sprint. If the change is structural (new required sections), scope a targeted service update and communicate timeline to partner organisations.

Impact: high | Probability: medium | Category: integration

Role transition side-effects in UserManagementService (e.g., certification expiry removing mentor from chapter listing, pause triggering coordinator notification) may interact with external services like HLF's website sync. Incomplete side-effect handling could leave the system in an inconsistent state.

Mitigation & Contingency

Mitigation: Model side-effects as explicit domain events published after the primary state change is persisted. Implement event handlers as idempotent operations so re-processing is safe. Write integration tests that assert all side-effects fire correctly for each role transition type.
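The event-plus-idempotent-handler pattern in this mitigation could take the following shape. Event and handler names are illustrative only, and the in-memory processed-set stands in for a persisted dedup store:

```dart
/// A role-transition side-effect modeled as an explicit domain event,
/// published only after the primary state change is persisted.
class RoleTransitionEvent {
  final String eventId; // stable id: re-delivery of the same event is safe
  final String userId;
  final String transition; // e.g. 'certification_expired', 'paused'
  const RoleTransitionEvent(this.eventId, this.userId, this.transition);
}

abstract class RoleTransitionHandler {
  /// Must be idempotent: handling the same eventId twice is a no-op.
  Future<void> handle(RoleTransitionEvent event);
}

class ChapterListingHandler implements RoleTransitionHandler {
  final Set<String> _processed = {}; // a persisted store in production

  @override
  Future<void> handle(RoleTransitionEvent event) async {
    if (!_processed.add(event.eventId)) return; // already applied: skip
    // ... remove the mentor from the chapter listing ...
  }
}
```

Because handlers are keyed on `eventId`, a failed side-effect can be safely re-triggered from the admin action described in the contingency without double-applying side-effects that already succeeded.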

Contingency: If a side-effect fails after the primary change is persisted, log the failure with full context and trigger a manual reconciliation alert to the on-call team. Provide an admin-accessible re-trigger action for failed side-effects.

Impact: medium | Probability: medium | Category: scope

If the AdminStatisticsService cache TTL is set too long, org_admin may see significantly stale KPI values (e.g., a mentor paused an hour ago still appears as active), undermining trust in the dashboard.

Mitigation & Contingency

Mitigation: Default cache TTL to 5 minutes with a manual refresh action on the dashboard. Implement cache invalidation triggered by UserManagementService write operations that affect counted entities.

Contingency: If staleness causes org admin complaints post-launch, reduce TTL to 60 seconds and introduce a real-time Supabase subscription for high-impact counters (paused mentors, expiring certifications).