Priority: critical | Complexity: high | Area: backend | Status: pending | Assignee: backend specialist | Execution tier: Tier 1

Acceptance Criteria

getDescendantIds(nodeId) returns a Set<String> containing the node itself plus all transitive descendants
Calling getDescendantIds on a leaf node returns a set with exactly one element (the node itself)
Calling getDescendantIds on the root node of NHF returns all 1,400+ chapter IDs plus region and national IDs
Result is computed from the in-memory cache (no additional Supabase calls beyond the initial tree fetch)
getDescendantIds throws OrgNodeNotFoundException if nodeId does not exist in the cached tree
AdminRlsGuard exposes a buildOrgFilter(nodeId) method that returns a Supabase query filter string (e.g., .filter('org_id', 'in', '(id1,id2,...)')) using the resolved descendant IDs
buildOrgFilter is used consistently in AdminStatisticsService and UserManagementService for all scoped queries
getDescendantIds for a 1,400-node tree completes in under 10ms
Unit tests: leaf node, mid-tree node (region → all chapters), root node, non-existent node, single-node org

Technical Requirements

Frameworks
Flutter
Riverpod
supabase_flutter
APIs
Supabase PostgREST: all data tables filtered by org_id IN (descendant_ids)
Data Models
OrgNode (from task-001)
AdminRlsGuard (stateless helper that consumes OrgHierarchyService)
Performance Requirements
getDescendantIds for full NHF tree: < 10ms
buildOrgFilter string construction: < 5ms regardless of set size
Security Requirements
The resolved descendant ID set must be derived from the server-fetched tree, never from client-provided parameters, to prevent scope escalation
Supabase RLS policies must enforce org-scoping server-side as a defence-in-depth layer; this client-side filter is a UX optimisation, not the security boundary

Execution Context

Execution Tier
Tier 1 (540 tasks in this tier)

Can start after Tier 0 completes

Implementation Notes

Implement getDescendantIds as an iterative traversal of the in-memory OrgNode tree. Use an explicit work queue rather than recursion to avoid stack overflow on deep trees (a FIFO queue gives breadth-first order, a stack gives depth-first; either is fine here since the result is an unordered Set). The result Set should include the queried node ID itself; this simplifies all downstream filtering to a single '.in()' clause. AdminRlsGuard should be a plain Dart class (not a Riverpod provider) since it is stateless and depends only on OrgHierarchyService.
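The note above can be sketched in plain Dart. The OrgNode field names (`id`, `children`) and the constructor shapes are assumptions; the real model comes from task-001.

```dart
import 'dart:collection';

// Assumed shape of the task-001 model: an id plus child nodes.
class OrgNode {
  final String id;
  final List<OrgNode> children;
  const OrgNode(this.id, [this.children = const []]);
}

class OrgNodeNotFoundException implements Exception {
  final String nodeId;
  OrgNodeNotFoundException(this.nodeId);
  @override
  String toString() => 'OrgNodeNotFoundException: $nodeId';
}

class OrgHierarchyService {
  // Id -> node index, built once from the server-fetched tree so lookups
  // never trust client-provided structure.
  final Map<String, OrgNode> _byId = {};

  OrgHierarchyService(OrgNode root) {
    final queue = Queue<OrgNode>()..add(root);
    while (queue.isNotEmpty) {
      final node = queue.removeFirst();
      _byId[node.id] = node;
      queue.addAll(node.children);
    }
  }

  /// Returns the queried node's ID plus all transitive descendant IDs.
  Set<String> getDescendantIds(String nodeId) {
    final start = _byId[nodeId];
    if (start == null) throw OrgNodeNotFoundException(nodeId);
    final result = <String>{};
    final queue = Queue<OrgNode>()..add(start);
    while (queue.isNotEmpty) {
      final node = queue.removeFirst();
      result.add(node.id);
      queue.addAll(node.children);
    }
    return result;
  }
}
```

Building the `_byId` index once in the constructor is what keeps each getDescendantIds call a pure in-memory walk with no extra Supabase round trips.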

Supabase PostgREST '.in()' filter accepts a list; for very large sets (1,400+ IDs), benchmark whether a single query with a 1,400-element IN clause is faster than a Supabase RPC call using a server-side CTE. Add a threshold constant (e.g., 500 IDs) above which the service automatically switches to a Supabase RPC `get_org_subtree_data` function to avoid URL length limits.
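A minimal sketch of the filter-value construction and the threshold switch. The helper names are illustrative, and the 500-ID constant and `get_org_subtree_data` RPC name are taken from the note above as placeholders to benchmark, not final values.

```dart
/// Threshold above which scoped queries should switch from a PostgREST
/// IN filter to the `get_org_subtree_data` RPC. Placeholder value from
/// the note above; tune it from benchmarks.
const int kOrgFilterRpcThreshold = 500;

/// Builds the value for `.filter('org_id', 'in', ...)`,
/// e.g. '(id1,id2,id3)'.
String buildOrgFilterValue(Set<String> descendantIds) =>
    '(${descendantIds.join(',')})';

/// True when the ID set is large enough that the IN-clause URL would
/// risk hitting length limits and the RPC path should be used instead.
bool shouldUseRpc(Set<String> descendantIds) =>
    descendantIds.length > kOrgFilterRpcThreshold;
```

On the small-set path this plugs straight into the acceptance-criteria filter shape: `query.filter('org_id', 'in', buildOrgFilterValue(ids))`.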

Testing Requirements

Unit tests (flutter_test): parameterised tests for all node positions (root, mid, leaf, missing). Test buildOrgFilter output format matches Supabase PostgREST filter syntax exactly. Integration test: execute a scoped query using buildOrgFilter against a Supabase test project and assert only records from the target subtree are returned. Verify that a user authenticated at region level cannot retrieve records from a sibling region via filter manipulation.
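A hedged sketch of the parameterised unit tests. `subtreeIds` is a stand-in over a plain adjacency map (the real tests would call OrgHierarchyService.getDescendantIds), and all node IDs are illustrative.

```dart
import 'package:test/test.dart';

// Stand-in for OrgHierarchyService.getDescendantIds: computes the
// subtree ID set (node itself plus descendants) from an adjacency map.
Set<String> subtreeIds(Map<String, List<String>> children, String start) {
  if (!children.containsKey(start)) {
    throw ArgumentError.value(start, 'start', 'unknown node');
  }
  final result = <String>{};
  final work = <String>[start];
  while (work.isNotEmpty) {
    final id = work.removeLast();
    result.add(id);
    work.addAll(children[id] ?? const []);
  }
  return result;
}

void main() {
  const tree = {
    'national': ['region-a', 'region-b'],
    'region-a': ['ch-1', 'ch-2'],
    'region-b': ['ch-3'],
    'ch-1': <String>[],
    'ch-2': <String>[],
    'ch-3': <String>[],
  };
  // One parameterised case per node position named in the requirements.
  const cases = {
    'root node': ('national', 6),
    'mid-tree node (region to chapters)': ('region-a', 3),
    'leaf node': ('ch-3', 1),
  };
  cases.forEach((label, c) {
    test(label, () => expect(subtreeIds(tree, c.$1).length, c.$2));
  });
  test('non-existent node throws', () {
    expect(() => subtreeIds(tree, 'missing'), throwsArgumentError);
  });
  test('single-node org', () {
    expect(subtreeIds({'solo': <String>[]}, 'solo'), {'solo'});
  });
}
```

The sibling-region isolation check cannot be unit-tested this way; it belongs in the integration test against the Supabase test project, where RLS is actually in force.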

Component
Organisation Hierarchy Service
Type: service | Priority: high
Epic Risks (4)
Risk (technical): medium impact, high probability

OrgHierarchyNavigator rendering NHF's full 1,400-chapter tree in a single widget may cause Flutter frame-rate drops below 60 fps on mid-range devices, making the navigator unusable for NHF national admins.

Mitigation & Contingency

Mitigation: Implement lazy expansion: only load immediate children on node expand rather than the full tree upfront. Use virtual scrolling for long sibling lists. Test with a synthetic 1,400-node dataset on a low-end Android device during development.
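The lazy-expansion mitigation can be sketched independently of the widget layer; `LazyTreeController` and its `loadChildren` callback are assumed names, not existing code.

```dart
/// Caches children per node and fetches them only on first expand,
/// so the full 1,400-node tree is never loaded upfront.
class LazyTreeController {
  final Future<List<String>> Function(String nodeId) loadChildren;
  final Map<String, List<String>> _children = {};
  final Set<String> _expanded = {};

  LazyTreeController(this.loadChildren);

  bool isExpanded(String nodeId) => _expanded.contains(nodeId);

  /// Expands a node; children are fetched once and reused afterwards,
  /// so collapsing and re-expanding costs nothing.
  Future<List<String>> expand(String nodeId) async {
    _expanded.add(nodeId);
    final cached = _children[nodeId];
    if (cached != null) return cached;
    final loaded = await loadChildren(nodeId);
    _children[nodeId] = loaded;
    return loaded;
  }

  void collapse(String nodeId) => _expanded.remove(nodeId);
}
```

A tree widget would then build one `ExpansionTile` per visible node and call `expand` from its expansion callback, keeping the rendered widget count proportional to what is on screen.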

Contingency: If lazy expansion is insufficient, replace the tree widget with a paginated drill-down navigator (select level → select child) that avoids rendering more than 50 nodes at a time.

Risk (dependency): medium impact, medium probability

Bufdir may update its required export column structure or file format during or after development. If the AdminExportService hardcodes the current Bufdir schema, any format change requires a code release rather than a config update.

Mitigation & Contingency

Mitigation: Drive the Bufdir column mapping from a configuration repository rather than hardcoded constants. Abstract column definitions into a named schema config so that format changes require only a config update and re-deployment without service logic changes.
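A sketch of config-driven column mapping, assuming a JSON schema config; all field and column names are illustrative and not the real Bufdir format.

```dart
import 'dart:convert';

/// One export column driven by config; header and sourceField names
/// here are illustrative placeholders, not the real Bufdir schema.
class ExportColumn {
  final String header;
  final String sourceField;
  const ExportColumn(this.header, this.sourceField);

  factory ExportColumn.fromJson(Map<String, dynamic> json) =>
      ExportColumn(json['header'] as String, json['sourceField'] as String);
}

/// Named schema config: a Bufdir format change means shipping new JSON,
/// not changing service logic.
class ExportSchema {
  final String name;
  final List<ExportColumn> columns;
  const ExportSchema(this.name, this.columns);

  factory ExportSchema.fromJson(Map<String, dynamic> json) => ExportSchema(
        json['name'] as String,
        [
          for (final c in json['columns'] as List)
            ExportColumn.fromJson(c as Map<String, dynamic>)
        ],
      );
}

/// Renders one record as a row in the configured column order; missing
/// fields become empty cells rather than errors.
List<String> renderRow(ExportSchema schema, Map<String, dynamic> record) =>
    [for (final c in schema.columns) '${record[c.sourceField] ?? ''}'];
```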

Contingency: If Bufdir format changes post-launch, release a config update within one sprint. If the change is structural (new required sections), scope a targeted service update and communicate timeline to partner organisations.

Risk (integration): high impact, medium probability

Role transition side-effects in UserManagementService (e.g., certification expiry removing mentor from chapter listing, pause triggering coordinator notification) may interact with external services like HLF's website sync. Incomplete side-effect handling could leave the system in an inconsistent state.

Mitigation & Contingency

Mitigation: Model side-effects as explicit domain events published after the primary state change is persisted. Implement event handlers as idempotent operations so re-processing is safe. Write integration tests that assert all side-effects fire correctly for each role transition type.
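The idempotency point can be sketched as follows; the event shape and class names are assumptions, and the in-memory processed-ID set stands in for a durable store.

```dart
/// A role-transition side-effect modelled as an explicit domain event.
/// The stable eventId is what makes re-processing detectable.
class RoleTransitionEvent {
  final String eventId;
  final String userId;
  final String type; // e.g. 'mentor_paused', 'certification_expired'
  const RoleTransitionEvent(this.eventId, this.userId, this.type);
}

/// Idempotent handler: re-delivering the same event is a safe no-op,
/// so a failed batch can simply be re-processed from the start.
class SideEffectHandler {
  final Set<String> _processed = {};
  int applied = 0;

  void handle(RoleTransitionEvent event) {
    if (!_processed.add(event.eventId)) return; // already applied
    applied++;
    // ... apply the side-effect here (remove mentor from chapter
    // listing, notify coordinator, trigger HLF website sync) ...
  }
}
```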

Contingency: If a side-effect fails after the primary change is persisted, log the failure with full context and trigger a manual reconciliation alert to the on-call team. Provide an admin-accessible re-trigger action for failed side-effects.

Risk (scope): medium impact, medium probability

If the AdminStatisticsService cache TTL is set too long, an org_admin may see significantly stale KPI values (e.g., a mentor paused an hour ago still appearing as active), undermining trust in the dashboard.

Mitigation & Contingency

Mitigation: Default cache TTL to 5 minutes with a manual refresh action on the dashboard. Implement cache invalidation triggered by UserManagementService write operations that affect counted entities.
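A minimal sketch of the TTL-plus-invalidation cache; the class name is assumed, and the injectable clock exists only so the expiry behaviour is testable.

```dart
/// KPI cache with a 5-minute default TTL and explicit invalidation
/// hooks for UserManagementService write paths.
class KpiCache<T> {
  final Duration ttl;
  final DateTime Function() _now;
  T? _value;
  DateTime? _storedAt;

  KpiCache({this.ttl = const Duration(minutes: 5), DateTime Function()? now})
      : _now = now ?? DateTime.now;

  /// Returns the cached value, or null if empty or past its TTL.
  T? get value {
    final at = _storedAt;
    if (at == null || _now().difference(at) >= ttl) return null;
    return _value;
  }

  void store(T value) {
    _value = value;
    _storedAt = _now();
  }

  /// Called from write operations that affect counted entities,
  /// and from the dashboard's manual refresh action.
  void invalidate() {
    _value = null;
    _storedAt = null;
  }
}
```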

Contingency: If staleness causes org admin complaints post-launch, reduce TTL to 60 seconds and introduce a real-time Supabase subscription for high-impact counters (paused mentors, expiring certifications).