Implement node tap navigation in HierarchyTreeView
epic-organizational-hierarchy-management-core-services-task-019 — Wire each HierarchyTreeView node with an onTap handler that navigates to the HierarchyNodeEditor screen for that unit. Pass the selected unit ID as a route parameter. Ensure the widget emits the correct semantic action labels for screen reader users (VoiceOver/TalkBack), satisfying WCAG 2.2 AA requirements for interactive elements.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 8 - 48 tasks
Can start after Tier 7 completes
Implementation Notes
Separate the tappable row from the chevron by using a Row with two children: an Expanded InkWell for the node content (fires onTap navigation) and a fixed-width (44dp minimum) GestureDetector for the chevron (fires expand/collapse toggle). This prevents the tap areas from conflicting. For Semantics, wrap the entire HierarchyNodeTile in a Semantics widget with onTapHint, label, and button: true. Use MergeSemantics if child widgets each declare their own semantics labels — without MergeSemantics, screen readers may announce each sub-element individually.
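A minimal sketch of the split tap targets described above. Widget fields (name, onOpen, onToggle) are illustrative assumptions, not an existing API:

```dart
import 'package:flutter/material.dart';

/// Illustrative tile: tapping the row content navigates; tapping the
/// chevron only toggles expansion. Field names are assumptions.
class HierarchyNodeTile extends StatelessWidget {
  const HierarchyNodeTile({
    super.key,
    required this.name,
    required this.isExpanded,
    required this.onOpen,
    required this.onToggle,
  });

  final String name;
  final bool isExpanded;
  final VoidCallback onOpen;
  final VoidCallback onToggle;

  @override
  Widget build(BuildContext context) {
    // MergeSemantics collapses child labels into one announcement so
    // screen readers do not read each sub-element individually.
    return MergeSemantics(
      child: Semantics(
        button: true,
        label: name,
        onTapHint: 'open unit details',
        child: Row(
          children: [
            // Tapping the row content fires navigation.
            Expanded(
              child: InkWell(onTap: onOpen, child: Text(name)),
            ),
            // Fixed-size chevron target (44dp minimum) toggles expansion only.
            SizedBox(
              width: 44,
              height: 44,
              child: GestureDetector(
                onTap: onToggle,
                child: Icon(
                  isExpanded ? Icons.expand_less : Icons.expand_more,
                ),
              ),
            ),
          ],
        ),
      ),
    );
  }
}
```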
Use GoRouter's context.goNamed('hierarchyNodeEditor', pathParameters: {'nodeId': node.id}) — avoid context.push() with a raw path string to keep routing refactor-safe. Test the touch target size by rendering the widget in a 320dp-wide test viewport (the narrowest supported device width).
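The named-route wiring could look like this sketch. The path string and the HierarchyNodeEditor constructor parameter are assumptions; `state.pathParameters` assumes go_router 10+ (earlier versions used `state.params`):

```dart
import 'package:go_router/go_router.dart';

// Hypothetical route registration; the name matches the goNamed call below.
final nodeEditorRoute = GoRoute(
  name: 'hierarchyNodeEditor',
  path: '/hierarchy/:nodeId',
  builder: (context, state) =>
      HierarchyNodeEditor(nodeId: state.pathParameters['nodeId']!),
);

// In the node row's tap handler:
// context.goNamed('hierarchyNodeEditor', pathParameters: {'nodeId': node.id});
```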
Testing Requirements
Widget tests (flutter_test): mount HierarchyNodeTile with a mock onTap callback; simulate tap on the node row; assert callback called with correct node ID. Simulate tap on chevron area; assert navigation callback NOT called. Use flutter_test's SemanticsHandle to assert Semantics label contains node name and hint contains 'open unit details'. Integration test (on device/emulator): tap a node in the live tree and assert HierarchyNodeEditor screen appears with the correct node ID in the route.
Test with TalkBack/VoiceOver enabled manually as part of QA sign-off checklist (automated screen reader focus order is difficult to test in flutter_test).
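A sketch of the widget test described above, assuming a HierarchyNodeTile exposing onOpen and onToggle callbacks (an assumption about the tile's API, not an existing signature):

```dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  testWidgets('row tap navigates, chevron tap only toggles', (tester) async {
    String? openedId;
    var toggled = false;
    await tester.pumpWidget(MaterialApp(
      home: Scaffold( // InkWell needs a Material ancestor
        body: HierarchyNodeTile(
          name: 'North Region',
          isExpanded: false,
          onOpen: () => openedId = 'node-1',
          onToggle: () => toggled = true,
        ),
      ),
    ));

    // Tap the node content: navigation callback fires with the node ID.
    await tester.tap(find.text('North Region'));
    expect(openedId, 'node-1');

    // Tap the chevron: toggle fires, navigation callback is untouched.
    openedId = null;
    await tester.tap(find.byIcon(Icons.expand_more));
    expect(toggled, isTrue);
    expect(openedId, isNull);

    // Semantics assertions via a SemanticsHandle.
    final handle = tester.ensureSemantics();
    expect(
      tester.getSemantics(find.byType(HierarchyNodeTile)),
      matchesSemantics(label: 'North Region', isButton: true),
    );
    handle.dispose();
  });
}
```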
Injecting all unit assignment IDs into JWT claims for users assigned to many units (up to 5 for NHF peer mentors, many more for national coordinators) may exceed JWT size limits, causing authentication failures.
Mitigation & Contingency
Mitigation: Store unit IDs in a Supabase session variable or resolve them through a dedicated Postgres function rather than embedding them directly in the JWT payload. Set the variable with set_config('app.unit_ids', ...) when the session is established, and read it back with current_setting('app.unit_ids') inside the RLS helper functions that query the assignments table at policy evaluation time.
Contingency: Fall back to querying the unit_assignments table directly within RLS policies using the authenticated user ID, accepting a small per-query overhead in exchange for removing the JWT size constraint.
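The contingency above could be sketched as a policy like the following. Table and column names are assumptions; auth.uid() is Supabase's helper returning the authenticated user's ID:

```sql
-- Fallback: resolve unit access at policy evaluation time instead of
-- carrying unit IDs in the JWT. Names are illustrative.
CREATE POLICY member_unit_access ON hierarchy_nodes
  FOR SELECT
  USING (
    EXISTS (
      SELECT 1
      FROM unit_assignments ua
      WHERE ua.user_id = auth.uid()
        AND ua.unit_id = hierarchy_nodes.id
    )
  );
```

The EXISTS subquery runs per row, which is the "small per-query overhead" the contingency accepts; an index on unit_assignments(user_id, unit_id) keeps it cheap.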
Rendering 1,400+ nodes in a recursive Flutter tree widget may cause jank or memory pressure on lower-end devices used by field peer mentors, degrading the admin experience.
Mitigation & Contingency
Mitigation: Implement lazy tree expansion — only the root level is rendered on initial load. Child nodes are rendered on demand when the parent is expanded. Use const constructors and ListView.builder for all node lists to minimize rebuild scope.
Contingency: Add a search/filter bar that scopes the visible tree to matching nodes, reducing the visible node count. Provide a 'flat list' fallback view for administrators who prefer searching over browsing the tree.
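The lazy-expansion mitigation can be sketched as flattening only expanded branches into the list that ListView.builder renders. The HierarchyNode shape here is an assumption:

```dart
// Minimal illustrative node type.
class HierarchyNode {
  HierarchyNode(this.id, {this.children = const []});
  final String id;
  final List<HierarchyNode> children;
}

/// Flattens only expanded branches into a displayable list, so collapsed
/// subtrees contribute no widgets at all. Pair the result with
/// ListView.builder, which materializes only on-screen rows.
List<HierarchyNode> visibleNodes(
    List<HierarchyNode> roots, Set<String> expandedIds) {
  final out = <HierarchyNode>[];
  void visit(HierarchyNode n) {
    out.add(n);
    if (expandedIds.contains(n.id)) {
      // Only expanded branches are traversed; children of collapsed
      // nodes are never visited (and can be fetched lazily on expand).
      n.children.forEach(visit);
    }
  }

  roots.forEach(visit);
  return out;
}
```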
Requirements for what constitutes a valid hierarchy structure may expand during NHF sign-off (e.g., mandatory coordinator assignments per chapter, minimum member counts per region), requiring repeated validator redesign.
Mitigation & Contingency
Mitigation: Design the validator as a pluggable rule engine where each check is a discrete, independently testable function. New rules can be added without changing the core validation orchestration. Surface all rules in a configuration table per organization.
Contingency: Defer non-blocking validation rules to warning-level feedback rather than hard blocks, allowing structural changes to proceed while flagging potential issues for admin review.
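One way the pluggable rule engine could be structured, with each rule as a discrete function and warning-level findings per the contingency. All type and rule names here are assumptions:

```dart
// Minimal illustrative domain types.
class Chapter {
  Chapter(this.name, {this.coordinatorId});
  final String name;
  final String? coordinatorId;
}

class Hierarchy {
  Hierarchy(this.chapters);
  final List<Chapter> chapters;
}

enum Severity { warning, blocking }

class Finding {
  Finding(this.rule, this.severity, this.message);
  final String rule;
  final Severity severity;
  final String message;
}

/// Each rule is an independently testable function; the orchestrator
/// just folds over whichever rules the organization has enabled.
typedef ValidationRule = List<Finding> Function(Hierarchy hierarchy);

List<Finding> validate(Hierarchy hierarchy, List<ValidationRule> rules) =>
    [for (final rule in rules) ...rule(hierarchy)];

// Example rule: warning-level (not a hard block) per the contingency.
List<Finding> chapterHasCoordinator(Hierarchy h) => [
      for (final chapter in h.chapters)
        if (chapter.coordinatorId == null)
          Finding('chapter_has_coordinator', Severity.warning,
              'Chapter ${chapter.name} has no coordinator assigned'),
    ];
```

New NHF rules then become new functions appended to the enabled-rule list, with no change to validate() itself.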
Deploying RLS policy migrations to a shared Supabase project used by multiple organizations simultaneously could lock tables or interrupt active sessions, causing downtime during production migration.
Mitigation & Contingency
Mitigation: Write all RLS policy migrations idempotently. Postgres does not support CREATE POLICY IF NOT EXISTS, so pair each CREATE POLICY with a preceding DROP POLICY IF EXISTS (or guard creation in a DO block that checks pg_policies). Schedule migrations during off-peak hours. Use Supabase's migration preview environment to validate policies against production data shapes before applying.
Contingency: Prepare rollback migration scripts for every RLS policy. If a migration causes issues, execute the rollback immediately and re-test the policy logic in staging before reattempting.
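An idempotent forward migration and its paired rollback could look like this. Policy and table names are illustrative, and the USING predicate is a placeholder:

```sql
-- Forward migration: drop-then-create, since Postgres has no
-- CREATE POLICY IF NOT EXISTS. Safe to re-run.
DROP POLICY IF EXISTS member_unit_access ON hierarchy_nodes;
CREATE POLICY member_unit_access ON hierarchy_nodes
  FOR SELECT
  USING (true);  -- placeholder; real access predicate goes here

-- Paired rollback script, kept alongside the migration so it can be
-- executed immediately if the policy misbehaves in production.
DROP POLICY IF EXISTS member_unit_access ON hierarchy_nodes;
```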