Priority: high | Complexity: medium | Area: backend | Status: pending | Assignee: backend specialist | Execution Tier: 4

Acceptance Criteria

HierarchyChangedEvent is a typed sealed class with subtypes: UnitCreated, UnitUpdated, UnitDeleted, UnitReparented — each carrying the affected unitId(s) and a UTC timestamp
HierarchyService emits a HierarchyChangedEvent on the event stream after every successful Supabase mutation (create, update, delete, reparent)
No event is emitted if the Supabase mutation fails or is rolled back
The event stream is exposed as a Riverpod StreamProvider<HierarchyChangedEvent> that any widget or service can subscribe to without tight coupling
HierarchyTreeView widget re-renders automatically when it receives a HierarchyChangedEvent affecting a visible node
HierarchyCache (ancestor/descendant cache from task-005) invalidates only the affected unit's cache entries, not the entire cache, on receipt of an event
Multiple subscribers can listen to the stream concurrently without interference
The stream does not close or error when no mutation has occurred; it is a persistent broadcast stream
Subscribing after a mutation has already occurred does not replay past events (non-replay stream is acceptable)
Unit tests verify that each mutation type emits exactly one correctly-typed event with the correct unitId payload
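The event types above can be sketched as a Dart 3 sealed hierarchy. This is a minimal sketch assuming the acceptance criteria; the field name occurredAt and the nullable oldParentId are illustrative choices, not a fixed API.

```dart
// Sketch of the event hierarchy. Field names beyond unitId are assumptions.
sealed class HierarchyChangedEvent {
  const HierarchyChangedEvent({required this.unitId, required this.occurredAt});

  final String unitId;       // affected unit
  final DateTime occurredAt; // UTC timestamp of the mutation
}

final class UnitCreated extends HierarchyChangedEvent {
  const UnitCreated({required super.unitId, required super.occurredAt});
}

final class UnitUpdated extends HierarchyChangedEvent {
  const UnitUpdated({required super.unitId, required super.occurredAt});
}

final class UnitDeleted extends HierarchyChangedEvent {
  const UnitDeleted({required super.unitId, required super.occurredAt});
}

final class UnitReparented extends HierarchyChangedEvent {
  const UnitReparented({
    required super.unitId,
    required super.occurredAt,
    required this.oldParentId,
    required this.newParentId,
  });

  final String? oldParentId; // null when the unit was previously a root
  final String newParentId;
}
```

Keeping the base class sealed lets the compiler enforce exhaustive handling in every subscriber.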

Technical Requirements

Frameworks: Flutter, Riverpod, BLoC

Data models: HierarchyChangedEvent, UnitCreated, UnitUpdated, UnitDeleted, UnitReparented, OrganizationalUnit

Performance requirements
Event emission must be synchronous relative to the mutation's future completion — no additional async delay
Stream subscription must not cause widget rebuilds for events affecting units not rendered in the current tree

Security requirements
HierarchyChangedEvent payloads must not include sensitive unit data beyond the unitId and operation type
Stream must only emit events for units within the authenticated user's organization

UI components (subscribers): HierarchyTreeView, HierarchyCache
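The "no rebuilds for off-screen units" requirement can be met by filtering events before reacting. A sketch, assuming hypothetical names: hierarchyEventsProvider for the StreamProvider, visibleSubtreeProvider for a task-005 family provider, and a visibleUnitIds set tracked by the widget.

```dart
import 'package:flutter/widgets.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';

// Sketch only: provider and field names are assumptions, not the real API.
class HierarchyTreeView extends ConsumerWidget {
  const HierarchyTreeView({super.key, required this.visibleUnitIds});

  final Set<String> visibleUnitIds;

  @override
  Widget build(BuildContext context, WidgetRef ref) {
    ref.listen<AsyncValue<HierarchyChangedEvent>>(hierarchyEventsProvider,
        (previous, next) {
      final event = next.valueOrNull;
      // Skip events for units not rendered: no rebuild for off-screen nodes.
      if (event == null || !visibleUnitIds.contains(event.unitId)) return;
      ref.invalidate(visibleSubtreeProvider(event.unitId)); // assumed provider
    });
    return const SizedBox.shrink(); // tree rendering elided
  }
}
```

ref.listen reacts without subscribing the build method itself to the stream, so unrelated events never schedule a rebuild.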

Execution Context

Execution Tier: Tier 4 (323 tasks)
Can start after Tier 3 completes

Implementation Notes

Use a Dart StreamController.broadcast() owned by the HierarchyService instance. Expose it via a Riverpod Provider or directly as a StreamProvider. In each mutation method, add the emit call inside the success branch of the Supabase response handler — never in a catch block. For Riverpod integration, use ref.invalidate() on the relevant family providers from task-005 inside a listener registered in the Riverpod container rather than coupling the stream to cache internals.
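The note above can be sketched as follows. This is a minimal sketch, not the implementation: the table name 'organizational_units', the event constructor shape, and the provider names are assumptions.

```dart
import 'dart:async';

import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:supabase_flutter/supabase_flutter.dart';

class HierarchyService {
  HierarchyService(this._client);

  final SupabaseClient _client;
  final _events = StreamController<HierarchyChangedEvent>.broadcast();

  /// Persistent broadcast stream: never closes while idle, does not replay.
  Stream<HierarchyChangedEvent> get events => _events.stream;

  Future<void> createUnit(Map<String, dynamic> row) async {
    // If the insert throws, control never reaches add() below,
    // so no event is emitted on failure.
    final inserted = await _client
        .from('organizational_units')
        .insert(row)
        .select()
        .single();
    _events.add(UnitCreated(
      unitId: inserted['id'] as String,
      occurredAt: DateTime.now().toUtc(),
    ));
  }

  void dispose() => _events.close();
}

// Riverpod exposure (provider names are illustrative):
final hierarchyServiceProvider = Provider<HierarchyService>(
  (ref) => throw UnimplementedError('override near the ProviderScope root'),
);

final hierarchyEventsProvider = StreamProvider<HierarchyChangedEvent>(
  (ref) => ref.watch(hierarchyServiceProvider).events,
);
```

Because emission happens on the same synchronous continuation as the awaited Supabase call, subscribers see the event as soon as the mutation's future completes.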

The UnitReparented event should carry both the old parentId and the new parentId to allow fine-grained cache invalidation (only nodes in the affected subtrees need invalidation). Consider using a sealed class with Dart 3 pattern matching for clean exhaustive handling in subscribers, e.g. the switch expression switch (event) { UnitCreated e => ..., UnitReparented e => ... } (note that switch expressions omit the case keyword). Do not use BLoC for this event bus — use Riverpod StreamProvider to stay consistent with the rest of the service layer, and let BLoC-based UI components consume it via a StreamSubscription feeding their BLoC's on<Event> handlers.
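Exhaustive subscriber handling might look like the sketch below. It assumes the sealed HierarchyChangedEvent hierarchy from the acceptance criteria; cache, invalidateUnit, and invalidateSubtree are hypothetical stand-ins for the task-005 HierarchyCache API.

```dart
// Sketch of exhaustive handling with Dart 3 patterns; names are assumptions.
void onHierarchyEvent(HierarchyChangedEvent event) {
  switch (event) {
    // Or-patterns may share one body when they bind the same variables.
    case UnitCreated(:final unitId) ||
        UnitUpdated(:final unitId) ||
        UnitDeleted(:final unitId):
      cache.invalidateUnit(unitId);
    case UnitReparented(:final unitId, :final oldParentId, :final newParentId):
      // Only the two affected subtrees need invalidation.
      if (oldParentId != null) cache.invalidateSubtree(oldParentId);
      cache.invalidateSubtree(newParentId);
      cache.invalidateUnit(unitId);
  }
}
```

Because the base class is sealed, adding a fifth event subtype later turns every non-exhaustive switch into a compile error rather than a silent gap.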

Testing Requirements

Unit tests (flutter_test): for each mutation type (create, update, delete, reparent), verify exactly one event of the correct subtype is emitted after a successful mock Supabase call. Verify no event is emitted when Supabase returns an error. Verify event payload contains the correct unitId. Integration test: simulate a reparent mutation end-to-end and assert that a subscribed HierarchyCache mock receives a UnitReparented event and calls its invalidation method with the correct unitId.

Widget test: mount HierarchyTreeView, emit a UnitUpdated event via the stream, assert the widget rebuilds. Concurrency test: attach 3 simultaneous listeners and verify all 3 receive the same event on mutation.
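One of the unit tests above might be sketched like this, assuming a fake Supabase layer behind HierarchyService; fakeClientReturning is a hypothetical test helper, not a real API.

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('create emits exactly one UnitCreated with the right unitId', () async {
    // fakeClientReturning is a hypothetical helper that resolves the
    // insert with the given row.
    final service = HierarchyService(fakeClientReturning({'id': 'unit-42'}));
    final events = <HierarchyChangedEvent>[];
    final sub = service.events.listen(events.add);

    await service.createUnit({'name': 'Chapter A'});
    await Future<void>.delayed(Duration.zero); // let the broadcast deliver

    expect(events, hasLength(1));
    expect(events.single, isA<UnitCreated>());
    expect(events.single.unitId, 'unit-42');
    await sub.cancel();
  });
}
```

The concurrency test follows the same shape with three listeners collecting into three lists and an equality check across them.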

Component: Hierarchy Service (service, high)
Epic Risks (4)
Risk 1: security (high impact, medium probability)

Injecting all unit assignment IDs into JWT claims for users assigned to many units (up to 5 for NHF peer mentors, many more for national coordinators) may exceed JWT size limits, causing authentication failures.

Mitigation & Contingency

Mitigation: Store unit IDs in a Supabase session variable or resolve them in a dedicated Postgres function rather than embedding them directly in the JWT payload. For example, set the list via set_config('app.unit_ids', ...) at session start and read it with current_setting('app.unit_ids') in RLS helper functions, or have those helpers query the assignments table at policy evaluation time.

Contingency: Fall back to querying the unit_assignments table directly within RLS policies using the authenticated user ID, accepting a small per-query overhead in exchange for removing the JWT size constraint.

Risk 2: technical (medium impact, medium probability)

Rendering 1,400+ nodes in a recursive Flutter tree widget may cause jank or memory pressure on lower-end devices used by field peer mentors, degrading the admin experience.

Mitigation & Contingency

Mitigation: Implement lazy tree expansion — only the root level is rendered on initial load. Child nodes are rendered on demand when the parent is expanded. Use const constructors and ListView.builder for all node lists to minimize rebuild scope.

Contingency: Add a search/filter bar that scopes the visible tree to matching nodes, reducing the visible node count. Provide a 'flat list' fallback view for administrators who prefer searching over browsing the tree.

Risk 3: scope (medium impact, medium probability)

Requirements for what constitutes a valid hierarchy structure may expand during NHF sign-off (e.g., mandatory coordinator assignments per chapter, minimum member counts per region), requiring repeated validator redesign.

Mitigation & Contingency

Mitigation: Design the validator as a pluggable rule engine where each check is a discrete, independently testable function. New rules can be added without changing the core validation orchestration. Surface all rules in a configuration table per organization.

Contingency: Defer non-blocking validation rules to warning-level feedback rather than hard blocks, allowing structural changes to proceed while flagging potential issues for admin review.

Risk 4: integration (high impact, low probability)

Deploying RLS policy migrations to a shared Supabase project used by multiple organizations simultaneously could lock tables or interrupt active sessions, causing downtime during production migration.

Mitigation & Contingency

Mitigation: Make RLS migrations idempotent. Postgres does not support CREATE POLICY IF NOT EXISTS, so pair each CREATE POLICY with a preceding DROP POLICY IF EXISTS (or guard creation in a DO block). Schedule migrations during off-peak hours. Use Supabase's preview/branching environment to validate policies against production data shapes before applying.

Contingency: Prepare rollback migration scripts for every RLS policy. If a migration causes issues, execute the rollback immediately and re-test the policy logic in staging before reattempting.