Integrate aggregation cache invalidation on activity writes
epic-organizational-hierarchy-management-assignment-aggregation-task-013 — Implement cache invalidation hooks that trigger when new activity records are written. When an activity is registered, identify all ancestor nodes in the hierarchy path and invalidate their cached aggregation values. Use Supabase Realtime subscriptions to detect remote writes and propagate invalidation to the local cache. Ensure cache warm-up strategies prevent cold-start latency for frequently queried nodes.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 5 - 253 tasks
Can start after Tier 4 completes
Implementation Notes
Implement the Realtime subscription in a singleton service class (AggregationCacheInvalidationService) initialized at app startup for coordinator/admin roles. Use supabase.channel('activities:org:$organizationId').onPostgresChanges(event: PostgresChangeEvent.insert, schema: 'public', table: 'activities', filter: PostgresChangeFilter(type: PostgresChangeFilterType.eq, column: 'organization_id', value: organizationId), callback: _onActivityInserted). In _onActivityInserted: extract unit_id from the payload, call _walkAncestors(unit_id) via a single recursive Supabase query, then call cache.invalidateAll(ancestorIds). For cache warm-up: maintain an access-frequency map with LRU eviction; after invalidation, schedule a background re-fetch for the top 10 nodes by access count using Future.microtask() so the Realtime callback is not blocked.
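The notes above can be sketched as a Dart service. This is a sketch assuming supabase_flutter v2; AggregationCache is a stand-in for this task's cache abstraction, and everything beyond the names given in this spec is illustrative:

```dart
import 'dart:async';

import 'package:supabase_flutter/supabase_flutter.dart';

/// Stand-in for this task's cache abstraction (assumed interface).
abstract class AggregationCache {
  void invalidateAll(List<String> nodeIds);
  Future<void> warmUp(List<String> nodeIds);
}

/// Singleton that listens for activity inserts and invalidates
/// cached aggregations for every ancestor of the affected unit.
class AggregationCacheInvalidationService {
  AggregationCacheInvalidationService(
      this._client, this._cache, this.organizationId);

  final SupabaseClient _client;
  final AggregationCache _cache;
  final String organizationId;
  RealtimeChannel? _channel;

  void start() {
    _channel = _client
        .channel('activities:org:$organizationId')
        .onPostgresChanges(
          event: PostgresChangeEvent.insert,
          schema: 'public',
          table: 'activities',
          filter: PostgresChangeFilter(
            type: PostgresChangeFilterType.eq,
            column: 'organization_id',
            value: organizationId,
          ),
          callback: _onActivityInserted,
        )
        .subscribe();
  }

  Future<void> _onActivityInserted(PostgresChangePayload payload) async {
    final unitId = payload.newRecord['unit_id'] as String?;
    if (unitId == null) return;
    // Single server-side recursive walk (the get_ancestor_ids RPC).
    final rows =
        await _client.rpc('get_ancestor_ids', params: {'unit_id': unitId});
    final ancestorIds = (rows as List).cast<String>();
    _cache.invalidateAll(ancestorIds);
    // Re-fetch hot nodes without blocking the Realtime callback.
    Future.microtask(() => _cache.warmUp(ancestorIds));
  }

  Future<void> stop() async {
    final channel = _channel;
    if (channel != null) await _client.removeChannel(channel);
  }
}
```

start() would be called once at app startup for coordinator/admin roles, and stop() on sign-out or teardown.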
Reconnection: wrap the channel subscription in a retry loop with exponential backoff (initial: 1s, max: 30s, factor: 2). The ancestry walk query: use a Supabase RPC function (Postgres function) wrapping the recursive CTE to keep client code simple and keep the recursive logic server-side. Register this RPC as get_ancestor_ids(unit_id uuid) RETURNS SETOF uuid.
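The RPC registered above could take roughly this shape server-side, assuming a units table with id and parent_id columns (both column names are assumptions from this spec's hierarchy model):

```sql
-- Walks parent links upward with a recursive CTE and returns
-- every ancestor of the given unit (excluding the unit itself).
create or replace function get_ancestor_ids(unit_id uuid)
returns setof uuid
language sql
stable
as $$
  with recursive ancestors as (
    select u.parent_id
    from units u
    where u.id = unit_id
      and u.parent_id is not null
    union all
    select u.parent_id
    from units u
    join ancestors a on u.id = a.parent_id
    where u.parent_id is not null
  )
  select parent_id from ancestors;
$$;
```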
Testing Requirements
Unit tests (flutter_test): test ancestry walk logic with a mock hierarchy tree — assert correct set of ancestor IDs returned for a leaf node; test cache invalidation marks correct entries as stale; test idempotency — calling invalidate twice on the same key does not throw. Integration tests (against Supabase test instance): insert an activity record, assert Realtime event received within 2 seconds, assert affected ancestor cache entries marked invalid; simulate network drop on Realtime channel, assert reconnect within configured backoff limit; verify cross-org isolation — subscription for org A does not receive events from org B. Performance tests (see task-015): concurrent activity writes triggering simultaneous invalidations — assert no race conditions in cache state. Target 85%+ branch coverage on invalidation and reconnect logic.
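A minimal flutter_test sketch of the first two unit-test cases; walkAncestors here is a hypothetical pure helper that mirrors the server-side walk so the hierarchy can be mocked as an in-memory parent map:

```dart
import 'package:flutter_test/flutter_test.dart';

/// Hypothetical pure helper mirroring the get_ancestor_ids walk.
List<String> walkAncestors(String unitId, Map<String, String?> parentOf) {
  final ancestors = <String>[];
  var current = parentOf[unitId];
  while (current != null) {
    ancestors.add(current);
    current = parentOf[current];
  }
  return ancestors;
}

void main() {
  test('leaf node resolves its full ancestor chain', () {
    final parentOf = <String, String?>{
      'national': null,
      'region-1': 'national',
      'local-1': 'region-1',
      'leaf-1': 'local-1',
    };
    expect(walkAncestors('leaf-1', parentOf),
        ['local-1', 'region-1', 'national']);
  });

  test('invalidating the same key twice does not throw', () {
    final stale = <String>{};
    void invalidate(String id) => stale.add(id);
    invalidate('region-1');
    invalidate('region-1'); // idempotent: set semantics, no error
    expect(stale, {'region-1'});
  });
}
```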
Recursive aggregation queries across four hierarchy levels (national → region → local → leaf unit) with 1,400 leaf nodes may be too slow for real-time dashboard requests, exceeding the 200ms target and causing spinner timeouts.
Mitigation & Contingency
Mitigation: Implement aggregation as a Supabase RPC using a single recursive CTE rather than multiple round-trip queries. Pre-compute aggregations nightly via a scheduled Edge Function and cache results. For real-time needs, aggregate only the immediate subtree on demand.
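A sketch of the single-CTE aggregation RPC, assuming units(id, parent_id) and activities(id, unit_id) schemas (all names beyond those in this spec are assumptions):

```sql
-- Aggregates activity counts for an entire subtree in one
-- round trip instead of one query per hierarchy level.
create or replace function aggregate_subtree_activity(root_id uuid)
returns table (unit_id uuid, activity_count bigint)
language sql
stable
as $$
  with recursive subtree as (
    select id from units where id = root_id
    union all
    select u.id
    from units u
    join subtree s on u.parent_id = s.id
  )
  select s.id, count(a.id)
  from subtree s
  left join activities a on a.unit_id = s.id
  group by s.id;
$$;
```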
Contingency: Surface a 'Refreshing...' indicator and serve stale cached aggregations immediately. Queue an async recalculation and push updated data via Supabase Realtime when ready, avoiding blocking the admin dashboard.
The 5-chapter limit and primary-assignment constraint are NHF-specific. Applying these rules globally may break HLF and Blindeforbundet configurations where different limits apply, requiring per-organization configuration that was not initially scoped.
Mitigation & Contingency
Mitigation: Make the maximum assignment count a configurable value stored in the organization's feature-flag or settings table rather than a hardcoded constant. Design the assignment service to read this limit at runtime per organization.
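The runtime read could look like this sketch, assuming an organization_settings table with a max_primary_assignments column (both names are assumptions, not scoped in this spec):

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Reads the per-organization assignment limit at runtime,
/// falling back to a non-restrictive default when unset.
/// Table and column names are assumptions for illustration.
Future<int> fetchAssignmentLimit(
    SupabaseClient client, String organizationId) async {
  final row = await client
      .from('organization_settings')
      .select('max_primary_assignments')
      .eq('organization_id', organizationId)
      .maybeSingle();
  // Contingency default: effectively non-restrictive for non-NHF orgs.
  return (row?['max_primary_assignments'] as int?) ?? 100;
}
```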
Contingency: Default the limit to a high value (e.g., 100) for organizations other than NHF, effectively making it non-restrictive, while keeping the enforcement logic intact for when per-org configuration is fully implemented.
The searchable parent dropdown in HierarchyNodeEditor must search across up to 1,400 units efficiently. Client-side filtering of the full hierarchy may be slow; server-side search adds complexity and latency.
Mitigation & Contingency
Mitigation: Use the in-memory hierarchy cache as the search corpus — since the cache already holds the flat unit list, client-side filtering with a debounced input is sufficient and avoids extra Supabase calls. Pre-build a search index on cache load.
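The debounced client-side filter can be as small as this sketch (the Unit type and the 300ms window are illustrative choices, not from this spec):

```dart
import 'dart:async';

/// Minimal unit record for the search corpus (illustrative).
class Unit {
  const Unit(this.id, this.name);
  final String id;
  final String name;
}

/// Debounced substring filter over the cached flat unit list.
class UnitSearch {
  UnitSearch(this._units);

  final List<Unit> _units;
  Timer? _debounce;

  /// Invokes [onResults] at most once per 300ms burst of keystrokes.
  void query(String input, void Function(List<Unit>) onResults) {
    _debounce?.cancel();
    _debounce = Timer(const Duration(milliseconds: 300), () {
      final needle = input.toLowerCase();
      onResults(_units
          .where((u) => u.name.toLowerCase().contains(needle))
          .toList());
    });
  }

  void dispose() => _debounce?.cancel();
}
```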
Contingency: Cap the dropdown to showing the 50 most recently accessed units by default, with a 'search all' option that triggers a server-side full-text query. This keeps the common case fast while supporting edge cases.