High priority · Medium complexity · Integration · Pending · Integration Specialist · Tier 5

Acceptance Criteria

When a new activity record is inserted into the activities table, the cache invalidation hook identifies the activity's unit_id and walks all ancestor nodes to the national level
All cache entries keyed to ancestor nodes of the written activity are invalidated within 1 second of the Supabase write completing
Supabase Realtime subscription on the activities table detects remote inserts (from other devices/users) and triggers the same invalidation path
Cache invalidation is idempotent — invalidating an already-invalid cache entry does not cause errors
After invalidation, the next read of an affected node triggers a cache warm-up (background re-computation) rather than blocking the caller
Cache warm-up for the top 10 most-queried nodes (by access frequency) is triggered proactively after invalidation to prevent cold-start latency
A Supabase Realtime channel is established per organization_id, not globally, to enforce tenant isolation
If the Realtime subscription drops (network interruption), it is automatically re-established with exponential backoff (max 30 seconds)
No sensitive PII is transmitted via the Realtime payload — only the activity's id, unit_id, and date are included
Cache invalidation does not block the activity write path — it runs asynchronously after the write completes
Warm-up strategy is configurable per organization (disable for low-volume orgs to reduce compute cost)
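The reconnect behaviour above (automatic re-establishment with exponential backoff, capped at 30 seconds) can be sketched in Dart roughly as follows. This is a minimal sketch only; `subscribe` is a stand-in for the real channel subscription call:

```dart
import 'dart:async';
import 'dart:math';

// Retry loop for re-establishing a dropped Realtime subscription.
// Delay doubles each attempt (1s, 2s, 4s, ...) and is capped at 30s,
// matching the requirement above. `subscribe` is a hypothetical
// stand-in for the actual channel subscription call.
Future<void> reconnectWithBackoff(Future<void> Function() subscribe) async {
  var attempt = 0;
  while (true) {
    try {
      await subscribe();
      return; // subscribed; Realtime delivery resumes from here
    } catch (_) {
      final delaySeconds = min(30, 1 << attempt); // 1, 2, 4, ... capped at 30
      attempt++;
      await Future.delayed(Duration(seconds: delaySeconds));
    }
  }
}
```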

Technical Requirements

Frameworks
Flutter
BLoC
Supabase Realtime (WebSocket)
flutter_test
APIs
Supabase Realtime: channel subscription on activities table (INSERT events, filtered by organization_id)
Supabase PostgREST: GET /organization_units?id=in.(ancestor_ids) for ancestry walk
Internal: HierarchyAggregationService cache read/write/invalidate interface
Data Models
activity
annual_summary
Performance Requirements
Cache invalidation must complete within 1 second of Supabase write confirmation
Ancestry walk for invalidation must use a single recursive CTE query, not iterative parent lookups
Background warm-up must not block the UI thread — run in a Dart isolate or via Supabase Edge Function async trigger
Realtime subscription reconnect must complete within 30 seconds using exponential backoff
Security Requirements
Realtime channel scoped by organization_id using RLS — users only receive events for their own org's activities
JWT validated on every channel subscription renewal
No sensitive PII (contact names, personal details) in Realtime payloads — only row IDs and timestamps
Service role key never used client-side for Realtime subscriptions — use anon key with RLS
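The org scoping above could be enforced with an RLS policy along these lines. This is a sketch only: it assumes the organization_id travels as a JWT claim, and the claim name used here is hypothetical — adapt it to however the org is actually encoded in your tokens.

```sql
-- Sketch: restrict activity rows (and therefore the Realtime events
-- derived from them) to the caller's own organization.
-- The 'organization_id' claim name is an assumption, not confirmed
-- by this spec.
alter table activities enable row level security;

create policy activities_same_org_select on activities
  for select
  using (organization_id = (auth.jwt() ->> 'organization_id')::uuid);
```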

Execution Context

Execution Tier
Tier 5 (253 tasks)

Can start after Tier 4 completes

Implementation Notes

Implement the Realtime subscription in a singleton service class (AggregationCacheInvalidationService), initialized at app startup for coordinator/admin roles. Use supabase.channel('activities:org:$organizationId').onPostgresChanges(event: PostgresChangeEvent.insert, schema: 'public', table: 'activities', filter: PostgresChangeFilter(type: FilterType.eq, column: 'organization_id', value: organizationId), callback: _onActivityInserted). In _onActivityInserted: extract unit_id from the payload, call _walkAncestors(unit_id) using a single recursive Supabase query, then call cache.invalidateAll(ancestorIds). For cache warm-up, maintain an LRU access-frequency map; after invalidation, schedule a background re-fetch for the top 10 nodes by access count using Future.microtask() to avoid blocking.
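The service described above can be sketched roughly as follows. This is a minimal sketch, not a definitive implementation: the channel/filter API is reproduced as quoted in the notes (exact symbol names vary between supabase_flutter versions), and HierarchyAggregationCache stands in for the internal cache interface named in the technical requirements.

```dart
// Minimal sketch of AggregationCacheInvalidationService.
// Assumes the channel/filter API exactly as quoted in the notes above;
// HierarchyAggregationCache is the internal cache read/write/invalidate
// interface from the technical requirements.
class AggregationCacheInvalidationService {
  AggregationCacheInvalidationService(
      this._supabase, this._cache, this._organizationId);

  final SupabaseClient _supabase;
  final HierarchyAggregationCache _cache;
  final String _organizationId;
  RealtimeChannel? _channel;

  void start() {
    _channel = _supabase.channel('activities:org:$_organizationId')
      ..onPostgresChanges(
        event: PostgresChangeEvent.insert,
        schema: 'public',
        table: 'activities',
        filter: PostgresChangeFilter(
          type: FilterType.eq,
          column: 'organization_id',
          value: _organizationId,
        ),
        callback: _onActivityInserted,
      )
      ..subscribe();
  }

  Future<void> _onActivityInserted(PostgresChangePayload payload) async {
    final unitId = payload.newRecord['unit_id'] as String;
    // Single round trip: the recursive CTE runs server-side in the RPC.
    final ids = await _supabase
        .rpc('get_ancestor_ids', params: {'unit_id': unitId});
    _cache.invalidateAll(List<String>.from(ids as List));
  }
}
```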

Reconnection: wrap the channel subscription in a retry loop with exponential backoff (initial: 1s, max: 30s, factor: 2). The ancestry walk query: use a Supabase RPC function (Postgres function) wrapping the recursive CTE to keep client code simple and keep the recursive logic server-side. Register this RPC as get_ancestor_ids(unit_id uuid) RETURNS SETOF uuid.
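The RPC registered above could look roughly like this — a sketch assuming organization_units stores its tree via id/parent_id columns (parent_id NULL at the national root), which this spec does not confirm:

```sql
-- Sketch of get_ancestor_ids(unit_id uuid) RETURNS SETOF uuid.
-- Assumes organization_units(id uuid, parent_id uuid) with
-- parent_id NULL at the national root.
create or replace function get_ancestor_ids(unit_id uuid)
returns setof uuid
language sql
stable
as $$
  with recursive ancestors(id, parent_id) as (
    -- anchor: the unit the activity was written against
    select ou.id, ou.parent_id
    from organization_units ou
    where ou.id = get_ancestor_ids.unit_id
    union all
    -- walk upward one level per iteration
    select ou.id, ou.parent_id
    from organization_units ou
    join ancestors a on ou.id = a.parent_id
  )
  -- return only the ancestors, not the starting unit itself
  select a.id
  from ancestors a
  where a.id <> get_ancestor_ids.unit_id;
$$;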

Testing Requirements

Unit tests (flutter_test):
Test ancestry-walk logic with a mock hierarchy tree — assert the correct set of ancestor IDs is returned for a leaf node
Test that cache invalidation marks the correct entries as stale
Test idempotency — calling invalidate twice on the same key does not throw

Integration tests (against a Supabase test instance):
Insert an activity record; assert the Realtime event is received within 2 seconds and the affected ancestor cache entries are marked invalid
Simulate a network drop on the Realtime channel; assert reconnect within the configured backoff limit
Verify cross-org isolation — a subscription for org A does not receive events from org B

Performance tests (see task-015):
Concurrent activity writes triggering simultaneous invalidations — assert no race conditions in cache state

Target 85%+ branch coverage on invalidation and reconnect logic.
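The idempotency case can be sketched as a flutter_test unit test. FakeAggregationCache below is a hypothetical test double standing in for the real cache interface (which would be injected in production):

```dart
import 'package:flutter_test/flutter_test.dart';

// Hypothetical in-memory test double for the cache interface under test;
// the real HierarchyAggregationService cache would be injected instead.
class FakeAggregationCache {
  final _stale = <String>{};
  final _entries = <String, Object>{};

  void put(String key, Object value) {
    _entries[key] = value;
    _stale.remove(key);
  }

  void invalidate(String key) => _stale.add(key); // set add: idempotent
  bool isStale(String key) => _stale.contains(key);
}

void main() {
  test('invalidating an already-invalid entry is a no-op', () {
    final cache = FakeAggregationCache()..put('node-1', 'summary');
    cache.invalidate('node-1');
    // second invalidation of the same key must not throw
    expect(() => cache.invalidate('node-1'), returnsNormally);
    expect(cache.isStale('node-1'), isTrue);
  });
}
```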

Component
Hierarchy Aggregation Service
Service · High priority
Epic Risks (3)
High impact · Medium probability · Technical

Recursive aggregation queries across four hierarchy levels (national → region → local → chapter) with 1,400 leaf nodes may be too slow for real-time dashboard requests, exceeding the 200ms target and causing spinner timeouts.

Mitigation & Contingency

Mitigation: Implement aggregation as a Supabase RPC using a single recursive CTE rather than multiple round-trip queries. Pre-compute aggregations nightly via a scheduled Edge Function and cache results. For real-time needs, aggregate only the immediate subtree on demand.

Contingency: Surface a 'Refreshing...' indicator and serve stale cached aggregations immediately. Queue an async recalculation and push updated data via Supabase Realtime when ready, avoiding blocking the admin dashboard.

Medium impact · Medium probability · Scope

The 5-chapter limit and primary-assignment constraint are NHF-specific. Applying these rules globally may break HLF and Blindeforbundet configurations where different limits apply, requiring per-organization configuration that was not initially scoped.

Mitigation & Contingency

Mitigation: Make the maximum assignment count a configurable value stored in the organization's feature-flag or settings table rather than a hardcoded constant. Design the assignment service to read this limit at runtime per organization.

Contingency: Default the limit to a high value (e.g., 100) for organizations other than NHF, effectively making it non-restrictive, while keeping the enforcement logic intact for when per-org configuration is fully implemented.

Medium impact · Low probability · Technical

The searchable parent dropdown in HierarchyNodeEditor must search across up to 1,400 units efficiently. Client-side filtering of the full hierarchy may be slow; server-side search adds complexity and latency.

Mitigation & Contingency

Mitigation: Use the in-memory hierarchy cache as the search corpus — since the cache already holds the flat unit list, client-side filtering with a debounced input is sufficient and avoids extra Supabase calls. Pre-build a search index on cache load.

Contingency: Cap the dropdown to showing the 50 most recently accessed units by default, with a 'search all' option that triggers a server-side full-text query. This keeps the common case fast while supporting edge cases.