Priority: critical · Complexity: medium · Area: backend · Status: pending · Assigned: backend specialist · Tier: 0

Acceptance Criteria

resolveScope(scopeId) returns a List<String> containing the input scopeId plus all descendant org IDs at every level
For a national-level scopeId, the returned list includes all 3 hierarchy levels (national → region → chapter) with no gaps
Deleted/soft-deleted org nodes are excluded from the returned list unless the caller passes includeDeleted: true
Circular reference detection: if node A references node B which references node A (or any cycle), the resolver terminates cleanly and logs a structured error rather than stack-overflowing
Result is cached in memory for the lifetime of the export session; a forceRefresh flag bypasses the cache
resolveScope with an unknown scopeId throws OrgHierarchyNotFoundException with the invalid ID in the message
Resolving a chapter-level scope (leaf node) returns a list containing only that single ID
Resolving a regional scope of 150 chapters completes within 500ms on first call (uncached)
Cache is invalidated automatically when org_hierarchy table changes are detected (or on app restart at minimum)
Unit tests cover: national scope, regional scope, chapter scope (leaf), deleted node exclusion, circular reference, unknown ID
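The criteria above imply a small service surface. A minimal Dart sketch of that surface follows; the class and method names (`OrgScopeResolver`, `resolveScope`) mirror the criteria, but everything here is illustrative, not the final API:

```dart
/// Sketch of the resolver contract implied by the acceptance criteria.
abstract class OrgScopeResolver {
  /// Returns [scopeId] plus all descendant org IDs at every level.
  /// Soft-deleted nodes are excluded unless [includeDeleted] is true.
  /// [forceRefresh] bypasses the session cache.
  /// Throws [OrgHierarchyNotFoundException] for an unknown [scopeId].
  Future<List<String>> resolveScope(
    String scopeId, {
    bool includeDeleted = false,
    bool forceRefresh = false,
  });
}

class OrgHierarchyNotFoundException implements Exception {
  OrgHierarchyNotFoundException(this.scopeId);
  final String scopeId;

  @override
  String toString() =>
      'OrgHierarchyNotFoundException: unknown scopeId "$scopeId"';
}
```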

Technical Requirements

frameworks
Flutter
Supabase Dart client (supabase_flutter)
Riverpod (service provider)
apis
Supabase PostgREST (org_hierarchy table)
Supabase RPC (optional: recursive CTE for server-side traversal)
data models
org_hierarchy (id, parent_id, level, is_deleted, name)
performance requirements
First call for national scope (1,400 nodes) must resolve within 500ms
Subsequent calls for same scopeId must return from in-memory cache in under 5ms
Prefer a single Supabase RPC call using a PostgreSQL recursive CTE (WITH RECURSIVE) over iterative client-side round trips
security requirements
RLS on org_hierarchy must ensure callers only see orgs within their permitted scope — server enforced, not client enforced
scopeId parameter must be validated as a non-empty UUID string before any Supabase call
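The pre-call validation requirement above can be met with a simple guard; this sketch checks the canonical 8-4-4-4-12 UUID form before any Supabase round trip (the helper name is an assumption):

```dart
// Canonical UUID shape, case-insensitive; rejects empty strings too.
final RegExp _uuidPattern = RegExp(
  r'^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-'
  r'[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$',
);

/// Throws before any network call if [scopeId] is not a well-formed UUID.
void validateScopeId(String scopeId) {
  if (scopeId.isEmpty || !_uuidPattern.hasMatch(scopeId)) {
    throw ArgumentError.value(
        scopeId, 'scopeId', 'must be a non-empty UUID string');
  }
}
```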

Execution Context

Execution Tier
Tier 0

Tier 0 - 440 tasks

Implementation Notes

Implement traversal as a PostgreSQL recursive CTE on the Supabase side: `WITH RECURSIVE org_tree AS (SELECT id FROM org_hierarchy WHERE id = $1 UNION ALL SELECT h.id FROM org_hierarchy h JOIN org_tree t ON h.parent_id = t.id WHERE NOT h.is_deleted) SELECT id FROM org_tree`. This avoids N+1 round trips. The Dart service calls `.rpc('resolve_org_scope', params: {'scope_id': scopeId, 'include_deleted': includeDeleted})` and deserialises the array response. For circular reference protection, note that Postgres does not break cycles automatically: a plain recursive CTE over cyclic data will loop until it exhausts resources. Add an explicit `CYCLE` clause (available from PostgreSQL 14) to the recursive CTE, or track visited IDs in a path array on older versions, so that a cycle terminates cleanly.
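One possible shape for the `resolve_org_scope` function named above, extending the inline CTE with the `include_deleted` parameter and a PG 14+ `CYCLE` clause. The schema, `SECURITY INVOKER` choice (so RLS still applies to the caller), and return type are assumptions to verify against the actual migration:

```sql
-- Sketch only: assumes Postgres 14+ (CYCLE clause) and uuid primary keys.
-- SECURITY INVOKER (the default) keeps org_hierarchy RLS in force.
create or replace function resolve_org_scope(
  scope_id uuid,
  include_deleted boolean default false
) returns setof uuid
language sql stable
as $$
  with recursive org_tree as (
    select id from org_hierarchy where id = scope_id
    union all
    select h.id
    from org_hierarchy h
    join org_tree t on h.parent_id = t.id
    where include_deleted or not h.is_deleted
  ) cycle id set is_cycle using path
  select id from org_tree where not is_cycle;
$$;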

Add a `visited` Set guard in the Dart fallback path. Cache using a simple `Map<String, List<String>>` in a singleton Riverpod provider, keyed by scopeId. Expose an `invalidateCache()` method for testing and for Supabase realtime org_hierarchy change events.
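The cache and the visited-set guard described above might look like this; the provider name, cache class, and the `childrenOf` repository callback are illustrative assumptions:

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

/// Session-scoped cache of resolved scopes, keyed by scopeId.
class OrgScopeCache {
  final Map<String, List<String>> _byScopeId = {};

  List<String>? lookup(String scopeId) => _byScopeId[scopeId];
  void store(String scopeId, List<String> ids) => _byScopeId[scopeId] = ids;

  /// Called from tests and from realtime org_hierarchy change events.
  void invalidateCache() => _byScopeId.clear();
}

final orgScopeCacheProvider =
    Provider<OrgScopeCache>((ref) => OrgScopeCache());

/// Client-side fallback traversal with a visited-set cycle guard.
/// [childrenOf] stands in for a repository lookup of direct children.
Future<List<String>> resolveClientSide(
  String scopeId,
  Future<List<String>> Function(String parentId) childrenOf,
) async {
  final visited = <String>{};
  final queue = <String>[scopeId];
  while (queue.isNotEmpty) {
    final id = queue.removeLast();
    if (!visited.add(id)) continue; // cycle guard: skip already-seen nodes
    queue.addAll(await childrenOf(id));
  }
  return visited.toList();
}
```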

Testing Requirements

Unit tests (flutter_test) with a MockOrgHierarchyRepository: (1) national scope expands to all descendants; (2) chapter scope returns a single-element list; (3) deleted nodes are excluded; (4) a circular reference terminates with OrgHierarchyException rather than a stack overflow; (5) an unknown ID throws OrgHierarchyNotFoundException; (6) a cache hit returns immediately without a repository call. Integration test against a Supabase local instance seeded with a 3-level hierarchy of 50 nodes, including two deleted nodes and one simulated circular reference inserted directly with triggers disabled. Assert exact ID counts and that deleted nodes are absent.
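As a concrete starting point for case (4), this self-contained flutter_test sketch drives a visited-set traversal over a two-node cycle (a → b → a) via a fake repository lookup; the structure of the real test will depend on the final resolver API:

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('circular reference terminates without stack overflow', () async {
    // Fake repository: a -> b -> a forms a cycle.
    final children = <String, List<String>>{
      'a': ['b'],
      'b': ['a'],
    };
    Future<List<String>> childrenOf(String id) async => children[id] ?? [];

    // Visited-set traversal: must terminate and visit each node once.
    final visited = <String>{};
    final queue = <String>['a'];
    while (queue.isNotEmpty) {
      final id = queue.removeLast();
      if (!visited.add(id)) continue; // cycle guard
      queue.addAll(await childrenOf(id));
    }

    expect(visited, {'a', 'b'});
  });
}
```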

Component
Organisation Hierarchy Resolver
Type: service · Complexity: medium
Epic Risks (3)
Impact: high · Probability: medium · Category: technical

NHF's three-level hierarchy (national / region / chapter) with 1,400 chapters may have edge cases such as chapters belonging to multiple regions, orphaned nodes, or missing parent links in the database. Incorrect scope expansion would silently under- or over-report activities, which could invalidate a Bufdir submission.

Mitigation & Contingency

Mitigation: Obtain a full hierarchy fixture export from NHF before implementation begins. Write exhaustive unit tests covering boundary cases: single chapter, full national roll-up, chapters with no activities, and chapters assigned to multiple regions. Validate resolver output against a known-good manual count.

Contingency: If hierarchy data quality is too poor for automated resolution at launch, implement a manual scope override in the coordinator UI that allows the coordinator to explicitly select org units from a tree picker, bypassing the resolver.

Impact: medium · Probability: high · Category: dependency

The activity_type_configuration table may not cover all activity types currently in use, leaving a subset unmapped at launch. Bufdir submissions with unmapped categories will be incomplete and may be rejected by Bufdir.

Mitigation & Contingency

Mitigation: Run a query against production activity data before implementation to enumerate all distinct activity type IDs. Cross-reference with Bufdir's published category schema (request from Norse Digital Products). Flag every gap as a known issue and build the warning surface into the preview panel.

Contingency: Implement a fallback 'Other' category bucket for unmapped types and surface a prominent warning in the export preview requiring coordinator acknowledgement before proceeding. Log unmapped types for post-launch cleanup.

Impact: high · Probability: low · Category: security

Supabase RLS policies on generated_reports and the storage bucket must enforce strict org isolation. A misconfigured policy could allow a coordinator from one organisation to read another organisation's export files, creating a serious data breach with GDPR implications.

Mitigation & Contingency

Mitigation: Write RLS integration tests that attempt cross-org reads with explicitly different JWT tokens and assert that all attempts return empty sets or 403 errors. Include RLS policy review in the pull request checklist. Use Supabase's built-in policy tester during development.
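A hedged sketch of one such cross-org read test against a local Supabase stack; the URL, keys, JWT, and org ID are placeholders to be supplied from the seeded test environment, and the `headers` wiring should be checked against the supabase-dart version in use:

```dart
import 'package:supabase/supabase.dart';
import 'package:test/test.dart';

// Placeholders: fill in from the local Supabase test environment.
const anonKey = '<local-anon-key>';
const orgACoordinatorJwt = '<jwt-for-an-org-a-coordinator>';
const orgBId = '<org-b-uuid>';

void main() {
  test('org A coordinator cannot read org B exports', () async {
    // Client authenticated as an org A coordinator.
    final client = SupabaseClient(
      'http://localhost:54321',
      anonKey,
      headers: {'Authorization': 'Bearer $orgACoordinatorJwt'},
    );

    final rows = await client
        .from('generated_reports')
        .select()
        .eq('org_id', orgBId);

    // Correct RLS silently filters the rows rather than leaking them.
    expect(rows, isEmpty);
  });
}
```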

Contingency: If a policy gap is discovered post-deployment, immediately revoke all signed URLs for affected exports, audit the access log for unauthorised reads, and issue a coordinated disclosure to affected organisations per GDPR breach notification requirements.