Priority: critical · Complexity: high · Area: api · Status: pending · Role: api specialist · Tier: 5

Acceptance Criteria

The API endpoint accepts parameters: organization_id, report_period_id (or start_date + end_date), and optional unit_id for scoped drill-down
Response includes activity counts broken down by level_type (chapter, region, national) for the requested scope
Participant counts are deduplicated: a participant appearing in both a chapter and its parent region is counted once at each level, not double-counted in the national total
Deduplication logic uses contact_id as the unique key across the time window
Time-window filtering is inclusive of both start_date and end_date boundaries, aligned to UTC day boundaries
The endpoint returns pre-aggregated data from cache when available; cache miss triggers a full tree walk and caches the result
Cache entries are keyed by (organization_id, unit_id, start_date, end_date) and invalidated when new activity records are written within the time window
API response conforms to the Bufdir column schema defined in the bufdir_column_schema table for the organization's current schema version
The endpoint is callable server-side only (via Supabase Edge Function); mobile clients do not call it directly
Response latency is under 500ms for cached results and under 2,000ms for a full national tree walk (1,400 chapters)
Bufdir breakdown output matches hand-computed reference values for a known test dataset (verified in performance tests)
All API errors return structured JSON with error_code and message fields; no raw Postgres errors are exposed
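
As a sketch of the deduplication rule above (the row shape and function name are hypothetical, not the actual schema):

```typescript
// Hypothetical row shape; the real rows come from the activities query.
interface Participation {
  contactId: string;
  levelType: "chapter" | "region" | "national";
}

// Count each contact once per level; the national total is the union of
// contact_ids across all levels, never the sum of per-level counts.
function countParticipants(rows: Participation[]): Record<string, number> {
  const perLevel = new Map<string, Set<string>>();
  const allContacts = new Set<string>();
  for (const row of rows) {
    if (!perLevel.has(row.levelType)) perLevel.set(row.levelType, new Set());
    perLevel.get(row.levelType)!.add(row.contactId);
    allContacts.add(row.contactId);
  }
  const out: Record<string, number> = { national_total: allContacts.size };
  for (const [level, ids] of perLevel) out[level] = ids.size;
  return out;
}
```

Because the national figure is a set union, a contact active in both a chapter and its parent region contributes one to each level and only one to the national total.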

Technical Requirements

frameworks
Supabase Edge Functions (Deno)
flutter_test
apis
Supabase PostgREST: GET /activities (filtered by organization_id, date range, unit scope)
Supabase PostgREST: GET /organization_units (hierarchy tree for scope resolution)
Internal: Bufdir Reporting API (outbound, called after aggregation)
bufdir_column_schema table (schema version lookup)
data models
activity
annual_summary
bufdir_column_schema
bufdir_export_audit_log
contact
performance requirements
Cached response latency: under 500ms p95
Full tree walk (1,400-node NHF hierarchy): under 2,000ms p95
Participant deduplication must not use N+1 queries — load all participant IDs for the time window in a single query then deduplicate in application memory
Cache warm-up on server start for the current report period to prevent cold-start latency
security requirements
Endpoint accessible via service role key only — never exposed to mobile clients
organization_id extracted from JWT claims and validated; cannot be overridden by request body
All submitted data must comply with GDPR as applied under Norwegian law for government reporting (no raw PII in the Bufdir payload — only aggregated counts)
Audit log entry written to bufdir_export_audit_log for every API call including caller identity, timestamp, and report_period_id
Bufdir credentials stored in integration credential vault; never in mobile app or client-accessible config
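
For the structured-error criterion (error_code plus message, no raw Postgres errors exposed), a minimal mapping sketch; 23505 and PGRST116 are standard Postgres/PostgREST codes, but the API-level error codes and messages here are assumptions:

```typescript
// Error envelope per the acceptance criteria: every failure returns
// { error_code, message } and never a raw Postgres error string.
interface ApiError {
  error_code: string;
  message: string;
}

function toApiError(err: unknown): ApiError {
  // Map known Postgres/PostgREST codes to stable API codes; anything
  // unrecognized collapses to a generic code so no driver detail leaks.
  const code = (err as { code?: string } | null)?.code;
  switch (code) {
    case "23505": // Postgres unique_violation
      return { error_code: "duplicate_record", message: "Record already exists." };
    case "PGRST116": // PostgREST: no rows returned
      return { error_code: "not_found", message: "No matching rows." };
    default:
      return { error_code: "internal_error", message: "Unexpected server error." };
  }
}
```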

Execution Context

Execution Tier
Tier 5

Tier 5 - 253 tasks

Can start after Tier 4 completes

Implementation Notes

Implement as a Supabase Edge Function (Deno + TypeScript). Structure:

1. Validate the JWT and extract organization_id.
2. Look up the report period dates.
3. Check the cache for the (org, unit, start, end) key.
4. On a miss, load the full hierarchy tree in one query using a recursive CTE (WITH RECURSIVE) and compute the subtree for the requested unit_id.
5. Load all activities in the time window for the subtree unit IDs in a single IN query.
6. Deduplicate participants using a Set of contact_ids at each level boundary.
7. Map the output to the organization's bufdir_column_schema version.
8. Write an audit log entry.
9. Return structured JSON.

For the recursive CTE: `WITH RECURSIVE subtree AS (SELECT id FROM organization_units WHERE id = $unit_id UNION ALL SELECT ou.id FROM organization_units ou JOIN subtree s ON ou.parent_id = s.id)`. Cache implementation: a simple in-memory Map with TTL works only within a warm Edge Function instance, since instances are ephemeral and not shared across invocations; use Supabase KV or Redis if cached results must persist across invocations.
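
The (org, unit, start, end) cache key and TTL behavior described above could be sketched as follows; the key format, TTL handling, and invalidation scan are assumptions, not a settled design:

```typescript
// Minimal in-memory TTL cache keyed by (org, unit, start, end).
type CacheKey = string;

function cacheKey(orgId: string, unitId: string | null, start: string, end: string): CacheKey {
  return [orgId, unitId ?? "root", start, end].join("|");
}

class TtlCache<V> {
  private entries = new Map<CacheKey, { value: V; expiresAt: number }>();
  constructor(private ttlMs: number) {}

  get(key: CacheKey): V | undefined {
    const e = this.entries.get(key);
    if (!e) return undefined;
    if (Date.now() > e.expiresAt) {
      this.entries.delete(key); // lazy expiry on read
      return undefined;
    }
    return e.value;
  }

  set(key: CacheKey, value: V): void {
    this.entries.set(key, { value, expiresAt: Date.now() + this.ttlMs });
  }

  // Drop every cached window that could contain a newly written
  // activity, per the invalidation rule in the acceptance criteria.
  invalidateFor(orgId: string, activityDate: string): void {
    for (const key of this.entries.keys()) {
      const [org, , start, end] = key.split("|");
      if (org === orgId && activityDate >= start && activityDate <= end) {
        this.entries.delete(key);
      }
    }
  }
}
```

This covers a single warm instance only; a shared store would be needed for cross-invocation persistence.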

For Bufdir schema mapping, load the column_mappings JSON from bufdir_column_schema and apply field name translation before returning.
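
A sketch of that mapping step, assuming column_mappings is a flat JSON object of internal-to-Bufdir field names (the actual stored shape may differ):

```typescript
// Rename aggregated fields to the Bufdir column names for the
// organization's current schema version.
function mapToBufdirColumns(
  row: Record<string, unknown>,
  columnMappings: Record<string, string>,
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const [internal, bufdirName] of Object.entries(columnMappings)) {
    if (internal in row) out[bufdirName] = row[internal];
  }
  return out; // unmapped internal fields are dropped, never passed through
}
```

Dropping unmapped fields keeps internal-only values out of the outbound payload by default.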

Testing Requirements

Unit tests (Deno test framework): test deduplication logic with overlapping participant sets — assert the national total equals the union count, not the sum; test time-window boundary conditions (inclusive start, inclusive end, timezone handling); test schema version mapping against multiple bufdir_column_schema versions.

Integration tests: deploy the Edge Function to a Supabase test instance, insert a known activity dataset, call the endpoint, and assert response counts match hand-computed values; test the cache miss path (cold) and cache hit path (warm) with latency assertions; test RLS enforcement by calling with a different organization's JWT and asserting an empty or forbidden response.

Performance tests (see task-015): simulate a 1,400-chapter tree.

Mutation tests: verify that adding one activity record invalidates the correct cache entries.

Target 90%+ branch coverage on aggregation and deduplication logic.
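
The inclusive-boundary behavior those unit tests must pin down can be captured in one helper (hypothetical name), assuming ISO-8601 timestamps in UTC:

```typescript
// Inclusive UTC-day window check: a timestamp on start_date or end_date
// itself is inside the window, per the acceptance criteria.
function inReportWindow(timestamp: string, startDate: string, endDate: string): boolean {
  // With ISO-8601 UTC timestamps ("2024-03-15T23:59:59Z"), comparing the
  // date portion lexicographically gives inclusive day boundaries.
  const day = timestamp.slice(0, 10);
  return day >= startDate && day <= endDate;
}
```

Lexicographic comparison is safe here only because ISO dates are fixed-width and zero-padded.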

Component
Hierarchy Aggregation Service
service · high
Epic Risks (3)
Impact: high · Probability: medium · Type: technical

Recursive aggregation queries across the hierarchy levels (national → region → local) with 1,400 leaf nodes may be too slow for real-time dashboard requests, exceeding the 200ms target and causing spinner timeouts.

Mitigation & Contingency

Mitigation: Implement aggregation as a Supabase RPC using a single recursive CTE rather than multiple round-trip queries. Pre-compute aggregations nightly via a scheduled Edge Function and cache results. For real-time needs, aggregate only the immediate subtree on demand.

Contingency: Surface a 'Refreshing...' indicator and serve stale cached aggregations immediately. Queue an async recalculation and push updated data via Supabase Realtime when ready, avoiding blocking the admin dashboard.
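
One way to sketch the serve-stale-and-refresh contingency (names are hypothetical; the onRefreshed callback stands in for a Supabase Realtime push):

```typescript
interface Cached<V> {
  value: V;
  computedAt: number;
}

// Return whatever is cached immediately; if it is missing or stale,
// kick off an async recalculation without blocking the caller.
function getAggregation<V>(
  key: string,
  cache: Map<string, Cached<V>>,
  recompute: () => Promise<V>,
  onRefreshed: (value: V) => void, // e.g. push the fresh value via Realtime
  maxAgeMs: number,
): { value: V | null; stale: boolean } {
  const hit = cache.get(key);
  const stale = !hit || Date.now() - hit.computedAt > maxAgeMs;
  if (stale) {
    // Fire-and-forget: the dashboard shows "Refreshing..." and receives
    // the fresh value over the push channel when it lands.
    recompute().then((value) => {
      cache.set(key, { value, computedAt: Date.now() });
      onRefreshed(value);
    });
  }
  return { value: hit ? hit.value : null, stale };
}
```

The caller renders the stale value (or the refreshing indicator on a cold miss) and never blocks on recomputation.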

Impact: medium · Probability: medium · Type: scope

The 5-chapter limit and primary-assignment constraint are NHF-specific. Applying these rules globally may break HLF and Blindeforbundet configurations where different limits apply, requiring per-organization configuration that was not initially scoped.

Mitigation & Contingency

Mitigation: Make the maximum assignment count a configurable value stored in the organization's feature-flag or settings table rather than a hardcoded constant. Design the assignment service to read this limit at runtime per organization.

Contingency: Default the limit to a high value (e.g., 100) for organizations other than NHF, effectively making it non-restrictive, while keeping the enforcement logic intact for when per-org configuration is fully implemented.
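
A sketch of the per-organization limit lookup described in the mitigation; the field name is an assumption, and the default of 100 follows the contingency above:

```typescript
// Per-organization settings read at runtime (hypothetical shape).
interface OrgSettings {
  maxChapterAssignments?: number;
}

// Contingency default: effectively non-restrictive for orgs without
// an explicit configured limit (e.g. HLF, Blindeforbundet).
const NON_RESTRICTIVE_DEFAULT = 100;

function assignmentLimit(settings: OrgSettings | null): number {
  return settings?.maxChapterAssignments ?? NON_RESTRICTIVE_DEFAULT;
}

function canAssign(currentCount: number, settings: OrgSettings | null): boolean {
  return currentCount < assignmentLimit(settings);
}
```

The enforcement logic stays in one place; only the limit value varies per organization.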

Impact: medium · Probability: low · Type: technical

The searchable parent dropdown in HierarchyNodeEditor must search across up to 1,400 units efficiently. Client-side filtering of the full hierarchy may be slow; server-side search adds complexity and latency.

Mitigation & Contingency

Mitigation: Use the in-memory hierarchy cache as the search corpus — since the cache already holds the flat unit list, client-side filtering with a debounced input is sufficient and avoids extra Supabase calls. Pre-build a search index on cache load.

Contingency: Cap the dropdown to showing the 50 most recently accessed units by default, with a 'search all' option that triggers a server-side full-text query. This keeps the common case fast while supporting edge cases.