Priority: high · Complexity: medium · Area: backend · Status: pending · Assignee: backend specialist · Tier: 4

Acceptance Criteria

Fatal exceptions (network timeout, Supabase write failure, file generation crash) immediately abort the pipeline and set audit status to 'failed' with a machine-readable errorCode and a human-readable errorMessage
Non-fatal exceptions (individual activity row mapping failure, optional attachment missing) are caught, logged to PartialFailureReport, and do not abort the pipeline
ExportFailure includes a partialFailures list (List<PartialFailureEntry>) that surfaces which rows or attachments failed and why
If more than 20% of activity rows fail mapping, the failure is reclassified as fatal and the export is aborted
The audit record always reaches a terminal status ('completed', 'completed_with_warnings', or 'failed') even if an unhandled exception escapes — enforced by a top-level finally block
PartialFailureReport entries include: failureType (enum), affectedEntityId (String?), message (String), and timestamp (DateTime)
ExportSuccess with partial failures uses the 'completed_with_warnings' audit status and includes the PartialFailureReport
Error classification logic is defined in a separate BufdirExportErrorClassifier class so it can be unit-tested independently
Human-readable failure summaries use English and do not expose internal stack traces or database row identifiers to the UI layer
All exception types thrown by sub-services are documented in their respective class-level dartdoc comments
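The PartialFailureReport entry shape above can be sketched as a small immutable model. This is a sketch, not the final implementation; the enum values are assumptions inferred from the non-fatal cases listed (row mapping failure, missing attachment):

```dart
// Assumed failure types, derived from the non-fatal cases in the criteria.
enum PartialFailureType { rowMappingFailed, attachmentMissing }

class PartialFailureEntry {
  const PartialFailureEntry({
    required this.failureType,
    this.affectedEntityId,
    required this.message,
    required this.timestamp,
  });

  final PartialFailureType failureType;
  final String? affectedEntityId; // internal row identifier only, never PII
  final String message;           // human-readable, no stack traces
  final DateTime timestamp;

  Map<String, Object?> toJson() => {
        'failureType': failureType.name,
        'affectedEntityId': affectedEntityId,
        'message': message,
        'timestamp': timestamp.toIso8601String(),
      };
}
```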

Technical Requirements

Frameworks
Flutter
Riverpod
Dart
APIs
Supabase Database REST API (audit record update)
BufdirExportAuditService (status transition)
BufdirExportErrorClassifier (new internal class)
Data models
PartialFailureReport
PartialFailureEntry
ExportFailure
ExportSuccess
BufdirExportErrorCode (enum)
BufdirExportAuditRecord
Performance requirements
Error classification must be synchronous and complete in <1ms per exception to avoid adding latency to the main pipeline
PartialFailureReport serialisation to JSON must not block the main isolate
Security requirements
Stack traces must be logged internally (using Dart's developer.log) but must never be surfaced in ExportFailure.errorMessage returned to the UI
PartialFailureEntry.affectedEntityId must not expose personally identifiable information — use internal row identifiers only
Audit records flagged as 'failed' must remain immutable after reaching that terminal state
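The off-main-isolate serialisation requirement can be met in pure Dart with `Isolate.run` (Flutter code could equally use `compute`). A minimal sketch, where `entries` is assumed to be the already-built list of per-entry JSON maps:

```dart
import 'dart:convert';
import 'dart:isolate';

/// Encode the partial-failure payload on a short-lived helper isolate so a
/// large report never blocks the main isolate.
Future<String> encodePartialFailures(List<Map<String, Object?>> entries) {
  return Isolate.run(() => jsonEncode({'entries': entries}));
}
```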

Execution Context

Execution Tier
Tier 4 (323 tasks)

Can start after Tier 3 completes

Implementation Notes

Introduce a BufdirExportErrorCode enum with values: network_timeout, storage_write_failed, file_generation_failed, activity_query_failed, row_mapping_failed, attachment_missing, unknown. The BufdirExportErrorClassifier.classify(Object exception) method returns a record ({bool isFatal, BufdirExportErrorCode code, String message}). Implement the top-level safety net inside runExport as: `try { ... } catch (e, st) { developer.log('export failed', error: e, stackTrace: st); await _markFailed(auditId, classifier.classify(e)); rethrow; } finally { await _ensureTerminalStatus(auditId); }` where _ensureTerminalStatus is a no-op if the audit record is already in a terminal state. Logging the stack trace via developer.log keeps it internal, per the security requirements.
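A sketch of the classifier under these assumptions — the concrete exception-to-code mapping below is illustrative, not the final taxonomy:

```dart
import 'dart:async' show TimeoutException;
import 'dart:io' show SocketException;

// Enum values mirror the snake_case names from the note above.
enum BufdirExportErrorCode {
  network_timeout,
  storage_write_failed,
  file_generation_failed,
  activity_query_failed,
  row_mapping_failed,
  attachment_missing,
  unknown,
}

class BufdirExportErrorClassifier {
  /// Synchronous, allocation-light classification (<1ms requirement).
  ({bool isFatal, BufdirExportErrorCode code, String message}) classify(
      Object exception) {
    return switch (exception) {
      TimeoutException() || SocketException() => (
          isFatal: true,
          code: BufdirExportErrorCode.network_timeout,
          message: 'The export timed out while contacting the server.',
        ),
      FormatException() => (
          isFatal: false,
          code: BufdirExportErrorCode.row_mapping_failed,
          message: 'An activity row could not be converted.',
        ),
      _ => (
          isFatal: true,
          code: BufdirExportErrorCode.unknown,
          message: 'The export failed unexpectedly.',
        ),
    };
  }
}
```

Keeping the mapping a single synchronous switch expression makes the <1ms budget trivial to meet and the class trivial to unit-test in isolation.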

Use a try/catch per row inside the mapping loop (not a single try/catch around the whole loop) to enable row-level non-fatal capture. Store the PartialFailureReport as a JSONB column partial_failures on the audit record — do not store it in a separate table, to keep queries simple.
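The per-row catch and the 20% reclassification rule might look like this sketch (`mapRow` and the plain string failure list are hypothetical stand-ins for the real mapper and PartialFailureReport):

```dart
/// Thrown when more than 20% of rows fail mapping, reclassifying the
/// whole export as fatal per the acceptance criteria.
class RowFailureThresholdExceeded implements Exception {}

List<String> mapActivityRows(
  List<Map<String, Object?>> rows,
  String Function(Map<String, Object?> row) mapRow,
  List<String> partialFailures,
) {
  final mapped = <String>[];
  for (final row in rows) {
    try {
      // try/catch per row: one bad row is non-fatal and the loop continues
      mapped.add(mapRow(row));
    } catch (e) {
      partialFailures.add('row ${row['id']}: $e');
    }
  }
  // More than 20% of rows failing reclassifies the export as fatal.
  if (rows.isNotEmpty && partialFailures.length / rows.length > 0.20) {
    throw RowFailureThresholdExceeded();
  }
  return mapped;
}
```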

Testing Requirements

Unit tests (flutter_test): test BufdirExportErrorClassifier with every known exception type and verify correct fatal/non-fatal classification. Test that the 20% row-failure threshold triggers reclassification to fatal. Test that the finally block transitions the audit to 'failed' even when an unexpected Error escapes all catch blocks (simulate by throwing from the mock audit service itself). Test that ExportSuccess with partial failures returns the 'completed_with_warnings' status.

Test that PartialFailureReport serialises and deserialises without data loss. Use a recorded fake for BufdirExportAuditService to assert that exactly one terminal status transition occurs regardless of failure mode. Target ≥90% branch coverage on error handling paths.
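The "exactly one terminal transition" assertion can be made with a small recorded fake along these lines (method and class names here are assumptions, not the real BufdirExportAuditService API):

```dart
/// Recorded fake: captures every status transition so tests can assert
/// that exactly one terminal transition occurred.
class RecordedFakeAuditService {
  final List<String> transitions = [];

  static const _terminal = {'completed', 'completed_with_warnings', 'failed'};

  Future<void> setStatus(String auditId, String status) async {
    transitions.add(status);
  }

  /// Number of terminal-status transitions recorded; tests assert this == 1.
  int get terminalTransitionCount =>
      transitions.where(_terminal.contains).length;
}
```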

Component
Bufdir Export Orchestrator Service
Type: service · Priority: high
Epic Risks (2)
Risk 1 — Impact: medium · Probability: medium · Category: scope

Bufdir's column schema may have per-field business rules (conditional required fields, cross-field validation, organisation-specific category taxonomies) that cannot be expressed in a simple key-value mapping configuration. If the configuration model is too simple, supporting NHF's specific requirements will require hardcoded organisation logic, undermining the configuration-driven design.

Mitigation & Contingency

Mitigation: Design the column configuration schema as a full JSON document supporting field-level transformation rules, conditional expressions, and org-specific value enumerations. Validate the design against a real NHF Bufdir Excel template before implementation begins.

Contingency: If the configuration model cannot express all required rules, implement a thin transformation plugin interface where org-specific logic can be added as a named Dart class registered against the organisation ID, with the JSON config covering only the common cases.

Risk 2 — Impact: high · Probability: medium · Category: technical

For large organisations like NHF with potentially tens of thousands of activity records, the full export pipeline (query + map + generate + bundle + upload) may exceed Supabase Edge Function execution time limits (typically 150s), causing silent timeouts that leave audit records in a pending state indefinitely.

Mitigation & Contingency

Mitigation: Implement the orchestrator as a background Dart isolate with progress streaming rather than a synchronous Edge Function call. Use chunked processing for the query and mapping phases to reduce peak memory usage. Profile against realistic NHF data volumes in a staging environment.

Contingency: If processing time cannot be reduced below the timeout threshold, implement an asynchronous job model where the export is queued, processed in the background, and the user is notified via push notification when the download is ready — treating it as an eventual rather than synchronous operation.