Implement Generated Reports Repository
epic-bufdir-report-export-core-backend-task-014 — Implement the generated_reports repository layer that persists export history records to the Supabase database. Each record stores: id, org_id, requested_by_user_id, scope_id, scope_level, period_from, period_to, format, file_path, file_size_bytes, status (completed/failed), validation_warnings (JSON), and created_at. Add RLS policies ensuring coordinators only see exports for their own scope. Expose query methods: listByOrg(orgId, limit), getById(id), and createRecord().
Acceptance Criteria
Technical Requirements
Execution Context
Tier 2 - 518 tasks
Can start after Tier 1 completes
Implementation Notes
Create the migration as `supabase/migrations/{timestamp}_create_generated_reports.sql`. Define the table, constraints, indexes, and RLS policies in a single migration file. The repository class should accept a `SupabaseClient` instance in its constructor for dependency injection, making it trivially testable with a mock client. Define a `GeneratedReport` TypeScript interface with all column types mapped correctly (use `string` for UUIDs, `Date` for timestamps, `object[]` for validation_warnings).
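A minimal sketch of the row type under these assumptions: only the `status` values are fixed by the task spec, so `format` and `scope_level` are left as plain strings constrained by the migration's CHECK constraints, and the period columns are typed as `Date` on the assumption they map to date/timestamp columns.

```typescript
// Sketch of the persisted row; only `status` values are enumerated by the spec,
// the other CHECK-constrained columns are left as strings here.
export interface GeneratedReport {
  id: string;                     // UUID
  org_id: string;                 // UUID
  requested_by_user_id: string;   // UUID
  scope_id: string;               // UUID
  scope_level: string;            // constrained by a CHECK in the migration
  period_from: Date;              // reporting period start
  period_to: Date;                // reporting period end
  format: string;                 // constrained by a CHECK in the migration
  file_path: string;              // storage path of the exported file
  file_size_bytes: number;
  status: 'completed' | 'failed';
  validation_warnings: object[];  // JSON array of warning objects
  created_at: Date;
}
```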
The `CreateGeneratedReportInput` type should omit auto-generated fields (id, created_at). Use `supabase.from('generated_reports').insert(data).select().single()` for createRecord to get the persisted row back in one round trip. For listByOrg, use `.order('created_at', { ascending: false }).limit(limit)`. Do not implement update or delete methods — they should not exist in this repository.
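One possible shape for the repository, assuming supabase-js v2. `maybeSingle()` is used so getById resolves to null for an unknown UUID rather than throwing, and update/delete methods are deliberately absent.

```typescript
import type { SupabaseClient } from '@supabase/supabase-js';

// Insert payload omits the auto-generated columns.
export type CreateGeneratedReportInput = Omit<GeneratedReport, 'id' | 'created_at'>;

export class GeneratedReportsRepository {
  // Injected client keeps the class testable with a mock or test-project client.
  constructor(private readonly supabase: SupabaseClient) {}

  async createRecord(input: CreateGeneratedReportInput): Promise<GeneratedReport> {
    const { data, error } = await this.supabase
      .from('generated_reports')
      .insert(input)
      .select()
      .single(); // one round trip: insert and return the persisted row
    if (error) throw error;
    return data as GeneratedReport;
  }

  async listByOrg(orgId: string, limit: number): Promise<GeneratedReport[]> {
    const { data, error } = await this.supabase
      .from('generated_reports')
      .select('*')
      .eq('org_id', orgId)
      .order('created_at', { ascending: false })
      .limit(limit);
    if (error) throw error;
    return (data ?? []) as GeneratedReport[];
  }

  async getById(id: string): Promise<GeneratedReport | null> {
    const { data, error } = await this.supabase
      .from('generated_reports')
      .select('*')
      .eq('id', id)
      .maybeSingle(); // null instead of an error when no row matches
    if (error) throw error;
    return data as GeneratedReport | null;
  }

  // No update or delete methods, by design.
}
```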
Testing Requirements
Write SQL unit tests for the migration: verify table schema matches spec, verify CHECK constraints reject invalid status/format/scope_level values, verify RLS blocks a coordinator from selecting another org's records. Write TypeScript unit tests for the repository class using a Supabase test project: test createRecord inserts and returns the full row; test listByOrg returns records in DESC order limited correctly; test getById returns null for unknown UUID. Test that createRecord fails gracefully if a required field is null. Verify the (org_id, created_at DESC) index is used via EXPLAIN output in a test script.
All DB tests should run against a test schema that is reset between test runs.
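A rough sketch of two of the TypeScript cases, assuming Deno's built-in test runner and that the test project's URL, service-role key, and a seeded org id are supplied via environment variables (all names here are placeholders):

```typescript
import { assert, assertEquals } from 'jsr:@std/assert';
import { createClient } from 'npm:@supabase/supabase-js@2';
import { GeneratedReportsRepository } from './generated_reports_repository.ts';

// Placeholder env vars pointing at the disposable test schema/project.
const supabase = createClient(
  Deno.env.get('SUPABASE_TEST_URL')!,
  Deno.env.get('SUPABASE_TEST_SERVICE_ROLE_KEY')!,
);
const repo = new GeneratedReportsRepository(supabase);
const TEST_ORG_ID = Deno.env.get('TEST_ORG_ID')!; // org seeded by the test setup

Deno.test('getById returns null for an unknown UUID', async () => {
  const row = await repo.getById('00000000-0000-0000-0000-000000000000');
  assertEquals(row, null);
});

Deno.test('listByOrg returns newest records first and respects the limit', async () => {
  const rows = await repo.listByOrg(TEST_ORG_ID, 5);
  assert(rows.length <= 5);
  for (let i = 1; i < rows.length; i++) {
    // created_at must be non-increasing (DESC order).
    assert(rows[i - 1].created_at >= rows[i].created_at);
  }
});
```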
Supabase Edge Functions have a default execution timeout. For large national-scope exports aggregating tens of thousands of activities across 1,400 chapters, the edge function may time out before completing, leaving coordinators with a failed export and no partial output.
Mitigation & Contingency
Mitigation: Optimise the aggregation SQL using pre-materialised aggregation views or RPC functions that run inside the database rather than iterating over records in Deno. Profile query execution time against realistic production data volumes early. Request an elevated timeout limit from Supabase if needed. Implement progress checkpointing so the export can be resumed from the last completed aggregation batch.
Contingency: For organisations exceeding a configurable threshold (e.g. >5,000 activities), switch to an asynchronous export pattern: the edge function writes a 'pending' audit record and enqueues the job; the client polls for completion and is notified via Supabase Realtime when the file is ready.
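An illustrative sketch of that cut-over inside the edge function; every name here is hypothetical, and the status CHECK constraint would need to allow a 'pending' value for this pattern.

```typescript
// Illustrative sketch of the asynchronous fallback; all names are hypothetical.
type ExportRequest = {
  orgId: string;
  scopeId: string;
  periodFrom: string; // ISO date
  periodTo: string;   // ISO date
  format: 'pdf' | 'csv';
};

const ACTIVITY_THRESHOLD = 5000; // configurable per-organisation threshold

async function handleExportRequest(
  req: ExportRequest,
  activityCount: number,
  deps: {
    runSynchronousExport: (req: ExportRequest) => Promise<{ id: string; status: string }>;
    createPendingRecord: (req: ExportRequest) => Promise<{ id: string }>;
    enqueueExportJob: (reportId: string) => Promise<void>;
  },
) {
  if (activityCount <= ACTIVITY_THRESHOLD) {
    // Small exports stay on the existing synchronous path.
    return deps.runSynchronousExport(req);
  }
  // Large exports: write a 'pending' record, enqueue the job, return immediately.
  // The client polls getById(id) or subscribes via Supabase Realtime for completion.
  const { id } = await deps.createPendingRecord(req);
  await deps.enqueueExportJob(id);
  return { id, status: 'pending' };
}
```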
Server-side PDF generation in a Deno Edge Function environment restricts library choices. Many popular PDF libraries require Node.js APIs not available in Deno, or produce large bundle sizes that exceed edge function limits. Choosing the wrong library could block the entire PDF generation path.
Mitigation & Contingency
Mitigation: Spike PDF library selection as the first task of this epic, evaluating at least two Deno-compatible options (e.g. pdf-lib, jsPDF with Deno compatibility shim). Test bundle size and basic rendering before committing to an implementation. Document the chosen library's constraints.
Contingency: If no suitable Deno-native PDF library is found, generate a well-structured HTML report from the edge function and use a headless Chromium service (e.g. Browserless, Gotenberg) for HTML-to-PDF conversion, or temporarily ship CSV-only export while the PDF path is resolved.
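If pdf-lib is the candidate chosen in the spike, a smoke test of roughly this shape could confirm basic rendering in the edge runtime (the npm: import specifier is an assumption about the runtime's module support):

```typescript
// Minimal pdf-lib smoke test for the spike; assumes npm: specifiers are supported.
import { PDFDocument, StandardFonts } from 'npm:pdf-lib';

export async function renderSmokeTestPdf(): Promise<Uint8Array> {
  const doc = await PDFDocument.create();
  const page = doc.addPage([595, 842]); // A4 in PDF points
  const font = await doc.embedFont(StandardFonts.Helvetica);
  page.drawText('Bufdir report export smoke test', { x: 50, y: 790, size: 18, font });
  return doc.save(); // Uint8Array, ready to upload to Storage or stream back
}
```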
Peer mentors affiliated with multiple chapters (a documented NHF scenario) must not be double-counted in participant totals. Incorrect deduplication logic would overreport participation figures to Bufdir, which could be discovered during audit and damage organisational credibility.
Mitigation & Contingency
Mitigation: Define and document the deduplication contract explicitly before coding: deduplication is per-person per-period, not per-activity. Build dedicated unit tests with fixtures containing the exact multi-chapter membership patterns described in NHF's documentation. Have an NHF representative validate test fixture outputs against known-good manual counts.
Contingency: If deduplication logic produces results that cannot be verified against manual counts before launch, surface a deduplication warning in the export preview listing the affected peer mentor IDs, and require explicit coordinator acknowledgement before finalising the export.
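A small sketch of the per-person, per-period contract described in the mitigation; the field names and fixture values are illustrative, not taken from the NHF data model.

```typescript
// Illustrative sketch of the deduplication contract; field names are assumed.
type Participation = {
  personId: string;   // stable person identifier across chapters
  chapterId: string;
  activityId: string;
  occurredAt: string; // ISO date within the reporting period
};

// A peer mentor attending activities in several chapters counts once per period.
function countUniqueParticipants(records: Participation[]): number {
  const seen = new Set<string>();
  for (const r of records) {
    seen.add(r.personId); // dedupe on person, not on activity or chapter
  }
  return seen.size;
}

// Example fixture: one mentor active in two chapters within the same period.
const fixture: Participation[] = [
  { personId: 'p1', chapterId: 'oslo', activityId: 'a1', occurredAt: '2025-03-01' },
  { personId: 'p1', chapterId: 'bergen', activityId: 'a2', occurredAt: '2025-03-15' },
  { personId: 'p2', chapterId: 'oslo', activityId: 'a1', occurredAt: '2025-03-01' },
];
console.assert(countUniqueParticipants(fixture) === 2); // p1 is not double-counted
```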