Implement Recurring Activity Template Repository
epic-coordinator-proxy-registration-bulk-orchestration-task-001 — Create the RecurringTemplateRepository data layer that fetches, caches, and persists recurring activity templates from Supabase. Implement CRUD operations for recurring templates including retrieval by organization, creation, update, and soft-delete. Define the RecurringTemplate data model with fields for activity type, default duration, notes template, and recurrence pattern.
Acceptance Criteria
Technical Requirements
Implementation Notes
Follow the repository pattern already established in the codebase. Define `abstract class RecurringTemplateRepository` with the four CRUD methods, then implement `SupabaseRecurringTemplateRepository`. Inject the Supabase client via constructor (not via a global getter) for testability. For caching, use a simple in-memory `Map` keyed by `organizationId`, so a repeat fetch for the same organization is served from the cache without a second Supabase call.
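As a sketch under the constraints above (the table name `recurring_templates`, the `RecurringTemplate` model, and the `RepositoryException` wrapper are assumptions defined elsewhere in this task; exception wrapping is shown only on the read path for brevity), the contract and the Supabase-backed implementation might look like:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

// Contract: the four CRUD operations named in this task.
abstract class RecurringTemplateRepository {
  Future<List<RecurringTemplate>> getTemplatesByOrganization(String organizationId);
  Future<RecurringTemplate> createTemplate(RecurringTemplate template);
  Future<RecurringTemplate> updateTemplate(RecurringTemplate template);
  Future<void> softDeleteTemplate(String templateId);
}

class SupabaseRecurringTemplateRepository implements RecurringTemplateRepository {
  // Client injected via constructor, not a global getter, for testability.
  SupabaseRecurringTemplateRepository(this._client);

  final SupabaseClient _client;

  // Simple per-organization cache; invalidated on every write.
  final Map<String, List<RecurringTemplate>> _cache = {};

  @override
  Future<List<RecurringTemplate>> getTemplatesByOrganization(String organizationId) async {
    final cached = _cache[organizationId];
    if (cached != null) return cached; // cache hit: no network call
    try {
      final rows = await _client
          .from('recurring_templates') // assumed table name
          .select()
          .eq('organization_id', organizationId)
          .isFilter('deleted_at', null); // exclude soft-deleted records
      final templates =
          rows.map((row) => RecurringTemplate.fromJson(row)).toList();
      return _cache[organizationId] = templates;
    } on PostgrestException catch (e) {
      throw RepositoryException('getTemplatesByOrganization failed', e);
    }
  }

  @override
  Future<RecurringTemplate> createTemplate(RecurringTemplate template) async {
    final row = await _client
        .from('recurring_templates')
        .insert(template.toJson())
        .select()
        .single();
    _cache.remove(template.organizationId); // invalidate stale list
    return RecurringTemplate.fromJson(row);
  }

  @override
  Future<RecurringTemplate> updateTemplate(RecurringTemplate template) async {
    final row = await _client
        .from('recurring_templates')
        .update(template.toJson())
        .eq('id', template.id)
        .select()
        .single();
    _cache.remove(template.organizationId);
    return RecurringTemplate.fromJson(row);
  }

  @override
  Future<void> softDeleteTemplate(String templateId) async {
    await _client
        .from('recurring_templates')
        .update({'deleted_at': DateTime.now().toUtc().toIso8601String()})
        .eq('id', templateId);
    _cache.clear(); // organizationId unknown here, so clear everything
  }
}
```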
Define `RecurrencePattern` as a Dart enum with values: `daily`, `weekly`, `biweekly`, `monthly`, `custom`. Use `fromJson`/`toJson` with `jsonDecode`/`jsonEncode` for Supabase serialization. The `deleted_at` filter in `getTemplatesByOrganization` should use `.isFilter('deleted_at', null)` in the Supabase query builder to exclude soft-deleted records. Register the repository via a Riverpod `Provider` for injection into the BLoC/Cubit layer.
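A minimal sketch of the enum and model with the serialization described above. The snake_case column names and the minutes unit for duration are assumptions; the fallback to `custom` for unknown pattern strings is an illustrative choice, not a stated requirement.

```dart
import 'dart:convert';

/// Recurrence options supported by templates; serialized by enum name.
enum RecurrencePattern {
  daily,
  weekly,
  biweekly,
  monthly,
  custom;

  static RecurrencePattern fromName(String name) => RecurrencePattern.values
      .firstWhere((p) => p.name == name, orElse: () => RecurrencePattern.custom);
}

class RecurringTemplate {
  const RecurringTemplate({
    required this.id,
    required this.organizationId,
    required this.activityType,
    required this.defaultDurationMinutes,
    required this.notesTemplate,
    required this.recurrencePattern,
  });

  final String id;
  final String organizationId;
  final String activityType;
  final int defaultDurationMinutes; // assumed unit: minutes
  final String notesTemplate;
  final RecurrencePattern recurrencePattern;

  factory RecurringTemplate.fromJson(Map<String, dynamic> json) =>
      RecurringTemplate(
        id: json['id'] as String,
        organizationId: json['organization_id'] as String,
        activityType: json['activity_type'] as String,
        defaultDurationMinutes: json['default_duration_minutes'] as int,
        notesTemplate: json['notes_template'] as String,
        recurrencePattern:
            RecurrencePattern.fromName(json['recurrence_pattern'] as String),
      );

  Map<String, dynamic> toJson() => {
        'id': id,
        'organization_id': organizationId,
        'activity_type': activityType,
        'default_duration_minutes': defaultDurationMinutes,
        'notes_template': notesTemplate,
        'recurrence_pattern': recurrencePattern.name,
      };
}

void main() {
  const template = RecurringTemplate(
    id: 't1',
    organizationId: 'org-1',
    activityType: 'mentoring_session',
    defaultDurationMinutes: 60,
    notesTemplate: 'Session with {mentee}',
    recurrencePattern: RecurrencePattern.weekly,
  );
  // Round-trip through the same jsonEncode/jsonDecode path Supabase rows use.
  final roundTripped = RecurringTemplate.fromJson(
      jsonDecode(jsonEncode(template.toJson())) as Map<String, dynamic>);
  print(roundTripped.recurrencePattern); // RecurrencePattern.weekly
}
```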
Testing Requirements
Unit tests mocking the Supabase client for all CRUD operations. Test: (1) successful fetch returns mapped RecurringTemplate list, (2) createTemplate maps response to domain model correctly, (3) updateTemplate sends correct PATCH payload, (4) softDeleteTemplate sets deleted_at and filters from subsequent get calls, (5) Supabase exception is wrapped in RepositoryException, (6) cache hit prevents second Supabase call for same organizationId. Use flutter_test. Integration test (optional, lower priority): verify against a Supabase test project that RLS correctly blocks cross-organization access.
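Mocking Supabase's chained query builder is verbose; one lightweight way to cover case (6) is to route the raw fetch through a narrow function seam and count calls with a spy. A pure-Dart sketch (the `TemplateFetcher` seam and `CachingTemplateSource` name are assumptions for illustration, not existing code):

```dart
// Hypothetical narrow seam: the repository fetches raw rows through this
// function type, so a test can count calls without stubbing Supabase chains.
typedef TemplateFetcher = Future<List<Map<String, dynamic>>> Function(
    String organizationId);

class CachingTemplateSource {
  CachingTemplateSource(this._fetch);

  final TemplateFetcher _fetch;
  final Map<String, List<Map<String, dynamic>>> _cache = {};

  // Returns cached rows when present; fetches and caches otherwise.
  Future<List<Map<String, dynamic>>> byOrganization(String orgId) async =>
      _cache[orgId] ??= await _fetch(orgId);
}

Future<void> main() async {
  var calls = 0;
  final source = CachingTemplateSource((orgId) async {
    calls++;
    return [
      {'id': 't1', 'organization_id': orgId},
    ];
  });

  await source.byOrganization('org-1');
  await source.byOrganization('org-1'); // second call is a cache hit
  if (calls != 1) throw StateError('expected 1 fetch, saw $calls');
  print('cache-hit test passed');
}
```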
Partial failures in bulk registration (where some mentors succeed and others fail) create a complex UX state that is easy to mishandle. If the UI does not clearly communicate which records succeeded and which failed, coordinators may re-submit already-saved records (creating duplicates) or miss failed records entirely (causing underreporting).
Mitigation & Contingency
Mitigation: Design the per-mentor result screen as a primary deliverable of this epic, not an afterthought. Use a clear list view with success/failure indicators per mentor name, and offer a 'Retry failed' action that pre-selects only the failed mentors for resubmission.
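The 'Retry failed' pre-selection then reduces to filtering the per-mentor results. A minimal sketch (the `MentorSubmissionResult` shape and field names are assumptions):

```dart
/// One row of the per-mentor result screen; a null error means success.
class MentorSubmissionResult {
  const MentorSubmissionResult({
    required this.mentorId,
    required this.mentorName,
    this.error,
  });

  final String mentorId;
  final String mentorName;
  final String? error;

  bool get succeeded => error == null;
}

/// Mentor ids to pre-select when the coordinator taps 'Retry failed'.
List<String> retrySelection(List<MentorSubmissionResult> results) =>
    results.where((r) => !r.succeeded).map((r) => r.mentorId).toList();

void main() {
  final results = [
    const MentorSubmissionResult(mentorId: 'm1', mentorName: 'Ana'),
    const MentorSubmissionResult(
        mentorId: 'm2', mentorName: 'Ben', error: 'timeout'),
  ];
  print(retrySelection(results)); // [m2]
}
```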
Contingency: If partial failure UX proves too complex to deliver within scope, implement a simpler all-or-nothing submission mode for the initial release with a clear error message listing which mentors failed, and defer the partial-retry UI to a follow-up sprint.
Submitting proxy records for a large group (e.g., 30+ mentors) as individual Supabase inserts may cause latency issues or hit rate limits, degrading the coordinator experience and potentially causing timeout failures that leave data in an inconsistent state.
Mitigation & Contingency
Mitigation: Implement the BulkRegistrationOrchestrator to batch inserts using a Supabase RPC call that accepts an array of proxy records, reducing round-trips to a single network call. Add progress indication using a stream of per-record results if the RPC supports it.
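A hedged sketch of the single round-trip, assuming a Postgres function (named `bulk_insert_proxy_records` here purely for illustration) is created in a migration, accepts a JSON array, and returns one row per input record:

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Submits all proxy records in one round-trip via a Postgres RPC.
/// Function name and payload/return shapes are assumptions for this sketch.
Future<List<Map<String, dynamic>>> submitBulk(
  SupabaseClient client,
  List<Map<String, dynamic>> proxyRecords,
) async {
  final result = await client.rpc(
    'bulk_insert_proxy_records', // assumed function name
    params: {'records': proxyRecords},
  );
  // Assumed return shape: one row per input record, e.g.
  // { mentor_id, success, error }, so the UI can show per-mentor results.
  return (result as List).cast<Map<String, dynamic>>();
}
```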
Contingency: If the RPC approach is blocked by Supabase limitations, fall back to chunked parallel inserts (5 records per batch) with retry logic, cap total submission time, and surface a progress bar to manage coordinator expectations.
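The chunked fallback can be sketched in pure Dart, with the actual Supabase insert injected as a function so the chunking and retry logic stay testable on their own (names, chunk size, and retry count are assumptions):

```dart
import 'dart:math' as math;

typedef BatchInsert = Future<void> Function(List<Map<String, dynamic>> batch);

// Retries a single operation up to [maxRetries] extra attempts.
Future<void> _withRetry(Future<void> Function() op, int maxRetries) async {
  for (var attempt = 0;; attempt++) {
    try {
      return await op();
    } catch (_) {
      if (attempt >= maxRetries) rethrow;
    }
  }
}

/// Splits [records] into chunks and inserts the chunks in parallel.
Future<void> insertInChunks(
  List<Map<String, dynamic>> records,
  BatchInsert insert, {
  int chunkSize = 5,
  int maxRetries = 2,
}) async {
  final chunks = [
    for (var i = 0; i < records.length; i += chunkSize)
      records.sublist(i, math.min(i + chunkSize, records.length)),
  ];
  await Future.wait(chunks.map((c) => _withRetry(() => insert(c), maxRetries)));
}

Future<void> main() async {
  var calls = 0;
  final records = List.generate(12, (i) => {'mentor_id': 'm$i'});
  await insertInChunks(records, (batch) async => calls++);
  print('batches: $calls'); // 12 records at 5 per chunk -> 3 batches
}
```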
Unifying state management for both single and bulk proxy flows in a single BLoC risks state leakage between flows — for example, a previously selected mentor list persisting when a coordinator switches from bulk to single mode — causing confusing UI states or incorrect submissions.
Mitigation & Contingency
Mitigation: Define separate, named state subtrees within the BLoC for single-proxy state and bulk-proxy state, with explicit reset events triggered on flow entry. Write unit tests for state isolation scenarios using the bloc_test package.
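One way to shape the isolated subtrees, sketched without the bloc package (class and method names are assumptions): entering a flow swaps in that flow's initial state while leaving the other subtree untouched.

```dart
class SingleProxyState {
  const SingleProxyState({this.selectedMentorId});
  final String? selectedMentorId;
  static const initial = SingleProxyState();
}

class BulkProxyState {
  const BulkProxyState({this.selectedMentorIds = const []});
  final List<String> selectedMentorIds;
  static const initial = BulkProxyState();
}

/// Combined BLoC state with one named subtree per flow.
class ProxyRegistrationState {
  const ProxyRegistrationState({
    this.single = SingleProxyState.initial,
    this.bulk = BulkProxyState.initial,
  });

  final SingleProxyState single;
  final BulkProxyState bulk;

  // Flow-entry events reset only their own subtree, so stale selections
  // cannot leak between single and bulk modes.
  ProxyRegistrationState enterSingleFlow() =>
      ProxyRegistrationState(single: SingleProxyState.initial, bulk: bulk);
  ProxyRegistrationState enterBulkFlow() =>
      ProxyRegistrationState(single: single, bulk: BulkProxyState.initial);
}

void main() {
  var state = const ProxyRegistrationState(
    bulk: BulkProxyState(selectedMentorIds: ['m1', 'm2']),
  );
  state = state.enterBulkFlow(); // reset on flow entry
  print(state.bulk.selectedMentorIds); // []
}
```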
Contingency: If unified BLoC state becomes unmanageable, split into two separate BLoCs (ProxySingleRegistrationBLoC and ProxyBulkRegistrationBLoC) sharing only common events via a parent coordinator Cubit.