Priority: high · Complexity: low · Component: database · Status: pending · Assignee: database specialist · Tier 0

Acceptance Criteria

Given a valid organizationId, when getTemplatesByOrganization() is called, then all non-deleted templates for that organization are returned as a List<RecurringTemplate>
Given a new template object, when createTemplate() is called, then a row is inserted in Supabase and the created template (with server-assigned ID and created_at) is returned
Given an existing template ID and updated fields, when updateTemplate() is called, then the Supabase row is updated and the updated template is returned
Given an existing template ID, when softDeleteTemplate() is called, then deleted_at is set to the current UTC timestamp and the template no longer appears in getTemplatesByOrganization() results
Given a network error during any operation, when the error is caught, then a RepositoryException is thrown with a descriptive message (no raw Supabase errors leaked)
Given cached templates exist, when getTemplatesByOrganization() is called while offline, then cached templates are returned from local cache
The RecurringTemplate data model must include: id (String), organizationId (String), activityType (String), defaultDurationMinutes (int), notesTemplate (String?), recurrencePattern (RecurrencePattern), createdAt (DateTime), updatedAt (DateTime), deletedAt (DateTime?)
RLS policies on the Supabase table must restrict reads and writes to authenticated users belonging to the template's organization
All Supabase table interactions must use the `recurring_activity_templates` table name as a constant, not a hardcoded string literal
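The acceptance criteria above pin down the data model; a minimal Dart sketch follows. Snake_case column names are an assumption inferred from the `created_at` / `deleted_at` fields mentioned in the criteria, and `recurrencePattern` is kept as the raw string here for brevity (the spec types it as a `RecurrencePattern` enum).

```dart
/// Sketch of the RecurringTemplate domain model from the acceptance criteria.
/// Column names (snake_case) are an assumption, not confirmed by the schema.
class RecurringTemplate {
  final String id;
  final String organizationId;
  final String activityType;
  final int defaultDurationMinutes;
  final String? notesTemplate;
  final String recurrencePattern; // serialized enum name
  final DateTime createdAt;
  final DateTime updatedAt;
  final DateTime? deletedAt; // null means not soft-deleted

  const RecurringTemplate({
    required this.id,
    required this.organizationId,
    required this.activityType,
    required this.defaultDurationMinutes,
    this.notesTemplate,
    required this.recurrencePattern,
    required this.createdAt,
    required this.updatedAt,
    this.deletedAt,
  });

  factory RecurringTemplate.fromJson(Map<String, dynamic> json) =>
      RecurringTemplate(
        id: json['id'] as String,
        organizationId: json['organization_id'] as String,
        activityType: json['activity_type'] as String,
        defaultDurationMinutes: json['default_duration_minutes'] as int,
        notesTemplate: json['notes_template'] as String?,
        recurrencePattern: json['recurrence_pattern'] as String,
        createdAt: DateTime.parse(json['created_at'] as String),
        updatedAt: DateTime.parse(json['updated_at'] as String),
        deletedAt: json['deleted_at'] == null
            ? null
            : DateTime.parse(json['deleted_at'] as String),
      );

  Map<String, dynamic> toJson() => {
        'id': id,
        'organization_id': organizationId,
        'activity_type': activityType,
        'default_duration_minutes': defaultDurationMinutes,
        'notes_template': notesTemplate,
        'recurrence_pattern': recurrencePattern,
        'created_at': createdAt.toUtc().toIso8601String(),
        'updated_at': updatedAt.toUtc().toIso8601String(),
        'deleted_at': deletedAt?.toUtc().toIso8601String(),
      };
}
```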

Technical Requirements

Frameworks
Flutter
Dart
Riverpod
APIs
Supabase REST API — recurring_activity_templates table
Data Models
RecurringTemplate
RecurrencePattern
ActivityType
Performance Requirements
Template list for an organization must load within 2 seconds on a 3G connection
Local cache must be checked before making a network request — cache-first strategy for reads
Soft-delete must be a single PATCH request, not a DELETE + re-insert
Security Requirements
Supabase RLS policy must enforce that coordinators can only read/write templates belonging to their organization
Template creation must validate that the activityType value is from a known enum — reject unknown strings before sending to Supabase
The notesTemplate field may contain personal guidance — do not log its contents
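The activityType validation requirement can be enforced before any network call; a sketch follows. The enum values here are placeholders — the real `ActivityType` set lives in the codebase.

```dart
/// Hypothetical ActivityType values; the real set lives in the codebase.
enum ActivityType { mentoring, workshop, checkIn }

/// Returns the matching ActivityType, or null for an unknown string, so the
/// caller can reject the value before any request is sent to Supabase.
ActivityType? tryParseActivityType(String raw) {
  for (final v in ActivityType.values) {
    if (v.name == raw) return v;
  }
  return null;
}
```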

Execution Context

Execution Tier
Tier 0 (440 tasks)

Implementation Notes

Follow the repository pattern already established in the codebase. Define `abstract class RecurringTemplateRepository` with the four CRUD methods, then implement `SupabaseRecurringTemplateRepository`. Inject the Supabase client via the constructor (not via a global getter) for testability. For caching, use a simple `Map<String, List<RecurringTemplate>> _cache` keyed on organizationId, cleared on create/update/delete to invalidate stale data.
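A sketch of that shape, under stated assumptions: the query-builder names (`isFilter`, `PostgrestException`) follow the supabase_flutter v2 API and should be checked against the pinned package version, and `RepositoryException` is defined here for illustration.

```dart
class RepositoryException implements Exception {
  final String message;
  RepositoryException(this.message);
}

abstract class RecurringTemplateRepository {
  Future<List<RecurringTemplate>> getTemplatesByOrganization(String organizationId);
  Future<RecurringTemplate> createTemplate(RecurringTemplate template);
  Future<RecurringTemplate> updateTemplate(String id, Map<String, dynamic> fields);
  Future<void> softDeleteTemplate(String id);
}

/// Table name as a constant, per the acceptance criteria.
const kRecurringTemplatesTable = 'recurring_activity_templates';

class SupabaseRecurringTemplateRepository implements RecurringTemplateRepository {
  SupabaseRecurringTemplateRepository(this._client); // injected, not global

  final SupabaseClient _client;
  final Map<String, List<RecurringTemplate>> _cache = {};

  @override
  Future<List<RecurringTemplate>> getTemplatesByOrganization(
      String organizationId) async {
    final cached = _cache[organizationId]; // cache-first read
    if (cached != null) return cached;
    try {
      final rows = await _client
          .from(kRecurringTemplatesTable)
          .select()
          .eq('organization_id', organizationId)
          .isFilter('deleted_at', null); // exclude soft-deleted rows
      final templates = [for (final row in rows) RecurringTemplate.fromJson(row)];
      return _cache[organizationId] = templates;
    } on PostgrestException {
      // Descriptive message only; no raw Supabase error leaked.
      throw RepositoryException(
          'Failed to load templates for organization $organizationId');
    }
  }

  @override
  Future<void> softDeleteTemplate(String id) async {
    // A single PATCH, per the performance requirement.
    await _client
        .from(kRecurringTemplatesTable)
        .update({'deleted_at': DateTime.now().toUtc().toIso8601String()})
        .eq('id', id);
    _cache.clear(); // invalidate on any write
  }

  // createTemplate / updateTemplate follow the same
  // insert/update + cache-clear shape.
}
```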

Define `RecurrencePattern` as a Dart enum with values: `daily`, `weekly`, `biweekly`, `monthly`, `custom`. Serialize it to and from its string name via `fromJson`/`toJson` helpers; the Supabase client exchanges plain maps, so `jsonDecode`/`jsonEncode` are not needed here. The `deleted_at` filter in getTemplatesByOrganization should use `.isFilter('deleted_at', null)` in the Supabase query builder to exclude soft-deleted records. Register the repository via a Riverpod Provider for injection into the BLoC/Cubit layer.
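The enum with name-based serialization is a few lines of plain Dart (`values.byName` throws an `ArgumentError` for unknown strings, which surfaces bad data early):

```dart
/// RecurrencePattern travels as its string name in Supabase rows.
enum RecurrencePattern {
  daily,
  weekly,
  biweekly,
  monthly,
  custom;

  static RecurrencePattern fromJson(String value) =>
      RecurrencePattern.values.byName(value);

  String toJson() => name;
}
```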

Testing Requirements

Unit tests mocking the Supabase client for all CRUD operations. Test: (1) successful fetch returns mapped RecurringTemplate list, (2) createTemplate maps response to domain model correctly, (3) updateTemplate sends correct PATCH payload, (4) softDeleteTemplate sets deleted_at and filters from subsequent get calls, (5) Supabase exception is wrapped in RepositoryException, (6) cache hit prevents second Supabase call for same organizationId. Use flutter_test. Integration test (optional, lower priority): verify against a Supabase test project that RLS correctly blocks cross-organization access.
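Test (6) can be sketched without a full Supabase mock by counting remote reads behind the cache-first path. This assumes `flutter_test`; a string list stands in for the mapped `RecurringTemplate` list.

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('cache hit prevents a second fetch for the same organizationId', () async {
    var remoteCalls = 0;
    final cache = <String, List<String>>{};

    // Stand-in for getTemplatesByOrganization's cache-first read path.
    Future<List<String>> getTemplates(String orgId) async {
      final cached = cache[orgId];
      if (cached != null) return cached;
      remoteCalls++; // would be the Supabase call in the real repository
      return cache[orgId] = ['template-a', 'template-b'];
    }

    await getTemplates('org-1');
    await getTemplates('org-1');
    expect(remoteCalls, 1);
  });
}
```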

Epic Risks (3)
Impact: high · Probability: high · Category: scope

Partial failures in bulk registration — where some mentors succeed and others fail — create a complex UX state that is easy to mishandle. If the UI does not clearly communicate which records succeeded and which failed, coordinators may re-submit already-saved records (creating duplicates) or miss failed records entirely (creating underreporting).

Mitigation & Contingency

Mitigation: Design the per-mentor result screen as a primary deliverable of this epic, not an afterthought. Use a clear list view with success/failure indicators per mentor name, and offer a 'Retry failed' action that pre-selects only the failed mentors for resubmission.

Contingency: If partial failure UX proves too complex to deliver within scope, implement a simpler all-or-nothing submission mode for the initial release with a clear error message listing which mentors failed, and defer the partial-retry UI to a follow-up sprint.
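The per-mentor result screen implies a small result model; a sketch follows, with class and function names being illustrative rather than from the codebase.

```dart
/// One mentor's outcome in a bulk submission, driving the
/// success/failure indicator next to each mentor name.
class MentorSubmissionResult {
  final String mentorId;
  final bool succeeded;
  final String? error;

  const MentorSubmissionResult(this.mentorId, this.succeeded, [this.error]);
}

/// Mentor IDs to pre-select when the coordinator taps 'Retry failed',
/// so already-saved records are never resubmitted.
List<String> retrySelection(List<MentorSubmissionResult> results) =>
    [for (final r in results) if (!r.succeeded) r.mentorId];
```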

Impact: medium · Probability: medium · Category: technical

Submitting proxy records for a large group (e.g., 30+ mentors) as individual Supabase inserts may cause latency issues or hit rate limits, degrading the coordinator experience and potentially causing timeout failures that leave data in an inconsistent state.

Mitigation & Contingency

Mitigation: Implement the BulkRegistrationOrchestrator to batch inserts using a Supabase RPC call that accepts an array of proxy records, reducing round-trips to a single network call. Add progress indication using a stream of per-record results if the RPC supports it.

Contingency: If the RPC approach is blocked by Supabase limitations, fall back to chunked parallel inserts (5 records per batch) with retry logic, capping total submission time and surfacing a progress bar to manage coordinator expectations.
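The chunked fallback can be sketched as a generic helper; chunks run sequentially here for simplicity (a `Future.wait` over chunks would parallelize), and the function names are illustrative.

```dart
/// Splits [records] into batches of [chunkSize] and runs [submit] on each,
/// retrying a failed chunk up to [maxRetries] times before rethrowing.
Future<List<R>> submitInChunks<T, R>(
  List<T> records,
  Future<List<R>> Function(List<T> chunk) submit, {
  int chunkSize = 5,
  int maxRetries = 2,
}) async {
  final results = <R>[];
  for (var i = 0; i < records.length; i += chunkSize) {
    final end =
        (i + chunkSize < records.length) ? i + chunkSize : records.length;
    final chunk = records.sublist(i, end);
    var attempt = 0;
    while (true) {
      try {
        results.addAll(await submit(chunk));
        break;
      } catch (_) {
        if (++attempt > maxRetries) rethrow; // surface the failed chunk
      }
    }
  }
  return results;
}
```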

Impact: medium · Probability: medium · Category: technical

Unifying state management for both single and bulk proxy flows in a single BLoC risks state leakage between flows — for example, a previously selected mentor list persisting when a coordinator switches from bulk to single mode — causing confusing UI states or incorrect submissions.

Mitigation & Contingency

Mitigation: Define separate, named state subtrees within the BLoC for single-proxy state and bulk-proxy state, with explicit reset events triggered on flow entry. Write unit tests for state isolation scenarios using the bloc_test package.

Contingency: If unified BLoC state becomes unmanageable, split into two separate BLoCs (ProxySingleRegistrationBLoC and ProxyBulkRegistrationBLoC) sharing only common events via a parent coordinator Cubit.
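The state-isolation idea can be sketched independently of the bloc package: each flow owns its subtree, and entering a flow replaces that subtree with a fresh initial state so nothing leaks across flows. Class names here are illustrative, not from the codebase.

```dart
/// Bulk-flow subtree: mentor multi-select.
class BulkProxyState {
  final List<String> selectedMentorIds;
  const BulkProxyState({this.selectedMentorIds = const []});
  static const initial = BulkProxyState();
}

/// Single-flow subtree: one mentor at most.
class SingleProxyState {
  final String? selectedMentorId;
  const SingleProxyState({this.selectedMentorId});
  static const initial = SingleProxyState();
}

class ProxyRegistrationState {
  final SingleProxyState single;
  final BulkProxyState bulk;
  const ProxyRegistrationState({
    this.single = SingleProxyState.initial,
    this.bulk = BulkProxyState.initial,
  });

  /// Handler for the explicit reset event fired on bulk-flow entry:
  /// the bulk subtree is replaced, the single subtree is untouched.
  ProxyRegistrationState enterBulkFlow() =>
      ProxyRegistrationState(single: single, bulk: BulkProxyState.initial);
}
```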