Implement MentorStatusRepository write methods
epic-peer-mentor-pause-foundation-task-005 — Add write operations to MentorStatusRepository: pauseMentor(mentorId, reason, pauseAt), reactivateMentor(mentorId), and updateMentorStatus(mentorId, status, reason). Each write must insert a corresponding row into peer_mentor_status_history within a Supabase transaction. Handle optimistic locking and conflict resolution.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 4 - 323 tasks
Can start after Tier 3 completes
Implementation Notes
Implement the atomic write as a Postgres RPC function (e.g., rpc_pause_mentor(p_mentor_id, p_reason, p_pause_at)) that performs both the UPDATE on peer_mentor_status and the INSERT into peer_mentor_status_history inside a single PL/pgSQL function; this avoids two-phase-commit complexity in the Dart client. For optimistic locking, include a WHERE updated_at = :last_known_updated_at clause in the RPC and have it report zero rows affected when a conflict is detected; translate that result into a ConflictException in Dart. The Dart client calls supabase.rpc('rpc_pause_mentor', params: {...}) and maps the result.
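A minimal PL/pgSQL sketch of the approach above. Table and column names (peer_mentor_status, pause_reason, paused_at) are assumptions for illustration, not confirmed schema; the actual migration defines the real shapes.

```sql
-- Sketch only: table/column names are assumed, not the confirmed schema.
create or replace function rpc_pause_mentor(
  p_mentor_id uuid,
  p_reason text,
  p_pause_at timestamptz,
  p_last_known_updated_at timestamptz
) returns integer
language plpgsql
security definer
as $$
declare
  v_rows integer;
begin
  -- Optimistic lock: the UPDATE only matches if the row is unchanged
  -- since the client last read it.
  update peer_mentor_status
     set status = 'paused',
         pause_reason = p_reason,
         paused_at = p_pause_at,
         updated_at = now()
   where mentor_id = p_mentor_id
     and updated_at = p_last_known_updated_at;
  get diagnostics v_rows = row_count;

  if v_rows = 1 then
    insert into peer_mentor_status_history (mentor_id, status, reason, changed_at)
    values (p_mentor_id, 'paused', p_reason, p_pause_at);
  end if;

  -- 0 signals a conflict to the Dart client.
  return v_rows;
end;
$$;
```

Both statements run inside the function's single transaction, so a failed history INSERT rolls back the status UPDATE automatically, which is the atomicity property the tests below must verify.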
Keep the three public Dart methods (pauseMentor, reactivateMentor, updateMentorStatus) as thin wrappers over two or three RPC functions to preserve clarity.
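A sketch of what one thin Dart wrapper could look like, assuming the supabase_flutter client and a hypothetical ConflictException type (the exception class and parameter names are illustrative, not an agreed API):

```dart
// Sketch only: ConflictException and parameter names are assumptions.
class ConflictException implements Exception {
  final String message;
  ConflictException(this.message);
}

class MentorStatusRepository {
  final SupabaseClient _client;
  MentorStatusRepository(this._client);

  Future<void> pauseMentor(
    String mentorId,
    String reason,
    DateTime pauseAt,
    DateTime lastKnownUpdatedAt,
  ) async {
    // The RPC returns the number of status rows it updated.
    final rowsAffected = await _client.rpc('rpc_pause_mentor', params: {
      'p_mentor_id': mentorId,
      'p_reason': reason,
      'p_pause_at': pauseAt.toIso8601String(),
      'p_last_known_updated_at': lastKnownUpdatedAt.toIso8601String(),
    });
    if (rowsAffected == 0) {
      throw ConflictException(
          'pauseMentor: status row changed since last read');
    }
  }
}
```

Keeping the wrapper this thin means the unit tests can focus on parameter mapping and conflict translation, while atomicity is owned entirely by the RPC.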
Testing Requirements
Unit tests (flutter_test + mocktail): mock Supabase RPC and REST calls, verify correct parameters are passed for each write method, verify conflict exceptions are thrown when precondition checks fail. Test that a failed history insert does not leave the status row in a modified state (mock transaction rollback scenario). Integration tests against local Supabase: verify atomicity — kill the connection mid-transaction and confirm neither table was written. Verify RLS blocks cross-mentor writes.
Test that concurrent writes (two parallel pauseMentor calls) produce exactly one success and exactly one conflict error. Require a minimum of 90% line coverage.
Supabase RLS policies for status reads and writes must correctly distinguish between a mentor editing their own status and a coordinator editing another mentor's status within the same chapter. Incorrect policies could allow cross-chapter data leakage or silently block legitimate status updates, causing hard-to-diagnose runtime failures.
Mitigation & Contingency
Mitigation: Write RLS policies with explicit role checks (auth.uid() = mentor_id OR chapter_coordinator_check()) and verify with integration tests that cover same-chapter coordinator access, cross-chapter denial, and self-access. Review policies with a second developer before merging.
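A sketch of the policy shape described above. The helper chapter_coordinator_check is named in this ticket but its signature is an assumption here (a row-aware variant taking the mentor id is shown so the policy can evaluate per row):

```sql
-- Sketch only: the helper's signature and the table layout are assumptions.
alter table peer_mentor_status enable row level security;

create policy mentor_self_or_chapter_coordinator
on peer_mentor_status
for update
using (
  -- A mentor may edit their own status...
  auth.uid() = mentor_id
  -- ...or the caller coordinates one of this mentor's chapters.
  or chapter_coordinator_check(mentor_id)
);
```

The integration tests listed above (same-chapter coordinator access, cross-chapter denial, self-access) map one-to-one onto the two branches of this USING clause.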
Contingency: If policy errors surface after merge, temporarily widen the policy to grant the coordinator role global access while a targeted fix is authored; use Supabase audit logs to trace any unauthorised access during the interim.
CoordinatorNotificationService must correctly resolve which coordinator(s) are responsible for a given mentor's chapter. If the chapter-coordinator mapping is incomplete or a mentor belongs to multiple chapters (as with NHF multi-chapter memberships), the service could fail to notify or duplicate notifications to the wrong coordinators.
Mitigation & Contingency
Mitigation: Use the existing chapter membership data model and query all active coordinator roles for each of the mentor's chapters. Add a de-duplication step before dispatch. Write integration tests with fixtures covering single-chapter, multi-chapter, and no-coordinator edge cases.
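The query-then-de-duplicate step above can be sketched as a small pure function, which keeps it trivially unit-testable against the single-chapter, multi-chapter, and no-coordinator fixtures. The function and parameter names here are hypothetical:

```dart
// Sketch only: function and parameter names are assumptions.
Future<Set<String>> resolveCoordinatorIds(
  String mentorId, {
  required Future<List<String>> Function(String mentorId) chaptersFor,
  required Future<List<String>> Function(String chapterId)
      activeCoordinatorsFor,
}) async {
  // A Set de-duplicates coordinators who cover several of the
  // mentor's chapters (e.g., NHF multi-chapter memberships).
  final coordinatorIds = <String>{};
  for (final chapterId in await chaptersFor(mentorId)) {
    coordinatorIds.addAll(await activeCoordinatorsFor(chapterId));
  }
  return coordinatorIds;
}
```

An empty result set corresponds to the no-coordinator edge case and should be handled explicitly by the dispatch code rather than silently dropped.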
Contingency: If resolution logic proves too complex at this stage, fall back to notifying all coordinators in the organisation until a proper chapter-scoped resolver can be delivered in a follow-up task.
Adding new columns to peer_mentors in production could break existing application code: non-nullable columns without defaults cause existing INSERTs to fail, and screens that issue SELECT * with strict row deserialization can break when unexpected columns appear, producing failures in otherwise unrelated parts of the app.
Mitigation & Contingency
Mitigation: Make all new columns nullable or provide safe defaults. Use additive migration strategy with no column renames or drops. Run migration against a staging copy of production data before applying to live.
Contingency: Prepare a rollback migration script that drops only the new columns; coordinate with the team to deploy the rollback and hotfix immediately if production issues are detected.