Priority: critical · Complexity: medium · Area: backend · Status: pending · Owner: backend specialist · Tier: 3

Acceptance Criteria

fetchMentorStatus(mentorId) returns a typed MentorStatus model or null when the mentor has no status row
fetchActiveMentorsForChapter(chapterId) returns only mentors with status = 'active' belonging to the given chapter
fetchPausedMentorsForChapter(chapterId) returns only mentors with status = 'paused' belonging to the given chapter
fetchStatusHistory(mentorId) returns all historical status rows for the mentor ordered by created_at descending
All returned models are fully type-safe Dart objects — no dynamic or Map<String, dynamic> leaking into callers
Supabase RLS policies are respected: a coordinator may only read mentors in their own chapters; a mentor may only read their own status
Queries use .select() with explicit column lists — no SELECT *
All methods throw a typed RepositoryException (not raw PostgrestException) on network or query failures
Null-safety is correctly handled throughout — no late fields that can throw at runtime
Repository is injected via an abstract interface so BLoC/Riverpod consumers depend on the interface, not the concrete class
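The interface-first criterion above can be sketched as follows. The method signatures come from the acceptance criteria; the exact model fields beyond the column list in the implementation notes (e.g. `displayName` on `PeerMentor`) are assumptions for illustration, not the final schema.

```dart
// Sketch of the typed models and abstract interface from the acceptance
// criteria. Field names follow the column list in the implementation notes;
// the PeerMentor and MentorStatusHistory shapes are assumptions.
class MentorStatus {
  final String id;
  final String mentorId;
  final String status; // 'active' | 'paused'
  final DateTime? pausedAt;
  final String? pauseReason;
  final DateTime? reactivatedAt;

  const MentorStatus({
    required this.id,
    required this.mentorId,
    required this.status,
    this.pausedAt,
    this.pauseReason,
    this.reactivatedAt,
  });
}

class MentorStatusHistory {
  final String id;
  final String mentorId;
  final String status;
  final DateTime createdAt;

  const MentorStatusHistory({
    required this.id,
    required this.mentorId,
    required this.status,
    required this.createdAt,
  });
}

class PeerMentor {
  final String id;
  final String displayName; // assumed field, for illustration only

  const PeerMentor({required this.id, required this.displayName});
}

// BLoC/Riverpod consumers depend on this interface, never on the
// concrete Supabase-backed implementation.
abstract class IMentorStatusRepository {
  Future<MentorStatus?> fetchMentorStatus(String mentorId);
  Future<List<PeerMentor>> fetchActiveMentorsForChapter(String chapterId);
  Future<List<PeerMentor>> fetchPausedMentorsForChapter(String chapterId);
  Future<List<MentorStatusHistory>> fetchStatusHistory(String mentorId,
      {int limit = 50});
}
```

Note the nullable `Future<MentorStatus?>` return and the absence of `late` fields, matching the null-safety criterion.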

Technical Requirements

Frameworks
Flutter
Supabase Dart client (supabase_flutter)
Riverpod or BLoC for DI
APIs
Supabase REST/PostgREST — peer_mentor_status table
Supabase REST/PostgREST — peer_mentor_status_history table
Supabase RLS policies for chapter-scoped access
Data Models
MentorStatus
MentorStatusHistory
PeerMentor
Chapter
Performance Requirements
fetchActiveMentorsForChapter and fetchPausedMentorsForChapter must complete in < 500 ms for chapters with up to 200 mentors
fetchStatusHistory must paginate or limit to the 50 most recent rows by default to avoid unbounded result sets
Supabase queries must include indexed column filters (mentor_id, chapter_id, status) — verify indexes exist in migration
Security Requirements
All read methods must operate through Supabase RLS — never bypass RLS with service-role key in mobile client
mentor_id parameters must be validated as non-empty UUIDs before issuing queries to prevent malformed requests
Do not log raw Supabase responses — log only sanitized error codes and message types
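The UUID-validation requirement above can be sketched as a small guard that repository methods call before issuing any query. This is a minimal sketch; `requireUuid` is a hypothetical helper name, not an existing API.

```dart
// Hedged sketch: reject empty or malformed UUID parameters before a
// Supabase query is built, per the security requirements.
final RegExp _uuidPattern = RegExp(
    r'^[0-9a-fA-F]{8}-[0-9a-fA-F]{4}-[0-9a-fA-F]{4}-'
    r'[0-9a-fA-F]{4}-[0-9a-fA-F]{12}$');

/// Returns [value] unchanged when it is a well-formed UUID;
/// throws [ArgumentError] otherwise.
String requireUuid(String value, String paramName) {
  if (value.isEmpty || !_uuidPattern.hasMatch(value)) {
    throw ArgumentError.value(value, paramName, 'must be a non-empty UUID');
  }
  return value;
}
```

A repository method would call `requireUuid(mentorId, 'mentorId')` as its first statement, so malformed input never reaches PostgREST.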

Execution Context

Execution Tier
Tier 3

Tier 3 - 413 tasks

Can start after Tier 2 completes

Implementation Notes

Define an abstract IMentorStatusRepository interface first — this enables easy mocking and a future swap to a local cache layer. For fetchMentorStatus, use Supabase's .from('peer_mentor_status').select('id, mentor_id, status, paused_at, pause_reason, reactivated_at').eq('mentor_id', mentorId).maybeSingle() so that a missing row yields null instead of throwing. For chapter-scoped queries, join via peer_mentor_chapter_membership rather than denormalizing chapter_id onto the status table, unless the migration in task-003 added it. Wrap all Supabase calls in try/catch blocks that convert PostgrestException into domain exceptions.
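The exception-wrapping step above can be sketched as a small mapper. `RepositoryException` and `PermissionException` come from the acceptance criteria; the `code`/`message` pair mirrors the fields carried by PostgrestException, and treating Postgres error code 42501 (insufficient_privilege) as a permission failure is an assumption to verify against the actual RLS behaviour.

```dart
// Domain exceptions from the acceptance criteria. Raw PostgrestException
// must never escape the repository.
class RepositoryException implements Exception {
  final String code;
  final String message;
  const RepositoryException(this.code, this.message);

  @override
  String toString() => 'RepositoryException($code): $message';
}

class PermissionException extends RepositoryException {
  const PermissionException(String code, String message)
      : super(code, message);
}

/// Maps a PostgREST error (assumed code/message shape) to a domain
/// exception. 42501 is Postgres insufficient_privilege, which is how
/// RLS denials commonly surface; '403' covers HTTP-level denials.
RepositoryException mapPostgrestError(
    {required String code, required String message}) {
  if (code == '42501' || code == '403') {
    return PermissionException(code, message);
  }
  return RepositoryException(code, message);
}
```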

Consider a thin Result wrapper type to make error handling explicit at the BLoC layer without try/catch in UI code. Do not introduce a caching layer in this task — that is handled in task-006.
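A minimal sealed-class sketch of that Result wrapper (assuming Dart 3 for `sealed`); the `guard` helper name is hypothetical:

```dart
// Minimal Result sketch: success and failure as explicit variants,
// so BLoC code can switch on the outcome instead of using try/catch.
sealed class Result<T> {
  const Result();
}

class Ok<T> extends Result<T> {
  final T value;
  const Ok(this.value);
}

class Err<T> extends Result<T> {
  final Exception error;
  const Err(this.error);
}

/// Runs [op] and captures any thrown Exception as an Err.
/// Non-Exception errors (e.g. Error subtypes) still propagate.
Future<Result<T>> guard<T>(Future<T> Function() op) async {
  try {
    return Ok(await op());
  } on Exception catch (e) {
    return Err(e);
  }
}
```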

Testing Requirements

Unit tests (flutter_test + mocktail): mock the Supabase client and verify each method constructs the correct PostgREST query (correct table, filters, ordering, column selection). Test null return when no row exists. Test that PostgrestException is wrapped into RepositoryException. Test that RLS-denied responses (403) are correctly surfaced as PermissionException.

Integration tests against a local Supabase instance (docker-compose): verify fetchActiveMentorsForChapter does not return paused mentors, verify fetchStatusHistory ordering, verify RLS blocks cross-chapter reads. Minimum 90% line coverage on the repository class.
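The "null when no row" unit test above can be sketched without mocktail using a hand-rolled fake; `MentorStatusReader` here is a hypothetical stand-in for the relevant slice of the repository interface, and the real suite would mock the Supabase client itself.

```dart
// Hand-rolled fake sketch for the "null when no status row" test case.
abstract class MentorStatusReader {
  Future<String?> fetchMentorStatus(String mentorId);
}

class FakeMentorStatusReader implements MentorStatusReader {
  final Map<String, String> rowsByMentorId;
  FakeMentorStatusReader(this.rowsByMentorId);

  @override
  Future<String?> fetchMentorStatus(String mentorId) async =>
      rowsByMentorId[mentorId]; // maybeSingle() semantics: null when absent
}

Future<void> main() async {
  final repo = FakeMentorStatusReader({'m1': 'active'});
  assert(await repo.fetchMentorStatus('m1') == 'active');
  assert(await repo.fetchMentorStatus('m2') == null); // no row → null
}
```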

Component
Mentor Status Repository
Category: data · Risk: low
Epic Risks (3)
High impact · Medium probability · Security

Supabase RLS policies for status reads and writes must correctly distinguish between a mentor editing their own status and a coordinator editing another mentor's status within the same chapter. Incorrect policies could allow cross-chapter data leakage or silently block legitimate status updates, causing hard-to-diagnose runtime failures.

Mitigation & Contingency

Mitigation: Write RLS policies with explicit role checks (auth.uid() = mentor_id OR chapter_coordinator_check()) and verify with integration tests that cover same-chapter coordinator access, cross-chapter denial, and self-access. Review policies with a second developer before merging.

Contingency: If policy errors surface after merge, temporarily widen policy to coordinator role globally while a targeted fix is authored; use Supabase audit logs to trace any unauthorised access during the interim.
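The mitigation's policy shape can be sketched in SQL. This is a hedged sketch only: `chapter_coordinator_check()` is assumed to exist as a SECURITY DEFINER helper, and passing it the row's `mentor_id` is an assumption about its signature that must match the actual migration.

```sql
-- Hedged sketch of the read policy from the mitigation above.
-- chapter_coordinator_check(mentor_id) is an assumed helper returning
-- true when auth.uid() coordinates one of the mentor's chapters.
create policy "mentor_status_read"
  on peer_mentor_status
  for select
  using (
    auth.uid() = mentor_id
    or chapter_coordinator_check(mentor_id)
  );
```

Integration tests should then cover self-access, same-chapter coordinator access, and cross-chapter denial against this policy.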

Medium impact · Medium probability · Integration

CoordinatorNotificationService must correctly resolve which coordinator(s) are responsible for a given mentor's chapter. If the chapter-coordinator mapping is incomplete or a mentor belongs to multiple chapters (as with NHF multi-chapter memberships), the service could fail to notify or duplicate notifications to the wrong coordinators.

Mitigation & Contingency

Mitigation: Use the existing chapter membership data model and query all active coordinator roles for each of the mentor's chapters. Add a de-duplication step before dispatch. Write integration tests with fixtures covering single-chapter, multi-chapter, and no-coordinator edge cases.

Contingency: If resolution logic proves too complex at this stage, fall back to notifying all coordinators in the organisation until a proper chapter-scoped resolver can be delivered in a follow-up task.
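The de-duplication step in the mitigation can be sketched as a pure function; `resolveCoordinators` and the chapter-to-coordinators map are illustrative stand-ins for the real membership query, not existing code.

```dart
// Sketch: resolve every active coordinator for each of a mentor's
// chapters, preserving first-seen order and de-duplicating so that a
// coordinator of several of the mentor's chapters is notified once.
List<String> resolveCoordinators(
  List<String> mentorChapterIds,
  Map<String, List<String>> coordinatorsByChapter,
) {
  final seen = <String>{};
  final result = <String>[];
  for (final chapterId in mentorChapterIds) {
    for (final coordinatorId
        in coordinatorsByChapter[chapterId] ?? const <String>[]) {
      if (seen.add(coordinatorId)) result.add(coordinatorId);
    }
  }
  return result;
}
```

This shape also makes the fixture cases in the mitigation (single-chapter, multi-chapter, no-coordinator) trivial to unit-test.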

High impact · Low probability · Technical

Adding new columns to peer_mentors in production could conflict with existing application code that does SELECT * queries if new non-nullable columns without defaults are introduced, causing unexpected failures in unrelated screens.

Mitigation & Contingency

Mitigation: Make all new columns nullable or provide safe defaults. Use additive migration strategy with no column renames or drops. Run migration against a staging copy of production data before applying to live.

Contingency: Prepare a rollback migration script that drops only the new columns; coordinate with the team to deploy the rollback and hotfix immediately if production issues are detected.
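The additive strategy and its rollback can be sketched in SQL; the specific column names and types here are assumptions for illustration, since the actual columns are defined by the migration task.

```sql
-- Hedged sketch of an additive migration per the mitigation: nullable
-- or safely defaulted columns, no renames or drops. Column names are
-- illustrative assumptions.
alter table peer_mentors
  add column if not exists status text not null default 'active',
  add column if not exists paused_at timestamptz null;

-- Pre-written rollback, dropping only the new columns:
-- alter table peer_mentors
--   drop column if exists status,
--   drop column if exists paused_at;
```

A `not null` column is only safe here because it carries a default; any column without one must stay nullable.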