Priority: critical | Complexity: medium | Domain: backend | Status: pending | Owner: backend specialist | Tier 3

Acceptance Criteria

resolveCoordinatorsForMentor(mentorId) returns a non-empty list of active coordinator user_ids for all chapters the mentor belongs to
A mentor belonging to multiple chapters (up to 5 per NHF requirements) has coordinators from all chapters included in the result — no chapter is silently dropped
Coordinators with account status inactive, blocked, or deleted are excluded from results
The same coordinator appearing in multiple chapters is deduplicated — they appear once in the result list
Cache hit is used for repeat calls within the TTL window (default 5 minutes) — verified by observing zero Supabase queries on second call
Cache is invalidated or bypassed when a coordinator's status changes — manual invalidation method is exposed on the service
An empty list is returned (not an exception) when a mentor has no chapters assigned, with a warning log
resolveCoordinatorsForMentor completes in < 300 ms on cache miss for a chapter with 10 coordinators
The service is injectable via an abstract interface for testability
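The injectability criterion above could be met with an abstract contract along these lines (a sketch; the class and method names beyond resolveCoordinatorsForMentor are illustrative assumptions, not prescribed by this task):

```dart
/// Abstract contract for coordinator resolution so tests can
/// substitute a fake without touching Supabase.
abstract class CoordinatorResolver {
  /// Returns deduplicated active coordinator user_ids for every
  /// chapter the mentor belongs to; empty list when the mentor
  /// has no chapters (with a warning log, per the criteria).
  Future<List<String>> resolveCoordinatorsForMentor(String mentorId);

  /// Drops the cached entry for one chapter.
  void invalidateChapterCache(String chapterId);

  /// Drops all cached entries (e.g. on sign-out).
  void clearAllCache();
}
```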

Technical Requirements

Frameworks
Flutter
Supabase Dart client (supabase_flutter)
Riverpod (for DI and scoped caching) or BLoC
APIs
Supabase REST — chapter_memberships table (filter by mentor_id, role = 'coordinator')
Supabase REST — user_roles or profiles table (filter active coordinators by chapter_id)
Data models
ChapterMembership
CoordinatorProfile
UserRole
Performance requirements
Cache TTL of 5 minutes maximum — coordinators are semi-static data, short TTL is acceptable
Cache must be an in-memory Map keyed by chapter_id — no persistence to disk required
Bulk resolution for 10 mentors in the same chapter must result in exactly 1 Supabase query (cache reuse)
Security requirements
Resolution logic must only return coordinators for chapters the calling user is authorised to access per RLS
Cached coordinator lists must not survive a user session logout — clear cache on auth state change
Do not cache user PII beyond user_id — no names or emails in the cache layer

Execution Context

Execution Tier
Tier 3

Tier 3 - 413 tasks

Can start after Tier 2 completes

Implementation Notes

Implement a private _coordinatorCache = <String, ({List<String> coordinatorIds, DateTime cachedAt})>{} map inside the service, keyed by chapter_id. On each resolution call, check whether chapterId is in the cache and whether DateTime.now().difference(entry.cachedAt).inMinutes < ttlMinutes. On a cache miss, query Supabase for all active coordinators of the chapter and populate the cache. For a mentor in multiple chapters, resolve each chapter sequentially (or in parallel with Future.wait) and merge the results into a Set for deduplication.
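The caching and resolution flow described above could look roughly like this. This is a sketch under stated assumptions: the table and column names mirror the spec, but the exact query shape is assumed, and for brevity the active-status filter is applied on chapter_memberships here, whereas the spec routes it through user_roles/profiles.

```dart
import 'package:supabase_flutter/supabase_flutter.dart';

/// Sketch of the TTL-cached resolver. Not a definitive
/// implementation; query shape and names are assumptions.
class SupabaseCoordinatorResolver {
  SupabaseCoordinatorResolver(this._client, {this.ttlMinutes = 5});

  final SupabaseClient _client;
  final int ttlMinutes;

  // chapter_id -> cached coordinator ids + timestamp (record type).
  final _coordinatorCache =
      <String, ({List<String> coordinatorIds, DateTime cachedAt})>{};

  Future<List<String>> resolveCoordinatorsForMentor(String mentorId) async {
    final memberships = await _client
        .from('chapter_memberships')
        .select('chapter_id')
        .eq('mentor_id', mentorId);
    final chapterIds =
        memberships.map((row) => row['chapter_id'] as String).toList();
    if (chapterIds.isEmpty) return const []; // warning log goes here

    // Resolve chapters in parallel; the Set deduplicates a
    // coordinator who belongs to more than one chapter.
    final perChapter =
        await Future.wait(chapterIds.map(_coordinatorsForChapter));
    return perChapter.expand((ids) => ids).toSet().toList();
  }

  Future<List<String>> _coordinatorsForChapter(String chapterId) async {
    final entry = _coordinatorCache[chapterId];
    if (entry != null &&
        DateTime.now().difference(entry.cachedAt).inMinutes < ttlMinutes) {
      return entry.coordinatorIds; // cache hit: no Supabase query
    }
    final rows = await _client
        .from('chapter_memberships')
        .select('user_id')
        .eq('chapter_id', chapterId)
        .eq('role', 'coordinator')
        .eq('status', 'active');
    final ids = rows.map((row) => row['user_id'] as String).toList();
    _coordinatorCache[chapterId] =
        (coordinatorIds: ids, cachedAt: DateTime.now());
    return ids;
  }
}
```

Because the cache is keyed by chapter_id rather than mentor_id, bulk resolution for 10 mentors in the same chapter naturally collapses to one Supabase query, as the performance requirement demands; only user_ids are cached, satisfying the no-PII rule.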

Expose void invalidateChapterCache(String chapterId) and void clearAllCache() methods. Subscribe to Supabase auth state changes via supabase.auth.onAuthStateChange and call clearAllCache() on sign-out.
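A minimal sketch of those two methods plus the sign-out hook, assuming the same _coordinatorCache and _client fields as the service above (the _listenForSignOut name is an assumption):

```dart
void invalidateChapterCache(String chapterId) =>
    _coordinatorCache.remove(chapterId);

void clearAllCache() => _coordinatorCache.clear();

/// Call once at construction: per the security requirement,
/// cached coordinator ids must not survive a session logout.
void _listenForSignOut() {
  _client.auth.onAuthStateChange.listen((state) {
    if (state.event == AuthChangeEvent.signedOut) {
      clearAllCache();
    }
  });
}
```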

Testing Requirements

Unit tests (flutter_test + mocktail): mock Supabase queries and verify the correct filters are applied (role = coordinator, status = active, correct chapter_ids). Test deduplication when a coordinator is in multiple chapters. Test the empty result when a mentor has no chapters. Test the cache hit: call resolveCoordinatorsForMentor twice with the same mentorId and verify the Supabase mock is only called once.

Test cache invalidation: call invalidate, then verify a fresh query is made. Test multi-chapter mentor: mock two chapter memberships and verify coordinators from both chapters are merged. Minimum 85% line coverage.
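The cache-hit test could be sketched with mocktail as below. To keep mocking tractable, this sketch assumes the service talks to Supabase through a small gateway seam; the ChapterGateway and CoordinatorService names here are hypothetical, not part of the spec.

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';

/// Hypothetical seam so tests can count Supabase queries.
abstract class ChapterGateway {
  Future<List<String>> activeCoordinators(String chapterId);
}

class MockChapterGateway extends Mock implements ChapterGateway {}

/// Minimal service under test, mirroring the TTL-cache design.
class CoordinatorService {
  CoordinatorService(this._gateway, {this.ttlMinutes = 5});
  final ChapterGateway _gateway;
  final int ttlMinutes;
  final _cache = <String, ({List<String> ids, DateTime at})>{};

  Future<List<String>> resolveCoordinatorsForChapter(String chapterId) async {
    final hit = _cache[chapterId];
    if (hit != null &&
        DateTime.now().difference(hit.at).inMinutes < ttlMinutes) {
      return hit.ids; // cache hit: no gateway call
    }
    final ids = await _gateway.activeCoordinators(chapterId);
    _cache[chapterId] = (ids: ids, at: DateTime.now());
    return ids;
  }
}

void main() {
  test('second call within TTL issues no Supabase query', () async {
    final gateway = MockChapterGateway();
    when(() => gateway.activeCoordinators('ch-1'))
        .thenAnswer((_) async => ['coord-1', 'coord-2']);

    final service = CoordinatorService(gateway, ttlMinutes: 5);
    await service.resolveCoordinatorsForChapter('ch-1');
    await service.resolveCoordinatorsForChapter('ch-1');

    // The gateway ran exactly once: the second call was a cache hit.
    verify(() => gateway.activeCoordinators('ch-1')).called(1);
  });
}
```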

Component
Coordinator Notification Service
Type: service | Complexity: medium
Epic Risks (3)
High impact, medium probability, security risk

Supabase RLS policies for status reads and writes must correctly distinguish between a mentor editing their own status and a coordinator editing another mentor's status within the same chapter. Incorrect policies could allow cross-chapter data leakage or silently block legitimate status updates, causing hard-to-diagnose runtime failures.

Mitigation & Contingency

Mitigation: Write RLS policies with explicit role checks (auth.uid() = mentor_id OR chapter_coordinator_check()) and verify with integration tests that cover same-chapter coordinator access, cross-chapter denial, and self-access. Review policies with a second developer before merging.

Contingency: If policy errors surface after merge, temporarily widen policy to coordinator role globally while a targeted fix is authored; use Supabase audit logs to trace any unauthorised access during the interim.

Medium impact, medium probability, integration risk

CoordinatorNotificationService must correctly resolve which coordinator(s) are responsible for a given mentor's chapter. If the chapter-coordinator mapping is incomplete or a mentor belongs to multiple chapters (as with NHF multi-chapter memberships), the service could fail to notify or duplicate notifications to the wrong coordinators.

Mitigation & Contingency

Mitigation: Use the existing chapter membership data model and query all active coordinator roles for each of the mentor's chapters. Add a de-duplication step before dispatch. Write integration tests with fixtures covering single-chapter, multi-chapter, and no-coordinator edge cases.

Contingency: If resolution logic proves too complex at this stage, fall back to notifying all coordinators in the organisation until a proper chapter-scoped resolver can be delivered in a follow-up task.

High impact, low probability, technical risk

Adding new columns to peer_mentors in production could conflict with existing application code that does SELECT * queries if new non-nullable columns without defaults are introduced, causing unexpected failures in unrelated screens.

Mitigation & Contingency

Mitigation: Make all new columns nullable or provide safe defaults. Use additive migration strategy with no column renames or drops. Run migration against a staging copy of production data before applying to live.

Contingency: Prepare a rollback migration script that drops only the new columns; coordinate with the team to deploy the rollback and hotfix immediately if production issues are detected.