Implement auth context change auto-invalidation
epic-contact-list-management-business-logic-task-009 — Configure ContactListRiverpodProvider to watch the authenticated user's role and organization context providers. When the role or organization changes (e.g., after a session change or org switch), automatically invalidate and re-fetch both the contact and peer mentor streams, either via ref.invalidateSelf() or via a ref.watch-triggered rebuild. This prevents stale contact data across multi-chapter, multi-role session changes.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 6 - 158 tasks
Can start after Tier 5 completes
Implementation Notes
Use ref.watch() rather than ref.listen() for the role and organization context so the provider is automatically disposed and rebuilt by Riverpod's dependency graph; this is cleaner than manually calling ref.invalidateSelf(). Place the watch calls at the very top of the build/create method so that Riverpod registers the dependencies before any async work starts. For the multi-chapter NHF case, the organization context provider should expose a single 'active organization id' value; ContactListRiverpodProvider watches that single value, which keeps the invalidation logic simple. Avoid storing the previous organization id inside the provider; rely entirely on Riverpod's rebuild cycle.
Use autoDispose modifier on the provider to ensure streams are cancelled when the contacts screen is not active, preventing background Supabase socket usage.
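The notes above can be sketched roughly as follows. This is a minimal illustration, not the final implementation: the context provider names, the Contact type, and the ContactListService call are all assumptions based on this task's description.

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

// Stand-in model and service; real definitions live elsewhere in the app.
class Contact {
  final String id;
  final String organizationId;
  const Contact(this.id, this.organizationId);
}

// Assumed context providers; the org provider exposes a single
// 'active organization id' value as described in the notes above.
final userRoleProvider = StateProvider<String>((ref) => 'member');
final activeOrganizationIdProvider = StateProvider<String>((ref) => 'org-1');

// autoDispose cancels the Supabase stream when no screen is listening,
// preventing background socket usage.
final contactListProvider = StreamProvider.autoDispose<List<Contact>>((ref) {
  // Watch context at the very top, before any async work, so Riverpod
  // registers the dependencies and rebuilds this provider on any change.
  final role = ref.watch(userRoleProvider);
  final orgId = ref.watch(activeOrganizationIdProvider);

  // Hypothetical service call. The old stream is torn down automatically
  // when role/orgId change, so no manual invalidateSelf() is needed.
  return ContactListService.streamContacts(orgId: orgId, role: role);
});
```

Because the provider body re-runs whenever either watched value changes, no previous-value bookkeeping is needed inside the provider.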
Testing Requirements
Write unit tests using ProviderContainer to verify that mutating the role provider value causes ContactListRiverpodProvider to rebuild. Write integration tests that simulate an organization-switch event and assert that the contacts AsyncValue transitions through loading → data with the new org-scoped data. Test rapid consecutive switches (3 switches in under 500 ms) to confirm that only the last context triggers a final, stable fetch. Verify that the old stream subscription is cancelled by inspecting active listener counts.
All tests use flutter_test and mock Supabase client; no live network calls.
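A sketch of the first rebuild test, using stand-in providers so the example is self-contained. All names here are assumptions; the real test would target ContactListRiverpodProvider with the mocked Supabase client.

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:flutter_test/flutter_test.dart';

// Minimal stand-ins for the real providers.
final userRoleProvider = StateProvider<String>((ref) => 'member');

// Counts builds so the test can assert that a role change forces a rebuild.
int buildCount = 0;
final contactListProvider = Provider.autoDispose<int>((ref) {
  ref.watch(userRoleProvider); // the dependency under test
  return ++buildCount;
});

void main() {
  test('role change rebuilds the contact list provider', () async {
    final container = ProviderContainer();
    addTearDown(container.dispose);

    // Keep the autoDispose provider alive for the test's duration.
    final sub = container.listen(contactListProvider, (_, __) {});
    expect(sub.read(), 1);

    // Mutating the watched role provider must invalidate the contact list.
    container.read(userRoleProvider.notifier).state = 'peer_mentor';
    await Future<void>.delayed(Duration.zero); // let the rebuild settle

    expect(sub.read(), 2);
  });
}
```

The org-switch and rapid-switch integration tests follow the same pattern, mutating activeOrganizationIdProvider and asserting on the AsyncValue sequence instead.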
For organizations with large contact lists (NHF has 1,400 local chapters and potentially thousands of contacts), local in-memory filtering may be too slow. Likewise, Supabase ILIKE queries without supporting indexes may exceed acceptable response times or accumulate excessive read costs, degrading search usability for power users.
Mitigation & Contingency
Mitigation: Define and document the list-size threshold in ContactSearchService before implementation. Confirm that supporting indexes on the name and notes columns exist in the Supabase schema before enabling server-side search; note that plain B-tree indexes do not accelerate ILIKE '%term%' patterns, so trigram (pg_trgm) indexes are typically required. Profile ContactSearchService against realistic data volumes in the staging environment using the largest expected org.
Contingency: If response times are unacceptable in staging, introduce result-count pagination in ContactListService and add a user-visible 'showing top N results — refine your search' indicator, deferring full pagination to a follow-up task.
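The threshold-plus-cap approach above might look roughly like this. The threshold value, result cap, and all class and method names are illustrative assumptions to be finalized in ContactSearchService.

```dart
// Hypothetical sketch of the list-size threshold and capped server search.
class Contact {
  final String id;
  final String name;
  const Contact(this.id, this.name);
}

class ContactSearchService {
  // Orgs at or below this size are filtered in memory; larger orgs
  // (e.g. big NHF chapters) go through a capped server-side search.
  static const int serverSearchThreshold = 500;
  static const int serverResultCap = 100;

  final List<Contact> localContacts;
  final Future<List<Contact>> Function(String query, int limit) serverSearch;

  ContactSearchService({
    required this.localContacts,
    required this.serverSearch,
  });

  Future<List<Contact>> search(String query) async {
    if (localContacts.length <= serverSearchThreshold) {
      // Small org: in-memory substring filter is fast enough.
      final q = query.toLowerCase();
      return localContacts
          .where((c) => c.name.toLowerCase().contains(q))
          .toList();
    }
    // Large org: capped result set; the UI shows the
    // 'showing top N results' indicator when the cap is reached.
    return serverSearch(query, serverResultCap);
  }
}
```

Keeping the cap in one constant makes the later move to full pagination a localized change.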
In NHF's multi-chapter context, when a user switches organization, Riverpod providers may emit a brief window of stale contact data scoped to the previous organization before the invalidation cycle completes, transiently exposing contacts from the wrong chapter.
Mitigation & Contingency
Mitigation: Model organization context as a Riverpod provider dependency so that any context change immediately marks contact providers as stale. Render a loading skeleton instead of the stale list during the re-fetch transition. Cover this scenario in integration tests with explicit org-switch sequences.
Contingency: If race conditions are observed during QA, add an explicit organization_id equality check in ContactListService that compares each fetched record's scope to the active session org, discarding any mismatched batch before returning results to the provider.
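The contingency guard described above can be sketched as a small pure function; the Contact type and function name are assumptions based on this task.

```dart
// Minimal model for the sketch; the real Contact lives elsewhere.
class Contact {
  final String id;
  final String organizationId;
  const Contact(this.id, this.organizationId);
}

// Drop any fetched batch containing rows scoped to a different
// organization than the active session.
List<Contact> guardOrgScope(List<Contact> batch, String activeOrgId) {
  final mismatched = batch.any((c) => c.organizationId != activeOrgId);
  if (mismatched) {
    // A stale batch from the previous org slipped through the
    // invalidation window; discard it rather than exposing
    // wrong-chapter contacts. The provider re-fetches with the
    // correct scope on its next rebuild.
    return const [];
  }
  return batch;
}
```

Discarding the whole batch (rather than filtering it) keeps the failure mode loud in QA, which is the point of a contingency check.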