Add Session-Scoped Cache to ProxyContactListProvider
epic-bulk-and-proxy-registration-foundation-task-005 — Extend ProxyContactListProvider with a session-scoped in-memory cache so that the peer mentor roster is fetched from Supabase at most once per app session. Implement a manual invalidation method (invalidateCache()) for use after roster changes. Add a cache-hit/miss indicator for debug logging. This reduces perceived latency in the multi-select and single-select coordinator flows that call the provider repeatedly.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 2 - 518 tasks
Can start after Tier 1 completes
Implementation Notes
Implement the cache as a private nullable `List` field on the provider: `null` means the roster has not been fetched this session, and a non-null value is returned directly without querying Supabase. Clear the field in `invalidateCache()` and on sign-out.
Keep the cache implementation trivially simple: this is not a generic cache layer; it is a single-use optimization. If the project adds roster mutation features later, document that those features must call `invalidateCache()` after successful writes.
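A minimal sketch of the session-scoped cache, in Dart. Only `ProxyContactListProvider` and `invalidateCache()` come from this task; the table name `peer_mentors`, the method name `fetchRoster()`, the row type, and the use of `debugPrint` for the hit/miss indicator are illustrative assumptions, not confirmed API.

```dart
import 'package:flutter/foundation.dart' show debugPrint;
import 'package:supabase_flutter/supabase_flutter.dart';

class ProxyContactListProvider {
  ProxyContactListProvider(this._client);

  final SupabaseClient _client; // assumed to be injected

  // null = not fetched this session; non-null = serve from memory.
  List<Map<String, dynamic>>? _cachedRoster;

  Future<List<Map<String, dynamic>>> fetchRoster() async {
    final cached = _cachedRoster;
    if (cached != null) {
      debugPrint('ProxyContactListProvider: cache hit');
      return cached;
    }
    debugPrint('ProxyContactListProvider: cache miss, querying Supabase');
    final rows = await _client.from('peer_mentors').select(); // assumed table
    final roster = List<Map<String, dynamic>>.from(rows as List);
    _cachedRoster = roster;
    return roster;
  }

  /// Must be called after any successful roster mutation so the next
  /// fetch goes back to Supabase.
  void invalidateCache() => _cachedRoster = null;

  /// Hook for sign-out: a new session must never see stale data.
  void onSignOut() => invalidateCache();
}
```

Returning the cached list directly (rather than a defensive copy) keeps the sketch trivially simple, matching the note above; callers that mutate the returned list would need a copy instead.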
Testing Requirements
Write flutter_test unit tests: (1) second call returns cached list and mock Supabase client is called exactly once, (2) invalidateCache() followed by a fetch calls Supabase again (mock called twice total), (3) simulated sign-out clears cache and next fetch goes to Supabase, (4) cache miss log is emitted on first call and cache hit log on second call (capture log output in test). All existing task-004 tests must continue to pass without modification.
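Test cases (1) and (2) can be sketched as a single flutter_test case. To keep the sketch self-contained it uses a tiny hand-rolled caching wrapper with an upstream call counter instead of the real provider and a mocked Supabase client; the class and member names here are hypothetical, not part of the task's API.

```dart
import 'package:flutter_test/flutter_test.dart';

/// Stand-in for the provider's caching behaviour (hypothetical names).
/// Real tests would target ProxyContactListProvider with a mocked client.
class _CachingFetcher {
  _CachingFetcher(this._fetch);
  final Future<List<String>> Function() _fetch;
  List<String>? _cache;
  int upstreamCalls = 0; // how often "Supabase" was actually hit

  Future<List<String>> get() async {
    final cached = _cache;
    if (cached != null) return cached;
    upstreamCalls++;
    return _cache = await _fetch();
  }

  void invalidateCache() => _cache = null;
}

void main() {
  test('second call is cached; invalidateCache forces a refetch', () async {
    final fetcher = _CachingFetcher(() async => ['mentor-a', 'mentor-b']);

    await fetcher.get();
    await fetcher.get();
    expect(fetcher.upstreamCalls, 1); // (1) second call served from cache

    fetcher.invalidateCache();
    await fetcher.get();
    expect(fetcher.upstreamCalls, 2); // (2) invalidation triggers refetch
  });
}
```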
Adding recorded_by_user_id to the activities table and writing correct RLS policies is error-prone: overly permissive policies would allow coordinators to record activities under arbitrary user IDs they do not manage, while overly restrictive policies would silently block valid proxy inserts. A policy defect here would either create a security vulnerability or break the entire proxy feature at runtime.
Mitigation & Contingency
Mitigation: Write RLS policies in a local Supabase emulator first. Include policy unit tests using pg_tap or supabase test helpers. Have a second reviewer check the migration SQL before merging. Explicitly test the three cases: coordinator inserting for their own mentors (should succeed), coordinator inserting for another chapter's mentors (should fail), peer mentor inserting for themselves (should succeed as before).
Contingency: If a policy defect is discovered in staging, roll back the migration with a down-migration script. Delay feature release until the policy is corrected and re-verified. Apply a feature flag to keep the proxy entry point hidden from coordinators until the fix is confirmed.
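The intended policy shape can be sketched in SQL. Only `activities` and `recorded_by_user_id` appear in this document; the columns `user_id`, the tables `chapter_members` and `chapter_coordinators`, and the policy name are assumptions standing in for the project's real schema. The `with check` clause encodes the three cases listed above: self-insert succeeds, proxy insert for a managed mentor succeeds, proxy insert for another chapter's mentor fails.

```sql
-- Sketch only: all names except activities and recorded_by_user_id are assumed.
alter table activities enable row level security;

create policy proxy_insert_for_managed_mentors
  on activities
  for insert
  with check (
    -- Case 1: peer mentor inserting for themselves (unchanged behaviour).
    auth.uid() = user_id
    or (
      -- Cases 2/3: proxy insert. The recorder must be the caller, and the
      -- target mentor must belong to a chapter the caller coordinates;
      -- otherwise the check fails and the insert is rejected.
      recorded_by_user_id = auth.uid()
      and exists (
        select 1
        from chapter_members cm
        join chapter_coordinators cc on cc.chapter_id = cm.chapter_id
        where cm.user_id = activities.user_id
          and cc.user_id = auth.uid()
      )
    )
  );
```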
The insert_bulk_activities RPC must behave atomically — a failure on row 7 of 12 must roll back rows 1–6. If Supabase's RPC transaction handling is misconfigured or if network interruptions cause partial acknowledgements, some peer mentors could receive duplicate or missing activity records, directly corrupting Bufdir statistics for the coordinator's chapter.
Mitigation & Contingency
Mitigation: Implement the RPC as a PostgreSQL function with explicit BEGIN/EXCEPTION/END block to guarantee atomicity. Add an integration test that inserts a batch where one row violates a unique constraint and asserts zero rows are committed. Document the transaction semantics in code comments.
Contingency: If atomicity cannot be guaranteed via RPC (e.g., due to Supabase plan limitations), fall back to a sequential insert loop with a compensating DELETE in case of partial failure, and surface a clear error to the coordinator listing which mentors failed and which succeeded.
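A sketch of the atomic RPC as a plpgsql function, per the mitigation above. The function name `insert_bulk_activities`, the `activities` table, and `recorded_by_user_id` come from this document; the jsonb payload shape and the remaining column names are assumptions. A plpgsql function body runs inside a single transaction, so an unhandled error already rolls back every row; the EXCEPTION block only re-raises with a clearer message, which still aborts all inserts.

```sql
-- Sketch only: payload shape and columns other than recorded_by_user_id assumed.
create or replace function insert_bulk_activities(rows jsonb)
returns integer
language plpgsql
security invoker
as $$
declare
  inserted integer := 0;
  r jsonb;
begin
  for r in select * from jsonb_array_elements(rows) loop
    insert into activities (user_id, recorded_by_user_id, activity_type, occurred_at)
    values (
      (r->>'user_id')::uuid,
      auth.uid(),                        -- the coordinator recording by proxy
      r->>'activity_type',
      (r->>'occurred_at')::timestamptz
    );
    inserted := inserted + 1;
  end loop;
  return inserted;
exception
  when others then
    -- e.g. a unique-constraint violation on row 7 of 12: re-raising here
    -- aborts the whole call, so rows 1-6 are rolled back as required.
    raise exception 'insert_bulk_activities failed after % rows: %',
      inserted, sqlerrm;
end;
$$;
```

This is exactly the shape the mitigation's integration test should exercise: submit a batch with one constraint-violating row and assert that zero rows are committed.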