Build Speech-to-Text Adapter with Permission Handling
epic-structured-post-session-report-services-task-004 — Wrap Flutter's speech_to_text package into a clean adapter exposing start, stop, pause, and cancel controls. Implement microphone permission request flow with graceful denial handling, locale detection for Norwegian/English, lifecycle hooks to release the audio session on app background, and structured error recovery for no-speech and permission-denied states. This adapter is shared infrastructure usable by any feature requiring voice input.
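The lifecycle requirement above (release the audio session when the app is backgrounded) could be sketched with a WidgetsBindingObserver. This is illustrative only: the class name and the stop callback wiring are assumptions, not part of the spec.

```dart
import 'package:flutter/widgets.dart';

/// Illustrative sketch: stops recognition when the app is backgrounded,
/// releasing the audio session. Name and wiring are hypothetical.
class SpeechLifecycleObserver with WidgetsBindingObserver {
  SpeechLifecycleObserver(this._stopRecognition);

  /// Callback supplied by the adapter, e.g. adapter.stop.
  final Future<void> Function() _stopRecognition;

  void attach() => WidgetsBinding.instance.addObserver(this);
  void detach() => WidgetsBinding.instance.removeObserver(this);

  @override
  void didChangeAppLifecycleState(AppLifecycleState state) {
    if (state == AppLifecycleState.paused) {
      // App moved to background: stop listening and free the microphone.
      _stopRecognition();
    }
  }
}
```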
Acceptance Criteria
Technical Requirements
Implementation Notes
Define SpeechToTextAdapter as an abstract class and provide FlutterSpeechToTextAdapter as the concrete implementation. This allows tests to inject a fake adapter and allows future swapping of the underlying plugin. Use a broadcast StreamController to emit interim and final recognition results, so multiple listeners can subscribe to the transcript stream.
The Riverpod provider for this adapter should be a Provider exposing the abstract SpeechToTextAdapter type, so consuming features depend on the interface rather than the Flutter implementation and tests can override it with a fake.
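A minimal sketch of the adapter and provider described above. The result type, field names, and the pause behaviour are assumptions (the speech_to_text plugin has no native pause API); only start/stop/cancel map directly onto plugin calls.

```dart
import 'dart:async';

import 'package:flutter_riverpod/flutter_riverpod.dart';
import 'package:speech_to_text/speech_to_text.dart';

/// Hypothetical result type; field names are illustrative.
class SpeechResult {
  const SpeechResult({required this.text, required this.isFinal});
  final String text;
  final bool isFinal;
}

class SpeechPermissionException implements Exception {
  const SpeechPermissionException(this.message);
  final String message;
}

/// Abstract surface the rest of the app depends on.
abstract class SpeechToTextAdapter {
  Stream<SpeechResult> get results;
  Future<void> start({String? localeId});
  Future<void> stop();
  Future<void> pause();
  Future<void> cancel();
  Future<void> dispose();
}

/// Concrete implementation wrapping the speech_to_text plugin.
class FlutterSpeechToTextAdapter implements SpeechToTextAdapter {
  FlutterSpeechToTextAdapter(this._plugin);

  final SpeechToText _plugin;
  final _controller = StreamController<SpeechResult>.broadcast();

  @override
  Stream<SpeechResult> get results => _controller.stream;

  @override
  Future<void> start({String? localeId}) async {
    // initialize() triggers the microphone permission prompt and reports
    // availability; false covers the permission-denied case.
    final available = await _plugin.initialize();
    if (!available) {
      throw const SpeechPermissionException('Microphone permission denied');
    }
    await _plugin.listen(
      localeId: localeId,
      onResult: (r) => _controller.add(
        SpeechResult(text: r.recognizedWords, isFinal: r.finalResult),
      ),
    );
  }

  @override
  Future<void> stop() => _plugin.stop();

  // Assumption: the plugin has no pause, so pause stops the session and
  // leaves resume semantics to the caller.
  @override
  Future<void> pause() => _plugin.stop();

  @override
  Future<void> cancel() => _plugin.cancel();

  @override
  Future<void> dispose() => _controller.close();
}

/// Expose the abstract type so tests can override with a fake.
final speechToTextAdapterProvider = Provider<SpeechToTextAdapter>(
  (ref) => FlutterSpeechToTextAdapter(SpeechToText()),
);
```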
Testing Requirements
Write unit tests using flutter_test with a MockSpeechToText class that simulates the speech_to_text plugin interface. Test matrix:
1. requestPermission granted → start() succeeds
2. requestPermission denied → start() throws SpeechPermissionException
3. locale nb_NO selected when the device locale is Norwegian
4. locale en_US selected as the fallback for non-Norwegian locales
5. caller-specified locale overrides detection
6. 5-second silence → noSpeechDetected SpeechError emitted and recording stops
7. AppLifecycleState.paused → stop() called automatically
8. interim results streamed correctly
9. final result emitted with isFinal=true
Integration test: on a physical iOS device, confirm the permission dialog appears, that granting it allows recording, and that the transcript appears in the stream.
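Case 2 of the matrix could look like the sketch below, assuming a mocktail-based MockSpeechToText and an adapter whose start() calls the plugin's initialize() and throws SpeechPermissionException when it returns false. Class names follow the Implementation Notes; the stubbing details are assumptions.

```dart
import 'package:flutter_test/flutter_test.dart';
import 'package:mocktail/mocktail.dart';
import 'package:speech_to_text/speech_to_text.dart';

class MockSpeechToText extends Mock implements SpeechToText {}

void main() {
  test('start() throws SpeechPermissionException when permission is denied',
      () async {
    final plugin = MockSpeechToText();
    // Simulate the user denying the microphone permission:
    // initialize() reports the recognizer as unavailable.
    when(() => plugin.initialize()).thenAnswer((_) async => false);

    final adapter = FlutterSpeechToTextAdapter(plugin);

    await expectLater(
      adapter.start(),
      throwsA(isA<SpeechPermissionException>()),
    );
  });
}
```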
Flutter's speech_to_text package behaviour differs meaningfully between iOS and Android — microphone permission flows, locale availability, background audio session interference, and partial-result timing all vary. Inconsistent behaviour could make voice input unreliable for the primary audience (visually impaired peer mentors on iOS VoiceOver).
Mitigation & Contingency
Mitigation: Test speech-to-text-adapter on physical iOS and Android devices from the start, not just simulators. Write platform-specific test cases for permission flows and locale detection. Design the adapter's public interface to be platform-agnostic so that a native bridge could replace the package if needed.
Contingency: If speech_to_text proves unreliable on a platform, implement a native-speech-api-bridge (already identified in the component catalogue) as a drop-in replacement within the adapter, keeping the external interface unchanged so no UI code needs to change.
The coordinator task queue notification mechanism is not fully specified. If the queue system is owned by another team or uses an external service, way-forward-task-service may block on an undefined integration contract, delaying this epic.
Mitigation & Contingency
Mitigation: Define the task queue notification interface as an abstract Dart interface early in the epic. Implement a stub that writes a flag to the database so coordinator list queries can detect new tasks, deferring the real notification integration to a later epic.
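The abstract interface plus database-flag stub described in the mitigation might look like the following. The interface name, flag column, and the sqflite-style Database handle are all assumptions for illustration.

```dart
import 'package:sqflite/sqflite.dart';

/// Abstract notification contract, defined early so the real queue
/// integration can be swapped in during a later epic.
abstract class TaskQueueNotifier {
  Future<void> notifyNewTask(String taskId);
}

/// Stub: writes a flag so coordinator list queries can detect new tasks
/// on their next read, without any push delivery.
class DatabaseFlagTaskQueueNotifier implements TaskQueueNotifier {
  DatabaseFlagTaskQueueNotifier(this._db);

  final Database _db; // hypothetical local database handle

  @override
  Future<void> notifyNewTask(String taskId) => _db.update(
        'tasks',
        {'needs_coordinator_review': 1},
        where: 'id = ?',
        whereArgs: [taskId],
      );
}
```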
Contingency: If the queue integration remains undefined at implementation time, ship way-forward-task-service with database persistence only and add a TODO-flagged notification hook. Coordinators will still see items on next page load; push notification delivery is deferred.