Implement start and stop listening session lifecycle
epic-speech-to-text-input-speech-engine-task-004 — Implement startListening() and stopListening() methods that enforce explicit, user-command-only activation (no auto-start and no voice activation). startListening() transitions internal state to recording, opens a StreamController for domain events, and delegates to NativeSpeechApiBridge. stopListening() finalizes the session, emits a final FinalTranscriptionEvent, and then closes the stream cleanly. Guard against concurrent session attempts.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
Use a sealed class hierarchy for SpeechRecognitionEvent (PartialTranscriptionEvent, FinalTranscriptionEvent, SpeechRecognitionErrorEvent) so callers can rely on exhaustive switch statements. Model session state as a private enum _SpeechSessionState {idle, recording, stopping} and guard all transitions with a single synchronous check-then-set pattern. Dart is single-threaded per isolate, but async gaps can still cause re-entrancy; use a bool _operationInProgress guard to cover the async gap between the startListening() call and bridge delegation. Use StreamController.broadcast() so multiple BLoC listeners can subscribe. Do NOT call StreamController.sink.add() after close(); add an _isClosed guard (or check the controller's isClosed property).
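The event hierarchy from the notes above can be sketched as follows. Field names such as `text` and `message` are assumptions, and Dart 3 (sealed classes, switch expressions) is assumed available:

```dart
sealed class SpeechRecognitionEvent {}

class PartialTranscriptionEvent extends SpeechRecognitionEvent {
  PartialTranscriptionEvent(this.text);
  final String text; // in-progress hypothesis, may be revised
}

class FinalTranscriptionEvent extends SpeechRecognitionEvent {
  FinalTranscriptionEvent(this.text);
  final String text; // committed transcript for the session
}

class SpeechRecognitionErrorEvent extends SpeechRecognitionEvent {
  SpeechRecognitionErrorEvent(this.message);
  final String message;
}

// Because the supertype is sealed, the compiler rejects a non-exhaustive
// switch, so adding a fourth event type fails every caller loudly.
String describe(SpeechRecognitionEvent event) => switch (event) {
      PartialTranscriptionEvent(:final text) => 'partial: $text',
      FinalTranscriptionEvent(:final text) => 'final: $text',
      SpeechRecognitionErrorEvent(:final message) => 'error: $message',
    };
```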
Prefer the addStream() pattern for bridge callbacks over manual event forwarding, to reduce coupling. The Norwegian locale ('nb-NO') must be passed to the bridge at session start.
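Putting the notes together, one possible shape for the lifecycle is sketched below. The NativeSpeechApiBridge surface shown here (start(locale:) returning an event stream that closes after stop(), and stop() returning the final transcript) is an assumption for illustration, as is StubBridge; the event classes are minimal versions of those named in the notes:

```dart
import 'dart:async';

sealed class SpeechRecognitionEvent {}

class FinalTranscriptionEvent extends SpeechRecognitionEvent {
  FinalTranscriptionEvent(this.text);
  final String text;
}

class ConcurrentSessionException implements Exception {}

/// Assumed bridge surface: the event stream closes once stop() is called.
abstract class NativeSpeechApiBridge {
  Stream<SpeechRecognitionEvent> start({required String locale});
  Future<String> stop();
}

enum _SpeechSessionState { idle, recording, stopping }

class SpeechRecognitionService {
  SpeechRecognitionService(this._bridge);

  final NativeSpeechApiBridge _bridge;
  var _state = _SpeechSessionState.idle;
  bool _operationInProgress = false;
  StreamController<SpeechRecognitionEvent>? _controller;
  Future<void>? _forwarding;

  Stream<SpeechRecognitionEvent> startListening() {
    // Synchronous check-then-set: no await between the check and the state
    // change, so a re-entrant call during the bridge handoff is rejected.
    if (_state != _SpeechSessionState.idle || _operationInProgress) {
      throw ConcurrentSessionException();
    }
    _operationInProgress = true;
    _state = _SpeechSessionState.recording;

    // broadcast() lets several BLoC listeners observe one session.
    final controller = StreamController<SpeechRecognitionEvent>.broadcast();
    _controller = controller;

    // addStream() forwards bridge events without manual per-event plumbing.
    _forwarding = controller.addStream(_bridge.start(locale: 'nb-NO'));
    // In a fuller implementation the flag also covers any await before
    // delegation; here delegation is synchronous, so it clears immediately.
    _operationInProgress = false;
    return controller.stream;
  }

  Future<void> stopListening() async {
    if (_state != _SpeechSessionState.recording) return; // no-op when idle
    _state = _SpeechSessionState.stopping;
    final transcript = await _bridge.stop();
    await _forwarding; // let addStream() finish and release the controller
    final controller = _controller!;
    if (!controller.isClosed) {
      controller.add(FinalTranscriptionEvent(transcript)); // before close
      await controller.close();
    }
    _state = _SpeechSessionState.idle;
  }
}

/// Trivial stub used for illustration only.
class StubBridge implements NativeSpeechApiBridge {
  final _events = StreamController<SpeechRecognitionEvent>();

  @override
  Stream<SpeechRecognitionEvent> start({required String locale}) =>
      _events.stream;

  @override
  Future<String> stop() async {
    await _events.close(); // mimics the native engine ending its stream
    return 'hei verden';
  }
}
```

Note the ordering in stopListening(): a controller rejects add() while addStream() is still active, so the sketch awaits the bridge's stream closing before emitting the FinalTranscriptionEvent and closing the controller.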
Testing Requirements
Unit tests (flutter_test + mockito): verify state machine transitions for all valid and invalid startListening/stopListening call sequences; verify ConcurrentSessionException on double-start; verify stopListening no-op when idle; verify FinalTranscriptionEvent emitted before stream close; verify NativeSpeechApiBridge called in correct order. Mock NativeSpeechApiBridge entirely — no platform channels. Achieve 100% branch coverage on the session state machine.
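A sketch of two of the required cases. For brevity this uses package:test and a hand-rolled SessionGuard as a hypothetical stand-in for the session state machine; the real suite should use flutter_test and a mockito mock of NativeSpeechApiBridge as specified above:

```dart
import 'package:test/test.dart';

class ConcurrentSessionException implements Exception {}

/// Hypothetical stand-in for the session state machine under test.
class SessionGuard {
  bool _active = false;

  void start() {
    if (_active) throw ConcurrentSessionException(); // double-start rejected
    _active = true;
  }

  /// Returns false when called while idle (the no-op case).
  bool stop() {
    if (!_active) return false;
    _active = false;
    return true;
  }
}

void main() {
  test('double start throws ConcurrentSessionException', () {
    final guard = SessionGuard()..start();
    expect(guard.start, throwsA(isA<ConcurrentSessionException>()));
  });

  test('stopListening is a no-op when idle', () {
    expect(SessionGuard().stop(), isFalse);
  });
}
```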
The speech_to_text Flutter package delegates accuracy entirely to the OS-native engine. Norwegian accuracy for domain-specific vocabulary (medical terms, organisation names, accessibility terminology) may fall below the 85% acceptance threshold on older devices or in noisy environments, causing user frustration and manual correction overhead that negates the time saving.
Mitigation & Contingency
Mitigation: Configure the SpeechRecognitionService with Norwegian as the explicit locale and test against a representative corpus of peer-mentoring vocabulary on target devices. Expose locale switching so users can switch between Bokmål and Nynorsk. Clearly set user expectations in the UI that transcription is a starting point for editing, not a finished product.
Contingency: If accuracy is consistently below threshold on specific device/OS combinations, add a device-capability check that hides the dictation button with an explanatory message rather than offering a degraded experience. Document affected device models for QA and org contacts.
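The device-capability check described above could be as simple as a blocklist consulted before rendering the dictation button. The model identifiers and helper name below are purely illustrative:

```dart
/// Hypothetical blocklist of device models whose Norwegian recognition
/// accuracy fell below the 85% threshold during QA.
const dictationBlocklist = {'vendor-model-a', 'vendor-model-b'};

/// Gate for showing the dictation button; when this returns false, the UI
/// should show the explanatory message instead of a degraded experience.
bool isDictationSupported(String deviceModel) =>
    !dictationBlocklist.contains(deviceModel.toLowerCase());
```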
The speech_to_text Flutter package is a third-party dependency that may introduce breaking API changes or deprecations on major version upgrades, requiring rework of SpeechRecognitionService when Flutter or platform OS versions are updated.
Mitigation & Contingency
Mitigation: Wrap all speech_to_text API calls behind the SpeechRecognitionService interface so that package changes are isolated to one file. Pin the package version in pubspec.yaml and review changelogs before any upgrade. Write integration tests that exercise the package contract so regressions are caught immediately.
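The version pin mentioned above looks like this in pubspec.yaml; the version number here is illustrative, so use whichever release has actually been changelog-reviewed:

```yaml
dependencies:
  # Exact pin, no caret range: upgrades are deliberate, reviewed events.
  speech_to_text: 6.6.0
```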
Contingency: If the package is abandoned or has unresolvable issues, NativeSpeechApiBridge already provides the platform-channel abstraction needed to implement a direct plugin replacement with minimal changes to SpeechRecognitionService.