Implement comprehensive error handling and recovery
epic-speech-to-text-input-speech-engine-task-006 — Implement error handling for all failure scenarios: permission denied, engine not available, network timeout (on-device vs cloud engine), no speech detected timeout, and platform exceptions from NativeSpeechApiBridge. Map each to a typed SpeechRecognitionErrorEvent with an error code enum. Implement automatic session cleanup on error so the service returns to a safe idle state without requiring manual dispose.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 5 - 253 tasks
Can start after Tier 4 completes
Implementation Notes
Define SpeechErrorCode as an enhanced enum with recoverable/message members (Dart enums cannot be sealed; use a sealed class hierarchy only if richer per-error state is needed). In the bridge onStatus callback, map the speech_to_text status strings ('notListening', 'error') and the onError callback (SpeechRecognitionError) to the typed enum. The speech_to_text package provides SpeechRecognitionError with errorMsg — map 'error_no_match' → noSpeechDetected, 'error_speech_timeout' → noSpeechDetected, 'error_network' → networkTimeout, 'error_network_timeout' → networkTimeout, 'error_audio' → engineUnavailable, 'error_server' → networkTimeout, 'error_client' → platformException. Wrap the entire bridge delegation in startListening() in a try/catch to catch synchronous platform exceptions.
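The enum and mapping above could be sketched as follows; the messageKey values and mapErrorMsg helper name are illustrative assumptions, while the errorMsg strings come from the task description:

```dart
/// Sketch of the typed error code, assuming an enhanced Dart enum.
/// The localizable messageKey values are placeholders, not final keys.
enum SpeechErrorCode {
  permissionDenied(recoverable: false, messageKey: 'speech_error_permission'),
  engineUnavailable(recoverable: false, messageKey: 'speech_error_engine'),
  networkTimeout(recoverable: true, messageKey: 'speech_error_network'),
  noSpeechDetected(recoverable: true, messageKey: 'speech_error_no_speech'),
  platformException(recoverable: false, messageKey: 'speech_error_platform');

  const SpeechErrorCode({required this.recoverable, required this.messageKey});
  final bool recoverable;
  final String messageKey;
}

/// Maps the raw errorMsg from speech_to_text's SpeechRecognitionError
/// to the typed enum, following the mapping table above.
SpeechErrorCode mapErrorMsg(String errorMsg) {
  switch (errorMsg) {
    case 'error_no_match':
    case 'error_speech_timeout':
      return SpeechErrorCode.noSpeechDetected;
    case 'error_network':
    case 'error_network_timeout':
    case 'error_server':
      return SpeechErrorCode.networkTimeout;
    case 'error_audio':
      return SpeechErrorCode.engineUnavailable;
    case 'error_client':
    default:
      return SpeechErrorCode.platformException;
  }
}
```

Unrecognized errorMsg strings fall through to platformException so no bridge error can escape the mapping untyped.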
Implement _handleError(SpeechErrorCode code, {bool recoverable}) as a private method that: (1) emits SpeechRecognitionErrorEvent, (2) calls _cleanupSession(), (3) closes the event stream controller so listeners see a clean end-of-stream. For Blindeforbundet and HLF users with visual or hearing impairments, clear Norwegian error messages are essential — store localizable error message keys in the enum.
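A minimal sketch of that error path; the _events controller, _cleanupSession(), and SpeechRecognitionErrorEvent names are assumed service internals, not confirmed API:

```dart
/// Sketch of the private error handler inside SpeechRecognitionService.
/// Field and event names here are assumptions for illustration.
void _handleError(SpeechErrorCode code, {required bool recoverable}) {
  // Guard: a second bridge error arriving after shutdown is ignored,
  // which is exactly the 'double error' scenario the tests cover.
  if (_events.isClosed) return;
  _events.add(SpeechRecognitionErrorEvent(code: code, recoverable: recoverable));
  _cleanupSession(); // cancel timers, release the bridge session → idle
  _events.close();   // no further events after an error
}
```

Putting the isClosed guard first means every error path, including duplicates, returns the service to a safe idle state without manual dispose.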
Testing Requirements
Unit tests (flutter_test + mockito): one test per SpeechErrorCode path; verify state returns to idle after each error; verify stream closes after error event; verify startListening() works correctly after error recovery; verify recoverable flag accuracy per error type; verify no exception propagates from any error path. Test the 'double error' scenario (bridge fires two error callbacks) — second must be ignored after stream close. 100% branch coverage on error mapping.
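The double-error case above could be exercised roughly like this; MockNativeSpeechApiBridge, fireError, and the service wiring are assumed test doubles, not existing code:

```dart
// Sketch with flutter_test + mockito, assuming a mock bridge that can
// replay raw errorMsg strings into the service under test.
test('second bridge error after stream close is ignored', () async {
  final bridge = MockNativeSpeechApiBridge();
  final service = SpeechRecognitionService(bridge: bridge);
  final events = <SpeechRecognitionEvent>[];
  service.events.listen(events.add);

  await service.startListening();
  bridge.fireError('error_network'); // first error: event, cleanup, close
  bridge.fireError('error_network'); // second error: must be a no-op
  await pumpEventQueue();

  expect(events.whereType<SpeechRecognitionErrorEvent>(), hasLength(1));
  expect(service.state, SpeechRecognitionState.idle);
});
```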
The speech_to_text Flutter package delegates accuracy entirely to the OS-native engine. Norwegian accuracy for domain-specific vocabulary (medical terms, organisation names, accessibility terminology) may fall below the 85% acceptance threshold on older devices or in noisy environments, causing user frustration and manual correction overhead that negates the time saving.
Mitigation & Contingency
Mitigation: Configure the SpeechRecognitionService with Norwegian as the explicit locale and test against a representative corpus of peer mentoring vocabulary on target devices. Expose locale switching so users can fall back between Bokmål and Nynorsk. Clearly set user expectations in the UI that transcription is a starting point for editing, not a finished product.
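Locale selection could look roughly like this with the speech_to_text API; available locale IDs vary by device engine, so the prefix filter and fallback behaviour here are assumptions to verify on target hardware:

```dart
import 'package:speech_to_text/speech_to_text.dart';

/// Sketch: probe the device for Norwegian locales rather than
/// hard-coding one. nb = Bokmål, nn = Nynorsk.
Future<void> startNorwegianDictation() async {
  final stt = SpeechToText();
  if (!await stt.initialize()) return; // engine unavailable or denied
  final locales = await stt.locales();
  final norwegian = locales
      .where((l) => l.localeId.startsWith('nb') || l.localeId.startsWith('nn'))
      .toList();
  await stt.listen(
    // Fall back to the engine default if no Norwegian locale is installed.
    localeId: norwegian.isNotEmpty ? norwegian.first.localeId : null,
    onResult: (result) => print(result.recognizedWords),
  );
}
```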
Contingency: If accuracy is consistently below threshold on specific device/OS combinations, add a device-capability check that hides the dictation button with an explanatory message rather than offering a degraded experience. Document affected device models for QA and org contacts.
The speech_to_text Flutter package is a third-party dependency that may introduce breaking API changes or deprecations on major version upgrades, requiring rework of SpeechRecognitionService when Flutter or platform OS versions are updated.
Mitigation & Contingency
Mitigation: Wrap all speech_to_text API calls behind the SpeechRecognitionService interface so that package changes are isolated to one file. Pin the package version in pubspec.yaml and review changelogs before any upgrade. Write integration tests that exercise the package contract so regressions are caught immediately.
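The isolation boundary described above amounts to a narrow contract; the method shapes below are an illustrative sketch, not the existing NativeSpeechApiBridge signatures:

```dart
/// Sketch: SpeechRecognitionService depends only on this contract, so a
/// speech_to_text upgrade or replacement touches a single adapter file.
abstract class SpeechBridgeContract {
  Future<bool> initialize({void Function(String status)? onStatus});
  Future<void> startListening({
    String? localeId,
    required void Function(String words, bool isFinal) onResult,
    required void Function(String errorMsg) onError,
  });
  Future<void> stopListening();
}
```

Integration tests written against this contract double as the "package contract" regression suite: run them once against the speech_to_text adapter and again against any replacement.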
Contingency: If the package is abandoned or has unresolvable issues, NativeSpeechApiBridge already provides the platform-channel abstraction needed to implement a direct plugin replacement with minimal changes to SpeechRecognitionService.