Priority: critical | Complexity: high | Area: backend | Status: pending | Assignee: backend specialist | Tier 5

Acceptance Criteria

All error scenarios map to SpeechRecognitionErrorEvent with a SpeechErrorCode enum value — no raw platform exceptions leak to callers
SpeechErrorCode covers at minimum: permissionDenied, engineUnavailable, networkTimeout, noSpeechDetected, platformException, unknownError
After any error, internal session state returns to idle without requiring caller to call dispose() or stopListening()
After error recovery, a subsequent startListening() call succeeds normally (service is fully reusable)
SpeechRecognitionErrorEvent is emitted on the event stream (not thrown as exception) so BLoC stream handlers receive it
Stream is closed cleanly after emitting the error event — no dangling open streams after errors
networkTimeout error only fires when the engine selected is cloud-based — on-device engine produces engineUnavailable instead
noSpeechDetected fires after the platform's silence timeout (typically 5–10s depending on OS) — not prematurely
permissionDenied error includes a SpeechRecognitionErrorEvent.recoverable=true flag so UI can prompt for permission settings
Platform exceptions from NativeSpeechApiBridge are caught and wrapped — they never propagate as unhandled exceptions

Technical Requirements

frameworks
Flutter
speech_to_text package
Dart error handling
apis
SpeechToText.listen() with onSoundLevelChange; onStatus and onError callbacks registered via SpeechToText.initialize()
NativeSpeechApiBridge error propagation
Permission handler package
data models
SpeechRecognitionErrorEvent {errorCode: SpeechErrorCode, message: String, recoverable: bool, timestamp: DateTime}
SpeechErrorCode (enum)
_SpeechSessionState
performance requirements
Error detection and stream emission must complete within 100ms of platform error callback
Session cleanup (state reset, stream close) must complete synchronously after error event emission
security requirements
Error messages must not include raw platform error strings that could expose system internals in production logs
Permission denial must not retry automatically — user must explicitly re-trigger
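The data model above can be sketched as a Dart enhanced enum plus an immutable event class. This is an illustrative sketch only: the field names follow the spec, while the constructor shape and the defaulted timestamp are assumptions.

```dart
/// Sketch of the SpeechErrorCode enum and SpeechRecognitionErrorEvent model
/// from the data models list. Constructor details are assumptions.
enum SpeechErrorCode {
  permissionDenied,
  engineUnavailable,
  networkTimeout,
  noSpeechDetected,
  platformException,
  unknownError,
}

class SpeechRecognitionErrorEvent {
  final SpeechErrorCode errorCode;
  final String message;
  final bool recoverable;
  final DateTime timestamp;

  SpeechRecognitionErrorEvent({
    required this.errorCode,
    required this.message,
    required this.recoverable,
    DateTime? timestamp,
  }) : timestamp = timestamp ?? DateTime.now();
}
```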

Execution Context

Execution Tier
Tier 5

Tier 5 - 253 tasks

Can start after Tier 4 completes

Implementation Notes

Define SpeechErrorCode as an enhanced enum (or a plain enum with extension methods for recoverable/message — Dart has no "sealed enum"). In the bridge onStatus callback, map the speech_to_text status strings ('notListening', 'error') and the onError callback payload (SpeechRecognitionError) to the typed enum. The speech_to_text package provides SpeechRecognitionError with errorMsg — map 'error_no_match' → noSpeechDetected, 'error_speech_timeout' → noSpeechDetected, 'error_network' → networkTimeout, 'error_network_timeout' → networkTimeout, 'error_audio' → engineUnavailable, 'error_server' → networkTimeout, 'error_client' → platformException. Wrap the entire bridge delegation in startListening() in a try/catch to catch synchronous platform exceptions.
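The errorMsg mapping above can be expressed as a single switch. The platform strings are taken directly from the note; the helper name _mapPlatformError is an assumption.

```dart
// Sketch of the errorMsg → SpeechErrorCode mapping described above.
// Unrecognized strings fall through to unknownError, satisfying the
// "no raw platform exceptions leak to callers" acceptance criterion.
SpeechErrorCode _mapPlatformError(String errorMsg) {
  switch (errorMsg) {
    case 'error_no_match':
    case 'error_speech_timeout':
      return SpeechErrorCode.noSpeechDetected;
    case 'error_network':
    case 'error_network_timeout':
    case 'error_server':
      return SpeechErrorCode.networkTimeout;
    case 'error_audio':
      return SpeechErrorCode.engineUnavailable;
    case 'error_client':
      return SpeechErrorCode.platformException;
    default:
      return SpeechErrorCode.unknownError;
  }
}
```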

Implement _handleError(SpeechErrorCode code, {bool recoverable}) as a private method that: (1) emits SpeechRecognitionErrorEvent, (2) calls _cleanupSession(), (3) closes the stream controller. For Blindeforbundet and HLF users with visual or hearing impairments, clear Norwegian error messages are essential — store localizable error message keys in the enum.
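The three-step _handleError flow can be sketched as follows. _controller and _cleanupSession() are assumed names; the isClosed guard also covers the 'double error' scenario from the testing requirements.

```dart
// Sketch of _handleError per the note: emit event, clean up session,
// close the stream. All private member names are assumptions.
void _handleError(SpeechErrorCode code, {required bool recoverable}) {
  if (_controller.isClosed) return; // ignore a second error callback after close
  _controller.add(SpeechRecognitionErrorEvent(
    errorCode: code,
    message: code.name, // production code maps this to a localizable key
    recoverable: recoverable,
  ));
  _cleanupSession(); // synchronously resets session state to idle
  _controller.close(); // no dangling open streams after an error
}
```

Emitting before closing matters: BLoC stream handlers must receive the event, and the close must follow synchronously per the performance requirements.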

Testing Requirements

Unit tests (flutter_test + mockito): one test per SpeechErrorCode path; verify state returns to idle after each error; verify stream closes after error event; verify startListening() works correctly after error recovery; verify recoverable flag accuracy per error type; verify no exception propagates from any error path. Test the 'double error' scenario (bridge fires two error callbacks) — second must be ignored after stream close. 100% branch coverage on error mapping.
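The 'double error' test above might look like the following flutter_test sketch. Service construction and bridge mocking are elided; service, bridge, fireError, errorStream, and the state accessor are all assumed names.

```dart
// Sketch only: asserts that a second bridge error callback after stream
// close is ignored, and that the session returns to idle.
test('second error callback after stream close is ignored', () async {
  final events = <SpeechRecognitionErrorEvent>[];
  service.errorStream.listen(events.add);

  bridge.fireError('error_network');
  bridge.fireError('error_network'); // must be a no-op after close

  await pumpEventQueue();
  expect(events, hasLength(1));
  expect(service.isIdle, isTrue); // hypothetical state accessor
});
```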

Component
Speech Recognition Service
service high
Epic Risks (2)
Impact: medium | Probability: medium | Category: technical

The speech_to_text Flutter package delegates accuracy entirely to the OS-native engine. Norwegian accuracy for domain-specific vocabulary (medical terms, organisation names, accessibility terminology) may fall below the 85% acceptance threshold on older devices or in noisy environments, causing user frustration and manual correction overhead that negates the time saving.

Mitigation & Contingency

Mitigation: Configure the SpeechRecognitionService with Norwegian as the explicit locale and test against a representative corpus of peer mentoring vocabulary on target devices. Expose locale switching so users can switch between Bokmål and Nynorsk. Clearly set user expectations in the UI that transcription is a starting point for editing, not a finished product.

Contingency: If accuracy is consistently below threshold on specific device/OS combinations, add a device-capability check that hides the dictation button with an explanatory message rather than offering a degraded experience. Document affected device models for QA and org contacts.

Impact: medium | Probability: low | Category: dependency

The speech_to_text Flutter package is a third-party dependency that may introduce breaking API changes or deprecations on major version upgrades, requiring rework of SpeechRecognitionService when Flutter or platform OS versions are updated.

Mitigation & Contingency

Mitigation: Wrap all speech_to_text API calls behind the SpeechRecognitionService interface so that package changes are isolated to one file. Pin the package version in pubspec.yaml and review changelogs before any upgrade. Write integration tests that exercise the package contract so regressions are caught immediately.

Contingency: If the package is abandoned or has unresolvable issues, NativeSpeechApiBridge already provides the platform-channel abstraction needed to implement a direct plugin replacement with minimal changes to SpeechRecognitionService.