Write integration tests with mocked platform responses
epic-speech-to-text-input-speech-engine-task-009 — Write integration tests using flutter_test that exercise SpeechRecognitionService end-to-end against a fake NativeSpeechApiBridge that simulates realistic platform response sequences: successful Norwegian transcription, mid-session network drop, engine unavailable on cold start, and back-to-back session requests. Validate event ordering and stream completion.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 8 - 48 tasks
Can start after Tier 7 completes
Implementation Notes
The key difference from unit tests (task-008) is that FakeNativeSpeechApiBridge here is a fully implemented test double (not a mockito mock) that drives callback sequences asynchronously. Implement FakeNativeSpeechApiBridge with a configureScenario(FakeSessionScenario) method. FakeSessionScenario is a simple data class: {List
The bridge fires callbacks through a chain of microtasks, e.g. Future.microtask(() => _onPartialResult(partials[0])).then((_) => Future.microtask(() => _onPartialResult(partials[1]))), and so on for each scripted event. For the back-to-back session test: after the first session's stream closes, call service.startListening() again and verify that the second session's stream is a fresh Stream with its own events, confirming that no events from session 1 appear on session 2's stream. The back-to-back test is the most important for real-world use: Blindeforbundet users may start multiple sessions per home visit report, and HLF users may dictate multiple paragraphs separately.
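The fake bridge and scenario class described above might be sketched as follows. This is a minimal illustration, not the project's actual code: the field names on FakeSessionScenario (partials, finalResult, error) are assumptions, since the original spec cuts off after "{List", and the callback names mirror the _onPartialResult-style hooks mentioned in the notes. A for-loop awaiting one Future.microtask per event is used as an idiomatic equivalent of the .then() chain.

```dart
// Hypothetical sketch of the hand-written test double. Field and
// callback names are assumptions, not the project's actual API.
class FakeSessionScenario {
  const FakeSessionScenario({
    this.partials = const [],
    this.finalResult,
    this.error,
  });

  final List<String> partials; // partial transcriptions, in order
  final String? finalResult;   // final transcription if the session succeeds
  final Object? error;         // e.g. a simulated mid-session network drop
}

class FakeNativeSpeechApiBridge {
  FakeSessionScenario? _scenario;

  // Callbacks that the service under test registers, mirroring the
  // real bridge's platform-channel callback surface.
  void Function(String)? onPartialResult;
  void Function(String)? onFinalResult;
  void Function(Object)? onError;

  void configureScenario(FakeSessionScenario scenario) {
    _scenario = scenario;
  }

  // Drives the configured callback sequence asynchronously, one
  // microtask per event, so ordering resembles real platform timing.
  Future<void> startListening() async {
    final scenario = _scenario!;
    for (final partial in scenario.partials) {
      await Future<void>.microtask(() => onPartialResult?.call(partial));
    }
    if (scenario.error != null) {
      await Future<void>.microtask(() => onError?.call(scenario.error!));
    } else if (scenario.finalResult != null) {
      await Future<void>.microtask(
          () => onFinalResult?.call(scenario.finalResult!));
    }
  }
}
```

Because every event is delivered in its own microtask, the service under test observes the same "register callbacks first, events arrive later" ordering it would see against the real platform channel.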
Testing Requirements
Integration tests in speech_recognition_service_integration_test.dart. Use a hand-written FakeNativeSpeechApiBridge that accepts a scenario configuration and fires callbacks in realistic async sequences using Future.microtask() chains. Use StreamQueue from the async package for step-by-step stream assertion. Scenarios must use realistic Norwegian text ('jeg trenger hjelp', 'kan du hjelpe meg') to validate encoding/locale handling.
Run as part of flutter test — no widget test environment needed (pure Dart layer). Verify stream isDone after each scenario using expectLater(stream, emitsDone).
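The step-by-step StreamQueue assertion style might look like the sketch below. The `bridge` and `service` fixtures, and the assumption that the service's stream emits plain Strings, are illustrative only; the FakeSessionScenario field names are likewise hypothetical, since the original spec is truncated.

```dart
// Sketch of step-wise stream assertion with StreamQueue. Fixture names
// (bridge, service) and event types are assumptions about the test setup.
import 'package:async/async.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('successful Norwegian transcription: partials, final, done', () async {
    bridge.configureScenario(const FakeSessionScenario(
      partials: ['jeg', 'jeg trenger', 'jeg trenger hjelp'],
      finalResult: 'jeg trenger hjelp',
    ));

    final stream = service.startListening();
    final queue = StreamQueue<String>(stream);

    // Assert each event individually rather than collecting the whole
    // stream, so an out-of-order event fails at the exact step.
    expect(await queue.next, 'jeg');
    expect(await queue.next, 'jeg trenger');
    expect(await queue.next, 'jeg trenger hjelp');
    expect(await queue.hasNext, isFalse); // stream completed
  });
}
```

Note that once a StreamQueue has consumed a single-subscription stream, the stream cannot also be passed to expectLater(stream, emitsDone); use queue.hasNext (as above) for completion checks in StreamQueue-based tests, and reserve emitsDone for scenarios asserted with matchers alone.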
The speech_to_text Flutter package delegates accuracy entirely to the OS-native engine. Norwegian accuracy for domain-specific vocabulary (medical terms, organisation names, accessibility terminology) may fall below the 85% acceptance threshold on older devices or in noisy environments, causing user frustration and manual correction overhead that negates the time saving.
Mitigation & Contingency
Mitigation: Configure the SpeechRecognitionService with Norwegian as the explicit locale and test against a representative corpus of peer mentoring vocabulary on target devices. Expose locale switching so users can switch between Bokmål and Nynorsk. Clearly set user expectations in the UI that transcription is a starting point for editing, not a finished product.
Contingency: If accuracy is consistently below threshold on specific device/OS combinations, add a device-capability check that hides the dictation button with an explanatory message rather than offering a degraded experience. Document affected device models for QA and org contacts.
The speech_to_text Flutter package is a third-party dependency that may introduce breaking API changes or deprecations on major version upgrades, requiring rework of SpeechRecognitionService when Flutter or platform OS versions are updated.
Mitigation & Contingency
Mitigation: Wrap all speech_to_text API calls behind the SpeechRecognitionService interface so that package changes are isolated to one file. Pin the package version in pubspec.yaml and review changelogs before any upgrade. Write integration tests that exercise the package contract so regressions are caught immediately.
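The isolation layer described above might be sketched like this. The interface shape, method names, and the nb_NO default locale are assumptions about the project, not its actual API; the speech_to_text calls shown (initialize, listen with onResult/localeId, stop, recognizedWords, finalResult) are the package's documented surface, so a breaking change in any of them is confined to this one file.

```dart
// Hypothetical sketch of SpeechRecognitionService as the sole boundary
// around the speech_to_text package. Names here are illustrative.
import 'dart:async';

import 'package:speech_to_text/speech_to_text.dart' as stt;

abstract class SpeechRecognitionService {
  Future<bool> initialize();
  Stream<String> startListening({String localeId = 'nb_NO'});
  Future<void> stopListening();
}

class SpeechToTextService implements SpeechRecognitionService {
  final stt.SpeechToText _speech = stt.SpeechToText();

  @override
  Future<bool> initialize() => _speech.initialize();

  @override
  Stream<String> startListening({String localeId = 'nb_NO'}) {
    // Adapt the package's callback API to a Stream so callers never
    // touch speech_to_text types directly.
    final controller = StreamController<String>();
    _speech.listen(
      localeId: localeId,
      onResult: (result) {
        controller.add(result.recognizedWords);
        if (result.finalResult) controller.close();
      },
    );
    return controller.stream;
  }

  @override
  Future<void> stopListening() => _speech.stop();
}
```

Because callers depend only on the abstract interface, both the mockito-based unit tests (task-008) and the FakeNativeSpeechApiBridge-driven integration tests here can substitute their own implementations without touching the package.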
Contingency: If the package is abandoned or has unresolvable issues, NativeSpeechApiBridge already provides the platform-channel abstraction needed to implement a direct plugin replacement with minimal changes to SpeechRecognitionService.