Priority: medium | Complexity: medium | Type: testing | Status: pending | Assignee: testing specialist | Tier 1

Acceptance Criteria

Test file exists at test/adapters/speech_to_text_adapter_test.dart
Permission grant flow: mock bridge returns 'granted' → adapter transitions to SpeechAdapterState.ready
Permission denied flow: mock bridge returns 'denied' → adapter emits SpeechAdapterError.permissionDenied and does not start recording
Permission permanently denied flow: mock bridge returns 'permanentlyDenied' → adapter emits SpeechAdapterError.permissionPermanentlyDenied with guidance to open app settings
Start recording: after permission granted, startRecording() transitions adapter to SpeechAdapterState.listening
Stop recording: stopRecording() transitions adapter to SpeechAdapterState.idle and returns final transcript
Partial transcript streaming: mock bridge emits 3 partial transcripts → adapter stream emits 3 intermediate SpeechResult.partial events before final
Locale switching: switching locale mid-session stops current session and restarts with new locale without losing accumulated transcript
App-background session release: when app lifecycle transitions to AppLifecycleState.paused, adapter automatically releases microphone and transitions to SpeechAdapterState.idle
No-speech timeout: mock bridge emits a timeout event after 30s → adapter emits SpeechAdapterError.noSpeechTimeout and transitions to SpeechAdapterState.idle
Error recovery: after any error state, calling startRecording() again (with permission granted) successfully restarts recording
No real microphone or OS speech API is invoked in any test — all platform channel calls are intercepted by mock bridge

Technical Requirements

Frameworks
Flutter
flutter_test

APIs
Native speech API bridge (mocked via a MethodChannel mock or a fake implementation)
Flutter app lifecycle (WidgetsBindingObserver)

Data Models
SpeechResult
SpeechAdapterState
SpeechAdapterError
SpeechLocale

Performance Requirements
All integration tests complete in under 10 seconds.
Streaming tests must not use real delays — use fakeAsync() (package:fake_async) to drive virtual time.

Security Requirements
Tests must confirm the adapter does NOT retain audio data after stopRecording() or session release.
Microphone access must only be active in SpeechAdapterState.listening — verify the adapter releases the microphone on all exit paths.

Execution Context

Execution Tier
Tier 1 (540 tasks)

Can start after Tier 0 completes.

Implementation Notes

The key challenge in this test suite is correctly mocking the MethodChannel without a real device. Use the pattern: final log = <MethodCall>[]; then register a handler that records each call and returns a canned response: (call) async { log.add(call); return mockResponses[call.method]; }. Register it via TestDefaultBinaryMessengerBinding.instance.defaultBinaryMessenger.setMockMethodCallHandler(channel, handler) rather than the deprecated MethodChannel.setMockMethodCallHandler. For partial transcript streaming, use a StreamController in the mock and advance it step by step within fakeAsync(). The app-background test requires the adapter to implement WidgetsBindingObserver — verify this is implemented before writing the test, as that observer is the mechanism for lifecycle detection.
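The mocking pattern above can be sketched as a self-contained test file. The channel name 'com.app/speech' comes from the Testing Requirements; the method names ('requestPermission', 'stopListening') and the mockResponses map are assumptions to be aligned with the real bridge contract.

```dart
import 'package:flutter/services.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  TestWidgetsFlutterBinding.ensureInitialized();

  const channel = MethodChannel('com.app/speech');
  final log = <MethodCall>[];
  // Canned responses per method name — hypothetical bridge contract.
  final mockResponses = <String, dynamic>{
    'requestPermission': 'granted',
    'startListening': null,
    'stopListening': 'final transcript',
  };

  setUp(() {
    TestDefaultBinaryMessengerBinding.instance.defaultBinaryMessenger
        .setMockMethodCallHandler(channel, (call) async {
      log.add(call); // record every platform call for later assertions
      return mockResponses[call.method];
    });
  });

  tearDown(() {
    // Remove the handler and clear the log so tests stay isolated.
    TestDefaultBinaryMessengerBinding.instance.defaultBinaryMessenger
        .setMockMethodCallHandler(channel, null);
    log.clear();
  });

  test('permission request is intercepted, no real OS API touched', () async {
    final result = await channel.invokeMethod<String>('requestPermission');
    expect(result, 'granted');
    expect(log.single.method, 'requestPermission');
  });
}
```

The log list doubles as the evidence for the "no real microphone or OS speech API" criterion: every call the adapter makes must appear there.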

For locale switching, confirm the adapter's contract: does it emit a final partial transcript before restarting, or discard it? Document this behaviour explicitly in a test description string. Important user context: Blindeforbundet and HLF require speech-to-text for post-session report writing (not during sessions) — tests should reflect this: recording is user-initiated post-session, never auto-started.
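One way to document the locale-switch contract is to state it in the test description string itself. A sketch, assuming the adapter preserves the accumulated transcript (per the acceptance criteria); SpeechToTextAdapter, switchLocale(), FakeSpeechBridge, and emitPartial() are all illustrative names, not a confirmed interface:

```dart
// All identifiers below are hypothetical — align with the real adapter.
test(
  'switchLocale: restarts the session with the new locale and KEEPS '
  'the transcript accumulated so far (contract: partials not discarded)',
  () async {
    final bridge = FakeSpeechBridge();
    final adapter = SpeechToTextAdapter(bridge: bridge);
    await adapter.startRecording();
    bridge.emitPartial('hei');                      // first-locale segment
    await adapter.switchLocale(SpeechLocale('en-US'));
    bridge.emitPartial('hello');                    // new-locale segment
    final transcript = await adapter.stopRecording();
    // Both segments survive the mid-session locale switch.
    expect(transcript, contains('hei'));
    expect(transcript, contains('hello'));
  },
);
```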

Testing Requirements

Integration tests using flutter_test. Use TestWidgetsFlutterBinding and mock the native MethodChannel (e.g., 'com.app/speech') using TestDefaultBinaryMessengerBinding.instance.defaultBinaryMessenger.setMockMethodCallHandler() to intercept platform calls. Use fakeAsync() for timeout and streaming tests to avoid real delays. Use StreamController in the mock bridge to emit partial transcripts in sequence.
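The StreamController-plus-fakeAsync combination can be sketched as follows. This exercises only the mock-bridge side of the pattern (adapter wiring omitted), showing how partials are delivered step by step and how a 30s timeout elapses in virtual time:

```dart
import 'dart:async';

import 'package:fake_async/fake_async.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  test('mock bridge emits partials step by step under virtual time', () {
    fakeAsync((async) {
      final partials = StreamController<String>();
      final received = <String>[];
      partials.stream.listen(received.add);

      // Emit three growing partial transcripts, flushing microtasks so
      // each event is delivered before the next is added.
      for (final text in ['one', 'one two', 'one two three']) {
        partials.add(text);
        async.flushMicrotasks();
      }
      expect(received, ['one', 'one two', 'one two three']);

      // Timeout tests advance virtual time instead of sleeping, so the
      // 30s no-speech window costs no wall-clock time.
      async.elapse(const Duration(seconds: 30));
    });
  });
}
```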

For lifecycle tests, call ServicesBinding.instance.handleAppLifecycleStateChanged(AppLifecycleState.paused) to simulate backgrounding. Verify state transitions using expectLater() on the adapter's state stream. All tests grouped by lifecycle phase: 'permissions', 'recording lifecycle', 'streaming', 'locale', 'background', 'error recovery'.
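The lifecycle test shape might look like this sketch. SpeechToTextAdapter, its constructor, stateStream, and startRecording() are assumed names from this spec, not a confirmed API; in a widget test the binding is reachable as tester.binding:

```dart
import 'package:flutter/widgets.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  testWidgets('paused lifecycle releases the mic and returns to idle',
      (tester) async {
    final adapter = SpeechToTextAdapter(); // hypothetical constructor
    await adapter.startRecording();

    // Subscribe with expectLater BEFORE triggering the transition so the
    // idle event cannot be missed.
    final sawIdle = expectLater(
      adapter.stateStream,
      emitsThrough(SpeechAdapterState.idle),
    );

    // Simulate the app moving to the background.
    tester.binding.handleAppLifecycleStateChanged(AppLifecycleState.paused);
    await sawIdle;
  });
}
```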

Component
Speech-to-Text Adapter
Component type: infrastructure | Complexity: medium
Epic Risks (2)
Risk 1 (technical): medium impact, high probability

Flutter's speech_to_text package behaviour differs meaningfully between iOS and Android — microphone permission flows, locale availability, background audio session interference, and partial-result timing all vary. Inconsistent behaviour could make voice input unreliable for the primary audience (visually impaired peer mentors on iOS VoiceOver).

Mitigation & Contingency

Mitigation: Test speech-to-text-adapter on physical iOS and Android devices from the start, not just simulators. Write platform-specific test cases for permission flows and locale detection. Design the adapter's public interface to be platform-agnostic so that a native bridge could replace the package if needed.

Contingency: If speech_to_text proves unreliable on a platform, implement a native-speech-api-bridge (already identified in the component catalogue) as a drop-in replacement within the adapter, keeping the external interface unchanged so no UI code needs to change.

Risk 2 (dependency): medium impact, medium probability

The coordinator task queue notification mechanism is not fully specified. If the queue system is owned by another team or uses an external service, way-forward-task-service may block on an undefined integration contract, delaying this epic.

Mitigation & Contingency

Mitigation: Define the task queue notification interface as an abstract Dart interface early in the epic. Implement a stub that writes a flag to the database so coordinator list queries can detect new tasks, deferring the real notification integration to a later epic.
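The abstract interface plus database-flag stub described above might take the following shape. All identifiers are illustrative, not an existing API:

```dart
// Hypothetical notification contract for the coordinator task queue.
abstract class TaskQueueNotifier {
  /// Notify coordinators that the task [taskId] was added to the queue.
  Future<void> notifyTaskQueued(String taskId);
}

/// Stub implementation: marks the task as unseen in the database so
/// coordinator list queries pick it up on the next page load. Real push
/// notification delivery is deferred to a later epic.
class DatabaseFlagNotifier implements TaskQueueNotifier {
  DatabaseFlagNotifier(this._setUnseenFlag);

  /// Injected persistence call, e.g. an UPDATE on the tasks table.
  final Future<void> Function(String taskId) _setUnseenFlag;

  @override
  Future<void> notifyTaskQueued(String taskId) => _setUnseenFlag(taskId);
}
```

Injecting the persistence call keeps the stub free of database dependencies, so swapping in the real notification integration later changes only the binding, not the callers.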

Contingency: If the queue integration remains undefined at implementation time, ship way-forward-task-service with database persistence only and add a TODO-flagged notification hook. Coordinators will still see items on next page load; push notification delivery is deferred.