Priority: critical · Complexity: medium · Area: backend · Status: pending · Assignee: backend specialist · Tier: 0

Acceptance Criteria

SpeechRecognitionService is defined as an abstract class (or interface) in Dart with the following methods: Future<void> initialize(), Future<void> startListening({required Locale locale}), Future<void> stopListening(), Future<void> dispose()
SpeechRecognitionEvent is a sealed class with at minimum these subtypes: SpeechPartialResult({required String text}), SpeechFinalResult({required String text, required double confidence}), SpeechError({required SpeechErrorCode code, required String message}), SpeechStatusChange({required SpeechStatus status})
SpeechErrorCode is an enum covering: permissionDenied, permissionPermanentlyDenied, engineUnavailable, networkError, noSpeechDetected, localeUnsupported
SpeechStatus is an enum covering: idle, listening, processing, stopped
A SpeechLocaleConfig value object is defined with: primaryLocale (default nb-NO), fallbackLocales ([no-NO, en-US])
The interface exposes a Stream<SpeechRecognitionEvent> that consumers subscribe to for all events
All types are in a dedicated speech_recognition_service.dart file with no platform-specific imports
The interface compiles cleanly with dart analyze showing zero errors or warnings
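Taken together, the criteria above describe a contract along the following lines. This is an illustrative sketch, not the final file: the `Locale` class here is a minimal stand-in so the sketch compiles as pure Dart, whereas the real file would use Flutter's `Locale` (from dart:ui, via the framework).

```dart
import 'dart:async';

// Stand-in for Flutter's Locale, for illustration only.
class Locale {
  final String languageCode;
  final String? countryCode;
  const Locale(this.languageCode, [this.countryCode]);
  @override
  String toString() =>
      countryCode == null ? languageCode : '$languageCode-$countryCode';
}

enum SpeechErrorCode {
  permissionDenied,
  permissionPermanentlyDenied,
  engineUnavailable,
  networkError,
  noSpeechDetected,
  localeUnsupported,
}

enum SpeechStatus { idle, listening, processing, stopped }

sealed class SpeechRecognitionEvent {
  const SpeechRecognitionEvent();
}

class SpeechPartialResult extends SpeechRecognitionEvent {
  final String text;
  const SpeechPartialResult({required this.text});
}

class SpeechFinalResult extends SpeechRecognitionEvent {
  final String text;
  final double confidence;
  const SpeechFinalResult({required this.text, required this.confidence});
}

class SpeechError extends SpeechRecognitionEvent {
  final SpeechErrorCode code;
  final String message;
  const SpeechError({required this.code, required this.message});
}

class SpeechStatusChange extends SpeechRecognitionEvent {
  final SpeechStatus status;
  const SpeechStatusChange({required this.status});
}

// Immutable value object; annotate with @immutable in Flutter code.
class SpeechLocaleConfig {
  final Locale primaryLocale;
  final List<Locale> fallbackLocales;
  const SpeechLocaleConfig({
    this.primaryLocale = const Locale('nb', 'NO'),
    this.fallbackLocales = const [Locale('no', 'NO'), Locale('en', 'US')],
  });
}

// The contract itself: one event stream, async state-changing methods,
// no audio buffers exposed.
abstract class SpeechRecognitionService {
  Stream<SpeechRecognitionEvent> get events;
  Future<void> initialize();
  Future<void> startListening({required Locale locale});
  Future<void> stopListening();
  Future<void> dispose();
}
```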

Technical Requirements

Frameworks: Flutter, Riverpod
Data models: SpeechRecognitionEvent, SpeechLocaleConfig, SpeechErrorCode, SpeechStatus
Performance requirements: interface design must not force synchronous operations; all state-changing methods are async
Security requirements: interface must not expose raw audio buffers or byte streams; only text transcription events are surfaced to consumers

Execution Context

Execution Tier
Tier 0

Tier 0 - 440 tasks

Implementation Notes

Use Dart sealed classes (Dart 3+) for SpeechRecognitionEvent to get exhaustive pattern matching in switch statements — this is critical for UI state management. Define SpeechLocaleConfig as an immutable value object (use @immutable and const constructor). Place the mock implementation in test/mocks/mock_speech_recognition_service.dart. Norwegian locale identifier nb-NO maps to Locale('nb', 'NO') in Flutter.
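The exhaustive-matching payoff the notes call out looks like this. A trimmed sketch (two of the four event subtypes, to keep it short): the compiler rejects the switch until every subtype is handled, so adding an event type later surfaces every UI site that needs updating.

```dart
sealed class SpeechRecognitionEvent {
  const SpeechRecognitionEvent();
}

class SpeechPartialResult extends SpeechRecognitionEvent {
  final String text;
  const SpeechPartialResult({required this.text});
}

class SpeechFinalResult extends SpeechRecognitionEvent {
  final String text;
  final double confidence;
  const SpeechFinalResult({required this.text, required this.confidence});
}

// Exhaustive switch over a sealed hierarchy: adding a new subtype becomes
// a compile error here until a matching case is added.
String describe(SpeechRecognitionEvent event) => switch (event) {
      SpeechPartialResult(:final text) => 'partial: $text',
      SpeechFinalResult(:final text, :final confidence) =>
        'final: $text (${(confidence * 100).toStringAsFixed(0)}%)',
    };

void main() {
  print(describe(const SpeechPartialResult(text: 'hei'))); // partial: hei
  print(describe(const SpeechFinalResult(text: 'hei verden', confidence: 0.92)));
}
```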

Design the interface so that it can be backed by either the speech_to_text package or a native platform channel implementation — avoid leaking implementation details into the contract. Ensure the interface is Riverpod-friendly: the service should be injectable as a provider override for testing.
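The Riverpod injection point might look like the following sketch, assuming Riverpod 2.x APIs (`Provider`, `ProviderContainer`, `overrideWithValue`); the provider and mock names are illustrative.

```dart
// Assumes the contract and mock types from this task are importable.
import 'package:riverpod/riverpod.dart';

// The app overrides this at startup with a real implementation; tests
// override it with the mock.
final speechRecognitionServiceProvider = Provider<SpeechRecognitionService>(
  (ref) => throw UnimplementedError(
      'Override speechRecognitionServiceProvider with an implementation'),
);

// In a test:
// final container = ProviderContainer(overrides: [
//   speechRecognitionServiceProvider
//       .overrideWithValue(MockSpeechRecognitionService()),
// ]);
```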

Testing Requirements

No runtime tests for this task — it is a pure interface/contract definition. Validate via dart analyze. A mock implementation of SpeechRecognitionService should be created alongside the interface to facilitate testing in dependent tasks. The mock must implement all interface methods and expose a way to manually emit SpeechRecognitionEvent values for test control.
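A minimal shape for that mock is sketched below. Stand-in contract types are inlined so the sketch is self-contained (the locale parameter is omitted here; the real mock would import speech_recognition_service.dart and implement the full signatures). The key piece is the manual `emit` hook backed by a broadcast StreamController.

```dart
import 'dart:async';

// Trimmed stand-ins for the real contract types, for illustration only.
sealed class SpeechRecognitionEvent { const SpeechRecognitionEvent(); }
class SpeechPartialResult extends SpeechRecognitionEvent {
  final String text;
  const SpeechPartialResult({required this.text});
}

abstract class SpeechRecognitionService {
  Stream<SpeechRecognitionEvent> get events;
  Future<void> initialize();
  Future<void> startListening();
  Future<void> stopListening();
  Future<void> dispose();
}

class MockSpeechRecognitionService implements SpeechRecognitionService {
  final _controller = StreamController<SpeechRecognitionEvent>.broadcast();

  @override
  Stream<SpeechRecognitionEvent> get events => _controller.stream;

  // Test hook: push events into the stream manually.
  void emit(SpeechRecognitionEvent event) => _controller.add(event);

  @override
  Future<void> initialize() async {}
  @override
  Future<void> startListening() async {}
  @override
  Future<void> stopListening() async {}
  @override
  Future<void> dispose() => _controller.close();
}

Future<void> main() async {
  final mock = MockSpeechRecognitionService();
  final received = <String>[];
  final sub = mock.events.listen((e) =>
      received.add(switch (e) { SpeechPartialResult(:final text) => text }));
  mock.emit(const SpeechPartialResult(text: 'hei'));
  await Future<void>.delayed(Duration.zero); // let the stream deliver
  print(received); // [hei]
  await sub.cancel();
  await mock.dispose();
}
```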

Component
Speech Recognition Service
Type: service · Priority: high
Epic Risks (2)
Risk 1 (technical): medium impact, medium probability

The speech_to_text Flutter package delegates accuracy entirely to the OS-native engine. Norwegian accuracy for domain-specific vocabulary (medical terms, organisation names, accessibility terminology) may fall below the 85% acceptance threshold on older devices or in noisy environments, causing user frustration and manual correction overhead that negates the time saving.

Mitigation & Contingency

Mitigation: Configure the SpeechRecognitionService with Norwegian as the explicit locale and test against a representative corpus of peer-mentoring vocabulary on target devices. Expose locale switching so users can fall back between Bokmål and Nynorsk. Clearly set the expectation in the UI that transcription is a starting point for editing, not a finished product.

Contingency: If accuracy is consistently below threshold on specific device/OS combinations, add a device-capability check that hides the dictation button with an explanatory message rather than offering a degraded experience. Document affected device models for QA and org contacts.

Risk 2 (dependency): medium impact, low probability

The speech_to_text Flutter package is a third-party dependency that may introduce breaking API changes or deprecations on major version upgrades, requiring rework of SpeechRecognitionService when Flutter or platform OS versions are updated.

Mitigation & Contingency

Mitigation: Wrap all speech_to_text API calls behind the SpeechRecognitionService interface so that package changes are isolated to one file. Pin the package version in pubspec.yaml and review changelogs before any upgrade. Write integration tests that exercise the package contract so regressions are caught immediately.

Contingency: If the package is abandoned or has unresolvable issues, NativeSpeechApiBridge already provides the platform-channel abstraction needed to implement a direct plugin replacement with minimal changes to SpeechRecognitionService.