Priority: critical | Complexity: medium | Type: infrastructure | Status: pending | Owner: backend specialist | Execution tier: Tier 0

Acceptance Criteria

Abstract class NativeSpeechApiBridge is defined in Dart with no platform-specific imports
Method signature Future<SpeechPermissionResult> requestPermission() is declared and documented
Method signature void startRecognition({required String locale, required void Function(String) onPartial, required void Function(String) onFinal, required void Function(SpeechError) onError}) is declared
Method signature Future<void> stopRecognition() is declared
Method signature Future<bool> isAvailable() is declared
SpeechPermissionResult sealed class or enum covers: granted, denied, permanentlyDenied, restricted
SpeechRecognitionEvent sealed class has three subtypes: SpeechPartialEvent(String text), SpeechFinalEvent(String text), SpeechErrorEvent(SpeechError error)
SpeechError enum covers: networkUnavailable, noSpeechDetected, permissionDenied, audioSessionError, recognitionUnavailable, unknown
All public APIs have Dart doc comments (///) explaining the contract and the expected platform behavior
No platform-specific classes (dart:io Platform checks, MethodChannel) appear in the abstract interface file
Interface compiles cleanly with flutter analyze reporting zero issues
A CHANGELOG or doc note records that this interface is frozen and breaking changes require a new major version
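
Taken together, the criteria above can be sketched as a pure-Dart contract along these lines. This is an illustrative outline, not the frozen file itself: doc comments are abbreviated, the SpeechRecognitionEvent hierarchy is omitted here, and SpeechPermissionResult is modeled as an enum for brevity (the criteria allow either a sealed class or an enum).

```dart
/// Result of requesting speech recognition permission.
enum SpeechPermissionResult { granted, denied, permanentlyDenied, restricted }

/// Coarse-grained error conditions; deliberately no raw OS error codes.
enum SpeechError {
  networkUnavailable,
  noSpeechDetected,
  permissionDenied,
  audioSessionError,
  recognitionUnavailable,
  unknown,
}

/// Platform-agnostic contract for native speech recognition.
/// No dart:io and no MethodChannel; implementations live elsewhere.
abstract class NativeSpeechApiBridge {
  /// Requests speech recognition permission from the OS.
  Future<SpeechPermissionResult> requestPermission();

  /// Starts recognition for [locale] (e.g. 'nb-NO'); results are delivered
  /// through the callbacks as they arrive, without buffering.
  void startRecognition({
    required String locale,
    required void Function(String) onPartial,
    required void Function(String) onFinal,
    required void Function(SpeechError) onError,
  });

  /// Stops the active recognition session.
  Future<void> stopRecognition();

  /// Whether recognition is currently available on this device.
  Future<bool> isAvailable();
}
```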

Technical Requirements

Frameworks: Flutter, Dart

Data models: SpeechPermissionResult, SpeechRecognitionEvent, SpeechError, NativeSpeechApiBridge

Performance requirements
Interface methods must not block the main isolate — all async methods return a Future
Callback-based streaming (onPartial/onFinal/onError) must not buffer results — deliver them as they are received

Security requirements
The interface contract must never expose raw audio buffers or audio bytes — only transcribed text is passed through the callbacks
The SpeechError enum must not expose internal OS error codes that could reveal device or OS details

Execution Context

Execution Tier
Tier 0 (440 tasks)

Implementation Notes

Use Dart sealed classes (Dart 3.0+) for SpeechRecognitionEvent and SpeechPermissionResult to get exhaustive switch checking at compile time. Place all types in a single file, lib/features/speech/native_speech_api_bridge.dart, to keep the contract co-located. Do NOT use an abstract interface class with factory constructors yet — keep the class abstract with no constructor so Riverpod providers can inject platform-specific subclasses cleanly. Type the locale parameter as a String (e.g. 'nb-NO') rather than a Locale object to avoid a Flutter framework dependency in this infrastructure layer.
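
A minimal sketch of the sealed event hierarchy described above, including an exhaustive switch expression to show the compile-time checking it buys. The describe helper is illustrative only, and the SpeechError mirror exists just to keep the sketch self-contained:

```dart
/// Mirror of the contract's error enum, for self-containment.
enum SpeechError {
  networkUnavailable,
  noSpeechDetected,
  permissionDenied,
  audioSessionError,
  recognitionUnavailable,
  unknown,
}

/// Recognition events as a Dart 3 sealed hierarchy: the compiler knows the
/// full set of subtypes, so switch expressions must handle all three.
sealed class SpeechRecognitionEvent {
  const SpeechRecognitionEvent();
}

final class SpeechPartialEvent extends SpeechRecognitionEvent {
  const SpeechPartialEvent(this.text);
  final String text;
}

final class SpeechFinalEvent extends SpeechRecognitionEvent {
  const SpeechFinalEvent(this.text);
  final String text;
}

final class SpeechErrorEvent extends SpeechRecognitionEvent {
  const SpeechErrorEvent(this.error);
  final SpeechError error;
}

/// Illustrative helper: adding a fourth subtype makes this switch a
/// compile-time error until the new case is handled.
String describe(SpeechRecognitionEvent event) => switch (event) {
      SpeechPartialEvent(:final text) => 'partial: $text',
      SpeechFinalEvent(:final text) => 'final: $text',
      SpeechErrorEvent(:final error) => 'error: ${error.name}',
    };
```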

Define the callbacks as named typedefs for readability, using modern Dart syntax (typedef SpeechPartialCallback = void Function(String text);) rather than the legacy typedef void SpeechPartialCallback(String text) form. Avoid using Stream as the primary API — callback-based delivery is simpler to implement on both the iOS and Android MethodChannel bridges and avoids extra StreamController lifecycle concerns.
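
The three typedefs could look like this in modern Dart syntax (the SpeechError mirror is included only for self-containment):

```dart
/// Mirror of the contract's error enum, for self-containment.
enum SpeechError {
  networkUnavailable,
  noSpeechDetected,
  permissionDenied,
  audioSessionError,
  recognitionUnavailable,
  unknown,
}

/// Named callback types using modern typedef syntax; these can replace the
/// inline void Function(...) types in startRecognition's signature.
typedef SpeechPartialCallback = void Function(String text);
typedef SpeechFinalCallback = void Function(String text);
typedef SpeechErrorCallback = void Function(SpeechError error);
```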

Testing Requirements

Unit tests should verify that all sealed class variants of SpeechRecognitionEvent and SpeechPermissionResult are exhaustively handled in switch expressions (a compile-time guarantee). Write a stub implementation of NativeSpeechApiBridge and assert that it satisfies the interface — this acts as a contract test. Verify that SpeechError covers all expected error conditions by asserting that SpeechError.values.length matches the documented count. No platform or integration tests are required at this stage; the interface is pure Dart.
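
A sketch of such a stub-based contract test in plain Dart. StubSpeechBridge and its canned transcript are hypothetical; the point is that the stub compiles against the interface at all:

```dart
// Minimal mirror of the contract types (normally imported from
// lib/features/speech/native_speech_api_bridge.dart).
enum SpeechPermissionResult { granted, denied, permanentlyDenied, restricted }

enum SpeechError {
  networkUnavailable,
  noSpeechDetected,
  permissionDenied,
  audioSessionError,
  recognitionUnavailable,
  unknown,
}

abstract class NativeSpeechApiBridge {
  Future<SpeechPermissionResult> requestPermission();
  void startRecognition({
    required String locale,
    required void Function(String) onPartial,
    required void Function(String) onFinal,
    required void Function(SpeechError) onError,
  });
  Future<void> stopRecognition();
  Future<bool> isAvailable();
}

/// Hypothetical stub: merely satisfying `implements NativeSpeechApiBridge`
/// is the contract test; a breaking interface change stops it compiling.
class StubSpeechBridge implements NativeSpeechApiBridge {
  @override
  Future<SpeechPermissionResult> requestPermission() async =>
      SpeechPermissionResult.granted;

  @override
  void startRecognition({
    required String locale,
    required void Function(String) onPartial,
    required void Function(String) onFinal,
    required void Function(SpeechError) onError,
  }) {
    onPartial('hello'); // partials stream first...
    onFinal('hello world'); // ...then one final result
  }

  @override
  Future<void> stopRecognition() async {}

  @override
  Future<bool> isAvailable() async => true;
}
```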

Component
Native Speech API Bridge
Type: infrastructure | Complexity: medium
Epic Risks (3)
Risk 1: technical (high impact, medium probability)

iOS 15 on-device speech recognition has a 1-minute session limit and requires network fallback for longer sessions. Peer mentor way-forward dictation may routinely exceed this limit, causing silent truncation of transcribed content without user feedback.

Mitigation & Contingency

Mitigation: Implement session-chunking logic in NativeSpeechApiBridge that automatically restarts recognition before the limit is reached, preserving continuity by concatenating the finalized partial results. Document the iOS 15 vs iOS 16 on-device recognition behavior difference in code comments.
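
One way the chunking mitigation could be sketched on top of the bridge contract. ChunkedRecognitionSession, the 55-second default, and FakeBridge are all illustrative names, not part of the frozen interface:

```dart
import 'dart:async';

enum SpeechError {
  networkUnavailable,
  noSpeechDetected,
  permissionDenied,
  audioSessionError,
  recognitionUnavailable,
  unknown,
}

/// Slice of the bridge contract that the chunker depends on.
abstract class NativeSpeechApiBridge {
  void startRecognition({
    required String locale,
    required void Function(String) onPartial,
    required void Function(String) onFinal,
    required void Function(SpeechError) onError,
  });
  Future<void> stopRecognition();
}

/// Illustrative chunking wrapper: restarts recognition just before the
/// platform session limit and concatenates finalized chunks so long
/// dictation is not silently truncated.
class ChunkedRecognitionSession {
  ChunkedRecognitionSession(this._bridge,
      {this.chunkLength = const Duration(seconds: 55)});

  final NativeSpeechApiBridge _bridge;
  final Duration chunkLength; // chosen just under the ~1-minute iOS 15 limit
  final StringBuffer _finalized = StringBuffer();
  Timer? _restartTimer;
  bool _stopped = false;

  /// All finalized chunks, concatenated in order.
  String get transcript => _finalized.toString().trimRight();

  void start(String locale, void Function(String) onText) {
    _bridge.startRecognition(
      locale: locale,
      // Show partials appended to the already-finalized text.
      onPartial: (text) => onText('$_finalized$text'),
      onFinal: (text) {
        _finalized.write('$text ');
        onText(transcript);
      },
      onError: (_) {},
    );
    _restartTimer = Timer(chunkLength, () async {
      await _bridge.stopRecognition(); // flushes this chunk's final result
      if (!_stopped) start(locale, onText); // begin the next chunk
    });
  }

  Future<void> stop() async {
    _stopped = true;
    _restartTimer?.cancel();
    await _bridge.stopRecognition();
  }
}

/// Demo-only fake that finalizes the current chunk whenever it is stopped.
class FakeBridge implements NativeSpeechApiBridge {
  void Function(String)? _onFinal;
  int chunk = 0;

  @override
  void startRecognition({
    required String locale,
    required void Function(String) onPartial,
    required void Function(String) onFinal,
    required void Function(SpeechError) onError,
  }) {
    _onFinal = onFinal;
    chunk++;
  }

  @override
  Future<void> stopRecognition() async => _onFinal?.call('chunk$chunk');
}
```

The wrapper deliberately lives above the bridge rather than inside a platform implementation, so the same restart logic covers iOS and Android.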

Contingency: If chunking causes user-visible interruptions, surface a non-blocking informational banner on iOS 15 devices informing users that very long dictation sessions may need to be broken into segments, and use PartialTranscriptionRepository to persist each chunk immediately.

Risk 2: scope (high impact, medium probability)

On iOS, speech recognition permission can only be requested once. If the user denies the permission, the app cannot re-request it. A poor first-impression permission flow will permanently disable dictation for those users, impacting the Blindeforbundet blind-user base who rely on dictation most.

Mitigation & Contingency

Mitigation: Design the NativeSpeechApiBridge permission flow to show a clear pre-permission rationale screen before the OS dialog. Implement a graceful degradation path that hides the microphone button and shows a settings deep-link when permission is permanently denied.

Contingency: If users have already denied permission before the rationale screen is added, provide a settings deep-link in DictationScopeGuard's denial message directing users to iOS Settings > Privacy > Speech Recognition to re-enable manually.
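The graceful-degradation paths above can be expressed as an exhaustive switch over the permission result. DictationUiState and uiStateFor are hypothetical names, and SpeechPermissionResult is modeled as an enum here for brevity:

```dart
enum SpeechPermissionResult { granted, denied, permanentlyDenied, restricted }

/// Hypothetical UI states for the dictation affordance.
enum DictationUiState { micButton, rationaleScreen, settingsDeepLink, hidden }

/// Exhaustive mapping from permission result to UI state. The deep-link
/// branch covers users who permanently denied before the rationale screen
/// existed; restricted (e.g. parental controls) hides dictation entirely.
DictationUiState uiStateFor(SpeechPermissionResult result) =>
    switch (result) {
      SpeechPermissionResult.granted => DictationUiState.micButton,
      SpeechPermissionResult.denied => DictationUiState.rationaleScreen,
      SpeechPermissionResult.permanentlyDenied =>
        DictationUiState.settingsDeepLink,
      SpeechPermissionResult.restricted => DictationUiState.hidden,
    };
```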

Risk 3: integration (medium impact, low probability)

The approved field IDs and screen routes configuration in DictationScopeGuard may fall out of sync with the actual report form schema as new fields are added by org administrators, silently blocking dictation on legitimately approved fields.

Mitigation & Contingency

Mitigation: Source the approved field configuration from the same org-field-config-loader used by the report form, rather than a hardcoded list. Add a developer-time assertion that logs a warning when a dictation-eligible field type is rendered but not in the approved routes map.

Contingency: Provide a runtime override mechanism in the scope guard that coordinators or admins can use to temporarily whitelist a field ID while the config is updated, with an automatic expiry.
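
A sketch of such a temporary whitelist with automatic expiry. TemporaryFieldWhitelist is a hypothetical helper, and the injectable clock exists purely to make the sketch testable:

```dart
/// Illustrative runtime override: lets a coordinator temporarily whitelist
/// a field ID while the approved-fields config catches up, with automatic
/// expiry so stale overrides cannot linger.
class TemporaryFieldWhitelist {
  final Map<String, DateTime> _expiries = {};

  /// Allows [fieldId] for [ttl] from now.
  void allow(String fieldId, Duration ttl, {DateTime Function()? now}) {
    _expiries[fieldId] = (now ?? DateTime.now)().add(ttl);
  }

  /// True while the override is active; expired entries are lazily evicted.
  bool isAllowed(String fieldId, {DateTime Function()? now}) {
    final expiry = _expiries[fieldId];
    if (expiry == null) return false;
    if ((now ?? DateTime.now)().isAfter(expiry)) {
      _expiries.remove(fieldId);
      return false;
    }
    return true;
  }
}
```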