Priority: critical · Complexity: low · Category: infrastructure · Status: pending · Assignee: backend specialist · Tier: 2

Acceptance Criteria

nativeSpeechApiBridgeProvider is defined as a Riverpod Provider<NativeSpeechApiBridge>
Provider returns IosSpeechApiBridge when Platform.isIOS is true
Provider returns AndroidSpeechApiBridge when Platform.isAndroid is true
Provider throws an UnsupportedError with a clear message for non-mobile platforms (web, desktop)
MockSpeechApiBridge implementation exists in test/ directory implementing NativeSpeechApiBridge with configurable responses
MockSpeechApiBridge can be injected via ProviderScope overrides in widget and integration tests
Provider is declared at application scope (not inside a widget) so the bridge instance is reused across features
No audio session or microphone resource is held by the provider itself — resources are only acquired inside startRecognition()
flutter analyze reports zero issues on the provider file
A ProviderContainer test verifies that the correct platform implementation is returned
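A sketch of how these criteria might translate into code, assuming the bridge interface and both platform implementations already exist (the import path is hypothetical):

```dart
import 'dart:io' show Platform;

import 'package:riverpod/riverpod.dart';

import 'native_speech_api_bridge.dart'; // hypothetical location of the bridge types

/// Application-scoped, lazily created, and synchronous: the bridge is
/// instantiated on first read and reused; no audio or microphone
/// resources are acquired until startRecognition() is called on it.
final nativeSpeechApiBridgeProvider = Provider<NativeSpeechApiBridge>((ref) {
  if (Platform.isIOS) return IosSpeechApiBridge();
  if (Platform.isAndroid) return AndroidSpeechApiBridge();
  throw UnsupportedError(
    'NativeSpeechApiBridge is only supported on iOS and Android; '
    'the current platform is ${Platform.operatingSystem}.',
  );
});
```

Because this is a plain Provider with a synchronous factory, lazy-singleton and no-async-init requirements fall out for free.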

Technical Requirements

Frameworks
Flutter
Dart
Riverpod

APIs
dart:io Platform

Data Models
NativeSpeechApiBridge
IosSpeechApiBridge
AndroidSpeechApiBridge
MockSpeechApiBridge

Performance Requirements
Provider must be a lazy singleton — bridge instance is created once and reused
Provider creation must complete synchronously — no async initialization in the provider factory

Security Requirements
Mock implementation must only be injectable via ProviderScope.overrides — it must never be returned in production builds
Platform check must use dart:io Platform, not kIsWeb or compile-time constants, to ensure correct runtime behavior

Execution Context

Execution Tier
Tier 2 (518 tasks in this tier)

Can start after Tier 1 completes

Implementation Notes

Define the provider in lib/features/speech/providers/speech_providers.dart. Use Riverpod's Provider (not StateProvider or AsyncNotifierProvider) since NativeSpeechApiBridge is stateless infrastructure. To make Platform.isIOS testable, consider wrapping the platform check in an abstract PlatformInfo class injected via a platformInfoProvider — this avoids direct dart:io Platform static calls in provider logic. MockSpeechApiBridge should support: configurable permission result, a list of SpeechRecognitionEvent objects to emit in sequence, and a configurable delay to simulate async streaming.
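The PlatformInfo seam mentioned above can be pure Dart, which keeps the provider logic free of Platform statics and makes both branches testable on any host. A minimal sketch (class and member names are illustrative, not from the codebase):

```dart
import 'dart:io' show Platform;

/// Testability seam over dart:io Platform statics: the production
/// implementation delegates to Platform, tests substitute a fake.
abstract class PlatformInfo {
  bool get isIOS;
  bool get isAndroid;
}

/// Production implementation backed by dart:io.
class IoPlatformInfo implements PlatformInfo {
  const IoPlatformInfo();

  @override
  bool get isIOS => Platform.isIOS;

  @override
  bool get isAndroid => Platform.isAndroid;
}

/// Test double with scripted answers.
class FakePlatformInfo implements PlatformInfo {
  const FakePlatformInfo({this.isIOS = false, this.isAndroid = false});

  @override
  final bool isIOS;

  @override
  final bool isAndroid;
}
```

A platformInfoProvider would then expose const IoPlatformInfo(), and the bridge provider reads it via ref.watch instead of touching Platform directly.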

The provider does not need keepAlive semantics: the bridge holds no resources until startRecognition(), so it is safe to let Riverpod dispose it when no longer listened to (the default keepAlive: false with code generation, or Provider.autoDispose otherwise). Co-locate the mock in test/mocks/mock_speech_api_bridge.dart; files under test/ are never compiled into production builds, so no @visibleForTesting annotation is needed there (that annotation is for test-only members exposed from lib/).
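A self-contained sketch of the mock described above. The bridge interface and SpeechRecognitionEvent are reduced to minimal stand-ins here so the example compiles on its own; the real definitions live in the speech feature:

```dart
/// Minimal stand-in for the real event type (assumed shape).
class SpeechRecognitionEvent {
  const SpeechRecognitionEvent(this.transcript, {this.isFinal = false});
  final String transcript;
  final bool isFinal;
}

/// Minimal stand-in for the real bridge interface (assumed shape).
abstract class NativeSpeechApiBridge {
  Future<bool> requestPermission();
  Stream<SpeechRecognitionEvent> startRecognition();
  Future<void> stopRecognition();
}

/// Configurable test double per the notes above: scripted permission
/// result, a fixed event sequence, and an optional inter-event delay
/// to simulate async streaming.
class MockSpeechApiBridge implements NativeSpeechApiBridge {
  MockSpeechApiBridge({
    this.permissionResult = true,
    this.events = const [],
    this.eventDelay = Duration.zero,
  });

  final bool permissionResult;
  final List<SpeechRecognitionEvent> events;
  final Duration eventDelay;

  /// Lets tests verify that a dependent widget called stopRecognition().
  bool stopCalled = false;

  @override
  Future<bool> requestPermission() async => permissionResult;

  @override
  Stream<SpeechRecognitionEvent> startRecognition() async* {
    for (final event in events) {
      if (eventDelay > Duration.zero) {
        await Future<void>.delayed(eventDelay);
      }
      yield event;
    }
  }

  @override
  Future<void> stopRecognition() async {
    stopCalled = true;
  }
}
```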

Testing Requirements

Write unit tests using ProviderContainer to verify: (1) when the platform reports iOS, the provider returns an IosSpeechApiBridge instance; (2) when it reports Android, an AndroidSpeechApiBridge; (3) MockSpeechApiBridge can be injected via ProviderScope.overrides and all methods return their configured responses. Write a widget test using ProviderScope with the MockSpeechApiBridge override to confirm a dependent widget can call requestPermission() without any native channel calls. Because dart:io Platform checks cannot be overridden at runtime, route platform detection through an abstraction (such as the PlatformInfo seam suggested in the Implementation Notes) so both platform branches are testable on any host.
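A sketch of the ProviderContainer test for case (1), assuming the provider reads platform detection through a platformInfoProvider seam as the Implementation Notes suggest (platformInfoProvider and FakePlatformInfo are hypothetical names; app-code imports are omitted):

```dart
import 'package:riverpod/riverpod.dart';
import 'package:test/test.dart';

// Imports of the app's provider, bridge, and fake-platform files omitted.

void main() {
  test('returns IosSpeechApiBridge when the platform reports iOS', () {
    final container = ProviderContainer(overrides: [
      // Force the iOS branch without touching dart:io Platform.
      platformInfoProvider.overrideWithValue(
        const FakePlatformInfo(isIOS: true),
      ),
    ]);
    addTearDown(container.dispose);

    expect(
      container.read(nativeSpeechApiBridgeProvider),
      isA<IosSpeechApiBridge>(),
    );
  });
}
```

The Android case is the mirror image with isAndroid: true, and the non-mobile case can expect(() => container.read(...), throwsUnsupportedError).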

Component
Native Speech API Bridge
infrastructure medium
Epic Risks (3)
Risk 1 of 3: technical (high impact, medium probability)

iOS 15 on-device speech recognition has a 1-minute session limit and requires network fallback for longer sessions. Peer mentor way-forward dictation may routinely exceed this limit, causing silent truncation of transcribed content without user feedback.

Mitigation & Contingency

Mitigation: Implement session-chunking logic in NativeSpeechApiBridge that automatically restarts recognition before the limit is reached, preserving continuity via partial concatenation. Document the iOS 15 vs iOS 16 on-device recognition behaviour difference in code comments.
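The chunking idea can be isolated from the native layer for testing. A minimal sketch, assuming each native session is represented as a callback that returns that chunk's final transcript (all names and the 55-second margin are illustrative, not from the codebase):

```dart
/// Sketch of session chunking for the iOS 15 on-device limit: run
/// recognition in bounded chunks and concatenate each chunk's final
/// transcript so long dictation is not silently truncated.
class ChunkedRecognitionSession {
  ChunkedRecognitionSession({
    required this.startChunk,
    // Assumed restart margin below the ~1-minute limit; needs tuning
    // against real devices.
    this.chunkLimit = const Duration(seconds: 55),
  });

  /// Runs one native recognition session capped at [maxDuration] and
  /// returns its final transcript (illustrative signature).
  final Future<String> Function(Duration maxDuration) startChunk;
  final Duration chunkLimit;

  final List<String> _chunks = [];
  bool _stopped = false;

  /// Called when the user ends dictation.
  void stop() => _stopped = true;

  /// Restarts recognition until stop() is called, then returns the
  /// concatenated transcript.
  Future<String> run() async {
    while (!_stopped) {
      _chunks.add(await startChunk(chunkLimit));
    }
    return _chunks.join(' ');
  }
}
```

In the real bridge each chunk would also be handed to PartialTranscriptionRepository as soon as it completes, per the contingency below.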

Contingency: If chunking causes user-visible interruptions, surface a non-blocking informational banner on iOS 15 devices informing users that very long dictation sessions may need to be broken into segments, and use PartialTranscriptionRepository to persist each chunk immediately.

Risk 2 of 3: scope (high impact, medium probability)

On iOS, speech recognition permission can only be requested once. If the user denies the permission, the app cannot re-request it. A poor first-impression permission flow will permanently disable dictation for those users, impacting the Blindeforbundet blind-user base who rely on dictation most.

Mitigation & Contingency

Mitigation: Design the NativeSpeechApiBridge permission flow to show a clear pre-permission rationale screen before the OS dialog. Implement a graceful degradation path that hides the microphone button and shows a settings deep-link when permission is permanently denied.

Contingency: If users have already denied permission before the rationale screen is added, provide a settings deep-link in DictationScopeGuard's denial message directing users to iOS Settings > Privacy > Speech Recognition to re-enable manually.
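The degradation path in this mitigation is essentially a small state machine, which can be kept free of UI code and unit-tested. A sketch with illustrative names (real permission statuses would come from the native bridge's permission API):

```dart
/// Possible permission states (assumed; mirrors typical iOS behaviour).
enum SpeechPermissionStatus { notRequested, granted, denied, permanentlyDenied }

/// What the dictation UI should show for each state.
enum DictationUi { showRationaleScreen, showMicButton, hideMicShowSettingsLink }

DictationUi dictationUiFor(SpeechPermissionStatus status) => switch (status) {
      // Show the rationale screen before ever triggering the OS dialog,
      // since iOS only allows the request once.
      SpeechPermissionStatus.notRequested => DictationUi.showRationaleScreen,
      SpeechPermissionStatus.granted => DictationUi.showMicButton,
      // Denial is effectively permanent on iOS: hide the mic and offer a
      // settings deep-link instead of re-requesting.
      SpeechPermissionStatus.denied ||
      SpeechPermissionStatus.permanentlyDenied =>
        DictationUi.hideMicShowSettingsLink,
    };
```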

Risk 3 of 3: integration (medium impact, low probability)

The approved field IDs and screen routes configuration in DictationScopeGuard may fall out of sync with the actual report form schema as new fields are added by org administrators, silently blocking dictation on legitimately approved fields.

Mitigation & Contingency

Mitigation: Source the approved field configuration from the same org-field-config-loader used by the report form, rather than a hardcoded list. Add a developer-time assertion that logs a warning when a dictation-eligible field type is rendered but not in the approved routes map.
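The developer-time check can be a plain function that the form rendering code calls only in debug builds (for example from inside an assert). A sketch with assumed names:

```dart
/// Warns when a dictation-eligible field is rendered but absent from the
/// approved routes map, so config drift is caught during development
/// instead of silently blocking dictation. All names here are assumed.
void warnIfDictationFieldUnapproved({
  required String fieldId,
  required bool isDictationEligibleType,
  required Set<String> approvedFieldIds,
  void Function(String message) log = print, // injectable for tests
}) {
  if (isDictationEligibleType && !approvedFieldIds.contains(fieldId)) {
    log('DictationScopeGuard: field "$fieldId" is dictation-eligible but '
        'not in the approved routes map; dictation will be silently blocked.');
  }
}
```

Wrapping the call site in an assert (or a debug-mode check) keeps the warning out of release builds entirely.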

Contingency: Provide a runtime override mechanism in the scope guard that coordinators or admins can use to temporarily whitelist a field ID while the config is updated, with an automatic expiry.