Priority: high · Complexity: low · Area: frontend · Status: pending · Assignee: frontend specialist · Tier 1

Acceptance Criteria

Microphone icon button is visible in the search field trailing slot when speech recognition is available on the device
Microphone button is completely hidden (not just disabled) when speech recognition is unavailable
Tapping the microphone button starts speech recognition and inserts the transcript into the search field
While listening, the microphone icon changes to a visual 'listening' indicator (e.g., animated waveform or color change) and the Semantics label reads 'Listening — tap to stop'
If speech recognition is cancelled or fails, the field retains its previous value and a Semantics announcement says 'Voice input cancelled'
Microphone button has a minimum touch target of 44×44 dp
Microphone button Semantics label is 'Start voice search' in the default state
Permission denial is handled gracefully: if microphone permission is denied, button is hidden and no crash occurs
Feature works on iOS (SFSpeechRecognizer) and Android (SpeechRecognizer) via the native-speech-api-bridge

Technical Requirements

frameworks
Flutter
flutter_test
apis
native-speech-api-bridge component
Platform channel / method channel for speech
Flutter Semantics API
performance requirements
Speech session must start within 500 ms of button tap
Transcript insertion into TextEditingController must not cause full widget rebuild of SearchResultsList
security requirements
Microphone permission must be requested via standard platform permission flow before first use
Speech audio must not be stored or transmitted; only the recognized transcript is used
Permission rationale text must explain why microphone access is needed (in line with WCAG's Understandable principle)
ui components
AccessibleSearchInputField
MicrophoneButton (new internal widget)
ListeningIndicator
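
The rebuild constraint above (transcript insertion must not rebuild SearchResultsList) can be sketched as follows. This is a minimal illustration, assuming `insertTranscript` is a new helper and that SearchResultsList subscribes to a separate, debounced query notifier rather than to the controller itself:

```dart
import 'package:flutter/widgets.dart';

/// Writes the recognized transcript into the search field's controller.
/// Only listeners of this controller (the text field) are notified;
/// SearchResultsList should listen to its own query source, so setting
/// the value here does not trigger a rebuild of the results list.
void insertTranscript(TextEditingController controller, String transcript) {
  controller.value = TextEditingValue(
    text: transcript,
    selection: TextSelection.collapsed(offset: transcript.length),
  );
}
```

Setting `controller.value` in one assignment (rather than `text` and `selection` separately) avoids firing two notifications and keeps the caret at the end of the inserted transcript.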

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Abstract the speech bridge behind an interface (e.g., SpeechRecognitionService) so the widget can be tested with a mock.
Check availability synchronously from a cached capability flag set at app startup; avoid an async gap when rendering the button.
Use a local StatefulWidget or Riverpod StateNotifier to manage the 'listening' boolean; do not leak this state into the global search state.
The design token system should supply the microphone icon color; use the existing color token for interactive icons rather than hardcoding.
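
The abstraction described above could look like the following sketch. The method channel name and method strings are assumptions; substitute whatever the native-speech-api-bridge actually exposes:

```dart
import 'package:flutter/services.dart';

/// Hypothetical abstraction over the native-speech-api-bridge so the
/// widget tree never talks to the platform channel directly and tests
/// can substitute a mock.
abstract class SpeechRecognitionService {
  /// Cached capability flag set at app startup; read synchronously
  /// when deciding whether to build the microphone button.
  bool get isAvailable;

  /// Starts a speech session; completes with the final transcript,
  /// or throws on cancellation/failure.
  Future<String> listen();

  Future<void> stop();
}

class BridgeSpeechRecognitionService implements SpeechRecognitionService {
  BridgeSpeechRecognitionService(this._channel, {required this.isAvailable});

  final MethodChannel _channel; // e.g. MethodChannel('native_speech_api_bridge')

  @override
  final bool isAvailable;

  @override
  Future<String> listen() async =>
      await _channel.invokeMethod<String>('startListening') ?? '';

  @override
  Future<void> stop() => _channel.invokeMethod<void>('stopListening');
}
```

Because `isAvailable` is a plain field populated at startup, the button's visibility check stays synchronous inside `build`, as the note above requires.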

Ensure the listening animation respects prefers-reduced-motion by checking MediaQuery.disableAnimations.
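
A minimal sketch of that check; `AnimatedWaveform` and the `listeningIconColor` token parameter are hypothetical names standing in for the real widget and design token:

```dart
import 'package:flutter/material.dart';

/// Builds the 'listening' indicator, falling back to a static icon
/// (color change only) when the platform requests reduced motion.
Widget buildListeningIndicator(BuildContext context, Color listeningIconColor) {
  final reduceMotion = MediaQuery.of(context).disableAnimations;
  return reduceMotion
      // Static fallback: no waveform animation, just the token-supplied color.
      ? Icon(Icons.mic, color: listeningIconColor)
      // Assumed existing animated widget elsewhere in the codebase.
      : const AnimatedWaveform();
}
```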

Testing Requirements

Unit tests: verify the button renders when the bridge reports availability = true, verify the button is absent when availability = false, verify TextEditingController receives the transcript on success, and verify the error path does not crash. Use dependency injection or a mock for the native-speech-api-bridge to avoid platform channel calls in unit tests.
Widget test: pump AccessibleSearchInputField with the mock bridge, simulate a tap, and assert the controller value.
Manual accessibility test: activate the button using VoiceOver and Switch Control to confirm motor-impaired usability.
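
The widget test described above might be sketched like this. The `AccessibleSearchInputField` constructor parameters and the `SpeechRecognitionService` interface are assumptions about the eventual implementation:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';

/// Fake bridge: always available, always returns a fixed transcript.
class FakeSpeechService implements SpeechRecognitionService {
  @override
  bool get isAvailable => true;

  @override
  Future<String> listen() async => 'hello';

  @override
  Future<void> stop() async {}
}

void main() {
  testWidgets('transcript is inserted on successful recognition',
      (tester) async {
    final controller = TextEditingController();
    await tester.pumpWidget(MaterialApp(
      home: AccessibleSearchInputField(
        controller: controller,
        speechService: FakeSpeechService(),
      ),
    ));

    // Locate the button via its accessibility label, as a user would.
    await tester.tap(find.bySemanticsLabel('Start voice search'));
    await tester.pumpAndSettle();

    expect(controller.text, 'hello');
  });
}
```

Finding the button by its Semantics label doubles as a regression test for the 'Start voice search' acceptance criterion.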

Epic Risks (2)
Impact: medium · Probability: medium · Type: technical

Flutter's Semantics live region support for announcing dynamic result count changes may behave inconsistently between VoiceOver (iOS) and TalkBack (Android), particularly regarding announcement throttling and focus management, causing the feature to pass testing on one platform and fail on the other.

Mitigation & Contingency

Mitigation: Test live region announcements on both iOS (VoiceOver) and Android (TalkBack) early in development using the existing accessibility test harness. Reference the existing LiveRegionAnnouncer component (608-live-region-announcer) patterns used elsewhere in the app.

Contingency: If cross-platform consistency cannot be achieved, implement a platform-specific announcement strategy using the SemanticsService.announce API with platform-conditional announcement timing to work around OS-specific throttling behaviour.
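
One way the contingency could be sketched; the 300 ms delay is an illustrative placeholder, not a measured value:

```dart
import 'dart:io' show Platform;
import 'dart:ui' as ui;
import 'package:flutter/semantics.dart';

/// Announces the result count with platform-conditional timing to work
/// around OS-specific announcement throttling.
Future<void> announceResultCount(int count) async {
  final message = '$count results';
  if (Platform.isIOS) {
    // VoiceOver may drop rapid successive announcements; a short delay
    // before announcing is a possible workaround (value to be tuned).
    await Future<void>.delayed(const Duration(milliseconds: 300));
  }
  SemanticsService.announce(message, ui.TextDirection.ltr);
}
```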

Impact: low · Probability: low · Type: dependency

Voice-to-text progressive enhancement for Blindeforbundet may not be available or may behave unpredictably on all device/OS combinations, particularly older Android devices, potentially causing crashes or silent failures that degrade the search experience.

Mitigation & Contingency

Mitigation: Implement voice-to-text as a strictly optional enhancement: detect availability at runtime, show the microphone button only when the platform speech API reports availability, and wrap all voice invocations in try/catch with graceful degradation to standard text input.

Contingency: If voice-to-text causes instability on a subset of devices discovered during TestFlight/beta, disable the feature flag for that platform version while a fix is investigated, without impacting the core text-based search functionality.
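
The "strictly optional enhancement" pattern from the mitigation above can be sketched as follows, reusing the hypothetical SpeechRecognitionService interface; the cancellation announcement text comes from the acceptance criteria:

```dart
import 'dart:ui' as ui;
import 'package:flutter/semantics.dart';
import 'package:flutter/widgets.dart';

/// Runs a voice search session with graceful degradation: any failure
/// leaves the field's previous value intact and falls back to standard
/// text input instead of crashing.
Future<void> startVoiceSearch(
  SpeechRecognitionService service,
  TextEditingController controller,
) async {
  // Defensive check; the button should already be hidden in this case.
  if (!service.isAvailable) return;
  try {
    final transcript = await service.listen();
    controller.value = TextEditingValue(
      text: transcript,
      selection: TextSelection.collapsed(offset: transcript.length),
    );
  } catch (_) {
    // Per the acceptance criteria: keep the previous value and announce.
    SemanticsService.announce('Voice input cancelled', ui.TextDirection.ltr);
  }
}
```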