Priority: critical | Complexity: medium | Status: testing pending | Role: testing specialist | Tier: 5

Acceptance Criteria

DictationMicrophoneButton hit area is at minimum 44×44 logical pixels in all states (idle, recording, processing, error) — verified via AccessibilityTestHarness size assertions
All foreground/background colour pairs in DictationMicrophoneButton, RecordingStateIndicator, and TranscriptionPreviewField achieve a contrast ratio of at least 4.5:1 for normal text and 3:1 for large text (≥18pt or ≥14pt bold) across all states
VoiceOver (iOS) and TalkBack (Android) announce a meaningful state-change string when recording starts (e.g., 'Recording started'), when transcription is processing, and when recording stops or is cancelled — verified manually on device and via Semantics widget assertions in flutter_test
Animated waveform in RecordingStateIndicator is fully suppressed (not merely slowed) when MediaQuery.disableAnimations is true or when the device Reduce Motion setting is enabled
Every interactive element (DictationMicrophoneButton, cancel button) has a non-empty Semantics label that describes its current action — label updates dynamically with state
No WCAG 2.2 AA failures remain in the WcagComplianceChecker report at sign-off — all findings are resolved, not waived
Blindeforbundet-specific requirement met: screen reader announces recording start/stop so visually impaired users know when the microphone is active
All fixes are implemented in source widgets in src/visualization or relevant Flutter widget files — no accessibility workarounds applied at test level only

Technical Requirements

Frameworks
Flutter
flutter_test
AccessibilityTestHarness (component 619)
WcagComplianceChecker (component 620)
APIs
speech_to_text Flutter Package (screen reader announcement requirement)
Data Models
accessibility_preferences
Performance Requirements
Live region Semantics announcements must fire within 300 ms of the state transition to be perceivable
Suppressing the waveform animation must not cause a visible layout shift or extra frame drop
Security Requirements
Accessibility audit must not expose any PII in Semantics labels or live region strings — labels describe function, not data
The screen reader announcement when the microphone is active is a GDPR-adjacent privacy-notice requirement per the speech_to_text security model
UI Components
DictationMicrophoneButton
RecordingStateIndicator (component 657)
TranscriptionPreviewField (component 658)
AccessibilityTestHarness (component 619)
WcagComplianceChecker (component 620)

Execution Context

Execution Tier
Tier 5

Tier 5 - 253 tasks

Can start after Tier 4 completes

Implementation Notes

Flutter's Semantics widget supports liveRegion: true for announcing state changes — wrap the status text in RecordingStateIndicator with Semantics(liveRegion: true, label: stateLabel). For touch-target enforcement, wrap the GestureDetector or InkWell in a ConstrainedBox(constraints: BoxConstraints(minWidth: 44, minHeight: 44)) — do not rely on the visual icon size alone. For contrast, check the design token values against the WCAG contrast formula; if dark-mode tokens are used, test both themes. Use MediaQuery.of(context).disableAnimations to gate the waveform animation — when it is true, set the AnimationController's duration to Duration.zero and call stop().
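The patterns above can be sketched together in one widget. This is a minimal illustration, not the actual component source: the DictationState enum, the label strings, and the constructor shape are assumptions; only the Semantics, ConstrainedBox, and 44×44 requirements come from this ticket.

```dart
import 'package:flutter/material.dart';

// Illustrative state enum; the real component's state type may differ.
enum DictationState { idle, recording, processing, error }

class DictationMicrophoneButton extends StatelessWidget {
  const DictationMicrophoneButton({
    super.key,
    required this.state,
    required this.onPressed,
  });

  final DictationState state;
  final VoidCallback onPressed;

  // Label updates dynamically with state, as the acceptance criteria require.
  String get _stateLabel => switch (state) {
        DictationState.idle => 'Start recording',
        DictationState.recording => 'Stop recording',
        DictationState.processing => 'Transcription processing',
        DictationState.error => 'Recording failed, try again',
      };

  @override
  Widget build(BuildContext context) {
    return Semantics(
      button: true,
      label: _stateLabel,
      child: ConstrainedBox(
        // Enforce the 44×44 logical-pixel hit area regardless of icon size.
        constraints: const BoxConstraints(minWidth: 44, minHeight: 44),
        child: InkWell(
          onTap: onPressed,
          child: Icon(
            state == DictationState.recording ? Icons.stop : Icons.mic,
          ),
        ),
      ),
    );
  }
}

// Live-region status text inside RecordingStateIndicator (sketch):
// Semantics(liveRegion: true, label: stateLabel, child: Text(stateLabel))
```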

The AccessibilityPreferences data model (font_scale_factor, contrast_mode, haptic_feedback_enabled) should be read via Riverpod provider and passed into widget constructors to ensure audit covers scaled-font and high-contrast states.

Testing Requirements

Use flutter_test with AccessibilityTestHarness to programmatically verify touch target sizes and Semantics labels for all three components in all states. Use WcagComplianceChecker to generate a contrast ratio report — attach the report output to the PR. Perform manual testing on a physical iOS device (VoiceOver) and an Android device (TalkBack) for live region announcements — document results in a checklist. Test reduce-motion suppression by wrapping the widget under test in a MediaQuery with MediaQueryData(disableAnimations: true) in a widget test and asserting that the waveform AnimationController is not running.
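The size and reduce-motion assertions could look like the following sketch. A stub widget stands in for DictationMicrophoneButton so the example is self-contained; the real suite would pump the actual component and additionally assert on its AnimationController:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_test/flutter_test.dart';

// Stand-in for DictationMicrophoneButton; replace with the real widget.
class _StubMicButton extends StatelessWidget {
  const _StubMicButton();

  @override
  Widget build(BuildContext context) {
    return ConstrainedBox(
      constraints: const BoxConstraints(minWidth: 44, minHeight: 44),
      child: const Icon(Icons.mic),
    );
  }
}

void main() {
  testWidgets('mic button hit area is at least 44x44', (tester) async {
    await tester.pumpWidget(
      const MaterialApp(home: Center(child: _StubMicButton())),
    );
    final size = tester.getSize(find.byType(_StubMicButton));
    expect(size.width, greaterThanOrEqualTo(44));
    expect(size.height, greaterThanOrEqualTo(44));
  });

  testWidgets('reduce motion flag reaches the widget', (tester) async {
    await tester.pumpWidget(
      const MaterialApp(
        home: MediaQuery(
          data: MediaQueryData(disableAnimations: true),
          child: _StubMicButton(),
        ),
      ),
    );
    // The real test would also assert the waveform controller is not running.
    final context = tester.element(find.byType(_StubMicButton));
    expect(MediaQuery.of(context).disableAnimations, isTrue);
  });
}
```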

All automated assertions must pass in CI; manual results documented in PR description.

Component
Recording State Indicator
Type: ui | Priority: low
Epic Risks (3)
Risk 1: technical (medium impact, medium probability)

Merging dictated text at the current cursor position in a TextField that already contains user-typed content is non-trivial in Flutter — TextEditingController cursor offsets can behave unexpectedly with IME composition, emoji, or RTL characters, potentially corrupting the user's existing notes.

Mitigation & Contingency

Mitigation: Implement the merge logic using TextEditingController.value replacement with explicit selection range calculation rather than direct text manipulation. Write targeted widget tests covering edge cases: cursor at start, cursor at end, cursor mid-word, existing content with emoji, and content that was modified during an active partial-results stream.
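A minimal sketch of the value-replacement merge described in the mitigation. The helper name insertDictatedText is illustrative; the key points from the mitigation are replacing controller.value atomically (rather than assigning controller.text, which resets the selection) and computing the new selection range explicitly:

```dart
import 'package:flutter/widgets.dart';

/// Inserts [dictated] at the current cursor/selection of [controller],
/// replacing any selected text, and places the cursor after the insert.
void insertDictatedText(TextEditingController controller, String dictated) {
  final value = controller.value;

  // If there is no valid selection (e.g. the field was never focused),
  // fall back to appending at the end of the existing content.
  final selection = value.selection.isValid
      ? value.selection
      : TextSelection.collapsed(offset: value.text.length);

  // Selection offsets are UTF-16 code units, so replaceRange stays
  // consistent with emoji and other surrogate-pair characters.
  final newText = value.text.replaceRange(
    selection.start,
    selection.end,
    dictated,
  );

  controller.value = TextEditingValue(
    text: newText,
    selection: TextSelection.collapsed(
      offset: selection.start + dictated.length,
    ),
    // Clear the composing region so in-progress IME composition
    // is not left pointing at stale offsets.
    composing: TextRange.empty,
  );
}
```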

Contingency: If cursor-position merging proves too fragile for the initial release, scope the merge behaviour to always append dictated text at the end of the existing field content and add the cursor-position insertion as a follow-on task after the feature is in TestFlight with real user feedback.

Risk 2: technical (high impact, medium probability)

VoiceOver on iOS and TalkBack on Android handle rapid sequential live region announcements differently. If recording start, partial-result, and recording-stop announcements arrive within a short window, they may queue, overlap, or be dropped, leaving screen reader users without critical state information.

Mitigation & Contingency

Mitigation: Implement announcement queuing in AccessibilityLiveRegionAnnouncer with a minimum inter-announcement delay and priority ordering (assertive recording start/stop always takes precedence over polite partial-result updates). Test announcement behaviour on physical iOS and Android devices with VoiceOver/TalkBack enabled as part of the acceptance test plan.
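The queuing idea could be sketched as below. The class name mirrors the AccessibilityLiveRegionAnnouncer mentioned in the mitigation, but its interface here is an assumption; only SemanticsService.announce and the minimum inter-announcement gap come from the described approach:

```dart
import 'dart:async';
import 'dart:collection';
import 'dart:ui' show TextDirection;

import 'package:flutter/semantics.dart';

class AccessibilityLiveRegionAnnouncer {
  AccessibilityLiveRegionAnnouncer({
    this.minGap = const Duration(milliseconds: 300),
  });

  /// Minimum delay between consecutive announcements.
  final Duration minGap;

  final Queue<String> _queue = Queue<String>();
  Timer? _timer;

  void announce(String message, {bool highPriority = false}) {
    if (highPriority) {
      // Recording start/stop jumps ahead of queued partial-result updates.
      _queue.addFirst(message);
    } else {
      _queue.addLast(message);
    }
    // Start draining immediately if no drain cycle is in flight.
    _timer ??= Timer(Duration.zero, _drain);
  }

  void _drain() {
    if (_queue.isEmpty) {
      _timer = null;
      return;
    }
    SemanticsService.announce(_queue.removeFirst(), TextDirection.ltr);
    _timer = Timer(minGap, _drain);
  }
}
```

Debouncing partial results into a single 'transcription updating' message, as in the contingency, would then just mean collapsing consecutive low-priority entries before they reach the queue.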

Contingency: If platform differences make reliable queuing impossible, reduce partial-result announcements to a single 'transcription updating' message with debouncing, preserving the critical start/stop announcements. Coordinate with the screen-reader-support feature team to leverage the existing SemanticsServiceFacade patterns already established in the codebase.

Risk 3: integration (medium impact, low probability)

The DictationMicrophoneButton must integrate with the dynamic-field-renderer which generates form fields from org-specific schemas at runtime. If the renderer does not expose a stable field metadata API for dictation eligibility checks, the scope guard and button visibility logic will require invasive changes to the report form architecture.

Mitigation & Contingency

Mitigation: Coordinate with the post-session report feature team early in the epic to confirm that dynamic-field-renderer exposes a field metadata interface including field type and sensitivity flags. Add a dictation_eligible flag to the field schema that the renderer passes to DictationMicrophoneButton as a constructor parameter.
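The eligibility check itself is small; a sketch, assuming the schema is exposed as a map and that the renderer surfaces both the proposed dictation_eligible flag and a sensitivity flag (the key names are assumptions):

```dart
/// Returns whether a schema-driven field should show the dictation button.
/// Only the dictation_eligible flag name comes from the mitigation above;
/// the map-based schema shape and the 'sensitive' key are illustrative.
bool isDictationEligible(Map<String, dynamic> fieldSchema) {
  return fieldSchema['dictation_eligible'] == true &&
      fieldSchema['sensitive'] != true;
}
```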

Contingency: If the renderer cannot be modified without breaking changes, implement dictation eligibility as a separate lookup against org-field-config-loader using the field key as the lookup identifier, bypassing the renderer integration and keeping the dictation components fully decoupled from the report form architecture.