Priority: critical | Complexity: low | Area: frontend | Status: pending | Assignee: frontend specialist | Tier 1

Acceptance Criteria

Transitioning to 'recording' state triggers a polite VoiceOver/TalkBack announcement reading 'Recording started' within 200ms of state entry
Transitioning to 'processing' state triggers a polite announcement reading 'Processing speech' within 200ms of state entry
Transitioning to 'ready' state triggers a polite announcement reading 'Transcription ready' within 200ms of state entry
Transitioning to 'cancelled' state triggers a polite announcement reading 'Recording cancelled' within 200ms of state entry
Error state transitions trigger an assertive announcement containing the specific error message text
No announcement is emitted when the indicator transitions to 'hidden' state
Announcements do not queue multiple times if rapid state transitions occur — only the final state announcement is emitted
On iOS, announcements are verified with VoiceOver enabled (Settings > Accessibility > VoiceOver) and confirmed audible
On Android, announcements are verified with TalkBack enabled and confirmed audible
Announcement strings are localizable — no hardcoded strings in widget logic; all strings sourced from AppLocalizations
AccessibilityLiveRegionAnnouncer component (664) is injected via constructor parameter or Riverpod provider — not instantiated inline
Widget renders correctly with accessibility features both enabled and disabled

Technical Requirements

Frameworks
Flutter
Riverpod

APIs
Flutter Semantics API (SemanticsService.announce)
AccessibilityLiveRegionAnnouncer (component 664)
TranscriptionStateManager provider

Performance Requirements
Announcement emitted within 200ms of state transition
Zero UI frame drops caused by semantics announcement calls — SemanticsService.announce is async fire-and-forget

Security Requirements
Announcement text must not include any personal data or partial transcription content — only state labels
DictationScopeGuard must remain active; announcements must not fire when microphone is blocked for sensitive peer conversations

UI Components
RecordingStateIndicator (component 657)
AccessibilityLiveRegionAnnouncer (component 664)

Execution Context

Execution Tier
Tier 1

Tier 1 - 540 tasks

Can start after Tier 0 completes

Implementation Notes

Use `SemanticsService.announce(message, textDirection, assertiveness: Assertiveness.polite/assertive)` — this is the cross-platform Flutter API that maps to UIAccessibilityPostNotification on iOS and AccessibilityEvent.TYPE_ANNOUNCEMENT on Android. Do NOT rely on `Semantics(liveRegion: true)` alone: it does not reliably trigger announcements on state changes unless the widget rebuilds with new text content.

Wire the announcer by listening to the TranscriptionStateManager stream in a `ref.listen` call inside the widget's build method (the Riverpod pattern) — do not use initState, which invites lifecycle issues. Map each TranscriptionState enum value to its announcement string via a switch expression to ensure exhaustive coverage and compile-time safety.
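The wiring above can be sketched as follows. This is illustrative only: `transcriptionStateManagerProvider`, the `TranscriptionState` enum shape, and the `AppLocalizations` getter names are assumptions, not confirmed APIs from the codebase.

```dart
import 'package:flutter/material.dart';
import 'package:flutter/semantics.dart';
import 'package:flutter_riverpod/flutter_riverpod.dart';

// Assumed state shape; the real enum lives with TranscriptionStateManager.
enum TranscriptionState { hidden, recording, processing, ready, cancelled, error }

class RecordingStateIndicator extends ConsumerWidget {
  const RecordingStateIndicator({super.key});

  @override
  Widget build(BuildContext context, WidgetRef ref) {
    // ref.listen in build is the Riverpod-sanctioned place for side effects
    // driven by provider changes; it is re-registered safely on rebuilds.
    ref.listen<TranscriptionState>(transcriptionStateManagerProvider,
        (previous, next) {
      if (previous == next) return; // skip duplicate reads on rebuilds
      final l10n = AppLocalizations.of(context)!; // getters below are hypothetical
      // Exhaustive switch expression: a new enum value without a branch
      // becomes a compile-time error.
      final String? message = switch (next) {
        TranscriptionState.recording => l10n.recordingStarted,
        TranscriptionState.processing => l10n.processingSpeech,
        TranscriptionState.ready => l10n.transcriptionReady,
        TranscriptionState.cancelled => l10n.recordingCancelled,
        TranscriptionState.error => l10n.recordingError,
        TranscriptionState.hidden => null, // no announcement for 'hidden'
      };
      if (message == null) return;
      SemanticsService.announce(
        message,
        Directionality.of(context),
        assertiveness: next == TranscriptionState.error
            ? Assertiveness.assertive
            : Assertiveness.polite,
      );
    });
    return const SizedBox.shrink(); // actual indicator UI goes here
  }
}
```

Note that `SemanticsService.announce` returns a `Future` but can be treated as fire-and-forget, which satisfies the zero-frame-drop requirement.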

For error states, extract the error message from the state object and append it to the assertive announcement. Add a guard: if the previous state equals the new state, skip the announcement to prevent duplicate reads on widget rebuilds.
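If the state manager models errors as a variant carrying a message (rather than a plain enum), the extraction and duplicate guard might look like the following sketch; the class and field names are hypothetical:

```dart
// Hypothetical sealed-class modeling of the transcription state.
sealed class TranscriptionStatus {
  const TranscriptionStatus();
}

class TranscriptionError extends TranscriptionStatus {
  const TranscriptionError(this.message);
  final String message;
}

/// Returns the assertive announcement text for an error transition,
/// or null when no announcement should be made.
String? errorAnnouncement(
  TranscriptionStatus? previous,
  TranscriptionStatus next,
  String localizedPrefix, // e.g. a localized "Dictation error" label
) {
  if (previous == next) return null; // guard: no duplicate reads on rebuilds
  if (next is TranscriptionError) {
    // Append the specific error text to the localized prefix, per the
    // acceptance criterion for assertive error announcements.
    return '$localizedPrefix: ${next.message}';
  }
  return null;
}
```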

Testing Requirements

Unit tests: Test that each TranscriptionState value triggers the correct SemanticsService.announce call with the correct politeness level. Because SemanticsService exposes only static methods, intercept the announcement via the accessibility platform channel (SystemChannels.accessibility) in tests rather than mocking the class directly. Assert that error states use Assertiveness.assertive and all other states use Assertiveness.polite. Assert no announcement fires on the 'hidden' state. Assert rapid multi-transition sequences result in only the final announcement.
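One way to capture announcements in a widget test is to install a mock handler on the accessibility channel, since `SemanticsService.announce` ultimately sends a message over `SystemChannels.accessibility`. The data-map layout varies by Flutter version, so the sketch below only asserts on the event type:

```dart
import 'package:flutter/services.dart';
import 'package:flutter_test/flutter_test.dart';

void main() {
  testWidgets('state transition emits an announce event', (tester) async {
    final log = <Object?>[];
    // Capture everything sent on the accessibility channel.
    tester.binding.defaultBinaryMessenger.setMockDecodedMessageHandler<Object?>(
      SystemChannels.accessibility,
      (Object? message) async {
        log.add(message);
        return null;
      },
    );

    // ... pump RecordingStateIndicator with a fake TranscriptionStateManager
    // and drive it into the 'ready' state ...

    final last = log.last as Map<Object?, Object?>;
    expect(last['type'], 'announce');
    // The shape of last['data'] (message, textDirection, assertiveness)
    // should be verified against the Flutter version in use.
  });
}
```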

Widget tests: Render RecordingStateIndicator in a testable widget tree with a fake TranscriptionStateManager; drive state changes and verify Semantics tree contains expected live-region nodes. Manual tests: Run on a physical iOS device with VoiceOver active — confirm each announcement is audible and correct. Run on a physical Android device with TalkBack active — confirm equivalence. WCAG 2.2 AA conformance check for live region politeness.

Component
Recording State Indicator
Category: ui | Complexity: low
Epic Risks (3)
Risk 1: technical (medium impact, medium probability)

Merging dictated text at the current cursor position in a TextField that already contains user-typed content is non-trivial in Flutter — TextEditingController cursor offsets can behave unexpectedly with IME composition, emoji, or RTL characters, potentially corrupting the user's existing notes.

Mitigation & Contingency

Mitigation: Implement the merge logic using TextEditingController.value replacement with explicit selection range calculation rather than direct text manipulation. Write targeted widget tests covering edge cases: cursor at start, cursor at end, cursor mid-word, existing content with emoji, and content that was modified during an active partial-results stream.
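A minimal sketch of the value-replacement approach, assuming the dictated text should replace any active selection and land at the cursor; the helper name is illustrative:

```dart
import 'package:flutter/widgets.dart';

/// Inserts [dictated] at the current selection of [controller] using a full
/// TextEditingValue replacement rather than direct string splicing.
void insertDictatedText(TextEditingController controller, String dictated) {
  final value = controller.value;
  final selection = value.selection;

  // Fall back to appending when there is no valid selection (e.g. the
  // field has never been focused) — this matches the contingency behaviour.
  final start = selection.isValid ? selection.start : value.text.length;
  final end = selection.isValid ? selection.end : value.text.length;

  final newText = value.text.replaceRange(start, end, dictated);
  controller.value = value.copyWith(
    text: newText,
    // Collapse the caret to just after the inserted text. Offsets are
    // UTF-16 code units, so emoji and surrogate pairs stay intact as long
    // as start/end came from a valid selection.
    selection: TextSelection.collapsed(offset: start + dictated.length),
    // Clear any in-flight IME composition to avoid the corruption cases
    // flagged in the risk description.
    composing: TextRange.empty,
  );
}
```

Assigning a whole `TextEditingValue` keeps text, selection, and composing region consistent in one atomic update, which is the point of the mitigation.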

Contingency: If cursor-position merging proves too fragile for the initial release, scope the merge behaviour to always append dictated text at the end of the existing field content and add the cursor-position insertion as a follow-on task after the feature is in TestFlight with real user feedback.

Risk 2: technical (high impact, medium probability)

VoiceOver on iOS and TalkBack on Android handle rapid sequential live region announcements differently. If recording start, partial-result, and recording-stop announcements arrive within a short window, they may queue, overlap, or be dropped, leaving screen reader users without critical state information.

Mitigation & Contingency

Mitigation: Implement announcement queuing in AccessibilityLiveRegionAnnouncer with a minimum inter-announcement delay and priority ordering (assertive recording start/stop always takes precedence over polite partial-result updates). Test announcement behaviour on physical iOS and Android devices with VoiceOver/TalkBack enabled as part of the acceptance test plan.
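The queuing described above could take roughly this shape; the 500ms gap and the class internals are assumptions to be tuned against on-device VoiceOver/TalkBack behaviour, not a confirmed design:

```dart
import 'dart:async';
import 'package:flutter/semantics.dart';

/// Coalesces rapid announcements: at most one emission per [minGap], with
/// assertive messages taking precedence over queued polite ones.
class AccessibilityLiveRegionAnnouncer {
  AccessibilityLiveRegionAnnouncer(
      {this.minGap = const Duration(milliseconds: 500)}); // assumed value

  final Duration minGap;
  DateTime _lastEmit = DateTime.fromMillisecondsSinceEpoch(0);
  Timer? _timer;
  (String, TextDirection, Assertiveness)? _pending;

  void announce(String message, TextDirection dir, Assertiveness a) {
    final pending = _pending;
    // An assertive message displaces a queued polite one; a polite message
    // never displaces a queued assertive one.
    if (pending == null ||
        a == Assertiveness.assertive ||
        pending.$3 == Assertiveness.polite) {
      _pending = (message, dir, a);
    }
    final wait = minGap - DateTime.now().difference(_lastEmit);
    if (wait <= Duration.zero) {
      _flush(); // enough time has passed: emit immediately
    } else {
      _timer ??= Timer(wait, _flush); // otherwise emit once the gap elapses
    }
  }

  void _flush() {
    _timer = null;
    final p = _pending;
    _pending = null;
    if (p == null) return;
    _lastEmit = DateTime.now();
    SemanticsService.announce(p.$1, p.$2, assertiveness: p.$3);
  }
}
```

Because only the latest compatible message survives the gap, rapid transitions collapse to a single final announcement, matching the acceptance criterion.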

Contingency: If platform differences make reliable queuing impossible, reduce partial-result announcements to a single 'transcription updating' message with debouncing, preserving the critical start/stop announcements. Coordinate with the screen-reader-support feature team to leverage the existing SemanticsServiceFacade patterns already established in the codebase.

Risk 3: integration (medium impact, low probability)

The DictationMicrophoneButton must integrate with the dynamic-field-renderer which generates form fields from org-specific schemas at runtime. If the renderer does not expose a stable field metadata API for dictation eligibility checks, the scope guard and button visibility logic will require invasive changes to the report form architecture.

Mitigation & Contingency

Mitigation: Coordinate with the post-session report feature team early in the epic to confirm that dynamic-field-renderer exposes a field metadata interface including field type and sensitivity flags. Add a dictation_eligible flag to the field schema that the renderer passes to DictationMicrophoneButton as a constructor parameter.

Contingency: If the renderer cannot be modified without breaking changes, implement dictation eligibility as a separate lookup against org-field-config-loader using the field key as the lookup identifier, bypassing the renderer integration and keeping the dictation components fully decoupled from the report form architecture.