Integrate dictation UI into post-session report screen
epic-speech-to-text-input-user-interface-task-009 — Wire DictationMicrophoneButton, RecordingStateIndicator, and TranscriptionPreviewField into the post-session report screen (component 076-post-session-report-screen), specifically within the way-forward section (078-way-forward-section-widget). The microphone button must appear inline next to each free-text field. Only one dictation session may be active at a time across all fields on the screen.
Acceptance Criteria
Technical Requirements
Execution Context
Tier 3 - 413 tasks
Can start after Tier 2 completes
Implementation Notes
Introduce a `ScreenDictationCoordinator` Riverpod provider (or extend TranscriptionStateManager) that tracks which field ID currently owns the active dictation session. DictationMicrophoneButton receives its field ID and checks the coordinator to determine if it is the active owner. Each free-text field in WayForwardSectionWidget should be replaced with TranscriptionPreviewField, passing a fieldId string that the microphone button also receives. Place the single RecordingStateIndicator instance in a Stack at the PostSessionReportScreen level, positioned using `Positioned` to appear above the keyboard and below the app bar.
Use Riverpod's `select` to avoid unnecessary rebuilds — each microphone button only rebuilds when the active field ID changes, not on every partial transcription update. For the Activity data model: ensure the dictated text is written to the same field key that keyboard input uses — no separate 'dictated' field; this is purely a UI input mechanism.
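The coordinator and the `select`-based rebuild scoping could be sketched as follows. This is a minimal illustration, not the final implementation; the cancellation hook and provider names other than `ScreenDictationCoordinator` are assumptions.

```dart
import 'package:flutter_riverpod/flutter_riverpod.dart';

/// Tracks which field ID (if any) currently owns the active dictation
/// session. State is the owning field ID, or null when idle.
class ScreenDictationCoordinator extends Notifier<String?> {
  @override
  String? build() => null; // no field is dictating initially

  /// Claims the session for [fieldId], cancelling any other field's
  /// session first (single active session across the screen).
  void requestSession(String fieldId) {
    if (state != null && state != fieldId) {
      // Hypothetical hook: tell TranscriptionStateManager to cancel the
      // previous field's recording before handing ownership over.
    }
    state = fieldId;
  }

  /// Releases the session, but only if [fieldId] actually owns it.
  void endSession(String fieldId) {
    if (state == fieldId) state = null;
  }
}

final screenDictationCoordinatorProvider =
    NotifierProvider<ScreenDictationCoordinator, String?>(
        ScreenDictationCoordinator.new);
```

Inside `DictationMicrophoneButton.build`, `select` keeps each button from rebuilding on partial transcription updates:

```dart
// Rebuilds only when *this* field's ownership status flips.
final isActive = ref.watch(screenDictationCoordinatorProvider
    .select((activeId) => activeId == fieldId));
```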
Testing Requirements
Integration tests (flutter_test with real widget tree): Mount the post-session report screen with all dictation components. Assert that a microphone button appears for each free-text field in the way-forward section. Simulate tapping the mic button for field A, then assert that TranscriptionStateManager transitions to recording and that the other mic buttons are disabled. Simulate tapping the mic button for field B while field A is recording, and assert that field A's session is cancelled first. Drive a final transcription result and assert that only field A's TextEditingController is updated. Assert that the report save action submits the correct Activity model data.
Unit tests: Test the single-session-at-a-time enforcement logic in TranscriptionStateManager or a ScreenDictationCoordinator. Test the DictationScopeGuard evaluation logic.
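The field A / field B cancellation scenario above might take roughly this shape in flutter_test. The widget keys and the `buildReportScreenUnderTest` harness are illustrative assumptions, not existing fixtures.

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  testWidgets('tapping mic for field B cancels field A first', (tester) async {
    // Hypothetical harness that wraps PostSessionReportScreen in a
    // ProviderScope with faked speech services.
    await tester.pumpWidget(buildReportScreenUnderTest());

    await tester.tap(find.byKey(const ValueKey('mic-way-forward-a')));
    await tester.pump();
    // Assert here: TranscriptionStateManager is recording for field A,
    // and every other mic button is disabled.

    await tester.tap(find.byKey(const ValueKey('mic-way-forward-b')));
    await tester.pump();
    // Assert here: field A's session was cancelled before field B's started.
  });
}
```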
Manual/E2E tests: Full end-to-end on device — dictate into each free-text field, verify merge, save report, verify Supabase Activity record contains the dictated text. Test with VoiceOver/TalkBack active throughout.
Merging dictated text at the current cursor position in a TextField that already contains user-typed content is non-trivial in Flutter — TextEditingController cursor offsets can behave unexpectedly with IME composition, emoji, or RTL characters, potentially corrupting the user's existing notes.
Mitigation & Contingency
Mitigation: Implement the merge logic using TextEditingController.value replacement with explicit selection range calculation rather than direct text manipulation. Write targeted widget tests covering edge cases: cursor at start, cursor at end, cursor mid-word, existing content with emoji, and content that was modified during an active partial-results stream.
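A sketch of the value-replacement approach, assuming the final transcript arrives as a plain string. Using a single assignment to `controller.value` keeps text, selection, and composing region consistent, rather than mutating `controller.text` directly:

```dart
import 'package:flutter/widgets.dart';

/// Inserts [dictated] at the current cursor position, or replaces the
/// current selection if one exists. Falls back to appending when the
/// field has never been focused and has no valid selection.
void mergeDictatedText(TextEditingController controller, String dictated) {
  final value = controller.value;
  final sel = value.selection;
  final start = sel.isValid ? sel.start : value.text.length;
  final end = sel.isValid ? sel.end : value.text.length;
  final newText = value.text.replaceRange(start, end, dictated);
  controller.value = value.copyWith(
    text: newText,
    // Place the cursor immediately after the inserted text.
    selection: TextSelection.collapsed(offset: start + dictated.length),
    // Drop any in-progress IME composition so it cannot corrupt the merge.
    composing: TextRange.empty,
  );
}
```

Note this operates on UTF-16 offsets, so the emoji and RTL edge cases named above still need the targeted widget tests; the sketch only guarantees internal consistency of the controller value.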
Contingency: If cursor-position merging proves too fragile for the initial release, scope the merge behaviour to always append dictated text at the end of the existing field content and add the cursor-position insertion as a follow-on task after the feature is in TestFlight with real user feedback.
VoiceOver on iOS and TalkBack on Android handle rapid sequential live region announcements differently. If recording start, partial-result, and recording-stop announcements arrive within a short window, they may queue, overlap, or be dropped, leaving screen reader users without critical state information.
Mitigation & Contingency
Mitigation: Implement announcement queuing in AccessibilityLiveRegionAnnouncer with a minimum inter-announcement delay and priority ordering (assertive recording start/stop always takes precedence over polite partial-result updates). Test announcement behaviour on physical iOS and Android devices with VoiceOver/TalkBack enabled as part of the acceptance test plan.
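The queuing idea could be sketched like this; the 800 ms gap is an assumed starting value to tune on device, and `_speak` stands in for the actual `SemanticsService` call behind AccessibilityLiveRegionAnnouncer:

```dart
import 'dart:async';
import 'dart:collection';

enum AnnouncementPriority { assertive, polite }

/// Serialises screen reader announcements with a minimum inter-announcement
/// gap. Assertive messages (recording start/stop) jump the queue ahead of
/// polite partial-result updates.
class AnnouncementQueue {
  AnnouncementQueue({this.minGap = const Duration(milliseconds: 800)});

  final Duration minGap;
  final Queue<(AnnouncementPriority, String)> _queue = Queue();
  Timer? _timer;

  void announce(String message, AnnouncementPriority priority) {
    if (priority == AnnouncementPriority.assertive) {
      _queue.addFirst((priority, message)); // start/stop takes precedence
    } else {
      _queue.addLast((priority, message));
    }
    _timer ??= Timer(Duration.zero, _drain);
  }

  void _drain() {
    if (_queue.isEmpty) {
      _timer = null;
      return;
    }
    final (_, message) = _queue.removeFirst();
    _speak(message);
    _timer = Timer(minGap, _drain); // wait out the gap before the next one
  }

  void _speak(String message) {
    // e.g. SemanticsService.announce(message, TextDirection.ltr);
  }
}
```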
Contingency: If platform differences make reliable queuing impossible, reduce partial-result announcements to a single 'transcription updating' message with debouncing, preserving the critical start/stop announcements. Coordinate with the screen-reader-support feature team to leverage the existing SemanticsServiceFacade patterns already established in the codebase.
The DictationMicrophoneButton must integrate with the dynamic-field-renderer which generates form fields from org-specific schemas at runtime. If the renderer does not expose a stable field metadata API for dictation eligibility checks, the scope guard and button visibility logic will require invasive changes to the report form architecture.
Mitigation & Contingency
Mitigation: Coordinate with the post-session report feature team early in the epic to confirm that dynamic-field-renderer exposes a field metadata interface including field type and sensitivity flags. Add a dictation_eligible flag to the field schema that the renderer passes to DictationMicrophoneButton as a constructor parameter.
Contingency: If the renderer cannot be modified without breaking changes, implement dictation eligibility as a separate lookup against org-field-config-loader using the field key as the lookup identifier, bypassing the renderer integration and keeping the dictation components fully decoupled from the report form architecture.
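The contingency lookup could be as small as the following. The config shape and the `sensitive` flag are assumptions about what org-field-config-loader exposes:

```dart
/// Resolves dictation eligibility from org field config by field key,
/// bypassing the dynamic-field-renderer entirely. Unknown keys and
/// sensitive fields default to not eligible.
bool isDictationEligible(
    String fieldKey, Map<String, Map<String, Object?>> orgFieldConfig) {
  final field = orgFieldConfig[fieldKey];
  return field != null &&
      field['dictation_eligible'] == true &&
      field['sensitive'] != true;
}
```

Defaulting to ineligible keeps the fail-safe direction correct: a missing config entry hides the mic button rather than exposing dictation on a sensitive field.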