Priority: high · Complexity: medium · Type: integration · Status: pending · Assignee: frontend specialist · Tier 4

Acceptance Criteria

DynamicFieldRenderer renders a DictationMicrophoneButton immediately adjacent to every field with type 'text' or 'textarea' when speech_to_text is available on the device
Fields with types other than 'text' and 'textarea' (e.g., date, number, select) do NOT render a DictationMicrophoneButton — no visual or behavioural change
Each DictationMicrophoneButton receives the correct fieldKey and fieldId from the enclosing renderer, verified by inspecting the widget tree in tests
TranscriptionPreviewField receives a cursor-position callback bound to the specific TextEditingController of the target field, so transcription merges at the cursor of the correct field even when multiple dictation fields exist on the same screen
Tapping the microphone button on field A while field B is active does not corrupt field B's content
The integration compiles and renders correctly when DictationAvailabilityState is 'unavailable' — button is hidden, field layout unchanged
No existing golden-file snapshots break for non-dictation field types
The feature flag / availability guard is evaluated once per renderer build cycle, not per keystroke, to avoid unnecessary rebuilds

Technical Requirements

Frameworks
Flutter
BLoC
Riverpod
flutter_test
APIs
speech_to_text Flutter Package (iOS SFSpeechRecognizer / Android SpeechRecognizer)
Performance Requirements
DynamicFieldRenderer rebuild triggered by dictation state changes must complete within one frame (16 ms) to avoid jank
Conditional widget insertion must not cause layout reflow for fields that do not have dictation enabled
TranscriptionStateManager BLoC event dispatch from button tap must be handled synchronously within the same event loop tick
Security Requirements
DictationScopeGuard must be active — microphone cannot be enabled during sensitive peer-conversation screens
Microphone permission must be checked before rendering an active DictationMicrophoneButton; display a disabled state if permission is denied
Audio never uploaded to third-party servers — on-device recognition only per speech_to_text package security model
UI Components
DictationMicrophoneButton (component 077 integration point)
TranscriptionPreviewField (component 658)
DynamicFieldRenderer (component 077)
RecordingStateIndicator (component 657)

Execution Context

Execution Tier
Tier 4

Tier 4 - 323 tasks

Can start after Tier 3 completes

Implementation Notes

Use a thin wrapper widget (e.g., DictationAwareFieldWrapper) rather than embedding dictation logic directly in DynamicFieldRenderer; this keeps the renderer schema-agnostic. The wrapper reads the current DictationAvailabilityState (e.g. via `context.read`) and checks the field's type before conditionally inserting the button. Pass a `TextEditingController` reference and a `FocusNode` down to both the base field widget and TranscriptionPreviewField so the cursor position is shared by reference, not copied. Avoid storing the cursor offset as a plain integer; use the controller's `selection` property directly.
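The wrapper described above might look like the following minimal sketch. The bloc name (`DictationAvailabilityBloc`), its `isAvailable` getter, and the button's constructor parameters are assumptions for illustration, not confirmed APIs:

```dart
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';

class DictationAwareFieldWrapper extends StatelessWidget {
  const DictationAwareFieldWrapper({
    super.key,
    required this.fieldKey,
    required this.fieldType,
    required this.controller,
    required this.focusNode,
    required this.child,
  });

  final String fieldKey;
  final String fieldType;
  final TextEditingController controller;
  final FocusNode focusNode;
  final Widget child;

  // Only these schema types are dictation-eligible per the acceptance criteria.
  static const _dictationTypes = {'text', 'textarea'};

  @override
  Widget build(BuildContext context) {
    // Availability is evaluated once per build cycle, not per keystroke.
    final available =
        context.watch<DictationAvailabilityBloc>().state.isAvailable;
    if (!available || !_dictationTypes.contains(fieldType)) {
      return child; // no visual or behavioural change for other field types
    }
    return Row(
      children: [
        Expanded(child: child),
        DictationMicrophoneButton(
          fieldKey: fieldKey,
          controller: controller, // shared by reference with the base field
          focusNode: focusNode,
        ),
      ],
    );
  }
}
```

Keeping the wrapper stateless means all mutable dictation state stays in the BLoC, which matches the note below about letting BLoC own state.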

The DictationScopeGuard should be an InheritedWidget ancestor checked inside the wrapper; if the guard is active, render the button as permanently disabled with a tooltip explaining why. Keep the wrapper stateless where possible — let BLoC own all mutable state.
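A minimal sketch of the guard as an InheritedWidget follows; the `restricted` field and `isRestricted` lookup are illustrative shapes, not an existing API:

```dart
import 'package:flutter/widgets.dart';

class DictationScopeGuard extends InheritedWidget {
  const DictationScopeGuard({
    super.key,
    required this.restricted,
    required super.child,
  });

  /// True on sensitive screens (e.g. peer conversations) where the
  /// microphone must stay disabled.
  final bool restricted;

  static bool isRestricted(BuildContext context) =>
      context
          .dependOnInheritedWidgetOfExactType<DictationScopeGuard>()
          ?.restricted ??
      false; // defaulting open here for brevity; a fail-closed default
             // (true) may be preferable for the security requirement

  @override
  bool updateShouldNotify(DictationScopeGuard oldWidget) =>
      restricted != oldWidget.restricted;
}
```

The wrapper would call `DictationScopeGuard.isRestricted(context)` in `build` and, when true, render the button in its permanently disabled state with the explanatory tooltip.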

Testing Requirements

Write flutter_test widget tests:
1. Render DynamicFieldRenderer with a schema containing one 'text' field and assert DictationMicrophoneButton is present.
2. Render with a 'date' field and assert DictationMicrophoneButton is absent.
3. Mock DictationAvailabilityState as unavailable and assert the button is hidden.
4. Tap the button and verify the correct fieldKey is dispatched via BLoC.
5. Simulate two 'text' fields on the same screen and verify cursor merging targets the tapped field only.
Achieve 100% branch coverage on the conditional rendering logic. No integration tests are required for this task; that coverage belongs to task-012.
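Tests (1) and (2) can be sketched as below. The helpers `wrapWithApp` and `buildTestSchema`, and the renderer's `schema` parameter, are hypothetical names standing in for whatever the codebase provides:

```dart
import 'package:flutter_test/flutter_test.dart';

void main() {
  testWidgets('text field renders a microphone button', (tester) async {
    await tester.pumpWidget(wrapWithApp(
      DynamicFieldRenderer(schema: buildTestSchema(type: 'text')),
    ));
    expect(find.byType(DictationMicrophoneButton), findsOneWidget);
  });

  testWidgets('date field renders no microphone button', (tester) async {
    await tester.pumpWidget(wrapWithApp(
      DynamicFieldRenderer(schema: buildTestSchema(type: 'date')),
    ));
    expect(find.byType(DictationMicrophoneButton), findsNothing);
  });
}
```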

Component
Transcription Preview Field
Tags: ui · medium
Epic Risks (3)
Risk 1 of 3 · Impact: medium · Probability: medium · Category: technical

Merging dictated text at the current cursor position in a TextField that already contains user-typed content is non-trivial in Flutter — TextEditingController cursor offsets can behave unexpectedly with IME composition, emoji, or RTL characters, potentially corrupting the user's existing notes.

Mitigation & Contingency

Mitigation: Implement the merge logic using TextEditingController.value replacement with explicit selection range calculation rather than direct text manipulation. Write targeted widget tests covering edge cases: cursor at start, cursor at end, cursor mid-word, existing content with emoji, and content that was modified during an active partial-results stream.
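The value-replacement approach can be reduced to pure controller logic, which keeps it easy to unit-test against the edge cases listed above. A minimal sketch (the function name is illustrative, not an existing API):

```dart
import 'package:flutter/widgets.dart';

void mergeTranscription(TextEditingController controller, String transcript) {
  final value = controller.value;
  // Fall back to appending when the field has no valid selection
  // (e.g. it was never focused), matching the contingency behaviour.
  final selection = value.selection.isValid
      ? value.selection
      : TextSelection.collapsed(offset: value.text.length);

  final merged = value.text.replaceRange(
    selection.start,
    selection.end,
    transcript,
  );

  // Replace the whole TextEditingValue at once so text and selection stay
  // consistent, and place the caret after the inserted transcript.
  controller.value = TextEditingValue(
    text: merged,
    selection: TextSelection.collapsed(
      offset: selection.start + transcript.length,
    ),
    composing: TextRange.empty, // avoid clashing with active IME composition
  );
}
```

Clearing `composing` sidesteps one class of IME-related corruption, though emoji and RTL cases still need the targeted widget tests described above.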

Contingency: If cursor-position merging proves too fragile for the initial release, scope the merge behaviour to always append dictated text at the end of the existing field content and add the cursor-position insertion as a follow-on task after the feature is in TestFlight with real user feedback.

Risk 2 of 3 · Impact: high · Probability: medium · Category: technical

VoiceOver on iOS and TalkBack on Android handle rapid sequential live region announcements differently. If recording start, partial-result, and recording-stop announcements arrive within a short window, they may queue, overlap, or be dropped, leaving screen reader users without critical state information.

Mitigation & Contingency

Mitigation: Implement announcement queuing in AccessibilityLiveRegionAnnouncer with a minimum inter-announcement delay and priority ordering (assertive recording start/stop always takes precedence over polite partial-result updates). Test announcement behaviour on physical iOS and Android devices with VoiceOver/TalkBack enabled as part of the acceptance test plan.
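One possible shape for the queuing logic, assuming the announcer is a plain Dart class over `SemanticsService.announce` (the internals of AccessibilityLiveRegionAnnouncer here are illustrative only):

```dart
import 'dart:async';
import 'dart:collection';
import 'dart:ui' show TextDirection;
import 'package:flutter/semantics.dart';

enum AnnouncementPriority { assertive, polite }

class _Announcement {
  _Announcement(this.message, this.priority);
  final String message;
  final AnnouncementPriority priority;
}

class AccessibilityLiveRegionAnnouncer {
  AccessibilityLiveRegionAnnouncer(
      {this.minGap = const Duration(milliseconds: 750)});

  /// Minimum delay between consecutive announcements.
  final Duration minGap;
  final Queue<_Announcement> _queue = Queue();
  Timer? _timer;

  void announce(String message, AnnouncementPriority priority) {
    if (priority == AnnouncementPriority.assertive) {
      // Recording start/stop jumps the queue and drops stale partial results.
      _queue.removeWhere((a) => a.priority == AnnouncementPriority.polite);
      _queue.addFirst(_Announcement(message, priority));
    } else {
      _queue.addLast(_Announcement(message, priority));
    }
    _timer ??= Timer(Duration.zero, _drain);
  }

  void _drain() {
    if (_queue.isEmpty) {
      _timer = null;
      return;
    }
    final next = _queue.removeFirst();
    SemanticsService.announce(next.message, TextDirection.ltr);
    _timer = Timer(minGap, _drain); // enforce the inter-announcement gap
  }
}
```

The 750 ms gap is a placeholder; the right value should come out of the physical-device VoiceOver/TalkBack testing called for above.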

Contingency: If platform differences make reliable queuing impossible, reduce partial-result announcements to a single 'transcription updating' message with debouncing, preserving the critical start/stop announcements. Coordinate with the screen-reader-support feature team to leverage the existing SemanticsServiceFacade patterns already established in the codebase.

Risk 3 of 3 · Impact: medium · Probability: low · Category: integration

The DictationMicrophoneButton must integrate with the dynamic-field-renderer which generates form fields from org-specific schemas at runtime. If the renderer does not expose a stable field metadata API for dictation eligibility checks, the scope guard and button visibility logic will require invasive changes to the report form architecture.

Mitigation & Contingency

Mitigation: Coordinate with the post-session report feature team early in the epic to confirm that dynamic-field-renderer exposes a field metadata interface including field type and sensitivity flags. Add a dictation_eligible flag to the field schema that the renderer passes to DictationMicrophoneButton as a constructor parameter.
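A hypothetical shape for the extended schema entry and its wiring; the exact key names (`sensitive`, `dictation_eligible`) are proposals to raise with the report-form team, not a confirmed schema:

```dart
// Proposed per-field schema entry with the new eligibility flag.
const sessionNotesField = {
  'key': 'session_notes',
  'type': 'textarea',
  'sensitive': false,
  'dictation_eligible': true,
};

// The renderer would then forward the flag to the button, e.g.:
// DictationMicrophoneButton(
//   fieldKey: sessionNotesField['key'] as String,
//   enabled: sessionNotesField['dictation_eligible'] as bool,
// )
```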

Contingency: If the renderer cannot be modified without breaking changes, implement dictation eligibility as a separate lookup against org-field-config-loader using the field key as the lookup identifier, bypassing the renderer integration and keeping the dictation components fully decoupled from the report form architecture.