High priority | Medium complexity | Testing | Pending | Testing specialist | Tier 6

Acceptance Criteria

Widget test: DictationMicrophoneButton is present in the widget tree when DictationAvailabilityState is 'available' and absent when 'unavailable'
Widget test: tapping DictationMicrophoneButton dispatches a StartDictationIntent (or equivalent BLoC event) with the correct fieldKey — verified by inspecting the mocked BLoC state stream
Widget test: RecordingStateIndicator renders the correct icon, colour, and label for each enum value (idle, recording, processing, error, cancelled) — use golden tests for visual verification
Widget test: Semantics live-region string changes on each RecordingStateIndicator state transition — verified via SemanticsController in flutter_test
Widget test: TranscriptionPreviewField displays partial transcription text as it arrives (streaming), then merges the final result at the saved cursor position when recognition completes
Widget test: pressing cancel during transcription restores the TextEditingController to the exact pre-dictation string and cursor position
Integration test: full dictation flow on the post-session report screen — tap microphone, simulate speech_to_text result event, verify field updated, submit report, verify Supabase PostSessionReport draft saved with correct field_values
All tests pass in CI (GitHub Actions or equivalent) with no flaky failures across 10 consecutive runs
Test coverage for dictation UI component files is ≥90% line coverage
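The first two criteria can be sketched as a widget test like the following. This is a sketch only: `DictationBloc`, `DictationState.unavailable`, and `ReportFieldRow` are assumed names standing in for the project's real types.

```dart
// Sketch only: widget, bloc, and state names below are assumptions,
// not the project's actual API.
import 'package:bloc_test/bloc_test.dart';
import 'package:flutter/material.dart';
import 'package:flutter_bloc/flutter_bloc.dart';
import 'package:flutter_test/flutter_test.dart';

class MockDictationBloc extends MockBloc<DictationEvent, DictationState>
    implements DictationBloc {}

void main() {
  testWidgets('mic button is absent when dictation is unavailable',
      (tester) async {
    final bloc = MockDictationBloc();
    // Pin the bloc to the 'unavailable' state with no further emissions.
    whenListen(bloc, const Stream<DictationState>.empty(),
        initialState: const DictationState.unavailable());

    await tester.pumpWidget(MaterialApp(
      home: BlocProvider<DictationBloc>.value(
        value: bloc,
        child: const ReportFieldRow(fieldKey: 'notes'),
      ),
    ));

    expect(find.byType(DictationMicrophoneButton), findsNothing);
  });
}
```

The tap-dispatch criterion follows the same shape: pump with an 'available' state, `tester.tap(find.byType(DictationMicrophoneButton))`, then verify the expected event was added to the mocked bloc.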

Technical Requirements

Frameworks
Flutter
BLoC
flutter_test
Riverpod
APIs
speech_to_text Flutter package
Supabase Auth (for integration test authenticated session)
Data Models
activity
assignment
Performance Requirements
Each widget test must complete in under 2 seconds to keep the CI suite fast
Integration test for end-to-end flow must complete in under 30 seconds
Security Requirements
Integration tests must use a dedicated Supabase test project or local emulator — never run against production
Test fixtures must not include real personnummer or PII — use synthetic data only
UI Components
DictationMicrophoneButton
RecordingStateIndicator (component 657)
TranscriptionPreviewField (component 658)
Post-session report screen

Execution Context

Execution Tier
Tier 6 (158 tasks)

Can start after Tier 5 completes

Implementation Notes

Use bloc_test's `whenListen` helper together with `expectLater` from flutter_test to assert BLoC event sequences without needing a real speech engine. For cursor-position merge tests, pre-populate a TextEditingController with 'Hello world', set the selection to offset 5, emit a final transcription result, and assert that the controller value is 'Hello TRANSCRIBED world' with the selection at 5 + transcription.length. For cancellation tests, save a snapshot of the controller's value and selection before dictation starts; on cancel, restore it via controller.value = savedValue. Golden tests for RecordingStateIndicator should cover both light and dark themes.
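The merge and restore steps above can be sketched as plain controller logic. The `mergeAtCursor` helper is illustrative, not an existing API:

```dart
import 'package:flutter/widgets.dart';

/// Inserts [transcription] at the saved cursor [offset] and collapses the
/// selection to the end of the inserted text. Illustrative helper only.
TextEditingValue mergeAtCursor(
    TextEditingValue before, int offset, String transcription) {
  final text = before.text.replaceRange(offset, offset, transcription);
  return TextEditingValue(
    text: text,
    selection:
        TextSelection.collapsed(offset: offset + transcription.length),
  );
}

void example() {
  final controller = TextEditingController(text: 'Hello world')
    ..selection = const TextSelection.collapsed(offset: 5);

  // Snapshot for the cancellation path.
  final TextEditingValue saved = controller.value;

  // Final result arrives: merge at the saved offset.
  controller.value = mergeAtCursor(
      controller.value, saved.selection.baseOffset, ' TRANSCRIBED');
  assert(controller.text == 'Hello TRANSCRIBED world');
  assert(controller.selection.baseOffset == 5 + ' TRANSCRIBED'.length);

  // Cancel path: restore the exact pre-dictation value and selection.
  controller.value = saved;
  assert(controller.text == 'Hello world');
}
```

Replacing the whole `controller.value` in one assignment, rather than mutating `text` and `selection` separately, avoids the intermediate states that can confuse IME composition.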

For the integration test, prefer the integration_test package (Flutter 2.5+) over the deprecated flutter_driver, as it runs tests in the same process as the app. Use a local Supabase instance (`supabase start`) to avoid test pollution.
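A skeleton of that flow might look like the following; the commented lines mark app-specific wiring (entry point, fake speech hook, Supabase verification) that is assumed rather than specified:

```dart
// Skeleton only: app entry point, fakes, and finders are placeholders.
import 'package:flutter_test/flutter_test.dart';
import 'package:integration_test/integration_test.dart';

void main() {
  IntegrationTestWidgetsFlutterBinding.ensureInitialized();

  testWidgets('dictate into post-session report and save draft',
      (tester) async {
    // await app.main();  // launch the app with the FakeSpeechToText injected
    await tester.pumpAndSettle();

    await tester.tap(find.byType(DictationMicrophoneButton));
    // fakeSpeech.emitResult('Player responded well to drills', isFinal: true);
    await tester.pumpAndSettle();

    // expect(find.text('Player responded well to drills'), findsOneWidget);
    // ...tap submit, then verify the draft's field_values via the
    // Supabase test client against the local instance.
  });
}
```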

Testing Requirements

This task IS the testing deliverable. Use flutter_test for all widget tests. Mock the speech_to_text plugin using a FakeSpeechToText stub that emits controlled SpeechRecognitionResult events. Mock the TranscriptionStateManager BLoC with MockBloc (bloc_test package).
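One way to build that stub is a controllable fake the test drives directly. The `listen`/`stop`/`cancel` surface below loosely mirrors speech_to_text's `SpeechToText` class, but the exact signatures vary by plugin version, so treat this as a sketch to adapt:

```dart
// Controllable fake for tests. Adapt the signatures to match the
// speech_to_text version actually in use.
class FakeSpeechToText {
  void Function(String words, bool isFinal)? _onResult;
  bool listening = false;

  Future<bool> initialize() async => true;

  Future<void> listen(
      {required void Function(String words, bool isFinal) onResult}) async {
    _onResult = onResult;
    listening = true;
  }

  Future<void> stop() async => listening = false;

  Future<void> cancel() async {
    listening = false;
    _onResult = null;
  }

  /// Test hook: push a partial or final recognition result.
  void emitResult(String words, {bool isFinal = false}) =>
      _onResult?.call(words, isFinal);
}
```

Tests can then emit a sequence of partial results followed by a final one to exercise the streaming preview and merge paths deterministically.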

For integration tests, use the integration_test package (not the deprecated flutter_driver; see Implementation Notes) with a real device or emulator. Structure tests in the test/dictation/ directory mirroring the src/components/ structure, with one test file per component. Run tests with `flutter test --coverage` and enforce the ≥90% threshold in CI via lcov.

Component
Transcription Preview Field (UI, medium complexity)
Epic Risks (3)
Risk 1: medium impact, medium probability, technical

Merging dictated text at the current cursor position in a TextField that already contains user-typed content is non-trivial in Flutter — TextEditingController cursor offsets can behave unexpectedly with IME composition, emoji, or RTL characters, potentially corrupting the user's existing notes.

Mitigation & Contingency

Mitigation: Implement the merge logic using TextEditingController.value replacement with explicit selection range calculation rather than direct text manipulation. Write targeted widget tests covering edge cases: cursor at start, cursor at end, cursor mid-word, existing content with emoji, and content that was modified during an active partial-results stream.

Contingency: If cursor-position merging proves too fragile for the initial release, scope the merge behaviour to always append dictated text at the end of the existing field content and add the cursor-position insertion as a follow-on task after the feature is in TestFlight with real user feedback.

Risk 2: high impact, medium probability, technical

VoiceOver on iOS and TalkBack on Android handle rapid sequential live region announcements differently. If recording start, partial-result, and recording-stop announcements arrive within a short window, they may queue, overlap, or be dropped, leaving screen reader users without critical state information.

Mitigation & Contingency

Mitigation: Implement announcement queuing in AccessibilityLiveRegionAnnouncer with a minimum inter-announcement delay and priority ordering (assertive recording start/stop always takes precedence over polite partial-result updates). Test announcement behaviour on physical iOS and Android devices with VoiceOver/TalkBack enabled as part of the acceptance test plan.
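The queuing mitigation could be a small serializer with a minimum gap and assertive preemption. A minimal sketch, assuming these class and method names (the real codebase's `AccessibilityLiveRegionAnnouncer` may differ); `speak` stands in for a call such as SemanticsService.announce:

```dart
import 'dart:async';
import 'dart:collection';

enum AnnouncementPriority { polite, assertive }

/// Serializes live-region announcements with a minimum gap between them.
/// Assertive messages (recording start/stop) jump ahead of, and flush,
/// queued polite partial-result updates.
class AccessibilityLiveRegionAnnouncer {
  AccessibilityLiveRegionAnnouncer(this.speak,
      {this.minGap = const Duration(milliseconds: 750)});

  final Future<void> Function(String message) speak;
  final Duration minGap;
  final Queue<(String, AnnouncementPriority)> _queue = Queue();
  bool _draining = false;

  void announce(String message, AnnouncementPriority priority) {
    if (priority == AnnouncementPriority.assertive) {
      // Drop stale polite updates so the critical message is not delayed.
      _queue.removeWhere((e) => e.$2 == AnnouncementPriority.polite);
      _queue.addFirst((message, priority));
    } else {
      _queue.addLast((message, priority));
    }
    _drain();
  }

  Future<void> _drain() async {
    if (_draining) return;
    _draining = true;
    while (_queue.isNotEmpty) {
      final (message, _) = _queue.removeFirst();
      await speak(message);
      await Future<void>.delayed(minGap);
    }
    _draining = false;
  }
}
```

Injecting `speak` as a function keeps the queue unit-testable without a device: tests can record the messages and timestamps it receives.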

Contingency: If platform differences make reliable queuing impossible, reduce partial-result announcements to a single 'transcription updating' message with debouncing, preserving the critical start/stop announcements. Coordinate with the screen-reader-support feature team to leverage the existing SemanticsServiceFacade patterns already established in the codebase.

Risk 3: medium impact, low probability, integration

The DictationMicrophoneButton must integrate with the dynamic-field-renderer which generates form fields from org-specific schemas at runtime. If the renderer does not expose a stable field metadata API for dictation eligibility checks, the scope guard and button visibility logic will require invasive changes to the report form architecture.

Mitigation & Contingency

Mitigation: Coordinate with the post-session report feature team early in the epic to confirm that dynamic-field-renderer exposes a field metadata interface including field type and sensitivity flags. Add a dictation_eligible flag to the field schema that the renderer passes to DictationMicrophoneButton as a constructor parameter.
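The flag wiring could be as simple as a constructor parameter gate. A sketch under the assumption that the renderer passes the schema flag straight through (the parameter and widget shape are illustrative):

```dart
import 'package:flutter/material.dart';

/// Illustrative sketch: the dynamic-field-renderer passes the schema's
/// dictation_eligible flag down, and the button removes itself from the
/// layout when the field is not eligible.
class DictationMicrophoneButton extends StatelessWidget {
  const DictationMicrophoneButton({
    super.key,
    required this.fieldKey,
    required this.dictationEligible,
    required this.onPressed,
  });

  final String fieldKey;
  final bool dictationEligible;
  final VoidCallback onPressed;

  @override
  Widget build(BuildContext context) {
    if (!dictationEligible) return const SizedBox.shrink();
    return IconButton(
      icon: const Icon(Icons.mic),
      tooltip: 'Dictate into this field',
      onPressed: onPressed,
    );
  }
}
```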

Contingency: If the renderer cannot be modified without breaking changes, implement dictation eligibility as a separate lookup against org-field-config-loader using the field key as the lookup identifier, bypassing the renderer integration and keeping the dictation components fully decoupled from the report form architecture.