User Interface · low complexity · mobile
Dependencies: 1
Dependents: 0
Entities: 0
Integrations: 1

Description

A prominent in-field overlay or banner that displays the current dictation state (idle, recording, processing, error) using both visual and semantic cues. Ensures users — including those relying on screen readers — are always aware of whether audio capture is active, satisfying the requirement that state changes be announced as live regions.

Feature: Speech-to-Text Input

recording-state-indicator

Summaries

The Recording State Indicator ensures users always know whether audio capture is active, addressing a core trust and safety concern in voice-enabled applications. Unambiguous recording feedback is not just good UX — in regulated industries such as healthcare, legal, and inspection services, users must be able to confirm at a glance that their dictation is being captured (or has stopped). This component eliminates the risk of users believing they are dictating when they are not, which would result in lost data and frustrated professionals. It also supports users with visual or cognitive impairments through screen-reader-compatible live region announcements, broadening the accessible user base and reducing compliance risk.

Clear error messaging further reduces support ticket volume by helping users self-diagnose microphone or engine issues without contacting support.

This low-complexity UI component is self-contained with a single dependency on the Transcription State Manager. Delivery is fast, but the accessibility requirements — specifically the live region announcements with correct assertiveness levels — add a QA step that must not be skipped, as incorrect ARIA-equivalent behaviour on mobile screen readers can be subtle and platform-specific. Plan for device testing on both iOS VoiceOver and Android TalkBack, which behave differently for live region updates. The error message display path requires coordination with the copy team to agree on user-facing wording before final QA sign-off.

This component is on the critical path for any end-to-end dictation acceptance test, so it should be merged and testable before integration sprints begin.

The Recording State Indicator is a Flutter overlay widget driven entirely by the Transcription State Manager's state stream. Its four state methods (`showRecording()`, `showProcessing()`, `showIdle()`, `showError()`) map directly to DictationState enum values and should be called reactively from a BlocListener or StreamBuilder in the parent widget tree; the fifth method, `updateLiveRegion()`, carries the accompanying accessibility announcement. The animated 'Recording…' indicator should use a looping animation controller tied to the recording state so it stops cleanly on transition. `updateLiveRegion()` must wrap announcements in Flutter's `SemanticsService.announce()` with the correct `TextDirection`, using `assertive` priority for start/stop events and `polite` for completion.
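A minimal sketch of that announcement mapping, assuming a `DictationState` enum with the four values named in the Description and the `Assertiveness` levels from `dart:ui`; the announcement strings are placeholders pending the copy team's agreed wording:

```dart
import 'dart:ui' show Assertiveness, TextDirection;

import 'package:flutter/semantics.dart' show SemanticsService;

/// Assumed enum mirroring the Transcription State Manager's stream values.
enum DictationState { idle, recording, processing, error }

/// Announces a state transition as a live region update.
/// Start/stop events are assertive so screen readers interrupt immediately;
/// completion is polite so it queues behind in-progress speech.
Future<void> updateLiveRegion(DictationState state) async {
  switch (state) {
    case DictationState.recording:
      await SemanticsService.announce(
        'Recording started', // placeholder copy
        TextDirection.ltr,
        assertiveness: Assertiveness.assertive,
      );
    case DictationState.processing:
      await SemanticsService.announce(
        'Recording stopped, processing', // placeholder copy
        TextDirection.ltr,
        assertiveness: Assertiveness.assertive,
      );
    case DictationState.idle:
      await SemanticsService.announce(
        'Transcription complete', // placeholder copy
        TextDirection.ltr,
        assertiveness: Assertiveness.polite,
      );
    case DictationState.error:
      break; // showError() renders and announces its own message.
  }
}
```

Because TalkBack and VoiceOver handle live region priorities differently, the assertive/polite split here is exactly what the device-testing step in the plan above should verify on real hardware.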

Error messages rendered via `showError()` should come from a localised string map keyed on error codes from the Speech Recognition Service, not raw exception messages. Keep the component stateless externally — all state is owned by the manager.
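A sketch of that lookup, with hypothetical error codes and placeholder wording (real codes come from the Speech Recognition Service, and final strings come from the copy team via the localisation layer):

```dart
/// Hypothetical error codes; real values are defined by the Speech
/// Recognition Service. Strings here stand in for localised copy.
const Map<String, String> _errorCopy = {
  'mic_permission_denied':
      'Microphone access is turned off. Enable it in Settings to dictate.',
  'no_speech_detected':
      "We didn't catch that. Move closer to the microphone and try again.",
  'engine_unavailable':
      'Dictation is temporarily unavailable. Please try again shortly.',
};

/// Resolves a user-facing message from an error code, never exposing raw
/// exception text. The fallback covers codes with no mapped copy yet.
String messageForError(String errorCode) =>
    _errorCopy[errorCode] ?? 'Dictation failed. Please try again.';
```

Keying on codes rather than exception messages also keeps the wording reviewable in one place for the copy sign-off noted above.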

Responsibilities

  • Display 'Recording…' animated indicator when audio capture is active
  • Show 'Processing…' state between stop and final transcription delivery
  • Announce state transitions as accessibility live regions (assertive for start/stop, polite for completion)
  • Display user-facing error messages when speech recognition fails

Interfaces

showRecording()
showProcessing()
showIdle()
showError(String message)
updateLiveRegion(String announcement)
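Expressed as a Dart abstract class (the class name is illustrative), the interface above might look like:

```dart
/// Contract for the Recording State Indicator overlay.
abstract class RecordingStateIndicator {
  void showRecording();  // animated 'Recording…' indicator
  void showProcessing(); // 'Processing…' between stop and final transcript
  void showIdle();       // reset/hide the overlay
  void showError(String message);             // localised, user-facing wording
  void updateLiveRegion(String announcement); // screen-reader announcement
}
```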

Relationships

Dependencies (1)

Components this component depends on

  • Transcription State Manager

Used Integrations (1)

External integrations and APIs this component relies on