Android Auto-aware Assist Triggering #6710
lowlyocean wants to merge 6 commits into home-assistant:main
Conversation
Hi @lowlyocean
It seems you haven't yet signed a CLA. Please do so here.
Once you do that we will be able to review and accept this pull request.
Thanks!
Hi 👋 I noticed you recently submitted a similar PR to the iOS app: home-assistant/iOS#4496 Did you fully prepare this one using AI systems as well? Did you review and test the changes, and consider Android Auto(motive) policy requirements?

Hello again! I failed to realize both projects likely have the same reviewers. This one was AI-assisted as well, but unlike the iOS PR it is something I've been testing on a real device connected to the Desktop Head Unit emulator. It's still very much a draft despite being functional in a crude sense. I think the policy requirements are somewhat similar to iOS, and unlike the CarPlay version, it's not clear how to trigger directly into an Assist screen without navigating from a MainVehicleScreen. In this case, an argument for being compliant with policy might involve separating Android Auto Assist into its own "App".
@jpelgrom @bgoncal I consider this Android Auto version to be "fully functioning" now. I'm providing a video example below. I wasn't able to record my microphone input, but you can see what I said as well as hear the Assistant's responses. First, I give an instruction to tell a joke; after the assistant responds, the conversation ends and no further input is recorded. I then click the icon and start a new conversation, asking the assistant to ask me a question. When it does, it listens for my reply. When the assistant responds without ending in a question, the conversation ends and no further input is recorded. I hope you can use this as a start for getting it into the official app (fully expecting that this PR will be closed).

two_examples.mp4
Pull request overview
Adds an automotive (car app) entry point for Assist so that Assist triggers can route to an in-vehicle Assist UI and start listening automatically.
Changes:
- Adds a new Automotive Assist screen + ViewModel and wires navigation into HaCarAppService
- Routes Assist shortcut triggers toward the automotive experience when car context is available
- Adds additional logging and exposes audio playback state as a StateFlow
Reviewed changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 23 comments.
| File | Description |
|---|---|
| common/src/main/kotlin/io/homeassistant/companion/android/common/util/AudioUrlPlayer.kt | Adds verbose Assist-related logging around audio playback and volume checks |
| common/src/main/kotlin/io/homeassistant/companion/android/common/assist/AssistViewModelBase.kt | Adds lifecycle hooks, introduces isPlayingAudioState, and adds extensive debug logging |
| app/src/main/kotlin/io/homeassistant/companion/android/widgets/assist/AssistShortcutActivity.kt | Adds trigger source extra to distinguish Assist shortcut launches |
| app/src/main/kotlin/io/homeassistant/companion/android/vehicle/MainVehicleScreen.kt | Adds an “Assist” item that broadcasts a navigation intent to the car service |
| app/src/main/kotlin/io/homeassistant/companion/android/vehicle/HaCarAppService.kt | Registers a navigation broadcast receiver and creates/pushes the new Automotive Assist screen |
| app/src/main/kotlin/io/homeassistant/companion/android/vehicle/AutomotiveAssistScreen.kt | New car screen rendering a simplified Assist conversation UI |
| app/src/main/kotlin/io/homeassistant/companion/android/assist/AutomotiveAssistViewModel.kt | New ViewModel orchestrating pipeline selection, conversation updates, and voice interaction for automotive |
| app/src/main/kotlin/io/homeassistant/companion/android/assist/AssistViewModel.kt | Overrides lifecycle hooks to align with new base-class hooks |
| app/src/main/kotlin/io/homeassistant/companion/android/assist/AssistAudioStrategyFactory.kt | Minor formatting-only change |
| app/src/main/kotlin/io/homeassistant/companion/android/assist/AssistActivity.kt | Adds trigger-source routing and broadcasts navigation into the car app when appropriate |
```kotlin
override fun onCreate() {
    super.onCreate()
    val filter = IntentFilter(ACTION_NAVIGATE_TO_AUTOMOTIVE_ASSIST)
    registerReceiver(navigationReceiver, filter)
```
The service registers a receiver via the platform registerReceiver(...) overload. Elsewhere in the app the convention is ContextCompat.registerReceiver(..., RECEIVER_NOT_EXPORTED/EXPORTED) to satisfy modern Android receiver-export rules and lint. Please switch to ContextCompat.registerReceiver with RECEIVER_NOT_EXPORTED for this in-app navigation receiver.
Suggested change:
```kotlin
ContextCompat.registerReceiver(
    this,
    navigationReceiver,
    filter,
    ContextCompat.RECEIVER_NOT_EXPORTED,
)
```
```kotlin
val automotiveAssistViewModel = automotiveAssistViewModelFactory.create(
    serverManager,
    audioStrategyFactory.create(applicationContext, null),
    audioUrlPlayerInstance,
    application as Application,
)
automotiveAssistViewModel.onCreate(
    hasPermission = ContextCompat.checkSelfPermission(
        applicationContext,
        android.Manifest.permission.RECORD_AUDIO,
    ) == PackageManager.PERMISSION_GRANTED,
    serverId = serverId,
    pipelineId = null,
    startListening = true,
)

return automotiveAssistScreenFactory.create(
    carContext,
    serverManager,
    serverId,
    audioStrategyFactory,
    audioUrlPlayerInstance,
    application as Application,
    automotiveAssistViewModel,
    lifecycleScope,
)
```
AutomotiveAssistViewModel (an AndroidViewModel) is being created manually via an assisted factory instead of through a ViewModelProvider. That means onCleared() will never be called, so viewModelScope coroutines (wake-word collection, pipeline job, recorder job) can leak beyond the screen/session lifecycle. Please tie this ViewModel to a proper ViewModelStoreOwner (if available in the car app stack), or refactor it into a lifecycle-managed class where you explicitly cancel its scope and call the existing onDestroy() cleanup when the screen/session ends.
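If the car app stack really offers no ViewModelStoreOwner here, one lifecycle-managed alternative is to hook cleanup to the Screen's lifecycle, since androidx.car.app.Screen is a LifecycleOwner. This is a sketch only — the DefaultLifecycleObserver wiring is an assumption, not this PR's code; it reuses the existing `onDestroy()` hook mentioned above:

```kotlin
val screen = automotiveAssistScreenFactory.create(/* ... same arguments as above ... */)
screen.lifecycle.addObserver(object : DefaultLifecycleObserver {
    override fun onDestroy(owner: LifecycleOwner) {
        // Run the existing cleanup hook, then stop any remaining
        // viewModelScope coroutines (wake-word, pipeline, recorder jobs).
        automotiveAssistViewModel.onDestroy()
        automotiveAssistViewModel.viewModelScope.cancel()
    }
})
return screen
```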
```kotlin
override fun onGetTemplate(): Template {
    Timber.d("onGetTemplate called")
```
onGetTemplate() can be called frequently by the car host; logging every call (Timber.d("onGetTemplate called")) is likely to create noisy logs and overhead. Consider removing this log or gating it behind a more targeted debug flag.
Suggested change: remove the `Timber.d("onGetTemplate called")` line.
```kotlin
class AutomotiveAssistViewModel @AssistedInject constructor(
    @Assisted override val serverManager: ServerManager,
    @Assisted override val audioStrategy: AssistAudioStrategy,
    @Assisted private val audioUrlPlayer: AudioUrlPlayer,
    @Assisted private val application: Application,
) : AssistViewModelBase(serverManager, audioStrategy, audioUrlPlayer, application) {

    val isAudioPlaying: StateFlow<Boolean> = isPlayingAudioState

    private val _processingState = MutableStateFlow(false)
    val processingState: StateFlow<Boolean> = _processingState

    private var pipelineId: String? = null
    private var pipelineJob: Job? = null
    private var activeUserMessage: AssistMessage? = null
    private var activeHaMessage: AssistMessage? = null
    private var isContinuationTurn = false

    var isProcessing by mutableStateOf(false)
        private set

    @AssistedFactory
    interface Factory {
        fun create(
            serverManager: ServerManager,
            audioStrategy: AssistAudioStrategy,
            audioUrlPlayer: AudioUrlPlayer,
            application: Application,
        ): AutomotiveAssistViewModel
    }

    private val _conversation = MutableStateFlow<List<AssistMessage>>(emptyList())
    val conversation: StateFlow<List<AssistMessage>> = _conversation.asStateFlow()
```
This new ViewModel introduces non-trivial state handling (pipeline selection, conversation mutation, continuation turns, recording/pipeline lifecycle). There are existing unit tests for AssistViewModel under app/src/test/.../assist/; adding a focused AutomotiveAssistViewModel test suite would help prevent regressions (e.g., processing state toggling, placeholder replacement, continue-conversation behavior).
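A focused test might look roughly like the following. This is a hedged sketch: `fakeServerManager`, `fakeAudioStrategy`, `fakeAudioPlayer`, and `fakePipeline` are hypothetical test doubles, and the runner/assertion style is assumed to mirror the existing AssistViewModel tests:

```kotlin
class AutomotiveAssistViewModelTest {
    @Test
    fun `assistant placeholder is replaced by pipeline output`() = runTest {
        // Hypothetical fakes; the real suite would reuse the fixtures
        // already present under app/src/test/.../assist/.
        val viewModel = AutomotiveAssistViewModel(fakeServerManager, fakeAudioStrategy, fakeAudioPlayer, application)
        viewModel.onCreate(hasPermission = true, serverId = 1, pipelineId = null, startListening = false)
        viewModel.onMicrophoneInput(proactive = true, clearConversation = true)

        fakePipeline.emit(AssistEvent.Message.Output("It is 21 degrees"))

        assertEquals("It is 21 degrees", viewModel.conversation.value.last().message)
        assertFalse(viewModel.processingState.value)
    }
}
```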
```kotlin
@RequiresApi
class AutomotiveAssistScreen @AssistedInject constructor(
```
@RequiresApi is used without specifying an API level (e.g., @RequiresApi(Build.VERSION_CODES.O)). The androidx RequiresApi annotation requires a value, so this will not compile as written. Please provide the intended API level or remove the annotation if it is not needed.
```kotlin
init {
    scope.launch {
        viewModel.conversation.collect {
            invalidate()
        }
    }
    scope.launch {
        viewModel.processingState.collect {
            invalidate()
        }
    }
    scope.launch {
        viewModel.isAudioPlaying.collect {
            invalidate()
        }
    }
    scope.launch {
        snapshotFlow { viewModel.inputMode }.collect {
            invalidate()
        }
    }
}
```
The collectors started in init use an externally provided CoroutineScope. In other vehicle screens, lifecycleScope/repeatOnLifecycle is used so collection stops automatically when the screen is stopped/destroyed. As written, these collectors can outlive the screen and keep calling invalidate(). Please switch to lifecycleScope (or observe lifecycle to cancel) and use repeatOnLifecycle for collection.
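A sketch of the suggested lifecycle-scoped collection, assuming the screen's own `lifecycleScope` is used (androidx.car.app.Screen implements LifecycleOwner, and `repeatOnLifecycle` comes from lifecycle-runtime-ktx):

```kotlin
init {
    lifecycleScope.launch {
        lifecycle.repeatOnLifecycle(Lifecycle.State.STARTED) {
            // Each collector runs only while the screen is at least STARTED
            // and is cancelled automatically when the screen is destroyed.
            launch { viewModel.conversation.collect { invalidate() } }
            launch { viewModel.processingState.collect { invalidate() } }
            launch { viewModel.isAudioPlaying.collect { invalidate() } }
            launch { snapshotFlow { viewModel.inputMode }.collect { invalidate() } }
        }
    }
}
```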
```kotlin
private val _processingState = MutableStateFlow(false)
val processingState: StateFlow<Boolean> = _processingState

private var pipelineId: String? = null
private var pipelineJob: Job? = null
private var activeUserMessage: AssistMessage? = null
private var activeHaMessage: AssistMessage? = null
private var isContinuationTurn = false

var isProcessing by mutableStateOf(false)
    private set

@AssistedFactory
interface Factory {
    fun create(
        serverManager: ServerManager,
        audioStrategy: AssistAudioStrategy,
        audioUrlPlayer: AudioUrlPlayer,
        application: Application,
    ): AutomotiveAssistViewModel
}

private val _conversation = MutableStateFlow<List<AssistMessage>>(emptyList())
val conversation: StateFlow<List<AssistMessage>> = _conversation.asStateFlow()

var inputMode by mutableStateOf<AssistInputMode?>(null)
    private set

var shouldFinish by mutableStateOf(false)
    private set

var recorderAutoStart by mutableStateOf(false)
    private set

override fun getInput(): AssistInputMode? = inputMode

override fun setInput(inputMode: AssistInputMode) {
    this.inputMode = inputMode
}

init {
    viewModelScope.launch {
        audioStrategy.wakeWordDetected.collect { detectedPhrase ->
            if (inputMode != AssistInputMode.VOICE_ACTIVE) {
                onMicrophoneInput(clearConversation = false)
            }
        }
    }
}

fun onCreate(hasPermission: Boolean, serverId: Int?, pipelineId: String?, startListening: Boolean?) {
    viewModelScope.launch {
        this@AutomotiveAssistViewModel.hasPermission = hasPermission
        serverId?.let {
            selectedServerId = it
        }
        startListening?.let { recorderAutoStart = it }

        if (!serverManager.isRegistered()) {
            inputMode = AssistInputMode.BLOCKED
            _conversation.value = listOf(
                AssistMessage(
                    app.getString(io.homeassistant.companion.android.common.R.string.not_registered),
                    isInput = false,
                ),
            )
            return@launch
        }

        if (pipelineId != null) {
            setPipeline(pipelineId)
        } else {
            val lastPipelineId = serverManager.integrationRepository(selectedServerId).getLastUsedPipelineId()
            Timber.tag("[AA-Assist]").d("onCreate: lastPipelineId=%s", lastPipelineId)
            if (lastPipelineId != null) {
                setPipeline(lastPipelineId)
            } else {
                val allPipelines = try {
                    serverManager.webSocketRepository(selectedServerId).getAssistPipelines()
                } catch (e: Exception) {
                    Timber.e(e, "Failed to get assist pipelines")
                    null
                }
                Timber.tag("[AA-Assist]").d("onCreate: allPipelines=%s", allPipelines?.pipelines?.map { it.id })
                val ttsPipeline = allPipelines?.pipelines?.firstOrNull { it.ttsEngine != null }
                if (ttsPipeline != null) {
                    Timber.tag("[AA-Assist]").d("onCreate: using TTS pipeline=%s", ttsPipeline.id)
                    setPipeline(ttsPipeline.id)
                } else {
                    inputMode = AssistInputMode.BLOCKED
                    _conversation.value = listOf(
                        AssistMessage(
                            app.getString(io.homeassistant.companion.android.common.R.string.assist_error),
                            isInput = false,
                        ),
                    )
                }
            }
        }

        if (hasPermission && recorderAutoStart) {
            onMicrophoneInput(proactive = true, clearConversation = true)
        }
    }
}

private suspend fun setPipeline(id: String?) {
    pipelineId = id
    Timber.tag("[AA-Assist]").d("setPipeline: id=%s", id)
    val pipeline = try {
        serverManager.webSocketRepository(selectedServerId).getAssistPipeline(id)
    } catch (e: Exception) {
        Timber.e(e, "Failed to get assist pipeline")
        null
    }

    Timber.tag("[AA-Assist]").d(
        "setPipeline: pipeline=%s, ttsEngine=%s, sttEngine=%s",
        pipeline?.id,
        pipeline?.ttsEngine,
        pipeline?.sttEngine,
    )
    if (pipeline != null) {
        _conversation.value = emptyList()
        activeUserMessage = null
        activeHaMessage = null
        inputMode = if (pipeline.sttEngine != null) AssistInputMode.VOICE_INACTIVE else AssistInputMode.TEXT_ONLY
    } else {
        inputMode = AssistInputMode.BLOCKED
    }
}

fun onMicrophoneInput(
    proactive: Boolean = false,
    isContinuation: Boolean = false,
    clearConversation: Boolean = false,
) {
    Timber.d(
        "onMicrophoneInput called " +
            "(proactive=$proactive, isContinuation=$isContinuation, clearConversation=$clearConversation)",
    )
    if (!hasPermission) {
        Timber.w("onMicrophoneInput aborted: no permission")
        return
    }

    if (clearConversation) {
        _conversation.value = emptyList()
        activeUserMessage = null
        activeHaMessage = null
        pipelineJob?.cancel()
    }

    stopPlayback()
    setupRecorder(onError = {
        stopRecording()
        _conversation.value = _conversation.value + AssistMessage(
            app.getString(io.homeassistant.companion.android.common.R.string.assist_error),
            isInput = false,
            isError = true,
        )
        Timber.e(it, "Recorder setup failed")
    })
    if (!isContinuation) {
        inputMode = AssistInputMode.VOICE_ACTIVE
    }

    if (proactive) {
        if (isContinuation) {
            // Just add user placeholder, pipeline already running
            activeUserMessage = AssistMessage.placeholder(isInput = true)
            _conversation.value = _conversation.value + activeUserMessage!!
            activeHaMessage = AssistMessage.placeholder(isInput = false)
        } else {
            // New pipeline, add placeholders and start pipeline
            activeUserMessage = AssistMessage.placeholder(isInput = true)
            activeHaMessage = AssistMessage.placeholder(isInput = false)
            _conversation.value = _conversation.value + activeUserMessage!! + activeHaMessage!!
            runAssistPipeline(null)
        }
    }
}

private fun runAssistPipeline(text: String?, skipStopPlayback: Boolean = false) {
    Timber.tag("[AA-Assist]").d("runAssistPipeline: text=%s, isVoice=%s", text, text == null)
    if (!skipStopPlayback) {
        stopPlayback()
    }

    pipelineJob = viewModelScope.launch {
        val pipeline = try {
            val id = pipelineId ?: serverManager.integrationRepository(selectedServerId).getLastUsedPipelineId()
            Timber.tag("[AA-Assist]").d(
                "runAssistPipeline: pipelineId=%s, lastPipelineId=%s",
                pipelineId,
                serverManager.integrationRepository(selectedServerId).getLastUsedPipelineId(),
            )
            id?.let {
                serverManager.webSocketRepository(selectedServerId).getAssistPipeline(it)
            }
        } catch (e: Exception) {
            Timber.e(e, "Failed to get assist pipeline")
            null
        }

        Timber.tag("[AA-Assist]").d(
            "runAssistPipeline: pipeline=%s, ttsEngine=%s, sttEngine=%s",
            pipeline?.id,
            pipeline?.ttsEngine,
            pipeline?.sttEngine,
        )
        isProcessing = true
        runAssistPipelineInternal(
```
_processingState is never set to true, so processingState will not emit when processing starts. At the same time, isProcessing is kept separately as Compose state, which the car screen does not observe reliably. Please consolidate to a single observable source of truth (preferably a StateFlow) and update it both on start and on all end paths (pipeline end, dismiss, errors).
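One possible consolidation is sketched below. It assumes `runAssistPipelineInternal` suspends until the pipeline finishes; the exact signature is elided:

```kotlin
private val _processingState = MutableStateFlow(false)
val processingState: StateFlow<Boolean> = _processingState.asStateFlow()

private fun runAssistPipeline(text: String?) {
    pipelineJob = viewModelScope.launch {
        _processingState.value = true
        try {
            runAssistPipelineInternal(/* ... */)
        } finally {
            // finally covers normal completion, errors, and cancellation
            // (e.g. dismiss), so the flag cannot get stuck on.
            _processingState.value = false
        }
    }
}
```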
```kotlin
val currentList = _conversation.value.toMutableList()
if (event is AssistEvent.Message.Error) {
    if (activeHaMessage != null) {
        val haIndex = currentList.indexOf(activeHaMessage)
        if (haIndex != -1) {
            currentList[haIndex] = activeHaMessage!!.copy(
                message = event.message.trim(),
                isError = true,
            )
            _conversation.value = currentList
        }
    }
} else if (event is AssistEvent.Message.Input) {
    if (activeUserMessage != null) {
        val userIndex = currentList.indexOf(activeUserMessage)
        if (userIndex != -1) {
            currentList[userIndex] = activeUserMessage!!.copy(
                message = event.message.trim(),
                isError = false,
            )
            // Add assistant placeholder for the response if not already in list
            if (currentList.indexOf(activeHaMessage) == -1) {
                activeHaMessage = AssistMessage.placeholder(isInput = false)
                currentList.add(activeHaMessage!!)
            }
            _conversation.value = currentList
        }
    }
} else if (event is AssistEvent.Message.Output) {
    if (activeHaMessage != null) {
        val haIndex = currentList.indexOf(activeHaMessage)
        if (haIndex != -1) {
            currentList[haIndex] = activeHaMessage!!.copy(
                message = event.message.trim(),
                isError = false,
            )
            _conversation.value = currentList
        }
    }
```
When you replace entries in _conversation with activeUserMessage!!.copy(...) / activeHaMessage!!.copy(...), the active*Message fields are not updated to the new instance. Subsequent indexOf(activeHaMessage) / indexOf(activeUserMessage) calls may fail because the list now contains the copied instance, not the old one. Please update activeUserMessage/activeHaMessage to the new copied value (or track messages by stable IDs) whenever you mutate the list.
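A minimal, framework-free sketch of the fix: whenever a copy is placed into the list, the `active*Message` reference is updated to the new instance so later `indexOf` lookups still match. `AssistMessage` here is a simplified stand-in for the app's real class:

```kotlin
// Simplified stand-in for the app's AssistMessage.
data class AssistMessage(val message: String, val isInput: Boolean, val isError: Boolean = false)

class ConversationState {
    var conversation: List<AssistMessage> = emptyList()
        private set
    private var activeHaMessage: AssistMessage? = null

    fun addHaPlaceholder() {
        val placeholder = AssistMessage("…", isInput = false)
        activeHaMessage = placeholder
        conversation = conversation + placeholder
    }

    fun onHaOutput(text: String) {
        val active = activeHaMessage ?: return
        val index = conversation.indexOf(active)
        if (index != -1) {
            val updated = active.copy(message = text.trim())
            conversation = conversation.toMutableList().also { it[index] = updated }
            // Crucial: track the new instance, so the NEXT indexOf() still matches.
            // Without this line, a second Output event would fail to find the message.
            activeHaMessage = updated
        }
    }
}
```

With the tracking line in place, repeated Output events keep updating the same row instead of silently failing after the first replacement.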
```kotlin
private fun runAssistPipeline(text: String?, skipStopPlayback: Boolean = false) {
    Timber.tag("[AA-Assist]").d("runAssistPipeline: text=%s, isVoice=%s", text, text == null)
```
runAssistPipeline logs the text argument. If this is ever used for text input (or if upstream changes later), it will log user-provided content. Please avoid logging raw user input, or redact it using sensitive(...) / log only whether text was provided.
Suggested change:
```kotlin
Timber.tag("[AA-Assist]").d(
    "runAssistPipeline: hasText=%s, isVoice=%s",
    !text.isNullOrEmpty(),
    text == null,
)
```
```kotlin
val canPlay = audioManager != null && audioManager.getStreamVolume(STREAM_MUSIC) != 0
Timber.tag("[AA-Assist]").d("AudioUrlPlayer.canPlayMusic: audioManager=%s, volume=%s, canPlay=%s",
    audioManager != null, audioManager?.getStreamVolume(STREAM_MUSIC), canPlay)
```
canPlayMusic() calls audioManager?.getStreamVolume(STREAM_MUSIC) twice (once for canPlay and once for logging). That’s redundant and can also risk inconsistent values or repeated exceptions. Consider reading the volume once into a local variable and logging that value.
Suggested change:
```kotlin
val streamVolume = audioManager?.getStreamVolume(STREAM_MUSIC)
val canPlay = streamVolume != null && streamVolume != 0
Timber.tag("[AA-Assist]").d(
    "AudioUrlPlayer.canPlayMusic: audioManager=%s, volume=%s, canPlay=%s",
    audioManager != null,
    streamVolume,
    canPlay,
)
```
Did you sign the CLA? Also, please fill in the PR description following the template we have.
Please feel free to close this and resubmit as you see fit.
I do not wish to sign the CLA. I've added a description if it is helpful to you.
Got it, then I'll close it. If someone wants to implement this feature, they will need to sign the CLA.

Summary
Enables the Assist widget to trigger voice recording functionality within the Android Auto interface using a broadcast-based "Signal/Observe" pattern. The implementation ensures that the intent remains functional for mobile users by correctly routing to the main web view Activity when Android Auto is not present.
High-Level Overview
The Problem
Users want to trigger the "Assist" feature (voice interaction with Home Assistant) directly from a Home Assistant widget on their Android device. Currently, when a user interacts with the widget, the app needs to decide where to show the Assist interface: on the phone's main screen (the web view) or on the car's display (Android Auto). Without a way to distinguish between these two contexts, the app might try to open a mobile popup while the user is driving, which is not only useless but potentially distracting.
The Solution
We implemented a "smart trigger" system. When the widget is pressed, it sets a `trigger_source` extra flag (`TRIGGER_SOURCE_ASSIST`) on the Intent. The `AssistActivity` then checks: "Is the user currently connected to Android Auto?"
- If so, it broadcasts `ACTION_NAVIGATE_TO_AUTOMOTIVE_ASSIST` to `HaCarAppService`, which pushes the `AutomotiveAssistScreen` onto its session's screen stack.
- If not, it renders the standard `AssistActivity` on the phone.

Architectural Overview
We moved from a "one-size-fits-all" intent to a Signal/Observe pattern. Instead of the widget telling the app exactly what to do, it now signals the intent via an Intent extra, and `AssistActivity` observes that signal and decides how to react based on the current context.

Workflow
- `AssistShortcutActivity` sets `EXTRA_TRIGGER_SOURCE = TRIGGER_SOURCE_ASSIST` on the launch Intent.
- `AssistActivity` reads `triggerSource` from the Intent.
- If there is no car connection (`carInfo == null`), `AssistActivity` proceeds to render the standard `AssistViewModel` UI.
- If `carInfo != null`, `AssistActivity` sends a broadcast (`ACTION_NAVIGATE_TO_AUTOMOTIVE_ASSIST`) to `HaCarAppService` and finishes itself.

Technical Implementation Details
1. Intent Routing (`AssistActivity`)
`AssistActivity` now checks the `triggerSource` extra on launch. If it matches `TRIGGER_SOURCE_ASSIST` and an Android Auto connection exists, it broadcasts `ACTION_NAVIGATE_TO_AUTOMOTIVE_ASSIST` (with `EXTRA_SERVER`) and calls `finish()`.

2. Car App Service (`HaCarAppService`)
`HaCarAppService` now registers a `BroadcastReceiver` for `ACTION_NAVIGATE_TO_AUTOMOTIVE_ASSIST` during `onCreate()`. When received, it calls `currentSession.navigateToAssist(serverId)`, which:
- Creates an `AutomotiveAssistViewModel` with the appropriate audio strategy and ExoPlayer.
- Creates an `AutomotiveAssistScreen` and pushes it onto the `ScreenManager`.

Additionally, the microphone icon on `MainVehicleScreen` now triggers the same broadcast, providing an in-app entry point to the Assist screen.

3. State-Driven UI (`AutomotiveAssistScreen`)
A new Car App `Screen` tailored for vehicle use. It listens to:
- `viewModel.conversation` — message list
- `viewModel.processingState` — loading indicator
- `viewModel.isAudioPlaying` — playback icon
- `viewModel.inputMode` — voice/text mode

Uses a Car `ListTemplate` with `Header` and `Row` items for messages, mapping state to `CommunityMaterial` icons (microphone, volume, sync).

4. AutomotiveAssistViewModel
A new `AssistViewModelBase` subclass providing a simplified state model for Android Auto. It maps the full `AssistEvent` pipeline to reduced `conversation`, `processingState`, and `isAudioPlaying` flows. Uses `@AssistedInject` for DI with assisted factories.

5. Base Refactoring (`AssistViewModelBase`)
- `onPause()` and `onDestroy()` moved from public methods to overridable hooks.
- `isPlayingAudio` exposed via `StateFlow<Boolean>` (`isPlayingAudioState`) for observe-in-UI patterns.
- Added `setIsPlayingAudio()` to update the internal var and flow atomically.
- Added a `"[AA-Assist]"` tag for debugging the pipeline across voice recognition, TTS playback, and intent handling stages.

6. Data Flow & Synchronization
Single Source of Truth: Both the mobile UI (`AssistViewModel`) and the Android Auto UI (`AutomotiveAssistViewModel`) extend `AssistViewModelBase`, sharing the core assist pipeline logic (recorder setup, WebSocket pipeline execution, TTS playback).

Thread Safety: All state mutations go through base class methods (`setIsPlayingAudio()`) to prevent race conditions during rapid state changes in the pipeline.

Lifecycle Management: `AutomotiveAssistScreen` collects from `viewModel` flows in `init {}` blocks, calling `invalidate()` to trigger Car App template recomposition on each emission.
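The template side of the state-driven screen might look roughly like this. A sketch only, against the androidx.car.app model builders: the row content and header title are illustrative, not the PR's exact code:

```kotlin
override fun onGetTemplate(): Template {
    // Render each conversation message as a Row.
    val listBuilder = ItemList.Builder()
    viewModel.conversation.value.forEach { message ->
        listBuilder.addItem(Row.Builder().setTitle(message.message).build())
    }
    // Header + single list, matching the ListTemplate shape described above.
    return ListTemplate.Builder()
        .setHeader(Header.Builder().setTitle("Assist").build())
        .setSingleList(listBuilder.build())
        .build()
}
```

Because the screen calls `invalidate()` whenever a flow emits, the car host re-invokes `onGetTemplate()` and picks up the latest conversation state.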