
Android Auto-aware Assist Triggering#6710

Closed
lowlyocean wants to merge 6 commits into home-assistant:main from lowlyocean:android_auto_assist

Conversation

@lowlyocean

@lowlyocean lowlyocean commented Apr 15, 2026

Summary

Enables the Assist widget to trigger voice recording functionality within the Android Auto interface using a broadcast-based "Signal/Observe" pattern. The implementation ensures that the intent remains functional for mobile users by correctly routing to the main web view Activity when Android Auto is not present.

High-Level Overview

The Problem

Users want to trigger the "Assist" feature (voice interaction with Home Assistant) directly from a Home Assistant widget on their Android device. Currently, when a user interacts with the widget, the app needs to decide where to show the Assist interface: on the phone's main screen (the web view) or on the car's display (Android Auto). Without a way to distinguish between these two contexts, the app might try to open a mobile popup while the user is driving, which is not only useless but potentially distracting.

The Solution

We implemented a "smart trigger" system. When the widget is pressed, it sets a trigger_source extra flag (TRIGGER_SOURCE_ASSIST) on the Intent. The AssistActivity then checks: "Is the user currently connected to Android Auto?"

  • If YES (Android Auto): It sends a broadcast with ACTION_NAVIGATE_TO_AUTOMOTIVE_ASSIST to HaCarAppService, which pushes the AutomotiveAssistScreen onto its session's screen stack.
  • If NO (Mobile): It launches the standard Assist interface directly within AssistActivity on the phone.

Architectural Overview

We moved from a "one-size-fits-all" intent to a Signal/Observe pattern. Instead of the widget telling the app exactly what to do, it now signals the intent via an Intent extra, and AssistActivity observes that signal and decides how to react based on the current context.

Workflow

  1. Widget Interaction: User taps the widget → AssistShortcutActivity sets EXTRA_TRIGGER_SOURCE = TRIGGER_SOURCE_ASSIST on the launch Intent.
  2. Context Check: AssistActivity reads triggerSource from the Intent.
  3. Path A (Mobile): If no Android Auto connection (carInfo == null), AssistActivity proceeds to render the standard AssistViewModel UI.
  4. Path B (Android Auto): If carInfo != null, AssistActivity sends a broadcast (ACTION_NAVIGATE_TO_AUTOMOTIVE_ASSIST) to HaCarAppService and finishes itself.
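The routing decision in steps 2 through 4 reduces to a small pure function. Here is a hypothetical sketch of it: the constant's value and the use of Any? for the car connection info are placeholders, and only the names mirror the PR.

```kotlin
// Illustrative reduction of the trigger routing described above; not the PR's
// actual AssistActivity code.
val TRIGGER_SOURCE_ASSIST = "assist"

sealed class AssistRoute {
    object Mobile : AssistRoute()                              // Path A: render AssistViewModel UI
    data class AndroidAuto(val serverId: Int) : AssistRoute()  // Path B: broadcast + finish()
}

fun resolveAssistRoute(triggerSource: String?, carInfo: Any?, serverId: Int): AssistRoute =
    if (triggerSource == TRIGGER_SOURCE_ASSIST && carInfo != null) {
        AssistRoute.AndroidAuto(serverId)  // send ACTION_NAVIGATE_TO_AUTOMOTIVE_ASSIST
    } else {
        AssistRoute.Mobile                 // no Android Auto connection: stay in AssistActivity
    }
```

Keeping the decision in one place like this means both the widget path and any future entry points resolve the destination identically.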

Technical Implementation Details

1. Intent Routing (AssistActivity)

AssistActivity now checks the triggerSource extra on launch. If it matches TRIGGER_SOURCE_ASSIST and an Android Auto connection exists, it broadcasts ACTION_NAVIGATE_TO_AUTOMOTIVE_ASSIST (with EXTRA_SERVER) and calls finish().

2. Car App Service (HaCarAppService)

HaCarAppService now registers a BroadcastReceiver for ACTION_NAVIGATE_TO_AUTOMOTIVE_ASSIST during onCreate(). When received, it calls currentSession.navigateToAssist(serverId), which:

  • Creates an AutomotiveAssistViewModel with the appropriate audio strategy and ExoPlayer.
  • Creates an AutomotiveAssistScreen and pushes it onto the ScreenManager.

Additionally, the microphone icon on MainVehicleScreen now triggers the same broadcast, providing an in-app entry point to the Assist screen.
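The receive-and-push flow above can be modelled in a few lines. In this toy sketch, an ArrayDeque stands in for the Car App ScreenManager and a string stands in for the real AutomotiveAssistScreen; only the method and screen names mirror the PR.

```kotlin
// Toy model of navigateToAssist(): on receiving the broadcast, the session
// pushes a new Assist screen onto its screen stack.
class ScreenStack {
    private val stack = ArrayDeque<String>()
    fun push(screen: String) = stack.addLast(screen)
    fun top(): String? = stack.lastOrNull()
}

class Session(val screens: ScreenStack = ScreenStack()) {
    fun navigateToAssist(serverId: Int) {
        // The real code also creates the ViewModel with its audio strategy and ExoPlayer.
        screens.push("AutomotiveAssistScreen(server=$serverId)")
    }
}
```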

3. State-Driven UI (AutomotiveAssistScreen)

A new Car App Screen tailored for vehicle use. It listens to:

  • viewModel.conversation — message list
  • viewModel.processingState — loading indicator
  • viewModel.isAudioPlaying — playback icon
  • viewModel.inputMode — voice/text mode

Uses a Car ListTemplate with Header and Row items for messages, mapping state to CommunityMaterial icons (microphone, volume, sync).
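The state-to-icon mapping might look roughly like the following. The priority ordering and the icon identifiers are guesses for illustration; only the state names (audio playing, processing, voice mode) come from the PR.

```kotlin
// Hypothetical mapping from ViewModel state to a CommunityMaterial icon name,
// checked in priority order. The real screen wraps these in Car App
// Row/ListTemplate builders; the identifiers here are assumptions.
fun assistIconFor(isAudioPlaying: Boolean, isProcessing: Boolean, voiceActive: Boolean): String = when {
    isAudioPlaying -> "cmd-volume-high"   // TTS response playing
    isProcessing -> "cmd-sync"            // pipeline running, loading indicator
    voiceActive -> "cmd-microphone"       // actively recording
    else -> "cmd-microphone-outline"      // idle, tap to speak
}
```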

4. AutomotiveAssistViewModel

A new AssistViewModelBase subclass providing a simplified state model for Android Auto. It maps the full AssistEvent pipeline to a reduced conversation, processingState, and isAudioPlaying flows. Uses @AssistedInject for DI with assisted factories.
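The reduced conversation flow can be sketched as a message list where placeholders are appended when listening starts and replaced in place as pipeline events arrive. The AssistMessage fields are inferred from the diff further down; the helper class itself is illustrative, and it tracks messages by index rather than by instance.

```kotlin
// Illustrative model of the reduced conversation state: a user placeholder and
// an assistant placeholder are appended, then replaced as Input/Output events
// arrive from the pipeline.
data class AssistMessage(val message: String, val isInput: Boolean, val isError: Boolean = false)

class ConversationModel {
    private val messages = mutableListOf<AssistMessage>()
    val conversation: List<AssistMessage> get() = messages.toList()

    fun addPlaceholder(isInput: Boolean): Int {
        messages += AssistMessage("...", isInput)
        return messages.lastIndex  // track by index so replacement cannot miss
    }

    fun replaceAt(index: Int, text: String, isError: Boolean = false) {
        messages[index] = messages[index].copy(message = text.trim(), isError = isError)
    }
}
```

Tracking by index (or a stable ID) sidesteps the instance-equality pitfall of looking up a copied message by its old object reference.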

5. Base Refactoring (AssistViewModelBase)

  • onPause() and onDestroy() moved from public methods to overridable hooks.
  • isPlayingAudio exposed via StateFlow<Boolean> (isPlayingAudioState) for observe-in-UI patterns.
  • All state changes now use setIsPlayingAudio() to update the internal var and flow atomically.
  • Heavy Timber logging added with "[AA-Assist]" tag for debugging the pipeline across voice recognition, TTS playback, and intent handling stages.
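The single-mutator pattern for isPlayingAudio can be illustrated without Android dependencies. In this sketch a plain observer list stands in for the StateFlow; the point is that one method updates the internal var and notifies observers together, so the two can never disagree.

```kotlin
// Sketch of the setIsPlayingAudio() pattern described above. The real class
// exposes a StateFlow (isPlayingAudioState); the observer list is a stand-in.
class PlaybackState {
    private val observers = mutableListOf<(Boolean) -> Unit>()

    var isPlayingAudio: Boolean = false
        private set

    fun observe(onChange: (Boolean) -> Unit) { observers += onChange }

    // All state changes funnel through here.
    fun setIsPlayingAudio(playing: Boolean) {
        isPlayingAudio = playing
        observers.forEach { it(playing) }  // analogous to stateFlow.value = playing
    }
}
```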

6. Data Flow & Synchronization

Single Source of Truth: Both the mobile UI (AssistViewModel) and Android Auto UI (AutomotiveAssistViewModel) extend from AssistViewModelBase, sharing the core assist pipeline logic (recorder setup, WebSocket pipeline execution, TTS playback).

Thread Safety: All state mutations go through base class methods (setIsPlayingAudio()) to prevent race conditions during rapid state changes in the pipeline.

Lifecycle Management: AutomotiveAssistScreen collects from viewModel flows in init {} blocks, calling invalidate() to trigger Car App template recomposition on each emission.
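The collect-then-invalidate lifecycle pattern reduces to: every state emission requests a template recomputation. A minimal stand-in, with a plain observable in place of the coroutine flows:

```kotlin
// Toy model of the screen's init {} collectors: each emission triggers
// invalidate(), which in the real Car App framework makes the host call
// onGetTemplate() again. Flows are replaced by a plain observable here.
class Observable<T>(initial: T) {
    private val listeners = mutableListOf<(T) -> Unit>()
    var value: T = initial
        set(v) { field = v; listeners.forEach { it(v) } }
    fun collect(onEmit: (T) -> Unit) { listeners += onEmit }
}

class AssistCarScreenModel(conversation: Observable<List<String>>) {
    var invalidations = 0
        private set
    init {
        // Real code: scope.launch { viewModel.conversation.collect { invalidate() } }
        conversation.collect { invalidate() }
    }
    private fun invalidate() { invalidations++ }  // real Screen.invalidate() re-requests the template
}
```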


@home-assistant home-assistant Bot left a comment


Hi @lowlyocean

It seems you haven't yet signed a CLA. Please do so here.

Once you do that we will be able to review and accept this pull request.

Thanks!

@jpelgrom
Member

Hi 👋

I noticed you recently submitted a similar PR to the iOS app: home-assistant/iOS#4496

Did you fully prepare this one using AI systems as well? Did you review and test the changes, and consider Android Auto(motive) policy requirements?

@lowlyocean
Author

Hello again! I failed to realize both projects likely have the same reviewers.

This one was AI-assisted as well, but unlike the iOS PR it is something I've been testing on a real device connected to the Desktop Head Unit emulator. It's still very much a draft despite being functional in a crude sense.

I think the policy requirements are somewhat similar to iOS, but unlike the CarPlay version it's not clear how to trigger directly into an Assist screen without navigating from a MainVehicleScreen. In this case, an argument for being compliant with policy might involve separating Android Auto Assist into its own "App".





@lowlyocean
Author

lowlyocean commented Apr 23, 2026

@jpelgrom @bgoncal I consider this Android Auto version to be "fully functioning" now. Providing a video example below. I wasn't able to record my microphone input but you can see what I said as well as hear the Assistant's responses.

First, I give an instruction to tell a joke and after the assistant responds the conversation ends. No further input is recorded.

I click the icon and then start a new conversation, asking the assistant to ask me a question. When it does, it listens for my reply. When the assistant responds without ending in a question, the conversation ends. No further input is recorded.

I hope you can use this as a start for getting it into the official app (fully expecting that this PR will be closed).

two_examples.mp4


@lowlyocean lowlyocean marked this pull request as ready for review April 29, 2026 12:57
Copilot AI review requested due to automatic review settings April 29, 2026 12:57
Contributor

Copilot AI left a comment


Pull request overview

Adds an automotive (car app) entry point for Assist so that Assist triggers can route to an in-vehicle Assist UI and start listening automatically.

Changes:

  • Adds a new Automotive Assist screen + ViewModel and wires navigation into HaCarAppService
  • Routes Assist shortcut triggers toward the automotive experience when car context is available
  • Adds additional logging and exposes audio playback state as a StateFlow

Reviewed changes

Copilot reviewed 10 out of 10 changed files in this pull request and generated 23 comments.

Summary per file:

  • common/src/main/kotlin/io/homeassistant/companion/android/common/util/AudioUrlPlayer.kt: Adds verbose Assist-related logging around audio playback and volume checks
  • common/src/main/kotlin/io/homeassistant/companion/android/common/assist/AssistViewModelBase.kt: Adds lifecycle hooks, introduces isPlayingAudioState, and adds extensive debug logging
  • app/src/main/kotlin/io/homeassistant/companion/android/widgets/assist/AssistShortcutActivity.kt: Adds trigger source extra to distinguish Assist shortcut launches
  • app/src/main/kotlin/io/homeassistant/companion/android/vehicle/MainVehicleScreen.kt: Adds an “Assist” item that broadcasts a navigation intent to the car service
  • app/src/main/kotlin/io/homeassistant/companion/android/vehicle/HaCarAppService.kt: Registers a navigation broadcast receiver and creates/pushes the new Automotive Assist screen
  • app/src/main/kotlin/io/homeassistant/companion/android/vehicle/AutomotiveAssistScreen.kt: New car screen rendering a simplified Assist conversation UI
  • app/src/main/kotlin/io/homeassistant/companion/android/assist/AutomotiveAssistViewModel.kt: New ViewModel orchestrating pipeline selection, conversation updates, and voice interaction for automotive
  • app/src/main/kotlin/io/homeassistant/companion/android/assist/AssistViewModel.kt: Overrides lifecycle hooks to align with new base-class hooks
  • app/src/main/kotlin/io/homeassistant/companion/android/assist/AssistAudioStrategyFactory.kt: Minor formatting-only change
  • app/src/main/kotlin/io/homeassistant/companion/android/assist/AssistActivity.kt: Adds trigger-source routing and broadcasts navigation into the car app when appropriate

override fun onCreate() {
super.onCreate()
val filter = IntentFilter(ACTION_NAVIGATE_TO_AUTOMOTIVE_ASSIST)
registerReceiver(navigationReceiver, filter)

Copilot AI Apr 29, 2026


The service registers a receiver via the platform registerReceiver(...) overload. Elsewhere in the app the convention is ContextCompat.registerReceiver(..., RECEIVER_NOT_EXPORTED/EXPORTED) to satisfy modern Android receiver-export rules and lint. Please switch to ContextCompat.registerReceiver with RECEIVER_NOT_EXPORTED for this in-app navigation receiver.

Suggested change
registerReceiver(navigationReceiver, filter)
ContextCompat.registerReceiver(
this,
navigationReceiver,
filter,
ContextCompat.RECEIVER_NOT_EXPORTED
)

Comment on lines +152 to +177
val automotiveAssistViewModel = automotiveAssistViewModelFactory.create(
serverManager,
audioStrategyFactory.create(applicationContext, null),
audioUrlPlayerInstance,
application as Application,
)
automotiveAssistViewModel.onCreate(
hasPermission = ContextCompat.checkSelfPermission(
applicationContext,
android.Manifest.permission.RECORD_AUDIO,
) == PackageManager.PERMISSION_GRANTED,
serverId = serverId,
pipelineId = null,
startListening = true,
)

return automotiveAssistScreenFactory.create(
carContext,
serverManager,
serverId,
audioStrategyFactory,
audioUrlPlayerInstance,
application as Application,
automotiveAssistViewModel,
lifecycleScope,
)

Copilot AI Apr 29, 2026


AutomotiveAssistViewModel (an AndroidViewModel) is being created manually via an assisted factory instead of through a ViewModelProvider. That means onCleared() will never be called, so viewModelScope coroutines (wake-word collection, pipeline job, recorder job) can leak beyond the screen/session lifecycle. Please tie this ViewModel to a proper ViewModelStoreOwner (if available in the car app stack), or refactor it into a lifecycle-managed class where you explicitly cancel its scope and call the existing onDestroy() cleanup when the screen/session ends.

}

override fun onGetTemplate(): Template {
Timber.d("onGetTemplate called")

Copilot AI Apr 29, 2026


onGetTemplate() can be called frequently by the car host; logging every call (Timber.d("onGetTemplate called")) is likely to create noisy logs and overhead. Consider removing this log or gating it behind a more targeted debug flag.

Suggested change
Timber.d("onGetTemplate called")

Comment on lines +27 to +60
class AutomotiveAssistViewModel @AssistedInject constructor(
@Assisted override val serverManager: ServerManager,
@Assisted override val audioStrategy: AssistAudioStrategy,
@Assisted private val audioUrlPlayer: AudioUrlPlayer,
@Assisted private val application: Application,
) : AssistViewModelBase(serverManager, audioStrategy, audioUrlPlayer, application) {

val isAudioPlaying: StateFlow<Boolean> = isPlayingAudioState

private val _processingState = MutableStateFlow(false)
val processingState: StateFlow<Boolean> = _processingState

private var pipelineId: String? = null
private var pipelineJob: Job? = null
private var activeUserMessage: AssistMessage? = null
private var activeHaMessage: AssistMessage? = null
private var isContinuationTurn = false

var isProcessing by mutableStateOf(false)
private set

@AssistedFactory
interface Factory {
fun create(
serverManager: ServerManager,
audioStrategy: AssistAudioStrategy,
audioUrlPlayer: AudioUrlPlayer,
application: Application,
): AutomotiveAssistViewModel
}

private val _conversation = MutableStateFlow<List<AssistMessage>>(emptyList())
val conversation: StateFlow<List<AssistMessage>> = _conversation.asStateFlow()


Copilot AI Apr 29, 2026


This new ViewModel introduces non-trivial state handling (pipeline selection, conversation mutation, continuation turns, recording/pipeline lifecycle). There are existing unit tests for AssistViewModel under app/src/test/.../assist/; adding a focused AutomotiveAssistViewModel test suite would help prevent regressions (e.g., processing state toggling, placeholder replacement, continue-conversation behavior).

Comment on lines +37 to +38
@RequiresApi
class AutomotiveAssistScreen @AssistedInject constructor(

Copilot AI Apr 29, 2026


@RequiresApi is used without specifying an API level (e.g., @RequiresApi(Build.VERSION_CODES.O)). The androidx RequiresApi annotation requires a value, so this will not compile as written. Please provide the intended API level or remove the annotation if it is not needed.

Comment on lines +49 to +69
init {
scope.launch {
viewModel.conversation.collect {
invalidate()
}
}
scope.launch {
viewModel.processingState.collect {
invalidate()
}
}
scope.launch {
viewModel.isAudioPlaying.collect {
invalidate()
}
}
scope.launch {
snapshotFlow { viewModel.inputMode }.collect {
invalidate()
}
}

Copilot AI Apr 29, 2026


The collectors started in init use an externally provided CoroutineScope. In other vehicle screens, lifecycleScope/repeatOnLifecycle is used so collection stops automatically when the screen is stopped/destroyed. As written, these collectors can outlive the screen and keep calling invalidate(). Please switch to lifecycleScope (or observe lifecycle to cancel) and use repeatOnLifecycle for collection.

Comment on lines +36 to +248
private val _processingState = MutableStateFlow(false)
val processingState: StateFlow<Boolean> = _processingState

private var pipelineId: String? = null
private var pipelineJob: Job? = null
private var activeUserMessage: AssistMessage? = null
private var activeHaMessage: AssistMessage? = null
private var isContinuationTurn = false

var isProcessing by mutableStateOf(false)
private set

@AssistedFactory
interface Factory {
fun create(
serverManager: ServerManager,
audioStrategy: AssistAudioStrategy,
audioUrlPlayer: AudioUrlPlayer,
application: Application,
): AutomotiveAssistViewModel
}

private val _conversation = MutableStateFlow<List<AssistMessage>>(emptyList())
val conversation: StateFlow<List<AssistMessage>> = _conversation.asStateFlow()

var inputMode by mutableStateOf<AssistInputMode?>(null)
private set

var shouldFinish by mutableStateOf(false)
private set

var recorderAutoStart by mutableStateOf(false)
private set

override fun getInput(): AssistInputMode? = inputMode

override fun setInput(inputMode: AssistInputMode) {
this.inputMode = inputMode
}

init {
viewModelScope.launch {
audioStrategy.wakeWordDetected.collect { detectedPhrase ->
if (inputMode != AssistInputMode.VOICE_ACTIVE) {
onMicrophoneInput(clearConversation = false)
}
}
}
}

fun onCreate(hasPermission: Boolean, serverId: Int?, pipelineId: String?, startListening: Boolean?) {
viewModelScope.launch {
this@AutomotiveAssistViewModel.hasPermission = hasPermission
serverId?.let {
selectedServerId = it
}
startListening?.let { recorderAutoStart = it }

if (!serverManager.isRegistered()) {
inputMode = AssistInputMode.BLOCKED
_conversation.value = listOf(
AssistMessage(
app.getString(io.homeassistant.companion.android.common.R.string.not_registered),
isInput = false,
),
)
return@launch
}

if (pipelineId != null) {
setPipeline(pipelineId)
} else {
val lastPipelineId = serverManager.integrationRepository(selectedServerId).getLastUsedPipelineId()
Timber.tag("[AA-Assist]").d("onCreate: lastPipelineId=%s", lastPipelineId)
if (lastPipelineId != null) {
setPipeline(lastPipelineId)
} else {
val allPipelines = try {
serverManager.webSocketRepository(selectedServerId).getAssistPipelines()
} catch (e: Exception) {
Timber.e(e, "Failed to get assist pipelines")
null
}
Timber.tag("[AA-Assist]").d("onCreate: allPipelines=%s", allPipelines?.pipelines?.map { it.id })
val ttsPipeline = allPipelines?.pipelines?.firstOrNull { it.ttsEngine != null }
if (ttsPipeline != null) {
Timber.tag("[AA-Assist]").d("onCreate: using TTS pipeline=%s", ttsPipeline.id)
setPipeline(ttsPipeline.id)
} else {
inputMode = AssistInputMode.BLOCKED
_conversation.value = listOf(
AssistMessage(
app.getString(io.homeassistant.companion.android.common.R.string.assist_error),
isInput = false,
),
)
}
}
}

if (hasPermission && recorderAutoStart) {
onMicrophoneInput(proactive = true, clearConversation = true)
}
}
}

private suspend fun setPipeline(id: String?) {
pipelineId = id
Timber.tag("[AA-Assist]").d("setPipeline: id=%s", id)
val pipeline = try {
serverManager.webSocketRepository(selectedServerId).getAssistPipeline(id)
} catch (e: Exception) {
Timber.e(e, "Failed to get assist pipeline")
null
}

Timber.tag("[AA-Assist]").d(
"setPipeline: pipeline=%s, ttsEngine=%s, sttEngine=%s",
pipeline?.id,
pipeline?.ttsEngine,
pipeline?.sttEngine,
)
if (pipeline != null) {
_conversation.value = emptyList()
activeUserMessage = null
activeHaMessage = null
inputMode = if (pipeline.sttEngine != null) AssistInputMode.VOICE_INACTIVE else AssistInputMode.TEXT_ONLY
} else {
inputMode = AssistInputMode.BLOCKED
}
}

fun onMicrophoneInput(
proactive: Boolean = false,
isContinuation: Boolean = false,
clearConversation: Boolean = false,
) {
Timber.d(
"onMicrophoneInput called " +
"(proactive=$proactive, isContinuation=$isContinuation, clearConversation=$clearConversation)",
)
if (!hasPermission) {
Timber.w("onMicrophoneInput aborted: no permission")
return
}

if (clearConversation) {
_conversation.value = emptyList()
activeUserMessage = null
activeHaMessage = null
pipelineJob?.cancel()
}

stopPlayback()
setupRecorder(onError = {
stopRecording()
_conversation.value = _conversation.value + AssistMessage(
app.getString(io.homeassistant.companion.android.common.R.string.assist_error),
isInput = false,
isError = true,
)
Timber.e(it, "Recorder setup failed")
})
if (!isContinuation) {
inputMode = AssistInputMode.VOICE_ACTIVE
}

if (proactive) {
if (isContinuation) {
// Just add user placeholder, pipeline already running
activeUserMessage = AssistMessage.placeholder(isInput = true)
_conversation.value = _conversation.value + activeUserMessage!!
activeHaMessage = AssistMessage.placeholder(isInput = false)
} else {
// New pipeline, add placeholders and start pipeline
activeUserMessage = AssistMessage.placeholder(isInput = true)
activeHaMessage = AssistMessage.placeholder(isInput = false)
_conversation.value = _conversation.value + activeUserMessage!! + activeHaMessage!!
runAssistPipeline(null)
}
}
}

private fun runAssistPipeline(text: String?, skipStopPlayback: Boolean = false) {
Timber.tag("[AA-Assist]").d("runAssistPipeline: text=%s, isVoice=%s", text, text == null)
if (!skipStopPlayback) {
stopPlayback()
}

pipelineJob = viewModelScope.launch {
val pipeline = try {
val id = pipelineId ?: serverManager.integrationRepository(selectedServerId).getLastUsedPipelineId()
Timber.tag("[AA-Assist]").d(
"runAssistPipeline: pipelineId=%s, lastPipelineId=%s",
pipelineId,
serverManager.integrationRepository(selectedServerId).getLastUsedPipelineId(),
)
id?.let {
serverManager.webSocketRepository(selectedServerId).getAssistPipeline(it)
}
} catch (e: Exception) {
Timber.e(e, "Failed to get assist pipeline")
null
}

Timber.tag("[AA-Assist]").d(
"runAssistPipeline: pipeline=%s, ttsEngine=%s, sttEngine=%s",
pipeline?.id,
pipeline?.ttsEngine,
pipeline?.sttEngine,
)
isProcessing = true
runAssistPipelineInternal(

Copilot AI Apr 29, 2026


_processingState is never set to true, so processingState will not emit when processing starts. At the same time, isProcessing is kept separately as Compose state, which the car screen does not observe reliably. Please consolidate to a single observable source of truth (preferably a StateFlow) and update it both on start and on all end paths (pipeline end, dismiss, errors).

Comment on lines +255 to +293
val currentList = _conversation.value.toMutableList()
if (event is AssistEvent.Message.Error) {
if (activeHaMessage != null) {
val haIndex = currentList.indexOf(activeHaMessage)
if (haIndex != -1) {
currentList[haIndex] = activeHaMessage!!.copy(
message = event.message.trim(),
isError = true,
)
_conversation.value = currentList
}
}
} else if (event is AssistEvent.Message.Input) {
if (activeUserMessage != null) {
val userIndex = currentList.indexOf(activeUserMessage)
if (userIndex != -1) {
currentList[userIndex] = activeUserMessage!!.copy(
message = event.message.trim(),
isError = false,
)
// Add assistant placeholder for the response if not already in list
if (currentList.indexOf(activeHaMessage) == -1) {
activeHaMessage = AssistMessage.placeholder(isInput = false)
currentList.add(activeHaMessage!!)
}
_conversation.value = currentList
}
}
} else if (event is AssistEvent.Message.Output) {
if (activeHaMessage != null) {
val haIndex = currentList.indexOf(activeHaMessage)
if (haIndex != -1) {
currentList[haIndex] = activeHaMessage!!.copy(
message = event.message.trim(),
isError = false,
)
_conversation.value = currentList
}
}

Copilot AI Apr 29, 2026


When you replace entries in _conversation with activeUserMessage!!.copy(...) / activeHaMessage!!.copy(...), the active*Message fields are not updated to the new instance. Subsequent indexOf(activeHaMessage) / indexOf(activeUserMessage) calls may fail because the list now contains the copied instance, not the old one. Please update activeUserMessage/activeHaMessage to the new copied value (or track messages by stable IDs) whenever you mutate the list.

}

private fun runAssistPipeline(text: String?, skipStopPlayback: Boolean = false) {
Timber.tag("[AA-Assist]").d("runAssistPipeline: text=%s, isVoice=%s", text, text == null)

Copilot AI Apr 29, 2026


runAssistPipeline logs the text argument. If this is ever used for text input (or if upstream changes later), it will log user-provided content. Please avoid logging raw user input, or redact it using sensitive(...) / log only whether text was provided.

Suggested change
Timber.tag("[AA-Assist]").d("runAssistPipeline: text=%s, isVoice=%s", text, text == null)
Timber.tag("[AA-Assist]").d(
"runAssistPipeline: hasText=%s, isVoice=%s",
!text.isNullOrEmpty(),
text == null,
)

Comment on lines +152 to +154
val canPlay = audioManager != null && audioManager.getStreamVolume(STREAM_MUSIC) != 0
Timber.tag("[AA-Assist]").d("AudioUrlPlayer.canPlayMusic: audioManager=%s, volume=%s, canPlay=%s",
audioManager != null, audioManager?.getStreamVolume(STREAM_MUSIC), canPlay)

Copilot AI Apr 29, 2026


canPlayMusic() calls audioManager?.getStreamVolume(STREAM_MUSIC) twice (once for canPlay and once for logging). That’s redundant and can also risk inconsistent values or repeated exceptions. Consider reading the volume once into a local variable and logging that value.

Suggested change
val canPlay = audioManager != null && audioManager.getStreamVolume(STREAM_MUSIC) != 0
Timber.tag("[AA-Assist]").d("AudioUrlPlayer.canPlayMusic: audioManager=%s, volume=%s, canPlay=%s",
audioManager != null, audioManager?.getStreamVolume(STREAM_MUSIC), canPlay)
val streamVolume = audioManager?.getStreamVolume(STREAM_MUSIC)
val canPlay = streamVolume != null && streamVolume != 0
Timber.tag("[AA-Assist]").d(
"AudioUrlPlayer.canPlayMusic: audioManager=%s, volume=%s, canPlay=%s",
audioManager != null,
streamVolume,
canPlay,
)

@TimoPtr
Member

TimoPtr commented May 5, 2026

Did you sign the CLA? Also, please fill in the PR description following the template we have.

@TimoPtr TimoPtr marked this pull request as draft May 5, 2026 09:33
@lowlyocean
Author

Please feel free to close this and resubmit as you see fit

@TimoPtr
Member

TimoPtr commented May 5, 2026

Please feel free to close this and resubmit as you see fit

Why closing? I'm asking for a proper description of your PR

@lowlyocean
Author

I do not wish to sign CLA - I've added a description if it is helpful to you

@TimoPtr
Member

TimoPtr commented May 5, 2026

Got it, then I'll close it. If someone wants to implement this feature, they will need the CLA.

@TimoPtr TimoPtr closed this May 5, 2026
