Severity: Medium — affects all users of tool-based A2UI emission
## Summary

`A2uiSchemaManager.generate_system_prompt()` unconditionally injects `<a2ui-json>` tag workflow instructions into the system prompt, even when `include_schema=False`. When used alongside `SendA2uiToClientToolset` (the tool-based approach), this creates conflicting instructions:

- The system prompt tells the LLM: "wrap A2UI JSON in `<a2ui-json>` tags" (inline text output)
- The toolset tells the LLM: "call `send_a2ui_json_to_client`" (structured function call)

With Gemini 2.5 Flash, this causes non-deterministic behavior — the model randomly chooses between proper tool use, malformed code-style output, or inline text with tags.
## Reproduction

### Setup
```python
# agent.py
from google.adk.agents.llm_agent import LlmAgent
from a2ui.core.schema.manager import A2uiSchemaManager
from a2ui.basic_catalog.provider import BasicCatalog
from a2ui.core.schema.constants import VERSION_0_8
from a2ui.adk.a2a_extension.send_a2ui_to_client_toolset import SendA2uiToClientToolset

schema_manager = A2uiSchemaManager(
    version=VERSION_0_8,
    catalogs=[BasicCatalog.get_config(
        version=VERSION_0_8,
        examples_path="examples/standard",
    )],
)

selected_catalog = schema_manager.get_selected_catalog()
examples = schema_manager.load_examples(selected_catalog, validate=False)

# This injects <a2ui-json> tag instructions — even with include_schema=False
instruction = schema_manager.generate_system_prompt(
    role_description="You are a helpful assistant.",
    include_schema=False,
    include_examples=False,
)

root_agent = LlmAgent(
    model="gemini-2.5-flash",
    name="my_agent",
    instruction=instruction,
    tools=[
        SendA2uiToClientToolset(
            a2ui_enabled=True,
            a2ui_catalog=selected_catalog,
            a2ui_examples=examples or "",
        ),
    ],
)
```
Run with `adk api_server` and send a request via `/run_sse` with `streaming: true`.
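For reference, a minimal request against the local ADK API server might look like the following. The exact `/run_sse` payload field names are assumed from ADK's run-request shape; adjust `app_name` and the IDs to your setup, and create the session first via the session endpoint:

```shell
# Assumed payload shape for /run_sse (not verified against this exact
# google-adk version); -N keeps the SSE stream open.
curl -N -X POST http://localhost:8000/run_sse \
  -H "Content-Type: application/json" \
  -d '{
        "app_name": "my_agent",
        "user_id": "u_123",
        "session_id": "s_123",
        "streaming": true,
        "new_message": {
          "role": "user",
          "parts": [{"text": "Create a contact card for Alice, email alice@test.com"}]
        }
      }'
```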
### What gets injected

Even with `include_schema=False, include_examples=False`, `generate_system_prompt()` always adds:
```text
The generated response MUST follow these rules:
- The response can contain one or more A2UI JSON blocks.
- Each A2UI JSON block MUST be wrapped in `<a2ui-json>` and `</a2ui-json>` tags.
- Between or around these blocks, you can provide conversational text.
- The JSON part MUST be a single, raw JSON object...
```
This directly conflicts with `SendA2uiToClientToolset`, which registers `send_a2ui_json_to_client` as a structured function call.
### Observed behavior

Sending "Create a contact card for Alice, email alice@test.com" produces three different outcomes non-deterministically:

1. **Proper tool call (desired)** — ~33% of the time with `gemini-2.5-flash`:

   ```text
   functionCall: { name: "send_a2ui_json_to_client", args: { a2ui_json: "[...]" } }
   ```

2. **Malformed code-style call** — ~20%:

   ```text
   Malformed function call: print(default_api.send_a2ui_json_to_client(a2ui_json='[...]'))
   ```

3. **Inline text with `<a2ui-json>` tags** — ~40%:

   ```text
   Here is the contact card:
   <a2ui-json>
   [{"beginRendering": ...}, {"surfaceUpdate": ...}]
   </a2ui-json>
   ```
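To tally the three outcomes across repeated runs, a small classifier over the response events can help. This is a hypothetical test-harness helper, not part of a2ui or ADK; the event dict here is a simplified stand-in for the actual SSE event payload, with assumed field names:

```python
import re

def classify_outcome(event: dict) -> str:
    """Bucket one model response into the outcomes observed above
    (hypothetical harness helper; event shape is assumed)."""
    if event.get("functionCall", {}).get("name") == "send_a2ui_json_to_client":
        return "tool_call"       # 1. proper structured call
    text = event.get("text", "")
    if re.search(r"default_api\.send_a2ui_json_to_client\(", text):
        return "malformed_call"  # 2. code-style pseudo-call emitted as text
    if re.search(r"<a2ui-json>.*?</a2ui-json>", text, re.DOTALL):
        return "inline_tags"     # 3. inline text wrapped in tags
    return "other"
```

Running the same prompt N times and counting buckets reproduces percentages like those below.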
## Test results

We ran systematic tests across models and `FunctionCallingConfigMode`:

| Model | FunctionCallingConfigMode | Success Rate | Failures |
| --- | --- | --- | --- |
| gemini-2.5-flash | AUTO (default) | 1/3 (33%) | 1 malformed, 1 inline |
| gemini-2.5-flash | ANY | 3/3 (100%) | — |
| gemini-2.5-pro | AUTO (default) | 2/3 (67%) | 1 unknown |
| gemini-2.5-pro | ANY | 3/3 (100%) | — |
## Current workarounds

### Workaround 1: Skip `generate_system_prompt` entirely

Write the instruction directly. `SendA2uiToClientToolset.process_llm_request()` injects the schema and examples automatically, so `generate_system_prompt` is redundant when using the toolset.

```python
# Instead of generate_system_prompt(), write the instruction directly:
instruction = """You are a helpful assistant.
When visual presentation helps, call the send_a2ui_json_to_client tool.
For simple questions, respond with text only."""

# Result: 15/15 success with gemini-2.5-flash + streaming=true
```
### Workaround 2: Force `FunctionCallingConfigMode.ANY`

```python
from google.genai import types

root_agent = LlmAgent(
    model="gemini-2.5-flash",
    instruction=instruction,
    generate_content_config=types.GenerateContentConfig(
        tool_config=types.ToolConfig(
            function_calling_config=types.FunctionCallingConfig(
                mode=types.FunctionCallingConfigMode.ANY,
            )
        )
    ),
    tools=[SendA2uiToClientToolset(...)],
)

# Result: 3/3 success even with the large 52k prompt
```
**Drawback of `ANY` mode:** The model must call a tool on every turn. Simple text queries like "What is 2+2?" will still trigger a `send_a2ui_json_to_client` call instead of a plain-text response. This breaks the natural mixed-mode behavior where the agent decides whether UI is appropriate.
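If `ANY` mode is otherwise attractive, one common mitigation is to register a trivial text-reply tool so the model always has a non-UI tool available to satisfy the forced call. This is a sketch under assumptions: `respond_with_text` is a hypothetical helper, not part of a2ui or ADK, and we have not verified it against this setup.

```python
# Hypothetical escape-hatch tool for ANY mode: the model must call *some*
# tool each turn, so give it one that just relays plain text.
def respond_with_text(text: str) -> dict:
    """Return a plain-text answer when no UI is needed (assumed helper)."""
    return {"status": "ok", "text": text}

# ADK's LlmAgent accepts plain Python callables as tools, so the helper
# could be registered alongside the toolset (sketch, not verified):
#   tools=[SendA2uiToClientToolset(...), respond_with_text]
```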
## Suggested fix

### Option A: Add an `include_workflow` parameter

```python
instruction = schema_manager.generate_system_prompt(
    role_description="You are a helpful assistant.",
    include_schema=False,
    include_examples=False,
    include_workflow=False,  # NEW: skip <a2ui-json> tag instructions
)
```
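A minimal sketch of what the change inside `generate_system_prompt` could look like. The function body and section names below are assumptions for illustration, not the actual a2ui source; the real method lives on `A2uiSchemaManager` and assembles more sections than shown:

```python
# Standalone sketch of the proposed flag (assumed structure).
TAG_WORKFLOW_RULES = (
    "The generated response MUST follow these rules:\n"
    "- Each A2UI JSON block MUST be wrapped in `<a2ui-json>` and "
    "`</a2ui-json>` tags."
)

def generate_system_prompt(role_description: str,
                           include_schema: bool = True,
                           include_examples: bool = True,
                           include_workflow: bool = True) -> str:
    parts = [role_description]
    # ... schema and example sections elided ...
    if include_workflow:  # NEW: toolset users pass False to avoid the conflict
        parts.append(TAG_WORKFLOW_RULES)
    return "\n\n".join(parts)
```

Defaulting the flag to `True` keeps the change backward compatible for existing inline-tag users.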
### Option B: Auto-detect tool-based usage

If `SendA2uiToClientToolset` is registered, `generate_system_prompt` should not inject the `<a2ui-json>` tag workflow, since the toolset already handles prompt injection via `process_llm_request`.
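One way the detection could be sketched, purely for illustration. Matching on the class name is an assumption about the package layout (it would avoid the schema module importing the ADK integration module directly), not a verified design:

```python
# Illustrative helper: suppress the tag workflow when the A2UI toolset is
# among the agent's tools. Duck-typing on the class name avoids a circular
# import between schema and ADK modules (assumed layout, not verified).
def should_inject_tag_workflow(tools: list) -> bool:
    return not any(
        type(tool).__name__ == "SendA2uiToClientToolset" for tool in tools
    )
```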
### Option C: Document the incompatibility

At minimum, document that `generate_system_prompt()` should NOT be used with `SendA2uiToClientToolset`. The rizzcharts sample works around this by using a strong `WORKFLOW_DESCRIPTION` that overrides the injected tag instructions, but this is fragile and non-obvious.
## Note on the rizzcharts sample

The `rizzcharts` sample in this repo uses both `generate_system_prompt()` and `SendA2uiToClientToolset` together. It works because:

- Its `WORKFLOW_DESCRIPTION` explicitly says "Call the send_a2ui_json_to_client tool", overriding the tag instructions
- It uses `BuiltInPlanner` with `ThinkingConfig(include_thoughts=True)`, which may improve function-call reliability
- It may be tested with models or configurations that handle the conflict better

However, following the same pattern without the strong workflow override produces the non-deterministic behavior described above.
## Environment

- google-adk==1.28.0
- a2ui-agent==0.1.0 (from source, agent_sdks/python)
- gemini-2.5-flash and gemini-2.5-pro via Vertex AI
- Python 3.14, macOS