Installation

pip install "pandaprobe[google-adk]"

Setup

from pandaprobe.integrations.google_adk import GoogleADKAdapter

adapter = GoogleADKAdapter(
    session_id="conversation-123",
    user_id="user-abc",
    tags=["production"],
)
adapter.instrument()
Call instrument() once at application startup, before creating any ADK agents or runners.

Usage

from google.adk.agents import LlmAgent
from google.adk.runners import Runner
from google.adk.sessions import InMemorySessionService
from google.genai import types

agent = LlmAgent(
    name="my-agent",
    model="gemini-2.5-flash",
    instruction="You are a helpful assistant.",
)

session_service = InMemorySessionService()
session = await session_service.create_session(app_name="my-app", user_id="user-1")
runner = Runner(agent=agent, app_name="my-app", session_service=session_service)

content = types.Content(role="user", parts=[types.Part(text="Hello!")])
async for event in runner.run_async(session_id=session.id, user_id="user-1", new_message=content):
    if event.content and event.content.parts:
        print(event.content.parts[0].text, end="")

What gets traced

| ADK Component | Span Kind | Description |
| --- | --- | --- |
| Runner.run_async | CHAIN | Root trace boundary |
| BaseAgent.run_async | AGENT | Agent execution with session messages as input |
| BaseLlmFlow._call_llm_async | LLM | LLM calls with model, params, thinking, token usage, TTFT |
| BaseTool.run_async | TOOL | Base tool execution |
| FunctionTool.run_async | TOOL | Function tool execution |
| McpTool.run_async | TOOL | MCP tool execution |

Token usage mapping

| ADK Field | PandaProbe Field |
| --- | --- |
| prompt_token_count | prompt_tokens |
| candidates_token_count | completion_tokens |
| total_token_count | total_tokens |
| thoughts_token_count | reasoning_tokens |
| cached_content_token_count | cache_read_tokens |
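The mapping above can be sketched in plain Python. The field names come from the table; the dictionary and the map_token_usage helper below are illustrative, not PandaProbe's actual internals.

```python
# ADK/Gemini usage_metadata field -> PandaProbe field, per the table above.
ADK_TO_PANDAPROBE = {
    "prompt_token_count": "prompt_tokens",
    "candidates_token_count": "completion_tokens",
    "total_token_count": "total_tokens",
    "thoughts_token_count": "reasoning_tokens",
    "cached_content_token_count": "cache_read_tokens",
}

def map_token_usage(usage: dict) -> dict:
    """Translate ADK usage-metadata keys to PandaProbe names.

    Sketch only: the real adapter reads these fields from the
    response's usage_metadata object. Missing or None counts
    (e.g. no thinking, no cache hit) are simply omitted.
    """
    return {
        pp_key: usage[adk_key]
        for adk_key, pp_key in ADK_TO_PANDAPROBE.items()
        if usage.get(adk_key) is not None
    }
```

For example, a response with only prompt, candidate, and total counts yields just prompt_tokens, completion_tokens, and total_tokens.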

Thinking mode

When a Gemini model runs with thinking enabled, thought parts are automatically separated from answer parts, and the thinking content is stored in span metadata as reasoning_summary.
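The separation described above can be sketched in plain Python. In Gemini responses, reasoning parts carry a thought flag; the dict-based parts and the split_thought_parts helper here are illustrative stand-ins, not PandaProbe's actual API.

```python
def split_thought_parts(parts: list[dict]) -> tuple[str, str]:
    """Split response parts into (reasoning_summary, answer_text).

    Sketch only: real Gemini parts are objects whose `thought`
    attribute is True for thinking content; plain dicts stand in
    for them here.
    """
    thoughts = [p["text"] for p in parts if p.get("thought")]
    answers = [p["text"] for p in parts if not p.get("thought")]
    # Thinking content would land in span metadata as reasoning_summary;
    # the remaining parts form the answer.
    return "\n".join(thoughts), "".join(answers)
```

A response like [thought: "Let me think...", answer: "Hello", answer: "!"] would split into a reasoning summary and the joined answer text.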