## Installation

Install the OpenAI Agents extra with pip or uv:

```bash
pip install "pandaprobe[openai-agents]"
# or, equivalently:
uv add "pandaprobe[openai-agents]"
```
Then create an adapter and instrument the SDK before running any agents:

```python
from pandaprobe.integrations.openai_agents import OpenAIAgentsAdapter

adapter = OpenAIAgentsAdapter(
    session_id="conversation-123",  # groups related runs into one session
    user_id="user-abc",             # attributes traces to an end user
    tags=["production"],            # free-form labels for filtering traces
)
adapter.instrument()
```
We recommend using UUIDs for session_id and user_id so traces can be grouped reliably across runs.
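For example, a minimal sketch of one possible ID scheme (the email below is a placeholder; any scheme works as long as the same user always maps to the same ID):

```python
import uuid

from pandaprobe.integrations.openai_agents import OpenAIAgentsAdapter

adapter = OpenAIAgentsAdapter(
    # Fresh UUID per conversation; reuse it across runs of the same session.
    session_id=str(uuid.uuid4()),
    # Deterministic UUID derived from a stable key (placeholder email).
    user_id=str(uuid.uuid5(uuid.NAMESPACE_DNS, "alice@example.com")),
    tags=["production"],
)
adapter.instrument()
```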
Traces are then captured automatically when you run an agent:

```python
from agents import Agent, Runner

agent = Agent(
    name="my-assistant",
    instructions="You are a helpful assistant.",
    model="gpt-5.4",
)

result = Runner.run_sync(agent, "What is PandaProbe?")
print(result.final_output)
```
## What gets traced

| SDK Span Type | PandaProbe Kind | Description |
|---|---|---|
| agent | AGENT | Agent execution with tools, handoffs, output_type metadata |
| handoff | AGENT | Agent handoff with from_agent / to_agent metadata |
| response | LLM | Responses API call with input, output, model, usage, reasoning |
| generation | LLM | Chat Completions call with messages, model, usage |
| function | TOOL | Function tool call with JSON-parsed input/output |
| guardrail | OTHER | Guardrail check with triggered metadata |
| custom | OTHER | Custom span with custom_data metadata |
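Conceptually, this mapping is a straight lookup from SDK span type to PandaProbe kind. A minimal sketch (the dictionary and the fallback behavior are illustrative assumptions, not the adapter's internals):

```python
# Illustrative only: mirrors the table above, not the adapter's real code.
SPAN_KIND_MAP = {
    "agent": "AGENT",
    "handoff": "AGENT",
    "response": "LLM",
    "generation": "LLM",
    "function": "TOOL",
    "guardrail": "OTHER",
    "custom": "OTHER",
}

def to_pandaprobe_kind(span_type: str) -> str:
    # Assumed fallback: unknown span types are treated as OTHER.
    return SPAN_KIND_MAP.get(span_type, "OTHER")
```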
## I/O propagation
The adapter automatically propagates input/output from LLM spans to their parent agent and the root trace:
- First LLM input becomes the agent span’s input
- Last LLM output becomes the agent span’s output
- Last user message becomes the trace input; last assistant message becomes the trace output
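In rough pseudocode, those rules amount to the following (a sketch with assumed attribute names; the adapter's real data model will differ):

```python
# Sketch of the propagation rules; span/trace attribute names are assumptions.
def propagate_io(agent_span, llm_spans, trace) -> None:
    if not llm_spans:
        return
    # First LLM input -> agent span input; last LLM output -> agent span output.
    agent_span.input = llm_spans[0].input
    agent_span.output = llm_spans[-1].output
    # Last user message -> trace input; last assistant message -> trace output.
    messages = [m for span in llm_spans for m in span.messages]
    user_msgs = [m for m in messages if m["role"] == "user"]
    assistant_msgs = [m for m in messages if m["role"] == "assistant"]
    if user_msgs:
        trace.input = user_msgs[-1]["content"]
    if assistant_msgs:
        trace.output = assistant_msgs[-1]["content"]
```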
## Token usage mapping

| OpenAI Agents Field | PandaProbe Field |
|---|---|
| input_tokens / prompt_tokens | prompt_tokens |
| output_tokens / completion_tokens | completion_tokens |
| total_tokens | total_tokens |
| input_tokens_details.cached_tokens | cache_read_tokens |
| output_tokens_details.reasoning_tokens | reasoning_tokens |
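A minimal sketch of that normalization, assuming usage arrives as a plain dict (the helper name is ours, not part of either library):

```python
def normalize_usage(usage: dict) -> dict:
    """Map OpenAI Agents usage fields to PandaProbe's names (sketch only)."""
    input_details = usage.get("input_tokens_details") or {}
    output_details = usage.get("output_tokens_details") or {}
    return {
        # Responses API reports input_tokens/output_tokens; Chat Completions
        # reports prompt_tokens/completion_tokens. Accept either spelling.
        "prompt_tokens": usage.get("input_tokens", usage.get("prompt_tokens")),
        "completion_tokens": usage.get("output_tokens", usage.get("completion_tokens")),
        "total_tokens": usage.get("total_tokens"),
        "cache_read_tokens": input_details.get("cached_tokens"),
        "reasoning_tokens": output_details.get("reasoning_tokens"),
    }
```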
Putting it all together, here is a complete example with function tools:

```python
import asyncio
import uuid

from agents import Agent, Runner, function_tool

import pandaprobe
from pandaprobe.integrations.openai_agents import OpenAIAgentsAdapter

SESSION_ID = str(uuid.uuid4())
USER_ID = "user_1"


@function_tool
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    weather_data = {
        "london": "Cloudy, 15°C, 70% humidity",
        "tokyo": "Sunny, 28°C, 45% humidity",
        "new york": "Partly cloudy, 22°C, 55% humidity",
        "paris": "Rainy, 12°C, 85% humidity",
    }
    return weather_data.get(city.lower(), f"No weather data for {city}")


@function_tool
def get_population(city: str) -> str:
    """Get the approximate population of a city."""
    populations = {
        "london": "8.8 million",
        "tokyo": "13.9 million",
        "new york": "8.3 million",
        "paris": "2.2 million",
    }
    return populations.get(city.lower(), f"Unknown population for {city}")


async def main():
    # Instrument before running the agent so every span is captured.
    adapter = OpenAIAgentsAdapter(
        session_id=SESSION_ID,
        user_id=USER_ID,
        tags=["tool-agent", "example"],
    )
    adapter.instrument()

    agent = Agent(
        name="City Info Agent",
        instructions=(
            "You are a helpful assistant with access to weather and population tools. "
            "Use the tools to answer questions about cities."
        ),
        model="gpt-5.4-nano",
        tools=[get_weather, get_population],
    )

    result = await Runner.run(agent, "What's the weather in London and what's its population?")
    print(f"Agent: {result.final_output}")

    # Flush pending spans and shut down cleanly before the process exits.
    pandaprobe.flush()
    pandaprobe.shutdown()
    print(f"\nTrace sent to PandaProbe backend (session={SESSION_ID}).")


if __name__ == "__main__":
    asyncio.run(main())
```
This produces a trace with: CHAIN (root) → AGENT (City Info Agent) → LLM (model call) → TOOL (get_weather) → TOOL (get_population) → LLM (final response).