Wrappers are the simplest way to add tracing to your LLM calls. Wrap your provider client once and every API call is traced automatically, with no other code changes required. The wrapper returns the same client type with tracing injected, so your existing code continues to work identically.

All wrappers automatically capture: input messages, output messages, model name, token usage, model parameters, streaming support, time-to-first-token for streaming calls, and error details.
Comparison
| Wrapper | Install Extra | Provider Methods Traced |
|---|---|---|
| `wrap_openai` | `"pandaprobe[openai]"` | `chat.completions.create`, `responses.create` |
| `wrap_anthropic` | `"pandaprobe[anthropic]"` | `messages.create`, `messages.stream` |
| `wrap_gemini` | `"pandaprobe[gemini]"` | `models.generate_content`, `models.generate_content_stream` |
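For the streaming methods listed above, wrappers also record time-to-first-token. As an illustration only (the names and structure here are a hypothetical stand-in, not pandaprobe's internals), a streaming wrapper can measure the delay to the first chunk and then pass the stream through untouched:

```python
import time

def fake_stream():
    # Stand-in for a provider's streaming response.
    for chunk in ["Hel", "lo"]:
        time.sleep(0.01)
        yield chunk

def traced_stream(stream, record):
    # Hypothetical sketch: time the gap until the first chunk arrives,
    # yield every chunk unchanged, then report the measurement.
    start = time.monotonic()
    first = None
    for chunk in stream:
        if first is None:
            first = time.monotonic() - start
        yield chunk
    record({"time_to_first_token_s": first})

metrics = {}
text = "".join(traced_stream(fake_stream(), metrics.update))
print(text)  # Hello
```

Because the wrapper is itself a generator, the caller still iterates chunks exactly as before; the measurement is a side effect.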
Quick example
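The real wrappers require a provider SDK and API key, so here is a minimal self-contained sketch of the underlying pattern instead: a proxy that forwards every attribute to the original client and records each call. All names below (`FakeClient`, `wrap`, `on_call`) are hypothetical stand-ins, not the pandaprobe API.

```python
class FakeClient:
    # Stand-in for a provider client such as openai.OpenAI().
    def create(self, model, messages):
        return {"model": model, "content": "hi"}

def wrap(client, on_call):
    # Return a proxy that behaves like the original client but
    # reports every method call to on_call (a "span" recorder).
    class Traced:
        def __getattr__(self, name):
            attr = getattr(client, name)
            if not callable(attr):
                return attr
            def traced(*args, **kwargs):
                result = attr(*args, **kwargs)
                on_call(name, kwargs, result)  # record the call as a span
                return result
            return traced
    return Traced()

spans = []
client = wrap(FakeClient(), lambda name, kw, res: spans.append((name, kw.get("model"))))
client.create(model="gpt-4o", messages=[{"role": "user", "content": "hi"}])
print(spans)  # [('create', 'gpt-4o')]
```

The key property, which the real wrappers share, is that calling code is unchanged: the proxy exposes the same methods as the client it wraps.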
Wrappers work seamlessly with manual instrumentation. If a wrapped call happens inside a pandaprobe.start_trace() block or a function decorated with @pandaprobe.trace, the LLM span is automatically nested as a child span.

Provider guides
- OpenAI: Chat Completions and Responses API, streaming, and tool spans
- Anthropic: Messages API, streaming patterns, and extended thinking
- Google Gemini: generate_content, async, streaming, and thinking mode
