
Wrappers

Wrappers provide the simplest way to add tracing to your LLM calls. Wrap your provider client once and every API call is automatically traced — no other code changes required. The wrapper returns the same client type with tracing injected. Your existing code continues to work identically. All wrappers automatically capture: input messages, output messages, model name, token usage, model parameters, streaming support, time-to-first-token for streaming, and error details.
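The source doesn't show how pandaprobe's wrappers are built, but the "same client type with tracing injected" idea can be illustrated with a small proxy. The sketch below is purely illustrative — TracingProxy and FakeClient are hypothetical names, not part of pandaprobe — and shows the general mechanism: intercept method calls, record a span, and delegate to the real client so existing code behaves identically.

```python
import functools
import time

class TracingProxy:
    """Toy stand-in for a tracing wrapper: presents the same interface as
    the wrapped client, but records every method call as a 'span'."""

    def __init__(self, client, spans):
        self._client = client
        self._spans = spans

    def __getattr__(self, name):
        attr = getattr(self._client, name)
        if not callable(attr):
            return attr

        @functools.wraps(attr)
        def traced(*args, **kwargs):
            start = time.perf_counter()
            try:
                # Delegate to the real client; return value is untouched
                return attr(*args, **kwargs)
            finally:
                # Record the call even if it raised, so errors are captured too
                self._spans.append(
                    {"method": name, "duration_s": time.perf_counter() - start}
                )

        return traced


class FakeClient:
    """Hypothetical client used only to exercise the proxy."""
    def complete(self, prompt):
        return f"echo: {prompt}"


spans = []
client = TracingProxy(FakeClient(), spans)
print(client.complete("hi"))   # → echo: hi (behaves exactly like the unwrapped client)
print(spans[0]["method"])      # → complete (the call was recorded as a span)
```

The real wrappers additionally know each provider's response shape, which is how they extract model name, token usage, and streaming timings rather than just call durations.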

Comparison

| Wrapper | Install Extra | Provider Methods Traced |
| --- | --- | --- |
| wrap_openai | "pandaprobe[openai]" | chat.completions.create, responses.create |
| wrap_anthropic | "pandaprobe[anthropic]" | messages.create, messages.stream |
| wrap_gemini | "pandaprobe[gemini]" | models.generate_content, models.generate_content_stream |
Install only the provider extras you use so dependency resolution stays minimal.

Quick example

from pandaprobe.wrappers import wrap_openai
from openai import OpenAI

client = wrap_openai(OpenAI())
# Use client exactly as before — all calls are traced automatically
Wrappers work seamlessly with manual instrumentation. If a wrapper call happens inside a pandaprobe.start_trace() or @pandaprobe.trace context, the LLM span is automatically nested as a child span.

Provider guides

OpenAI

Chat Completions and Responses API, streaming, and tool spans

Anthropic

Messages API, streaming patterns, and extended thinking

Google Gemini

generate_content, async, streaming, and thinking mode