Traces and spans are persisted and indexed for search, filtering, and evaluation. Instrumentation is opt-in per deploy via environment configuration; when tracing is disabled, the SDK avoids network I/O and mutation.
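The disabled path amounts to a cheap guard at the export boundary: nothing is serialized, queued, or sent. A minimal sketch, assuming a hypothetical environment variable name (`PANDAPROBE_TRACING` is invented here for illustration; check your deployment configuration for the real key):

```python
import os

def tracing_enabled(env=None):
    """Return True when tracing is switched on for this deploy.

    PANDAPROBE_TRACING is a hypothetical variable name used for
    illustration, not the SDK's documented configuration key.
    """
    env = os.environ if env is None else env
    return env.get("PANDAPROBE_TRACING", "false").lower() in ("1", "true", "yes")

def record_span(span, env=None):
    # When tracing is disabled, return before any serialization,
    # queueing, or network I/O happens.
    if not tracing_enabled(env):
        return
    ...  # serialize and enqueue the span for export
```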
Layer 1 — Wrappers (zero-code LLM tracing)
Wrap an LLM provider client to automatically trace every API call. The wrapper returns the same client type as the underlying SDK, so no refactors are needed beyond the wrap call.
- `wrap_openai(client)` — Chat Completions API and Responses API
- `wrap_gemini(client)` — `generate_content` and `generate_content_stream`
- `wrap_anthropic(client)` — `messages.create` and `messages.stream`
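A wrapper of this kind is typically a thin in-place patch that returns the very client object it was given, which is why no refactor is needed. The sketch below uses a dummy client and a generic `wrap_client` helper, both invented here, to show the pattern; pandaprobe's actual wrappers cover the provider endpoints listed above and export spans rather than appending to a list.

```python
import functools
import time

class _Completions:
    """Stand-in for a provider's chat.completions endpoint."""
    def create(self, model, messages):
        return {"model": model, "content": "ok"}

class _Chat:
    def __init__(self):
        self.completions = _Completions()

class DummyClient:
    """Stand-in for an SDK client such as openai.OpenAI()."""
    def __init__(self):
        self.chat = _Chat()

RECORDED_SPANS = []  # the real SDK would export spans, not keep a list

def _traced(fn, span_name):
    """Wrap one endpoint method so every call records a span."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        result = fn(*args, **kwargs)
        RECORDED_SPANS.append({
            "name": span_name,
            "duration_s": time.monotonic() - start,
            "inputs": kwargs,
        })
        return result
    return wrapper

def wrap_client(client):
    """Patch known endpoint methods in place; return the same client object."""
    ep = client.chat.completions
    ep.create = _traced(ep.create, "chat.completions.create")
    return client  # same type as the underlying SDK client

client = wrap_client(DummyClient())
resp = client.chat.completions.create(
    model="demo-model",
    messages=[{"role": "user", "content": "hi"}],
)
```

Because the patched methods keep the original signatures, existing call sites work unchanged after the wrap.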
Layer 2 — Integrations (automatic agent framework tracing)
Hook into an agent framework to trace the full lifecycle: LLM calls, tool invocations, sub-agent handoffs, guardrails, and related steps—without instrumenting each call yourself. Supported stacks include LangGraph, Google ADK, Claude Agent SDK, CrewAI, and OpenAI Agents SDK.
Layer 3 — Manual instrumentation (full control)
Define exactly what gets traced, how spans are named, and what metadata you attach.
- `@pandaprobe.trace` and `@pandaprobe.span` decorators
- `pandaprobe.start_trace()` and `t.span()` context managers
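The decorator and context-manager shapes can be illustrated with minimal stand-ins. The internals below (the module-level `SPANS` list and the `Trace` class) are invented for this sketch; only the call shapes mirror the documented surface: a `@trace` decorator, and `start_trace()` yielding a handle `t` whose `t.span()` opens nested spans with custom names and metadata.

```python
import contextlib
import functools
import time

SPANS = []  # collected span records; the real SDK would export these

def trace(fn):
    """Minimal @trace decorator: records one span per call of fn."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.monotonic()
        try:
            return fn(*args, **kwargs)
        finally:
            SPANS.append({"name": fn.__name__,
                          "metadata": {},
                          "duration_s": time.monotonic() - start})
    return wrapper

class Trace:
    """Handle yielded by start_trace(); t.span() opens named child spans."""
    def __init__(self, name):
        self.name = name

    @contextlib.contextmanager
    def span(self, span_name, **metadata):
        start = time.monotonic()
        try:
            yield
        finally:
            SPANS.append({"name": f"{self.name}/{span_name}",
                          "metadata": metadata,
                          "duration_s": time.monotonic() - start})

@contextlib.contextmanager
def start_trace(name):
    yield Trace(name)

@trace
def summarize(text):
    return text[:10]

with start_trace("pipeline") as t:
    with t.span("retrieve", source="kb"):
        docs = ["doc"]
    summary = summarize("hello world, this is long")
```

Spans are recorded when their scope exits, so nested spans close before the enclosing trace does.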
When to use each layer
- Use Wrappers when you want LLM call visibility with minimal code changes—especially for direct provider usage without a heavy agent framework.
- Use Integrations when you run on a supported agent framework and want end-to-end traces (LLM + tools + orchestration) with consistent semantics.
- Use Manual instrumentation when you need custom span names, kinds, metadata, or you are building your own agent runtime and wrappers do not fit.
