This page describes the tracing data model: what a trace and span contain, how status and kinds are interpreted, and how input/output are represented for LLM workloads.
Think of a trace as the outer envelope for one logical run, and spans as the tree of steps inside it. Parent/child links preserve causality and nesting depth for UIs and exports.

Traces

A trace represents a single end-to-end operation in your application—typically one user request or agent run. Every trace has:
  • trace_id — auto-generated UUID
  • name — a descriptive name (for example, customer-support-agent)
  • status — one of: PENDING, RUNNING, COMPLETED, ERROR
  • input / output — the request and response data
  • session_id / user_id — for grouping related traces
  • tags — arbitrary string labels
  • metadata — arbitrary key-value pairs
  • environment / release — from SDK configuration
  • spans — ordered list of spans (maximum 500 per trace)
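Taken together, the fields above give a trace a shape like the following sketch. This is a hypothetical record for illustration only (the example values such as `session-123` and `v1.4.2` are assumed); the real SDK constructs and populates this object for you.

```python
import uuid

# Hypothetical trace record mirroring the field list above.
trace = {
    "trace_id": str(uuid.uuid4()),            # auto-generated UUID
    "name": "customer-support-agent",
    "status": "COMPLETED",                    # PENDING | RUNNING | COMPLETED | ERROR
    "input": {"messages": [{"role": "user", "content": "Hello!"}]},
    "output": {"messages": [{"role": "assistant", "content": "Hi there!"}]},
    "session_id": "session-123",              # assumed example value
    "user_id": "user-456",                    # assumed example value
    "tags": ["support", "beta"],
    "metadata": {"region": "eu-west-1"},
    "environment": "production",              # from SDK configuration
    "release": "v1.4.2",                      # from SDK configuration
    "spans": [],                              # ordered, at most 500 per trace
}
```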

Spans

A span represents a single unit of work within a trace. Spans can be nested to form a tree. Every span has:
  • span_id — auto-generated UUID
  • parent_span_id — links to the parent span (null for root spans)
  • name — descriptive name
  • kind — categorizes the type of work (see SpanKind below)
  • status — one of: UNSET, OK, or ERROR
  • input / output — span-level data
  • model — the LLM model name (for LLM spans)
  • token_usage — dict with prompt_tokens, completion_tokens, total_tokens, and optionally reasoning_tokens, cache_read_tokens
  • model_parameters — dict with temperature, top_p, max_tokens, and similar fields
  • cost — dict with total and optionally other breakdowns
  • completion_start_time — timestamp of first token (time-to-first-token)
  • metadata — arbitrary key-value pairs
  • error — error message string if the span failed
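As a concrete sketch, an LLM span carrying these fields might look like the dict below. The model name, timestamps, and costs are invented example values, not output from any real call; the field names follow the list above.

```python
import uuid

# Hypothetical LLM span record mirroring the field list above.
span = {
    "span_id": str(uuid.uuid4()),             # auto-generated UUID
    "parent_span_id": None,                   # None => this is a root span
    "name": "generate-answer",
    "kind": "LLM",
    "status": "OK",                           # UNSET | OK | ERROR
    "input": {"messages": [{"role": "user", "content": "Hello!"}]},
    "output": {"messages": [{"role": "assistant", "content": "Hi there!"}]},
    "model": "example-model",                 # assumed example value
    "token_usage": {"prompt_tokens": 12, "completion_tokens": 8, "total_tokens": 20},
    "model_parameters": {"temperature": 0.2, "top_p": 1.0, "max_tokens": 256},
    "cost": {"total": 0.00042},               # assumed example value
    "completion_start_time": "2024-01-01T00:00:00Z",  # time-to-first-token
    "metadata": {},
    "error": None,                            # set to a message string on failure
}

# Sanity check: total_tokens is the sum of prompt and completion tokens.
usage = span["token_usage"]
assert usage["total_tokens"] == usage["prompt_tokens"] + usage["completion_tokens"]
```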

SpanKind

  • LLM — A call to a language model API
  • TOOL — A tool or function call execution
  • AGENT — An autonomous agent step or sub-agent
  • CHAIN — A chain or pipeline of operations
  • RETRIEVER — A retrieval operation (for example, vector search)
  • EMBEDDING — An embedding generation call
  • OTHER — Any other operation
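For reference, the set of kinds above can be modeled as a simple string-valued enumeration. This is a sketch of the values only; the SDK's actual class name and module path are not shown on this page.

```python
from enum import Enum

# Sketch of the SpanKind values listed above (class name assumed).
class SpanKind(str, Enum):
    LLM = "LLM"              # call to a language model API
    TOOL = "TOOL"            # tool or function call execution
    AGENT = "AGENT"          # autonomous agent step or sub-agent
    CHAIN = "CHAIN"          # chain or pipeline of operations
    RETRIEVER = "RETRIEVER"  # retrieval operation, e.g. vector search
    EMBEDDING = "EMBEDDING"  # embedding generation call
    OTHER = "OTHER"          # anything else
```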

SpanStatusCode

  • UNSET — Status not explicitly set (default)
  • OK — Operation completed successfully
  • ERROR — Operation failed
On clean exit, spans auto-promote from UNSET to OK. On exception, status is set to ERROR.

TraceStatus

  • PENDING — Trace created but not started
  • RUNNING — Trace is actively executing
  • COMPLETED — Trace finished successfully
  • ERROR — Trace failed with an error

Input/Output convention

LLM spans use a standard messages format:
{
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hi there! How can I help?"}
    ]
}
At the trace level, input contains only the last user message and output contains only the last assistant message. This convention is enforced automatically by wrappers, integrations, and the @trace decorator.
The messages format is validated but non-conforming data is accepted with a warning. You can pass arbitrary JSON as input/output if needed.
If a span exits without error and you never set an explicit status, the SDK promotes UNSET to OK. If an exception propagates out of the instrumented region, the span is marked ERROR and error details are attached when available.
Traces typically move PENDING to RUNNING while work is in flight, then COMPLETED on success or ERROR when the trace fails. Final status drives aggregation and alerting in downstream tooling.
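The typical lifecycle above implies a small set of allowed transitions. The table below is assumed from the prose (the SDK manages these transitions internally and may permit others), so treat it as an illustration of the common path rather than a specification.

```python
# Assumed transition table for the typical trace lifecycle described above.
ALLOWED_TRANSITIONS: dict[str, set[str]] = {
    "PENDING": {"RUNNING"},               # created, then work starts
    "RUNNING": {"COMPLETED", "ERROR"},    # in flight, then terminal
    "COMPLETED": set(),                   # terminal
    "ERROR": set(),                       # terminal
}

def advance(current: str, nxt: str) -> str:
    """Move a trace to its next status, rejecting transitions outside the table."""
    if nxt not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"illegal trace transition: {current} -> {nxt}")
    return nxt
```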