## Traces not appearing in the dashboard
Possible causes:

- **Missing API key or project name** — The SDK requires both `PANDAPROBE_API_KEY` and `PANDAPROBE_PROJECT_NAME`. Without them, auto-initialization silently fails and no traces are sent.
- **Wrong endpoint** — If you use a self-hosted instance, ensure `PANDAPROBE_ENDPOINT` points to your server.
- **SDK disabled** — Check that `PANDAPROBE_ENABLED` is not set to `false`.
- **Short-lived script** — The background transport thread may not have time to flush before the process exits. Call `pandaprobe.flush()` before exiting.
- **Queue full** — If you generate traces faster than they can be sent, older items are dropped. Increase `PANDAPROBE_MAX_QUEUE_SIZE` or `PANDAPROBE_BATCH_SIZE`.
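For example, the minimum configuration can be exported before starting your app (the values shown are placeholders):

```shell
# Both are required for auto-initialization; without them the SDK
# silently sends nothing.
export PANDAPROBE_API_KEY="pk-..."
export PANDAPROBE_PROJECT_NAME="my-project"

# Self-hosted deployments only: point the SDK at your own server.
export PANDAPROBE_ENDPOINT="https://pandaprobe.internal.example.com"
```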
## ValueError: PandaProbe API key is required
This error occurs when:

- `PANDAPROBE_ENABLED` is `true` (the default)
- but `PANDAPROBE_API_KEY` is not set

Solutions:

- Set the `PANDAPROBE_API_KEY` environment variable
- Pass `api_key=` to `pandaprobe.init()`
- Set `PANDAPROBE_ENABLED=false` if you don't want tracing in this environment
## RuntimeError: No PandaProbe client available
This error occurs when calling `pandaprobe.start_trace()` without a configured client. The other convenience functions (`flush`, `shutdown`, `score`) silently no-op instead.

Solutions:

- Set the `PANDAPROBE_API_KEY` and `PANDAPROBE_PROJECT_NAME` environment variables
- Call `pandaprobe.init()` before using `start_trace()`
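A minimal sketch of explicit initialization. Only `api_key=` is documented above; the `project_name=` keyword and the return value of `start_trace()` are assumptions:

```python
import pandaprobe

# Configure the client before any tracing call; without this (or the
# two environment variables), start_trace() raises RuntimeError.
pandaprobe.init(
    api_key="pk-...",           # documented: api_key= keyword
    project_name="my-project",  # assumption: mirrors PANDAPROBE_PROJECT_NAME
)

trace = pandaprobe.start_trace("checkout-flow")  # now succeeds
```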
## Traces are incomplete or missing spans
- Ensure all wrappers are applied before making API calls: `client = wrap_openai(OpenAI())`
- For integrations, call `adapter.instrument()` once at startup, before using the framework
- For LangGraph, ensure the handler is passed in `config={"callbacks": [handler]}` for every invocation
- Check for exceptions in span code — exceptions set the span status to ERROR, but the trace still completes
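The wrapper-ordering rule can be sketched as follows; the `from pandaprobe import wrap_openai` path is an assumption (only the `wrap_openai` name appears above):

```python
from openai import OpenAI
from pandaprobe import wrap_openai  # assumption: import path for the wrapper

# Wrap once, at startup, before any completion call; requests made
# through an unwrapped client never produce spans.
client = wrap_openai(OpenAI())

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "hello"}],
)
```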
## How to enable debug logging
Set the `pandaprobe` logger to the DEBUG level. You'll see detailed logs about initialization, span creation, transport batching, HTTP requests, and retry behavior. You can configure the logger directly:
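Assuming the SDK logs through the standard `logging` module under the `pandaprobe` logger name mentioned above, plain stdlib configuration is enough:

```python
import logging

# Send log records to stderr with a timestamped format, then raise
# only the SDK's logger to DEBUG so other libraries stay quiet.
logging.basicConfig(
    format="%(asctime)s %(name)s %(levelname)s %(message)s",
    level=logging.WARNING,
)
logging.getLogger("pandaprobe").setLevel(logging.DEBUG)
```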
## Flushing in short-lived scripts
For scripts, notebooks, or Lambda functions that exit quickly, call `pandaprobe.flush()` before the process exits. For long-running services (web servers, workers), flushing is handled automatically by the background thread and the `atexit` handler.
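A minimal sketch for the short-lived case, using the `pandaprobe.flush()` call described above (the tracing work inside `main()` is illustrative):

```python
import pandaprobe  # auto-initializes from environment variables

def main() -> None:
    # ... create traces and spans here ...
    ...

if __name__ == "__main__":
    try:
        main()
    finally:
        # Block until the background transport thread drains its queue,
        # so spans created just before exit are not lost.
        pandaprobe.flush()
```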
## Queue full — dropping oldest item
This warning appears when the send queue reaches `PANDAPROBE_MAX_QUEUE_SIZE` (default: 1000). The oldest item is dropped to make room.

Solutions:

- Increase `PANDAPROBE_MAX_QUEUE_SIZE`
- Increase `PANDAPROBE_BATCH_SIZE` for faster throughput
- Decrease `PANDAPROBE_FLUSH_INTERVAL` for more frequent flushes
- Check network connectivity to the PandaProbe backend
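For example, the queue can be tuned through the variables above. The numbers are illustrative; only the 1000-item queue default is documented, and the units of `PANDAPROBE_FLUSH_INTERVAL` are an assumption:

```shell
export PANDAPROBE_MAX_QUEUE_SIZE=5000  # default: 1000
export PANDAPROBE_BATCH_SIZE=100       # larger batches, fewer requests
export PANDAPROBE_FLUSH_INTERVAL=1     # assumption: seconds between flushes
```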
