## Documentation Index
Fetch the complete documentation index at: https://docs.pandaprobe.com/llms.txt
Use this file to discover all available pages before exploring further.
## Installation

```bash
pip install "pandaprobe[crewai]"
# or, with uv:
uv add "pandaprobe[crewai]"
```
## Setup

```python
from pandaprobe.integrations.crewai import CrewAIAdapter

adapter = CrewAIAdapter(
    session_id="conversation-123",
    user_id="user-abc",
    tags=["production"],
)
adapter.instrument()
```
We recommend using UUIDs for `session_id` and `user_id` so that traces can be grouped reliably across runs.
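A minimal sketch of generating such identifiers; the variable names are illustrative, and in practice `user_id` would usually come from your auth system rather than be generated per run:

```python
import uuid

# One UUID per conversation; reuse the same value for every adapter
# that belongs to the same logical session.
session_id = str(uuid.uuid4())
user_id = str(uuid.uuid4())  # illustrative; normally taken from your auth system

print(session_id)
```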
## Usage

```python
from crewai import Agent, Task, Crew

researcher = Agent(
    role="Research Analyst",
    goal="Find the latest AI trends",
    backstory="You are an expert AI researcher.",
)
task = Task(
    description="Research the latest developments in AI agents.",
    expected_output="A summary of key trends.",
    agent=researcher,
)
crew = Crew(agents=[researcher], tasks=[task])
result = crew.kickoff()
```
## What gets traced

| CrewAI Component | Span Kind | Description |
|---|---|---|
| `Crew.kickoff` / `kickoff_for_each` | CHAIN | Root trace boundary; crew configuration as system message |
| `Agent.execute_task` | AGENT | Agent backstory as system message; prior agent context |
| LLM completion calls | LLM | Token delta tracking (per-call usage); reasoning extraction |
| `execute_tool_and_check_finality` | TOOL | Tool name from `AgentAction`; input/output |
## Multi-agent support
In multi-agent crews, PandaProbe tracks context propagation between agents. Each agent’s output is made available as context for subsequent agents, and this is reflected in the trace hierarchy.
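The propagation described above can be pictured with a toy model; this is an illustration of the behavior, not PandaProbe or CrewAI code:

```python
# Toy model of sequential context propagation: each agent receives the
# accumulated outputs of the agents that ran before it.

def run_sequential(agents):
    """Run (name, fn) pairs in order; each fn sees prior outputs as context."""
    context, outputs = [], []
    for name, agent_fn in agents:
        out = agent_fn(list(context))  # prior outputs passed as context
        context.append(out)            # this output becomes context downstream
        outputs.append((name, out))
    return outputs

agents = [
    ("researcher", lambda ctx: "trends"),
    ("analyst", lambda ctx: f"analysis of {len(ctx)} prior output(s)"),
    ("writer", lambda ctx: f"article from {len(ctx)} prior output(s)"),
]
print(run_sequential(agents))
# [('researcher', 'trends'), ('analyst', 'analysis of 1 prior output(s)'),
#  ('writer', 'article from 2 prior output(s)')]
```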
## Token usage
CrewAI token usage is computed using delta tracking: the adapter snapshots the agent's cumulative `_token_usage` before and after each LLM call to determine per-call token counts.
| CrewAI Field | PandaProbe Field |
|---|---|
| `prompt_tokens` | `prompt_tokens` |
| `completion_tokens` | `completion_tokens` |
| `total_tokens` | `total_tokens` |
| `cached_prompt_tokens` | `cache_read_tokens` |
| `reasoning_tokens` | `reasoning_tokens` |
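A minimal sketch of both steps, delta tracking followed by field renaming; the function names and dict-based counters are illustrative, not PandaProbe's actual internals:

```python
# Illustrative: per-call usage via delta tracking, then renaming CrewAI
# usage fields to their PandaProbe equivalents (per the table above).

CREWAI_TO_PANDAPROBE = {
    "prompt_tokens": "prompt_tokens",
    "completion_tokens": "completion_tokens",
    "total_tokens": "total_tokens",
    "cached_prompt_tokens": "cache_read_tokens",
    "reasoning_tokens": "reasoning_tokens",
}

def token_delta(before: dict, after: dict) -> dict:
    """Per-call usage = cumulative counters after the call minus before."""
    return {k: after.get(k, 0) - before.get(k, 0) for k in after}

def to_pandaprobe(usage: dict) -> dict:
    """Rename CrewAI usage fields to PandaProbe field names."""
    return {CREWAI_TO_PANDAPROBE[k]: v for k, v in usage.items()
            if k in CREWAI_TO_PANDAPROBE}

# Cumulative counters snapshotted around one LLM call:
before = {"prompt_tokens": 120, "completion_tokens": 40,
          "total_tokens": 160, "cached_prompt_tokens": 30}
after = {"prompt_tokens": 200, "completion_tokens": 75,
         "total_tokens": 275, "cached_prompt_tokens": 50}

print(to_pandaprobe(token_delta(before, after)))
# {'prompt_tokens': 80, 'completion_tokens': 35,
#  'total_tokens': 115, 'cache_read_tokens': 20}
```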
## Example with multiple agents

This example sets up a three-agent crew (researcher → analyst → writer). It traces the agents via `CrewAIAdapter` and wraps the run in a `pandaprobe.session` for trace grouping:
```python
import uuid

from crewai import Agent, Crew, LLM, Task

import pandaprobe
from pandaprobe.integrations.crewai import CrewAIAdapter

SESSION_ID = str(uuid.uuid4())
USER_ID = "user_1"


def main():
    adapter = CrewAIAdapter(
        user_id=USER_ID,
        tags=["multi-agent", "example"],
    )
    adapter.instrument()

    llm = LLM(model="gemini/gemini-3.1-flash-lite-preview", reasoning_effort="low")

    researcher = Agent(
        role="Senior Researcher",
        goal="Research and identify the most important trends in the given topic",
        backstory="You are an expert researcher who excels at finding key insights.",
        llm=llm,
    )
    analyst = Agent(
        role="Data Analyst",
        goal="Analyze research findings and identify actionable insights",
        backstory="You are a skilled analyst who turns raw research into clear conclusions.",
        llm=llm,
    )
    writer = Agent(
        role="Content Writer",
        goal="Write a compelling article based on research and analysis",
        backstory="You are a talented writer who creates engaging content from technical material.",
        llm=llm,
    )

    research_task = Task(
        description="Research the latest trends in renewable energy for 2026.",
        expected_output="A list of 5 key trends with brief descriptions.",
        agent=researcher,
    )
    analysis_task = Task(
        description="Analyze the research findings and identify the top 3 most impactful trends.",
        expected_output="A ranked list of the top 3 trends with impact analysis.",
        agent=analyst,
    )
    writing_task = Task(
        description="Write a short article summarizing the top renewable energy trends for 2026.",
        expected_output="A 200-word article suitable for a tech blog.",
        agent=writer,
    )

    crew = Crew(
        agents=[researcher, analyst, writer],
        tasks=[research_task, analysis_task, writing_task],
        verbose=True,
    )

    print(f"Session: {SESSION_ID}\n")
    with pandaprobe.session(SESSION_ID):
        result = crew.kickoff()
    print(f"\nResult:\n{result.raw}")

    pandaprobe.flush()
    pandaprobe.shutdown()
    print(f"\nTrace sent to PandaProbe backend (session={SESSION_ID}).")


if __name__ == "__main__":
    main()
```
Each agent run is captured as an AGENT span nested under the root CHAIN, with LLM calls and any tool use recorded beneath the corresponding agent.