Agent API reference

Verified against crewai==1.14.3a2 (source: crewai/agent/core.py, crewai/agents/agent_builder/base_agent.py). Every import, field, and default below was read from the installed package.

Agent is a Pydantic model: most fields are inherited from BaseAgent, and Agent adds its own on top. Any unknown kwarg raises a validation error — this page lists the real fields.

from crewai import Agent, Task, Crew, LLM

analyst = Agent(
    role="Research Analyst",
    goal="Surface the strongest 3 claims about {topic}",
    backstory="Senior analyst at an investigative newsroom.",
    llm=LLM(model="openai/gpt-4o-mini"),
    verbose=True,
)

task = Task(
    description="Produce a bullet list on {topic}.",
    expected_output="3 bullets, each citing a source.",
    agent=analyst,
)

result = Crew(agents=[analyst], tasks=[task]).kickoff(inputs={"topic": "RAG evaluation"})
print(result.raw)

role, goal, and backstory are the only truly required fields. Everything else has a default.

Fields defined on BaseAgent — the ones you’ll touch most:

| Field | Type | Default | Notes |
| --- | --- | --- | --- |
| role | str | required | Role name; also used as the knowledge collection name. |
| goal | str | required | Objective. {placeholders} are interpolated from crew inputs. |
| backstory | str | required | Persona text. {placeholders} are also interpolated. |
| llm | str \| BaseLLM \| None | None | Model name ("openai/gpt-4o") or an LLM(...) / BaseLLM instance. None uses the default from crewai.llm. |
| function_calling_llm | str \| BaseLLM \| None | None | Overrides the tool-calling LLM only (the agent keeps llm for reasoning). |
| tools | list[BaseTool] \| None | [] | Tools the agent may call. See the tools page. |
| max_iter | int | 25 | Hard cap on reasoning iterations per task. |
| max_rpm | int \| None | None | Requests-per-minute throttle for this agent. |
| max_tokens | int \| None | None | Max tokens per LLM call. |
| max_execution_time | int \| None | None | Wall-clock seconds per task. |
| verbose | bool | False | Print the agent's inner monologue. |
| allow_delegation | bool | False | Let the agent call Delegate Work / Ask Question tools. |
| cache | bool | True | Cache tool-call results. |
| respect_context_window | bool | True | Summarise messages to stay under the model's context limit. |
| max_retry_limit | int | 2 | Retries on task-level errors. |
| use_system_prompt | bool \| None | True | Whether to set a system message (some o1-style models want False). |
| inject_date | bool | False | Prepend the current date to the task description. |
| date_format | str | "%Y-%m-%d" | Format used when inject_date=True. |
| planning | bool | False | Enable planning (reflection) before execution. |
| planning_config | PlanningConfig \| None | None | Fine-tune planning (max attempts, LLM override). |
| embedder | EmbedderConfig \| None | None | Embedder for the agent's own knowledge collection. |
| knowledge_sources | list[BaseKnowledgeSource] \| None | None | Per-agent knowledge (see the Knowledge page). |
| knowledge_config | KnowledgeConfig \| None | None | Limits/threshold for knowledge retrieval. |
| mcps | list[str \| MCPServerConfig] \| None | None | MCP server references: URL, bare slug, or config object. "server#tool" filters to one tool. |
| apps | list[PlatformAppOrAction] \| None | None | CrewAI Platform app integrations (e.g. "gmail", "gmail/send_email"). |
| a2a | A2AConfig \| A2AClientConfig \| A2AServerConfig \| list[A2AConfig \| A2AClientConfig \| A2AServerConfig] \| None | None | Agent-to-Agent config (remote agents). |
| guardrail | str \| GuardrailType \| None | None | Validator applied to agent output. |
| guardrail_max_retries | int | 3 | Retries when the guardrail fails. |
| step_callback | Callable \| None | None | Called after every agent step. |
| system_template / prompt_template / response_template | str \| None | None | Override the built-in prompt strings. |
| checkpoint | CheckpointConfig \| bool \| None | None | Opt the agent into checkpointing. |
| executor_class | type[CrewAgentExecutor] \| type[AgentExecutor] | CrewAgentExecutor | Swap in the experimental AgentExecutor (ReAct-style). |
| from_repository | str \| None | None | Load base config from the user's CrewAI repository. |
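
The {placeholders} in goal, backstory, and task descriptions are filled from kickoff(inputs=...). Conceptually this is plain str.format substitution; a minimal sketch of the idea, not the crewai internals:

```python
# Conceptual sketch: how kickoff(inputs=...) fills {placeholders}.
# crewai does this internally; plain str.format shows the idea.
goal = "Surface the strongest 3 claims about {topic}"
inputs = {"topic": "RAG evaluation"}

interpolated_goal = goal.format(**inputs)
print(interpolated_goal)
# Surface the strongest 3 claims about RAG evaluation
```

The same substitution applies to Task.description, which is why the quickstart's "{topic}" appears in both places.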

The installed source marks these as deprecated — don’t introduce them in new code:

  • allow_code_execution / code_execution_mode — CodeInterpreterTool was removed. Call out to E2B / Modal / your own sandbox.
  • multimodal — pass files natively via Task.input_files.
  • reasoning, max_reasoning_attempts — superseded by planning + planning_config.

Setting any of these still works but emits a DeprecationWarning.

from crewai import Agent
from crewai.tools import tool

@tool("lookup_order")
def lookup_order(order_id: str) -> str:
    """Fetch order status from the orders DB."""
    return f"order {order_id}: shipped"

support = Agent(
    role="Support",
    goal="Answer order questions accurately",
    backstory="Level-2 support with DB access.",
    tools=[lookup_order],
    allow_delegation=False,
)
  • tools=[] + allow_delegation=True in a hierarchical crew still lets the agent call the manager’s delegation tools.
  • Set cache=False on an agent whose tool outputs change between calls (live APIs, RNG).
  • Use max_usage_count on individual tools to cap how often the agent can call them.
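The per-tool cap can be pictured as a counter checked before each invocation. A hypothetical stand-in to show the semantics (UsageCappedTool and ToolUsageLimitExceeded are illustrative names, not crewai API):

```python
class ToolUsageLimitExceeded(RuntimeError):
    """Raised when a capped tool is called too many times (illustrative)."""

class UsageCappedTool:
    # Hypothetical stand-in for a tool with max_usage_count;
    # shows the cap semantics, not the crewai implementation.
    def __init__(self, fn, max_usage_count):
        self.fn = fn
        self.max_usage_count = max_usage_count
        self.calls = 0

    def run(self, *args, **kwargs):
        if self.calls >= self.max_usage_count:
            raise ToolUsageLimitExceeded(f"tool already used {self.calls} times")
        self.calls += 1
        return self.fn(*args, **kwargs)

capped = UsageCappedTool(lambda order_id: f"order {order_id}: shipped", max_usage_count=2)
print(capped.run("A1"))  # order A1: shipped
print(capped.run("A2"))  # order A2: shipped
# a third capped.run(...) raises ToolUsageLimitExceeded
```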
from crewai import Agent, PlanningConfig, LLM

planner_llm = LLM(model="openai/gpt-4o")  # smarter model, used only for the planning step

researcher = Agent(
    role="Researcher",
    goal="Gather the strongest 3 sources on {topic}",
    backstory="Veteran librarian.",
    llm=LLM(model="openai/gpt-4o-mini"),
    planning=True,
    planning_config=PlanningConfig(max_attempts=3, llm=planner_llm),
)
  • planning=True with no planning_config uses defaults.
  • planning_config.llm lets you swap a smarter model in for the reflection step while the executor stays cheap.
  • planning_enabled is a property — it’s True if either planning or planning_config is set.
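The last bullet reduces to a one-line disjunction; a sketch of the rule as stated above, not the crewai source:

```python
# planning_enabled is True if either flag is set (rule as documented above).
def planning_enabled(planning: bool, planning_config) -> bool:
    return planning or planning_config is not None

print(planning_enabled(False, None))      # False
print(planning_enabled(True, None))       # True
print(planning_enabled(False, object()))  # True
```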
from crewai import Agent
from crewai.tasks.task_output import TaskOutput

def no_pii(output: TaskOutput) -> tuple[bool, str]:
    if "ssn" in output.raw.lower():
        return False, "Output contains what looks like an SSN — rewrite without it."
    return True, output.raw

writer = Agent(
    role="Writer",
    goal="Draft blog posts",
    backstory="Seasoned content editor.",
    guardrail=no_pii,
    guardrail_max_retries=2,
)
  • guardrail accepts a (TaskOutput) -> tuple[bool, Any] callable or a string describing the check (the agent itself runs the check).
  • On failure the agent re-executes up to guardrail_max_retries with the failure reason injected into the prompt.
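The retry contract can be sketched as a plain loop: execute, validate, and on failure feed the reason into the next attempt. Everything below is an illustrative stand-in, not crewai internals:

```python
def run_with_guardrail(execute, guardrail, max_retries):
    """Sketch of the guardrail retry contract (illustrative)."""
    feedback = None
    for _ in range(max_retries + 1):
        output = execute(feedback)
        ok, value = guardrail(output)
        if ok:
            return value
        feedback = value  # failure reason goes back into the next prompt
    raise RuntimeError(f"guardrail still failing after {max_retries} retries: {feedback}")

# Toy executor: leaks an SSN on the first try, clean once it gets feedback.
def execute(feedback):
    return "ssn 123-45-6789" if feedback is None else "clean summary"

def no_pii(raw):
    if "ssn" in raw.lower():
        return False, "contains an SSN, rewrite without it"
    return True, raw

print(run_with_guardrail(execute, no_pii, max_retries=2))  # clean summary
```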
from crewai import Agent
from crewai.knowledge.source.pdf_knowledge_source import PDFKnowledgeSource

research = Agent(
    role="Legal Research",
    goal="Summarise clauses from the supplied statutes",
    backstory="Lawyer with 15 years of contract experience.",
    knowledge_sources=[PDFKnowledgeSource(file_paths=["gdpr.pdf"])],
)

Knowledge is separate from memory: it’s an immutable vector store attached to the agent (or crew). See the Knowledge page for the full list of sources.

from crewai import Agent
from crewai.mcp import MCPServerStdio, MCPServerHTTP

agent = Agent(
    role="DevOps",
    goal="Inspect infra",
    backstory="SRE.",
    mcps=[
        MCPServerStdio(command="uvx", args=["mcp-server-fetch"]),
        MCPServerHTTP(url="https://tools.example.com/mcp", headers={"Authorization": "Bearer …"}),
        "notion",  # bare slug → CrewAI Platform integration
        "https://tools.example.com/mcp#search",  # single-tool filter
    ],
)
  • Stdio / HTTP / SSE are the three transports exposed in crewai.mcp.
  • tool_filter= on the config object lets you allow-list tools without a #tool suffix.
  • cache_tools_list=True speeds up subsequent runs after discovery.
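The "server#tool" shorthand is just a suffix on the reference string. A sketch of the convention, not the actual crewai parser:

```python
def split_mcp_ref(ref: str):
    # "url-or-slug#tool" -> (server, tool); no "#" means no filter.
    server, sep, tool = ref.partition("#")
    return server, (tool if sep else None)

print(split_mcp_ref("https://tools.example.com/mcp#search"))
# ('https://tools.example.com/mcp', 'search')
print(split_mcp_ref("notion"))
# ('notion', None)
```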
from crewai import Agent
from crewai.a2a.config import A2AClientConfig

remote = Agent(
    role="Index Lookups",
    goal="Look up internal knowledge base",
    backstory="Specialised retrieval agent.",
    a2a=A2AClientConfig(
        agent_card_url="https://kb.example.com/.well-known/agent.json",
    ),
)

Pass a single client config, a single A2AServerConfig, or a list combining several client configs with one server config; including a server config exposes this agent as an A2A endpoint.

def on_step(step):
    print(f"[{step.agent.role}] {step.tool} -> {step.observation[:80]}")

agent = Agent(role="...", goal="...", backstory="...", step_callback=on_step)

step_callback fires after each reasoning step. For crew-wide callbacks, use Crew(step_callback=...) — the crew-level one runs for every agent.

analyst_a = Agent(role="Market Analyst", ...)
analyst_b = Agent(role="Tech Analyst", ...)
task_a = Task(description="...", expected_output="...", agent=analyst_a, async_execution=True)
task_b = Task(description="...", expected_output="...", agent=analyst_b, async_execution=True)
summary = Task(description="Merge", expected_output="...", agent=editor, context=[task_a, task_b])

Async tasks at the same level run concurrently; context=[...] forces the summariser to wait for both.
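The scheduling model matches plain asyncio: independent coroutines run concurrently, and the dependent step awaits both. An analogue of the crew above, with illustrative names:

```python
import asyncio

async def analyse(name: str) -> str:
    await asyncio.sleep(0.01)  # stands in for an LLM-backed task
    return f"{name} findings"

async def main() -> str:
    # task_a / task_b run concurrently (async_execution=True);
    # the merge step waits on both (context=[task_a, task_b]).
    a, b = await asyncio.gather(analyse("market"), analyse("tech"))
    return f"summary of: {a} + {b}"

print(asyncio.run(main()))
# summary of: market findings + tech findings
```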

Keep the reasoning model cheap, use a bigger model only for the planning reflection:

Agent(
    role="Doc Writer",
    goal="...",
    backstory="...",
    llm=LLM(model="openai/gpt-4o-mini"),
    planning=True,
    planning_config=PlanningConfig(llm=LLM(model="openai/gpt-4o")),
)

manager = Agent(
    role="Manager",
    goal="Break the problem down and route tasks",
    backstory="Skilled at decomposition.",
    allow_delegation=True,
    llm=LLM(model="openai/gpt-4o"),
)

crew = Crew(agents=[manager, researcher, writer], tasks=[...], process=Process.hierarchical, manager_agent=manager)
crew = Crew(agents=[manager, researcher, writer], tasks=[...], process=Process.hierarchical, manager_agent=manager)

Don’t set both manager_llm and manager_agent — pick one. manager_agent gives you full control over prompts.

Agent(role="Web Scraper", goal="...", backstory="...", max_rpm=20)

max_rpm is enforced before each LLM call; max_execution_time covers the whole task wall-clock.
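A requests-per-minute throttle reduces to counting calls in the current one-minute window and sleeping until it resets. A minimal sketch of that logic; not necessarily how crewai implements max_rpm:

```python
import time

class RpmThrottle:
    """Minimal fixed-window RPM limiter (illustrative sketch)."""

    def __init__(self, max_rpm: int, clock=time.monotonic, sleep=time.sleep):
        self.max_rpm = max_rpm
        self.clock = clock
        self.sleep = sleep
        self.window_start = clock()
        self.count = 0

    def acquire(self) -> None:
        now = self.clock()
        if now - self.window_start >= 60:
            self.window_start, self.count = now, 0  # new minute window
        if self.count >= self.max_rpm:
            self.sleep(60 - (now - self.window_start))  # wait out the window
            self.window_start, self.count = self.clock(), 0
        self.count += 1

throttle = RpmThrottle(max_rpm=20)
throttle.acquire()  # call before each LLM request
```

Injecting clock and sleep keeps the sketch testable without real waiting.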

from crewai import Agent, CheckpointConfig

slow = Agent(
    role="Heavy Research",
    goal="...",
    backstory="...",
    checkpoint=CheckpointConfig(on_events=["task_completed", "llm_call_completed"]),
)

Other agents in the same crew can leave checkpoint=None (the crew’s setting then applies).

  • Field name is planning, not reasoning. The reasoning=... kwarg still works but is deprecated.
  • function_calling_llm isn’t the agent’s primary LLM. It’s used only for tool-call formatting on models that need a separate call for JSON-schema arguments.
  • allow_code_execution is a no-op. CodeInterpreterTool was removed from the public API in 1.14; use E2B or Modal.
  • knowledge_sources needs an embedder. If neither the agent’s embedder nor the crew’s is set, the default OpenAIEmbeddingFunction tries to reach OpenAI and needs OPENAI_API_KEY.
  • from_repository merges settings from your CrewAI repo on top of the explicit kwargs you passed — explicit values win.
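The precedence rule in the last bullet is ordinary dict-merge precedence: repository config forms the base, and explicit kwargs override it. A sketch with made-up values:

```python
# Sketch of the from_repository rule: repo config is the base,
# explicit kwargs win (values here are made up for illustration).
repo_config = {"llm": "openai/gpt-4o", "verbose": False, "max_iter": 25}
explicit_kwargs = {"verbose": True}

effective = {**repo_config, **explicit_kwargs}  # later dict wins on conflicts
print(effective["verbose"])  # True
print(effective["llm"])      # openai/gpt-4o
```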