
Quick Start

Get up and running with AI agents in just a few minutes.


👤 Path 1: Simple Single Agent (5 minutes)

Use SmolAgents for the fastest start

```sh
pip install smolagents
```

```python
from smolagents import CodeAgent, InferenceClientModel, tool

@tool
def add_numbers(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

# CodeAgent also needs a model; InferenceClientModel (called HfApiModel in
# older smolagents releases) uses the Hugging Face Inference API by default
agent = CodeAgent(tools=[add_numbers], model=InferenceClientModel())
result = agent.run("What is 15 + 27?")
print(result)
```
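
The `@tool` decorator works because the function carries a typed signature and a docstring the agent can read. A toy sketch of that idea in plain Python (not smolagents' actual implementation):

```python
import inspect

def describe_tool(fn):
    """Build a tool schema from a function's signature and docstring (toy version)."""
    params = {}
    for name, p in inspect.signature(fn).parameters.items():
        # Fall back to "any" when a parameter has no type annotation
        params[name] = p.annotation.__name__ if p.annotation is not inspect.Parameter.empty else "any"
    return {
        "name": fn.__name__,
        "description": inspect.getdoc(fn),
        "parameters": params,
    }

def add_numbers(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

print(describe_tool(add_numbers))
# {'name': 'add_numbers', 'description': 'Add two numbers together.', 'parameters': {'a': 'int', 'b': 'int'}}
```

This is why tools need good docstrings and type hints: the schema is what the LLM sees when deciding which tool to call.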

Next Steps:

  1. Read SmolAgents README
  2. Try more SmolAgents Recipes
  3. Deep dive: SmolAgents Comprehensive Guide

👥 Path 2: Multi-Agent Team (10 minutes)

Use CrewAI for team-based agents

```sh
pip install crewai
```

```python
from crewai import Agent, Task, Crew

# Define agents with different roles
analyst = Agent(
    role="Data Analyst",
    goal="Provide insights from data",
    backstory="You are an experienced data analyst",
)
writer = Agent(
    role="Content Writer",
    goal="Write clear reports",
    backstory="You are a professional writer",
)

# Define tasks; recent CrewAI versions require expected_output,
# and chaining between tasks is expressed via context
analysis_task = Task(
    description="Analyse the Q3 sales data",
    expected_output="Key insights from the Q3 sales data",
    agent=analyst,
)
report_task = Task(
    description="Write a summary report",
    expected_output="A short summary report",
    agent=writer,
    context=[analysis_task],
)

# Create and run crew
crew = Crew(agents=[analyst, writer], tasks=[analysis_task, report_task])
result = crew.kickoff()
print(result)
```

Next Steps:

  1. Read CrewAI README
  2. Try more CrewAI Recipes
  3. Deep dive: CrewAI Comprehensive Guide

🔗 Path 3: Multi-Agent with Handoffs (15 minutes)

Use OpenAI Agents SDK for powerful coordination

```sh
pip install openai-agents
export OPENAI_API_KEY=sk-your-key
```

```python
from agents import Agent, Runner

# Create specialised agents
technical_agent = Agent(
    name="Technical",
    instructions="You are a technical specialist",
)
support_agent = Agent(
    name="Support",
    instructions="You are a customer support specialist; hand off complex technical issues",
    handoffs=[technical_agent],  # agents hand off to each other via this list
)

# Run (Runner.run is async; run_sync wraps it for plain scripts)
result = Runner.run_sync(support_agent, "I'm having a technical issue")
print(result.final_output)
```

Next Steps:

  1. Read OpenAI Agents SDK README
  2. Try more OpenAI Recipes
  3. Deep dive: OpenAI Comprehensive Guide

📚 Path 4: Knowledge Retrieval (RAG) (15 minutes)

Use LlamaIndex for RAG systems

```sh
pip install llama-index
```

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load documents
documents = SimpleDirectoryReader("data").load_data()

# Create index
index = VectorStoreIndex.from_documents(documents)

# Query
query_engine = index.as_query_engine()
response = query_engine.query("What is the capital of France?")
print(response)
```
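
Conceptually, the query engine embeds the question, retrieves the most similar document chunks, and hands them to the LLM as context. A toy keyword-overlap retriever illustrates the retrieval step (real indexes use vector embeddings, not word counts):

```python
import string

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank documents by shared words with the query (toy stand-in for vector similarity)."""
    def words(text: str) -> set[str]:
        return {w.strip(string.punctuation) for w in text.lower().split()}

    q = words(query)
    ranked = sorted(documents, key=lambda d: len(q & words(d)), reverse=True)
    return ranked[:top_k]

docs = [
    "Paris is the capital of France.",
    "The Q3 sales report shows strong growth.",
    "France borders Spain and Germany.",
]
print(retrieve("What is the capital of France?", docs))
```

The top-ranked chunk here is the Paris sentence, which is exactly what gets stuffed into the LLM prompt in a real RAG pipeline.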

Next Steps:

  1. Read LlamaIndex README
  2. Try more LlamaIndex Recipes
  3. Deep dive: LlamaIndex Comprehensive Guide

🌊 Path 5: Complex Workflows (20 minutes)

Use LangGraph for sophisticated control flow

```sh
pip install langgraph langchain-openai
```

```python
from typing import TypedDict

from langgraph.graph import StateGraph

class State(TypedDict):
    messages: list[str]
    current_step: str

def step_1(state: State) -> State:
    state["current_step"] = "completed_step_1"
    return state

def step_2(state: State) -> State:
    state["current_step"] = "completed_step_2"
    return state

# Create graph
builder = StateGraph(State)
builder.add_node("step_1", step_1)
builder.add_node("step_2", step_2)
builder.add_edge("step_1", "step_2")
builder.set_entry_point("step_1")

graph = builder.compile()
result = graph.invoke({"messages": [], "current_step": "start"})
print(result)
```

Next Steps:

  1. Read LangGraph README
  2. Try more LangGraph Recipes
  3. Deep dive: LangGraph Comprehensive Guide

```python
from smolagents import CodeAgent, tool

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    results = "..."  # Implementation goes here
    return results

agent = CodeAgent(tools=[search_web])
```

Learn More →


```python
from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
graph = builder.compile(checkpointer=memory)
# Now the graph remembers between runs
```
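
Conceptually, a checkpointer stores the latest graph state under a thread id so a later run can resume from it. A minimal dict-based sketch of that idea (not LangGraph's actual MemorySaver, which also versions checkpoints):

```python
class ToyCheckpointer:
    """Stores the latest state per thread id: the core idea behind a checkpointer."""
    def __init__(self):
        self._store: dict[str, dict] = {}

    def save(self, thread_id: str, state: dict) -> None:
        self._store[thread_id] = dict(state)  # copy so later mutation doesn't leak in

    def load(self, thread_id: str) -> dict:
        return self._store.get(thread_id, {})

cp = ToyCheckpointer()
cp.save("user-42", {"messages": ["hello"], "current_step": "step_1"})
print(cp.load("user-42"))
# {'messages': ['hello'], 'current_step': 'step_1'}
```

This is also why LangGraph asks for a `thread_id` in the run config: it is the key under which the conversation state is saved and restored.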

Learn More →


```python
import logging

from smolagents import tool

@tool
def risky_operation() -> str:
    """An operation that might fail."""
    try:
        return do_something()
    except Exception as e:
        logging.error(f"Operation failed: {e}")
        return "Operation failed, trying alternative approach"
```

Learn More →


```sh
# Containerise your agent
docker build -t my-agent .
docker run -p 8000:8000 my-agent

# Deploy to cloud
aws ecs run-task --cluster my-cluster --task-definition my-agent
```

Learn More →


| Need | Recommendation | Time | Complexity |
| --- | --- | --- | --- |
| Simple automation | SmolAgents | 5 min | Low |
| Team of agents | CrewAI | 10 min | Medium |
| Flexible coordination | OpenAI Agents SDK | 15 min | Medium |
| RAG/Search | LlamaIndex | 15 min | Medium |
| Complex workflows | LangGraph | 20 min | High |

After your quick start:

  1. Explore Recipes - Real-world implementations
  2. Read Comprehensive Guide - Full features and concepts
  3. Study Production Guide - Deployment and scaling
  4. Review Diagrams - Architecture and patterns

```python
# CrewAI: chain tasks via context
task1 = Task(description="...", agent=agent1)
task2 = Task(description="...", agent=agent2, context=[task1])

# LangGraph
builder.add_edge("agent1_node", "agent2_node")

# OpenAI Agents SDK: agent1 hands off to agent2 via its handoffs list
agent1 = Agent(name="Agent 1", instructions="...", handoffs=[agent2])
```

```python
# CrewAI
tasks = [task1, task2, task3]  # No dependencies = parallel

# LangGraph: fan out with one edge per branch, then join
builder.add_edge("start", "agent1")
builder.add_edge("start", "agent2")
builder.add_edge(["agent1", "agent2"], "join_node")  # Merge results

# Python
import asyncio
await asyncio.gather(agent1.run(...), agent2.run(...))
```
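
The `asyncio.gather` pattern works with any async agent API. A self-contained sketch with dummy coroutines standing in for agent calls (a real agent would call an LLM where the sleep is):

```python
import asyncio

async def run_agent(name: str, delay: float) -> str:
    """Stand-in for an agent call; the sleep mimics LLM latency."""
    await asyncio.sleep(delay)
    return f"{name} done"

async def main() -> list[str]:
    # Both coroutines run concurrently; total time is roughly the slowest one,
    # not the sum, and results come back in argument order
    return await asyncio.gather(run_agent("analyst", 0.05), run_agent("writer", 0.05))

print(asyncio.run(main()))
# ['analyst done', 'writer done']
```

Note that `gather` preserves argument order in its results, so you can safely unpack them even though the agents finish in any order.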

```python
# LangGraph (best support)
def route(state):
    if state["needs_help"]:
        return "escalate"
    return "resolve"

builder.add_conditional_edges("classify", route)

# OpenAI Agents SDK: give the agent a handoff target and let the LLM decide
agent = Agent(name="Classifier", instructions="...", handoffs=[specialist_agent])

# CrewAI
# Use task dependencies (context) to gate later tasks
```

```python
# ✅ Correct: tools are passed
agent = Agent(tools=[my_tool])

# ❌ Wrong: tools not passed
agent = Agent()
```

See Tool Guide →


```python
# ✅ Correct: checkpointer configured
graph = builder.compile(checkpointer=MemorySaver())

# ❌ Wrong: no checkpointer
graph = builder.compile()
```

See Memory Guide →


```python
# ✅ Correct: use the streaming API
for event in agent.stream(input):
    print(event)

# ❌ Wrong: blocks until the full response is ready
output = agent.run(input)
```

See Streaming Guide →



  1. Check the relevant guide - Comprehensive guides have troubleshooting sections
  2. Search the Recipes - Your use case is probably covered
  3. Review Production Guide - For deployment issues
  4. Check official docs - Links provided in each guide

Ready? Pick a path above and start building! 🚀