# 🚀 Quick Start Guide

Get up and running with AI agents in just a few minutes.
## Choose Your Path

### 👤 Path 1: Simple Single Agent (5 minutes)

Use SmolAgents for the fastest start.
```bash
pip install smolagents
```

```python
from smolagents import CodeAgent, tool

@tool
def add_numbers(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

# Depending on your smolagents version, CodeAgent may also require a
# model argument (e.g. model=InferenceClientModel()).
agent = CodeAgent(tools=[add_numbers])
result = agent.run("What is 15 + 27?")
print(result)
```

**Next Steps:**
- Read SmolAgents README
- Try more SmolAgents Recipes
- Deep dive: SmolAgents Comprehensive Guide
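Conceptually, a `@tool` decorator like the one above mostly captures the function's metadata (name, docstring, signature) so the model can decide when to call it. Here is a framework-free sketch of that idea; the `tool_spec` attribute is an invented name for illustration, not the SmolAgents API:

```python
import inspect

# Toy version of a @tool decorator: record the metadata an agent framework
# would expose to the model, then return the function unchanged.
def tool(func):
    func.tool_spec = {
        "name": func.__name__,
        "description": inspect.getdoc(func),
        "signature": str(inspect.signature(func)),
    }
    return func

@tool
def add_numbers(a: int, b: int) -> int:
    """Add two numbers together."""
    return a + b

print(add_numbers.tool_spec["name"])  # → add_numbers
print(add_numbers(15, 27))            # → 42
```

The decorated function stays directly callable; the agent simply reads the attached spec when choosing a tool.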
### 👥 Path 2: Multi-Agent Team (10 minutes)

Use CrewAI for team-based agents.
```bash
pip install crewai
```

```python
from crewai import Agent, Task, Crew

# Define agents with different roles
analyst = Agent(
    role="Data Analyst",
    goal="Provide insights from data",
    backstory="You are an experienced data analyst"
)

writer = Agent(
    role="Content Writer",
    goal="Write clear reports",
    backstory="You are a professional writer"
)

# Define tasks
analysis_task = Task(
    description="Analyse the Q3 sales data",
    expected_output="Key insights from the Q3 sales data",
    agent=analyst
)

report_task = Task(
    description="Write a summary report",
    expected_output="A concise summary report",
    agent=writer,
    context=[analysis_task]  # runs after the analysis, with its output as context
)

# Create and run crew
crew = Crew(agents=[analyst, writer], tasks=[analysis_task, report_task])
result = crew.kickoff()
print(result)
```

**Next Steps:**
- Read CrewAI README
- Try more CrewAI Recipes
- Deep dive: CrewAI Comprehensive Guide
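At its core, a sequential crew runs each task in order and feeds earlier outputs into later tasks as context. A framework-free sketch of that loop; `run_tasks` and the task tuples are illustrative, not CrewAI API:

```python
# Each task is (name, function, dependency names); the outputs of its
# dependencies are collected and passed to the function as a context list.
def run_tasks(tasks):
    outputs = {}
    for name, func, deps in tasks:
        context = [outputs[d] for d in deps]
        outputs[name] = func(context)
    return outputs

tasks = [
    ("analysis", lambda ctx: "Q3 sales grew 12%", []),
    ("report", lambda ctx: f"Summary report: {ctx[0]}", ["analysis"]),
]
print(run_tasks(tasks)["report"])  # → Summary report: Q3 sales grew 12%
```

Declaring dependencies this way is what lets the framework order tasks for you instead of you wiring calls by hand.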
### 🔗 Path 3: Multi-Agent with Handoffs (15 minutes)

Use the OpenAI Agents SDK for powerful coordination.
```bash
pip install openai-agents
export OPENAI_API_KEY=sk-your-key
```

```python
from agents import Agent, Runner

# Create specialised agents
technical_agent = Agent(
    name="Technical",
    instructions="You are a technical specialist"
)

# Agents can hand off to each other via the handoffs parameter
support_agent = Agent(
    name="Support",
    instructions="You are a customer support specialist. "
                 "Hand off to Technical for complex technical issues.",
    handoffs=[technical_agent]
)

# Run (Runner.run is async; use run_sync outside an event loop)
result = Runner.run_sync(support_agent, "I'm having a technical issue")
print(result.final_output)
```

**Next Steps:**
- Read OpenAI Agents SDK README
- Try more OpenAI Recipes
- Deep dive: OpenAI Comprehensive Guide
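Stripped of the SDK, a handoff is just one agent returning control to another. The loop below is a toy sketch of the pattern; the agent functions and the `("handoff", name)` convention are invented for illustration:

```python
# Each agent returns either ("answer", text) or ("handoff", other_agent_name).
def support(message: str):
    if "technical" in message.lower():
        return ("handoff", "technical")
    return ("answer", "Support: happy to help with your account.")

def technical(message: str):
    return ("answer", "Technical: let's debug the issue step by step.")

AGENTS = {"support": support, "technical": technical}

def run(start: str, message: str) -> str:
    current = start
    while True:
        kind, value = AGENTS[current](message)
        if kind == "answer":
            return value
        current = value  # hand off to the named agent

print(run("support", "I'm having a technical issue"))
```

In the real SDK the model, not a keyword check, decides when to hand off, but the control flow is the same loop.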
### 📚 Path 4: Knowledge Retrieval (RAG) (15 minutes)

Use LlamaIndex for RAG systems.
```bash
pip install llama-index
```

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load documents
documents = SimpleDirectoryReader("data").load_data()

# Create index
index = VectorStoreIndex.from_documents(documents)

# Query
query_engine = index.as_query_engine()
response = query_engine.query("What is the capital of France?")
print(response)
```

**Next Steps:**
- Read LlamaIndex README
- Try more LlamaIndex Recipes
- Deep dive: LlamaIndex Comprehensive Guide
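Under the hood, the query engine first retrieves the documents most relevant to the question, then asks the model to answer from them. A toy retrieval step using word overlap instead of vector embeddings (real LlamaIndex retrieval is embedding-based; `retrieve` here is illustrative):

```python
import re

def tokens(text: str) -> set[str]:
    """Lowercase a string and split it into a set of alphabetic words."""
    return set(re.findall(r"[a-z]+", text.lower()))

# Score each document by how many query words it shares, return the best.
def retrieve(query: str, documents: list[str]) -> str:
    query_words = tokens(query)
    return max(documents, key=lambda doc: len(query_words & tokens(doc)))

docs = [
    "Paris is the capital of France.",
    "Berlin is the capital of Germany.",
]
print(retrieve("What is the capital of France?", docs))
# → Paris is the capital of France.
```

Swapping word overlap for embedding similarity is essentially what the vector index does at scale.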
### 🌊 Path 5: Complex Workflows (20 minutes)

Use LangGraph for sophisticated control flow.
```bash
pip install langgraph langchain-openai
```

```python
from typing import TypedDict

from langgraph.graph import END, StateGraph

class State(TypedDict):
    messages: list[str]
    current_step: str

# Create graph
builder = StateGraph(State)

def step_1(state: State) -> State:
    state["current_step"] = "completed_step_1"
    return state

def step_2(state: State) -> State:
    state["current_step"] = "completed_step_2"
    return state

builder.add_node("step_1", step_1)
builder.add_node("step_2", step_2)
builder.add_edge("step_1", "step_2")
builder.add_edge("step_2", END)
builder.set_entry_point("step_1")

graph = builder.compile()
result = graph.invoke({"messages": [], "current_step": "start"})
print(result)
```

**Next Steps:**
- Read LangGraph README
- Try more LangGraph Recipes
- Deep dive: LangGraph Comprehensive Guide
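What the compiled graph does can be sketched without the library: follow edges from the entry point, applying each node's function to the shared state. `run_graph` below is an illustrative helper, not LangGraph API:

```python
# nodes maps name -> state transform; edges maps name -> next node name
# (a node with no outgoing edge finishes the run).
def run_graph(nodes, edges, entry, state):
    current = entry
    while current is not None:
        state = nodes[current](state)
        current = edges.get(current)  # no outgoing edge → finish
    return state

nodes = {
    "step_1": lambda s: {**s, "current_step": "completed_step_1"},
    "step_2": lambda s: {**s, "current_step": "completed_step_2"},
}
edges = {"step_1": "step_2"}

result = run_graph(nodes, edges, "step_1", {"messages": [], "current_step": "start"})
print(result["current_step"])  # → completed_step_2
```

LangGraph adds the parts this sketch leaves out: typed state merging, branching, persistence, and streaming.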
## Common Tasks

### Add a Tool to Your Agent

```python
from smolagents import CodeAgent, tool

@tool
def search_web(query: str) -> str:
    """Search the web for information."""
    results = "..."  # implementation goes here: call your search backend
    return results

agent = CodeAgent(tools=[search_web])
```

### Implement Memory
```python
from langgraph.checkpoint.memory import MemorySaver

memory = MemorySaver()
graph = builder.compile(checkpointer=memory)

# Now the graph remembers between runs
```

### Add Error Handling
```python
import logging

from smolagents import tool

@tool
def risky_operation() -> str:
    """An operation that might fail."""
    try:
        return do_something()  # placeholder for your real call
    except Exception as e:
        logging.error(f"Operation failed: {e}")
        return "Operation failed, trying alternative approach"
```

### Deploy to Production
```bash
# Containerise your agent
docker build -t my-agent .
docker run -p 8000:8000 my-agent

# Deploy to cloud (example: AWS ECS)
aws ecs run-task --cluster my-cluster --task-definition my-agent
```

## Comparison: Which Path for You?
| Need | Recommendation | Time | Complexity |
|---|---|---|---|
| Simple automation | SmolAgents | 5 min | Low |
| Team of agents | CrewAI | 10 min | Medium |
| Flexible coordination | OpenAI Agents SDK | 15 min | Medium |
| RAG/Search | LlamaIndex | 15 min | Medium |
| Complex workflows | LangGraph | 20 min | High |
## Next Steps

After your quick start:
- Explore Recipes - Real-world implementations
- Read Comprehensive Guide - Full features and concepts
- Study Production Guide - Deployment and scaling
- Review Diagrams - Architecture and patterns
## Common Patterns

### Chain Multiple Agents

```python
# CrewAI: pass earlier tasks as context to sequence them
task1 = Task(description="...", agent=agent1)
task2 = Task(description="...", agent=agent2, context=[task1])

# LangGraph: connect nodes with an edge
builder.add_edge("agent1_node", "agent2_node")

# OpenAI Agents SDK: declare handoffs so agent1 can hand off to agent2
agent1 = Agent(name="Agent 1", instructions="...", handoffs=[agent2])
```

### Parallel Execution
```python
# CrewAI: tasks marked async_execution=True can run in parallel
tasks = [task1, task2, task3]

# LangGraph: fan out with one edge per branch, then fan in to merge
builder.add_edge("start", "agent1")
builder.add_edge("start", "agent2")
builder.add_edge(["agent1", "agent2"], "join_node")  # merge results

# Plain Python: run agents concurrently
import asyncio
await asyncio.gather(agent1.run(...), agent2.run(...))
```

### Conditional Routing
```python
# LangGraph (best support): route on state with conditional edges
def route(state):
    if state["needs_help"]:
        return "escalate"
    return "resolve"

builder.add_conditional_edges("classify", route)

# OpenAI Agents SDK: declare handoffs; the model decides when to use them
triage_agent = Agent(name="Triage", instructions="...", handoffs=[specialist_agent])

# CrewAI: model the flow with task dependencies (context)
```

## Troubleshooting
### Agent Isn’t Using Tools

```python
# ✅ Correct: tools are passed
agent = Agent(tools=[my_tool])

# ❌ Wrong: tools not passed
agent = Agent()
```

### Memory Not Working
```python
# ✅ Correct: checkpointer configured
graph = builder.compile(checkpointer=MemorySaver())

# ❌ Wrong: no checkpointer
graph = builder.compile()
```

### Streaming Not Working
```python
# ✅ Correct: use the streaming API
for event in agent.stream(input):
    print(event)

# ❌ Wrong: a blocking call returns only the final output
output = agent.run(input)
```

## Resources
### Get Help
Section titled “Get Help”- Check the relevant guide - Comprehensive guides have troubleshooting sections
- Search the Recipes - Your use case is probably covered
- Review Production Guide - For deployment issues
- Check official docs - Links provided in each guide
Ready? Pick a path above and start building! 🚀