Getting Started with Agent-Gantry
This guide will get you up and running with Agent-Gantry in about 5 minutes.
Prerequisites
- Python 3.10 or higher
- Basic knowledge of async/await in Python
- An LLM provider API key (OpenAI, Anthropic, Google, etc.)
Installation
Basic Installation
pip install agent-gantry
With LLM Provider Support
# All LLM providers (OpenAI, Anthropic, Google, Mistral, Groq)
pip install agent-gantry[llm-providers]
# Individual providers
pip install agent-gantry[openai] # OpenAI, Azure OpenAI
pip install agent-gantry[anthropic] # Anthropic Claude
pip install agent-gantry[google-genai] # Google Gemini
pip install agent-gantry[mistral] # Mistral AI
pip install agent-gantry[groq] # Groq
With Local Persistence
# LanceDB for local vector storage + Nomic embeddings
pip install agent-gantry[lancedb,nomic]
Everything
# All features, providers, and integrations
pip install agent-gantry[all]
Your First Agent-Gantry Application
Let’s build a simple agent that can help with weather and calculations.
Step 1: Set Up Environment
Create a new file my_agent.py and import the necessary modules:
import asyncio
import ast
import json
from typing import Union

from openai import AsyncOpenAI

from agent_gantry import AgentGantry, with_semantic_tools, set_default_gantry


def _evaluate_math_expression(expression: str) -> float:
    """Safely evaluate a basic math expression using the AST module."""
    if len(expression) > 200:
        raise ValueError("Expression too long")
    node = ast.parse(expression, mode="eval").body
    if sum(1 for _ in ast.walk(node)) > 50:
        raise ValueError("Expression too complex")

    def _evaluate(node: ast.AST) -> float:
        if isinstance(node, ast.BinOp) and isinstance(
            node.op,
            (ast.Add, ast.Sub, ast.Mult, ast.Div, ast.Mod, ast.Pow),
        ):
            left = _evaluate(node.left)
            right = _evaluate(node.right)
            if isinstance(node.op, ast.Add):
                return left + right
            if isinstance(node.op, ast.Sub):
                return left - right
            if isinstance(node.op, ast.Mult):
                return left * right
            if isinstance(node.op, ast.Div):
                if right == 0:
                    raise ValueError("Division by zero is not allowed")
                return left / right
            if isinstance(node.op, ast.Mod):
                if right == 0:
                    raise ValueError("Modulo by zero is not allowed")
                return left % right
            if isinstance(node.op, ast.Pow):
                if abs(right) > 100:
                    raise ValueError("Exponent too large")
                if left < 0 and not float(right).is_integer():
                    raise ValueError("Fractional exponents are not allowed for negative bases")
                return left ** right
            raise ValueError(
                "Internal error: binary operator passed validation but was not handled. "
                "This indicates a bug in the math expression evaluator."
            )
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, (ast.UAdd, ast.USub)):
            value = _evaluate(node.operand)
            return value if isinstance(node.op, ast.UAdd) else -value
        if isinstance(node, ast.Constant):
            value = node.value
            if isinstance(value, (int, float)):
                return float(value)
            raise ValueError("Only numeric literals are allowed")
        raise ValueError(
            "Unsupported expression. Allowed operators: +, -, *, /, %, ** and parentheses for grouping"
        )

    return float(_evaluate(node))


# Initialize OpenAI client
client = AsyncOpenAI()  # Requires OPENAI_API_KEY in environment

# Initialize Agent-Gantry
gantry = AgentGantry()
set_default_gantry(gantry)
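The helper above is safe because it whitelists AST node types, so arbitrary code can never execute. To see the idea in isolation, here is a condensed standalone sketch of the same technique supporting only +, -, *, / and numeric literals (independent of Agent-Gantry; `tiny_eval` is an illustrative name, not part of the library):

```python
import ast
import operator

# Map whitelisted AST operator types to their implementations
_OPS = {
    ast.Add: operator.add,
    ast.Sub: operator.sub,
    ast.Mult: operator.mul,
    ast.Div: operator.truediv,
}

def tiny_eval(expression: str) -> float:
    """Evaluate +, -, *, / over numeric literals; reject everything else."""
    def walk(node: ast.AST) -> float:
        if isinstance(node, ast.BinOp) and type(node.op) in _OPS:
            return _OPS[type(node.op)](walk(node.left), walk(node.right))
        if isinstance(node, ast.UnaryOp) and isinstance(node.op, ast.USub):
            return -walk(node.operand)
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return float(node.value)
        # Any other node type (Call, Name, Attribute, ...) is rejected outright
        raise ValueError(f"Unsupported syntax: {type(node).__name__}")
    return walk(ast.parse(expression, mode="eval").body)
```

For example, `tiny_eval("2 + 3 * 4")` returns `14.0`, while `tiny_eval("__import__('os')")` raises `ValueError` because a `Call` node is not whitelisted.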
Step 2: Register Tools
Define and register your tools using the @gantry.register decorator:
@gantry.register(
    tags=["weather", "forecast"],
    description="Get current weather conditions for any city"
)
def get_weather(city: str, units: str = "fahrenheit") -> str:
    """
    Get the current weather for a city.

    Args:
        city: Name of the city
        units: Temperature units (fahrenheit or celsius)

    Returns:
        Weather description
    """
    # In production, this would call a real weather API
    return f"The weather in {city} is 72°{units[0].upper()} and sunny."


@gantry.register(
    tags=["math", "calculation"],
    description="Perform mathematical calculations"
)
def calculate(expression: str) -> Union[float, str]:
    """
    Evaluate a mathematical expression.

    Args:
        expression: Math expression to evaluate (e.g., "15 * 8")

    Returns:
        Result of the calculation
    """
    try:
        return _evaluate_math_expression(expression)
    except Exception as e:
        return f"Error: {e}"


@gantry.register(
    tags=["unit conversion"],
    description="Convert between different units of measurement"
)
def convert_temperature(value: float, from_unit: str, to_unit: str) -> float:
    """
    Convert temperature between Fahrenheit and Celsius.

    Args:
        value: Temperature value
        from_unit: Source unit (F or C)
        to_unit: Target unit (F or C)

    Returns:
        Converted temperature
    """
    if from_unit.upper() == "F" and to_unit.upper() == "C":
        return (value - 32) * 5 / 9
    elif from_unit.upper() == "C" and to_unit.upper() == "F":
        return (value * 9 / 5) + 32
    else:
        return value
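Because the registered tools are ordinary Python functions, you can sanity-check their logic before involving any LLM. A minimal standalone sketch of the temperature conversion (decorator omitted so it runs without Agent-Gantry):

```python
def convert_temperature(value: float, from_unit: str, to_unit: str) -> float:
    """Convert temperature between Fahrenheit and Celsius."""
    if from_unit.upper() == "F" and to_unit.upper() == "C":
        return (value - 32) * 5 / 9
    if from_unit.upper() == "C" and to_unit.upper() == "F":
        return (value * 9 / 5) + 32
    return value  # Same unit on both sides: no conversion needed

print(round(convert_temperature(72, "F", "C"), 1))  # → 22.2
print(convert_temperature(100, "C", "F"))           # → 212.0
```

Catching unit mix-ups at this stage is much cheaper than debugging them through tool-call traces later.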
Step 3: Sync Tools to Vector Store
Before using semantic search, sync tools to the vector store:
async def main():
    # Sync tools to enable semantic search
    await gantry.sync()
    print(f"✓ Synced {len(gantry._registry._tools)} tools to vector store")
Step 4: Create Your LLM Function
Add the @with_semantic_tools decorator to automatically inject relevant tools:
@with_semantic_tools(limit=3, dialect="openai")
async def chat(prompt: str, *, tools=None):
    """Chat with the LLM, automatically providing relevant tools."""
    response = await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        tools=tools,  # Automatically injected by decorator
        tool_choice="auto"
    )
    return response
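Conceptually, a decorator like this retrieves the top-k tools most similar to the prompt and passes them in via the tools keyword before your function runs. The sketch below illustrates the shape of that mechanism with a word-overlap scorer standing in for real vector search (with_tools, pick_tools, and TOOL_DESCRIPTIONS are hypothetical names for illustration, not Agent-Gantry APIs):

```python
import functools

# Hypothetical catalog; Agent-Gantry builds the real one from registered tools
TOOL_DESCRIPTIONS = {
    "get_weather": "get current weather conditions for any city",
    "calculate": "perform mathematical calculations",
    "convert_temperature": "convert between units of measurement",
}

def pick_tools(prompt: str, limit: int) -> list[str]:
    """Stand-in for vector search: score tools by word overlap with the prompt."""
    words = set(prompt.lower().split())
    scored = sorted(
        TOOL_DESCRIPTIONS,
        key=lambda name: -len(words & set(TOOL_DESCRIPTIONS[name].split())),
    )
    return scored[:limit]

def with_tools(limit: int):
    """Toy semantic-tools decorator: inject a tools= kwarg before the call."""
    def decorator(fn):
        @functools.wraps(fn)
        def wrapper(prompt, **kwargs):
            kwargs.setdefault("tools", pick_tools(prompt, limit))
            return fn(prompt, **kwargs)
        return wrapper
    return decorator

@with_tools(limit=1)
def chat(prompt, *, tools=None):
    return tools  # Just echo what was injected

print(chat("what is the weather in Paris"))  # → ['get_weather']
```

The key design point is that the decorated function never has to know how tools were chosen; it just receives a ready-to-use tools argument, which is why the real decorator composes cleanly with any LLM client call.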
Step 5: Test Your Agent
Still inside main(), add some test queries and a loop that executes any requested tool calls:
    from agent_gantry.schema.execution import ToolCall

    # Test queries
    queries = [
        "What's the weather like in Paris?",
        "Calculate 15% of 250",
        "Convert 72 degrees Fahrenheit to Celsius"
    ]

    for query in queries:
        print(f"\n{'='*60}")
        print(f"User: {query}")
        print(f"{'='*60}")

        response = await chat(query)

        # Handle tool calls if present
        if response.choices[0].message.tool_calls:
            print("🔧 Tool calls requested:")
            for tool_call in response.choices[0].message.tool_calls:
                print(f"  → {tool_call.function.name}({tool_call.function.arguments})")

                # Execute the tool
                try:
                    parsed_args = json.loads(tool_call.function.arguments)
                except json.JSONDecodeError as err:
                    print(f"  ⚠️ Unable to parse tool arguments: {err}")
                    continue
                result = await gantry.execute(
                    ToolCall(
                        tool_name=tool_call.function.name,
                        arguments=parsed_args,
                    )
                )
                print(f"  ✓ Result: {result.output}")
        else:
            print(f"Assistant: {response.choices[0].message.content}")


# Run the agent
if __name__ == "__main__":
    asyncio.run(main())
Complete Example
Here’s the full code in one place:
This example reuses the _evaluate_math_expression helper shown in the first code block of this guide. Copy that helper into your my_agent.py before these imports so the script remains self-contained.
import asyncio
import json
from typing import Union

from openai import AsyncOpenAI

from agent_gantry import AgentGantry, with_semantic_tools, set_default_gantry
from agent_gantry.schema.execution import ToolCall

# Initialize
client = AsyncOpenAI()
gantry = AgentGantry()
set_default_gantry(gantry)


# Register tools
@gantry.register(tags=["weather"])
def get_weather(city: str, units: str = "fahrenheit") -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is 72°{units[0].upper()} and sunny."


@gantry.register(tags=["math"])
def calculate(expression: str) -> Union[float, str]:
    """Evaluate a mathematical expression."""
    try:
        return _evaluate_math_expression(expression)
    except Exception as e:
        return f"Error: {e}"


@gantry.register(tags=["conversion"])
def convert_temperature(value: float, from_unit: str, to_unit: str) -> float:
    """Convert temperature between Fahrenheit and Celsius."""
    if from_unit.upper() == "F" and to_unit.upper() == "C":
        return (value - 32) * 5 / 9
    elif from_unit.upper() == "C" and to_unit.upper() == "F":
        return (value * 9 / 5) + 32
    return value


async def main():
    # Sync tools
    await gantry.sync()
    print(f"✓ Synced {len(gantry._registry._tools)} tools")

    # Define chat function with semantic tools
    @with_semantic_tools(limit=3, dialect="openai")
    async def chat(prompt: str, *, tools=None):
        response = await client.chat.completions.create(
            model="gpt-4o",
            messages=[{"role": "user", "content": prompt}],
            tools=tools,
            tool_choice="auto"
        )
        return response

    # Test
    query = "What's the weather like in Paris?"
    response = await chat(query)
    print(f"Query: {query}")

    message = response.choices[0].message
    if message.tool_calls:
        print("Tool calls requested:")
        for tool_call in message.tool_calls:
            print(f"  → {tool_call.function.name}({tool_call.function.arguments})")
            try:
                parsed_args = json.loads(tool_call.function.arguments)
            except json.JSONDecodeError as err:
                print(f"  ⚠️ Unable to parse tool arguments: {err}")
                continue
            result = await gantry.execute(
                ToolCall(tool_name=tool_call.function.name, arguments=parsed_args)
            )
            print(f"  ✓ Result: {result.output}")
    else:
        print(f"Response: {message.content}")


if __name__ == "__main__":
    asyncio.run(main())
Run It
export OPENAI_API_KEY="your-api-key-here"
python my_agent.py
What Just Happened?
- Tool Registration: You registered 3 tools with descriptions and tags
- Automatic Embedding: Agent-Gantry embedded your tools into a vector store
- Semantic Routing: When you called chat(), Agent-Gantry:
  - Extracted the prompt ("What's the weather like in Paris?")
  - Performed vector search to find relevant tools
  - Found get_weather as the most relevant tool
  - Converted it to OpenAI format
  - Injected it into your LLM call
- Context Window Savings: Instead of sending your entire tool catalog, only the top 1-3 relevant tools were sent. With just 3 tools the difference is modest, but with a large catalog this can save roughly 70-90% of tool-definition tokens
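The vector-search step can be pictured with plain cosine similarity. The sketch below ranks tool descriptions against a query using toy bag-of-words vectors, a stand-in for the learned embeddings a real embedder produces (embed and cosine here are illustrative, not Agent-Gantry functions):

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Toy embedding: a sparse bag-of-words vector keyed by word."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse vectors."""
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

tools = {
    "get_weather": "get current weather conditions for a city",
    "calculate": "perform mathematical calculations",
}

query = "what is the weather like in Paris"
ranked = sorted(tools, key=lambda t: cosine(embed(query), embed(tools[t])), reverse=True)
print(ranked[0])  # → get_weather
```

Real embeddings capture synonyms and paraphrases ("forecast for Paris" would still match get_weather), which is why semantic routing works even when the prompt shares no words with the tool description.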
Next Steps
Now that you have a working agent, explore more advanced features:
Learn More About Semantic Routing
- Semantic Tool Decorator - Deep dive into the decorator
- Vector Store Integration - Advanced vector store usage
Add More Features
- Dynamic MCP Selection - Connect to MCP servers on-demand
- Local Persistence - Use LanceDB for persistent tool storage
- Configuration - Customize embedders, rerankers, and more
Integrate with Different LLM Providers
- LLM SDK Compatibility - Use with Anthropic, Google, Mistral, etc.
Production Best Practices
- Architecture Overview - Understand the system design
- Best Practices - Security, performance, and error handling
Common Issues
ImportError: No module named 'agent_gantry'
Make sure you’ve installed the package:
pip install agent-gantry
Tools not being selected correctly
- Make sure you called await gantry.sync() after registering tools
- Add a more descriptive description and more specific tags to your tools
- Ensure your tool docstrings are clear and detailed
“No default gantry set” error
Call set_default_gantry(gantry) after creating your AgentGantry instance:
gantry = AgentGantry()
set_default_gantry(gantry) # This line is required
Questions or Issues?
- Check the Troubleshooting Guide
- Review the API Reference
- Open an issue on GitHub