# Agent-Gantry Documentation

**Universal Tool Orchestration Platform for LLM-Based Agent Systems**

> Context is precious. Execution is sacred. Trust is earned.
## Welcome

Agent-Gantry is a Python library that solves three critical problems in LLM-based agent systems: context window bloat, unreliable tool execution, and untrusted tool access.
## Installation

```bash
# Basic installation
pip install agent-gantry

# With all LLM providers
pip install agent-gantry[llm-providers]

# With local persistence (LanceDB + Nomic embeddings)
pip install agent-gantry[lancedb,nomic]

# Everything
pip install agent-gantry[all]
```
## 5-Minute Quick Start

Transform your existing LLM code into a semantically aware agent system:

```python
from openai import AsyncOpenAI
from agent_gantry import AgentGantry, with_semantic_tools, set_default_gantry

# Initialize
client = AsyncOpenAI()
gantry = AgentGantry()
set_default_gantry(gantry)

# Register tools
@gantry.register(tags=["weather"])
def get_weather(city: str) -> str:
    """Get the current weather for a city."""
    return f"The weather in {city} is 72°F and sunny."

# Apply the decorator - tools are automatically injected!
@with_semantic_tools(limit=3)
async def ask_llm(prompt: str, *, tools=None):
    return await client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": prompt}],
        tools=tools,  # Agent-Gantry injects relevant tools here
    )

# Just call it - semantic routing happens automatically
await ask_llm("What's the weather in San Francisco?")
```
That's it! Agent-Gantry automatically:

- 🎯 Selects only the tools relevant to each query (reducing prompt token costs by ~79% in benchmarks)
- 🔄 Converts tool schemas to any supported LLM provider's format
- 🛡️ Executes tools with circuit breakers and security policies
## Key Features

### Semantic Routing

Intelligent tool selection using vector similarity, reducing prompt token usage by ~79% in benchmarks.
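The idea behind semantic routing can be illustrated with a toy, dependency-free sketch: embed the query and each tool description, then keep only the closest matches. Real deployments use dense embeddings (e.g. nomic-embed-text-v1.5) rather than this bag-of-words stand-in, and the `embed`, `cosine`, and `select_tools` names here are illustrative, not Agent-Gantry API:

```python
from math import sqrt

def embed(text: str) -> dict[str, float]:
    """Toy 'embedding': word -> count. Real embedders produce dense vectors."""
    counts: dict[str, float] = {}
    for word in text.lower().split():
        counts[word] = counts.get(word, 0.0) + 1.0
    return counts

def cosine(a: dict[str, float], b: dict[str, float]) -> float:
    """Cosine similarity between two sparse word-count vectors."""
    dot = sum(a[w] * b.get(w, 0.0) for w in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def select_tools(query: str, tools: dict[str, str], limit: int = 2) -> list[str]:
    """Rank tools by similarity between the query and each tool description."""
    q = embed(query)
    ranked = sorted(tools, key=lambda name: cosine(q, embed(tools[name])), reverse=True)
    return ranked[:limit]

tools = {
    "get_weather": "get the current weather for a city",
    "send_email": "send an email to a recipient",
    "query_db": "run a sql query against the database",
}
print(select_tools("what is the weather in san francisco", tools, limit=1))
# ['get_weather']
```

Only the selected tool schemas reach the model's context, which is where the token savings come from.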
### Multi-Protocol Support

Native support for:

- MCP (Model Context Protocol) - Client and Server
- A2A (Agent-to-Agent Protocol)
- OpenAI, Anthropic, Google Gemini, Mistral, Groq
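As an illustration of what provider-format conversion produces, here is the OpenAI function-calling schema that the `get_weather` tool from the quick start would map onto (hand-written here for clarity; the schema is presumably derived from the function signature and docstring):

```python
# OpenAI's function-calling tool format: a "function" wrapper holding
# the name, description, and a JSON Schema for the parameters.
get_weather_schema = {
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string"},
            },
            "required": ["city"],
        },
    },
}

print(get_weather_schema["function"]["name"])
# get_weather
```

Anthropic's equivalent, for comparison, places the same JSON Schema under a top-level `input_schema` key rather than inside a `function` wrapper.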
### Production-Ready
- Circuit breakers and health tracking
- Retries with exponential backoff
- Structured logging and telemetry
- Zero-trust security with capability-based permissions
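The reliability features above follow well-known patterns. As a rough sketch (illustrative only, not Agent-Gantry's actual implementation), a circuit breaker combined with exponential-backoff retries looks like this:

```python
import time

class CircuitBreaker:
    """Open after `max_failures` consecutive failures; allow a probe
    call again after `reset_after` seconds (half-open state)."""

    def __init__(self, max_failures: int = 3, reset_after: float = 30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def allow(self) -> bool:
        if self.opened_at is None:
            return True
        if time.monotonic() - self.opened_at >= self.reset_after:
            self.opened_at = None  # half-open: allow one probe call
            self.failures = 0
            return True
        return False

    def record(self, success: bool) -> None:
        if success:
            self.failures = 0
        else:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()

def call_with_retry(fn, breaker, attempts: int = 3, base_delay: float = 0.01):
    """Retry with exponential backoff; skip calls while the breaker is open."""
    for attempt in range(attempts):
        if not breaker.allow():
            raise RuntimeError("circuit open")
        try:
            result = fn()
            breaker.record(success=True)
            return result
        except Exception:
            breaker.record(success=False)
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)  # delay doubles each retry
```

A failing tool trips the breaker after a few consecutive errors, so subsequent calls fail fast instead of waiting on a known-bad backend.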
### Framework Agnostic
Works seamlessly with:
- LangChain
- AutoGen
- CrewAI
- LlamaIndex
- Semantic Kernel
- Custom agents
## What's New in v0.1.2

### Context Window Savings
Agent-Gantry significantly reduces token usage by dynamically surfacing only the most relevant tools.
Benchmark Results:
| Scenario | Tools Passed | Prompt Tokens | Cost Reduction |
|---|---|---|---|
| Standard (All Tools) | 15 | 366 | - |
| Agent-Gantry (Top 2) | 2 | 78 | ~79% |
Measured using gpt-3.5-turbo with provider-reported token usage.
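The ~79% figure follows directly from the token counts in the table:

```python
standard_tokens = 366  # prompt tokens with all 15 tools attached
gantry_tokens = 78     # prompt tokens with only the top-2 tools

reduction = 1 - gantry_tokens / standard_tokens
print(f"{reduction:.0%}")
# 79%
```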
### Stress Test: 100 Tools
| Metric | Value |
|---|---|
| Total Tools | 100 |
| Retrieval Limit | Top 2 |
| Accuracy | 100% (10/10 queries) |
| Embedder | Nomic (nomic-embed-text-v1.5) |
## Documentation Structure
- Getting Started - Installation, quick start, and first steps
- Guides - Topic-specific tutorials and patterns
- Reference - API documentation and configuration
- Architecture - System design and best practices
- Troubleshooting - Common issues and solutions
## Community & Support
- GitHub Repository - Source code, issues, and contributions
- Report a Bug - Found an issue? Let us know
- Feature Requests - Suggest improvements
## License
Agent-Gantry is open-source software licensed under the MIT License.
*Transform your LLM agent system with semantic tool orchestration.*

**Get Started Now →**