agent-intelligence.ai

Build and deploy agents in minutes.

Developer-native. Memory-backed. Agent-ready.

[>] how it compares

Unlike Python-based agent frameworks, agent-intelligence.ai compiles to a single Go binary that installs in seconds — no pip, no virtualenv, no Docker image required. Agents are fully described in agent.toml: plain text, version-controllable, and readable without executing code. MCP and A2A protocols are built in from day one, so Claude Code, Cursor, and any MCP-compatible client connect directly to your agent without additional configuration.
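As an illustration of that zero-config claim, an MCP-compatible client such as Claude Code registers remote servers in a JSON config. Note the server name, port, and endpoint path below are assumptions for the sketch, not documented values of agent-intelligence.ai:

```json
{
  "mcpServers": {
    "my-agent": {
      "type": "http",
      "url": "http://localhost:8080/mcp"
    }
  }
}
```

With an entry like this in place, the client discovers the agent's tools over MCP the next time it starts.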

How agent-intelligence.ai compares to LangGraph, CrewAI, ADK, and OpenAI Agents SDK
| Feature        | agent-intelligence.ai | LangGraph   | CrewAI      | ADK     | Agents SDK |
|----------------|-----------------------|-------------|-------------|---------|------------|
| Language       | Go                    | Python      | Python      | Python  | Python     |
| Protocols      | MCP + A2A             | none native | none native | MCP     | MCP        |
| Memory / Graph | built-in              | plugin      | plugin      | plugin  | none       |
| Install        | single binary         | pip         | pip         | pip     | pip        |
| Config         | agent.toml            | code        | code        | code    | code       |
| Local-first    | yes                   | no          | no          | no      | no         |
| Binary size    | <25 MB                | 50+ MB      | 50+ MB      | 50+ MB  | 50+ MB     |
curl -fsSL https://agent-intelligence.ai/install.sh | sh
★ star on github

[>] quickstart — zero to running agent in 5 minutes

  1. install the binary
    curl -fsSL https://agent-intelligence.ai/install.sh | sh
  2. scaffold a new agent
    ai init my-agent && cd my-agent
  3. configure agent.toml — set your model and API key
    [agent]
    name    = "my-agent"
    
    [agent.model]
    provider = "anthropic"
    model    = "claude-sonnet-4-6"
    api_key  = "${ANTHROPIC_API_KEY}"
  4. start the agent server
    ai serve
  5. run your first task
    ai run "summarise recent AI research papers"

→ for persistent memory: ai graph connect $GRAPH_URI
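The graph URI could plausibly also live alongside the rest of the agent's configuration rather than being passed on the command line. This is a sketch only — the `[agent.memory]` table name, the `graph_uri` key, and the example bolt:// URI are assumptions, not documented agent.toml schema:

```toml
# hypothetical sketch: memory settings colocated with the agent config
[agent.memory]
graph_uri = "${GRAPH_URI}"   # e.g. a bolt://localhost:7687 graph store
```

Keeping the URI behind an environment variable mirrors the `${ANTHROPIC_API_KEY}` pattern shown in the quickstart, so no secrets land in version control.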

user@ai $ ai init my-research-agent

✓ scaffolded agent: my-research-agent/