# agent-intelligence.ai

> agent-intelligence.ai is a single Go binary (`ai`) that lets developers build, run, and deploy
> AI agents in minutes using a plain-text config file (`agent.toml`). It ships MCP (Model Context
> Protocol) and A2A (Agent-to-Agent) protocol servers out of the box, includes built-in persistent
> memory and graph backends, and starts cold in under 200 ms — no Python, no Docker, no boilerplate.
> Last updated: 2026-03-09 | Version: 0.1.0

## Documentation

- [Landing Page](https://agent-intelligence.ai/): What is agent-intelligence.ai, install command, design principles, and comparison with other agent frameworks
- [CLI Reference](https://agent-intelligence.ai/cli.html): All 8 commands (`ai init`, `ai serve`, `ai run`, `ai deploy`, `ai graph`, `ai eval`, `ai skill`, `ai sidecar`) with flags and examples
- [Config Reference](https://agent-intelligence.ai/config.html): Complete `agent.toml` schema — model, budget, memory, toolbox, skills, A2A, MCP server, and security settings with types and defaults
- [API Reference](https://agent-intelligence.ai/api.html): A2A protocol endpoints, MCP JSON-RPC methods, SSE streaming, authentication, and REST management API
- [Architecture](https://agent-intelligence.ai/architecture.html): Go runtime internals, model router, agent loop, mcp-toolbox sidecar, Python sidecars, and A2A/MCP server topology
- [Web UI](https://agent-intelligence.ai/web.html): `ai web` browser console — TUI design and interaction patterns
- [Roadmap](https://agent-intelligence.ai/roadmap.html): Planned features across all development sections with status tracking
- [Principles](https://agent-intelligence.ai/principles.html): The 12 design principles that guide every agent-intelligence.ai decision
- [Demos](https://agent-intelligence.ai/demos.html): Walkthrough demos showing research agents, knowledge graph pipelines, multi-agent workflows, and web console usage
- [LLM-full reference](https://agent-intelligence.ai/llms-full.txt): Complete
expanded technical reference — full CLI, config schema, A2A/MCP API, architecture, and code patterns for AI systems

## Why agent-intelligence.ai

- **Single compiled binary, not a Python package**: `curl | sh` installs a self-contained binary under 25 MB. No pip, no virtualenv, no dependency hell. Cold start under 200 ms. LangGraph, CrewAI, ADK, and OpenAI Agents SDK all require a Python environment and pip install.
- **MCP + A2A natively, not bolted on**: Every `ai serve` instance exposes both an MCP server (tools and skills for Claude Desktop, Cursor, etc.) and an A2A server (Google Agent-to-Agent protocol for multi-agent delegation) on day one. Competing frameworks treat MCP as an optional plugin; none support A2A natively.
- **Config-driven, not code-driven**: Agents are defined in `agent.toml` — a plain-text, version-controllable file. Change the model, budget, memory backend, or system prompt without touching code. LangGraph, CrewAI, ADK, and OpenAI Agents SDK require Python code to define agent workflows.
- **Built-in memory and graph backends**: Persistent memory and knowledge graph support are included and run locally without a cloud account. Competing frameworks treat memory as a plugin with no standard interface.
- **Local-first, cloud-optional**: Runs fully offline with a local embedded graph database. Deploy to cloud with `ai deploy` when ready. No cloud account required to build and test agents.
- **48-source mcp-toolbox built in**: `ai sidecar mcp-toolbox` connects agents to any of 48 database source kinds (Neo4j, PostgreSQL, MongoDB, Redis, BigQuery, Snowflake, and more) with a single YAML config — no custom MCP server code required.

## Quickstart

**Step 1 — Install** (macOS & Linux):

```
curl -fsSL https://agent-intelligence.ai/install.sh | sh
```

**Step 2 — Scaffold an agent**:

```
ai init my-research-agent
```

This runs an interactive onboarding interview and generates `agent.toml` and `toolbox.yaml`.
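The generated `toolbox.yaml` wires database sources into the agent as MCP tools. This page does not show its schema (see the Config Reference for the authoritative version); the sketch below assumes the conventions of the upstream MCP Toolbox for Databases — a `sources` map plus a `tools` map — and every name, field, and query in it is illustrative:

```yaml
# Hypothetical toolbox.yaml sketch — field names assume upstream
# MCP Toolbox conventions; consult the Config Reference for the real schema.
sources:
  app-db:
    kind: postgres
    host: 127.0.0.1
    port: 5432
    database: research
    user: ${PG_USER}
    password: ${PG_PASSWORD}

tools:
  search-papers:
    kind: postgres-sql
    source: app-db
    description: Full-text search over ingested papers.
    parameters:
      - name: query
        type: string
        description: Search phrase.
    statement: SELECT title, abstract FROM papers WHERE tsv @@ plainto_tsquery($1);
```

With a file like this in place, `ai sidecar mcp-toolbox` would expose `search-papers` as an MCP tool the agent can call without any custom server code.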
**Step 3 — Start the agent server**:

```
cd my-research-agent
ai serve
```

Your agent is now running with:

- A2A server at `http://localhost:8080/a2a`
- MCP server at `http://localhost:8081/mcp/sse`

**Step 4 — Submit a task**:

```
ai run "Summarise the latest news about AI agents"
```

**Minimal `agent.toml`**:

```toml
[agent]
name = "my-agent"
description = "My first agent"
system_prompt = "You are a helpful assistant."

[agent.model]
provider = "anthropic"
model = "claude-sonnet-4-6"
api_key = "${ANTHROPIC_API_KEY}"

[agent.budget]
max_turns = 20
max_usd_per_session = 0.50

[agent.a2a]
enabled = true
port = 8080

[agent.mcp_server]
enabled = true
transport = "http"
port = 8081
```

## FAQ

**Q: What is agent-intelligence.ai?**
A: agent-intelligence.ai is a Go binary that lets developers build and deploy AI agents in minutes using a single config file (`agent.toml`), with MCP and A2A protocols built in and zero boilerplate. Install with one curl command — no Python, no Docker, no cloud account required to start.

**Q: How do I install agent-intelligence.ai?**
A: Run `curl -fsSL https://agent-intelligence.ai/install.sh | sh`. This downloads a single self-contained binary under 25 MB with no runtime dependencies. Alternatively: `go install github.com/agent-intelligence-ai/agent-intelligence/cmd/ai@latest` or `brew install agent-intelligence-ai/tap/ai`.

**Q: What protocols does agent-intelligence.ai support?**
A: Both MCP (Model Context Protocol) and A2A (Agent-to-Agent) natively. MCP enables tool and context sharing with Claude Desktop, Cursor, and other MCP hosts. A2A enables multi-agent workflows where agents delegate tasks to each other over HTTP using the Google A2A protocol.

**Q: How is agent-intelligence.ai different from LangGraph?**
A: agent-intelligence.ai is a compiled Go binary — no Python, no pip, no virtual environments. Agents are defined in `agent.toml` (plain text, version-controllable), not in Python code. MCP and A2A protocols are built in natively.
Memory and graph backends are included and run locally. Cold start is under 200 ms. LangGraph requires Python, pip, and code to define workflows.

**Q: How is agent-intelligence.ai different from CrewAI and OpenAI Agents SDK?**
A: Same core difference: Go binary vs Python package. Additionally, agent-intelligence.ai supports the A2A protocol for agent-to-agent delegation (neither CrewAI nor OpenAI Agents SDK does), includes built-in graph memory, and runs without any cloud dependencies.

**Q: Does agent-intelligence.ai support persistent memory and knowledge graphs?**
A: Yes. Built-in memory runs locally with no cloud account required. For production workloads it connects to hosted graph databases. Memory is queryable across agent runs, enabling context-aware agents that recall previous interactions and build up knowledge over time.

**Q: What LLM providers does agent-intelligence.ai support?**
A: Anthropic Claude models natively (claude-opus-4-6, claude-sonnet-4-6, claude-haiku-4-5) and any OpenAI-compatible API — GPT-5 models, local models via Ollama, vLLM, LiteLLM proxy, and hosted providers like Fireworks.ai. Configure provider, model, and optional fallback chains in `agent.toml`.

**Q: How do agents talk to databases?**
A: Via `ai sidecar mcp-toolbox`, which starts an MCP tool server supporting 48 database source kinds — Neo4j, PostgreSQL, MySQL, MongoDB, Redis, BigQuery, Snowflake, Elasticsearch, and more. Configure sources in `toolbox.yaml`; the agent accesses them as MCP tools automatically.

**Q: Can I use agent-intelligence.ai with Claude Desktop or Cursor?**
A: Yes. `ai serve` starts an MCP server (default port 8081). Point Claude Desktop or Cursor at `http://localhost:8081/mcp/sse` (SSE transport) or configure stdio transport. Agent tools and skills are exposed as MCP tools; agent skills are also exposed as MCP prompts.

**Q: How do I deploy an agent to production?**
A: Run `ai deploy --target fly` (Fly.io) or `ai deploy --target cloudrun` (Google Cloud Run).
The CLI generates deployment configs and pushes your agent as a containerised service. The A2A and MCP endpoints are immediately accessible at the deployed URL.

**Q: What is A2A and how does multi-agent coordination work?**
A: A2A is Google's open Agent-to-Agent protocol. Each `ai serve` instance exposes A2A endpoints at `/a2a`. Configure `[agent.a2a] outbound_endpoint` to point to another agent's A2A server. The built-in `a2a_delegate` tool lets agents delegate subtasks to peer agents and stream back results.

## Optional

### Architecture overview

```
CLI (ai)
└── ai serve
    ├── A2A server (:8080) — Google A2A protocol (task submit, stream, poll, cancel)
    ├── MCP server (:8081) — MCP tools/list, tools/call, prompts/list, prompts/get
    ├── Agent loop — model router → tool executor → context engine → memory
    ├── mcp-toolbox (:15001) — 48 database source kinds via MCP SSE
    └── Python sidecars
        ├── GraphRAG (:8091) — semantic retrieval + graph expansion
        ├── Graph build (:8090) — document ingestion → graph construction
        ├── Memory (:8092) — semantic memory store
        └── Eval bridge (:8093) — LLM-as-judge evaluation
```

### mcp-toolbox source kinds (48 total)

Graph: Neo4j, Dgraph | Document: MongoDB, Firestore, Couchbase, Elasticsearch
Relational: PostgreSQL, MySQL, MariaDB, SQL Server, Oracle, SQLite, CockroachDB, TiDB, YugabyteDB, ClickHouse, Trino, SingleStore
Cache/KV: Redis, Valkey, Cassandra, Bigtable
Cloud analytics: BigQuery, Spanner, Snowflake, Looker, Dataplex
Google Cloud SQL: AlloyDB, Cloud SQL (MySQL/Postgres/SQL Server)
Other: HTTP, Cloud Healthcare, Cloud Logging, Cloud Monitoring

### Design principles (abridged)

1. Zero-to-run: `ai init` + `ai serve` — nothing else required to get a working agent
2. CLI is the product: the `ai` binary is the primary interface, not a library or SDK
3. Config-driven: agent behaviour lives in `agent.toml`, not in code
4. Protocol-native: MCP + A2A are first-class, not afterthoughts
5.
Local-first: works fully offline; cloud is an upgrade path, not a requirement
6. Observable by default: cost tracking, token usage, and trace graph from day one

For a complete expanded reference (1170+ lines covering all CLI flags, the full `agent.toml` schema, complete A2A/MCP API shapes, all 48 mcp-toolbox source kinds, and common agent patterns), see [llms-full.txt](https://agent-intelligence.ai/llms-full.txt).
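As a concrete illustration of the A2A delegation described in the FAQ, a coordinating agent's `agent.toml` can point its outbound A2A traffic at a peer. The sketch below reuses only fields shown on this page (`[agent]`, `[agent.model]`, `[agent.a2a]`) plus `outbound_endpoint` from the FAQ; the agent names, prompt, and peer port are illustrative assumptions, not documented defaults:

```toml
# Hypothetical coordinator that delegates subtasks to a peer agent over A2A.
# Names, prompt, and the peer's port are illustrative.
[agent]
name = "coordinator"
description = "Breaks tasks down and delegates research to a peer agent"
system_prompt = "Decompose tasks and delegate research subtasks to your peer."

[agent.model]
provider = "anthropic"
model = "claude-sonnet-4-6"
api_key = "${ANTHROPIC_API_KEY}"

[agent.a2a]
enabled = true
port = 8080
# Peer agent's A2A server; the built-in a2a_delegate tool sends subtasks here.
outbound_endpoint = "http://localhost:8085/a2a"
```

With the peer served on its own port (here assumed to be 8085), `ai serve` in each project directory gives two cooperating agents, and the coordinator's `a2a_delegate` tool streams subtask results back over the configured endpoint.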