CLI Reference

The ai binary is a single statically-linked Go binary (<25 MB, <200 ms cold start). All commands read from agent.toml in the current directory by default.

Installation

One-line install (macOS & Linux)

curl -fsSL https://agent-intelligence.ai/install.sh | sh

Go install

go install github.com/agent-intelligence-ai/agent-intelligence/cmd/ai@latest

Homebrew

brew install agent-intelligence-ai/tap/ai

Global Flags

These flags go before the subcommand name and apply to every command.


ai [global flags] <subcommand> [subcommand flags] [args]
Flag               Type / Default       Description
--config           string / agent.toml  Path to the agent config file; env: AI_CONFIG
--verbose          bool / false         Enable verbose output (tool call traces, debug logs); env: AI_VERBOSE=1
--quiet, -q        bool / false         Suppress non-essential output; errors only (useful for scripting / CI); env: AI_QUIET=1
--no-color         bool / false         Disable ANSI color codes; keeps Unicode symbols; env: AI_NO_COLOR=1 or NO_COLOR
--plain            bool / false         Plain ASCII output: strips ANSI codes and Unicode symbols (✓→ok, ✗→FAIL, ──→--). Implies --no-color. Required with --yes (SF-3 safety floor). env: PLAIN=1
--output           string / text        Output format: text (default), json, ndjson (ai run only); env: AI_OUTPUT
--non-interactive  bool / false         Suppress all interactive prompts; commands requiring input exit 2; env: AI_NON_INTERACTIVE=1
--yes              bool / false         Auto-approve all human_approval prompts (CI use only); requires --plain (SF-3 safety floor)
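As a sketch of how these flags compose in CI (the helper name and task text below are illustrative, not part of the CLI):

```shell
# Hypothetical CI wrapper. --yes requires --plain (SF-3 safety floor),
# and --non-interactive turns any prompt into a hard failure (exit 2)
# instead of a hang.
ci_run() {
  ai --plain --yes --non-interactive --output json run "$1"
}

# Usage in a pipeline step: ci_run "nightly data check" > result.json
```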

ai init

Run the onboarding interview and generate agent.toml + toolbox.yaml.

ai init [flags] [intent]
Flag           Type / Default      Description
--no-ai        bool / false        Skip Claude interview; prompt each field manually
--no-graph     bool / false        Skip Neo4j introspection; generate a skeleton config
--credentials  file                Load Neo4j connection details from an Aura/Sandbox credentials file
--template     string              Agent type: research, coding, data, ops, custom (skips intent detection)
--force        bool / false        Bypass post-generation validation warnings
--name         string              Agent name (required with --non-interactive)
--model        string              Model ID, e.g. claude-sonnet-4-6 (required with --non-interactive)
--provider     string / anthropic  Model provider: anthropic or openai-compat

The optional intent string describes what the agent should do; Claude uses it to scaffold a relevant config, toolbox.yaml, and system prompt.

Basic init with intent

$ ai init "a research agent for my companies database"
✓ scaffolded: my-research-agent/
✓ intent detected → research agent
✓ config generated: agent.toml
✓ toolbox.yaml generated

Template + credentials file

$ ai init --template research --credentials ~/Downloads/Neo4j-xxxx.txt
✓ Neo4j credentials loaded
✓ config generated: agent.toml
✓ toolbox.yaml generated

Manual init (no AI interview)

$ ai init --no-ai
? agent name: my-agent
? description: researches companies
✓ config generated: agent.toml

Non-interactive (CI / scripting)

$ ai init --non-interactive --name my-agent --model claude-sonnet-4-6
✓ config generated: agent.toml
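For orientation, here is a sketch of the kind of toolbox.yaml that init scaffolds. Field names follow mcp-toolbox's sources / tools / toolsets layout; every value below is a placeholder, not actual tool output:

```yaml
# Illustrative sketch only -- the generated file will differ.
sources:
  companies-db:
    kind: neo4j
    uri: neo4j+s://demo.neo4jlabs.com:7687
    user: companies
    password: companies
tools:
  company_lookup:
    kind: neo4j-cypher
    source: companies-db
    description: Look up a company by name
    statement: MATCH (c:Company {name: $name}) RETURN c LIMIT 1
    parameters:
      - name: name
        type: string
        description: Company name to match
toolsets:
  default:
    - company_lookup
```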

ai show

Display agent.toml with syntax highlighting, inspect toolbox sources, or show the full config schema.

ai show [flags] [file]
Flag            Type / Default  Description
--full          bool / false    Show the complete TOML dump of all config fields (no truncation)
--section       string          Show only the named config section: model, memory, toolbox, security, graph, server, a2a
--tools         bool / [path]   Show toolbox sources, tools, and toolsets; auto-detects toolbox.yaml
--filter        bool / false    Pipe output through fzf (if available) or built-in interactive line filter
--grep          string          Filter output to lines matching a case-insensitive regex
--spec          bool / false    Show full config schema with all fields, types, and defaults
--json          bool / false    Output as JSON instead of colored TOML
--capabilities  bool / false    Output the A2A AgentCard JSON (no running server required)
--diagram       bool / false    Render agent topology as an ASCII architecture diagram
--svg           bool / false    With --diagram: output SVG instead of ASCII

Show current config

$ ai show
─── agent.toml ──────────────────────────────
[agent]
name = "my-research-agent"
[agent.model]
provider = "anthropic"
model = "claude-sonnet-4-6"
[agent.a2a]
enabled = true
port = 8080

Show a specific section

$ ai show --section model
[agent.model]
provider = "anthropic"
model = "claude-sonnet-4-6"

Show toolbox sources and filter with grep

$ ai show --tools --grep neo4j
source: companies-db kind: neo4j tools: 3

Show config schema

$ ai show --spec
agent.name string required
agent.model.provider string "anthropic"|"openai-compat"
agent.budget.max_turns int default: 10
agent.a2a.port int default: 8080
...

Output A2A AgentCard JSON

$ ai show --capabilities
{"name":"my-research-agent","skills":[...],"url":"http://localhost:8080"}

Render topology diagram as SVG

$ ai show --diagram --svg > topology.svg

ai serve

Start the A2A server, MCP server, and managed sidecars (mcp-toolbox, etc.).

ai serve [flags]
Flag              Type / Default       Description
--port            int / 8080           Port for the A2A server
--mcp-port        int / 8081           Port for the MCP server (Claude Desktop integration)
--host            string / 127.0.0.1   Host interface to bind; use 0.0.0.0 for team sharing
--token           string               Bearer token protecting /ui/ and /api/; env: AI_WEB_TOKEN
--config          string / agent.toml  Path to agent.toml config
--toolbox-config  string               Path to toolbox YAML config (auto-detected if toolbox.yaml exists)
--no-sidecar      bool / false         Skip sidecar startup (useful for testing)
--state-dir       string / .agint      Directory for durable state: sessions + tasks

Process          Port         Description
A2A server       :8080        POST /a2a, POST /a2a/stream, GET /a2a/tasks/{id}
MCP server       :8081        Streamable HTTP at POST /mcp, legacy SSE at GET /sse; tools/list, tools/call, prompts/list
mcp-toolbox      :15000       MCP tool server subprocess (Neo4j, Postgres, 48 source kinds)
Python sidecars  :8090–:8093  GraphRAG, graph construction, memory, eval
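The task endpoint in the table above can also be hit directly. A hedged sketch (the response JSON shape is not specified on this page, and the token variable and task-ID argument are illustrative):

```shell
# Fetch a task's state over the documented A2A surface and print the
# raw JSON response. Assumes a server started with `ai serve --token`.
poll_task() {
  curl -sS -H "Authorization: Bearer $AI_WEB_TOKEN" \
    "http://localhost:8080/a2a/tasks/$1"
}

# poll_task "$TASK_ID"
```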

Default serve

$ ai serve
✓ toolbox: 14 tools loaded
✓ A2A → :8080
✓ MCP → :8081
ready — http://localhost:8080

Custom ports with token auth

$ ai serve --port 9090 --mcp-port 9091 --token secret123
ready — http://localhost:9090

Team sharing (bind all interfaces)

$ ai serve --host 0.0.0.0 --token secret123
ready — http://0.0.0.0:8080

ai run

Submit a task to an agent and stream the response to stdout.

ai run [flags] <task>
Flag        Type / Default           Description
--endpoint  string / —               Full A2A endpoint URL; skips loading agent.toml. Use for remote agents and deployed instances. Example: --endpoint https://my-agent.fly.dev/a2a
--token     string / —               Bearer token sent as Authorization: Bearer <token> on all A2A requests. Example: --token $AI_TOKEN
--stream    bool / true              Stream SSE events as they arrive (default behaviour; flag accepted for compatibility). Example: --stream
--timeout   duration / —             Maximum time to wait for task completion (e.g. 30s, 5m); env: AI_TIMEOUT; exit 8 on timeout. Example: --timeout 60s
--verbose   bool / false             Show tool call trace (name, input, result). Example: --verbose
--agent     string / localhost:8080  Deprecated — use --endpoint instead. Target A2A base URL. Example: --agent localhost:9090

Simple query

$ ai run "top 5 companies by revenue"
1. Apple Inc. — $394B
2. Amazon — $513B
3. Alphabet — $307B
4. Microsoft — $211B
5. Meta — $134B

Remote agent + verbose tool trace

$ ai run --agent http://remote:8080 --verbose "find Acme competitors"
[tool] company_lookup({"query":"Acme Corp"})
[tool] graphrag_search({"query":"competitors similar to Acme"})
Acme's top competitors: WidgetCo, TechBase, BuildIt

With timeout

$ ai run --timeout 60s "long running research task"
[streams response or exits 8 on timeout]

Use --endpoint and --token to target any deployed A2A-compliant agent without a local agent.toml. This is the pattern used in Demos 02–05.
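That pattern fits in a small wrapper. A sketch, where AGENT_ENDPOINT is an illustrative variable name you export yourself (it is not one the CLI reads; AI_TOKEN follows the examples on this page):

```shell
# Hedged helper for targeting a deployed A2A-compliant agent without a
# local agent.toml. Both environment variables must be exported first.
ask() {
  ai run --endpoint "$AGENT_ENDPOINT" --token "$AI_TOKEN" "$@"
}

# ask "summarise Q4 earnings"
```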

Query a deployed Fly.io agent

$ ai run --endpoint https://my-agent.fly.dev/a2a --token $AI_TOKEN "summarise Q4 earnings"
Q4 2025: revenue up 12% YoY to $4.2B...

Stream SSE progress events explicitly

$ ai run --endpoint https://analyst.fly.dev/a2a --token $AI_TOKEN --stream "research Alphabet"
[progress] searching knowledge graph...
[progress] running graphrag retrieval...
Alphabet (Google parent): $307B revenue, 190k employees...

ai deploy

Deploy the agent runtime to a cloud provider (Fly.io or Cloud Run).

ai deploy [flags]
ai deploy status
ai deploy logs
Flag       Type / Default       Description
--target   string / fly         Deployment target: fly or cloudrun
--config   string / agent.toml  Path to agent.toml
--domain   string / —           Custom domain override
--dry-run  bool / false         Print deployment plan without executing; exit 0

Command  Description
status   Show deployment health and instance count
logs     Stream deployment logs to stdout

Deploy to Fly.io

$ ai deploy
✓ built Go binary (4.2MB)
✓ deployed → https://my-research-agent.agent-intelligence.ai
✓ A2A + MCP endpoints live

Dry run (preview deployment plan)

$ ai deploy --dry-run
target: fly · app: my-research-agent · region: iad

Check status & stream logs

$ ai deploy status
✓ running · 2 instances · :8080
$ ai deploy logs
2026-03-08T22:01:15Z INFO agent=companies-researcher turn=1 tokens=1243

ai graph

Manage the knowledge graph backend — connect, introspect, initialize, build, and promote.

ai graph <subcommand> [flags]
Command         Description
connect <uri>   Validate a Neo4j URI and print node/relationship counts
introspect      Output structured JSON schema (node labels, rel types, cardinality)
init            Create a local Kuzu database in .agint/graph/
build <source>  Ingest documents into the graph via graph-construction sidecar (port 8090)
promote         Export local Kuzu graph to Neo4j Aura

connect flags:
Flag        Type / Default  Description
--username  string          Neo4j username; env: NEO4J_USERNAME
--password  string          Neo4j password; env: NEO4J_PASSWORD
--database  string / neo4j  Neo4j database name; env: NEO4J_DATABASE

introspect flags:
Flag        Type / Default  Description
--uri       string          Neo4j URI; env: NEO4J_URI
--username  string          Neo4j username; env: NEO4J_USERNAME
--password  string          Neo4j password; env: NEO4J_PASSWORD
--database  string / neo4j  Neo4j database name; env: NEO4J_DATABASE
--timeout   int / 30        Query timeout in seconds

init flags:
Flag         Type / Default   Description
--local      bool / false     Create a local Kuzu embedded graph in .agint/graph/
--state-dir  string / .agint  Directory for agent state

build flags:
Flag              Type / Default        Description
--port            int / 8090            Graph-construction sidecar port
--neo4j-uri       string                Neo4j URI; env: NEO4J_URI
--neo4j-username  string                Neo4j username; env: NEO4J_USERNAME
--neo4j-password  string                Neo4j password; env: NEO4J_PASSWORD
--poll            duration / 2s         Progress polling interval
--remote          bool / false          Submit as a Cloud Run Job instead of local sidecar
--project         string                GCP project ID (--remote only); env: GOOGLE_CLOUD_PROJECT
--region          string / us-central1  GCP region (--remote only); env: GOOGLE_CLOUD_REGION
--config          string / agent.toml   Path to agent.toml for job name resolution

promote flags:
Flag         Type / Default   Description
--target     string           Neo4j target URI; env: NEO4J_RW_URI
--username   string           Neo4j username; env: NEO4J_RW_USERNAME
--password   string           Neo4j password; env: NEO4J_RW_PASSWORD
--database   string / neo4j   Neo4j database; env: NEO4J_RW_DATABASE
--state-dir  string / .agint  Directory containing local graph state

Connect and introspect

$ ai graph connect neo4j+s://demo.neo4jlabs.com:7687 --username companies --password companies
✓ connected · 47,821 nodes · 89,302 rels · 12 labels
$ ai graph introspect --uri neo4j+s://demo.neo4jlabs.com:7687 --username companies --password companies
{"nodes":["Company","Person","Article"],"rels":["WORKS_AT","MENTIONS"]}

Build from documents

$ ai graph build ./docs/
ingesting 47 documents...
✓ 47 docs → 892 chunks → 891 nodes · 1,203 rels

Remote build via Cloud Run

$ ai graph build gs://my-bucket/corpus/ --remote --project my-gcp-project
submitted Cloud Run job...

Promote local graph to Neo4j Aura

$ ai graph promote --target neo4j+s://xxx.databases.neo4j.io:7687 --username neo4j --password secret
✓ exported 891 nodes, 1,203 rels to Neo4j Aura

ai eval

Run evaluations against agents and produce quality / cost reports.

ai eval <subcommand> [flags]
Command      Description
run          Run LLM-as-judge evaluation against a YAML dataset
report       Generate a markdown summary of evaluation results
cost-report  Show per-session cost breakdown from task data

run flags:
Flag              Type / Default                  Description
--dataset         string (required)               Path to evaluation dataset YAML
--results         string                          Path to write results YAML (default: <dataset>-results.yaml)
--pass-threshold  float / 0.8                     Minimum score fraction to consider the run a PASS (0.0–1.0)
--sidecar-url     string / http://127.0.0.1:8093  Eval sidecar URL

report flags:
Flag       Type / Default     Description
--results  string (required)  Path to results YAML produced by ai eval run
--output   string             Path to write markdown report (default: print to stdout)

cost-report flags:
Flag         Type / Default   Description
--state-dir  string / .agint  Agent state directory

The eval run subcommand requires the eval sidecar to be running (port 8093). Start it with: ai sidecar start eval-bridge

$ ai eval run --dataset eval/qa-pairs.yaml
running 12 evaluations...
✓ 10 passed · 2 failed · avg score 4.2 / 5.0
$ ai eval report --results eval/qa-pairs-results.yaml
# Evaluation Report: my-eval
...
$ ai eval cost-report --state-dir .agint
task-id   status     turns  in-tok  out-tok  cost-usd  model
a1b2c3d4  completed  3      1243    456      $0.12     claude-sonnet-4-6
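The dataset schema itself is not documented on this page. As a rough sketch of what an LLM-as-judge QA dataset typically holds (every field name here is hypothetical):

```yaml
# Hypothetical shape only -- check the real schema before use.
name: qa-pairs
cases:
  - id: revenue-top5
    input: "top 5 companies by revenue"
    expected: "Lists Apple, Amazon, Alphabet, Microsoft, Meta with figures"
```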

ai skill

Manage reusable agent skills — prompt fragments bundled with tool requirements.

ai skill <subcommand> [flags]
Command        Description
add <ref>      Download and register a skill from a local path or GitHub ref
list           Show installed skills with name, version, and description
remove <name>  Unregister a skill
$ ai skill add github.com/agent-intelligence-ai/skills/graph-search
✓ installed graph-search v1.0.0
$ ai skill list
graph-search   1.0.0  Search the knowledge graph using hybrid retrieval
memory-recall  0.9.1  Cross-session entity and context recall
$ ai skill remove graph-search
✓ removed graph-search

ai sidecar

Manage sidecar services: mcp-toolbox MCP server, GraphRAG retrieval, graph construction, agent memory, and evaluation bridge.

ai sidecar <subcommand> [flags]
Command          Description
mcp-toolbox      Start mcp-toolbox MCP server (supports Neo4j, Postgres, and 48 source kinds)
start <service>  Start a Python sidecar service; blocks until Ctrl+C; restarts on crash
install          Download and install all Python sidecar packages into ~/.agint/venv
status           Show running/stopped/error state for each sidecar service

mcp-toolbox flags:
Flag      Type / Default         Description
--config  string / toolbox.yaml  Path to toolbox YAML config
--port    int / 15001            Port to listen on

Service             Port   Description
graph-construction  :8090  Document-to-graph pipeline
graphrag            :8091  GraphRAG retrieval sidecar
agent-memory        :8092  Semantic memory sidecar
eval-bridge         :8093  Evaluation bridge sidecar

Start mcp-toolbox

$ ai sidecar mcp-toolbox --config toolbox.yaml --port 15001
✓ mcp-toolbox → :15001

Start a Python sidecar

$ ai sidecar start graphrag
✓ graphrag → :8091

Install and check status

$ ai sidecar install
✓ graphrag-retrieval installed
✓ graph-construction installed
✓ agent-memory installed
✓ eval-bridge installed
$ ai sidecar status
graphrag-retrieval :8091 ✓ running
graph-construction :8090 ✓ running
agent-memory :8092 ✓ running
eval-bridge :8093 ✗ stopped

ai web

Start a local web console for agentic chat with an optional debug panel (tool calls, token usage, latency, OTel spans).

ai web [flags]
Flag            Type / Default                  Description
--port          int / 8888                      Port for the web console server; 0 = OS-assigned
--host          string / 127.0.0.1              Bind address; use 0.0.0.0 for team sharing
--restart       bool / false                    Replace a running ai web instance on the same port
--no-open       bool / false                    Don't open the browser automatically
--no-serve      bool / false                    Skip auto-starting ai serve (proxy-only mode)
--debug         bool / false                    Show debug panel by default (tool calls, token usage, traces)
--a2a           string / http://localhost:8080  ai serve endpoint to connect to
--evals-dir     string / evals                  Directory to write saved eval cases
--sessions-dir  string / .ai/sessions           Directory for session files
--no-persist    bool / false                    Don't write session files to disk
--config        string                          Path to agent.toml; sessions stored alongside this file
--token         string                          Bearer token for authentication; env: AI_WEB_TOKEN

Panel          Content
Chat           Conversation history with streaming responses, dark terminal theme
Debug          Tool calls (name + input/output), token counters, context utilization bar, OTel spans
System prompt  Current resolved system prompt (base + skill fragments)

The web console auto-starts ai serve unless --no-serve is set. Use --a2a to point to an already-running server.

$ ai web
✓ web console → http://localhost:8888
opening browser...
$ ai web --debug --port 9999
✓ web console → http://localhost:9999 (debug mode)
opening browser...
$ ai web --host 0.0.0.0 --token mysecret
✓ web console → http://0.0.0.0:8888 (team sharing mode)

ai tui

Start a terminal UI for agentic chat. Connects to the agent running via ai serve and renders a two-pane chat interface.

ai tui [flags]
Flag        Type / Default                  Description
--a2a       string / http://localhost:8080  ai serve endpoint
--no-serve  bool / false                    Skip auto-starting ai serve (proxy-only mode)
--config    string                          Path to agent.toml
--token     string                          Bearer token for authentication; env: AI_WEB_TOKEN

Area              Content
Left pane (60%)   Chat history and input
Right pane (40%)  ASCII trace (per turn, shows tool calls)
Footer            Token/cost counter + keyboard hint

Key        Action
Enter      Submit message
Up / Down  Cycle previous messages
Ctrl+K     Command palette
Ctrl+C     Exit
$ ai tui
[launches two-pane terminal UI]
$ ai tui --a2a http://localhost:9090 --no-serve
[connects to existing server on :9090]

ai status

Show health status of all running components: server, sidecars, graph backend, and model.

ai status [flags]
Flag        Type / Default                  Description
--endpoint  string / http://localhost:8080  Server base URL; env: AI_ENDPOINT

Code  Meaning
0     All components healthy
7     Server not running or a component is unreachable
$ ai status
✓ server :8080 running
✓ mcp :8081 running
✓ toolbox :15000 running 14 tools
✓ graphrag :8091 running
✗ eval-bridge :8093 stopped
$ ai status --output json
{"server":{"running":true,"port":8080},...}
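Since --output json is machine-readable, a CI gate can key off the server field. A sketch using the sample output above; in a real pipeline you would substitute status="$(ai --quiet status --output json)":

```shell
# Gate a CI job on server health. The JSON here is the sample from this
# page; in CI, capture it from `ai status --output json` instead.
status='{"server":{"running":true,"port":8080}}'
case "$status" in
  *'"running":true'*) echo "server up" ;;
  *) echo "server down" >&2; exit 1 ;;
esac
```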

ai completion

Output a shell completion script for bash, zsh, fish, or powershell.

ai completion <shell>
Shell       Install command
bash        source <(ai completion bash)
zsh         ai completion zsh > "${fpath[1]}/_ai"
fish        ai completion fish > ~/.config/fish/completions/ai.fish
powershell  ai completion powershell | Out-String | Invoke-Expression

Bash (add to ~/.bashrc)

$ source <(ai completion bash)

Zsh (persistent)

$ ai completion zsh > "${fpath[1]}/_ai"

ai version

Print version, git commit, and build date.

ai version

No flags. Output includes version tag, short commit hash, and build timestamp. Also shown in ai --help header.

$ ai version
ai version v0.1.0 (commit: abc1234, built: 2026-03-09T14:23:51Z)