CLI Reference
The ai binary is a single statically-linked Go binary (<25 MB, <200 ms cold start).
All commands read from agent.toml in the current directory by default.
Installation
One-line install (macOS & Linux)
curl -fsSL https://agent-intelligence.ai/install.sh | sh
Go install
go install github.com/agent-intelligence-ai/agent-intelligence/cmd/ai@latest
Homebrew
brew install agent-intelligence-ai/tap/ai
Global Flags
Global flags are placed before the subcommand name and apply to every command.
ai [global flags] <subcommand> [subcommand flags] [args]
| Flag | Type / Default | Description |
|---|---|---|
| --config | string / agent.toml | Path to the agent config file; env: AI_CONFIG |
| --verbose | bool / false | Enable verbose output (tool call traces, debug logs); env: AI_VERBOSE=1 |
| --quiet, -q | bool / false | Suppress non-essential output; errors only (useful for scripting / CI); env: AI_QUIET=1 |
| --no-color | bool / false | Disable ANSI color codes; keeps Unicode symbols; env: AI_NO_COLOR=1 or NO_COLOR |
| --plain | bool / false | Plain ASCII output: strips ANSI codes and Unicode symbols (✓→ok, ✗→FAIL, ──→--). Implies --no-color. Required with --yes (SF-3 safety floor). env: PLAIN=1 |
| --output | string / text | Output format: text (default), json, ndjson (ai run only); env: AI_OUTPUT |
| --non-interactive | bool / false | Suppress all interactive prompts; commands requiring input exit 2; env: AI_NON_INTERACTIVE=1 |
| --yes | bool / false | Auto-approve all human_approval prompts (CI use only); requires --plain (SF-3 safety floor) |
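A couple of illustrative invocations (the task strings are placeholders); note that global flags always precede the subcommand:

```shell
# Verbose tool-call traces for a single run
ai --verbose run "list available tools"

# CI usage: plain ASCII output with auto-approval (SF-3 requires --plain with --yes)
ai --plain --yes run "nightly report"
```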
ai init
Run the onboarding interview and generate agent.toml + toolbox.yaml.
ai init [flags] [intent]
Flags
| Flag | Type / Default | Description |
|---|---|---|
| --no-ai | bool / false | Skip Claude interview; prompt each field manually |
| --no-graph | bool / false | Skip Neo4j introspection; generate a skeleton config |
| --credentials | file | Load Neo4j connection details from an Aura/Sandbox credentials file |
| --template | string | Agent type: research, coding, data, ops, custom (skips intent detection) |
| --force | bool / false | Bypass post-generation validation warnings |
| --name | string | Agent name (required with --non-interactive) |
| --model | string | Model ID, e.g. claude-sonnet-4-6 (required with --non-interactive) |
| --provider | string / anthropic | Model provider: anthropic or openai-compat |
The optional intent string describes what the agent should do. Claude uses it to scaffold a relevant config, toolbox.yaml, and system prompt.
Basic init with intent
Template + credentials file
Manual init (no AI interview)
Non-interactive (CI / scripting)
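Illustrative invocations for the scenarios above (the intent string, credentials path, and agent name are placeholders; flag names come from the table above):

```shell
# Basic init with intent
ai init "research assistant over our Neo4j movie graph"

# Template + credentials file
ai init --template research --credentials ./aura-credentials.txt

# Manual init (no AI interview)
ai init --no-ai

# Non-interactive (CI / scripting); --name and --model are required here
ai --non-interactive init --name ci-agent --model claude-sonnet-4-6
```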
ai show
Display agent.toml with syntax highlighting, inspect toolbox sources, or show the full config schema.
ai show [flags] [file]
Flags
| Flag | Type / Default | Description |
|---|---|---|
| --full | bool / false | Show the complete TOML dump of all config fields (no truncation) |
| --section | string | Show only the named config section: model, memory, toolbox, security, graph, server, a2a |
| --tools | bool, optional path | Show toolbox sources, tools, and toolsets; optionally takes a toolbox file path, otherwise auto-detects toolbox.yaml |
| --filter | bool / false | Pipe output through fzf (if available) or built-in interactive line filter |
| --grep | string | Filter output to lines matching a case-insensitive regex |
| --spec | bool / false | Show full config schema with all fields, types, and defaults |
| --json | bool / false | Output as JSON instead of colored TOML |
| --capabilities | bool / false | Output the A2A AgentCard JSON (no running server required) |
| --diagram | bool / false | Render agent topology as an ASCII architecture diagram |
| --svg | bool / false | With --diagram: output SVG instead of ASCII |
Show current config
Show a specific section
Show toolbox sources and filter with grep
Show config schema
Output A2A AgentCard JSON
Render topology diagram as SVG
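Illustrative invocations for the scenarios above (the output filename is a placeholder):

```shell
# Show current config
ai show

# Show a specific section
ai show --section model

# Show toolbox sources and filter with grep
ai show --tools --grep neo4j

# Show config schema
ai show --spec

# Output A2A AgentCard JSON
ai show --capabilities

# Render topology diagram as SVG
ai show --diagram --svg > topology.svg
```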
ai serve
Start the A2A server, MCP server, and managed sidecars (mcp-toolbox, etc.).
ai serve [flags]
Flags
| Flag | Type / Default | Description |
|---|---|---|
| --port | int / 8080 | Port for the A2A server |
| --mcp-port | int / 8081 | Port for the MCP server (Claude Desktop integration) |
| --host | string / 127.0.0.1 | Host interface to bind; use 0.0.0.0 for team sharing |
| --token | string | Bearer token protecting /ui/ and /api/; env: AI_WEB_TOKEN |
| --config | string / agent.toml | Path to agent.toml config |
| --toolbox-config | string | Path to toolbox YAML config (auto-detected if toolbox.yaml exists) |
| --no-sidecar | bool / false | Skip sidecar startup (useful for testing) |
| --state-dir | string / .agint | Directory for durable state: sessions + tasks |
What it starts
| Process | Port | Description |
|---|---|---|
| A2A server | :8080 | POST /a2a, POST /a2a/stream, GET /a2a/tasks/{id} |
| MCP server | :8081 | Streamable HTTP at POST /mcp, legacy SSE at GET /sse; tools/list, tools/call, prompts/list |
| mcp-toolbox | :15000 | MCP tool server subprocess (Neo4j, Postgres, 48 source kinds) |
| Python sidecars | :8090–:8093 | GraphRAG, graph construction, memory, eval |
Default serve
Custom ports with token auth
Team sharing (bind all interfaces)
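Illustrative invocations for the scenarios above (ports and the token variable are placeholders):

```shell
# Default serve
ai serve

# Custom ports with token auth
ai serve --port 9090 --mcp-port 9091 --token "$AI_WEB_TOKEN"

# Team sharing (bind all interfaces; pair with a token)
ai serve --host 0.0.0.0 --token "$AI_WEB_TOKEN"
```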
ai run
Submit a task to an agent and stream the response to stdout.
ai run [flags] <task>
Flags
| Flag | Type / Default | Description | Example |
|---|---|---|---|
| --endpoint | string / — | Full A2A endpoint URL; skips loading agent.toml. Use for remote agents and deployed instances. | --endpoint https://my-agent.fly.dev/a2a |
| --token | string / — | Bearer token sent as Authorization: Bearer <token> on all A2A requests. | --token $AI_TOKEN |
| --stream | bool / true | Stream SSE events as they arrive (default behaviour; flag accepted for compatibility). | --stream |
| --timeout | duration / — | Maximum time to wait for task completion (e.g. 30s, 5m); env: AI_TIMEOUT; exit 8 on timeout. | --timeout 60s |
| --verbose | bool / false | Show tool call trace (name, input, result) | --verbose |
| --agent | string / localhost:8080 | Deprecated — use --endpoint instead. Target A2A base URL. | --agent localhost:9090 |
Simple query
Remote agent + verbose tool trace
With timeout
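Illustrative invocations for the scenarios above (task strings are placeholders; the Fly.io endpoint mirrors the example in the flags table):

```shell
# Simple query
ai run "summarize open tasks"

# Remote agent + verbose tool trace
ai run --endpoint https://my-agent.fly.dev/a2a --verbose "which tools do you have?"

# With timeout (exit code 8 on expiry)
ai run --timeout 60s "generate the weekly report"
```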
Remote agent queries
Use --endpoint and --token to target any deployed A2A-compliant agent without a local agent.toml. This is the pattern used in Demos 02–05.
Query a deployed Fly.io agent
Stream SSE progress events explicitly
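Illustrative remote invocations (endpoint URL, token variable, and task strings are placeholders):

```shell
# Query a deployed Fly.io agent
ai run --endpoint https://my-agent.fly.dev/a2a --token "$AI_TOKEN" "what can you do?"

# Stream SSE progress events explicitly
ai run --stream --endpoint https://my-agent.fly.dev/a2a --token "$AI_TOKEN" "build a graph summary"
```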
ai deploy
Deploy the agent runtime to a cloud provider (Fly.io or Cloud Run).
ai deploy [flags]
ai deploy status
ai deploy logs
Flags
| Flag | Type / Default | Description |
|---|---|---|
| --target | string / fly | Deployment target: fly or cloudrun |
| --config | string / agent.toml | Path to agent.toml |
| --domain | string / — | Custom domain override |
| --dry-run | bool / false | Print deployment plan without executing; exit 0 |
Subcommands
| Command | Description |
|---|---|
| status | Show deployment health and instance count |
| logs | Stream deployment logs to stdout |
Deploy to Fly.io
Dry run (preview deployment plan)
Check status & stream logs
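Illustrative invocations for the scenarios above:

```shell
# Deploy to Fly.io
ai deploy --target fly

# Dry run (print deployment plan, exit 0)
ai deploy --dry-run

# Check status & stream logs
ai deploy status
ai deploy logs
```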
ai graph
Manage the knowledge graph backend — connect, introspect, build, and promote.
ai graph <subcommand> [flags]
Subcommands
| Command | Description |
|---|---|
| connect <uri> | Validate a Neo4j URI and print node/relationship counts |
| introspect | Output structured JSON schema (node labels, rel types, cardinality) |
| init | Create a local Kuzu database in .agint/graph/ |
| build <source> | Ingest documents into the graph via graph-construction sidecar (port 8090) |
| promote | Export local Kuzu graph to Neo4j Aura |
graph connect
| Flag | Type / Default | Description |
|---|---|---|
| --username | string | Neo4j username; env: NEO4J_USERNAME |
| --password | string | Neo4j password; env: NEO4J_PASSWORD |
| --database | string / neo4j | Neo4j database name; env: NEO4J_DATABASE |
graph introspect
| Flag | Type / Default | Description |
|---|---|---|
| --uri | string | Neo4j URI; env: NEO4J_URI |
| --username | string | Neo4j username; env: NEO4J_USERNAME |
| --password | string | Neo4j password; env: NEO4J_PASSWORD |
| --database | string / neo4j | Neo4j database name; env: NEO4J_DATABASE |
| --timeout | int / 30 | Query timeout in seconds |
graph init
| Flag | Type / Default | Description |
|---|---|---|
| --local | bool / false | Create a local Kuzu embedded graph in .agint/graph/ |
| --state-dir | string / .agint | Directory for agent state |
graph build
| Flag | Type / Default | Description |
|---|---|---|
| --port | int / 8090 | Graph-construction sidecar port |
| --neo4j-uri | string | Neo4j URI; env: NEO4J_URI |
| --neo4j-username | string | Neo4j username; env: NEO4J_USERNAME |
| --neo4j-password | string | Neo4j password; env: NEO4J_PASSWORD |
| --poll | duration / 2s | Progress polling interval |
| --remote | bool / false | Submit as a Cloud Run Job instead of local sidecar |
| --project | string | GCP project ID (--remote only); env: GOOGLE_CLOUD_PROJECT |
| --region | string / us-central1 | GCP region (--remote only); env: GOOGLE_CLOUD_REGION |
| --config | string / agent.toml | Path to agent.toml for job name resolution |
graph promote
| Flag | Type / Default | Description |
|---|---|---|
| --target | string | Neo4j target URI; env: NEO4J_RW_URI |
| --username | string | Neo4j username; env: NEO4J_RW_USERNAME |
| --password | string | Neo4j password; env: NEO4J_RW_PASSWORD |
| --database | string / neo4j | Neo4j database; env: NEO4J_RW_DATABASE |
| --state-dir | string / .agint | Directory containing local graph state |
Connect and introspect
Build from documents
Remote build via Cloud Run
Promote local graph to Neo4j Aura
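Illustrative invocations for the scenarios above (the Aura host, document directory, and GCP project are placeholders; credentials can also come from the NEO4J_* environment variables listed in the flag tables):

```shell
# Connect and introspect
ai graph connect neo4j+s://<host>.databases.neo4j.io --username neo4j --password "$NEO4J_PASSWORD"
ai graph introspect --uri "$NEO4J_URI"

# Build from documents
ai graph build ./docs

# Remote build via Cloud Run
ai graph build ./docs --remote --project my-gcp-project --region us-central1

# Promote local graph to Neo4j Aura
ai graph promote --target "$NEO4J_RW_URI" --username "$NEO4J_RW_USERNAME" --password "$NEO4J_RW_PASSWORD"
```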
ai eval
Run evaluations against agents and produce quality / cost reports.
ai eval <subcommand> [flags]
Subcommands
| Command | Description |
|---|---|
| run | Run LLM-as-judge evaluation against a YAML dataset |
| report | Generate a markdown summary of evaluation results |
| cost-report | Show per-session cost breakdown from task data |
eval run
| Flag | Type / Default | Description |
|---|---|---|
| -dataset | string (required) | Path to evaluation dataset YAML |
| -results | string | Path to write results YAML (default: <dataset>-results.yaml) |
| -pass-threshold | float / 0.8 | Minimum score fraction to consider the run a PASS (0.0-1.0) |
| -sidecar-url | string / http://127.0.0.1:8093 | Eval sidecar URL |
eval report
| Flag | Type / Default | Description |
|---|---|---|
| -results | string (required) | Path to results YAML produced by ai eval run |
| -output | string | Path to write markdown report (default: print to stdout) |
eval cost-report
| Flag | Type / Default | Description |
|---|---|---|
| --state-dir | string / .agint | Agent state directory |
The eval run subcommand requires the eval sidecar to be running (port 8093). Start it with: ai sidecar start eval-bridge
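A plausible end-to-end eval flow, assuming a dataset at evals/smoke.yaml (note the single-dash flag style used by the eval subcommands):

```shell
# Start the eval sidecar, then run the evaluation against the dataset
ai sidecar start eval-bridge &
ai eval run -dataset evals/smoke.yaml -results evals/smoke-results.yaml -pass-threshold 0.8

# Summarize results and per-session cost
ai eval report -results evals/smoke-results.yaml
ai eval cost-report
```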
ai skill
Manage reusable agent skills — prompt fragments bundled with tool requirements.
ai skill <subcommand> [flags]
Subcommands
| Command | Description |
|---|---|
| add <ref> | Download and register a skill from a local path or GitHub ref |
| list | Show installed skills with name, version, and description |
| remove <name> | Unregister a skill |
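A sketch of the skill lifecycle (the GitHub ref and skill name are placeholders):

```shell
# Add a skill from a GitHub ref, inspect what's installed, then remove it
ai skill add github.com/example/skills/summarize
ai skill list
ai skill remove summarize
```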
ai sidecar
Manage sidecar services: mcp-toolbox MCP server, GraphRAG retrieval, graph construction, agent memory, and evaluation bridge.
ai sidecar <subcommand> [flags]
Subcommands
| Command | Description |
|---|---|
| mcp-toolbox | Start mcp-toolbox MCP server (supports Neo4j, Postgres, and 48 source kinds) |
| start <service> | Start a Python sidecar service; blocks until Ctrl+C; restarts on crash |
| install | Download and install all Python sidecar packages into ~/.agint/venv |
| status | Show running/stopped/error state for each sidecar service |
mcp-toolbox flags
| Flag | Type / Default | Description |
|---|---|---|
| --config | string / toolbox.yaml | Path to toolbox YAML config |
| --port | int / 15001 | Port to listen on |
Python sidecar services
| Service | Port | Description |
|---|---|---|
| graph-construction | :8090 | Document-to-graph pipeline |
| graphrag | :8091 | GraphRAG retrieval sidecar |
| agent-memory | :8092 | Semantic memory sidecar |
| eval-bridge | :8093 | Evaluation bridge sidecar |
Start mcp-toolbox
Start a Python sidecar
Install and check status
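Illustrative invocations for the scenarios above:

```shell
# Start mcp-toolbox
ai sidecar mcp-toolbox --config toolbox.yaml

# Start a Python sidecar (blocks until Ctrl+C; restarts on crash)
ai sidecar start graphrag

# Install and check status
ai sidecar install
ai sidecar status
```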
ai web
Start a local web console for agentic chat with an optional debug panel (tool calls, token usage, latency, OTel spans).
ai web [flags]
Flags
| Flag | Type / Default | Description |
|---|---|---|
| --port | int / 8888 | Port for the web console server; 0 = OS-assigned |
| --host | string / 127.0.0.1 | Bind address; use 0.0.0.0 for team sharing |
| --restart | bool / false | Replace a running ai web instance on the same port |
| --no-open | bool / false | Don't open the browser automatically |
| --no-serve | bool / false | Skip auto-starting ai serve (proxy-only mode) |
| --debug | bool / false | Show debug panel by default (tool calls, token usage, traces) |
| --a2a | string / http://localhost:8080 | ai serve endpoint to connect to |
| --evals-dir | string / evals | Directory to write saved eval cases |
| --sessions-dir | string / .ai/sessions | Directory for session files |
| --no-persist | bool / false | Don't write session files to disk |
| --config | string | Path to agent.toml; sessions stored alongside this file |
| --token | string | Bearer token for authentication; env: AI_WEB_TOKEN |
Web console features
| Panel | Content |
|---|---|
| Chat | Conversation history with streaming responses, dark terminal theme |
| Debug | Tool calls (name + input/output), token counters, context utilization bar, OTel spans |
| System prompt | Current resolved system prompt (base + skill fragments) |
The web console auto-starts ai serve unless --no-serve is set. Use --a2a to point to an already-running server.
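For example (the endpoint is the default from the flags table):

```shell
# Default web console (auto-starts ai serve, opens browser)
ai web

# Debug panel on, no browser, attach to an already-running server
ai web --debug --no-open --no-serve --a2a http://localhost:8080
```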
ai tui
Start a terminal UI for agentic chat. Connects to the agent running via ai serve and renders a two-pane chat interface.
ai tui [flags]
Flags
| Flag | Type / Default | Description |
|---|---|---|
| --a2a | string / http://localhost:8080 | ai serve endpoint |
| --no-serve | bool / false | Skip auto-starting ai serve (proxy-only mode) |
| --config | string | Path to agent.toml |
| --token | string | Bearer token for authentication; env: AI_WEB_TOKEN |
Layout
| Area | Content |
|---|---|
| Left pane (60%) | Chat history and input |
| Right pane (40%) | ASCII trace (per turn, shows tool calls) |
| Footer | Token/cost counter + keyboard hint |
Keyboard shortcuts
| Key | Action |
|---|---|
| Enter | Submit message |
| Up / Down | Cycle through previous messages |
| Ctrl+K | Command palette |
| Ctrl+C | Exit |
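For example, to attach the TUI to a server that is already running rather than auto-starting one:

```shell
# Connect to an existing ai serve instance
ai tui --no-serve --a2a http://localhost:8080
```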
ai status
Show health status of all running components: server, sidecars, graph backend, and model.
ai status [flags]
Flags
| Flag | Type / Default | Description |
|---|---|---|
| --endpoint | string / http://localhost:8080 | Server base URL; env: AI_ENDPOINT |
Exit codes
| Code | Meaning |
|---|---|
| 0 | All components healthy |
| 7 | Server not running or a component is unreachable |
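Illustrative checks (the remote URL is a placeholder):

```shell
# Check local components; exits 7 if anything is unreachable
ai status

# Check a remote deployment
ai status --endpoint https://my-agent.fly.dev
```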
ai completion
Output a shell completion script for bash, zsh, fish, or powershell.
ai completion <shell>
Supported shells
| Shell | Install command |
|---|---|
| bash | source <(ai completion bash) |
| zsh | ai completion zsh > "${fpath[1]}/_ai" |
| fish | ai completion fish > ~/.config/fish/completions/ai.fish |
| powershell | ai completion powershell \| Out-String \| Invoke-Expression |
Bash (add to ~/.bashrc)
Zsh (persistent)
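The install commands from the table, as they would appear in practice (for zsh, make sure compinit runs in your ~/.zshrc):

```shell
# Bash (add to ~/.bashrc)
source <(ai completion bash)

# Zsh (persistent)
ai completion zsh > "${fpath[1]}/_ai"
```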
ai version
Print version, git commit, and build date.
ai version
No flags. Output includes version tag, short commit hash, and build timestamp. Also shown in ai --help header.