# Agent Formation Schema

This page documents the formation file format. Every formation starts with a `formation.afs` (or `.yaml`) file that defines agents, tools, memory, and behavior.
## Copy-paste starter

The smallest valid formation:

```yaml
# formation.afs
schema: "1.0.0"
id: my-assistant
description: A simple assistant

llm:
  api_keys:
    openai: "${{ secrets.OPENAI_API_KEY }}"
  models:
    - text: "openai/gpt-4o"

agents:
  - assistant
```

```yaml
# agents/assistant.afs
schema: "1.0.0"
id: assistant
name: Assistant
description: A helpful assistant
system_message: You are a helpful assistant.
```
## Top-Level Structure

```yaml
schema: "1.0.0"                      # Required: Schema version
id: my-formation                     # Required: Unique identifier
description: A helpful AI assistant  # Required: Description
init: "mkdir -p /tmp/workspace"      # Optional: Shell command run before services start

llm: {...}                # LLM configuration
agents: [...]             # At least one agent
memory: {...}             # Memory configuration
mcp: {...}                # MCP tool settings
rce: {...}                # RCE for skill script execution
overlord: {...}           # Orchestration settings
server: {...}             # Server configuration
async: {...}              # Async behavior
scheduler: {...}          # Scheduled tasks
a2a: {...}                # Agent-to-agent
user_credentials: {...}   # User credential handling
```
## LLM Configuration

```yaml
llm:
  api_keys:
    openai: "${{ secrets.OPENAI_API_KEY }}"
    anthropic: "${{ secrets.ANTHROPIC_API_KEY }}"
  settings:
    temperature: 0.7
    max_tokens: 4096
    timeout_seconds: 30
  models:
    - text: "openai/gpt-4o"
    - embedding: "openai/text-embedding-3-large"
    - vision: "openai/gpt-4o"
      settings:
        max_tokens: 1500
```
### Model Capabilities

| Capability | Purpose | Example |
|---|---|---|
| `text` | Text generation | `openai/gpt-4o` |
| `embedding` | Vector embeddings | `openai/text-embedding-3-large` or `local/all-MiniLM-L6-v2` |
| `vision` | Image analysis | `openai/gpt-4o` |
| `audio` | Transcription | `openai/whisper-1` |
| `video` | Video analysis | `google/gemini-pro-vision` |
| `documents` | Document processing | `openai/gpt-4o` |
| `streaming` | Progress updates | `openai/gpt-4o-mini` |
### Supported Providers

| Provider | Model Format | Example |
|---|---|---|
| OpenAI | `openai/{model}` | `openai/gpt-4o` |
| Anthropic | `anthropic/{model}` | `anthropic/claude-sonnet-4-20250514` |
| Google | `google/{model}` | `google/gemini-pro` |
| Ollama | `ollama/{model}` | `ollama/llama3` |
| Local (embeddings) | `local/{model}` | `local/all-MiniLM-L6-v2` |
## Agents

Agents are defined in separate files in the `agents/` directory and must be explicitly listed in the formation manifest:

```yaml
# agents/assistant.afs
schema: "1.0.0"
id: assistant
name: AI Assistant
description: General-purpose assistant
system_message: |
  You are a helpful assistant.

knowledge:
  enabled: true
  sources:
    - path: knowledge/docs/
```
### Agent Fields

| Field | Type | Required | Description |
|---|---|---|---|
| `schema` | string | Yes | Schema version (`"1.0.0"`) |
| `id` | string | Yes | Unique identifier |
| `name` | string | Yes | Display name |
| `description` | string | Yes | What the agent does |
| `system_message` | string | No | System prompt defining behavior |
| `knowledge` | object | No | RAG configuration |
| `llm_models` | list | No | Override formation LLM |
| `mcp_servers` | list | No | MCP server references (string IDs or inline dicts) |
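A minimal sketch of the per-agent overrides, assuming `llm_models` mirrors the formation-level `models` shape and `mcp_servers` takes string IDs (as the table above suggests):

```yaml
# agents/analyst.afs (illustrative sketch)
schema: "1.0.0"
id: analyst
name: Analyst
description: Data analysis specialist
llm_models:
  - text: "anthropic/claude-sonnet-4-20250514"  # assumed to mirror the formation-level models shape
mcp_servers:
  - web-search  # string ID reference to mcp/web-search.afs
```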
## Memory

```yaml
memory:
  buffer:
    size: 50
    multiplier: 10
    vector_search: true
  working:
    max_memory_mb: auto
    mode: local # or "remote" for FAISSx
  persistent:
    connection_string: "postgres://user:pass@localhost/db"
    # or: "sqlite:///data/memory.db"
  user_synopsis:
    enabled: true
    cache_ttl: 3600
```

**Default behavior:** When `persistent` is omitted, SQLite is automatically enabled with `memory.db` in the formation directory. To explicitly disable it, set `persistent: false`. See Memory Reference for details.
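For example, to opt out of the SQLite default and keep only buffer memory:

```yaml
memory:
  buffer:
    size: 50
  persistent: false  # disable the automatic SQLite store
```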
## MCP Configuration

MCP servers are defined in separate files in the `mcp/` directory:

```yaml
# mcp/web-search.afs
schema: "1.0.0"
id: web-search
type: command
command: npx
args: ["-y", "@modelcontextprotocol/server-brave-search"]
auth:
  type: env
  BRAVE_API_KEY: "${{ secrets.BRAVE_API_KEY }}"
```

```yaml
# mcp/api-service.afs
schema: "1.0.0"
id: api-service
type: http
endpoint: "https://api.example.com/mcp"
auth:
  type: bearer
  token: "${{ secrets.API_TOKEN }}"
parameters: # Injected into every tool call
  driveId: "${{ secrets.ORG_DRIVE_ID }}"
```

Use `parameters` to inject infrastructure constants (drive IDs, tenant IDs) into every tool call. See Default Parameters for details.
Formation-level MCP settings:

```yaml
mcp:
  connection_ttl: 300  # Idle connection TTL in seconds (default: 300, 0 = ephemeral)
  default_timeout_seconds: 30
  max_tool_iterations: 10
  max_tool_calls: 50
```

`connection_ttl` controls how long idle MCP connections stay open between tool calls. Each tool call resets the timer, so actively used servers stay connected. Set it to `0` to disconnect after every call. Individual servers can override this with their own `connection_ttl` field.
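For instance, a server that should disconnect after every call while the rest of the formation keeps the default (a sketch based on the override rule above):

```yaml
# mcp/web-search.afs
schema: "1.0.0"
id: web-search
type: command
command: npx
args: ["-y", "@modelcontextprotocol/server-brave-search"]
connection_ttl: 0  # server-level override: ephemeral connections for this server only
```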
## Skills

Skills are `SKILL.md` files in `skills/` directories. Formation-level skills are public (all agents see them). Agent-level skills are private.

```
my-formation/
├── skills/                      # Public skills
│   └── report-generation/
│       ├── SKILL.md
│       └── scripts/
│           └── generate.py
└── agents/
    └── analyst/
        └── skills/              # Private to analyst
            └── forecasting/
                └── SKILL.md
```

No explicit `skills:` block is needed in `formation.afs` -- skills are discovered from the directory structure.

See Skills Reference for full `SKILL.md` syntax.
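As a rough sketch, a `SKILL.md` pairs metadata with instructions for the agent; the frontmatter fields shown here are illustrative assumptions, not the authoritative syntax, so consult the Skills Reference:

```markdown
---
name: report-generation          # illustrative field names; see Skills Reference
description: Generates formatted reports from raw data
---

# Report Generation

Run scripts/generate.py to render a report from a CSV input.
```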
## RCE (Remote Code Execution)

Required only for skills with executable scripts:

```yaml
rce:
  url: "http://localhost:6000"
  token: "${{ secrets.RCE_TOKEN }}"
```

| Field | Type | Required | Description |
|---|---|---|---|
| `url` | string | Yes | RCE service URL |
| `token` | string | Yes | Authentication token |
## Overlord (Orchestration)

```yaml
overlord:
  soul: |
    You are a helpful, professional assistant.
  llm:
    model: "openai/gpt-4o-mini"
    settings:
      temperature: 0.2
  workflow:
    auto_decomposition: true
    complexity_threshold: 7.0
    plan_approval_threshold: 8
    max_parallel_tasks: 5
  response:
    format: markdown
    streaming: true
  clarification:
    style: conversational
    max_rounds:
      direct: 3
      brainstorm: 10
```
## Server Configuration

```yaml
server:
  host: "0.0.0.0"
  port: 8001
  api_keys:
    admin_key: "${{ secrets.ADMIN_KEY }}"
    client_key: "${{ secrets.CLIENT_KEY }}"
```
## Async Configuration

```yaml
async:
  threshold_seconds: 30
  enable_estimation: true
  webhook_url: "https://myapp.com/webhooks/muxi"
  webhook_retries: 3
```

Both `threshold_seconds` and `webhook_url` can be overridden per-request in the chat API call body. A webhook URL is not required for async mode -- without one, clients poll `GET /v1/requests/{id}` for results. See Async Processing for details.
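As an illustration only (the exact payload shape is an assumption here, not documented on this page; see Async Processing), a per-request override might be sent in the chat call body like so:

```json
{
  "message": "Summarize this repository",
  "threshold_seconds": 10,
  "webhook_url": "https://myapp.com/webhooks/muxi"
}
```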
## Include Directive

Reference external files:

```yaml
agents:
  - researcher  # String ID resolves to agents/researcher.afs
  - writer      # String ID resolves to agents/writer.afs
```

You can also use the `$include` directive:

```yaml
agents:
  - $include: agents/researcher.afs
  - $include: agents/writer.afs
```
## Secrets Reference

Use `${{ secrets.KEY }}` syntax:

```yaml
llm:
  api_keys:
    openai: "${{ secrets.OPENAI_API_KEY }}"
```

```yaml
# In mcp/*.afs files:
auth:
  type: env
  API_KEY: "${{ secrets.API_KEY }}"
```
## Validation

Validate your formation:

```bash
muxi validate
```
## Complete Example

Formation file (note that all `mcp` settings live under a single top-level key -- duplicate top-level keys are invalid YAML):

```yaml
# formation.afs
schema: "1.0.0"
id: research-assistant
description: AI research and writing team
version: "1.0.0"

llm:
  api_keys:
    openai: "${{ secrets.OPENAI_API_KEY }}"
  models:
    - text: "openai/gpt-4o"
    - embedding: "openai/text-embedding-3-large"

memory:
  buffer:
    size: 50
    vector_search: true
  persistent:
    connection_string: "sqlite:///data/memory.db"

overlord:
  soul: You are a professional research assistant.
  workflow:
    auto_decomposition: true
    complexity_threshold: 7.0

mcp:
  default_timeout_seconds: 30
  servers:
    - web-search

server:
  api_keys:
    admin_key: "${{ secrets.ADMIN_KEY }}"
    client_key: "${{ secrets.CLIENT_KEY }}"

agents:
  - researcher
```
Agent file:

```yaml
# agents/researcher.afs
schema: "1.0.0"
id: researcher
name: Research Specialist
description: Gathers information from web sources
system_message: |
  Research topics thoroughly with web search.
  Always cite your sources.
```
MCP file:

```yaml
# mcp/web-search.afs
schema: "1.0.0"
id: web-search
type: command
command: npx
args: ["-y", "@modelcontextprotocol/server-brave-search"]
auth:
  type: env
  BRAVE_API_KEY: "${{ secrets.BRAVE_API_KEY }}"
```