How MUXI Works

The 30-second mental model

You completed the quickstart. Here's what you actually built:

---
config:
  layout: elk
  elk:
    mergeEdges: true
    nodePlacementStrategy: LINEAR_SEGMENTS
---
flowchart TB
    Request(["<strong>Your Request</strong>
<small>“Hello, assistant!”</small>"])
    Server["<strong>MUXI Server :7890</strong>
<small>Routes requests / Manages formations / Handles auth</small>"]
    Overlord["<strong>Overlord</strong>
<small>Loads memory context / Routes to agents / Applies soul / Updates memory</small>"]
    A1["Agent"] & A2["Agent"] & A3["Agent"]
    LLM["LLM
(OpenAI)"] & Tools["Tools
(MCP)"] & RAG["Knowledge
(RAG)"]
    Response(["Response"])

    Request --> Server
    Server --> Overlord
    Overlord --> A1 & A2 & A3
    A1 & A2 & A3 --> LLM & Tools & RAG
    LLM & Tools & RAG --> Response

Think of it as:

  • Server = Traffic controller (like Nginx for AI)
  • Formation = Your complete AI system (like a Docker container)
  • Overlord = The brain that manages memory, routes requests, and applies soul
  • Agent = A specialized worker that uses tools and knowledge to complete tasks

What happens when you send a message

  1. Your app sends a request

    curl -X POST http://localhost:8001/v1/chat \
      -H "Content-Type: application/json" \
      -d '{"message": "What can you help me with?"}'

    The request hits your formation's API and goes to the Overlord.

  2. The Overlord builds context

    The Overlord loads context from four memory layers:

    • Buffer memory - Recent conversation messages
    • Long-term memory - User preferences and history (if enabled)
    • User Synopsis - Who the user is (derived from persistent memory)
    • Working memory - Current session state

    This context is attached to your message before any agent sees it.
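These layers are switched on in formation.afs. A minimal sketch: memory.persistent.enabled is the documented key from the quickstart config, while the buffer keys shown here are assumptions, so check the Formation Schema for the exact names:

```yaml
# formation.afs: illustrative memory sketch
memory:
  buffer:
    size: 20          # hypothetical key: how many recent messages to keep
  persistent:
    enabled: true     # documented key: enables long-term memory
```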

  3. The Overlord routes to an agent

    The Overlord decides how to handle your request:

    1. SOP match? → Execute the standard procedure
    2. Complex request? → Decompose into multi-agent workflow
    3. Simple request? → Route to the best-suited agent

    User:  "What can you help me with?"
      ↓
    Overlord: "Simple question → route to 'assistant' agent"

  4. The agent processes with tools

    The selected agent:

    • Receives the message + context from the Overlord
    • Calls MCP tools if needed (web search, databases, etc.)
    • Retrieves relevant knowledge (RAG)
    • Sends everything to the LLM

  5. The Overlord applies soul and responds

    The Overlord:

    • Applies the configured soul (tone, style) to the response
    • Streams the response back to your app
    • Updates all memory layers with the conversation
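The soul applied in this step lives in the formation configuration. A hypothetical sketch only: the key names below (soul, tone, style) are assumptions based on the description above, not confirmed schema, so check the Formation Schema for the real shape:

```yaml
# formation.afs: hypothetical soul sketch (key names are assumptions)
soul:
  tone: friendly    # assumed key: overall tone of responses
  style: concise    # assumed key: writing style applied by the Overlord
```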

For a deep dive into every step, see Request Lifecycle.

The four things you configure

Every formation has four core building blocks. You don't need all of them - start simple and add as needed.

1. Agents

What they do: Specialized workers with specific roles.

Example: A "researcher" agent that finds information, a "writer" agent that drafts content.

# agents/assistant.afs
id: assistant
name: My Assistant
system_message: You are a helpful assistant.

Learn more →

2. Tools (MCP)

What they do: Give agents capabilities beyond text generation.

Example: Web search, database queries, file operations, API calls.

# mcp/web-search.afs
id: web-search
server: "@anthropic/web-search"

Learn more →

3. Memory

What it does: Remembers conversations across sessions.

Example: User preferences, conversation history, learned context.

# formation.afs
memory:
  persistent:
    enabled: true

Learn more →

4. Knowledge (RAG)

What it does: Gives agents access to your documents.

Example: Product docs, FAQs, internal wikis.

# agents/support.afs
knowledge:
  sources:
    - path: ./docs

Learn more →

Formation file structure

When you ran muxi new formation, you got this structure:

my-assistant/
├── formation.afs      # Main configuration (LLM, memory, defaults)
├── agents/            # Agent definitions (auto-discovered)
│   └── assistant.afs  # Your agent
├── mcp/               # Tool configurations (optional)
├── knowledge/         # Documents for RAG (optional)
├── sops/              # Standard procedures (optional)
├── triggers/          # Webhook templates (optional)
└── secrets.example    # Required API keys template

The key files

formation.afs is the main configuration file. It sets defaults for the entire formation.

# formation.afs
schema: "1.0.0"
id: my-assistant

llm:
  models:
    - text: "openai/gpt-4o"

memory:
  persistent:
    enabled: true

Each agent lives in its own file and is auto-discovered from the agents/ directory.

# agents/assistant.afs
schema: "1.0.0"
id: assistant
name: Helpful Assistant
description: General-purpose assistant

system_message: |
  You are a helpful assistant. Be concise and friendly.

# Optional: agent-specific tools
mcp_servers:
  - web-search

# Optional: agent-specific knowledge
knowledge:
  sources:
    - path: ./docs/faq.md

MCP files define tool servers that agents can use. Each file in mcp/ defines one server.

# mcp/web-search.afs
schema: "1.0.0"
id: web-search
server: "@anthropic/web-search"

# Some tools need API keys
env:
  API_KEY: ${{ secrets.SEARCH_API_KEY }}
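Those ${{ secrets.* }} references resolve from your secrets store, and the generated secrets.example file is the template listing what the formation needs. An illustrative sketch: SEARCH_API_KEY matches the reference above, while OPENAI_API_KEY is an assumption based on the formation's OpenAI model:

```
# secrets.example: names only; fill real values in your secrets store
OPENAI_API_KEY=
SEARCH_API_KEY=
```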

Single agent vs. multi-agent

Single agent (what you built)

Simple formations have one agent that handles everything:

User → Overlord → assistant → LLM → Response

Good for: chatbots, simple assistants, focused tools.

Multi-agent (when you're ready)

Complex formations have specialized agents that collaborate:

User → Overlord ─┬→ researcher → find information
                 ├→ analyst    → analyze data
                 └→ writer     → draft response

The Overlord routes requests to the right agent or coordinates multiple agents for complex tasks.

Good for: customer support systems, research assistants, content pipelines.
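Because agents are auto-discovered from agents/, moving to multi-agent is mostly a matter of adding files. A sketch of a second agent using only fields shown earlier on this page; the id, description, and wording are illustrative:

```yaml
# agents/researcher.afs: illustrative second agent
schema: "1.0.0"
id: researcher
name: Researcher
description: Finds and summarizes information

system_message: |
  You are a research specialist. Find relevant information
  and return concise, sourced findings.

# Optional: give this agent web search
mcp_servers:
  - web-search
```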

Build Multi-Agent Teams →

Development vs. production

|               | muxi dev      | muxi deploy    |
| ------------- | ------------- | -------------- |
| Where it runs | Your machine  | MUXI Server    |
| Port          | 8001 (direct) | 7890 (proxied) |
| Hot reload    | Yes           | No             |
| Use for       | Development   | Production     |

Local development

muxi dev
# Formation running at http://localhost:8001

Production deployment

muxi deploy
# Formation deployed to server at http://server:7890/api/my-assistant/

Deploy to Production →

Next steps

Now that you understand how MUXI works, choose your path:

Go deeper

Architecture - Complete system architecture
Formation Schema - Full YAML specification
The Overlord - How orchestration works
Request Lifecycle - Every step of a request