Glossary

Quick reference for MUXI terminology.

A

A2A (Agent-to-Agent)

Protocol enabling software agents to communicate across system boundaries. In MUXI, A2A allows agents in different formations to collaborate and delegate tasks to each other.

Agent

In AI, an autonomous entity that perceives its environment and takes actions to achieve goals. In MUXI, agents are specialized workers within a formation - each has a system prompt defining its role and tools defining its capabilities. The Overlord routes tasks to the most appropriate agent.

Agent Formation Schema (AFS)

MUXI's YAML configuration format for defining formations. Files use .afs extension. Describes agents, tools, memory, triggers, and all formation behavior in a declarative, versionable format.
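
A minimal sketch of what an .afs file might contain. The top-level sections (agents, tools, memory, triggers) come from the description above; the specific field names and values here are illustrative, not the official schema:

```yaml
# hypothetical support-bot.afs -- field names are illustrative,
# not the authoritative AFS schema
name: support-bot
agents:
  - id: researcher
    system_message: "You are a research specialist."
    tools:
      - web_search
memory:
  persistent: true
triggers:
  - path: /github-push
```

Because the format is declarative YAML, formations can be reviewed in pull requests and versioned like any other source file.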

Artifact

In software, a byproduct of development (builds, logs, binaries). In MUXI, artifacts are structured outputs generated by agents - PDFs, images, CSV files, charts. Returned as Base64-encoded data with MIME type for easy handling.
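
Handling such an artifact on the client side reduces to decoding the Base64 payload and using the MIME type to decide what to do with the bytes. A sketch (the payload shape here is assumed for illustration, not MUXI's exact wire format):

```python
import base64

# Assumed artifact payload shape: Base64 data plus a MIME type.
# Illustrative only -- not MUXI's exact wire format.
artifact = {
    "mime_type": "text/csv",
    "data": base64.b64encode(b"name,score\nada,42\n").decode("ascii"),
}

# Decode the Base64 payload back into raw bytes...
raw = base64.b64decode(artifact["data"])

# ...and pick a file extension from the MIME type before saving.
extension = {"text/csv": ".csv", "application/pdf": ".pdf"}.get(
    artifact["mime_type"], ".bin"
)
print(f"decoded {len(raw)} bytes, would save as artifact{extension}")
```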

Async Mode

Asynchronous processing where the caller doesn't wait for completion. In MUXI, long-running tasks automatically switch to async mode - users get immediate acknowledgment and results arrive via webhook when ready.

B

Buffer Memory

Short-term storage for recent context. In MUXI, buffer memory holds the current session's conversation history. Cleared when the session ends; distinct from persistent memory, which survives across sessions.

C

Chain-of-Thought (CoT)

Prompting technique where an LLM reasons step-by-step before answering, improving accuracy on complex tasks. MUXI's Overlord uses chain-of-thought internally for task decomposition and agent selection.

Clarification

The process of gathering missing information before proceeding. In MUXI, when the Overlord can't complete a task without more details, it asks follow-up questions - supporting multi-turn clarification with full context preservation.

Context Window

The maximum amount of text (measured in tokens) an LLM can process in one request. MUXI manages context automatically via memory compression, user synopsis, and selective retrieval to maximize useful context within limits.

Credential (User)

Authentication tokens for accessing external services. In MUXI, each user stores their own credentials (GitHub tokens, API keys, etc.) - encrypted, isolated, enabling personalized access to tools without sharing secrets.

D

DAG (Directed Acyclic Graph)

A graph structure where edges have direction and no cycles exist - used widely for task scheduling and dependency management. MUXI uses DAGs internally to orchestrate complex multi-step workflows, ensuring tasks execute in correct order.
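
The "correct order" guarantee falls out of topological sorting, which Python's standard library provides directly. A generic illustration with a made-up workflow (not MUXI's internal scheduler):

```python
from graphlib import TopologicalSorter

# Hypothetical workflow: each key depends on the steps in its value set.
steps = {
    "draft_report": {"fetch_data", "analyze"},
    "analyze": {"fetch_data"},
    "fetch_data": set(),
    "send_email": {"draft_report"},
}

# static_order() yields steps so that every dependency runs before
# the steps that need it.
order = list(TopologicalSorter(steps).static_order())
print(order)  # dependencies always precede dependents
```

Because the graph is acyclic, such an ordering always exists; a cycle ("A needs B, B needs A") would make scheduling impossible, which is exactly why DAGs forbid them.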

E

Embedding

A dense vector representation that captures semantic meaning of text - similar concepts produce similar vectors. In MUXI, embeddings power memory retrieval, knowledge search, and semantic caching, enabling "meaning-based" rather than "keyword-based" matching.
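
"Meaning-based" matching boils down to comparing vectors, most commonly by cosine similarity. A toy sketch with made-up 3-dimensional vectors (real embedding models emit hundreds or thousands of dimensions):

```python
import math

def cosine_similarity(a, b):
    # Cosine of the angle between two vectors: 1.0 means same direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Toy "embeddings" -- invented for illustration.
dog = [0.9, 0.1, 0.0]
puppy = [0.8, 0.2, 0.1]
invoice = [0.0, 0.1, 0.9]

# Similar concepts score higher than unrelated ones.
print(cosine_similarity(dog, puppy) > cosine_similarity(dog, invoice))  # True
```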

F

FAISS

Facebook AI Similarity Search - an open-source library for efficient similarity search of dense vectors at scale. Foundation technology that FAISSx builds upon.

FAISSx

MUXI's enhanced FAISS wrapper for vector similarity search. Can run embedded (single process) or as a standalone server for shared memory across multiple formation instances in distributed deployments.

Formation

In MUXI, the atomic unit of deployment - a complete package containing agents, tools, memory configuration, and behavior definitions. Think of it like a Docker container for AI agents: self-contained, portable, versionable.

Function Calling

An LLM capability to output structured tool invocations instead of plain text - the model "decides" to call a function with specific parameters. MUXI leverages function calling for all MCP tool execution.
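
Conceptually, the model emits a structured invocation and the host application parses it and dispatches to a real function. A generic sketch (the exact JSON shape varies by provider; the tool name and registry here are hypothetical):

```python
import json

# A model's structured tool call -- shape varies by provider.
model_output = '{"name": "get_weather", "arguments": {"city": "Lisbon"}}'

# Hypothetical tool registry mapping tool names to functions.
def get_weather(city: str) -> str:
    return f"Sunny in {city}"  # stub; a real tool would call an API

tools = {"get_weather": get_weather}

# The host parses the call and dispatches to the matching function.
call = json.loads(model_output)
result = tools[call["name"]](**call["arguments"])
print(result)  # Sunny in Lisbon
```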

G

Grounding

Anchoring AI responses to factual, verifiable data sources rather than relying solely on training data. MUXI grounds responses through knowledge retrieval (RAG) and real-time tool access, reducing hallucinations.

H

Hallucination

When an LLM generates plausible-sounding but incorrect or fabricated information with confidence. MUXI mitigates hallucinations through RAG grounding, tool verification, and memory-based fact checking.

Human-in-the-Loop (HITL)

A pattern requiring human approval before automated systems take sensitive actions. In MUXI, HITL is configurable per action type - with customizable approval flows, timeouts, and escalation rules for production safety.

K

Knowledge

Information available for retrieval and reference. In MUXI, the Knowledge system implements RAG - embedding documents (PDFs, Markdown, text) and retrieving relevant context to ground agent responses in your data.

L

LLM (Large Language Model)

AI models trained on vast text corpora to understand and generate human language. Examples: GPT-4, Claude, Llama. MUXI orchestrates multiple LLMs through OneLLM, allowing different models for different tasks.

M

MCP (Model Context Protocol)

An open standard (by Anthropic) for connecting LLMs to external tools and data sources. MUXI uses MCP as its tool integration layer - any MCP-compatible server works out of the box.

Multi-Identity

Associating multiple identifiers with a single user (email, Slack ID, GitHub username, etc.). In MUXI, multi-identity enables unified memory and context when the same person interacts across different platforms.

Multi-Tenancy

Architecture where a single deployment serves multiple isolated users or organizations. In MUXI, multi-tenancy provides complete data isolation - each user's memory, credentials, and context are separate. Requires PostgreSQL for production use.

O

OneLLM

MUXI's unified LLM interface library. Provides a consistent API across 15+ providers (OpenAI, Anthropic, Google, local models, etc.) with built-in semantic caching, streaming, and cost tracking.

Overlord

MUXI's central orchestrator - the "brain" of a formation. Analyzes requests, routes to appropriate agents, manages workflows, handles clarifications, and synthesizes final responses. Users always interact through the Overlord, never directly with agents.

P

Persistent Memory

Storage that survives beyond a single session - facts, preferences, conversation history. In MUXI, persistent memory is backed by vector databases (FAISSx or PostgreSQL), enabling agents to remember across conversations.

pgvector

A PostgreSQL extension adding vector similarity search capabilities. In MUXI, pgvector is the production-recommended vector store - combining familiar SQL with semantic search in one database.

PostgreSQL

Industry-standard relational database. In MUXI, PostgreSQL (with pgvector) is required for production deployments - enabling multi-tenancy, persistent memory, user management, and reliable data storage at scale.

Prompt

The text input sent to an LLM. In MUXI, the Overlord constructs prompts dynamically - combining user messages, retrieved memories, user synopsis, system instructions, and available tool schemas into optimized requests.

R

RAG (Retrieval-Augmented Generation)

A pattern that enhances LLM responses by first retrieving relevant documents, then generating answers grounded in that context. MUXI's Knowledge system implements RAG for accurate, source-backed responses.

Registry

A central repository for discovering, sharing, and versioning packages. MUXI's Registry is like npm or Docker Hub but for formations - publish your own, pull community-built agents, with semantic versioning support.

Runtime

The execution environment that runs code. MUXI's Runtime is the Python environment that executes formations - it can run standalone via the CLI or be embedded directly in your applications.

S

Scheduled Task

Work configured to run at specific times or intervals. In MUXI, users create scheduled tasks via natural language ("remind me every Monday at 9am"). Tasks execute as the user who created them - using their credentials and memory context.

Secret

Sensitive configuration values (API keys, passwords, connection strings) that shouldn't be exposed. In MUXI, secrets are stored encrypted and referenced as ${{ secrets.NAME }} in formation files - never hardcoded.
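
In a formation file, that reference might look like the fragment below. The ${{ secrets.NAME }} syntax is from the description above; the surrounding field names are illustrative:

```yaml
# Illustrative fragment -- field names are hypothetical.
tools:
  - name: github
    token: ${{ secrets.GITHUB_TOKEN }}  # resolved from encrypted storage at runtime
```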

Semantic Cache

A cache that matches by meaning rather than exact string comparison. "What's the weather?" matches "How's the weather today?" MUXI's OneLLM provides semantic caching with 50-80% cost savings on repeated similar queries.
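
The core idea can be sketched in a few lines: store responses keyed by embedding, and treat a lookup as a hit when similarity clears a threshold. This is a toy with stand-in embeddings, not OneLLM's implementation:

```python
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b)))

class SemanticCache:
    """Toy semantic cache: a hit is any stored entry whose embedding is
    close enough to the query's. Threshold is illustrative."""

    def __init__(self, embed, threshold=0.9):
        self.embed = embed          # function mapping text -> vector
        self.threshold = threshold
        self.entries = []           # list of (vector, response) pairs

    def put(self, query, response):
        self.entries.append((self.embed(query), response))

    def get(self, query):
        qv = self.embed(query)
        for vec, response in self.entries:
            if cosine(qv, vec) >= self.threshold:
                return response     # semantically similar: reuse the answer
        return None

# Stand-in embedder so the sketch runs without a model.
fake = {
    "What's the weather?": [1.0, 0.1],
    "How's the weather today?": [0.95, 0.2],
    "Capital of France?": [0.0, 1.0],
}
cache = SemanticCache(embed=fake.get)
cache.put("What's the weather?", "Sunny, 22C")
print(cache.get("How's the weather today?"))  # hit: similar meaning
print(cache.get("Capital of France?"))        # miss: None
```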

Session

A continuous interaction thread, typically tied to a conversation. In MUXI, each session has its own working memory. Users can have multiple concurrent sessions, and sessions can be restored for persistent chat history.

SOP (Standard Operating Procedure)

In business, documented step-by-step instructions for routine operations ensuring consistency and compliance. In MUXI, SOPs are Markdown templates triggered by specific phrases - predefined workflows distinct from dynamic workflows the Overlord creates per-request.

Soul

The identity, tone, and communication style of your formation. In MUXI, only the Overlord has a soul (defined via SOUL.md or inline in formation config). Individual agents have system prompts that define their role, not personality.

Streaming

Delivering data incrementally as it's produced rather than waiting for completion. In MUXI, streaming delivers LLM responses token-by-token for faster perceived response time - supported across all SDKs.

Structured Output

Data formatted according to a defined schema (JSON, XML, etc.) rather than free-form text. In MUXI, agents can return structured output for programmatic processing - supporting JSON, HTML, Markdown, and plain text formats.

System Prompt

Instructions that define an AI's behavior, role, and constraints - provided before user input. In MUXI, each agent has a system_message that establishes its expertise, tone, and operational boundaries.

T

Temperature

An LLM parameter controlling output randomness - 0 produces deterministic responses, higher values increase creativity/variability. In MUXI, temperature is configurable per agent for task-appropriate behavior.

Token

The basic unit of text for LLMs - roughly 4 characters or 0.75 words in English. Tokens determine context limits and API costs. MUXI tracks token usage for observability and cost management.
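
The "roughly 4 characters" rule of thumb gives a quick back-of-envelope estimate; real tokenizers vary by model, so use the model's own tokenizer for exact counts:

```python
def estimate_tokens(text: str) -> int:
    # Rule of thumb: ~4 characters per token in English.
    # Real tokenizers (BPE etc.) differ per model.
    return max(1, round(len(text) / 4))

print(estimate_tokens("The quick brown fox jumps over the lazy dog."))  # 11
```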

Tool

An external capability that extends what an AI can do - web search, file access, API calls, code execution. In MUXI, tools are integrated via the MCP protocol, giving agents real-world capabilities.

Trigger

An event that initiates an automated process. In MUXI, triggers are webhook endpoints that activate formation behavior from external events (GitHub pushes, Slack messages, etc.) - using Markdown templates with ${{ data.* }} placeholders.
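
A trigger template might look like the sketch below. The ${{ data.* }} placeholder syntax is from the description above; the payload field names are hypothetical and depend on what the external service actually sends:

```markdown
<!-- Illustrative trigger template; payload fields are hypothetical -->
A new push landed on ${{ data.repository }} by ${{ data.pusher }}.
Summarize the changes in ${{ data.commits }} and post the summary to Slack.
```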

U

User Synopsis

In MUXI, an LLM-generated summary of a user's identity, preferences, and interaction history. Cached and periodically refreshed, it reduces token usage by ~85% compared to injecting raw conversation history.

V

Vector

A list of numbers representing a point in multi-dimensional space. In AI, vectors encode semantic meaning - similar concepts have similar vectors. Foundation of embeddings, semantic search, and MUXI's memory retrieval.

Vector Database

A database optimized for storing and searching vectors by similarity. In MUXI, vector databases (FAISSx for development, PostgreSQL/pgvector for production) power memory retrieval and knowledge search.

W

Webhook

An HTTP callback - a URL that receives data when events occur. In MUXI, webhooks serve two purposes: triggers (receiving external events) and async delivery (sending results when long tasks complete).

Working Memory

The active context during task execution - what's "in mind" right now. In MUXI, working memory includes retrieved persistent memories, current conversation, and task state. Scoped to the current session/request.

Workflow

A sequence of steps to accomplish a goal. In MUXI, the Overlord creates workflows dynamically for complex requests - decomposing work into steps, assigning to agents, tracking progress. Distinct from SOPs which are predefined.

See Also