Production Deployment
Run MUXI reliably in production
Production checklist for running MUXI Server: TLS, authentication, reverse proxy, logging, monitoring, auto-restart, and scaling considerations.
Checklist
- Enable authentication
- Configure TLS/HTTPS
- Set up reverse proxy
- Configure logging
- Set up monitoring
- Configure auto-restart
- Plan backup strategy
Security
Enable Authentication
auth:
  enabled: true
  keys:
    - id: MUXI_production
      secret: sk_...  # Generated by init
Never disable auth in production.
Use Strong Secrets
Generate cryptographically secure keys:
muxi-server init
Store secrets securely (vault, secrets manager).
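If you need to mint or rotate a secret outside of `muxi-server init` (for example, in a CI pipeline), one generic, widely available approach is OpenSSL's random generator. This is a sketch, not MUXI's own key format; `muxi-server init` remains the canonical generator:

```shell
# Generate a 32-byte base64-encoded random secret.
# Generic fallback; `muxi-server init` is the canonical MUXI key generator.
openssl rand -base64 32
```

Store the output in your vault or secrets manager, never in version control.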
Restrict Network Access
Bind to localhost if using reverse proxy:
server:
  host: 127.0.0.1
TLS/HTTPS
Option 1: Reverse Proxy (Recommended)
Use nginx, Caddy, or similar:
# nginx.conf
server {
    listen 443 ssl;
    server_name muxi.example.com;

    ssl_certificate /etc/letsencrypt/live/muxi.example.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/muxi.example.com/privkey.pem;

    location / {
        proxy_pass http://127.0.0.1:7890;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
Option 2: Caddy (Simpler)
muxi.example.com {
    reverse_proxy localhost:7890
}
Caddy handles TLS automatically.
Shared Vector Memory with FAISSx
Use FAISSx as a remote vector service when formations run on multiple servers. FAISSx speaks ZeroMQ (TCP, default port 45678) and supports API-key/tenant isolation.
Run FAISSx
# CLI
faissx.server run --port 45678 --data-dir /data --enable-auth --auth-keys "key1:tenant1,key2:tenant2"
# Docker
docker run -p 45678:45678 \
  -v /path/to/data:/data \
  -e FAISSX_DATA_DIR=/data \
  -e FAISSX_ENABLE_AUTH=true \
  -e FAISSX_AUTH_KEYS="key1:tenant1,key2:tenant2" \
  ghcr.io/muxi-ai/faissx:latest-slim
Load balance (ZeroMQ/TCP)
Use an L4/TCP balancer (e.g., nginx stream, HAProxy) to front multiple FAISSx nodes. HTTP reverse_proxy will not work because FAISSx is not HTTP.
stream {
    upstream faissx_pool {
        server faissx-1.internal:45678;
        server faissx-2.internal:45678;
    }

    server {
        listen 45678;
        proxy_pass faissx_pool;
    }
}
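Because FAISSx speaks raw TCP, you can sanity-check reachability of the balancer (or an individual node) with a plain socket connect. This only verifies the port is open, not the ZeroMQ protocol; the hostname below is illustrative:

```python
import socket

def tcp_reachable(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical internal hostname):
# tcp_reachable("faissx-lb.internal", 45678)
```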
Point MUXI at FAISSx
Configure your vector (and optional buffer) memory to use the FAISSx ZeroMQ endpoint (e.g., tcp://faissx-lb.internal:45678), and supply the API key and tenant ID registered above so every server shares the same index. Keep the FAISSx endpoint on a private network and require auth keys.
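As a sketch, the memory configuration might look like the following. The key names here are illustrative, not MUXI's exact schema; check the memory configuration reference for the real field names:

```yaml
# Illustrative only -- consult the MUXI memory configuration reference
# for exact key names.
memory:
  vector:
    provider: faissx
    url: tcp://faissx-lb.internal:45678
    api_key: ${FAISSX_API_KEY}   # a key registered via --auth-keys
    tenant: tenant1
```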
Service Configuration
systemd (Linux)
# /etc/systemd/system/muxi-server.service
[Unit]
Description=MUXI Server
After=network.target

[Service]
Type=simple
User=muxi
Group=muxi
ExecStart=/usr/local/bin/muxi-server serve
Restart=always
RestartSec=5
LimitNOFILE=65535

[Install]
WantedBy=multi-user.target
Enable and start:
sudo systemctl enable muxi-server
sudo systemctl start muxi-server
launchd (macOS)
See macOS Installation.
Configuration
Production Config
server:
  port: 7890
  host: 127.0.0.1  # Behind reverse proxy

auth:
  enabled: true
  keys:
    - id: MUXI_production
      secret: sk_...

formations:
  port_range: [8000, 9000]
  auto_restart: true
  max_restarts: 10
  health_check_interval: 30s

logging:
  level: info
  format: json
  output: /var/log/muxi/server.log

runtime:
  auto_download: true
Logging
Log Rotation
# /etc/logrotate.d/muxi
/var/log/muxi/*.log {
    daily
    rotate 14
    compress
    delaycompress
    missingok
    notifempty
    create 0640 muxi muxi
}
Centralized Logging
To forward logs to an external service, log JSON to stdout:
logging:
  format: json
  output: stdout  # Captured by journald
Then forward journald to your logging platform.
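With `format: json`, logs are machine-parseable, which makes ad-hoc filtering easy. A small sketch for pulling error records out of a log stream; the `level` field name is an assumption about the log schema, so adjust it to match the actual output:

```python
import json

def error_records(lines):
    """Yield parsed JSON log records whose level is "error"; skip malformed lines."""
    for line in lines:
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            continue
        if record.get("level") == "error":
            yield record

# Example input (field names assumed):
sample = [
    '{"level": "info", "msg": "started"}',
    'not json',
    '{"level": "error", "msg": "formation crashed"}',
]
```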
Monitoring
Health Check
#!/bin/bash
# Simple monitoring script
if ! curl -sf http://localhost:7890/health > /dev/null; then
    echo "MUXI Server unhealthy!"
    # Send alert (mail, webhook, pager, etc.)
    exit 1
fi
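For something more robust than a one-shot curl, a retry loop avoids alerting on a single transient failure. A minimal sketch, with the endpoint and port assumed from the config above:

```python
import time
import urllib.error
import urllib.request

def is_healthy(url: str, retries: int = 3, delay: float = 2.0,
               timeout: float = 5.0) -> bool:
    """Return True if the health endpoint answers 200 within `retries` attempts."""
    for attempt in range(retries):
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass
        if attempt < retries - 1:
            time.sleep(delay)
    return False

# Example:
# is_healthy("http://localhost:7890/health")
```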
Uptime Monitoring
Use external monitoring:
- Pingdom
- UptimeRobot
- Custom solution
Backup
What to Backup
- Server config: ~/.muxi/server/config.yaml
- Formation data: ~/.muxi/server/data/
- Logs: /var/log/muxi/
Backup Script
#!/bin/bash
set -euo pipefail
BACKUP_DIR="/backups/muxi/$(date +%Y%m%d)"
mkdir -p "$BACKUP_DIR"
cp -r ~/.muxi/server "$BACKUP_DIR/"
cp -r /var/log/muxi "$BACKUP_DIR/"
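To run the script above nightly, a standard cron entry works. The script path is an assumption; place it wherever you keep operational tooling:

```
# /etc/cron.d/muxi-backup -- run nightly at 02:00 as the muxi user
0 2 * * * muxi /usr/local/bin/muxi-backup.sh
```

Add retention pruning (e.g., delete backups older than N days) and test restores periodically; an unverified backup is not a backup.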
Scaling
Single Server
For most deployments, one server handles many formations:
- Each formation runs independently
- Resources shared across formations
- Simple to manage
Multiple Servers
For high availability or geographic distribution:
# CLI profile with multiple servers
profiles:
  production:
    servers:
      - id: us-east
        url: https://east.example.com:7890
      - id: us-west
        url: https://west.example.com:7890
Deploy to all:
muxi deploy --profile production
Troubleshooting
Server Won't Start
Check logs:
journalctl -u muxi-server -n 100
Common issues:
- Port already in use
- Config syntax error
- Missing permissions
Formation Crashes
Check formation logs:
muxi logs my-assistant --lines 200
Common issues:
- Missing secrets
- Out of memory
- Invalid configuration
High Load
Monitor resources:
top -p $(pgrep muxi-server)
Consider:
- Increasing resources
- Reducing concurrent formations
- Adding servers
Next Steps
- Monitoring - Set up metrics and alerting
- Authentication - Secure API access