Deployment
Profy uses Docker Compose for application deployment with Nginx as the public-facing reverse proxy. The platform supports blue-green deployment for zero-downtime releases. Experts run on the Agent Runtime (services/agent-runtime) inside declarative sandboxes (E2B/Docker), orchestrated alongside Core and Web in Compose.
Deployment Architecture
Prerequisites
| Requirement | Version |
| --- | --- |
| Docker | Latest stable |
| Docker Compose | v2+ |
| Nginx | Host-level installation |
| Domain + SSL | Valid certificate (Let's Encrypt or commercial) |
| Node.js | >= 20 (for builds) |
| Bun | Latest |
Docker Compose
The application stack is defined in deploy/docker/docker-compose.yml:
| Service | Image | Port | Description |
| --- | --- | --- | --- |
| core | Built from services/core/ | 8080 | Hono API backend |
| web | Built from apps/web/ | 3000 | Next.js frontend |
| agent-runtime | Built from services/agent-runtime/ | 8000 | Agent Runtime (FastAPI) for Experts / sandboxes |
| redis | redis:alpine | 6379 | Cache, session store, rate limiting |
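For orientation, the Compose file roughly follows this shape. This is a trimmed sketch, not the literal contents of deploy/docker/docker-compose.yml — the build paths and port mappings come from the table above, while everything else (option layout, omitted health checks and env files) is an assumption:

```yaml
services:
  core:
    build: ../../services/core    # Hono API backend
    ports: ["8080:8080"]
  web:
    build: ../../apps/web         # Next.js frontend
    ports: ["3000:3000"]
  agent-runtime:
    build: ../../services/agent-runtime  # FastAPI runtime for Experts
    ports: ["8000:8000"]
  redis:
    image: redis:alpine           # cache, sessions, rate limiting
    ports: ["6379:6379"]
```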
Quick Deploy
```shell
./deploy/scripts/deploy.sh
```
Or manually:
```shell
cd deploy/docker
docker compose up -d
```
Blue-Green Deployment
The deploy script implements a blue-green strategy for zero-downtime updates.
Slot Allocation
| Slot | Core Port | Web Port (Prod) | Web Port (Test) |
| --- | --- | --- | --- |
| Blue | 8080 | 3100 | 3000 |
| Green | 8081 | 3101 | 3001 |
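The slot allocation can be sketched in shell. This is a minimal illustration of the idea, not the actual deploy.sh — the function name and output format are assumptions; only the port values come from the table:

```shell
#!/bin/sh
# Given the currently active slot, derive the standby slot and its
# core / web-prod / web-test ports (values from the slot table).
pick_standby() {
  active="$1"
  if [ "$active" = "blue" ]; then
    echo "green 8081 3101 3001"
  else
    echo "blue 8080 3100 3000"
  fi
}

pick_standby blue   # → green 8081 3101 3001
```

The real script would deploy the new release to the standby slot's ports, health-check it, then repoint Nginx upstreams at those ports.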
Deployment Flow
Health Checks
The deploy script verifies the new environment before switching traffic:
- Wait for containers to report healthy status
- HTTP GET to the /health endpoint
- Verify the response status code
- Only proceed with the traffic switch on success
If health checks fail, the script automatically rolls back by stopping the new environment and keeping the current one active.
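A retry loop of the following shape illustrates the check-then-switch behavior. The function and variable names are hypothetical; the real script's timing and rollback steps may differ:

```shell
#!/bin/sh
# Poll a health-check command until it succeeds or attempts run out.
# Returns 0 (healthy) or 1 (timed out); the caller switches or rolls back.
wait_healthy() {
  attempts="$1"; shift
  i=0
  while [ "$i" -lt "$attempts" ]; do
    if "$@"; then
      return 0
    fi
    i=$((i + 1))
    sleep 0   # the real script would sleep a few seconds between attempts
  done
  return 1
}

# In the real flow the command would be something like:
#   wait_healthy 30 curl -fsS "http://127.0.0.1:${core_port}/health"
if wait_healthy 3 true; then
  echo "switch traffic"
else
  echo "roll back"
fi
```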
Nginx Configuration
Nginx runs at the host level and serves as the unified entry point.
Routing Rules
```nginx
# API routes → Core backend
location /api/ {
    proxy_pass http://core_upstream;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
}

# OpenAPI routes → Core backend (API Key auth)
location /openapi/ {
    proxy_pass http://core_upstream;
}

# Internal routes → Core (private network only)
location /internal/ {
    allow 10.0.0.0/8;
    allow 172.16.0.0/12;
    deny all;
    proxy_pass http://core_upstream;
}

# SSE / long-lived streams → Next.js (proxies to Agent Runtime as needed)
location ~ ^/api/(agent|sessions|tools|skills|credits)(/|$) {
    proxy_pass http://web_upstream;
    proxy_http_version 1.1;
    proxy_set_header Connection '';
    proxy_buffering off;
    proxy_cache off;
    chunked_transfer_encoding off;
}

# All other routes → Next.js
location / {
    proxy_pass http://web_upstream;
}
```
Key Nginx Settings
| Setting | Value | Purpose |
| --- | --- | --- |
| proxy_buffering | off | Required for SSE streaming |
| proxy_read_timeout | 300s | Long-lived SSE connections |
| client_max_body_size | 50m | File upload support |
| SSL | TLS 1.2+ | HTTPS termination |
Environment Variables
Core API (services/core)
| Variable | Description | Required |
| --- | --- | --- |
| DATABASE_URL | MySQL connection string | Yes |
| REDIS_URL | Redis connection string | Yes |
| JWT_SECRET | Secret key for JWT signing | Yes |
| MINIO_ENDPOINT | MinIO server URL | Yes |
| MINIO_ACCESS_KEY | MinIO access key | Yes |
| MINIO_SECRET_KEY | MinIO secret key | Yes |
| WECHAT_PAY_APP_ID | WeChat Pay app ID | For payments |
| WECHAT_PAY_MCH_ID | WeChat Pay merchant ID | For payments |
| AGENT_INVOKE_URL | Agent engine endpoint | For tasks |
| ALIYUN_SMS_ACCESS_KEY_ID | Aliyun SMS key | For SMS |
Frontend (apps/web)
| Variable | Description | Required |
| --- | --- | --- |
| NEXT_PUBLIC_API_URL | Core API base URL | Yes |
| BACKEND_URL | Agent Runtime base URL (Compose: http://agent-runtime:8000) | For Experts / chat |
| CORE_API_URL | Core API URL inside Docker network | Yes (Compose) |
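Put together, an env file for the web service might look like the following. The values are placeholders (the in-network hostnames assume the Compose service names above), and the file name here is only an example — where Compose actually reads it from depends on your setup:

```shell
#!/bin/sh
# Write an example env file for apps/web (placeholder values, not real config).
cat > web.env.example <<'EOF'
NEXT_PUBLIC_API_URL=https://your-domain.com/api
CORE_API_URL=http://core:8080
BACKEND_URL=http://agent-runtime:8000
EOF
grep -c '=' web.env.example   # → 3
```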
Agent Runtime & declarative sandboxes
The Agent Runtime lives in services/agent-runtime/ and is built as the agent-runtime (and optional worker) Compose services. Sandbox provisioning is owned by Core: the E2B template ID is set via SANDBOX_E2B_TEMPLATE_ID (default: profy-sandbox) in services/core/.env*. The agent-runtime service only selects SANDBOX_PROVIDER=e2b and connects to an existing sandbox by ID. The sandbox image itself is built from services/profy-sandbox/ (see make deploy-sandbox).
The Next.js app uses BACKEND_URL to reach the Agent Runtime for invoke/stream flows; no separate Kubernetes cluster is required for the Expert workforce model.
Monitoring & Operations
Health Endpoint
```shell
curl https://your-domain.com/health
```
Returns 200 OK when the Core API is operational.
Container Logs
```shell
docker compose logs -f core
docker compose logs -f web
```
Rollback
If a deployment fails post-switch, re-run the deploy script — it will detect the current active slot and deploy to the standby slot, effectively rolling back.
Related Pages
Architecture System architecture and routing overview
Agent Platform Agent Runtime, Experts, and declarative sandboxes