Documentation Index
Fetch the complete documentation index at: https://docs.profy.cn/llms.txt
Use this file to discover all available pages before exploring further.
Agent Platform
The Expert workforce surface in Profy is the AI agent platform: Experts (AI expert products) run on the Agent Runtime (services/agent-runtime) inside declarative sandboxes (E2B/Docker), with AutoClaw as the AI model brand, SSE streaming, and credit-based billing. Projects group collaboration, and the Marketplace connects creators to Experts and skills.
Overview
Architecture
Agent Runtime
The Agent Runtime is the engine in services/agent-runtime/. Each Expert conversation runs in an isolated declarative sandbox (for example E2B or Docker/OpenSandbox) with its own workspace, tools, and filesystem, exposed to the Web app via HTTP/SSE.
AutoClaw
AutoClaw is the AI model product name (billing and plan surfaces may still use the autoclaw service code). It is not a Kubernetes control plane in the current architecture: orchestration is sandbox-provider driven (E2B/Docker), configured via Agent Runtime environment variables.
| Concept | Description |
|---|---|
| Sandbox | Isolated runtime backing an Expert session (provider-specific) |
| Expert / session config | Model, skills, preferences — stored in Core and passed to the runtime |
| Secrets | API keys and credentials — injected via env / secrets stores, not committed |
| Agent Runtime service | Single logical gateway the Web app reaches via BACKEND_URL |
Expert & sandbox lifecycle
Idle timeout: Long-lived resources may be reclaimed after extended inactivity; the exact policy depends on the deployment and sandbox provider.
Stop vs delete
| Operation | Sandbox behavior | Recovery |
|---|---|---|
| Stop | Sandbox paused or released; metadata often retained | Fast resume when supported |
| Delete | Session / resources removed in Core; sandbox torn down | Unrecoverable |
Design note: Prefer explicit stop/resume over hard deletes for chat UX, so URLs and routing stay stable while sandboxes are elastic.
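The stop-vs-delete contract above can be sketched as a small helper. This is illustrative only: the names (SessionOp, opEffect) are assumptions for this page, not actual platform APIs.

```typescript
// Sketch of the stop-vs-delete contract from the table above.
// SessionOp and opEffect are illustrative names, not real platform APIs.
type SessionOp = "stop" | "delete";

interface OpEffect {
  sandbox: string;      // what happens to the backing sandbox
  recoverable: boolean; // can the session come back?
}

function opEffect(op: SessionOp): OpEffect {
  switch (op) {
    case "stop":
      // Sandbox is paused or released but session metadata is often
      // retained, so resume is fast where the provider supports it.
      return { sandbox: "paused-or-released", recoverable: true };
    case "delete":
      // Core removes the session/resources and the sandbox is torn down.
      return { sandbox: "torn-down", recoverable: false };
  }
}
```

This is why the design note favors stop/resume for chat UX: a stopped session keeps its URL and routing while the sandbox stays elastic.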
Error state auto-recovery
The error state is not always terminal. Health checks against the Agent Runtime and sandbox readiness can move a session back to running when the runtime and workspace are healthy again; otherwise the user may restart to obtain a fresh sandbox.
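The recovery rule can be expressed as a state transition. The state names and HealthSignals shape below are assumptions for illustration, not the platform's actual types.

```typescript
// Illustrative sketch of the auto-recovery rule described above.
type SessionState = "running" | "error" | "stopped";

interface HealthSignals {
  runtimeHealthy: boolean; // Agent Runtime HTTP health check passed
  sandboxReady: boolean;   // workspace and skills ready inside the sandbox
}

function nextState(current: SessionState, h: HealthSignals): SessionState {
  // error is not terminal: when both signals are healthy again,
  // the session moves back to running.
  if (current === "error" && h.runtimeHealthy && h.sandboxReady) {
    return "running";
  }
  // a running session with a failed runtime or sandbox drops to error
  if (current === "running" && (!h.runtimeHealthy || !h.sandboxReady)) {
    return "error";
  }
  // otherwise the state holds; the user can restart for a fresh sandbox
  return current;
}
```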
Chat entry readiness gate
Before entering the chat view, the frontend typically waits for real signals in sequence:
| Step | Signal | Meaning |
|---|---|---|
| 1 | Runtime / gateway ready | Agent Runtime HTTP reachable for the session |
| 2 | Skills ready | Marketplace or URL skills materialized in the workspace |
| 3 | Files loaded | Client-side tree / prefetch aligned with sandbox state |
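The three-step gate above can be sketched as a pure check. The ReadinessSignals shape is an assumption for illustration.

```typescript
// Sketch of the three-step chat readiness gate from the table above.
interface ReadinessSignals {
  runtimeReady: boolean; // step 1: Agent Runtime HTTP reachable
  skillsReady: boolean;  // step 2: skills materialized in the workspace
  filesLoaded: boolean;  // step 3: client file tree aligned with sandbox
}

// Returns 0 when the chat view can be entered,
// otherwise the first pending step (1-3), checked in order.
function pendingStep(s: ReadinessSignals): number {
  if (!s.runtimeReady) return 1;
  if (!s.skillsReady) return 2;
  if (!s.filesLoaded) return 3;
  return 0;
}
```

The ordering matters: there is no point materializing skills before the runtime answers, and no point rendering the file tree before the workspace exists.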
Frontend file state management
The platform keeps the session trajectory separate from the sandbox-backed workspace file listings:
| Concept | Level | Description |
|---|---|---|
| Workspace file list | Session / Expert | Files visible in the IDE, tied to the active sandbox |
| Conversation trajectory | Session | Tool calls, todos, streaming deltas |
| Persisted events | Session | Stored events for replay after refresh |
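The separation above can be made concrete as distinct state slices. The field names are illustrative, not the actual store shape.

```typescript
// Sketch of keeping workspace files separate from the conversation
// trajectory, as described in the table above. Names are illustrative.
interface UiEvent {
  type: string;
  data: unknown;
}

interface SessionUiState {
  workspaceFiles: string[]; // sandbox-backed file listing (IDE tree)
  trajectory: UiEvent[];    // tool calls, todos, streaming deltas
  persistedEvents: UiEvent[]; // stored events for replay after refresh
}

// A trajectory event never mutates the file list; only an explicit
// workspace refresh (driven by the sandbox) does.
function applyTrajectoryEvent(state: SessionUiState, event: UiEvent): SessionUiState {
  return { ...state, trajectory: [...state.trajectory, event] };
}
```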
APIs (Web → Core → Agent Runtime)
Most session and project operations go through tRPC on the Core API (/api/trpc/* from the browser), with Next.js route handlers under /api/sessions/* and /api/projects/* as thin proxies where needed.
Representative Next.js routes (apps/web)
| Path | Description |
|---|---|
| POST /api/agent/invoke | Agent mode invoke (tools, planning) — proxies to Agent Runtime |
| POST /api/agent/resume | Resume / continue flows |
| GET/POST/PATCH/DELETE /api/sessions/[id] | Session CRUD proxy |
| GET/POST /api/sessions/[id]/messages | Messages for a session |
| GET/POST /api/projects/... | Project-scoped collaboration |
| GET/POST /api/skills/* | Skill install / metadata |
| GET /api/credits | Credit balance |
| GET /api/billing/* | Billing proxies |
Treat apps/web/src/app/api/** as the source of truth for the full route list.
SSE streaming chat
The chat interface uses Server-Sent Events between the user and the Agent Runtime (via Next.js).
Request flow
SSE event types (illustrative)
| Event | Description |
|---|---|
| text | Streaming model text |
| tool_call_chunk / tool_call | Tool invocation in progress / complete |
| tool_call_result | Tool output |
| token_usage | Usage for billing |
| complete | Stream finished |
| error | Failure |
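A client consuming this stream can fold the events into visible text. The union below is an assumption based on the table; actual payload fields may differ.

```typescript
// Sketch of consuming the SSE event types above. The payload shapes
// are assumptions for illustration; real payloads may differ.
type AgentEvent =
  | { type: "text"; delta: string }
  | { type: "tool_call"; name: string }
  | { type: "tool_call_result"; output: string }
  | { type: "token_usage"; inputTokens: number; outputTokens: number }
  | { type: "complete" }
  | { type: "error"; message: string };

// Fold streamed events into the assistant text shown to the user;
// tool and billing events are handled elsewhere.
function collectText(events: AgentEvent[]): string {
  let out = "";
  for (const e of events) {
    if (e.type === "text") out += e.delta;
  }
  return out;
}
```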
Preview is decoupled from the chat stream: POST /api/preview/start returns a response carrying the preview URL, and sandbox status is pulled on demand from GET /api/sandbox/status. The agent SSE stream only carries “what the agent is doing right now”.
Routing to the runtime
The Web server uses BACKEND_URL (for example http://agent-runtime:8000 in Docker Compose) to reach the Agent Runtime. Public ingress is handled by Nginx → Next.js; the browser does not call the runtime origin directly for authenticated chat.
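Server-side, resolving a runtime URL from BACKEND_URL might look like the following. joinRuntimeUrl is an illustrative helper, not an actual function in the codebase.

```typescript
// Minimal sketch of server-side runtime URL resolution. BACKEND_URL
// (e.g. "http://agent-runtime:8000" in Docker Compose) is only read on
// the server; the browser never calls this origin directly.
function joinRuntimeUrl(backendUrl: string, path: string): string {
  const base = backendUrl.replace(/\/+$/, ""); // drop trailing slashes
  const suffix = path.startsWith("/") ? path : `/${path}`;
  return `${base}${suffix}`;
}
```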
Credit billing
Expert usage is billed through the coin system. After a billable completion, the proxy layer reports token usage to the Core API. Consumption uses API Key authentication (sk_* prefix) where configured, so the Next.js layer can bill on behalf of the signed-in user without forwarding end-user JWTs to the OpenAPI billing surface.
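As a rough sketch, converting reported token usage into coins could look like this. The rates, rounding, and function name are assumptions; real pricing lives in the Core billing surface.

```typescript
// Illustrative coin calculation after a billable completion.
// Rates and ceil-rounding are assumptions, not the platform's pricing.
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
}

function coinsFor(
  usage: TokenUsage,
  coinsPerKInput: number,  // assumed per-1000-token input rate
  coinsPerKOutput: number, // assumed per-1000-token output rate
): number {
  const raw =
    (usage.inputTokens / 1000) * coinsPerKInput +
    (usage.outputTokens / 1000) * coinsPerKOutput;
  return Math.ceil(raw); // bill whole coins, rounding up
}
```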
Service plans
Experts and AutoClaw-backed features use tiered service plans (models, limits, entitlements). (autoclaw remains the service code key in finance data; the product UX is Expert / AutoClaw.)
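When listing plans, clients can pick out the AutoClaw-backed entries by the service code. The ServicePlan shape below is a hypothetical response shape for illustration; only the "autoclaw" key itself comes from the finance data described above.

```typescript
// Hypothetical shape of a plan entry in a plans listing response.
// Fields other than serviceCode are assumptions for illustration.
interface ServicePlan {
  serviceCode: string; // finance data still keys on "autoclaw"
  tier: string;
  limits: Record<string, number>;
}

// Pick the AutoClaw-backed plans out of a mixed plan list.
function autoclawPlans(plans: ServicePlan[]): ServicePlan[] {
  return plans.filter((p) => p.serviceCode === "autoclaw");
}
```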
Configuration & drift
Expert and session configuration (model, skills, system prompt, preferences) lives in Core and is applied on the next restart or sandbox refresh when drift is detected. Use the product settings / drift flows in the app to confirm when a restart is required.
Health monitoring
Health combines:
- Agent Runtime HTTP health (container / process up).
- Sandbox readiness (skills, workspace) as reported by the runtime or Core aggregation.
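Combining the two inputs above into one status can be sketched as follows; the HealthReport shape and status names are assumptions for illustration.

```typescript
// Sketch combining the two health inputs listed above into one status.
interface HealthReport {
  runtimeHttpOk: boolean; // Agent Runtime container / process up
  sandboxReady: boolean;  // skills and workspace ready, per runtime/Core
}

function overallHealth(r: HealthReport): "healthy" | "degraded" | "down" {
  if (r.runtimeHttpOk && r.sandboxReady) return "healthy";
  if (r.runtimeHttpOk) return "degraded"; // runtime up, sandbox not ready
  return "down"; // runtime unreachable
}
```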
Related Pages
Finance & Payments
Credit billing and coin consumption
Authentication
API Keys bridge proxy and billing
Marketplace
Skills and Experts from the Marketplace
Deployment
Docker Compose, Agent Runtime, and Nginx