Documentation Index

Fetch the complete documentation index at: https://docs.profy.cn/llms.txt

Use this file to discover all available pages before exploring further.

Agent Platform

The Expert workforce surface in Profy is the AI agent platform: Experts (AI expert products) run on the Agent Runtime (services/agent-runtime) inside declarative sandboxes (E2B/Docker), with AutoClaw as the AI model brand, SSE streaming, and credit-based billing. Projects group collaborative work; the Marketplace connects creators with Experts and skills.

Overview

Architecture

Agent Runtime

The Agent Runtime is the engine in services/agent-runtime/. Each Expert conversation runs in an isolated declarative sandbox (for example E2B or Docker/OpenSandbox) with its own workspace, tools, and filesystem, exposed to the Web app via HTTP/SSE.

AutoClaw

AutoClaw is the AI model product name (billing and plan surfaces may still use the autoclaw service code). It is not a Kubernetes control plane in the current architecture: orchestration is sandbox-provider driven (E2B/Docker), configured via Agent Runtime environment variables.
| Concept | Description |
| --- | --- |
| Sandbox | Isolated runtime backing an Expert session (provider-specific) |
| Expert / session config | Model, skills, preferences — stored in Core and passed to the runtime |
| Secrets | API keys and credentials — injected via env / secrets stores, not committed |
| Agent Runtime service | Single logical gateway the Web app reaches via BACKEND_URL |

Expert & sandbox lifecycle

Idle timeout: Long-lived resources may be reclaimed after extended inactivity; exact policy depends on deployment and provider.

Stop vs delete

| Operation | Sandbox behavior | Recovery |
| --- | --- | --- |
| Stop | Sandbox paused or released; metadata often retained | Fast resume when supported |
| Delete | Session / resources removed in Core; sandbox torn down | Unrecoverable |
Design note: Prefer explicit stop/resume over hard deletes for chat UX, so URLs and routing stay stable while sandboxes are elastic.

Error state auto-recovery

The error state is not always terminal. Health checks against the Agent Runtime and sandbox readiness can move a session back to running when the runtime and workspace are healthy again; otherwise the user may restart to obtain a fresh sandbox.
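The recovery decision described above can be sketched as a small state transition. This is a minimal illustration, assuming hypothetical state names and health-signal fields — not the actual Core schema.

```typescript
// Hypothetical sketch of error-state auto-recovery. State names and the
// HealthSignals shape are assumptions for illustration only.
type SessionState = "running" | "error" | "stopped";

interface HealthSignals {
  runtimeReachable: boolean; // Agent Runtime HTTP health check passed
  sandboxReady: boolean;     // workspace / skills reported ready
}

// Move an errored session back to running only when both signals are healthy;
// otherwise stay in error so the user can restart for a fresh sandbox.
function nextState(current: SessionState, signals: HealthSignals): SessionState {
  if (current !== "error") return current;
  return signals.runtimeReachable && signals.sandboxReady ? "running" : "error";
}
```

The key property is that recovery is conditional on both layers (runtime and workspace) being healthy, matching the two-level health model described later on this page.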

Chat entry readiness gate

Before entering the chat view, the frontend typically waits for real signals in sequence:
Sandbox allocated → Agent Runtime reachable → Skills loaded → Workspace files prefetched → Enter chat
| Step | Signal | Meaning |
| --- | --- | --- |
| 1 | Runtime / gateway ready | Agent Runtime HTTP reachable for the session |
| 2 | Skills ready | Marketplace or URL skills materialized in the workspace |
| 3 | Files loaded | Client-side tree / prefetch aligned with sandbox state |
Every entry can go through a loading state: Even warm sessions may re-verify readiness after refresh.
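The sequential gate above can be sketched as an ordered list of async probes. The probe abstraction is an assumption; the real frontend wires these to the runtime health endpoint, skill materialization status, and file prefetch.

```typescript
// Minimal sketch of the chat-entry readiness gate with injectable probes.
// Each probe answers "is this step ready?"; they run strictly in order.
type Probe = () => Promise<boolean>;

// Returns 0 when all gates pass (enter chat), or the 1-based step that
// blocked entry so the UI can show the matching loading state.
async function readinessGate(probes: Probe[]): Promise<number> {
  for (let i = 0; i < probes.length; i++) {
    if (!(await probes[i]())) return i + 1;
  }
  return 0;
}
```

Running the gate on every entry, even for warm sessions, matches the note above: readiness is re-verified after refresh rather than assumed.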

Frontend file state management

The platform keeps session trajectory separate from workspace file listings (sandbox-backed):
| Concept | Level | Description |
| --- | --- | --- |
| Workspace file list | Session / Expert | Files visible in the IDE, tied to the active sandbox |
| Conversation trajectory | Session | Tool calls, todos, streaming deltas |
| Persisted events | Session | Stored events for replay after refresh |
Switching sessions should not silently wipe a freshly loaded workspace file list; that state is tied to the Expert/sandbox view, not only the chat transcript.
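One way to honor that rule is to key both pieces of state by session, so switching sessions swaps views instead of clearing them. The store shape below is an illustrative assumption, not the actual frontend state model.

```typescript
// Sketch: per-session state keyed by session ID, so changing the active
// session never wipes another session's freshly loaded file list.
interface SessionView {
  files: string[];        // workspace file list (sandbox-backed)
  trajectory: string[];   // conversation events (tool calls, deltas)
}

class SessionStore {
  private views = new Map<string, SessionView>();

  // Lazily create a view per session; existing views are never overwritten.
  view(sessionId: string): SessionView {
    let v = this.views.get(sessionId);
    if (!v) {
      v = { files: [], trajectory: [] };
      this.views.set(sessionId, v);
    }
    return v;
  }

  setFiles(sessionId: string, files: string[]): void {
    this.view(sessionId).files = files;
  }
}
```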

APIs (Web → Core → Agent Runtime)

Most session and project operations go through tRPC on the Core API (/api/trpc/* from the browser), with Next.js route handlers under /api/sessions/* and /api/projects/* as thin proxies where needed.

Representative Next.js routes (apps/web)

| Path | Description |
| --- | --- |
| POST /api/agent/invoke | Agent mode invoke (tools, planning) — proxies to Agent Runtime |
| POST /api/agent/resume | Resume / continue flows |
| GET/POST/PATCH/DELETE /api/sessions/[id] | Session CRUD proxy |
| GET/POST /api/sessions/[id]/messages | Messages for a session |
| GET/POST /api/projects/... | Project-scoped collaboration |
| GET/POST /api/skills/* | Skill install / metadata |
| GET /api/credits | Credit balance |
| GET /api/billing/* | Billing proxies |
Exact surface evolves with the router tree; treat apps/web/src/app/api/** as the source of truth.
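A tiny client-side helper for the session routes above might look like this. The paths mirror the table, but since the surface evolves, treat this as a sketch rather than a stable client.

```typescript
// Illustrative URL builder for the session proxy routes listed above.
// Only the two session paths from the table are covered.
function sessionUrl(id: string, messages = false): string {
  const base = `/api/sessions/${encodeURIComponent(id)}`;
  return messages ? `${base}/messages` : base;
}
```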

SSE streaming chat

The chat interface uses Server-Sent Events between the user and the Agent Runtime (via Next.js).

Request flow
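A minimal client-side sketch of initiating the stream: the route comes from the API table on this page, while the payload shape and headers are assumptions for illustration.

```typescript
// Hypothetical setup for the agent SSE request. The browser POSTs to the
// Next.js proxy route, which forwards to the Agent Runtime and streams back.
function invokeRequest(sessionId: string, message: string) {
  return {
    url: "/api/agent/invoke",
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      Accept: "text/event-stream", // ask for an SSE response
    },
    body: JSON.stringify({ sessionId, message }),
  };
}
```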

SSE event types (illustrative)

| Event | Description |
| --- | --- |
| text | Streaming model text |
| tool_call_chunk / tool_call | Tool invocation in progress / complete |
| tool_call_result | Tool output |
| token_usage | Usage for billing |
| complete | Stream finished |
| error | Failure |
Sandbox and website preview readiness are not pushed over SSE. The Start Preview button is a synchronous POST /api/preview/start whose response carries the preview URL; sandbox status is pulled on demand from GET /api/sandbox/status. The agent SSE stream only carries “what the agent is doing right now”.
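Turning the raw stream into the typed events listed above amounts to parsing standard SSE frames. The frame parser below assumes conventional `event:` / `data:` lines and JSON payloads; the payload shapes themselves are illustrative.

```typescript
// Sketch: parse one SSE frame (frames are separated upstream by blank lines)
// into a typed agent event. Returns null for frames with no data payload.
interface AgentEvent {
  type: string;  // e.g. "text", "tool_call", "complete"
  data: unknown; // JSON payload, shape varies by event type
}

function parseFrame(frame: string): AgentEvent | null {
  let type = "message"; // SSE default when no event: line is present
  const dataLines: string[] = [];
  for (const line of frame.split("\n")) {
    if (line.startsWith("event:")) type = line.slice(6).trim();
    else if (line.startsWith("data:")) dataLines.push(line.slice(5).trim());
  }
  if (dataLines.length === 0) return null;
  return { type, data: JSON.parse(dataLines.join("\n")) };
}
```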

Routing to the runtime

The Web server uses BACKEND_URL (for example http://agent-runtime:8000 in Docker Compose) to reach the Agent Runtime. Public ingress is handled by Nginx in front of Next.js; the browser does not call the runtime origin directly for authenticated chat.
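Resolving the runtime origin on the server side can be as simple as joining BACKEND_URL with the target path. The fallback default below matches the Docker Compose example; the helper itself is an assumption, not the actual proxy code.

```typescript
// Sketch: build the upstream Agent Runtime URL from BACKEND_URL.
// The Docker Compose default is used when the variable is unset.
function runtimeUrl(
  path: string,
  backendUrl: string = process.env.BACKEND_URL ?? "http://agent-runtime:8000",
): string {
  return new URL(path, backendUrl).toString();
}
```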

Credit billing

Expert usage is billed through the coin system. After a billable completion, the proxy layer reports token usage to the Core API. Consumption uses API Key authentication (sk_* prefix) where configured, so the Next.js layer can bill on behalf of the signed-in user without forwarding end-user JWTs to the OpenAPI billing surface.
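The proxy-side usage report could be assembled as below. The endpoint payload fields and header layout are assumptions; only the sk_ key prefix comes from the text above.

```typescript
// Illustrative builder for a token-usage report from the Next.js layer to
// the Core billing surface, authenticated with an sk_-prefixed API key
// instead of the end user's JWT.
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
}

function buildUsageReport(apiKey: string, userId: string, usage: TokenUsage) {
  if (!apiKey.startsWith("sk_")) throw new Error("expected sk_-prefixed API key");
  return {
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ userId, ...usage }),
  };
}
```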

Service plans

Experts and AutoClaw-backed features use tiered service plans (models, limits, entitlements). Example list query:
GET /api/finance/servicePlan/list?serviceCode=autoclaw
(autoclaw remains the service code key in finance data; the product UX is Expert / AutoClaw.)

Configuration & drift

Expert and session configuration (model, skills, system prompt, preferences) lives in Core and is applied on the next restart or sandbox refresh when drift is detected. Use the product settings / drift flows in the app to confirm when a restart is required.
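Drift detection reduces to comparing the config stored in Core with the config the running sandbox was started with. The field names below are illustrative; the structural-equality shortcut assumes consistent key ordering, which is fine for a sketch.

```typescript
// Sketch: a session needs a restart when the stored Expert config no longer
// matches the config the sandbox was launched with. Field names are assumed.
interface ExpertConfig {
  model: string;
  skills: string[];
  systemPrompt: string;
}

function needsRestart(stored: ExpertConfig, applied: ExpertConfig): boolean {
  // JSON comparison is order-sensitive; adequate here since both values
  // share the same literal shape.
  return JSON.stringify(stored) !== JSON.stringify(applied);
}
```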

Health monitoring

Health combines:
  1. Agent Runtime HTTP health (container / process up).
  2. Sandbox readiness (skills, workspace) as reported by the runtime or Core aggregation.
Operational cleanup and admin routes are environment-specific; follow internal runbooks for production.
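The two health layers above can be folded into a single status for display. The status names are an assumption for illustration.

```typescript
// Sketch: aggregate runtime HTTP health and sandbox readiness into one
// user-facing status.
type Health = "healthy" | "degraded" | "down";

function aggregateHealth(runtimeUp: boolean, sandboxReady: boolean): Health {
  if (!runtimeUp) return "down";                 // layer 1 failed: nothing reachable
  return sandboxReady ? "healthy" : "degraded";  // runtime up, workspace not ready
}
```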

Finance & Payments

Credit billing and coin consumption

Authentication

API Keys bridge proxy and billing

Marketplace

Skills and Experts from the Marketplace

Deployment

Docker Compose, Agent Runtime, and Nginx