mirror of https://github.com/langgenius/dify.git
synced 2026-05-13 08:57:28 +08:00

remove obsolete agenton and run api docs

This commit is contained in:
parent ae4a3f75f4
commit b70763d751
# Agenton API reference

This page summarizes the public Agenton API. Import paths are shown for symbols commonly used by layer authors and compositor callers.

## Layers: `agenton.layers`

### `Layer[DepsT, PromptT, UserPromptT, ToolT, ConfigT, RuntimeStateT]`

Framework-neutral base class for invocation-scoped prompt/tool layers.

Class attributes:

- `type_id: str | None`: provider id for config-backed graph nodes.
- `config_type: type[LayerConfig]`: Pydantic schema for per-run layer config.
- `runtime_state_type: type[BaseModel]`: Pydantic schema for snapshot-safe per-layer state.
- `deps_type: type[LayerDeps]`: inferred from the layer generic base or declared explicitly.

Invocation attributes assigned by `CompositorRun`:

- `config: ConfigT`
- `deps: DepsT`
- `runtime_state: RuntimeStateT`

Construction and dependency APIs:

- `from_config(config: ConfigT) -> Self`: create a fresh layer from schema-validated config. The default implementation supports only empty config.
- `dependency_names() -> frozenset[str]`: dependency fields declared by `deps_type`.
- `bind_deps(deps: Mapping[str, Layer | None]) -> None`: bind direct layer instance dependencies for one invocation.

Lifecycle hooks:

- `on_context_create() -> None`
- `on_context_resume() -> None`
- `on_context_suspend() -> None`
- `on_context_delete() -> None`

Prompt/tool authoring surfaces:

- `prefix_prompts -> Sequence[PromptT]`
- `suffix_prompts -> Sequence[PromptT]`
- `user_prompts -> Sequence[UserPromptT]`
- `tools -> Sequence[ToolT]`

Aggregation adapters implemented by typed layer families:

- `wrap_prompt(prompt: PromptT) -> object`
- `wrap_user_prompt(prompt: UserPromptT) -> object`
- `wrap_tool(tool: ToolT) -> object`
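The authoring surface above can be sketched with a framework-free stand-in. This is illustrative only: `GreetingLayer` is a hypothetical class that mirrors the documented attribute names, not the real `agenton.layers.Layer` base class.

```python
from typing import Sequence


class GreetingLayer:
    """Stand-in with the same shape as the documented layer surface."""

    type_id = "example.greeting"  # hypothetical provider id

    def __init__(self, persona: str) -> None:
        self.persona = persona

    # Lifecycle hooks (no-ops in this sketch).
    def on_context_create(self) -> None: ...
    def on_context_resume(self) -> None: ...
    def on_context_suspend(self) -> None: ...
    def on_context_delete(self) -> None: ...

    # Prompt/tool authoring surfaces.
    @property
    def prefix_prompts(self) -> Sequence[str]:
        return [f"You are {self.persona}."]

    @property
    def suffix_prompts(self) -> Sequence[str]:
        return ["Answer concisely."]

    @property
    def user_prompts(self) -> Sequence[str]:
        return []

    @property
    def tools(self) -> Sequence[object]:
        return []


layer = GreetingLayer("a helpful assistant")
print(layer.prefix_prompts[0])  # You are a helpful assistant.
```

A real layer would additionally declare `config_type`, `runtime_state_type`, and `deps_type`, and receive `config`, `deps`, and `runtime_state` from `CompositorRun`.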
### Schema defaults and lifecycle enums

- `LayerConfig`: base DTO for serializable layer config schemas.
- `LayerConfigValue`: JSON value or concrete `LayerConfig` DTO.
- `EmptyLayerConfig`: default config schema for layers without config.
- `EmptyRuntimeState`: default serializable runtime-state schema.
- `LayerDeps`: typed dependency container base.
- `NoLayerDeps`: dependency container for layers with no dependencies.
- `LifecycleState`: `NEW`, `ACTIVE`, `SUSPENDED`, `CLOSED`.
- `ExitIntent`: `DELETE`, `SUSPEND`.

`ACTIVE` is internal to an entered run and is rejected in external snapshots.
### Typed layer families: `agenton.layers.types`

- `PlainLayer[DepsT, ConfigT, RuntimeStateT]`
- `PydanticAILayer[DepsT, AgentDepsT, ConfigT, RuntimeStateT]`

Tagged aggregate item types:

- `PlainPromptType`, `PlainUserPromptType`, `PlainToolType`
- `PydanticAIPromptType`, `PydanticAIUserPromptType`, `PydanticAIToolType`
- `AllPromptTypes`, `AllUserPromptTypes`, `AllToolTypes`
## Compositor: `agenton.compositor`

### Config models

- `LayerNodeConfig`: `name`, `type`, `deps`, `metadata`.
- `CompositorConfig`: `schema_version`, `layers`.
- `LayerConfigInput`: accepted per-run config input for one node.

Config nodes are pure serializable graph topology. Per-run layer config is passed separately to `Compositor.enter(configs=...)` keyed by node name.
### Providers and graph nodes

`LayerProvider[LayerT]` is a reusable validated factory for one concrete layer class.

- `LayerProvider.from_layer_type(layer_type) -> LayerProvider`: construct through `layer_type.from_config`.
- `LayerProvider.from_factory(layer_type=..., create=...) -> LayerProvider`: construct through a custom typed-config factory.
- `type_id -> str | None`: provider id declared by the layer type.
- `validate_config(config=None) -> LayerConfig`: validate config without invoking the factory.
- `create_layer(config=None) -> LayerT`: validate config and create a fresh layer.
- `create_layer_from_config(config) -> LayerT`: create from already validated config and enforce fresh-instance semantics.

`LayerNode(name, implementation, deps=None, metadata=None)` creates a stateless graph node from a `Layer` subclass or `LayerProvider`. `deps` maps dependency field names on the node's layer class to other node names.
### `Compositor`

`Compositor[PromptT, ToolT, LayerPromptT, LayerToolT, UserPromptT, LayerUserPromptT]` owns the ordered graph plan and provider construction plans.

Construction:

- `Compositor(nodes, prompt_transformer=None, user_prompt_transformer=None, tool_transformer=None)`.
- `Compositor.from_config(conf, providers=..., node_providers=None, prompt_transformer=None, user_prompt_transformer=None, tool_transformer=None)`.

Public properties and entry API:

- `nodes -> tuple[LayerNode, ...]`: stateless graph plan in order.
- `enter(configs=None, session_snapshot=None) -> AsyncIterator[CompositorRun]`: validate per-run configs and optional snapshot, create fresh layers, bind direct dependencies, enter hooks in graph order, and exit hooks in reverse order.

`providers` resolve graph node `type` ids. `node_providers` are keyed by node name and override type-id providers for node-specific construction.
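The enter/exit hook ordering can be illustrated with a minimal sketch. The `FakeLayer` objects and `enter` context manager below are stand-ins for illustration, not the real `Compositor.enter(...)` implementation:

```python
import asyncio
from contextlib import asynccontextmanager

calls: list[str] = []


class FakeLayer:
    def __init__(self, name: str) -> None:
        self.name = name

    def on_context_create(self) -> None:
        calls.append(f"enter:{self.name}")

    def on_context_suspend(self) -> None:
        calls.append(f"exit:{self.name}")


@asynccontextmanager
async def enter(layers: list[FakeLayer]):
    # Enter hooks run in graph order...
    for layer in layers:
        layer.on_context_create()
    try:
        yield layers
    finally:
        # ...and exit hooks run in reverse order.
        for layer in reversed(layers):
            layer.on_context_suspend()


async def main() -> None:
    async with enter([FakeLayer("a"), FakeLayer("b"), FakeLayer("c")]):
        pass


asyncio.run(main())
print(calls)  # ['enter:a', 'enter:b', 'enter:c', 'exit:c', 'exit:b', 'exit:a']
```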
### `CompositorRun`

`CompositorRun` is the single-invocation runtime object yielded by `Compositor.enter(...)`.

Fields:

- `slots: OrderedDict[str, LayerRunSlot]`
- `session_snapshot: CompositorSessionSnapshot | None`

Layer access and exit intent:

- `get_layer(name) -> Layer`
- `get_layer(name, layer_type) -> LayerT`
- `suspend_on_exit() -> None`
- `delete_on_exit() -> None`
- `suspend_layer_on_exit(name) -> None`
- `delete_layer_on_exit(name) -> None`

Aggregation properties:

- `prompts -> list[PromptT]`: prefix prompts in layer order, suffix prompts in reverse layer order, then optional `prompt_transformer`.
- `user_prompts -> list[UserPromptT]`: user prompts in layer order, then optional `user_prompt_transformer`.
- `tools -> list[ToolT]`: tools in layer order, then optional `tool_transformer`.

Snapshot API:

- `snapshot_session() -> CompositorSessionSnapshot`: snapshot non-active layer lifecycle state and runtime state.

`session_snapshot` is populated after context exit. Core run slots default to delete-on-exit; request suspend before exit when the next snapshot must be resumable.
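The aggregation order for `prompts` (prefix prompts in layer order, then suffix prompts in reverse layer order) can be sketched with plain lists; the dicts here are stand-ins for layers:

```python
# Two hypothetical layers, each contributing prefix and suffix prompts.
layers = [
    {"name": "a", "prefix": ["pa"], "suffix": ["sa"]},
    {"name": "b", "prefix": ["pb"], "suffix": ["sb"]},
]

prompts: list[str] = []
# Prefix prompts in layer order...
for layer in layers:
    prompts.extend(layer["prefix"])
# ...then suffix prompts in reverse layer order.
for layer in reversed(layers):
    prompts.extend(layer["suffix"])

print(prompts)  # ['pa', 'pb', 'sb', 'sa']
```

This nesting means the first layer's suffix wraps everything contributed by later layers, analogous to middleware ordering.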
### Run slots and snapshots

- `LayerRunSlot`: `layer`, `lifecycle_state`, `exit_intent`.
- `LayerSessionSnapshot`: `name`, `lifecycle_state`, `runtime_state`.
- `CompositorSessionSnapshot`: `schema_version`, `layers`.

Snapshots include ordered layer lifecycle state and JSON-safe runtime state only. They exclude live resources, dependencies, prompts, tools, per-run config, and exit intent.
## Collection layers and transformers

### Plain layers: `agenton_collections.layers.plain`

- `PromptLayer`: config-backed layer with `PromptLayerConfig(prefix, user, suffix)` and `type_id = "plain.prompt"`.
- `ObjectLayer`: factory-backed layer for Python objects.
- `ToolsLayer`: factory-backed layer for plain callables.
- `DynamicToolsLayer`: factory-backed layer for object-bound callables.
- `with_object`: decorator for dynamic tools whose first argument is supplied by an `ObjectLayer` dependency.
### Pydantic AI bridge

`agenton_collections.layers.pydantic_ai.PydanticAIBridgeLayer` exposes pydantic-ai system prompts, user prompts, and tools while depending on an `ObjectLayer` for `RunContext.deps`.

`agenton_collections.transformers.pydantic_ai.PYDANTIC_AI_TRANSFORMERS` provides:

- `prompt_transformer`: maps tagged Agenton prompt items to pydantic-ai system prompt functions.
- `user_prompt_transformer`: maps tagged Agenton user prompt items to pydantic-ai `UserContent` values.
- `tool_transformer`: maps tagged Agenton tool items to pydantic-ai tools.
# Dify Agent Run API

The Dify Agent API exposes asynchronous agent runs backed by Agenton state-only layer composition, Pydantic AI runtime execution, Redis run records, and per-run Redis Streams event logs. The FastAPI application lives at `dify-agent/src/dify_agent/server/app.py`.

Public Python DTOs and event models are exported from `dify_agent.protocol.schemas`. `dify_agent.server.schemas` is intentionally server-only and should not be used by API consumers.
## Input model

Create-run requests accept a public `RunComposition` and an optional `CompositorSessionSnapshot`. There is **no top-level `user_prompt` or model profile field**. User input and model/provider selection are supplied by Agenton layers. `on_exit` optionally controls whether layers suspend or delete when the run leaves the active session; the default is suspend for all layers. In the MVP server, the safe provider set includes `plain.prompt`, `dify.plugin`, and `dify.plugin.llm`. The runtime reads the LLM model layer named by `DIFY_AGENT_MODEL_LAYER_ID`, whose public value is `"llm"`.

Blank user input is rejected. A request with no user prompt, an empty string, or only whitespace strings such as `"user": ["", " "]` returns `422` before a run record is created.

The server does not implement a Pydantic AI history layer. Resumable Agenton state is represented only by `session_snapshot`.
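The blank-input rule above amounts to stripping each prompt string and requiring at least one non-empty result. A hypothetical helper (not part of the server) that mirrors the documented behavior:

```python
def has_user_input(user_prompts: list[str]) -> bool:
    """True if at least one prompt contains non-whitespace text.

    Mirrors the documented rule: no prompts, empty strings, or
    whitespace-only strings count as blank input (rejected with 422).
    """
    return any(prompt.strip() for prompt in user_prompts)


print(has_user_input([]))        # False
print(has_user_input(["", " "])) # False
print(has_user_input(["hello"])) # True
```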
## Create a run

```http
POST /runs
Content-Type: application/json
```

Request:

```json
{
  "composition": {
    "schema_version": 1,
    "layers": [
      {
        "name": "prompt",
        "type": "plain.prompt",
        "config": {
          "prefix": "You are a concise assistant.",
          "user": "Say hello from the Dify Agent API."
        }
      },
      {
        "name": "plugin",
        "type": "dify.plugin",
        "config": {
          "tenant_id": "replace-with-tenant-id",
          "plugin_id": "langgenius/openai"
        }
      },
      {
        "name": "llm",
        "type": "dify.plugin.llm",
        "deps": {
          "plugin": "plugin"
        },
        "config": {
          "model_provider": "openai",
          "model": "gpt-4o-mini",
          "credentials": {
            "api_key": "replace-with-provider-key"
          },
          "model_settings": {
            "temperature": 0.2
          }
        }
      }
    ]
  },
  "session_snapshot": null,
  "on_exit": {
    "default": "suspend",
    "layers": {
      "prompt": "delete"
    }
  }
}
```

Response (`202 Accepted`):

```json
{
  "run_id": "4a7f9a98-5c55-48d0-8f3e-87ef2cf81234",
  "status": "running"
}
```

The server persists the run record and schedules execution immediately in the same FastAPI process. Redis is not used as a job queue. Run records and per-run event streams expire after `DIFY_AGENT_RUN_RETENTION_SECONDS`, which defaults to `259200` seconds (3 days).

`dify.plugin` receives tenant/plugin identity only; daemon URL, API key, timeout, and connection limits are server settings. `dify.plugin.llm.credentials` accepts scalar values only (`string`, `number`, `boolean`, or `null`). Unknown `on_exit.layers` keys return `422` before a run record is created.

Validation error example (`422`):

```json
{
  "detail": "run.user_prompts must not be empty"
}
```
## Get run status

```http
GET /runs/{run_id}
```

Response:

```json
{
  "run_id": "4a7f9a98-5c55-48d0-8f3e-87ef2cf81234",
  "status": "succeeded",
  "created_at": "2026-05-08T12:00:00Z",
  "updated_at": "2026-05-08T12:00:02Z",
  "error": null
}
```

Status values are:

- `running`
- `succeeded`
- `failed`

Unknown or expired run ids return `404` with `"run not found"`.
## Poll events

```http
GET /runs/{run_id}/events?after=0-0&limit=100
```

Cursor values are Redis Stream IDs. Use `after=0-0` to read from the beginning. The response includes `next_cursor`; pass it as the next `after` value to continue polling.

Response:

```json
{
  "run_id": "4a7f9a98-5c55-48d0-8f3e-87ef2cf81234",
  "events": [
    {
      "id": "1715170000000-0",
      "run_id": "4a7f9a98-5c55-48d0-8f3e-87ef2cf81234",
      "type": "run_started",
      "data": {},
      "created_at": "2026-05-08T12:00:00Z"
    }
  ],
  "next_cursor": "1715170000000-0"
}
```
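The cursor protocol can be sketched as a pure loop over pages. `drain_events` and the fake two-page stream below are hypothetical; the empty-page stop condition is simplified for illustration, whereas a real consumer would also watch for the terminal `run_succeeded`/`run_failed` event types:

```python
def drain_events(fetch_page, cursor: str = "0-0"):
    """Collect events by feeding next_cursor back as the next `after`.

    fetch_page(after) must return {"events": [...], "next_cursor": str}.
    """
    events: list[dict] = []
    while True:
        page = fetch_page(cursor)
        if not page["events"]:
            return events, cursor
        events.extend(page["events"])
        cursor = page["next_cursor"]


# Fake two-page stream keyed by the `after` cursor, for illustration.
pages = {
    "0-0": {"events": [{"id": "1-0", "type": "run_started"}], "next_cursor": "1-0"},
    "1-0": {"events": [{"id": "2-0", "type": "run_succeeded"}], "next_cursor": "2-0"},
    "2-0": {"events": [], "next_cursor": "2-0"},
}

events, cursor = drain_events(lambda after: pages[after])
print([e["type"] for e in events])  # ['run_started', 'run_succeeded']
```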
## Stream events with SSE

```http
GET /runs/{run_id}/events/sse
```

SSE frames use the run event id as `id`, the event type as `event`, and the full `RunEvent` JSON object as `data`:

```text
id: 1715170000000-0
event: run_started
data: {"id":"1715170000000-0","run_id":"...","type":"run_started","data":{},"created_at":"..."}

```

Replay can start from a cursor with either:

- `GET /runs/{run_id}/events/sse?after=1715170000000-0`
- `Last-Event-ID: 1715170000000-0`

If both are provided, the `after` query parameter takes precedence.
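A minimal parser for the frame shape shown above (one `field: value` per line) can be written with string splitting alone. This is a sketch, not the client's internal SSE implementation, and it ignores multi-line `data:` fields and comment lines that full SSE allows:

```python
def parse_sse_frame(frame: str) -> dict[str, str]:
    """Split one SSE frame into its id/event/data fields."""
    fields: dict[str, str] = {}
    for line in frame.strip().splitlines():
        key, _, value = line.partition(":")
        fields[key] = value.lstrip()
    return fields


frame = (
    "id: 1715170000000-0\n"
    "event: run_started\n"
    'data: {"type":"run_started","data":{}}\n'
)
parsed = parse_sse_frame(frame)
print(parsed["event"])  # run_started
```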
## Python client

Use `dify_agent.client.Client` for both async and sync code. Async methods use normal names; sync methods add `_sync`.

```python {test="skip" lint="skip"}
from agenton.layers import ExitIntent
from agenton_collections.layers.plain import PromptLayerConfig
from dify_agent.client import Client
from dify_agent.layers.dify_plugin import DifyPluginLLMLayerConfig, DifyPluginLayerConfig
from dify_agent.protocol import (
    DIFY_AGENT_MODEL_LAYER_ID,
    CreateRunRequest,
    LayerExitSignals,
    RunComposition,
    RunLayerSpec,
)


async def main() -> None:
    request = CreateRunRequest(
        composition=RunComposition(
            layers=[
                RunLayerSpec(name="prompt", type="plain.prompt", config=PromptLayerConfig(user="hello")),
                RunLayerSpec(
                    name="plugin",
                    type="dify.plugin",
                    config=DifyPluginLayerConfig(tenant_id="tenant-id", plugin_id="langgenius/openai"),
                ),
                RunLayerSpec(
                    name=DIFY_AGENT_MODEL_LAYER_ID,
                    type="dify.plugin.llm",
                    deps={"plugin": "plugin"},
                    config=DifyPluginLLMLayerConfig(
                        model_provider="openai",
                        model="gpt-4o-mini",
                        credentials={"api_key": "provider-key"},
                    ),
                ),
            ]
        ),
        on_exit=LayerExitSignals(layers={"prompt": ExitIntent.DELETE}),
    )
    async with Client(base_url="http://localhost:8000") as client:
        run = await client.create_run(request)
        async for event in client.stream_events(run.run_id):
            print(event)
```

```python {test="skip" lint="skip"}
from agenton_collections.layers.plain import PromptLayerConfig
from dify_agent.client import Client
from dify_agent.layers.dify_plugin import DifyPluginLLMLayerConfig, DifyPluginLayerConfig
from dify_agent.protocol import DIFY_AGENT_MODEL_LAYER_ID, CreateRunRequest, RunComposition, RunLayerSpec


request = CreateRunRequest(
    composition=RunComposition(
        layers=[
            RunLayerSpec(name="prompt", type="plain.prompt", config=PromptLayerConfig(user="hello")),
            RunLayerSpec(
                name="plugin",
                type="dify.plugin",
                config=DifyPluginLayerConfig(tenant_id="tenant-id", plugin_id="langgenius/openai"),
            ),
            RunLayerSpec(
                name=DIFY_AGENT_MODEL_LAYER_ID,
                type="dify.plugin.llm",
                deps={"plugin": "plugin"},
                config=DifyPluginLLMLayerConfig(
                    model_provider="openai",
                    model="gpt-4o-mini",
                    credentials={"api_key": "provider-key"},
                ),
            ),
        ]
    )
)

with Client(base_url="http://localhost:8000") as client:
    run = client.create_run_sync(request)
    terminal = client.wait_run_sync(run.run_id)
```

`stream_events` and `stream_events_sync` parse SSE without an extra dependency. They reconnect by default from the latest yielded event id and stop after `run_succeeded` or `run_failed`. They do not reconnect for HTTP 4xx responses, DTO validation failures, or malformed SSE frames. `create_run` and `create_run_sync` require a `CreateRunRequest` DTO and never retry `POST /runs`; if a timeout occurs, the caller must decide whether to inspect existing runs or submit a new run.
## Event types and order

A normal successful run emits:

1. `run_started`
2. zero or more `pydantic_ai_event`
3. `run_succeeded`

A failed run emits:

1. `run_started`
2. zero or more `pydantic_ai_event`
3. `run_failed`

Each event keeps the same envelope shape and has typed `data`: `run_started` uses `{}`, `pydantic_ai_event` uses Pydantic AI's `AgentStreamEvent` union, `run_succeeded` uses `{ "output": JsonValue, "session_snapshot": CompositorSessionSnapshot }`, and `run_failed` uses `{ "error": string, "reason": string | null }`. The session snapshot from `run_succeeded.data` can be sent as `session_snapshot` in a later create-run request with the same composition layer names and order.
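Carrying the snapshot forward is plain dict plumbing. A hypothetical sketch (the event payload and empty layer list are illustrative placeholders, not real server output):

```python
# A terminal event as documented: run_succeeded carries output and
# a session snapshot in its data payload.
succeeded_event = {
    "type": "run_succeeded",
    "data": {
        "output": "hi",
        "session_snapshot": {"schema_version": 1, "layers": []},
    },
}

# The next create-run request reuses that snapshot; the composition
# must keep the same layer names and order as the original run.
next_request = {
    "composition": {"schema_version": 1, "layers": []},
    "session_snapshot": succeeded_event["data"]["session_snapshot"],
}

print(next_request["session_snapshot"]["schema_version"])  # 1
```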
## Consumer examples

See:

- `dify-agent/examples/dify_agent/dify_agent_examples/run_server_consumer.py` for cursor polling
- `dify-agent/examples/dify_agent/dify_agent_examples/run_server_sse_consumer.py` for SSE consumption
- `dify-agent/examples/dify_agent/dify_agent_examples/run_server_sync_client.py` for synchronous client usage