mirror of https://github.com/langgenius/dify.git, synced 2026-05-09 12:59:18 +08:00
remove the agent notes
This commit is contained in:
parent 2b787bff72, commit 688b2cfc48
@ -1,25 +0,0 @@
## Purpose

`core/model_runtime/entities/message_entities.py` defines the canonical in-memory Pydantic entities for model runtime prompt messages and multi-modal message content. These entities are used across providers (built-in and plugin-backed) and are serialized/deserialized when exchanging prompt/response payloads between layers.

## Key invariants

- `PromptMessage.content` is either a `str`, a list of typed content items (discriminated by `type`), or `None`.
- `PromptMessage.validate_content` normalizes dict/content-model inputs into the correct concrete content classes using `CONTENT_TYPE_MAPPING`.
- `PromptMessage.serialize_content` ensures a list of content items is emitted as a list of plain dicts.
- `AssistantPromptMessage.tool_calls` may coexist with text/multi-modal content and counts toward the message being "non-empty".

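The normalization step can be sketched as follows. This is an illustrative miniature, not Dify's actual code: the class names `TextContent`/`ImageContent` and the `normalize_content` helper are hypothetical stand-ins for the real content models and for what `PromptMessage.validate_content` does with `CONTENT_TYPE_MAPPING`.

```python
# Hypothetical sketch of content normalization: dict items are mapped to
# concrete content classes by their "type" key, mirroring the role of
# CONTENT_TYPE_MAPPING. Class names are illustrative, not Dify's definitions.
from dataclasses import dataclass


@dataclass
class TextContent:
    type: str
    data: str


@dataclass
class ImageContent:
    type: str
    data: str


CONTENT_TYPE_MAPPING = {"text": TextContent, "image": ImageContent}


def normalize_content(content):
    """Accept str, None, or a list of dicts / content objects.

    str and None pass through unchanged; list items given as plain dicts
    are converted into the concrete content class selected by "type".
    """
    if content is None or isinstance(content, str):
        return content
    normalized = []
    for item in content:
        if isinstance(item, dict):
            cls = CONTENT_TYPE_MAPPING[item["type"]]
            normalized.append(cls(**item))
        else:
            normalized.append(item)  # already a content model instance
    return normalized


items = normalize_content([
    {"type": "text", "data": "hi"},
    {"type": "image", "data": "https://example.com/x.png"},
])
```

The same three-way shape (`str` | list | `None`) is what the first invariant above describes; only the list case needs normalization.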
## Opaque pass-through fields

- `opaque_body` is an optional JSON value on `PromptMessageContent` and `AssistantPromptMessage`.
- It is treated as an uninterpreted provider-specific payload and must be passed through unchanged between Dify and plugin LLM providers (no validation/transformation beyond JSON serialization).

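"Unchanged beyond JSON serialization" amounts to a plain round-trip, sketched below with stdlib `json` on a dict-shaped message (the message shape here is illustrative, not Dify's actual entity):

```python
# Illustrative sketch: opaque_body is carried as-is through a JSON
# round-trip, with no validation or interpretation of its shape.
import json

message = {
    "role": "assistant",
    "content": "ok",
    # provider-specific payload; Dify never inspects it (example values)
    "opaque_body": {"provider_trace_id": "abc-123", "nested": [1, 2, 3]},
}

wire = json.dumps(message)    # serialize for transport to/from the plugin
restored = json.loads(wire)   # deserialize on the other side
```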
## Safety / compatibility notes

- Do not make `opaque_body` required; existing providers/plugins may not send it.
- Keep `type` discrimination stable; content subclasses must continue to be selectable via `Field(discriminator="type")`.

@ -1,20 +0,0 @@
## Purpose

`core/model_runtime/model_providers/__base/large_language_model.py` defines the base `LargeLanguageModel` interface used by model providers, including plugin-backed providers via `PluginModelClient`.

## Plugin invocation flow

- For plugin-based providers, `invoke()` delegates to `PluginModelClient.invoke_llm(...)`, which streams `LLMResultChunk` objects from the plugin daemon.
- Dify yields chunks to callers and also aggregates chunks to fire `after_invoke` callbacks (and to construct a blocking `LLMResult` when `stream=False`).

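The dual role of the generator (yield every chunk to the caller, while also accumulating enough state to fire the post-invoke callback) can be sketched like this. The chunk shape and the `stream_and_aggregate` / `after_invoke` names are assumptions for illustration, not Dify's real signatures:

```python
# Minimal sketch of yield-while-aggregating: the caller sees every streamed
# chunk unchanged, and the aggregated result is handed to a callback once the
# stream is exhausted (a stand-in for after_invoke / blocking LLMResult).
def stream_and_aggregate(chunks, after_invoke):
    accumulated = []
    for chunk in chunks:           # chunks as streamed from the plugin daemon
        accumulated.append(chunk["delta"])
        yield chunk                # caller receives each chunk as-is
    # stream finished: fire the callback with the aggregated result
    after_invoke({"content": "".join(accumulated)})


seen = []
results = []
for c in stream_and_aggregate([{"delta": "Hel"}, {"delta": "lo"}],
                              results.append):
    seen.append(c)
```

Note that the callback only fires after the consumer has drained the generator; a caller that abandons the stream early would never trigger it, which is one reason aggregation and callback dispatch live inside the same generator.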
## Key invariants / edge cases

- When aggregating chunks into an `LLMResult`, preserve provider-specific fields on the assistant message:
  - `AssistantPromptMessage.opaque_body` (pass-through, uninterpreted JSON).
  - Incremental `tool_calls` (merge deltas via `_increase_tool_call`).
- Chunk `.prompt_messages` may be empty for plugin responses (compat layer for the plugin daemon); Dify re-attaches the original request `prompt_messages` for downstream consumers.

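Delta merging in the spirit of `_increase_tool_call` can be sketched as below. The assumed behavior (deltas sharing an id have their argument fragments concatenated in arrival order; the first non-empty name wins) is an illustration of the technique, not Dify's implementation:

```python
# Sketch of incremental tool_call merging: streamed deltas for the same call
# id accumulate into one tool call, concatenating JSON-argument fragments.
def merge_tool_call_deltas(deltas):
    merged = {}   # id -> accumulated tool call
    order = []    # preserve first-seen order of call ids
    for d in deltas:
        tc = merged.get(d["id"])
        if tc is None:
            merged[d["id"]] = {
                "id": d["id"],
                "name": d.get("name", ""),
                "arguments": d.get("arguments", ""),
            }
            order.append(d["id"])
        else:
            tc["name"] = tc["name"] or d.get("name", "")
            tc["arguments"] += d.get("arguments", "")
    return [merged[i] for i in order]


calls = merge_tool_call_deltas([
    {"id": "call_1", "name": "get_weather", "arguments": '{"cit'},
    {"id": "call_1", "arguments": 'y": "Paris"}'},
])
```

The key property is that argument fragments are raw string pieces of one JSON document, so they must be concatenated, never parsed individually.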
@ -1,12 +0,0 @@
## Purpose

Unit tests for plugin-backed `LargeLanguageModel.invoke()` behavior around preserving provider pass-through data.

## What it covers

- `AssistantPromptMessage.opaque_body` from plugin `LLMResultChunk` deltas is preserved:
  - On the returned `LLMResult` in blocking (`stream=False`) mode.
  - On the aggregated `LLMResult` passed to `on_after_invoke` callbacks in streaming mode.
- Streaming mode also verifies that `chunk.prompt_messages` is re-attached to the original request prompt messages.
- Streaming aggregation merges incremental `tool_calls` across chunks.

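The assertion pattern these tests describe can be shown in miniature: build fake streamed deltas, aggregate them, and assert the pass-through field survives. Everything here (the `aggregate` helper, the chunk shape, and the "last non-null `opaque_body` wins" rule) is a hypothetical stand-in for the real test fixtures:

```python
# Hypothetical miniature of the test pattern: fake chunk deltas are
# aggregated and the uninterpreted opaque_body must survive aggregation.
def aggregate(chunks):
    text, opaque = [], None
    for chunk in chunks:
        text.append(chunk["delta"])
        if chunk.get("opaque_body") is not None:
            opaque = chunk["opaque_body"]   # assumed: last non-null wins
    return {"content": "".join(text), "opaque_body": opaque}


def test_opaque_body_preserved():
    chunks = [
        {"delta": "Hi", "opaque_body": None},
        {"delta": "!", "opaque_body": {"trace": "t-1"}},
    ]
    result = aggregate(chunks)
    assert result["content"] == "Hi!"
    assert result["opaque_body"] == {"trace": "t-1"}


test_opaque_body_preserved()
```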