- Add model_features property and build_execution_context method to
AgentAppRunner to fix mypy attr-defined errors
- Export WorkflowComment, WorkflowCommentReply, WorkflowCommentMention
from models/__init__.py to fix import errors (see the sketch after this list)
- Add NestedNodeGraphRequest, NestedNodeGraphResponse,
NestedNodeParameterSchema to services/workflow/entities.py
- Update test_agent_chat_app_runner: the tests for invalid LLM mode and
invalid strategy now reflect the unified AgentAppRunner behavior, which
no longer raises ValueError in these cases
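The export change, roughly (this assumes the comment models live in
models/workflow.py; the source module path is a guess, not confirmed):

    # models/__init__.py -- re-export the workflow comment models so that
    # `from models import WorkflowComment` resolves (source module assumed)
    from .workflow import (
        WorkflowComment,
        WorkflowCommentMention,
        WorkflowCommentReply,
    )

    __all__ = [
        # ...existing exports...
        "WorkflowComment",
        "WorkflowCommentMention",
        "WorkflowCommentReply",
    ]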
Made-with: Cursor
VirtualWorkflowSynthesizer._build_features() now extracts ALL legacy
app features from AppModelConfig into the synthesized workflow.features:
- opening_statement + suggested_questions
- sensitive_word_avoidance (keywords/API moderation)
- more_like_this
- speech_to_text / text_to_speech
- retriever_resource
Previously workflow.features was hardcoded to "{}", losing all these
features during transparent upgrade. Now AdvancedChatAppRunner's
moderation, opening text, and other feature layers work correctly
for transparently upgraded old apps.
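A rough sketch of the extraction, assuming AppModelConfig stores each
feature as a JSON text column (column names and parsing details are
assumptions, not the actual implementation):

    import json

    def _build_features(self, app_model_config) -> str:
        # Collect the legacy Easy-UI features into the JSON string stored on
        # workflow.features (previously hardcoded to "{}").
        features = {
            "opening_statement": app_model_config.opening_statement or "",
            "suggested_questions": json.loads(app_model_config.suggested_questions or "[]"),
            "sensitive_word_avoidance": json.loads(app_model_config.sensitive_word_avoidance or "{}"),
            "more_like_this": json.loads(app_model_config.more_like_this or "{}"),
            "speech_to_text": json.loads(app_model_config.speech_to_text or "{}"),
            "text_to_speech": json.loads(app_model_config.text_to_speech or "{}"),
            "retriever_resource": json.loads(app_model_config.retriever_resource or "{}"),
        }
        return json.dumps(features)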
Made-with: Cursor
- workflow_execute_task: add AppMode.CHAT/AGENT_CHAT/COMPLETION to the
AdvancedChatAppGenerator routing branch so transparently upgraded old
apps can execute through the workflow engine (sketched below).
- app_generate_service: use app_model.mode (not hardcoded AppMode.AGENT)
for SSE event subscription channel, ensuring the subscriber and
Celery publisher use the same Redis channel key.
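A hedged sketch of both changes (the enum import path, set name, and
surrounding structure are assumptions):

    from models.model import AppMode  # import path assumed

    # workflow_execute_task: modes routed to AdvancedChatAppGenerator;
    # CHAT, AGENT_CHAT and COMPLETION cover transparently upgraded legacy apps.
    ADVANCED_GENERATOR_MODES = {
        AppMode.ADVANCED_CHAT,
        AppMode.CHAT,
        AppMode.AGENT_CHAT,
        AppMode.COMPLETION,
    }

    if AppMode(app_model.mode) in ADVANCED_GENERATOR_MODES:
        ...  # dispatch to AdvancedChatAppGenerator

    # app_generate_service: derive the SSE subscription channel from the app's
    # real mode so subscriber and Celery publisher build the same Redis key.
    channel_mode = app_model.mode  # was hardcoded to AppMode.AGENT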
Made-with: Cursor
1. DSL Import fix: change self._session.commit() to self._session.flush()
in app_dsl_service.py _create_or_update_app() to avoid a "closed transaction"
error (see the sketch after this list). DSL import now works: export agent
app -> import -> new app created.
2. Memory loading attempt: added _load_memory_messages() to AgentV2Node,
which loads TokenBufferMemory from conversation history. However, the chatflow
engine manages conversations differently from easy-UI (the conversation may
not be in the DB at query time, or it uses ConversationVariablePersistenceLayer
instead of the Message table). Memory needs further investigation.
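A minimal sketch of the item-1 change (surrounding lines are illustrative):

    # app_dsl_service.py, _create_or_update_app():
    # flush() assigns the new app's ID while keeping the enclosing transaction
    # open; commit() ended it and caused the "closed transaction" error when
    # the import flow kept using the same session afterwards.
    self._session.add(app)
    self._session.flush()  # was: self._session.commit()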
Test results:
- Multi-turn memory: Turn 1 OK; in Turn 2 the LLM doesn't see history (needs a deeper fix)
- Service API with API Key: PASSED (answer="Sixteen" for 8+8)
- DSL Import: PASSED (status=completed, new app created)
- Token aggregation: PASSED (node=49, workflow=49)
Known issue: memory in multi-turn chatflow needs to use graphon's built-in
memory mechanism (MemoryConfig on the node + ConversationVariablePersistenceLayer)
rather than a direct DB query.
Made-with: Cursor
1. Remove StreamChunkEvent from AgentV2Node._run_without_tools():
The agent-v2 node was yielding StreamChunkEvent during LLM streaming,
AND the downstream answer node was outputting the same text via
{{#agent.text#}} variable reference, causing "FourFour" duplication.
Now text only flows through outputs.text -> answer node (single path).
2. Map inputs to query for completion app transparent upgrade:
Completion apps send {inputs: {query: "..."}}, not {query: "..."}.
The VirtualWorkflowSynthesizer route now extracts the query from inputs
when the top-level query is missing.
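The item-2 fallback, roughly (function and argument names are illustrative):

    def _extract_query(args: dict) -> str:
        # Chat apps send {"query": "..."}; legacy completion apps send
        # {"inputs": {"query": "..."}}. Prefer the top-level key, then fall
        # back to inputs for transparently upgraded completion apps.
        query = args.get("query")
        if not query:
            query = (args.get("inputs") or {}).get("query", "")
        return query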
Verified:
- Old chat app: "What is 2+2?" -> "Four" (was "FourFour")
- Old completion app: {inputs: {query: "What is 3+3?"}} -> "3 + 3 = 6" (was failing)
- Old agent-chat app: still works
Made-with: Cursor
VirtualWorkflowSynthesizer.ensure_workflow() creates a real draft
workflow on the first call for a legacy app, persisting it to the
database; on subsequent calls it returns the existing draft.
This is needed because AdvancedChatAppGenerator's worker thread looks
up workflows from the database by ID. Instead of hacking the generator
to skip DB lookups, we treat this as a lazy one-time upgrade: the old
app gets a real workflow that can also be edited in the workflow editor.
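The get-or-create shape, as a sketch (session handling, column names, and
the _synthesize helper are assumptions, not the actual code):

    def ensure_workflow(self, app_model, app_model_config) -> Workflow:
        # Reuse the draft created by an earlier call for this legacy app.
        existing = (
            db.session.query(Workflow)
            .filter_by(app_id=app_model.id, version="draft")
            .first()
        )
        if existing:
            return existing

        # First call: synthesize start -> agent-v2 -> answer from the legacy
        # AppModelConfig and persist it, so the generator's worker thread can
        # load it from the database by ID.
        workflow = self._synthesize(app_model, app_model_config)
        db.session.add(workflow)
        db.session.commit()
        return workflow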
Verified: old chat app created on main branch ("What is 2+2?" -> "Four")
and old agent-chat app ("Say hello" -> "Hello!") both successfully
execute through the Agent V2 engine with AGENT_V2_TRANSPARENT_UPGRADE=true.
Made-with: Cursor
Add two feature-flag-controlled upgrade paths that allow existing apps
and LLM nodes to transparently run through the Agent V2 engine without
any database migration:
1. AGENT_V2_TRANSPARENT_UPGRADE (default: off):
When enabled, old apps (chat/completion/agent-chat) bypass legacy
Easy-UI runners. VirtualWorkflowSynthesizer converts AppModelConfig
to an in-memory Workflow (start -> agent-v2 -> answer) at runtime,
then executes via AdvancedChatAppGenerator, falling back to the
legacy path on any synthesis error.
VirtualWorkflowSynthesizer maps:
- model JSON -> ModelConfig
- pre_prompt/chat_prompt_config -> prompt_template
- agent_mode.tools -> ToolMetadata[]
- agent_mode.strategy -> agent_strategy
- dataset_configs -> context
- file_upload -> vision
2. AGENT_V2_REPLACES_LLM (default: off):
When enabled, DifyNodeFactory.create_node() transparently remaps
nodes with type="llm" to type="agent-v2" before class resolution.
Since AgentV2NodeData is a strict superset of LLMNodeData, the
mapping is lossless. With tools=[], Agent V2 behaves identically
to LLM Node.
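The item-2 remap is essentially a type substitution before class resolution;
a sketch (the node config shape and surrounding code are assumptions):

    # DifyNodeFactory.create_node(), before resolving the node class:
    node_type = node_config.get("data", {}).get("type")
    if node_type == "llm" and dify_config.AGENT_V2_REPLACES_LLM:
        # AgentV2NodeData is a strict superset of LLMNodeData, so reusing the
        # LLM node's config under the agent-v2 type is lossless; with no tools
        # configured the agent behaves like a plain LLM node.
        node_config["data"]["type"] = "agent-v2"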
Both flags default to False for safety; turning either off is an instant rollback.
46 existing tests pass. Flask starts successfully.
Made-with: Cursor