VirtualWorkflowSynthesizer.ensure_workflow() creates a real draft
workflow on the first call for a legacy app and persists it to the
database. Subsequent calls return the existing draft.
This is needed because AdvancedChatAppGenerator's worker thread looks
up workflows from the database by ID. Instead of hacking the generator
to skip DB lookups, we treat this as a lazy one-time upgrade: the old
app gets a real workflow that can also be edited in the workflow editor.
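A rough sketch of the get-or-create shape described above; only the class and method names come from this change, and the store, graph layout, and field names here are illustrative assumptions (the real ensure_workflow() persists a Workflow row):

```python
# Minimal sketch of the lazy one-time upgrade. The in-memory store is a
# stand-in for the workflow table; the real implementation writes a
# draft Workflow row so the generator's worker can look it up by ID.
class VirtualWorkflowSynthesizer:
    def __init__(self, store: dict):
        self._store = store

    def ensure_workflow(self, app_id: str, app_model_config: dict) -> dict:
        # First call: synthesize and persist a real draft workflow.
        if app_id not in self._store:
            self._store[app_id] = {
                "id": f"wf-{app_id}",
                "graph": {"nodes": ["start", "agent-v2", "answer"]},
                "version": "draft",
                "config": app_model_config,
            }
        # Subsequent calls: return the existing draft unchanged.
        return self._store[app_id]

store: dict = {}
synth = VirtualWorkflowSynthesizer(store)
wf1 = synth.ensure_workflow("app-1", {"model": "gpt-4"})
wf2 = synth.ensure_workflow("app-1", {"model": "gpt-4"})
assert wf1 is wf2  # same persisted draft on the second call
```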
Verified: an old chat app created on the main branch ("What is 2+2?" -> "Four")
and an old agent-chat app ("Say hello" -> "Hello!") both execute
successfully through the Agent V2 engine with AGENT_V2_TRANSPARENT_UPGRADE=true.
Made-with: Cursor
Co-authored-by: Brian Wang <BrianWang1990@users.noreply.github.com>
Co-authored-by: test <test@testdeMac-mini.local>
Co-authored-by: BrianWang1990 <512dabing99@163.com>
Co-authored-by: Stephen Zhou <hi@hyoban.cc>
Co-authored-by: Stephen Zhou <38493346+hyoban@users.noreply.github.com>
Add two feature-flag-controlled upgrade paths that allow existing apps
and LLM nodes to transparently run through the Agent V2 engine without
any database migration:
1. AGENT_V2_TRANSPARENT_UPGRADE (default: off):
When enabled, old apps (chat/completion/agent-chat) bypass legacy
Easy-UI runners. VirtualWorkflowSynthesizer converts AppModelConfig
to an in-memory Workflow (start -> agent-v2 -> answer) at runtime,
then executes via AdvancedChatAppGenerator. Falls back to legacy
path on any synthesis error.
VirtualWorkflowSynthesizer maps:
- model JSON -> ModelConfig
- pre_prompt/chat_prompt_config -> prompt_template
- agent_mode.tools -> ToolMetadata[]
- agent_mode.strategy -> agent_strategy
- dataset_configs -> context
- file_upload -> vision
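The field mapping above can be sketched as a single function; the input keys follow the commit text, but the output shape and defaults are assumptions, not the real AgentV2NodeData schema:

```python
# Hypothetical sketch of the AppModelConfig -> agent-v2 node mapping.
def synthesize_node_data(app_model_config: dict) -> dict:
    agent_mode = app_model_config.get("agent_mode", {})
    return {
        "type": "agent-v2",
        # model JSON -> ModelConfig
        "model": app_model_config.get("model", {}),
        # pre_prompt/chat_prompt_config -> prompt_template
        "prompt_template": app_model_config.get("pre_prompt", ""),
        # agent_mode.tools -> ToolMetadata[]
        "tools": agent_mode.get("tools", []),
        # agent_mode.strategy -> agent_strategy
        "agent_strategy": agent_mode.get("strategy", "function_call"),
        # dataset_configs -> context
        "context": app_model_config.get("dataset_configs", {}),
        # file_upload -> vision
        "vision": app_model_config.get("file_upload", {}),
    }

node = synthesize_node_data({
    "model": {"name": "gpt-4o"},
    "pre_prompt": "You are helpful.",
    "agent_mode": {"strategy": "react", "tools": []},
})
assert node["agent_strategy"] == "react"
```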
2. AGENT_V2_REPLACES_LLM (default: off):
When enabled, DifyNodeFactory.create_node() transparently remaps
nodes with type="llm" to type="agent-v2" before class resolution.
Since AgentV2NodeData is a strict superset of LLMNodeData, the
mapping is lossless. With tools=[], Agent V2 behaves identically
to LLM Node.
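The remap is small enough to sketch in full; the flag check before class resolution is an assumption about where the hook lives, and only the "llm"/"agent-v2" type strings come from this change:

```python
# Sketch of the transparent type remap in create_node().
AGENT_V2_REPLACES_LLM = True  # feature flag (default False in the real config)

def resolve_node_type(node_type: str) -> str:
    # AgentV2NodeData is a strict superset of LLMNodeData, so the remap
    # is lossless; with tools=[] the node behaves like LLM Node.
    if AGENT_V2_REPLACES_LLM and node_type == "llm":
        return "agent-v2"
    return node_type

assert resolve_node_type("llm") == "agent-v2"
assert resolve_node_type("code") == "code"  # other node types untouched
```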
Both flags default to False for safety; turning a flag off is an instant rollback.
46 existing tests pass. Flask starts successfully.
Made-with: Cursor
Replace the hardcoded FunctionCallAgentRunner / CotChatAgentRunner /
CotCompletionAgentRunner selection in AgentChatAppRunner with the new
AgentAppRunner class that uses StrategyFactory from Phase 1.
Before: AgentChatAppRunner manually selects FC/CoT runner class based on
model features and LLM mode, then instantiates it directly.
After: AgentChatAppRunner instantiates AgentAppRunner (from sandbox branch),
which internally uses StrategyFactory.create_strategy() to auto-select
the right strategy, and uses ToolInvokeHook for proper agent_invoke
with file handling and thought persistence.
This unifies the agent execution engine: both the new Agent V2 workflow
node and the legacy agent-chat app now use the same StrategyFactory
and AgentPattern implementations.
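A before/after sketch of the selection change; the class and strategy names follow the commit text, but the selection criteria and string return values shown here are illustrative assumptions:

```python
# Before: AgentChatAppRunner manually picks the FC/CoT runner class.
def select_runner_legacy(supports_function_call: bool, is_chat_model: bool) -> str:
    if supports_function_call:
        return "FunctionCallAgentRunner"
    return "CotChatAgentRunner" if is_chat_model else "CotCompletionAgentRunner"

# After: AgentAppRunner delegates the choice to StrategyFactory, the
# same factory the Agent V2 workflow node uses.
class StrategyFactory:
    @staticmethod
    def create_strategy(supports_function_call: bool, is_chat_model: bool) -> str:
        if supports_function_call:
            return "function_call"
        return "cot_chat" if is_chat_model else "cot_completion"

assert select_runner_legacy(False, True) == "CotChatAgentRunner"
assert StrategyFactory.create_strategy(False, True) == "cot_chat"
```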
Also fixed: the command and file_upload nodes now use string node_type
values instead of BuiltinNodeTypes.COMMAND/FILE_UPLOAD, which are not
present in the current graphon version.
46 tests pass. Flask starts successfully.
Made-with: Cursor
Integrate the ported sandbox system with Agent V2 node:
- Add DIFY_SANDBOX_CONTEXT_KEY to app_invoke_entities for passing
sandbox through run_context without modifying graphon
- DifyNodeFactory._resolve_sandbox() extracts sandbox from run_context
and passes it to AgentV2Node constructor
- AgentV2Node accepts optional sandbox parameter
- AgentV2ToolManager supports dual execution paths:
- _invoke_tool_directly(): standard ToolEngine.generic_invoke (no sandbox)
- _invoke_tool_in_sandbox(): delegates to SandboxBashSession.run_tool()
which uses DifyCli to call back to Dify API from inside the sandbox
- Graceful fallback: if sandbox execution fails, it logs a warning and
  returns an error message (it does not crash the agent loop)
To enable sandbox for an Agent workflow:
1. Create a Sandbox via SandboxBuilder
2. Add it to run_context under DIFY_SANDBOX_CONTEXT_KEY
3. Agent V2 nodes will automatically use sandbox for tool execution
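The three steps above boil down to stashing the sandbox in run_context and letting the factory pull it back out; SandboxBuilder and _resolve_sandbox() are named in this change, but the context-key value and run_context shape here are assumptions:

```python
# Sketch of sandbox enablement for an Agent workflow.
DIFY_SANDBOX_CONTEXT_KEY = "dify_sandbox"  # illustrative key value

class SandboxBuilder:  # stand-in for the real builder
    def build(self) -> dict:
        return {"session": "sandbox-bash-session"}

def resolve_sandbox(run_context: dict):
    # Mirrors DifyNodeFactory._resolve_sandbox(): return the sandbox if
    # present, else None so the node falls back to direct tool invocation.
    return run_context.get(DIFY_SANDBOX_CONTEXT_KEY)

# Steps 1-2: build a sandbox and add it to run_context under the key.
run_context = {DIFY_SANDBOX_CONTEXT_KEY: SandboxBuilder().build()}
# Step 3: Agent V2 nodes pick it up automatically.
assert resolve_sandbox(run_context) is not None
assert resolve_sandbox({}) is None  # no sandbox -> direct invocation path
```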
46 existing tests still pass.
Made-with: Cursor
Fixes discovered during end-to-end testing of Agent workflow execution:
1. ModelManager instantiation: use ModelManager.for_tenant() instead of
ModelManager() which requires a ProviderManager argument
2. Variable template resolution: use VariableTemplateParser(template).format()
instead of non-existent resolve_template() static method
3. invoke_llm() signature: remove unsupported 'user' keyword argument
4. Event dispatch: remove ModelInvokeCompletedEvent from _run() yield
(graphon base Node._dispatch doesn't support it via singledispatch)
5. NodeRunResult metadata: use WorkflowNodeExecutionMetadataKey enum keys
(TOTAL_TOKENS, TOTAL_PRICE, CURRENCY) instead of arbitrary string keys
6. SSE topic mismatch: use AppMode.AGENT (not ADVANCED_CHAT) in
retrieve_events() so publisher and subscriber share the same channel
7. Celery task routing: add AppMode.AGENT to workflow_execute_task._run_app()
alongside ADVANCED_CHAT
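Fix 5 is the easiest to illustrate: metadata keyed by the enum instead of arbitrary strings. The enum member names come from the commit text; the string values and the str-mixin definition are stand-in assumptions:

```python
from enum import Enum

class WorkflowNodeExecutionMetadataKey(str, Enum):  # stand-in for the real enum
    TOTAL_TOKENS = "total_tokens"
    TOTAL_PRICE = "total_price"
    CURRENCY = "currency"

# NodeRunResult metadata keyed by the enum, not ad-hoc strings, so
# downstream consumers can match on the shared key constants.
metadata = {
    WorkflowNodeExecutionMetadataKey.TOTAL_TOKENS: 128,
    WorkflowNodeExecutionMetadataKey.TOTAL_PRICE: "0.0021",
    WorkflowNodeExecutionMetadataKey.CURRENCY: "USD",
}
assert metadata[WorkflowNodeExecutionMetadataKey.TOTAL_TOKENS] == 128
```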
All issues verified fixed: Agent V2 node successfully invokes LLM and
returns "Hello there!" through the full SSE streaming pipeline.
Made-with: Cursor