Add two feature-flag-controlled upgrade paths that allow existing apps
and LLM nodes to transparently run through the Agent V2 engine without
any database migration:
1. AGENT_V2_TRANSPARENT_UPGRADE (default: off):
When enabled, old apps (chat/completion/agent-chat) bypass legacy
Easy-UI runners. VirtualWorkflowSynthesizer converts AppModelConfig
to an in-memory Workflow (start -> agent-v2 -> answer) at runtime,
then executes via AdvancedChatAppGenerator. Falls back to legacy
path on any synthesis error.
VirtualWorkflowSynthesizer maps:
- model JSON -> ModelConfig
- pre_prompt/chat_prompt_config -> prompt_template
- agent_mode.tools -> ToolMetadata[]
- agent_mode.strategy -> agent_strategy
- dataset_configs -> context
- file_upload -> vision
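The mapping above can be sketched as a pure function; a minimal illustration, assuming the synthesizer produces a plain graph dict (the helper name `synthesize_virtual_workflow` and the exact field keys are assumptions, not the real VirtualWorkflowSynthesizer API):

```python
# Hypothetical sketch: build an in-memory start -> agent-v2 -> answer graph
# from an AppModelConfig-like dict. Field names mirror the mapping above;
# the helper itself is illustrative.
def synthesize_virtual_workflow(app_config: dict) -> dict:
    agent_mode = app_config.get("agent_mode", {})
    agent_data = {
        "type": "agent-v2",
        "model": app_config.get("model", {}),           # model JSON -> ModelConfig
        "prompt_template": app_config.get("pre_prompt", ""),
        "tools": agent_mode.get("tools", []),           # agent_mode.tools -> ToolMetadata[]
        "agent_strategy": agent_mode.get("strategy", "function_call"),
        "context": app_config.get("dataset_configs", {}),
        "vision": app_config.get("file_upload", {}),    # file_upload -> vision
    }
    return {
        "nodes": [
            {"id": "start", "data": {"type": "start"}},
            {"id": "agent", "data": agent_data},
            {"id": "answer", "data": {"type": "answer"}},
        ],
        "edges": [
            {"source": "start", "target": "agent"},
            {"source": "agent", "target": "answer"},
        ],
    }
```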
2. AGENT_V2_REPLACES_LLM (default: off):
When enabled, DifyNodeFactory.create_node() transparently remaps
nodes with type="llm" to type="agent-v2" before class resolution.
Since AgentV2NodeData is a strict superset of LLMNodeData, the
mapping is lossless. With tools=[], Agent V2 behaves identically
to LLM Node.
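A minimal sketch of the remap step, assuming a dict-shaped node config (the flag handling and function name are illustrative, not the actual DifyNodeFactory code):

```python
# Hypothetical sketch of the transparent remap: before resolving the node
# class, rewrite type="llm" to type="agent-v2" when the flag is on.
AGENT_V2_REPLACES_LLM = True  # in Dify this would come from configuration

def remap_node_type(node_config: dict) -> dict:
    data = dict(node_config.get("data", {}))
    if AGENT_V2_REPLACES_LLM and data.get("type") == "llm":
        data["type"] = "agent-v2"
        # tools=[] makes Agent V2 behave identically to the LLM Node
        data.setdefault("tools", [])
    return {**node_config, "data": data}
```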
Both flags default to False for safety; turning a flag off is an instant rollback.
46 existing tests pass. Flask starts successfully.
Made-with: Cursor
Replace the hardcoded FunctionCallAgentRunner / CotChatAgentRunner /
CotCompletionAgentRunner selection in AgentChatAppRunner with the new
AgentAppRunner class that uses StrategyFactory from Phase 1.
Before: AgentChatAppRunner manually selects FC/CoT runner class based on
model features and LLM mode, then instantiates it directly.
After: AgentChatAppRunner instantiates AgentAppRunner (from sandbox branch),
which internally uses StrategyFactory.create_strategy() to auto-select
the right strategy, and uses ToolInvokeHook for proper agent_invoke
with file handling and thought persistence.
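The auto-selection that replaces the manual runner choice can be sketched roughly like this (method signature and strategy names are assumptions for illustration):

```python
# Illustrative sketch of the StrategyFactory selection AgentAppRunner delegates
# to: function-calling when the model supports tool calls, otherwise
# chain-of-thought, split by LLM mode.
class StrategyFactory:
    @staticmethod
    def create_strategy(supports_function_call: bool, llm_mode: str) -> str:
        if supports_function_call:
            return "function_call"
        return "cot_chat" if llm_mode == "chat" else "cot_completion"
```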
This unifies the agent execution engine: both the new Agent V2 workflow
node and the legacy agent-chat app now use the same StrategyFactory
and AgentPattern implementations.
Also fix: the command and file_upload nodes now use string node_type values
instead of BuiltinNodeTypes.COMMAND/FILE_UPLOAD, which are missing from the
current graphon version.
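One way to express that fallback defensively; a sketch with a stub enum standing in for an older graphon (the helper and stub are hypothetical):

```python
# Sketch of the compatibility fix: prefer the enum member when the installed
# graphon provides it, otherwise fall back to the plain string node_type.
def node_type_name(enum_cls, member: str, fallback: str) -> str:
    return getattr(enum_cls, member, fallback)

class PartialBuiltinNodeTypes:  # stub for a graphon version without COMMAND
    LLM = "llm"

command_type = node_type_name(PartialBuiltinNodeTypes, "COMMAND", "command")
```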
46 tests pass. Flask starts successfully.
Made-with: Cursor
Integrate the ported sandbox system with Agent V2 node:
- Add DIFY_SANDBOX_CONTEXT_KEY to app_invoke_entities for passing
sandbox through run_context without modifying graphon
- DifyNodeFactory._resolve_sandbox() extracts sandbox from run_context
and passes it to AgentV2Node constructor
- AgentV2Node accepts optional sandbox parameter
- AgentV2ToolManager supports dual execution paths:
- _invoke_tool_directly(): standard ToolEngine.generic_invoke (no sandbox)
- _invoke_tool_in_sandbox(): delegates to SandboxBashSession.run_tool()
which uses DifyCli to call back to Dify API from inside the sandbox
- Graceful fallback: if sandbox execution fails, logs a warning and returns
  an error message (does not crash the agent loop)
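The dual path plus fallback can be sketched as below, with a stub in place of the real SandboxBashSession (class and method names here are illustrative, not the actual AgentV2ToolManager API):

```python
# Hypothetical sketch of the dual tool-execution path: prefer the sandbox
# when one is attached, and degrade to an error string (never an exception)
# so the agent loop keeps running.
class ToolManagerSketch:
    def __init__(self, sandbox=None):
        self.sandbox = sandbox

    def invoke(self, tool_name: str, args: dict) -> str:
        if self.sandbox is None:
            return self._invoke_directly(tool_name, args)
        try:
            return self.sandbox.run_tool(tool_name, args)
        except Exception as e:  # graceful fallback: report, don't crash
            return f"tool {tool_name} failed in sandbox: {e}"

    def _invoke_directly(self, tool_name: str, args: dict) -> str:
        # stands in for ToolEngine.generic_invoke
        return f"direct:{tool_name}"
```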
To enable sandbox for an Agent workflow:
1. Create a Sandbox via SandboxBuilder
2. Add it to run_context under DIFY_SANDBOX_CONTEXT_KEY
3. Agent V2 nodes will automatically use the sandbox for tool execution
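The three steps above, condensed into a sketch; the context-key name follows the commit, while the builder stub and resolver function are stand-ins:

```python
# Hypothetical sketch of sandbox enablement via run_context.
DIFY_SANDBOX_CONTEXT_KEY = "dify_sandbox"

class SandboxBuilderStub:            # stand-in for SandboxBuilder
    def build(self):
        return {"session": "bash"}

run_context: dict = {}
run_context[DIFY_SANDBOX_CONTEXT_KEY] = SandboxBuilderStub().build()  # steps 1+2

def resolve_sandbox(ctx: dict):
    # step 3: roughly what DifyNodeFactory._resolve_sandbox() would extract
    return ctx.get(DIFY_SANDBOX_CONTEXT_KEY)
```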
46 existing tests still pass.
Made-with: Cursor
Fixes discovered during end-to-end testing of Agent workflow execution:
1. ModelManager instantiation: use ModelManager.for_tenant() instead of
ModelManager() which requires a ProviderManager argument
2. Variable template resolution: use VariableTemplateParser(template).format()
instead of non-existent resolve_template() static method
3. invoke_llm() signature: remove unsupported 'user' keyword argument
4. Event dispatch: remove ModelInvokeCompletedEvent from _run() yield
(graphon base Node._dispatch doesn't support it via singledispatch)
5. NodeRunResult metadata: use WorkflowNodeExecutionMetadataKey enum keys
(TOTAL_TOKENS, TOTAL_PRICE, CURRENCY) instead of arbitrary string keys
6. SSE topic mismatch: use AppMode.AGENT (not ADVANCED_CHAT) in
retrieve_events() so publisher and subscriber share the same channel
7. Celery task routing: add AppMode.AGENT to workflow_execute_task._run_app()
alongside ADVANCED_CHAT
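Fix 5 is the easiest to illustrate; a sketch assuming the enum values shown (only the member names come from the commit, the values and usage are illustrative):

```python
# Illustrative sketch of fix 5: key NodeRunResult metadata with an enum
# instead of arbitrary strings, so consumers can match on known keys.
from enum import Enum

class WorkflowNodeExecutionMetadataKey(str, Enum):
    TOTAL_TOKENS = "total_tokens"
    TOTAL_PRICE = "total_price"
    CURRENCY = "currency"

metadata = {
    WorkflowNodeExecutionMetadataKey.TOTAL_TOKENS: 128,
    WorkflowNodeExecutionMetadataKey.TOTAL_PRICE: "0.0012",
    WorkflowNodeExecutionMetadataKey.CURRENCY: "USD",
}
```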
All issues verified fixed: Agent V2 node successfully invokes LLM and
returns "Hello there!" through the full SSE streaming pipeline.
Made-with: Cursor
Add AppMode.AGENT to validate_features_structure() match case
alongside ADVANCED_CHAT, fixing 'Invalid app mode: agent' error
when creating Agent apps (which auto-generate a workflow draft).
Discovered during E2E testing of the full create -> draft -> publish flow.
Made-with: Cursor
Ensure new Agent apps (AppMode.AGENT) can access all workflow-related
APIs and Service API chat endpoints:
- Add AppMode.AGENT to 13 workflow controller mode checks
- Add AppMode.AGENT to 4 workflow_run controller mode checks
- Add AppMode.AGENT to workflow_draft_variable controller
- Add AppMode.AGENT to Service API chat, conversation, message endpoints
- Add AgentV2Node.get_default_config() with prompt templates and strategy defaults
- 46 unit tests all passing (8 new Phase 7 tests)
Old agent/agent-chat paths remain completely unchanged.
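For the new default-config entry point, a sketch of what AgentV2Node.get_default_config() might return; the exact keys and defaults are assumptions:

```python
# Hypothetical sketch: default config with prompt-template and strategy
# defaults so newly created Agent apps start with a working node.
class AgentV2NodeSketch:
    @classmethod
    def get_default_config(cls) -> dict:
        return {
            "type": "agent-v2",
            "config": {
                "agent_strategy": "function_call",
                "prompt_template": [
                    {"role": "system", "text": "You are a helpful assistant."}
                ],
                "tools": [],
                "max_iterations": 5,
            },
        }
```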
Made-with: Cursor