mirror of https://github.com/langgenius/dify.git

Merge branch 'main' into feat/hitl-frontend

commit d9982b8dc4

api/README.md (171 lines changed)
@@ -1,6 +1,6 @@
 # Dify Backend API

-## Usage
+## Setup and Run

 > [!IMPORTANT]
 >
@@ -8,48 +8,77 @@
 > [`uv`](https://docs.astral.sh/uv/) as the package manager
 > for Dify API backend service.

-1. Start the docker-compose stack
-
-   The backend require some middleware, including PostgreSQL, Redis, and Weaviate, which can be started together using `docker-compose`.
-
-   ```bash
-   cd ../docker
-   cp middleware.env.example middleware.env
-   # change the profile to mysql if you are not using postgres,change the profile to other vector database if you are not using weaviate
-   docker compose -f docker-compose.middleware.yaml --profile postgresql --profile weaviate -p dify up -d
-   cd ../api
-   ```
-
-1. Copy `.env.example` to `.env`
-
-   ```cli
-   cp .env.example .env
-   ```
-
-1. Generate a `SECRET_KEY` in the `.env` file.
-
-   bash for Linux
-
-   ```bash
-   sed -i "/^SECRET_KEY=/c\SECRET_KEY=$(openssl rand -base64 42)" .env
-   ```
-
-   bash for Mac
-
-   ```bash
-   secret_key=$(openssl rand -base64 42)
-   sed -i '' "/^SECRET_KEY=/c\\
-   SECRET_KEY=${secret_key}" .env
-   ```
-
-1. Create environment.
-
-   Dify API service uses [UV](https://docs.astral.sh/uv/) to manage dependencies.
-   First, you need to add the uv package manager, if you don't have it already.
+`uv` and `pnpm` are required to run the setup and development commands below.
+
+### Using scripts (recommended)
+
+The scripts resolve paths relative to their location, so you can run them from anywhere.
+
+1. Run setup (copies env files and installs dependencies).
+
+   ```bash
+   ./dev/setup
+   ```
+
+1. Review `api/.env`, `web/.env.local`, and `docker/middleware.env` values (see the `SECRET_KEY` note below).
+
+1. Start middleware (PostgreSQL/Redis/Weaviate).
+
+   ```bash
+   ./dev/start-docker-compose
+   ```
+
+> [!IMPORTANT]
+>
+> When the frontend and backend run on different subdomains, set COOKIE_DOMAIN to the site's top-level domain (e.g., `example.com`). The frontend and backend must be under the same top-level domain in order to share authentication cookies.
+
+1. Start backend (runs migrations first).
+
+   ```bash
+   ./dev/start-api
+   ```
+
+1. Start Dify [web](../web) service.
+
+   ```bash
+   ./dev/start-web
+   ```
+
+1. Set up your application by visiting `http://localhost:3000`.
+
+1. Optional: start the worker service (async tasks, runs from `api`).
+
+   ```bash
+   ./dev/start-worker
+   ```
+
+1. Optional: start Celery Beat (scheduled tasks).
+
+   ```bash
+   ./dev/start-beat
+   ```
+
+### Manual commands
+
+<details>
+<summary>Show manual setup and run steps</summary>
+
+These commands assume you start from the repository root.
+
+1. Start the docker-compose stack.
+
+   The backend requires middleware, including PostgreSQL, Redis, and Weaviate, which can be started together using `docker-compose`.
+
+   ```bash
+   cp docker/middleware.env.example docker/middleware.env
+   # Use mysql or another vector database profile if you are not using postgres/weaviate.
+   docker compose -f docker/docker-compose.middleware.yaml --profile postgresql --profile weaviate -p dify up -d
+   ```
+
+1. Copy env files.
+
+   ```bash
+   cp api/.env.example api/.env
+   cp web/.env.example web/.env.local
+   ```
+
+1. Install UV if needed.

    ```bash
    pip install uv
@@ -57,60 +86,96 @@
    brew install uv
    ```

-1. Install dependencies
+1. Install API dependencies.

    ```bash
-   uv sync --dev
+   cd api
+   uv sync --group dev
    ```

-1. Run migrate
-
-   Before the first launch, migrate the database to the latest version.
-
-   ```bash
-   uv run flask db upgrade
-   ```
-
-1. Start backend
+1. Install web dependencies.

    ```bash
-   uv run flask run --host 0.0.0.0 --port=5001 --debug
+   cd web
+   pnpm install
+   cd ..
    ```

-1. Start Dify [web](../web) service.
+1. Start backend (runs migrations first, in a new terminal).

-1. Setup your application by visiting `http://localhost:3000`.
+   ```bash
+   cd api
+   uv run flask db upgrade
+   uv run flask run --host 0.0.0.0 --port=5001 --debug
+   ```

-1. If you need to handle and debug the async tasks (e.g. dataset importing and documents indexing), please start the worker service.
+1. Start Dify [web](../web) service (in a new terminal).

    ```bash
-   uv run celery -A app.celery worker -P threads -c 2 --loglevel INFO -Q dataset,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention
+   cd web
+   pnpm dev:inspect
    ```

-   Additionally, if you want to debug the celery scheduled tasks, you can run the following command in another terminal to start the beat service:
+1. Set up your application by visiting `http://localhost:3000`.
+
+1. Optional: start the worker service (async tasks, in a new terminal).
+
+   ```bash
+   cd api
+   uv run celery -A app.celery worker -P threads -c 2 --loglevel INFO -Q dataset,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention
+   ```

-   ```bash
-   uv run celery -A app.celery beat
-   ```
+1. Optional: start Celery Beat (scheduled tasks, in a new terminal).
+
+   ```bash
+   cd api
+   uv run celery -A app.celery beat
+   ```
+
+</details>
+
+### Environment notes
+
+> [!IMPORTANT]
+>
+> When the frontend and backend run on different subdomains, set COOKIE_DOMAIN to the site's top-level domain (e.g., `example.com`). The frontend and backend must be under the same top-level domain in order to share authentication cookies.
+
+- Generate a `SECRET_KEY` in the `.env` file.
+
+  bash for Linux
+
+  ```bash
+  sed -i "/^SECRET_KEY=/c\\SECRET_KEY=$(openssl rand -base64 42)" .env
+  ```
+
+  bash for Mac
+
+  ```bash
+  secret_key=$(openssl rand -base64 42)
+  sed -i '' "/^SECRET_KEY=/c\\
+  SECRET_KEY=${secret_key}" .env
+  ```

 ## Testing

 1. Install dependencies for both the backend and the test environment

    ```bash
-   uv sync --dev
+   cd api
+   uv sync --group dev
    ```

 1. Run the tests locally with mocked system environment variables in `tool.pytest_env` section in `pyproject.toml`, more can check [Claude.md](../CLAUDE.md)

    ```bash
    cd api
    uv run pytest                          # Run all tests
    uv run pytest tests/unit_tests/        # Unit tests only
    uv run pytest tests/integration_tests/ # Integration tests

    # Code quality
-   ../dev/reformat              # Run all formatters and linters
-   uv run ruff check --fix ./   # Fix linting issues
-   uv run ruff format ./        # Format code
-   uv run basedpyright .        # Type checking
+   ./dev/reformat               # Run all formatters and linters
+   uv run ruff check --fix ./   # Fix linting issues
+   uv run ruff format ./        # Format code
+   uv run basedpyright .        # Type checking
    ```
@@ -81,6 +81,7 @@ def initialize_extensions(app: DifyApp):
        ext_commands,
        ext_compress,
        ext_database,
+       ext_fastopenapi,
        ext_forward_refs,
        ext_hosting_provider,
        ext_import_modules,
@@ -128,6 +129,7 @@ def initialize_extensions(app: DifyApp):
        ext_proxy_fix,
        ext_blueprints,
        ext_commands,
+       ext_fastopenapi,
        ext_otel,
        ext_request_logging,
        ext_session_factory,
@@ -82,13 +82,13 @@ class ProviderNotSupportSpeechToTextError(BaseHTTPException):
 class DraftWorkflowNotExist(BaseHTTPException):
     error_code = "draft_workflow_not_exist"
     description = "Draft workflow need to be initialized."
-    code = 400
+    code = 404


 class DraftWorkflowNotSync(BaseHTTPException):
     error_code = "draft_workflow_not_sync"
     description = "Workflow graph might have been modified, please refresh and resubmit."
-    code = 400
+    code = 409


 class TracingConfigNotExist(BaseHTTPException):
@@ -470,7 +470,7 @@ class AdvancedChatDraftRunLoopNodeApi(Resource):
         Run draft workflow loop node
         """
         current_user, _ = current_account_with_tenant()
-        args = LoopNodeRunPayload.model_validate(console_ns.payload or {}).model_dump(exclude_none=True)
+        args = LoopNodeRunPayload.model_validate(console_ns.payload or {})

         try:
             response = AppGenerateService.generate_single_loop(
@@ -508,7 +508,7 @@ class WorkflowDraftRunLoopNodeApi(Resource):
         Run draft workflow loop node
         """
         current_user, _ = current_account_with_tenant()
-        args = LoopNodeRunPayload.model_validate(console_ns.payload or {}).model_dump(exclude_none=True)
+        args = LoopNodeRunPayload.model_validate(console_ns.payload or {})

         try:
             response = AppGenerateService.generate_single_loop(
@@ -999,6 +999,7 @@ class DraftWorkflowTriggerRunApi(Resource):
         if not event:
             return jsonable_encoder({"status": "waiting", "retry_in": LISTENING_RETRY_IN})
         workflow_args = dict(event.workflow_args)
+        workflow_args[SKIP_PREPARE_USER_INPUTS_KEY] = True

         return helper.compact_generate_response(
             AppGenerateService.generate(
@@ -1147,6 +1148,7 @@ class DraftWorkflowTriggerRunAllApi(Resource):

         try:
             workflow_args = dict(trigger_debug_event.workflow_args)
+            workflow_args[SKIP_PREPARE_USER_INPUTS_KEY] = True
             response = AppGenerateService.generate(
                 app_model=app_model,
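Editor's note: a minimal, self-contained sketch of the controller change above — the validated Pydantic model is now handed to the service directly instead of being dumped back to a dict. The payload shape here is a simplified stand-in for the real `LoopNodeRunPayload`, not the actual class.

```python
from pydantic import BaseModel


class LoopPayloadSketch(BaseModel):
    # Simplified stand-in for LoopNodeRunPayload (hypothetical shape).
    inputs: dict[str, object] | None = None


payload = LoopPayloadSketch.model_validate({"inputs": {"x": 1}})

# Before: the model was immediately dumped back to a plain dict,
# so downstream code had to use args.get("inputs").
legacy_args = payload.model_dump(exclude_none=True)
assert legacy_args.get("inputs") == {"x": 1}

# After: the typed model is passed through unchanged, enabling
# attribute access (args.inputs) and static type checking.
assert payload.inputs == {"x": 1}
```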
@@ -1,17 +1,17 @@
-from flask_restx import Resource, fields
+from pydantic import BaseModel, Field

-from . import console_ns
+from controllers.fastopenapi import console_router


-@console_ns.route("/ping")
-class PingApi(Resource):
-    @console_ns.doc("health_check")
-    @console_ns.doc(description="Health check endpoint for connection testing")
-    @console_ns.response(
-        200,
-        "Success",
-        console_ns.model("PingResponse", {"result": fields.String(description="Health check result", example="pong")}),
-    )
-    def get(self):
-        """Health check endpoint for connection testing"""
-        return {"result": "pong"}
+class PingResponse(BaseModel):
+    result: str = Field(description="Health check result", examples=["pong"])
+
+
+@console_router.get(
+    "/ping",
+    response_model=PingResponse,
+    tags=["console"],
+)
+def ping() -> PingResponse:
+    """Health check endpoint for connection testing."""
+    return PingResponse(result="pong")
@@ -0,0 +1,3 @@
+from fastopenapi.routers import FlaskRouter
+
+console_router = FlaskRouter()
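Editor's note: a hedged sketch of how the two files above fit together, using only the fastopenapi calls that appear in this diff (`FlaskRouter`, `.get`, `include_router`). The `/status` endpoint and all names below are illustrative, not part of the commit.

```python
from flask import Flask
from fastopenapi.routers import FlaskRouter
from pydantic import BaseModel


class StatusResponse(BaseModel):
    status: str


api_router = FlaskRouter()


@api_router.get("/status", response_model=StatusResponse, tags=["console"])
def status() -> StatusResponse:
    # Route functions return Pydantic models; the router serializes them.
    return StatusResponse(status="ok")


app = Flask(__name__)
root = FlaskRouter(app=app)
root.include_router(api_router, prefix="/console/api")
# GET /console/api/status -> {"status": "ok"}
```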
@@ -17,5 +17,15 @@ class SystemFeatureApi(Resource):

         Returns:
             dict: System feature configuration object
+
+        This endpoint is akin to the `SystemFeatureApi` endpoint in api/controllers/console/feature.py,
+        except it is intended for use by the web app, instead of the console dashboard.
+
+        NOTE: This endpoint is unauthenticated by design, as it provides system features
+        data required for webapp initialization.
+
+        Authentication would create circular dependency (can't authenticate without webapp loading).
+
+        Only non-sensitive configuration data should be returned by this endpoint.
         """
         return FeatureService.get_system_features().model_dump()
@@ -1,9 +1,11 @@
 from flask import make_response, request
-from flask_restx import Resource, reqparse
+from flask_restx import Resource
 from jwt import InvalidTokenError
+from pydantic import BaseModel, Field, field_validator

 import services
 from configs import dify_config
+from controllers.common.schema import register_schema_models
 from controllers.console.auth.error import (
     AuthenticationFailedError,
     EmailCodeError,
@@ -18,7 +20,7 @@ from controllers.console.wraps import (
 )
 from controllers.web import web_ns
 from controllers.web.wraps import decode_jwt_token
-from libs.helper import email
+from libs.helper import EmailStr
 from libs.passport import PassportService
 from libs.password import valid_password
 from libs.token import (
@@ -30,10 +32,35 @@ from services.app_service import AppService
 from services.webapp_auth_service import WebAppAuthService


+class LoginPayload(BaseModel):
+    email: EmailStr
+    password: str
+
+    @field_validator("password")
+    @classmethod
+    def validate_password(cls, value: str) -> str:
+        return valid_password(value)
+
+
+class EmailCodeLoginSendPayload(BaseModel):
+    email: EmailStr
+    language: str | None = None
+
+
+class EmailCodeLoginVerifyPayload(BaseModel):
+    email: EmailStr
+    code: str
+    token: str = Field(min_length=1)
+
+
+register_schema_models(web_ns, LoginPayload, EmailCodeLoginSendPayload, EmailCodeLoginVerifyPayload)
+
+
 @web_ns.route("/login")
 class LoginApi(Resource):
     """Resource for web app email/password login."""

+    @web_ns.expect(web_ns.models[LoginPayload.__name__])
     @setup_required
     @only_edition_enterprise
     @web_ns.doc("web_app_login")
@@ -50,15 +77,10 @@ class LoginApi(Resource):
     @decrypt_password_field
     def post(self):
         """Authenticate user and login."""
-        parser = (
-            reqparse.RequestParser()
-            .add_argument("email", type=email, required=True, location="json")
-            .add_argument("password", type=valid_password, required=True, location="json")
-        )
-        args = parser.parse_args()
+        payload = LoginPayload.model_validate(web_ns.payload or {})

         try:
-            account = WebAppAuthService.authenticate(args["email"], args["password"])
+            account = WebAppAuthService.authenticate(payload.email, payload.password)
         except services.errors.account.AccountLoginError:
             raise AccountBannedError()
         except services.errors.account.AccountPasswordError:
@@ -145,6 +167,7 @@ class EmailCodeLoginSendEmailApi(Resource):
     @only_edition_enterprise
     @web_ns.doc("send_email_code_login")
     @web_ns.doc(description="Send email verification code for login")
+    @web_ns.expect(web_ns.models[EmailCodeLoginSendPayload.__name__])
     @web_ns.doc(
         responses={
             200: "Email code sent successfully",
@@ -153,19 +176,14 @@ class EmailCodeLoginSendEmailApi(Resource):
         }
     )
     def post(self):
-        parser = (
-            reqparse.RequestParser()
-            .add_argument("email", type=email, required=True, location="json")
-            .add_argument("language", type=str, required=False, location="json")
-        )
-        args = parser.parse_args()
+        payload = EmailCodeLoginSendPayload.model_validate(web_ns.payload or {})

-        if args["language"] is not None and args["language"] == "zh-Hans":
+        if payload.language == "zh-Hans":
             language = "zh-Hans"
         else:
             language = "en-US"

-        account = WebAppAuthService.get_user_through_email(args["email"])
+        account = WebAppAuthService.get_user_through_email(payload.email)
         if account is None:
             raise AuthenticationFailedError()
         else:
@@ -179,6 +197,7 @@ class EmailCodeLoginApi(Resource):
     @only_edition_enterprise
     @web_ns.doc("verify_email_code_login")
     @web_ns.doc(description="Verify email code and complete login")
+    @web_ns.expect(web_ns.models[EmailCodeLoginVerifyPayload.__name__])
     @web_ns.doc(
         responses={
             200: "Email code verified and login successful",
@@ -189,17 +208,11 @@ class EmailCodeLoginApi(Resource):
     )
     @decrypt_code_field
     def post(self):
-        parser = (
-            reqparse.RequestParser()
-            .add_argument("email", type=str, required=True, location="json")
-            .add_argument("code", type=str, required=True, location="json")
-            .add_argument("token", type=str, required=True, location="json")
-        )
-        args = parser.parse_args()
+        payload = EmailCodeLoginVerifyPayload.model_validate(web_ns.payload or {})

-        user_email = args["email"].lower()
+        user_email = payload.email.lower()

-        token_data = WebAppAuthService.get_email_code_login_data(args["token"])
+        token_data = WebAppAuthService.get_email_code_login_data(payload.token)
         if token_data is None:
             raise InvalidTokenError()

@@ -210,10 +223,10 @@ class EmailCodeLoginApi(Resource):
         if normalized_token_email != user_email:
             raise InvalidEmailError()

-        if token_data["code"] != args["code"]:
+        if token_data["code"] != payload.code:
             raise EmailCodeError()

-        WebAppAuthService.revoke_email_code_login_token(args["token"])
+        WebAppAuthService.revoke_email_code_login_token(payload.token)
         account = WebAppAuthService.get_user_through_email(token_email)
         if not account:
             raise AuthenticationFailedError()
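Editor's note: a small sketch of what the Pydantic migration above buys — one schema-level validation pass instead of reqparse's per-argument checks. Plain `str` stands in for the project's `EmailStr`, and the password rule is a hypothetical simplification of `libs.password.valid_password`.

```python
from pydantic import BaseModel, ValidationError, field_validator


class LoginPayloadSketch(BaseModel):
    email: str
    password: str

    @field_validator("password")
    @classmethod
    def validate_password(cls, value: str) -> str:
        if len(value) < 8:
            raise ValueError("password too short")
        return value


try:
    LoginPayloadSketch.model_validate({"email": "user@example.com", "password": "x"})
except ValidationError as exc:
    # All field errors are collected into a single ValidationError.
    print(exc.error_count(), "validation error(s)")
```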
@@ -1,8 +1,10 @@
 import logging
+from typing import Any

-from flask_restx import reqparse
+from pydantic import BaseModel, Field
 from werkzeug.exceptions import InternalServerError

+from controllers.common.schema import register_schema_models
 from controllers.web import web_ns
 from controllers.web.error import (
     CompletionRequestError,
@@ -27,19 +29,22 @@ from models.model import App, AppMode, EndUser
 from services.app_generate_service import AppGenerateService
 from services.errors.llm import InvokeRateLimitError


+class WorkflowRunPayload(BaseModel):
+    inputs: dict[str, Any] = Field(description="Input variables for the workflow")
+    files: list[dict[str, Any]] | None = Field(default=None, description="Files to be processed by the workflow")
+
+
 logger = logging.getLogger(__name__)

+register_schema_models(web_ns, WorkflowRunPayload)
+

 @web_ns.route("/workflows/run")
 class WorkflowRunApi(WebApiResource):
     @web_ns.doc("Run Workflow")
     @web_ns.doc(description="Execute a workflow with provided inputs and files.")
-    @web_ns.doc(
-        params={
-            "inputs": {"description": "Input variables for the workflow", "type": "object", "required": True},
-            "files": {"description": "Files to be processed by the workflow", "type": "array", "required": False},
-        }
-    )
+    @web_ns.expect(web_ns.models[WorkflowRunPayload.__name__])
     @web_ns.doc(
         responses={
             200: "Success",
@@ -58,12 +63,8 @@ class WorkflowRunApi(WebApiResource):
         if app_mode != AppMode.WORKFLOW:
             raise NotWorkflowAppError()

-        parser = (
-            reqparse.RequestParser()
-            .add_argument("inputs", type=dict, required=True, nullable=False, location="json")
-            .add_argument("files", type=list, required=False, location="json")
-        )
-        args = parser.parse_args()
+        payload = WorkflowRunPayload.model_validate(web_ns.payload or {})
+        args = payload.model_dump(exclude_none=True)

         try:
             response = AppGenerateService.generate(
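Editor's note: here the validated model is dumped back to a dict with `exclude_none=True`, so the service still receives the same shape the old reqparse parser produced. A runnable sketch, with a simplified stand-in model:

```python
from typing import Any

from pydantic import BaseModel, Field


class WorkflowRunPayloadSketch(BaseModel):
    inputs: dict[str, Any] = Field(description="Input variables for the workflow")
    files: list[dict[str, Any]] | None = Field(default=None)


payload = WorkflowRunPayloadSketch.model_validate({"inputs": {"q": "hi"}})

# exclude_none drops the absent `files` key, matching the old parsed-args dict.
assert payload.model_dump(exclude_none=True) == {"inputs": {"q": "hi"}}
```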
@@ -1,9 +1,11 @@
+from __future__ import annotations
+
 import contextvars
 import logging
 import threading
 import uuid
 from collections.abc import Generator, Mapping
-from typing import Any, Literal, Union, overload
+from typing import TYPE_CHECKING, Any, Literal, Union, overload

 from flask import Flask, current_app
 from pydantic import ValidationError
@@ -13,6 +15,9 @@ from sqlalchemy.orm import Session, sessionmaker
 import contexts
 from configs import dify_config
 from constants import UUID_NIL
+
+if TYPE_CHECKING:
+    from controllers.console.app.workflow import LoopNodeRunPayload
 from core.app.app_config.features.file_upload.manager import FileUploadConfigManager
 from core.app.apps.advanced_chat.app_config_manager import AdvancedChatAppConfigManager
 from core.app.apps.advanced_chat.app_runner import AdvancedChatAppRunner
@@ -304,7 +309,7 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
         workflow: Workflow,
         node_id: str,
         user: Account | EndUser,
-        args: Mapping,
+        args: LoopNodeRunPayload,
         streaming: bool = True,
     ) -> Mapping[str, Any] | Generator[str | Mapping[str, Any], Any, None]:
         """
@@ -320,7 +325,7 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
         if not node_id:
             raise ValueError("node_id is required")

-        if args.get("inputs") is None:
+        if args.inputs is None:
             raise ValueError("inputs is required")

         # convert to app config
@@ -338,7 +343,7 @@ class AdvancedChatAppGenerator(MessageBasedAppGenerator):
             stream=streaming,
             invoke_from=InvokeFrom.DEBUGGER,
             extras={"auto_generate_conversation_name": False},
-            single_loop_run=AdvancedChatAppGenerateEntity.SingleLoopRunEntity(node_id=node_id, inputs=args["inputs"]),
+            single_loop_run=AdvancedChatAppGenerateEntity.SingleLoopRunEntity(node_id=node_id, inputs=args.inputs),
         )
         contexts.plugin_tool_providers.set({})
         contexts.plugin_tool_providers_lock.set(threading.Lock())
@@ -1,9 +1,11 @@
+from __future__ import annotations
+
 import contextvars
 import logging
 import threading
 import uuid
 from collections.abc import Generator, Mapping, Sequence
-from typing import Any, Literal, Union, overload
+from typing import TYPE_CHECKING, Any, Literal, Union, overload

 from flask import Flask, current_app
 from pydantic import ValidationError
@@ -40,6 +42,9 @@ from models import Account, App, EndUser, Workflow, WorkflowNodeExecutionTrigger
 from models.enums import WorkflowRunTriggeredFrom
 from services.workflow_draft_variable_service import DraftVarLoader, WorkflowDraftVariableService

+if TYPE_CHECKING:
+    from controllers.console.app.workflow import LoopNodeRunPayload
+
 SKIP_PREPARE_USER_INPUTS_KEY = "_skip_prepare_user_inputs"

 logger = logging.getLogger(__name__)
@@ -381,7 +386,7 @@ class WorkflowAppGenerator(BaseAppGenerator):
         workflow: Workflow,
         node_id: str,
         user: Account | EndUser,
-        args: Mapping[str, Any],
+        args: LoopNodeRunPayload,
         streaming: bool = True,
     ) -> Mapping[str, Any] | Generator[str | Mapping[str, Any], None, None]:
         """
@@ -397,7 +402,7 @@ class WorkflowAppGenerator(BaseAppGenerator):
         if not node_id:
             raise ValueError("node_id is required")

-        if args.get("inputs") is None:
+        if args.inputs is None:
             raise ValueError("inputs is required")

         # convert to app config
@@ -413,7 +418,7 @@ class WorkflowAppGenerator(BaseAppGenerator):
             stream=streaming,
             invoke_from=InvokeFrom.DEBUGGER,
             extras={"auto_generate_conversation_name": False},
-            single_loop_run=WorkflowAppGenerateEntity.SingleLoopRunEntity(node_id=node_id, inputs=args["inputs"]),
+            single_loop_run=WorkflowAppGenerateEntity.SingleLoopRunEntity(node_id=node_id, inputs=args.inputs or {}),
             workflow_execution_id=str(uuid.uuid4()),
         )
         contexts.plugin_tool_providers.set({})
@@ -166,18 +166,22 @@ class WorkflowBasedAppRunner:

         # Determine which type of single node execution and get graph/variable_pool
         if single_iteration_run:
-            graph, variable_pool = self._get_graph_and_variable_pool_of_single_iteration(
+            graph, variable_pool = self._get_graph_and_variable_pool_for_single_node_run(
                 workflow=workflow,
                 node_id=single_iteration_run.node_id,
                 user_inputs=dict(single_iteration_run.inputs),
                 graph_runtime_state=graph_runtime_state,
+                node_type_filter_key="iteration_id",
+                node_type_label="iteration",
             )
         elif single_loop_run:
-            graph, variable_pool = self._get_graph_and_variable_pool_of_single_loop(
+            graph, variable_pool = self._get_graph_and_variable_pool_for_single_node_run(
                 workflow=workflow,
                 node_id=single_loop_run.node_id,
                 user_inputs=dict(single_loop_run.inputs),
                 graph_runtime_state=graph_runtime_state,
+                node_type_filter_key="loop_id",
+                node_type_label="loop",
             )
         else:
             raise ValueError("Neither single_iteration_run nor single_loop_run is specified")
@@ -314,44 +318,6 @@ class WorkflowBasedAppRunner:

         return graph, variable_pool

-    def _get_graph_and_variable_pool_of_single_iteration(
-        self,
-        workflow: Workflow,
-        node_id: str,
-        user_inputs: dict[str, Any],
-        graph_runtime_state: GraphRuntimeState,
-    ) -> tuple[Graph, VariablePool]:
-        """
-        Get variable pool of single iteration
-        """
-        return self._get_graph_and_variable_pool_for_single_node_run(
-            workflow=workflow,
-            node_id=node_id,
-            user_inputs=user_inputs,
-            graph_runtime_state=graph_runtime_state,
-            node_type_filter_key="iteration_id",
-            node_type_label="iteration",
-        )
-
-    def _get_graph_and_variable_pool_of_single_loop(
-        self,
-        workflow: Workflow,
-        node_id: str,
-        user_inputs: dict[str, Any],
-        graph_runtime_state: GraphRuntimeState,
-    ) -> tuple[Graph, VariablePool]:
-        """
-        Get variable pool of single loop
-        """
-        return self._get_graph_and_variable_pool_for_single_node_run(
-            workflow=workflow,
-            node_id=node_id,
-            user_inputs=user_inputs,
-            graph_runtime_state=graph_runtime_state,
-            node_type_filter_key="loop_id",
-            node_type_label="loop",
-        )
-
     def _handle_event(self, workflow_entry: WorkflowEntry, event: GraphEngineEvent):
         """
         Handle event
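Editor's note: the refactor above collapses two near-identical wrapper methods into direct calls of one parameterized helper. A toy sketch of the pattern (illustrative names, not the Dify signatures):

```python
from typing import Any


def _single_node_run_sketch(
    node_id: str,
    user_inputs: dict[str, Any],
    *,
    node_type_filter_key: str,
    node_type_label: str,
) -> str:
    # The filter key and label are the only two things that differed
    # between the old iteration and loop variants.
    return f"run {node_type_label} node {node_id} (filter={node_type_filter_key})"


# The two call sites now differ only in their keyword arguments:
print(_single_node_run_sketch("n1", {}, node_type_filter_key="iteration_id", node_type_label="iteration"))
print(_single_node_run_sketch("n2", {}, node_type_filter_key="loop_id", node_type_label="loop"))
```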
@@ -154,7 +154,7 @@ class IrisConnectionPool:
                 # Add to cache to skip future checks
                 self._schemas_initialized.add(schema)

-        except Exception as e:
+        except Exception:
             conn.rollback()
             logger.exception("Failed to ensure schema %s exists", schema)
             raise
@@ -177,6 +177,9 @@ class IrisConnectionPool:
 class IrisVector(BaseVector):
     """IRIS vector database implementation using native VECTOR type and HNSW indexing."""

+    # Fallback score for full-text search when Rank function unavailable or TEXT_INDEX disabled
+    _FULL_TEXT_FALLBACK_SCORE = 0.5
+
     def __init__(self, collection_name: str, config: IrisVectorConfig) -> None:
         super().__init__(collection_name)
         self.config = config
@@ -272,41 +275,131 @@ class IrisVector(BaseVector):
         return docs

     def search_by_full_text(self, query: str, **kwargs: Any) -> list[Document]:
-        """Search documents by full-text using iFind index or fallback to LIKE search."""
+        """Search documents by full-text using iFind index with BM25 relevance scoring.
+
+        When IRIS_TEXT_INDEX is enabled, this method uses the auto-generated Rank
+        function from %iFind.Index.Basic to calculate BM25 relevance scores. The Rank
+        function is automatically created with naming: {schema}.{table_name}_{index}Rank
+
+        Args:
+            query: Search query string
+            **kwargs: Optional parameters including top_k, document_ids_filter
+
+        Returns:
+            List of Document objects with relevance scores in metadata["score"]
+        """
         top_k = kwargs.get("top_k", 5)
         document_ids_filter = kwargs.get("document_ids_filter")

         with self._get_cursor() as cursor:
             if self.config.IRIS_TEXT_INDEX:
-                # Use iFind full-text search with index
+                # Use iFind full-text search with auto-generated Rank function
                 text_index_name = f"idx_{self.table_name}_text"
+                # IRIS removes underscores from function names
+                table_no_underscore = self.table_name.replace("_", "")
+                index_no_underscore = text_index_name.replace("_", "")
+                rank_function = f"{self.schema}.{table_no_underscore}_{index_no_underscore}Rank"
+
+                # Build WHERE clause with document ID filter if provided
+                where_clause = f"WHERE %ID %FIND search_index({text_index_name}, ?)"
+                # First param for Rank function, second for FIND
+                params = [query, query]
+
+                if document_ids_filter:
+                    # Add document ID filter
+                    placeholders = ",".join("?" * len(document_ids_filter))
+                    where_clause += f" AND JSON_VALUE(meta, '$.document_id') IN ({placeholders})"
+                    params.extend(document_ids_filter)
+
                 sql = f"""
-                    SELECT TOP {top_k} id, text, meta
+                    SELECT TOP {top_k}
+                        id,
+                        text,
+                        meta,
+                        {rank_function}(%ID, ?) AS score
                     FROM {self.schema}.{self.table_name}
-                    WHERE %ID %FIND search_index({text_index_name}, ?)
+                    {where_clause}
+                    ORDER BY score DESC
                 """
-                cursor.execute(sql, (query,))
+
+                logger.debug(
+                    "iFind search: query='%s', index='%s', rank='%s'",
+                    query,
+                    text_index_name,
+                    rank_function,
+                )
+
+                try:
+                    cursor.execute(sql, params)
+                except Exception:  # pylint: disable=broad-exception-caught
+                    # Fallback to query without Rank function if it fails
+                    logger.warning(
+                        "Rank function '%s' failed, using fixed score",
+                        rank_function,
+                        exc_info=True,
+                    )
+                    sql_fallback = f"""
+                        SELECT TOP {top_k} id, text, meta, {self._FULL_TEXT_FALLBACK_SCORE} AS score
+                        FROM {self.schema}.{self.table_name}
+                        {where_clause}
+                    """
+                    # Skip first param (for Rank function)
+                    cursor.execute(sql_fallback, params[1:])
             else:
-                # Fallback to LIKE search (inefficient for large datasets)
-                # Escape special characters for LIKE clause to prevent SQL injection
-                from libs.helper import escape_like_pattern
+                # Fallback to LIKE search (IRIS_TEXT_INDEX disabled)
+                from libs.helper import (  # pylint: disable=import-outside-toplevel
+                    escape_like_pattern,
+                )

                 escaped_query = escape_like_pattern(query)
                 query_pattern = f"%{escaped_query}%"
+
+                # Build WHERE clause with document ID filter if provided
+                where_clause = "WHERE text LIKE ? ESCAPE '\\\\'"
+                params = [query_pattern]
+
+                if document_ids_filter:
+                    placeholders = ",".join("?" * len(document_ids_filter))
+                    where_clause += f" AND JSON_VALUE(meta, '$.document_id') IN ({placeholders})"
+                    params.extend(document_ids_filter)
+
                 sql = f"""
-                    SELECT TOP {top_k} id, text, meta
+                    SELECT TOP {top_k} id, text, meta, {self._FULL_TEXT_FALLBACK_SCORE} AS score
                     FROM {self.schema}.{self.table_name}
-                    WHERE text LIKE ? ESCAPE '\\'
+                    {where_clause}
                     ORDER BY LENGTH(text) ASC
                 """
-                cursor.execute(sql, (query_pattern,))
+
+                logger.debug(
+                    "LIKE fallback (TEXT_INDEX disabled): query='%s'",
+                    query_pattern,
+                )
+                cursor.execute(sql, params)

             docs = []
             for row in cursor.fetchall():
-                if len(row) >= 3:
-                    metadata = json.loads(row[2]) if row[2] else {}
-                    docs.append(Document(page_content=row[1], metadata=metadata))
+                # Expecting 4 columns: id, text, meta, score
+                if len(row) >= 4:
+                    text_content = row[1]
+                    meta_str = row[2]
+                    score_value = row[3]
+
+                    metadata = json.loads(meta_str) if meta_str else {}
+                    # Add score to metadata for hybrid search compatibility
+                    score = float(score_value) if score_value is not None else 0.0
+                    metadata["score"] = score
+
+                    docs.append(Document(page_content=text_content, metadata=metadata))
+
+            logger.info(
+                "Full-text search completed: query='%s', results=%d/%d",
+                query,
+                len(docs),
+                top_k,
+            )

             if not docs:
-                logger.info("Full-text search for '%s' returned no results", query)
+                logger.warning("Full-text search for '%s' returned no results", query)

             return docs

@@ -370,7 +463,11 @@ class IrisVector(BaseVector):
             AS %iFind.Index.Basic
             (LANGUAGE = '{language}', LOWER = 1, INDEXOPTION = 0)
         """
-        logger.info("Creating text index: %s with language: %s", text_index_name, language)
+        logger.info(
+            "Creating text index: %s with language: %s",
+            text_index_name,
+            language,
+        )
         logger.info("SQL for text index: %s", sql_text_index)
         cursor.execute(sql_text_index)
         logger.info("Text index created successfully: %s", text_index_name)
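Editor's note: the docstring above describes an underscore-stripping rule for IRIS's auto-generated Rank function. A runnable sketch of just that naming logic, with a hypothetical table name:

```python
def ifind_rank_function(schema: str, table_name: str) -> str:
    # Mirrors the naming logic in search_by_full_text: IRIS strips
    # underscores when it generates the Rank function for an iFind index.
    text_index_name = f"idx_{table_name}_text"
    table_no_underscore = table_name.replace("_", "")
    index_no_underscore = text_index_name.replace("_", "")
    return f"{schema}.{table_no_underscore}_{index_no_underscore}Rank"


# A hypothetical collection table named "embedding_abc":
assert (
    ifind_rank_function("SQLUser", "embedding_abc")
    == "SQLUser.embeddingabc_idxembeddingabctextRank"
)
```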
@@ -130,7 +130,7 @@ class ToolInvokeMessage(BaseModel):
         text: str

     class JsonMessage(BaseModel):
-        json_object: dict
+        json_object: dict | list
         suppress_output: bool = Field(default=False, description="Whether to suppress JSON output in result string")

     class BlobMessage(BaseModel):
@@ -144,7 +144,14 @@ class ToolInvokeMessage(BaseModel):
         end: bool = Field(..., description="Whether the chunk is the last chunk")

     class FileMessage(BaseModel):
-        pass
+        file_marker: str = Field(default="file_marker")
+
+        @model_validator(mode="before")
+        @classmethod
+        def validate_file_message(cls, values):
+            if isinstance(values, dict) and "file_marker" not in values:
+                raise ValueError("Invalid FileMessage: missing file_marker")
+            return values

     class VariableMessage(BaseModel):
         variable_name: str = Field(..., description="The name of the variable")
@@ -234,10 +241,22 @@ class ToolInvokeMessage(BaseModel):

     @field_validator("message", mode="before")
     @classmethod
-    def decode_blob_message(cls, v):
+    def decode_blob_message(cls, v, info: ValidationInfo):
+        # Handle blob decoding
         if isinstance(v, dict) and "blob" in v:
             with contextlib.suppress(Exception):
                 v["blob"] = base64.b64decode(v["blob"])
+
+        # Force correct message type based on type field
+        # Only wrap dict types to avoid wrapping already parsed Pydantic model objects
+        if info.data and isinstance(info.data, dict) and isinstance(v, dict):
+            msg_type = info.data.get("type")
+            if msg_type == cls.MessageType.JSON:
+                if "json_object" not in v:
+                    v = {"json_object": v}
+            elif msg_type == cls.MessageType.FILE:
+                v = {"file_marker": "file_marker"}
+
         return v

     @field_serializer("message")
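Editor's note: the validator above relies on Pydantic v2 field ordering — a `mode="before"` validator can read already-validated sibling fields through `info.data`. A minimal standalone sketch of that mechanism (illustrative class, not the real `ToolInvokeMessage`):

```python
from typing import Any

from pydantic import BaseModel, ValidationInfo, field_validator


class EnvelopeSketch(BaseModel):
    # `type` is declared (and validated) before `message`, so the
    # before-mode validator below can read it via info.data.
    type: str
    message: Any

    @field_validator("message", mode="before")
    @classmethod
    def wrap_json(cls, v: Any, info: ValidationInfo) -> Any:
        if info.data.get("type") == "json" and isinstance(v, dict) and "json_object" not in v:
            return {"json_object": v}
        return v


envelope = EnvelopeSketch(type="json", message={"answer": 42})
assert envelope.message == {"json_object": {"answer": 42}}
```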
@@ -494,7 +494,7 @@ class AgentNode(Node[AgentNodeData]):

         text = ""
         files: list[File] = []
-        json_list: list[dict] = []
+        json_list: list[dict | list] = []

         agent_logs: list[AgentLogEvent] = []
         agent_execution_metadata: Mapping[WorkflowNodeExecutionMetadataKey, Any] = {}
@@ -568,13 +568,18 @@ class AgentNode(Node[AgentNodeData]):
                 elif message.type == ToolInvokeMessage.MessageType.JSON:
                     assert isinstance(message.message, ToolInvokeMessage.JsonMessage)
                     if node_type == NodeType.AGENT:
-                        msg_metadata: dict[str, Any] = message.message.json_object.pop("execution_metadata", {})
-                        llm_usage = LLMUsage.from_metadata(cast(LLMUsageMetadata, msg_metadata))
-                        agent_execution_metadata = {
-                            WorkflowNodeExecutionMetadataKey(key): value
-                            for key, value in msg_metadata.items()
-                            if key in WorkflowNodeExecutionMetadataKey.__members__.values()
-                        }
+                        if isinstance(message.message.json_object, dict):
+                            msg_metadata: dict[str, Any] = message.message.json_object.pop("execution_metadata", {})
+                            llm_usage = LLMUsage.from_metadata(cast(LLMUsageMetadata, msg_metadata))
+                            agent_execution_metadata = {
+                                WorkflowNodeExecutionMetadataKey(key): value
+                                for key, value in msg_metadata.items()
+                                if key in WorkflowNodeExecutionMetadataKey.__members__.values()
+                            }
+                        else:
+                            msg_metadata = {}
+                            llm_usage = LLMUsage.empty_usage()
+                            agent_execution_metadata = {}
                     if message.message.json_object:
                         json_list.append(message.message.json_object)
                 elif message.type == ToolInvokeMessage.MessageType.LINK:
@@ -683,7 +688,7 @@ class AgentNode(Node[AgentNodeData]):
             yield agent_log

         # Add agent_logs to outputs['json'] to ensure frontend can access thinking process
-        json_output: list[dict[str, Any]] = []
+        json_output: list[dict[str, Any] | list[Any]] = []

         # Step 1: append each agent log as its own dict.
         if agent_logs:
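Editor's note: since `json_object` may now be a `dict` or a `list`, metadata can only be popped after an `isinstance` check. A runnable sketch of that narrowing, with hypothetical helper and key names:

```python
from typing import Any


def split_metadata(
    json_object: dict[str, Any] | list[Any],
) -> tuple[dict[str, Any], dict[str, Any] | list[Any]]:
    # Only dicts can carry an "execution_metadata" key; lists pass through
    # untouched with empty metadata, matching the else-branch above.
    if isinstance(json_object, dict):
        metadata = json_object.pop("execution_metadata", {})
        return metadata, json_object
    return {}, json_object


meta, rest = split_metadata({"execution_metadata": {"total_tokens": 3}, "answer": "ok"})
assert meta == {"total_tokens": 3} and rest == {"answer": "ok"}
assert split_metadata([1, 2, 3]) == ({}, [1, 2, 3])
```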
@@ -301,7 +301,7 @@ class DatasourceNode(Node[DatasourceNodeData]):

         text = ""
         files: list[File] = []
-        json: list[dict] = []
+        json: list[dict | list] = []

         variables: dict[str, Any] = {}
@@ -244,7 +244,7 @@ class ToolNode(Node[ToolNodeData]):

         text = ""
         files: list[File] = []
-        json: list[dict] = []
+        json: list[dict | list] = []

         variables: dict[str, Any] = {}
@@ -400,7 +400,7 @@ class ToolNode(Node[ToolNodeData]):
                     message.message.metadata = dict_metadata

         # Add agent_logs to outputs['json'] to ensure frontend can access thinking process
-        json_output: list[dict[str, Any]] = []
+        json_output: list[dict[str, Any] | list[Any]] = []

         # Step 2: normalize JSON into {"data": [...]}.change json to list[dict]
         if json:
@@ -0,0 +1,43 @@
+from fastopenapi.routers import FlaskRouter
+from flask_cors import CORS
+
+from configs import dify_config
+from controllers.fastopenapi import console_router
+from dify_app import DifyApp
+from extensions.ext_blueprints import AUTHENTICATED_HEADERS, EXPOSED_HEADERS
+
+DOCS_PREFIX = "/fastopenapi"
+
+
+def init_app(app: DifyApp) -> None:
+    docs_enabled = dify_config.SWAGGER_UI_ENABLED
+    docs_url = f"{DOCS_PREFIX}/docs" if docs_enabled else None
+    redoc_url = f"{DOCS_PREFIX}/redoc" if docs_enabled else None
+    openapi_url = f"{DOCS_PREFIX}/openapi.json" if docs_enabled else None
+
+    router = FlaskRouter(
+        app=app,
+        docs_url=docs_url,
+        redoc_url=redoc_url,
+        openapi_url=openapi_url,
+        openapi_version="3.0.0",
+        title="Dify API (FastOpenAPI PoC)",
+        version="1.0",
+        description="FastOpenAPI proof of concept for Dify API",
+    )
+
+    # Ensure route decorators are evaluated.
+    import controllers.console.ping as ping_module
+
+    _ = ping_module
+
+    router.include_router(console_router, prefix="/console/api")
+    CORS(
+        app,
+        resources={r"/console/api/*": {"origins": dify_config.CONSOLE_CORS_ALLOW_ORIGINS}},
+        supports_credentials=True,
+        allow_headers=list(AUTHENTICATED_HEADERS),
+        methods=["GET", "PUT", "POST", "DELETE", "OPTIONS", "PATCH"],
+        expose_headers=list(EXPOSED_HEADERS),
+    )
+    app.extensions["fastopenapi"] = router
@@ -315,40 +315,48 @@ class App(Base):
         return None


-class AppModelConfig(Base):
+class AppModelConfig(TypeBase):
     __tablename__ = "app_model_configs"
     __table_args__ = (sa.PrimaryKeyConstraint("id", name="app_model_config_pkey"), sa.Index("app_app_id_idx", "app_id"))

-    id = mapped_column(StringUUID, default=lambda: str(uuid4()))
-    app_id = mapped_column(StringUUID, nullable=False)
-    provider = mapped_column(String(255), nullable=True)
-    model_id = mapped_column(String(255), nullable=True)
-    configs = mapped_column(sa.JSON, nullable=True)
-    created_by = mapped_column(StringUUID, nullable=True)
-    created_at = mapped_column(sa.DateTime, nullable=False, server_default=func.current_timestamp())
-    updated_by = mapped_column(StringUUID, nullable=True)
-    updated_at = mapped_column(
-        sa.DateTime, nullable=False, server_default=func.current_timestamp(), onupdate=func.current_timestamp()
+    id: Mapped[str] = mapped_column(StringUUID, default=lambda: str(uuid4()), init=False)
+    app_id: Mapped[str] = mapped_column(StringUUID, nullable=False)
+    provider: Mapped[str | None] = mapped_column(String(255), nullable=True, default=None)
+    model_id: Mapped[str | None] = mapped_column(String(255), nullable=True, default=None)
+    configs: Mapped[Any | None] = mapped_column(sa.JSON, nullable=True, default=None)
+    created_by: Mapped[str | None] = mapped_column(StringUUID, nullable=True, default=None)
+    created_at: Mapped[datetime] = mapped_column(
+        sa.DateTime, nullable=False, server_default=func.current_timestamp(), init=False
     )
-    opening_statement = mapped_column(LongText)
-    suggested_questions = mapped_column(LongText)
-    suggested_questions_after_answer = mapped_column(LongText)
-    speech_to_text = mapped_column(LongText)
-    text_to_speech = mapped_column(LongText)
-    more_like_this = mapped_column(LongText)
-    model = mapped_column(LongText)
-    user_input_form = mapped_column(LongText)
-    dataset_query_variable = mapped_column(String(255))
-    pre_prompt = mapped_column(LongText)
-    agent_mode = mapped_column(LongText)
-    sensitive_word_avoidance = mapped_column(LongText)
-    retriever_resource = mapped_column(LongText)
-    prompt_type = mapped_column(String(255), nullable=False, server_default=sa.text("'simple'"))
-    chat_prompt_config = mapped_column(LongText)
-    completion_prompt_config = mapped_column(LongText)
-    dataset_configs = mapped_column(LongText)
-    external_data_tools = mapped_column(LongText)
-    file_upload = mapped_column(LongText)
+    updated_by: Mapped[str | None] = mapped_column(StringUUID, nullable=True, default=None)
+    updated_at: Mapped[datetime] = mapped_column(
+        sa.DateTime,
+        nullable=False,
+        server_default=func.current_timestamp(),
+        onupdate=func.current_timestamp(),
+        init=False,
+    )
+    opening_statement: Mapped[str | None] = mapped_column(LongText, default=None)
+    suggested_questions: Mapped[str | None] = mapped_column(LongText, default=None)
+    suggested_questions_after_answer: Mapped[str | None] = mapped_column(LongText, default=None)
+    speech_to_text: Mapped[str | None] = mapped_column(LongText, default=None)
+    text_to_speech: Mapped[str | None] = mapped_column(LongText, default=None)
+    more_like_this: Mapped[str | None] = mapped_column(LongText, default=None)
+    model: Mapped[str | None] = mapped_column(LongText, default=None)
+    user_input_form: Mapped[str | None] = mapped_column(LongText, default=None)
+    dataset_query_variable: Mapped[str | None] = mapped_column(String(255), default=None)
+    pre_prompt: Mapped[str | None] = mapped_column(LongText, default=None)
+    agent_mode: Mapped[str | None] = mapped_column(LongText, default=None)
+    sensitive_word_avoidance: Mapped[str | None] = mapped_column(LongText, default=None)
+    retriever_resource: Mapped[str | None] = mapped_column(LongText, default=None)
+    prompt_type: Mapped[str] = mapped_column(
+        String(255), nullable=False, server_default=sa.text("'simple'"), default="simple"
+    )
+    chat_prompt_config: Mapped[str | None] = mapped_column(LongText, default=None)
+    completion_prompt_config: Mapped[str | None] = mapped_column(LongText, default=None)
+    dataset_configs: Mapped[str | None] = mapped_column(LongText, default=None)
+    external_data_tools: Mapped[str | None] = mapped_column(LongText, default=None)
+    file_upload: Mapped[str | None] = mapped_column(LongText, default=None)

     @property
     def app(self) -> App | None:
@@ -810,8 +818,8 @@ class Conversation(Base):
                 override_model_configs = json.loads(self.override_model_configs)

                 if "model" in override_model_configs:
-                    app_model_config = AppModelConfig()
-                    app_model_config = app_model_config.from_model_config_dict(override_model_configs)
+                    # where is app_id?
+                    app_model_config = AppModelConfig(app_id=self.app_id).from_model_config_dict(override_model_configs)
                     model_config = app_model_config.to_dict()
                 else:
                     model_config["configs"] = override_model_configs
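Editor's note: the migration above moves `AppModelConfig` to a dataclass-style declarative base, which is why every column now needs either `init=False` or an explicit `default`. A minimal self-contained sketch of the pattern (assuming SQLAlchemy 2.x's `MappedAsDataclass`; the real `TypeBase` lives elsewhere in the Dify codebase):

```python
from sqlalchemy.orm import DeclarativeBase, Mapped, MappedAsDataclass, mapped_column


class Base(MappedAsDataclass, DeclarativeBase):
    pass


class ConfigSketch(Base):
    __tablename__ = "config_sketch"

    # init=False keeps generated columns out of __init__ ...
    id: Mapped[int] = mapped_column(primary_key=True, init=False)
    # ... required columns stay constructor arguments ...
    app_id: Mapped[str] = mapped_column()
    # ... and every optional column needs an explicit default.
    provider: Mapped[str | None] = mapped_column(default=None)


cfg = ConfigSketch(app_id="app-1")  # provider defaults to None; id is generated
```

This also explains the test changes later in this commit: `id=...` can no longer be passed to the constructor and is assigned after construction instead.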
@@ -226,8 +226,7 @@ class Workflow(Base):  # bug
         #
         # Currently, the following functions / methods would mutate the returned dict:
         #
-        # - `_get_graph_and_variable_pool_of_single_iteration`.
-        # - `_get_graph_and_variable_pool_of_single_loop`.
+        # - `_get_graph_and_variable_pool_for_single_node_run`.
         return json.loads(self.graph) if self.graph else {}

     def get_node_config_by_id(self, node_id: str) -> Mapping[str, Any]:
@@ -31,7 +31,7 @@ dependencies = [
     "gunicorn~=23.0.0",
     "httpx[socks]~=0.27.0",
     "jieba==0.42.1",
-    "json-repair>=0.41.1",
+    "json-repair>=0.55.1",
     "jsonschema>=4.25.1",
     "langfuse~=2.51.3",
     "langsmith~=0.1.77",
@@ -93,6 +93,7 @@ dependencies = [
     "weaviate-client==4.17.0",
    "apscheduler>=3.11.0",
     "weave>=0.52.16",
+    "fastopenapi[flask]>=0.7.0",
 ]
 # Before adding new dependency, consider place it in
 # alphabet order (a-z) and suitable group.
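Editor's note: a hedged sketch of the situation the `json-repair` version bump relates to — repairing a JSON body in which a template variable was substituted without quotes (see the new HTTP-executor tests at the end of this commit). Exact repair output depends on the json_repair version; the snippet only illustrates the input shape.

```python
import json_repair

# A JSON body where a UUID variable was substituted without quotes.
raw = '{"rowId": 57eeeeb1-450b-482c-81b9-4be77e95dee2}'
repaired = json_repair.loads(raw)
# Expected with a fixed version: the UUID is preserved in full as a string,
# not truncated.
print(repaired)
```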
@@ -8,6 +8,7 @@
     ],
     "typeCheckingMode": "strict",
     "allowedUntypedLibraries": [
+        "fastopenapi",
         "flask_restx",
         "flask_login",
         "opentelemetry.instrumentation.celery",
@@ -521,12 +521,10 @@ class AppDslService:
             raise ValueError("Missing model_config for chat/agent-chat/completion app")
         # Initialize or update model config
         if not app.app_model_config:
-            app_model_config = AppModelConfig().from_model_config_dict(model_config)
+            app_model_config = AppModelConfig(
+                app_id=app.id, created_by=account.id, updated_by=account.id
+            ).from_model_config_dict(model_config)
             app_model_config.id = str(uuid4())
-            app_model_config.app_id = app.id
-            app_model_config.created_by = account.id
-            app_model_config.updated_by = account.id

             app.app_model_config_id = app_model_config.id

             self._session.add(app_model_config)
@@ -1,6 +1,8 @@
+from __future__ import annotations
+
 import uuid
 from collections.abc import Generator, Mapping
-from typing import Any, Union
+from typing import TYPE_CHECKING, Any, Union

 from configs import dify_config
 from core.app.apps.advanced_chat.app_generator import AdvancedChatAppGenerator
@@ -18,6 +20,9 @@ from services.errors.app import QuotaExceededError, WorkflowIdFormatError, Workf
 from services.errors.llm import InvokeRateLimitError
 from services.workflow_service import WorkflowService

+if TYPE_CHECKING:
+    from controllers.console.app.workflow import LoopNodeRunPayload
+

 class AppGenerateService:
     @classmethod
@@ -165,7 +170,9 @@ class AppGenerateService:
             raise ValueError(f"Invalid app mode {app_model.mode}")

     @classmethod
-    def generate_single_loop(cls, app_model: App, user: Account, node_id: str, args: Any, streaming: bool = True):
+    def generate_single_loop(
+        cls, app_model: App, user: Account, node_id: str, args: LoopNodeRunPayload, streaming: bool = True
+    ):
         if app_model.mode == AppMode.ADVANCED_CHAT:
             workflow = cls._get_workflow(app_model, InvokeFrom.DEBUGGER)
             return AdvancedChatAppGenerator.convert_to_event_stream(
@@ -150,10 +150,9 @@ class AppService:
             db.session.flush()

             if default_model_config:
-                app_model_config = AppModelConfig(**default_model_config)
-                app_model_config.app_id = app.id
-                app_model_config.created_by = account.id
-                app_model_config.updated_by = account.id
+                app_model_config = AppModelConfig(
+                    **default_model_config, app_id=app.id, created_by=account.id, updated_by=account.id
+                )
                 db.session.add(app_model_config)
                 db.session.flush()
@@ -261,10 +261,9 @@ class MessageService:
         else:
             conversation_override_model_configs = json.loads(conversation.override_model_configs)
             app_model_config = AppModelConfig(
-                id=conversation.app_model_config_id,
                 app_id=app_model.id,
             )
-
+            app_model_config.id = conversation.app_model_config_id
             app_model_config = app_model_config.from_model_config_dict(conversation_override_model_configs)
         if not app_model_config:
             raise ValueError("did not find app model config")
@@ -172,7 +172,6 @@ class TestAgentService:

         # Create app model config
         app_model_config = AppModelConfig(
-            id=fake.uuid4(),
             app_id=app.id,
             provider="openai",
             model_id="gpt-3.5-turbo",
@@ -180,6 +179,7 @@ class TestAgentService:
             model="gpt-3.5-turbo",
             agent_mode=json.dumps({"enabled": True, "strategy": "react", "tools": []}),
         )
+        app_model_config.id = fake.uuid4()
         db.session.add(app_model_config)
         db.session.commit()

@@ -413,7 +413,6 @@ class TestAgentService:

         # Create app model config
         app_model_config = AppModelConfig(
-            id=fake.uuid4(),
             app_id=app.id,
             provider="openai",
             model_id="gpt-3.5-turbo",
@@ -421,6 +420,7 @@ class TestAgentService:
             model="gpt-3.5-turbo",
             agent_mode=json.dumps({"enabled": True, "strategy": "react", "tools": []}),
         )
+        app_model_config.id = fake.uuid4()
         db.session.add(app_model_config)
         db.session.commit()

@@ -485,7 +485,6 @@ class TestAgentService:

         # Create app model config
         app_model_config = AppModelConfig(
-            id=fake.uuid4(),
             app_id=app.id,
             provider="openai",
             model_id="gpt-3.5-turbo",
@@ -493,6 +492,7 @@ class TestAgentService:
             model="gpt-3.5-turbo",
             agent_mode=json.dumps({"enabled": True, "strategy": "react", "tools": []}),
         )
+        app_model_config.id = fake.uuid4()
         db.session.add(app_model_config)
         db.session.commit()
@@ -226,26 +226,27 @@ class TestAppDslService:
         app, account = self._create_test_app_and_account(db_session_with_containers, mock_external_service_dependencies)

         # Create model config for the app
-        model_config = AppModelConfig()
-        model_config.id = fake.uuid4()
-        model_config.app_id = app.id
-        model_config.provider = "openai"
-        model_config.model_id = "gpt-3.5-turbo"
-        model_config.model = json.dumps(
-            {
-                "provider": "openai",
-                "name": "gpt-3.5-turbo",
-                "mode": "chat",
-                "completion_params": {
-                    "max_tokens": 1000,
-                    "temperature": 0.7,
-                },
-            }
+        model_config = AppModelConfig(
+            app_id=app.id,
+            provider="openai",
+            model_id="gpt-3.5-turbo",
+            model=json.dumps(
+                {
+                    "provider": "openai",
+                    "name": "gpt-3.5-turbo",
+                    "mode": "chat",
+                    "completion_params": {
+                        "max_tokens": 1000,
+                        "temperature": 0.7,
+                    },
+                }
+            ),
+            pre_prompt="You are a helpful assistant.",
+            prompt_type="simple",
+            created_by=account.id,
+            updated_by=account.id,
         )
-        model_config.pre_prompt = "You are a helpful assistant."
-        model_config.prompt_type = "simple"
-        model_config.created_by = account.id
-        model_config.updated_by = account.id
+        model_config.id = fake.uuid4()

         # Set the app_model_config_id to link the config
         app.app_model_config_id = model_config.id
@ -925,24 +925,24 @@ class TestWorkflowService:
|
|||
# Create app model config (required for conversion)
|
||||
from models.model import AppModelConfig
|
||||
|
||||
app_model_config = AppModelConfig()
|
||||
app_model_config.id = fake.uuid4()
|
||||
app_model_config.app_id = app.id
|
||||
app_model_config.tenant_id = app.tenant_id
|
||||
app_model_config.provider = "openai"
|
||||
app_model_config.model_id = "gpt-3.5-turbo"
|
||||
# Set the model field directly - this is what model_dict property returns
|
||||
app_model_config.model = json.dumps(
|
||||
{
|
||||
"provider": "openai",
|
||||
"name": "gpt-3.5-turbo",
|
||||
"completion_params": {"max_tokens": 1000, "temperature": 0.7},
|
||||
}
|
||||
app_model_config = AppModelConfig(
|
||||
app_id=app.id,
|
||||
provider="openai",
|
||||
model_id="gpt-3.5-turbo",
|
||||
# Set the model field directly - this is what model_dict property returns
|
||||
model=json.dumps(
|
||||
{
|
||||
"provider": "openai",
|
||||
"name": "gpt-3.5-turbo",
|
||||
"completion_params": {"max_tokens": 1000, "temperature": 0.7},
|
||||
}
|
||||
),
|
||||
# Set pre_prompt for PromptTemplateConfigManager
|
||||
pre_prompt="You are a helpful assistant.",
|
||||
created_by=account.id,
|
||||
updated_by=account.id,
|
||||
)
|
||||
# Set pre_prompt for PromptTemplateConfigManager
|
||||
app_model_config.pre_prompt = "You are a helpful assistant."
|
||||
app_model_config.created_by = account.id
|
||||
app_model_config.updated_by = account.id
|
||||
app_model_config.id = fake.uuid4()
|
||||
|
||||
from extensions.ext_database import db
|
||||
|
||||
|
@@ -987,24 +987,24 @@ class TestWorkflowService:
         # Create app model config (required for conversion)
         from models.model import AppModelConfig

-        app_model_config = AppModelConfig()
-        app_model_config.id = fake.uuid4()
-        app_model_config.app_id = app.id
-        app_model_config.tenant_id = app.tenant_id
-        app_model_config.provider = "openai"
-        app_model_config.model_id = "gpt-3.5-turbo"
-        # Set the model field directly - this is what model_dict property returns
-        app_model_config.model = json.dumps(
-            {
-                "provider": "openai",
-                "name": "gpt-3.5-turbo",
-                "completion_params": {"max_tokens": 1000, "temperature": 0.7},
-            }
-        )
+        app_model_config = AppModelConfig(
+            app_id=app.id,
+            provider="openai",
+            model_id="gpt-3.5-turbo",
+            # Set the model field directly - this is what model_dict property returns
+            model=json.dumps(
+                {
+                    "provider": "openai",
+                    "name": "gpt-3.5-turbo",
+                    "completion_params": {"max_tokens": 1000, "temperature": 0.7},
+                }
+            ),
+            # Set pre_prompt for PromptTemplateConfigManager
+            pre_prompt="Complete the following text:",
+            created_by=account.id,
+            updated_by=account.id,
+        )
-        # Set pre_prompt for PromptTemplateConfigManager
-        app_model_config.pre_prompt = "Complete the following text:"
-        app_model_config.created_by = account.id
-        app_model_config.updated_by = account.id
+        app_model_config.id = fake.uuid4()

         from extensions.ext_database import db

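The hunks above repeat the comment that the `model` field is "what model_dict property returns". For orientation, here is a minimal, hypothetical sketch of that relationship — the real `AppModelConfig` lives in `models.model`, and the parsing behavior shown is assumed from the comment, not verified against it:

```python
import json

# Hypothetical stand-in for models.model.AppModelConfig, illustrating only the
# model / model_dict relationship the test comments rely on.
class AppModelConfigSketch:
    def __init__(self, model: str = ""):
        self.model = model  # JSON string, as the tests store via json.dumps(...)

    @property
    def model_dict(self) -> dict:
        # Assumed behavior: parse the stored JSON string back into a dict.
        return json.loads(self.model) if self.model else {}


cfg = AppModelConfigSketch(
    model=json.dumps(
        {
            "provider": "openai",
            "name": "gpt-3.5-turbo",
            "completion_params": {"max_tokens": 1000, "temperature": 0.7},
        }
    )
)
assert cfg.model_dict["completion_params"]["max_tokens"] == 1000
```
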
@@ -0,0 +1,27 @@
+import builtins
+
+import pytest
+from flask import Flask
+from flask.views import MethodView
+
+from extensions import ext_fastopenapi
+
+if not hasattr(builtins, "MethodView"):
+    builtins.MethodView = MethodView  # type: ignore[attr-defined]
+
+
+@pytest.fixture
+def app() -> Flask:
+    app = Flask(__name__)
+    app.config["TESTING"] = True
+    return app
+
+
+def test_console_ping_fastopenapi_returns_pong(app: Flask):
+    ext_fastopenapi.init_app(app)
+
+    client = app.test_client()
+    response = client.get("/console/api/ping")
+
+    assert response.status_code == 200
+    assert response.get_json() == {"result": "pong"}

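The same endpoint can be exercised by hand against a running backend; a minimal sketch, assuming the dev server from `dev/start-api` is listening on its default port 5001:

```python
import json
import urllib.request

# Assumes the dev API server started via dev/start-api is listening on
# port 5001 (the port that script passes to `flask run`).
with urllib.request.urlopen("http://localhost:5001/console/api/ping") as resp:
    assert resp.status == 200
    assert json.loads(resp.read()) == {"result": "pong"}
```
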
@@ -475,3 +475,130 @@ def test_valid_api_key_works():
     headers = executor._assembling_headers()
     assert "Authorization" in headers
     assert headers["Authorization"] == "Bearer valid-api-key-123"
+
+
+def test_executor_with_json_body_and_unquoted_uuid_variable():
+    """Test that unquoted UUID variables are correctly handled in JSON body.
+
+    This test verifies the fix for issue #31436 where json_repair would truncate
+    certain UUID patterns (like 57eeeeb1-...) when they appeared as unquoted values.
+    """
+    # UUID that triggers the json_repair truncation bug
+    test_uuid = "57eeeeb1-450b-482c-81b9-4be77e95dee2"
+
+    variable_pool = VariablePool(
+        system_variables=SystemVariable.empty(),
+        user_inputs={},
+    )
+    variable_pool.add(["pre_node_id", "uuid"], test_uuid)
+
+    node_data = HttpRequestNodeData(
+        title="Test JSON Body with Unquoted UUID Variable",
+        method="post",
+        url="https://api.example.com/data",
+        authorization=HttpRequestNodeAuthorization(type="no-auth"),
+        headers="Content-Type: application/json",
+        params="",
+        body=HttpRequestNodeBody(
+            type="json",
+            data=[
+                BodyData(
+                    key="",
+                    type="text",
+                    # UUID variable without quotes - this is the problematic case
+                    value='{"rowId": {{#pre_node_id.uuid#}}}',
+                )
+            ],
+        ),
+    )
+
+    executor = Executor(
+        node_data=node_data,
+        timeout=HttpRequestNodeTimeout(connect=10, read=30, write=30),
+        variable_pool=variable_pool,
+    )
+
+    # The UUID should be preserved in full, not truncated
+    assert executor.json == {"rowId": test_uuid}
+    assert len(executor.json["rowId"]) == len(test_uuid)
+
+
+def test_executor_with_json_body_and_unquoted_uuid_with_newlines():
+    """Test that unquoted UUID variables with newlines in JSON are handled correctly.
+
+    This is a specific case from issue #31436 where the JSON body contains newlines.
+    """
+    test_uuid = "57eeeeb1-450b-482c-81b9-4be77e95dee2"
+
+    variable_pool = VariablePool(
+        system_variables=SystemVariable.empty(),
+        user_inputs={},
+    )
+    variable_pool.add(["pre_node_id", "uuid"], test_uuid)
+
+    node_data = HttpRequestNodeData(
+        title="Test JSON Body with Unquoted UUID and Newlines",
+        method="post",
+        url="https://api.example.com/data",
+        authorization=HttpRequestNodeAuthorization(type="no-auth"),
+        headers="Content-Type: application/json",
+        params="",
+        body=HttpRequestNodeBody(
+            type="json",
+            data=[
+                BodyData(
+                    key="",
+                    type="text",
+                    # JSON with newlines and unquoted UUID variable
+                    value='{\n"rowId": {{#pre_node_id.uuid#}}\n}',
+                )
+            ],
+        ),
+    )
+
+    executor = Executor(
+        node_data=node_data,
+        timeout=HttpRequestNodeTimeout(connect=10, read=30, write=30),
+        variable_pool=variable_pool,
+    )
+
+    # The UUID should be preserved in full
+    assert executor.json == {"rowId": test_uuid}
+
+
+def test_executor_with_json_body_preserves_numbers_and_strings():
+    """Test that numbers are preserved and string values are properly quoted."""
+    variable_pool = VariablePool(
+        system_variables=SystemVariable.empty(),
+        user_inputs={},
+    )
+    variable_pool.add(["node", "count"], 42)
+    variable_pool.add(["node", "id"], "abc-123")
+
+    node_data = HttpRequestNodeData(
+        title="Test JSON Body with mixed types",
+        method="post",
+        url="https://api.example.com/data",
+        authorization=HttpRequestNodeAuthorization(type="no-auth"),
+        headers="",
+        params="",
+        body=HttpRequestNodeBody(
+            type="json",
+            data=[
+                BodyData(
+                    key="",
+                    type="text",
+                    value='{"count": {{#node.count#}}, "id": {{#node.id#}}}',
+                )
+            ],
+        ),
+    )
+
+    executor = Executor(
+        node_data=node_data,
+        timeout=HttpRequestNodeTimeout(connect=10, read=30, write=30),
+        variable_pool=variable_pool,
+    )
+
+    assert executor.json["count"] == 42
+    assert executor.json["id"] == "abc-123"

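For context on the behavior these tests pin down, here is a minimal standalone sketch — not the executor's actual implementation, and using a hypothetical `%s` template in place of the `{{#node.var#}}` syntax above — of why unquoted substitution breaks: splicing a raw UUID string into a JSON template yields invalid JSON (which repair-based parsing could then mangle), while serializing each value with `json.dumps()` before substitution keeps strings quoted and numbers bare.

```python
import json

# Hypothetical template standing in for the {{#...#}} variable syntax.
template = '{"rowId": %s, "count": %s}'
uuid_value = "57eeeeb1-450b-482c-81b9-4be77e95dee2"
count = 42

# Splicing the raw string in produces invalid JSON: the UUID is an unquoted token.
broken = template % (uuid_value, count)
try:
    json.loads(broken)
except json.JSONDecodeError:
    pass  # expected: this is the malformed input a repair step would have to guess at

# Serializing each value first quotes strings and leaves numbers bare.
safe = template % (json.dumps(uuid_value), json.dumps(count))
assert json.loads(safe) == {"rowId": uuid_value, "count": 42}
```
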
api/uv.lock (4671 lines changed): file diff suppressed because it is too large.

@@ -0,0 +1,28 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+SCRIPT_DIR="$(dirname "$(realpath "$0")")"
+ROOT="$(dirname "$SCRIPT_DIR")"
+
+API_ENV_EXAMPLE="$ROOT/api/.env.example"
+API_ENV="$ROOT/api/.env"
+WEB_ENV_EXAMPLE="$ROOT/web/.env.example"
+WEB_ENV="$ROOT/web/.env.local"
+MIDDLEWARE_ENV_EXAMPLE="$ROOT/docker/middleware.env.example"
+MIDDLEWARE_ENV="$ROOT/docker/middleware.env"
+
+# 1) Copy api/.env.example -> api/.env
+cp "$API_ENV_EXAMPLE" "$API_ENV"
+
+# 2) Copy web/.env.example -> web/.env.local
+cp "$WEB_ENV_EXAMPLE" "$WEB_ENV"
+
+# 3) Copy docker/middleware.env.example -> docker/middleware.env
+cp "$MIDDLEWARE_ENV_EXAMPLE" "$MIDDLEWARE_ENV"
+
+# 4) Install deps
+cd "$ROOT/api"
+uv sync --group dev
+
+cd "$ROOT/web"
+pnpm install

@@ -3,8 +3,9 @@
 set -x

 SCRIPT_DIR="$(dirname "$(realpath "$0")")"
-cd "$SCRIPT_DIR/.."
+cd "$SCRIPT_DIR/../api"

+uv run flask db upgrade
+
-uv --directory api run \
+uv run \
 flask run --host 0.0.0.0 --port=5001 --debug

@@ -0,0 +1,8 @@
+#!/usr/bin/env bash
+set -euo pipefail
+
+SCRIPT_DIR="$(dirname "$(realpath "$0")")"
+ROOT="$(dirname "$SCRIPT_DIR")"
+
+cd "$ROOT/docker"
+docker compose -f docker-compose.middleware.yaml --profile postgresql --profile weaviate -p dify up -d

@@ -83,7 +83,7 @@ while [[ $# -gt 0 ]]; do
 done

 SCRIPT_DIR="$(dirname "$(realpath "$0")")"
-cd "$SCRIPT_DIR/.."
+cd "$SCRIPT_DIR/../api"

 if [[ -n "${ENV_FILE}" ]]; then
   if [[ ! -f "${ENV_FILE}" ]]; then

@@ -123,6 +123,6 @@ echo " Concurrency: ${CONCURRENCY}"
 echo " Pool: ${POOL}"
 echo " Log Level: ${LOGLEVEL}"

-uv --directory api run \
+uv run \
 celery -A app.celery worker \
   -P ${POOL} -c ${CONCURRENCY} --loglevel ${LOGLEVEL} -Q ${QUEUES}

@@ -4,7 +4,7 @@
 set -e
 set -o pipefail

-SCRIPT_DIR="$(dirname "$0")"
+SCRIPT_DIR="$(dirname "$(realpath "$0")")"
 REPO_ROOT="$(dirname "${SCRIPT_DIR}")"

 # rely on `poetry` in path

@@ -67,7 +67,7 @@
     "@lexical/react": "0.38.2",
     "@lexical/selection": "0.38.2",
     "@lexical/text": "0.38.2",
-    "@lexical/utils": "0.38.2",
+    "@lexical/utils": "0.39.0",
     "@monaco-editor/react": "4.7.0",
     "@octokit/core": "6.1.6",
     "@octokit/request-error": "6.1.8",

@@ -94,8 +94,8 @@ importers:
         specifier: 0.38.2
         version: 0.38.2
       '@lexical/utils':
-        specifier: 0.38.2
-        version: 0.38.2
+        specifier: 0.39.0
+        version: 0.39.0
       '@monaco-editor/react':
         specifier: 4.7.0
         version: 4.7.0(monaco-editor@0.55.1)(react-dom@19.2.3(react@19.2.3))(react@19.2.3)

@@ -2066,6 +2066,9 @@ packages:
   '@lexical/clipboard@0.38.2':
     resolution: {integrity: sha512-dDShUplCu8/o6BB9ousr3uFZ9bltR+HtleF/Tl8FXFNPpZ4AXhbLKUoJuucRuIr+zqT7RxEv/3M6pk/HEoE6NQ==}

+  '@lexical/clipboard@0.39.0':
+    resolution: {integrity: sha512-ylrHy8M+I5EH4utwqivslugqQhvgLTz9VEJdrb2RjbhKQEXwMcqKCRWh6cRfkYx64onE2YQE0nRIdzHhExEpLQ==}
+
   '@lexical/code@0.38.2':
     resolution: {integrity: sha512-wpqgbmPsfi/+8SYP0zI2kml09fGPRhzO5litR9DIbbSGvcbawMbRNcKLO81DaTbsJRnBJiQvbBBBJAwZKRqgBw==}

@@ -2081,6 +2084,9 @@ packages:
   '@lexical/extension@0.38.2':
     resolution: {integrity: sha512-qbUNxEVjAC0kxp7hEMTzktj0/51SyJoIJWK6Gm790b4yNBq82fEPkksfuLkRg9VQUteD0RT1Nkjy8pho8nNamw==}

+  '@lexical/extension@0.39.0':
+    resolution: {integrity: sha512-mp/WcF8E53FWPiUHgHQz382J7u7C4+cELYNkC00dKaymf8NhS6M65Y8tyDikNGNUcLXSzaluwK0HkiKjTYGhVQ==}
+
   '@lexical/hashtag@0.38.2':
     resolution: {integrity: sha512-jNI4Pv+plth39bjOeeQegMypkjDmoMWBMZtV0lCynBpkkPFlfMnyL9uzW/IxkZnX8LXWSw5mbWk07nqOUNTCrA==}

@@ -2090,12 +2096,18 @@ packages:
   '@lexical/html@0.38.2':
     resolution: {integrity: sha512-pC5AV+07bmHistRwgG3NJzBMlIzSdxYO6rJU4eBNzyR4becdiLsI4iuv+aY7PhfSv+SCs7QJ9oc4i5caq48Pkg==}

+  '@lexical/html@0.39.0':
+    resolution: {integrity: sha512-7VLWP5DpzBg3kKctpNK6PbhymKAtU6NAnKieopCfCIWlMW+EqpldteiIXGqSqrMRK0JWTmF1gKgr9nnQyOOsXw==}
+
   '@lexical/link@0.38.2':
     resolution: {integrity: sha512-UOKTyYqrdCR9+7GmH6ZVqJTmqYefKGMUHMGljyGks+OjOGZAQs78S1QgcPEqltDy+SSdPSYK7wAo6gjxZfEq9g==}

   '@lexical/list@0.38.2':
     resolution: {integrity: sha512-OQm9TzatlMrDZGxMxbozZEHzMJhKxAbH1TOnOGyFfzpfjbnFK2y8oLeVsfQZfZRmiqQS4Qc/rpFnRP2Ax5dsbA==}

+  '@lexical/list@0.39.0':
+    resolution: {integrity: sha512-mxgSxUrakTCHtC+gF30BChQBJTsCMiMgfC2H5VvhcFwXMgsKE/aK9+a+C/sSvvzCmPXqzYsuAcGkJcrY3e5xlw==}
+
   '@lexical/mark@0.38.2':
     resolution: {integrity: sha512-U+8KGwc3cP5DxSs15HfkP2YZJDs5wMbWQAwpGqep9bKphgxUgjPViKhdi+PxIt2QEzk7WcoZWUsK1d2ty/vSmg==}

@@ -2123,15 +2135,24 @@ packages:
   '@lexical/selection@0.38.2':
     resolution: {integrity: sha512-eMFiWlBH6bEX9U9sMJ6PXPxVXTrihQfFeiIlWLuTpEIDF2HRz7Uo1KFRC/yN6q0DQaj7d9NZYA6Mei5DoQuz5w==}

+  '@lexical/selection@0.39.0':
+    resolution: {integrity: sha512-j0cgNuTKDCdf/4MzRnAUwEqG6C/WQp18k2WKmX5KIVZJlhnGIJmlgSBrxjo8AuZ16DIHxTm2XNB4cUDCgZNuPA==}
+
   '@lexical/table@0.38.2':
     resolution: {integrity: sha512-uu0i7yz0nbClmHOO5ZFsinRJE6vQnFz2YPblYHAlNigiBedhqMwSv5bedrzDq8nTTHwych3mC63tcyKIrM+I1g==}

+  '@lexical/table@0.39.0':
+    resolution: {integrity: sha512-1eH11kV4bJ0fufCYl8DpE19kHwqUI8Ev5CZwivfAtC3ntwyNkeEpjCc0pqeYYIWN/4rTZ5jgB3IJV4FntyfCzw==}
+
   '@lexical/text@0.38.2':
     resolution: {integrity: sha512-+juZxUugtC4T37aE3P0l4I9tsWbogDUnTI/mgYk4Ht9g+gLJnhQkzSA8chIyfTxbj5i0A8yWrUUSw+/xA7lKUQ==}

   '@lexical/utils@0.38.2':
     resolution: {integrity: sha512-y+3rw15r4oAWIEXicUdNjfk8018dbKl7dWHqGHVEtqzAYefnEYdfD2FJ5KOTXfeoYfxi8yOW7FvzS4NZDi8Bfw==}

+  '@lexical/utils@0.39.0':
+    resolution: {integrity: sha512-8YChidpMJpwQc4nex29FKUeuZzC++QCS/Jt46lPuy1GS/BZQoPHFKQ5hyVvM9QVhc5CEs4WGNoaCZvZIVN8bQw==}
+
   '@lexical/yjs@0.38.2':
     resolution: {integrity: sha512-fg6ZHNrVQmy1AAxaTs8HrFbeNTJCaCoEDPi6pqypHQU3QVfqr4nq0L0EcHU/TRlR1CeduEPvZZIjUUxWTZ0u8g==}
     peerDependencies:

@@ -2619,6 +2640,9 @@ packages:
   '@preact/signals-core@1.12.1':
     resolution: {integrity: sha512-BwbTXpj+9QutoZLQvbttRg5x3l5468qaV2kufh+51yha1c53ep5dY4kTuZR35+3pAZxpfQerGJiQqg34ZNZ6uA==}

+  '@preact/signals-core@1.12.2':
+    resolution: {integrity: sha512-5Yf8h1Ke3SMHr15xl630KtwPTW4sYDFkkxS0vQ8UiQLWwZQnrF9IKaVG1mN5VcJz52EcWs2acsc/Npjha/7ysA==}
+
   '@preact/signals@1.3.2':
     resolution: {integrity: sha512-naxcJgUJ6BTOROJ7C3QML7KvwKwCXQJYTc5L/b0eEsdYgPB6SxwoQ1vDGcS0Q7GVjAenVq/tXrybVdFShHYZWg==}
     peerDependencies:

@@ -6223,6 +6247,9 @@ packages:
   lexical@0.38.2:
     resolution: {integrity: sha512-JJmfsG3c4gwBHzUGffbV7ifMNkKAWMCnYE3xJl87gty7hjyV5f3xq7eqTjP5HFYvO4XpjJvvWO2/djHp5S10tw==}

+  lexical@0.39.0:
+    resolution: {integrity: sha512-lpLv7MEJH5QDujEDlYqettL3ATVtNYjqyimzqgrm0RvCm3AO9WXSdsgTxuN7IAZRu88xkxCDeYubeUf4mNZVdg==}
+
   lib0@0.2.117:
     resolution: {integrity: sha512-DeXj9X5xDCjgKLU/7RR+/HQEVzuuEUiwldwOGsHK/sfAfELGWEyTcf0x+uOvCvK3O2zPmZePXWL85vtia6GyZw==}
     engines: {node: '>=16'}

@@ -10373,6 +10400,14 @@ snapshots:
       '@lexical/utils': 0.38.2
       lexical: 0.38.2

+  '@lexical/clipboard@0.39.0':
+    dependencies:
+      '@lexical/html': 0.39.0
+      '@lexical/list': 0.39.0
+      '@lexical/selection': 0.39.0
+      '@lexical/utils': 0.39.0
+      lexical: 0.39.0
+
   '@lexical/code@0.38.2':
     dependencies:
       '@lexical/utils': 0.38.2

@@ -10401,6 +10436,12 @@ snapshots:
       '@preact/signals-core': 1.12.1
       lexical: 0.38.2

+  '@lexical/extension@0.39.0':
+    dependencies:
+      '@lexical/utils': 0.39.0
+      '@preact/signals-core': 1.12.2
+      lexical: 0.39.0
+
   '@lexical/hashtag@0.38.2':
     dependencies:
       '@lexical/text': 0.38.2

@@ -10419,6 +10460,12 @@ snapshots:
       '@lexical/utils': 0.38.2
       lexical: 0.38.2

+  '@lexical/html@0.39.0':
+    dependencies:
+      '@lexical/selection': 0.39.0
+      '@lexical/utils': 0.39.0
+      lexical: 0.39.0
+
   '@lexical/link@0.38.2':
     dependencies:
       '@lexical/extension': 0.38.2

@@ -10432,6 +10479,13 @@ snapshots:
       '@lexical/utils': 0.38.2
       lexical: 0.38.2

+  '@lexical/list@0.39.0':
+    dependencies:
+      '@lexical/extension': 0.39.0
+      '@lexical/selection': 0.39.0
+      '@lexical/utils': 0.39.0
+      lexical: 0.39.0
+
   '@lexical/mark@0.38.2':
     dependencies:
       '@lexical/utils': 0.38.2

@@ -10501,6 +10555,10 @@ snapshots:
     dependencies:
       lexical: 0.38.2

+  '@lexical/selection@0.39.0':
+    dependencies:
+      lexical: 0.39.0
+
   '@lexical/table@0.38.2':
     dependencies:
       '@lexical/clipboard': 0.38.2

@@ -10508,6 +10566,13 @@ snapshots:
       '@lexical/utils': 0.38.2
       lexical: 0.38.2

+  '@lexical/table@0.39.0':
+    dependencies:
+      '@lexical/clipboard': 0.39.0
+      '@lexical/extension': 0.39.0
+      '@lexical/utils': 0.39.0
+      lexical: 0.39.0
+
   '@lexical/text@0.38.2':
     dependencies:
       lexical: 0.38.2

@@ -10519,6 +10584,13 @@ snapshots:
       '@lexical/table': 0.38.2
       lexical: 0.38.2

+  '@lexical/utils@0.39.0':
+    dependencies:
+      '@lexical/list': 0.39.0
+      '@lexical/selection': 0.39.0
+      '@lexical/table': 0.39.0
+      lexical: 0.39.0
+
   '@lexical/yjs@0.38.2(yjs@13.6.27)':
     dependencies:
       '@lexical/offset': 0.38.2

@@ -10973,6 +11045,8 @@ snapshots:

   '@preact/signals-core@1.12.1': {}

+  '@preact/signals-core@1.12.2': {}
+
   '@preact/signals@1.3.2(preact@10.28.0)':
     dependencies:
       '@preact/signals-core': 1.12.1

@@ -15090,6 +15164,8 @@ snapshots:

   lexical@0.38.2: {}

+  lexical@0.39.0: {}
+
   lib0@0.2.117:
     dependencies:
       isomorphic.js: 0.2.5