diff --git a/.agents/skills/e2e-cucumber-playwright/SKILL.md b/.agents/skills/e2e-cucumber-playwright/SKILL.md new file mode 100644 index 0000000000..de6b58f26d --- /dev/null +++ b/.agents/skills/e2e-cucumber-playwright/SKILL.md @@ -0,0 +1,79 @@ +--- +name: e2e-cucumber-playwright +description: Write, update, or review Dify end-to-end tests under `e2e/` that use Cucumber, Gherkin, and Playwright. Use when the task involves `.feature` files, `features/step-definitions/`, `features/support/`, `DifyWorld`, scenario tags, locator/assertion choices, or E2E testing best practices for this repository. +--- + +# Dify E2E Cucumber + Playwright + +Use this skill for Dify's repository-level E2E suite in `e2e/`. Use [`e2e/AGENTS.md`](../../../e2e/AGENTS.md) as the canonical guide for local architecture and conventions, then apply Playwright/Cucumber best practices only where they fit the current suite. + +## Scope + +- Use this skill for `.feature` files, Cucumber step definitions, `DifyWorld`, hooks, tags, and E2E review work under `e2e/`. +- Do not use this skill for Vitest or React Testing Library work under `web/`; use `frontend-testing` instead. +- Do not use this skill for backend test or API review tasks under `api/`. + +## Read Order + +1. Read [`e2e/AGENTS.md`](../../../e2e/AGENTS.md) first. +2. Read only the files directly involved in the task: + - target `.feature` files under `e2e/features/` + - related step files under `e2e/features/step-definitions/` + - `e2e/features/support/hooks.ts` and `e2e/features/support/world.ts` when session lifecycle or shared state matters + - `e2e/scripts/run-cucumber.ts` and `e2e/cucumber.config.ts` when tags or execution flow matter +3. Read [`references/playwright-best-practices.md`](references/playwright-best-practices.md) only when locator, assertion, isolation, or waiting choices are involved. +4. Read [`references/cucumber-best-practices.md`](references/cucumber-best-practices.md) only when scenario wording, step granularity, tags, or expression design are involved. +5. Re-check official docs with Context7 before introducing a new Playwright or Cucumber pattern. + +## Local Rules + +- `e2e/` uses Cucumber for scenarios and Playwright as the browser layer. +- `DifyWorld` is the per-scenario context object. Type `this` as `DifyWorld` and use `async function`, not arrow functions. +- Keep glue organized by capability under `e2e/features/step-definitions/`; use `common/` only for broadly reusable steps. +- Browser session behavior comes from `features/support/hooks.ts`: + - default: authenticated session with shared storage state + - `@unauthenticated`: clean browser context + - `@authenticated`: readability/selective-run tag only unless implementation changes + - `@fresh`: only for `e2e:full*` flows +- Do not import Playwright Test runner patterns that bypass the current Cucumber + `DifyWorld` architecture unless the task is explicitly about changing that architecture. + +## Workflow + +1. Rebuild local context. + - Inspect the target feature area. + - Reuse an existing step when wording and behavior already match. + - Add a new step only for a genuinely new user action or assertion. + - Keep edits close to the current capability folder unless the step is broadly reusable. +2. Write behavior-first scenarios. + - Describe user-observable behavior, not DOM mechanics. + - Keep each scenario focused on one workflow or outcome. + - Keep scenarios independent and re-runnable. +3. Write step definitions in the local style. 
+ - Keep one step to one user-visible action or one assertion. + - Prefer Cucumber Expressions such as `{string}` and `{int}`. + - Scope locators to stable containers when the page has repeated elements. + - Avoid page-object layers or extra helper abstractions unless repeated complexity clearly justifies them. +4. Use Playwright in the local style. + - Prefer user-facing locators: `getByRole`, `getByLabel`, `getByPlaceholder`, `getByText`, then `getByTestId` for explicit contracts. + - Use web-first `expect(...)` assertions. + - Do not use `waitForTimeout`, manual polling, or raw visibility checks when a locator action or retrying assertion already expresses the behavior. +5. Validate narrowly. + - Run the narrowest tagged scenario or flow that exercises the change. + - Run `pnpm -C e2e check`. + - Broaden verification only when the change affects hooks, tags, setup, or shared step semantics. + +## Review Checklist + +- Does the scenario describe behavior rather than implementation? +- Does it fit the current session model, tags, and `DifyWorld` usage? +- Should an existing step be reused instead of adding a new one? +- Are locators user-facing and assertions web-first? +- Does the change introduce hidden coupling across scenarios, tags, or instance state? +- Does it document or implement behavior that differs from the real hooks or configuration? + +Lead findings with correctness, flake risk, and architecture drift. + +## References + +- [`references/playwright-best-practices.md`](references/playwright-best-practices.md) +- [`references/cucumber-best-practices.md`](references/cucumber-best-practices.md) diff --git a/.agents/skills/e2e-cucumber-playwright/agents/openai.yaml b/.agents/skills/e2e-cucumber-playwright/agents/openai.yaml new file mode 100644 index 0000000000..605cce041d --- /dev/null +++ b/.agents/skills/e2e-cucumber-playwright/agents/openai.yaml @@ -0,0 +1,4 @@ +interface: + display_name: "E2E Cucumber + Playwright" + short_description: "Write and review Dify E2E scenarios." + default_prompt: "Use $e2e-cucumber-playwright to write or review a Dify E2E scenario under e2e/." diff --git a/.agents/skills/e2e-cucumber-playwright/references/cucumber-best-practices.md b/.agents/skills/e2e-cucumber-playwright/references/cucumber-best-practices.md new file mode 100644 index 0000000000..d7a1a52852 --- /dev/null +++ b/.agents/skills/e2e-cucumber-playwright/references/cucumber-best-practices.md @@ -0,0 +1,93 @@ +# Cucumber Best Practices For Dify E2E + +Use this reference when writing or reviewing Gherkin scenarios, step definitions, parameter expressions, and step reuse in Dify's `e2e/` suite. + +Official sources: + +- https://cucumber.io/docs/guides/10-minute-tutorial/ +- https://cucumber.io/docs/cucumber/step-definitions/ +- https://cucumber.io/docs/cucumber/cucumber-expressions/ + +## What Matters Most + +### 1. Treat scenarios as executable specifications + +Cucumber scenarios should describe examples of behavior, not test implementation recipes. + +Apply it like this: + +- write what the user does and what should happen +- avoid UI-internal wording such as selector details, DOM structure, or component names +- keep language concrete enough that the scenario reads like living documentation + +### 2. Keep scenarios focused + +A scenario should usually prove one workflow or business outcome. If a scenario wanders across several unrelated behaviors, split it. 
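+A minimal sketch of a focused, behavior-first scenario (the feature name, step wording, and app name are illustrative, not existing steps in this suite):
+
+```gherkin
+Feature: App creation
+
+  Scenario: Create a chat app from the blank template
+    Given I am on the Studio apps page
+    When I create a chat app named "Support Bot"
+    Then I should see "Support Bot" in the app list
+```
+
+Each step maps to one user action or one observable outcome, and nothing in the wording depends on selectors or component internals.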
+ +In Dify's suite, this means: + +- one capability-focused scenario per feature path +- no long setup chains when existing bootstrap or reusable steps already cover them +- no hidden dependency on another scenario's side effects + +### 3. Reuse steps, but only when behavior really matches + +Good reuse reduces duplication. Bad reuse hides meaning. + +Prefer reuse when: + +- the user action is genuinely the same +- the expected outcome is genuinely the same +- the wording stays natural across features + +Write a new step when: + +- the behavior is materially different +- reusing the old wording would make the scenario misleading +- a supposedly generic step would become an implementation-detail wrapper + +### 4. Prefer Cucumber Expressions + +Use Cucumber Expressions for parameters unless regex is clearly necessary. + +Common examples: + +- `{string}` for labels, names, and visible text +- `{int}` for counts +- `{float}` for decimal values +- `{word}` only when the value is truly a single token + +Keep expressions readable. If a step needs complicated parsing logic, first ask whether the scenario wording should be simpler. + +### 5. Keep step definitions thin and meaningful + +Step definitions are glue between Gherkin and automation, not a second abstraction language. + +For Dify: + +- type `this` as `DifyWorld` +- use `async function` +- keep each step to one user-visible action or assertion +- rely on `DifyWorld` and existing support code for shared context +- avoid leaking cross-scenario state + +### 6. Use tags intentionally + +Tags should communicate run scope or session semantics, not become ad hoc metadata. + +In Dify's current suite: + +- capability tags group related scenarios +- `@unauthenticated` changes session behavior +- `@authenticated` is descriptive/selective, not a behavior switch by itself +- `@fresh` belongs to reset/full-install flows only + +If a proposed tag implies behavior, verify that hooks or runner configuration actually implement it. + +## Review Questions + +- Does the scenario read like a real example of product behavior? +- Are the steps behavior-oriented instead of implementation-oriented? +- Is a reused step still truthful in this feature? +- Is a new tag documenting real behavior, or inventing semantics that the suite does not implement? +- Would a new reader understand the outcome without opening the step-definition file? diff --git a/.agents/skills/e2e-cucumber-playwright/references/playwright-best-practices.md b/.agents/skills/e2e-cucumber-playwright/references/playwright-best-practices.md new file mode 100644 index 0000000000..02e763d46b --- /dev/null +++ b/.agents/skills/e2e-cucumber-playwright/references/playwright-best-practices.md @@ -0,0 +1,96 @@ +# Playwright Best Practices For Dify E2E + +Use this reference when writing or reviewing locator, assertion, isolation, or synchronization logic for Dify's Cucumber-based E2E suite. + +Official sources: + +- https://playwright.dev/docs/best-practices +- https://playwright.dev/docs/locators +- https://playwright.dev/docs/test-assertions +- https://playwright.dev/docs/browser-contexts + +## What Matters Most + +### 1. Keep scenarios isolated + +Playwright's model is built around clean browser contexts so one test does not leak into another. In Dify's suite, that principle maps to per-scenario session setup in `features/support/hooks.ts` and `DifyWorld`. 
+ +Apply it like this: + +- do not depend on another scenario having run first +- do not persist ad hoc scenario state outside `DifyWorld` +- do not couple ordinary scenarios to `@fresh` behavior +- when a flow needs special auth/session semantics, express that through the existing tag model or explicit hook changes + +### 2. Prefer user-facing locators + +Playwright recommends built-in locators that reflect what users perceive on the page. + +Preferred order in this repository: + +1. `getByRole` +2. `getByLabel` +3. `getByPlaceholder` +4. `getByText` +5. `getByTestId` when an explicit test contract is the most stable option + +Avoid raw CSS/XPath selectors unless no stable user-facing contract exists and adding one is not practical. + +Also remember: + +- repeated content usually needs scoping to a stable container +- exact text matching is often too brittle when role/name or label already exists +- `getByTestId` is acceptable when semantics are weak but the contract is intentional + +### 3. Use web-first assertions + +Playwright assertions auto-wait and retry. Prefer them over manual state inspection. + +Prefer: + +- `await expect(page).toHaveURL(...)` +- `await expect(locator).toBeVisible()` +- `await expect(locator).toBeHidden()` +- `await expect(locator).toBeEnabled()` +- `await expect(locator).toHaveText(...)` + +Avoid: + +- `expect(await locator.isVisible()).toBe(true)` +- custom polling loops for DOM state +- `waitForTimeout` as synchronization + +If a condition genuinely needs custom retry logic, use Playwright's polling/assertion tools deliberately and keep that choice local and explicit. + +### 4. Let actions wait for actionability + +Locator actions already wait for the element to be actionable. Do not preface every click/fill with extra timing logic unless the action needs a specific visible/ready assertion for clarity. + +Good pattern: + +- assert a meaningful visible state when that is part of the behavior +- then click/fill/select via locator APIs + +Bad pattern: + +- stack arbitrary waits before every action +- wait on unstable implementation details instead of the visible state the user cares about + +### 5. Match debugging to the current suite + +Playwright's wider ecosystem supports traces and rich debugging tools. Dify's current suite already captures: + +- full-page screenshots +- page HTML +- console errors +- page errors + +Use the existing artifact flow by default. If a task is specifically about improving diagnostics, confirm the change fits the current Cucumber architecture before importing broader Playwright tooling. + +## Review Questions + +- Would this locator survive DOM refactors that do not change user-visible behavior? +- Is this assertion using Playwright's retrying semantics? +- Is any explicit wait masking a real readiness problem? +- Does this code preserve per-scenario isolation? +- Is a new abstraction really needed, or does it bypass the existing `DifyWorld` + step-definition model? 
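+A minimal step-definition sketch that ties these rules together (assuming `DifyWorld` exposes a Playwright `page` and that the relative import path matches the suite's layout; the step wording is illustrative, not an existing step):
+
+```ts
+import { Then, When } from '@cucumber/cucumber'
+import { expect } from '@playwright/test'
+import type { DifyWorld } from '../../support/world'
+
+// One user-visible action per step, using a user-facing locator.
+When('I open the app named {string}', async function (this: DifyWorld, appName: string) {
+  await this.page.getByRole('link', { name: appName }).click()
+})
+
+// One assertion per step, using a web-first (auto-retrying) assertion.
+Then('the {string} heading is visible', async function (this: DifyWorld, heading: string) {
+  await expect(this.page.getByRole('heading', { name: heading })).toBeVisible()
+})
+```
+
+Because the locator action and `expect` both retry until actionable or satisfied, no `waitForTimeout` or manual readiness polling is needed around them.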
diff --git a/.claude/skills/e2e-cucumber-playwright b/.claude/skills/e2e-cucumber-playwright new file mode 120000 index 0000000000..71b0eae34f --- /dev/null +++ b/.claude/skills/e2e-cucumber-playwright @@ -0,0 +1 @@ +../../.agents/skills/e2e-cucumber-playwright \ No newline at end of file diff --git a/.devcontainer/post_create_command.sh b/.devcontainer/post_create_command.sh index b92d4c35a8..7460636824 100755 --- a/.devcontainer/post_create_command.sh +++ b/.devcontainer/post_create_command.sh @@ -7,7 +7,7 @@ cd web && pnpm install pipx install uv echo "alias start-api=\"cd $WORKSPACE_ROOT/api && uv run python -m flask run --host 0.0.0.0 --port=5001 --debug\"" >> ~/.bashrc -echo "alias start-worker=\"cd $WORKSPACE_ROOT/api && uv run python -m celery -A app.celery worker -P threads -c 1 --loglevel INFO -Q dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention\"" >> ~/.bashrc +echo "alias start-worker=\"cd $WORKSPACE_ROOT/api && uv run python -m celery -A app.celery worker -P threads -c 1 --loglevel INFO -Q dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_publisher,trigger_refresh_executor,retention\"" >> ~/.bashrc echo "alias start-web=\"cd $WORKSPACE_ROOT/web && pnpm dev:inspect\"" >> ~/.bashrc echo "alias start-web-prod=\"cd $WORKSPACE_ROOT/web && pnpm build && pnpm start\"" >> ~/.bashrc echo "alias start-containers=\"cd $WORKSPACE_ROOT/docker && docker-compose -f docker-compose.middleware.yaml -p dify --env-file middleware.env up -d\"" >> ~/.bashrc diff --git a/.github/workflows/autofix.yml b/.github/workflows/autofix.yml index 772ab8dd56..3946834e09 100644 --- a/.github/workflows/autofix.yml +++ b/.github/workflows/autofix.yml @@ -120,7 +120,6 @@ jobs: - name: ESLint autofix if: github.event_name != 'merge_group' && steps.web-changes.outputs.any_changed == 'true' run: | - cd web vp exec eslint --concurrency=2 --prune-suppressions --quiet || true - if: github.event_name != 'merge_group' diff --git a/.github/workflows/pyrefly-type-coverage-comment.yml b/.github/workflows/pyrefly-type-coverage-comment.yml index 51f3ca54b6..974da99aad 100644 --- a/.github/workflows/pyrefly-type-coverage-comment.yml +++ b/.github/workflows/pyrefly-type-coverage-comment.yml @@ -62,7 +62,7 @@ jobs: - name: Render coverage markdown from structured data id: render run: | - comment_body="$(uv run --directory api python api/libs/pyrefly_type_coverage.py \ + comment_body="$(uv run --directory api python libs/pyrefly_type_coverage.py \ --base base_report.json \ < pr_report.json)" diff --git a/.github/workflows/style.yml b/.github/workflows/style.yml index c32fc9d0cb..29f5b090f8 100644 --- a/.github/workflows/style.yml +++ b/.github/workflows/style.yml @@ -77,6 +77,8 @@ jobs: with: files: | web/** + e2e/** + sdks/nodejs-client/** packages/** package.json pnpm-lock.yaml @@ -95,14 +97,14 @@ jobs: id: eslint-cache-restore uses: actions/cache/restore@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4 with: - path: web/.eslintcache - key: ${{ runner.os }}-web-eslint-${{ hashFiles('web/package.json', 'pnpm-lock.yaml', 'web/eslint.config.mjs', 'web/eslint.constants.mjs', 'web/plugins/eslint/**') }}-${{ github.sha }} + path: .eslintcache + key: ${{ runner.os }}-eslint-${{ 
hashFiles('pnpm-lock.yaml', 'eslint.config.mjs', 'web/eslint.config.mjs', 'web/eslint.constants.mjs', 'web/plugins/eslint/**') }}-${{ github.sha }} restore-keys: | - ${{ runner.os }}-web-eslint-${{ hashFiles('web/package.json', 'pnpm-lock.yaml', 'web/eslint.config.mjs', 'web/eslint.constants.mjs', 'web/plugins/eslint/**') }}- + ${{ runner.os }}-eslint-${{ hashFiles('pnpm-lock.yaml', 'eslint.config.mjs', 'web/eslint.config.mjs', 'web/eslint.constants.mjs', 'web/plugins/eslint/**') }}- - name: Web style check if: steps.changed-files.outputs.any_changed == 'true' - working-directory: ./web + working-directory: . run: vp run lint:ci - name: Web tsslint @@ -112,7 +114,7 @@ jobs: - name: Web type check if: steps.changed-files.outputs.any_changed == 'true' - working-directory: ./web + working-directory: . run: vp run type-check - name: Web dead code check @@ -124,7 +126,7 @@ jobs: if: steps.changed-files.outputs.any_changed == 'true' && success() && steps.eslint-cache-restore.outputs.cache-hit != 'true' uses: actions/cache/save@668228422ae6a00e4ad889ee87cd7109ec5666a7 # v5.0.4 with: - path: web/.eslintcache + path: .eslintcache key: ${{ steps.eslint-cache-restore.outputs.cache-primary-key }} superlinter: diff --git a/.gitignore b/.gitignore index 53dea88899..3493a7c756 100644 --- a/.gitignore +++ b/.gitignore @@ -203,6 +203,7 @@ sdks/python-client/dify_client.egg-info .vscode/* !.vscode/launch.json.template +!.vscode/settings.example.json !.vscode/README.md api/.vscode # vscode Code History Extension @@ -242,3 +243,5 @@ scripts/stress-test/reports/ # Code Agent Folder .qoder/* + +.eslintcache diff --git a/.vite-hooks/pre-commit b/.vite-hooks/pre-commit index 13bbd81cf6..d48381bce2 100755 --- a/.vite-hooks/pre-commit +++ b/.vite-hooks/pre-commit @@ -56,44 +56,9 @@ if $api_modified; then fi fi -if $web_modified; then - if $skip_web_checks; then - echo "Git operation in progress, skipping web checks" - exit 0 - fi - - echo "Running ESLint on web module" - - if git diff --cached --quiet -- 'web/**/*.ts' 'web/**/*.tsx'; then - web_ts_modified=false - else - ts_diff_status=$? - if [ $ts_diff_status -eq 1 ]; then - web_ts_modified=true - else - echo "Unable to determine staged TypeScript changes (git exit code: $ts_diff_status)." - exit $ts_diff_status - fi - fi - - cd ./web || exit 1 - vp staged - - if $web_ts_modified; then - echo "Running TypeScript type-check:tsgo" - if ! npm run type-check:tsgo; then - echo "Type check failed. Please run 'npm run type-check:tsgo' to fix the errors." - exit 1 - fi - else - echo "No staged TypeScript changes detected, skipping type-check:tsgo" - fi - - echo "Running knip" - if ! npm run knip; then - echo "Knip check failed. Please run 'npm run knip' to fix the errors." 
- exit 1 - fi - - cd ../ +if $skip_web_checks; then + echo "Git operation in progress, skipping web checks" + exit 0 fi + +vp staged diff --git a/.vscode/launch.json.template b/.vscode/launch.json.template index c3e2c50c52..2611b75c6c 100644 --- a/.vscode/launch.json.template +++ b/.vscode/launch.json.template @@ -2,21 +2,10 @@ "version": "0.2.0", "configurations": [ { - "name": "Python: Flask API", + "name": "Python: API (gevent)", "type": "debugpy", "request": "launch", - "module": "flask", - "env": { - "FLASK_APP": "app.py", - "FLASK_ENV": "development" - }, - "args": [ - "run", - "--host=0.0.0.0", - "--port=5001", - "--no-debugger", - "--no-reload" - ], + "program": "${workspaceFolder}/api/app.py", "jinja": true, "justMyCode": true, "cwd": "${workspaceFolder}/api", diff --git a/web/.vscode/settings.example.json b/.vscode/settings.example.json similarity index 86% rename from web/.vscode/settings.example.json rename to .vscode/settings.example.json index 4b356f5b7a..7cdbc51a3b 100644 --- a/web/.vscode/settings.example.json +++ b/.vscode/settings.example.json @@ -1,12 +1,16 @@ { - // Disable the default formatter, use eslint instead - "prettier.enable": false, - "editor.formatOnSave": false, + "cucumber.features": [ + "e2e/features/**/*.feature", + ], + "cucumber.glue": [ + "e2e/features/**/*.ts", + ], + + "tailwindCSS.experimental.configFile": "web/app/styles/globals.css", // Auto fix "editor.codeActionsOnSave": { "source.fixAll.eslint": "explicit", - "source.organizeImports": "never" }, // Silent the stylistic rules in your IDE, but still auto fix them diff --git a/api/.env.example b/api/.env.example index a04a18944a..7455d4a0e9 100644 --- a/api/.env.example +++ b/api/.env.example @@ -33,6 +33,9 @@ TRIGGER_URL=http://localhost:5001 # The time in seconds after the signature is rejected FILES_ACCESS_TIMEOUT=300 +# Collaboration mode toggle +ENABLE_COLLABORATION_MODE=false + # Access token expiration time in minutes ACCESS_TOKEN_EXPIRE_MINUTES=60 @@ -57,6 +60,9 @@ REDIS_SSL_CERTFILE= REDIS_SSL_KEYFILE= # Path to client private key file for SSL authentication REDIS_DB=0 +# Optional global prefix for Redis keys, topics, streams, and Celery Redis transport artifacts. +# Leave empty to preserve current unprefixed behavior. +REDIS_KEY_PREFIX= # redis Sentinel configuration. 
REDIS_USE_SENTINEL=false diff --git a/api/.ruff.toml b/api/.ruff.toml index 2a825f1ef0..dd78024a02 100644 --- a/api/.ruff.toml +++ b/api/.ruff.toml @@ -69,8 +69,6 @@ ignore = [ "FURB152", # math-constant "UP007", # non-pep604-annotation "UP032", # f-string - "UP045", # non-pep604-annotation-optional - "B005", # strip-with-multi-characters "B006", # mutable-argument-default "B007", # unused-loop-control-variable "B026", # star-arg-unpacking-after-keyword-arg @@ -84,7 +82,6 @@ ignore = [ "SIM102", # collapsible-if "SIM103", # needless-bool "SIM105", # suppressible-exception - "SIM107", # return-in-try-except-finally "SIM108", # if-else-block-instead-of-if-exp "SIM113", # enumerate-for-loop "SIM117", # multiple-with-statements @@ -93,32 +90,22 @@ ignore = [ ] [lint.per-file-ignores] -"__init__.py" = [ - "F401", # unused-import - "F811", # redefined-while-unused -] "configs/*" = [ "N802", # invalid-function-name ] -"graphon/model_runtime/callbacks/base_callback.py" = ["T201"] -"core/workflow/callbacks/workflow_logging_callback.py" = ["T201"] "libs/gmpy2_pkcs10aep_cipher.py" = [ "N803", # invalid-argument-name ] "tests/*" = [ - "F811", # redefined-while-unused "T201", # allow print in tests, "S110", # allow ignoring exceptions in tests code (currently) - ] -"controllers/console/explore/trial.py" = ["TID251"] -"controllers/console/human_input_form.py" = ["TID251"] -"controllers/web/human_input_form.py" = ["TID251"] - -[lint.flake8-tidy-imports] [lint.flake8-tidy-imports.banned-api."flask_restx.reqparse"] msg = "Use Pydantic payload/query models instead of reqparse." [lint.flake8-tidy-imports.banned-api."flask_restx.reqparse.RequestParser"] msg = "Use Pydantic payload/query models instead of reqparse." + +[lint.isort] +known-first-party = ["graphon"] \ No newline at end of file diff --git a/api/.vscode/launch.json.example b/api/.vscode/launch.json.example index 6bdfa2c039..1001559176 100644 --- a/api/.vscode/launch.json.example +++ b/api/.vscode/launch.json.example @@ -3,29 +3,21 @@ "compounds": [ { "name": "Launch Flask and Celery", - "configurations": ["Python: Flask", "Python: Celery"] + "configurations": ["Python: API (gevent)", "Python: Celery"] } ], "configurations": [ { - "name": "Python: Flask", - "consoleName": "Flask", + "name": "Python: API (gevent)", + "consoleName": "API", "type": "debugpy", "request": "launch", "python": "${workspaceFolder}/.venv/bin/python", "cwd": "${workspaceFolder}", "envFile": ".env", - "module": "flask", + "program": "${workspaceFolder}/app.py", "justMyCode": true, - "jinja": true, - "env": { - "FLASK_APP": "app.py", - "GEVENT_SUPPORT": "True" - }, - "args": [ - "run", - "--port=5001" - ] + "jinja": true }, { "name": "Python: Celery", diff --git a/api/app.py b/api/app.py index c018c8a045..e53b037be5 100644 --- a/api/app.py +++ b/api/app.py @@ -1,5 +1,6 @@ from __future__ import annotations +import logging import sys from typing import TYPE_CHECKING, cast @@ -9,17 +10,35 @@ if TYPE_CHECKING: celery: Celery +HOST = "0.0.0.0" +PORT = 5001 +logger = logging.getLogger(__name__) + + def is_db_command() -> bool: if len(sys.argv) > 1 and sys.argv[0].endswith("flask") and sys.argv[1] == "db": return True return False +def log_startup_banner(host: str, port: int) -> None: + debugger_attached = sys.gettrace() is not None + logger.info("Serving Dify API via gevent WebSocket server") + logger.info("Bound to http://%s:%s", host, port) + logger.info("Debugger attached: %s", "on" if debugger_attached else "off") + logger.info("Press CTRL+C to quit") + + # create app 
+flask_app = None +socketio_app = None + if is_db_command(): from app_factory import create_migrations_app app = create_migrations_app() + socketio_app = app + flask_app = app else: # Gunicorn and Celery handle monkey patching automatically in production by # specifying the `gevent` worker class. Manual monkey patching is not required here. @@ -30,8 +49,14 @@ else: from app_factory import create_app - app = create_app() + socketio_app, flask_app = create_app() + app = flask_app celery = cast("Celery", app.extensions["celery"]) if __name__ == "__main__": - app.run(host="0.0.0.0", port=5001) + from gevent import pywsgi + from geventwebsocket.handler import WebSocketHandler # type: ignore[reportMissingTypeStubs] + + log_startup_banner(HOST, PORT) + server = pywsgi.WSGIServer((HOST, PORT), socketio_app, handler_class=WebSocketHandler) + server.serve_forever() diff --git a/api/app_factory.py b/api/app_factory.py index 76838f9925..48e50ceae9 100644 --- a/api/app_factory.py +++ b/api/app_factory.py @@ -1,6 +1,7 @@ import logging import time +import socketio # type: ignore[reportMissingTypeStubs] from flask import request from opentelemetry.trace import get_current_span from opentelemetry.trace.span import INVALID_SPAN_ID, INVALID_TRACE_ID @@ -10,6 +11,7 @@ from contexts.wrapper import RecyclableContextVar from controllers.console.error import UnauthorizedAndForceLogout from core.logging.context import init_request_context from dify_app import DifyApp +from extensions.ext_socketio import sio from services.enterprise.enterprise_service import EnterpriseService from services.feature_service import LicenseStatus @@ -122,14 +124,18 @@ def create_flask_app_with_configs() -> DifyApp: return dify_app -def create_app() -> DifyApp: +def create_app() -> tuple[socketio.WSGIApp, DifyApp]: start_time = time.perf_counter() app = create_flask_app_with_configs() initialize_extensions(app) + + sio.app = app + socketio_app = socketio.WSGIApp(sio, app) + end_time = time.perf_counter() if dify_config.DEBUG: logger.info("Finished create_app (%s ms)", round((end_time - start_time) * 1000, 2)) - return app + return socketio_app, app def initialize_extensions(app: DifyApp): diff --git a/api/configs/feature/__init__.py b/api/configs/feature/__init__.py index d37cff63e9..ae49ae47d0 100644 --- a/api/configs/feature/__init__.py +++ b/api/configs/feature/__init__.py @@ -1274,6 +1274,13 @@ class PositionConfig(BaseSettings): return {item.strip() for item in self.POSITION_TOOL_EXCLUDES.split(",") if item.strip() != ""} +class CollaborationConfig(BaseSettings): + ENABLE_COLLABORATION_MODE: bool = Field( + description="Whether to enable collaboration mode features across the workspace", + default=False, + ) + + class LoginConfig(BaseSettings): ENABLE_EMAIL_CODE_LOGIN: bool = Field( description="whether to enable email code login", @@ -1399,6 +1406,7 @@ class FeatureConfig( WorkflowConfig, WorkflowNodeExecutionConfig, WorkspaceConfig, + CollaborationConfig, LoginConfig, AccountConfig, SwaggerUIConfig, diff --git a/api/configs/middleware/__init__.py b/api/configs/middleware/__init__.py index 817284d26f..c392b8840f 100644 --- a/api/configs/middleware/__init__.py +++ b/api/configs/middleware/__init__.py @@ -160,6 +160,16 @@ class DatabaseConfig(BaseSettings): default="", ) + DB_SESSION_TIMEZONE_OVERRIDE: str = Field( + description=( + "PostgreSQL session timezone override injected via startup options." + " Default is 'UTC' for out-of-the-box consistency." 
+ " Set to empty string to disable app-level timezone injection, for example when using RDS Proxy" + " together with a database-side default timezone." + ), + default="UTC", + ) + @computed_field # type: ignore[prop-decorator] @property def SQLALCHEMY_DATABASE_URI_SCHEME(self) -> str: @@ -227,12 +237,13 @@ class DatabaseConfig(BaseSettings): connect_args: dict[str, str] = {} # Use the dynamic SQLALCHEMY_DATABASE_URI_SCHEME property if self.SQLALCHEMY_DATABASE_URI_SCHEME.startswith("postgresql"): - timezone_opt = "-c timezone=UTC" - if options: - merged_options = f"{options} {timezone_opt}" - else: - merged_options = timezone_opt - connect_args = {"options": merged_options} + merged_options = options.strip() + session_timezone_override = self.DB_SESSION_TIMEZONE_OVERRIDE.strip() + if session_timezone_override: + timezone_opt = f"-c timezone={session_timezone_override}" + merged_options = f"{merged_options} {timezone_opt}".strip() if merged_options else timezone_opt + if merged_options: + connect_args = {"options": merged_options} result: SQLAlchemyEngineOptionsDict = { "pool_size": self.SQLALCHEMY_POOL_SIZE, diff --git a/api/configs/middleware/cache/redis_config.py b/api/configs/middleware/cache/redis_config.py index b49275758a..2def0a0d4e 100644 --- a/api/configs/middleware/cache/redis_config.py +++ b/api/configs/middleware/cache/redis_config.py @@ -32,6 +32,11 @@ class RedisConfig(BaseSettings): default=0, ) + REDIS_KEY_PREFIX: str = Field( + description="Optional global prefix for Redis keys, topics, and transport artifacts", + default="", + ) + REDIS_USE_SSL: bool = Field( description="Enable SSL/TLS for the Redis connection", default=False, diff --git a/api/configs/middleware/vdb/iris_config.py b/api/configs/middleware/vdb/iris_config.py index c532d191c3..f5993dd8f8 100644 --- a/api/configs/middleware/vdb/iris_config.py +++ b/api/configs/middleware/vdb/iris_config.py @@ -1,5 +1,7 @@ """Configuration for InterSystems IRIS vector database.""" +from typing import Any + from pydantic import Field, PositiveInt, model_validator from pydantic_settings import BaseSettings @@ -64,7 +66,7 @@ class IrisVectorConfig(BaseSettings): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict) -> dict: + def validate_config(cls, values: dict[str, Any]) -> dict[str, Any]: """Validate IRIS configuration values. 
Args: diff --git a/api/controllers/common/fields.py b/api/controllers/common/fields.py index 4fe3fc9062..8e665c1386 100644 --- a/api/controllers/common/fields.py +++ b/api/controllers/common/fields.py @@ -2,9 +2,9 @@ from __future__ import annotations from typing import Any -from graphon.file import helpers as file_helpers from pydantic import BaseModel, ConfigDict, computed_field +from graphon.file import helpers as file_helpers from models.model import IconType type JSONValue = str | int | float | bool | None | dict[str, Any] | list[Any] diff --git a/api/controllers/console/__init__.py b/api/controllers/console/__init__.py index d624b10b22..980e828945 100644 --- a/api/controllers/console/__init__.py +++ b/api/controllers/console/__init__.py @@ -65,6 +65,7 @@ from .app import ( statistic, workflow, workflow_app_log, + workflow_comment, workflow_draft_variable, workflow_run, workflow_statistic, @@ -116,6 +117,7 @@ from .explore import ( saved_message, trial, ) +from .socketio import workflow as socketio_workflow # pyright: ignore[reportUnusedImport] # Import tag controllers from .tag import tags @@ -201,6 +203,7 @@ __all__ = [ "saved_message", "setup", "site", + "socketio_workflow", "spec", "statistic", "tags", @@ -211,6 +214,7 @@ __all__ = [ "website", "workflow", "workflow_app_log", + "workflow_comment", "workflow_draft_variable", "workflow_run", "workflow_statistic", diff --git a/api/controllers/console/app/app.py b/api/controllers/console/app/app.py index 2018f60215..051d08aa36 100644 --- a/api/controllers/console/app/app.py +++ b/api/controllers/console/app/app.py @@ -5,11 +5,9 @@ from typing import Any, Literal from flask import request from flask_restx import Resource -from graphon.enums import WorkflowExecutionStatus -from graphon.file import helpers as file_helpers from pydantic import AliasChoices, BaseModel, Field, computed_field, field_validator from sqlalchemy import select -from sqlalchemy.orm import sessionmaker +from sqlalchemy.orm import Session from werkzeug.exceptions import BadRequest from controllers.common.helpers import FileInfo @@ -31,13 +29,15 @@ from core.rag.retrieval.retrieval_methods import RetrievalMethod from core.trigger.constants import TRIGGER_NODE_TYPES from extensions.ext_database import db from fields.base import ResponseModel +from graphon.enums import WorkflowExecutionStatus +from libs.helper import build_icon_url from libs.login import current_account_with_tenant, login_required from models import App, DatasetPermissionEnum, Workflow from models.model import IconType from services.app_dsl_service import AppDslService from services.app_service import AppService from services.enterprise.enterprise_service import EnterpriseService -from services.entities.dsl_entities import ImportMode +from services.entities.dsl_entities import ImportMode, ImportStatus from services.entities.knowledge_entities.knowledge_entities import ( DataSource, InfoList, @@ -161,15 +161,6 @@ def _to_timestamp(value: datetime | int | None) -> int | None: return value -def _build_icon_url(icon_type: str | IconType | None, icon: str | None) -> str | None: - if icon is None or icon_type is None: - return None - icon_type_value = icon_type.value if isinstance(icon_type, IconType) else str(icon_type) - if icon_type_value.lower() != IconType.IMAGE: - return None - return file_helpers.get_signed_file_url(icon) - - class Tag(ResponseModel): id: str name: str @@ -292,7 +283,7 @@ class Site(ResponseModel): @computed_field(return_type=str | None) # type: ignore @property def icon_url(self) -> 
str | None: - return _build_icon_url(self.icon_type, self.icon) + return build_icon_url(self.icon_type, self.icon) @field_validator("icon_type", mode="before") @classmethod @@ -342,7 +333,7 @@ class AppPartial(ResponseModel): @computed_field(return_type=str | None) # type: ignore @property def icon_url(self) -> str | None: - return _build_icon_url(self.icon_type, self.icon) + return build_icon_url(self.icon_type, self.icon) @field_validator("created_at", "updated_at", mode="before") @classmethod @@ -390,7 +381,7 @@ class AppDetailWithSite(AppDetail): @computed_field(return_type=str | None) # type: ignore @property def icon_url(self) -> str | None: - return _build_icon_url(self.icon_type, self.icon) + return build_icon_url(self.icon_type, self.icon) class AppPagination(ResponseModel): @@ -632,7 +623,7 @@ class AppCopyApi(Resource): args = CopyAppPayload.model_validate(console_ns.payload or {}) - with sessionmaker(db.engine, expire_on_commit=False).begin() as session: + with Session(db.engine, expire_on_commit=False) as session: import_service = AppDslService(session) yaml_content = import_service.export_dsl(app_model=app_model, include_secret=True) result = import_service.import_app( @@ -645,6 +636,13 @@ class AppCopyApi(Resource): icon=args.icon, icon_background=args.icon_background, ) + if result.status == ImportStatus.FAILED: + session.rollback() + return result.model_dump(mode="json"), 400 + if result.status == ImportStatus.PENDING: + session.rollback() + return result.model_dump(mode="json"), 202 + session.commit() # Inherit web app permission from original app if result.app_id and FeatureService.get_system_features().webapp_auth.enabled: diff --git a/api/controllers/console/app/app_import.py b/api/controllers/console/app/app_import.py index 80bd7d1d8d..e91dc9cfe5 100644 --- a/api/controllers/console/app/app_import.py +++ b/api/controllers/console/app/app_import.py @@ -1,6 +1,6 @@ from flask_restx import Resource from pydantic import BaseModel, Field -from sqlalchemy.orm import sessionmaker +from sqlalchemy.orm import Session from controllers.common.schema import register_schema_models from controllers.console.app.wraps import get_app_model @@ -52,8 +52,9 @@ class AppImportApi(Resource): current_user, _ = current_account_with_tenant() args = AppImportPayload.model_validate(console_ns.payload) - # Create service with session - with sessionmaker(db.engine).begin() as session: + # AppDslService performs internal commits for some creation paths, so use a plain + # Session here instead of nesting it inside sessionmaker(...).begin(). 
+ with Session(db.engine, expire_on_commit=False) as session: import_service = AppDslService(session) # Import app account = current_user @@ -69,6 +70,10 @@ class AppImportApi(Resource): icon_background=args.icon_background, app_id=args.app_id, ) + if result.status == ImportStatus.FAILED: + session.rollback() + else: + session.commit() if result.app_id and FeatureService.get_system_features().webapp_auth.enabled: # update web app setting as private EnterpriseService.WebAppAuth.update_app_access_mode(result.app_id, "private") @@ -95,12 +100,15 @@ class AppImportConfirmApi(Resource): # Check user role first current_user, _ = current_account_with_tenant() - # Create service with session - with sessionmaker(db.engine).begin() as session: + with Session(db.engine, expire_on_commit=False) as session: import_service = AppDslService(session) # Confirm import account = current_user result = import_service.confirm_import(import_id=import_id, account=account) + if result.status == ImportStatus.FAILED: + session.rollback() + else: + session.commit() # Return appropriate status code based on result if result.status == ImportStatus.FAILED: @@ -117,7 +125,7 @@ class AppImportCheckDependenciesApi(Resource): @account_initialization_required @edit_permission_required def get(self, app_model: App): - with sessionmaker(db.engine).begin() as session: + with Session(db.engine, expire_on_commit=False) as session: import_service = AppDslService(session) result = import_service.check_dependencies(app_model=app_model) diff --git a/api/controllers/console/app/audio.py b/api/controllers/console/app/audio.py index 78ddb904e1..91fbe4a85a 100644 --- a/api/controllers/console/app/audio.py +++ b/api/controllers/console/app/audio.py @@ -2,7 +2,6 @@ import logging from flask import request from flask_restx import Resource, fields -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field from werkzeug.exceptions import InternalServerError @@ -23,6 +22,7 @@ from controllers.console.app.error import ( from controllers.console.app.wraps import get_app_model from controllers.console.wraps import account_initialization_required, setup_required from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from libs.login import login_required from models import App, AppMode from services.audio_service import AudioService diff --git a/api/controllers/console/app/completion.py b/api/controllers/console/app/completion.py index d83925d173..fe274e4c9a 100644 --- a/api/controllers/console/app/completion.py +++ b/api/controllers/console/app/completion.py @@ -3,7 +3,6 @@ from typing import Any, Literal from flask import request from flask_restx import Resource -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field, field_validator from werkzeug.exceptions import InternalServerError, NotFound @@ -27,6 +26,7 @@ from core.errors.error import ( QuotaExceededError, ) from core.helper.trace_id_helper import get_external_trace_id +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.helper import uuid_value from libs.login import current_user, login_required diff --git a/api/controllers/console/app/conversation.py b/api/controllers/console/app/conversation.py index d329d22309..b2b1049f0c 100644 --- a/api/controllers/console/app/conversation.py +++ b/api/controllers/console/app/conversation.py @@ -2,20 +2,37 @@ from 
typing import Literal import sqlalchemy as sa from flask import abort, request -from flask_restx import Resource, fields, marshal_with +from flask_restx import Resource from pydantic import BaseModel, Field, field_validator from sqlalchemy import func, or_ from sqlalchemy.orm import selectinload from werkzeug.exceptions import NotFound +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.app.wraps import get_app_model from controllers.console.wraps import account_initialization_required, edit_permission_required, setup_required from core.app.entities.app_invoke_entities import InvokeFrom from extensions.ext_database import db -from fields.raws import FilesContainedField +from fields.conversation_fields import ( + Conversation as ConversationResponse, +) +from fields.conversation_fields import ( + ConversationDetail as ConversationDetailResponse, +) +from fields.conversation_fields import ( + ConversationMessageDetail as ConversationMessageDetailResponse, +) +from fields.conversation_fields import ( + ConversationPagination as ConversationPaginationResponse, +) +from fields.conversation_fields import ( + ConversationWithSummaryPagination as ConversationWithSummaryPaginationResponse, +) +from fields.conversation_fields import ( + ResultResponse, +) from libs.datetime_utils import naive_utc_now, parse_time_range -from libs.helper import TimestampField from libs.login import current_account_with_tenant, login_required from models import Conversation, EndUser, Message, MessageAnnotation from models.model import AppMode @@ -62,267 +79,16 @@ console_ns.schema_model( ChatConversationQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), ) -# Register models for flask_restx to avoid dict type issues in Swagger -# Register in dependency order: base models first, then dependent models - -# Base models -simple_account_model = console_ns.model( - "SimpleAccount", - { - "id": fields.String, - "name": fields.String, - "email": fields.String, - }, -) - -feedback_stat_model = console_ns.model( - "FeedbackStat", - { - "like": fields.Integer, - "dislike": fields.Integer, - }, -) - -status_count_model = console_ns.model( - "StatusCount", - { - "success": fields.Integer, - "failed": fields.Integer, - "partial_success": fields.Integer, - "paused": fields.Integer, - }, -) - -message_file_model = console_ns.model( - "MessageFile", - { - "id": fields.String, - "filename": fields.String, - "type": fields.String, - "url": fields.String, - "mime_type": fields.String, - "size": fields.Integer, - "transfer_method": fields.String, - "belongs_to": fields.String(default="user"), - "upload_file_id": fields.String(default=None), - }, -) - -agent_thought_model = console_ns.model( - "AgentThought", - { - "id": fields.String, - "chain_id": fields.String, - "message_id": fields.String, - "position": fields.Integer, - "thought": fields.String, - "tool": fields.String, - "tool_labels": fields.Raw, - "tool_input": fields.String, - "created_at": TimestampField, - "observation": fields.String, - "files": fields.List(fields.String), - }, -) - -simple_model_config_model = console_ns.model( - "SimpleModelConfig", - { - "model": fields.Raw(attribute="model_dict"), - "pre_prompt": fields.String, - }, -) - -model_config_model = console_ns.model( - "ModelConfig", - { - "opening_statement": fields.String, - "suggested_questions": fields.Raw, - "model": fields.Raw, - "user_input_form": fields.Raw, - "pre_prompt": fields.String, - "agent_mode": 
fields.Raw, - }, -) - -# Models that depend on simple_account_model -feedback_model = console_ns.model( - "Feedback", - { - "rating": fields.String, - "content": fields.String, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_account": fields.Nested(simple_account_model, allow_null=True), - }, -) - -annotation_model = console_ns.model( - "Annotation", - { - "id": fields.String, - "question": fields.String, - "content": fields.String, - "account": fields.Nested(simple_account_model, allow_null=True), - "created_at": TimestampField, - }, -) - -annotation_hit_history_model = console_ns.model( - "AnnotationHitHistory", - { - "annotation_id": fields.String(attribute="id"), - "annotation_create_account": fields.Nested(simple_account_model, allow_null=True), - "created_at": TimestampField, - }, -) - - -class MessageTextField(fields.Raw): - def format(self, value): - return value[0]["text"] if value else "" - - -# Simple message detail model -simple_message_detail_model = console_ns.model( - "SimpleMessageDetail", - { - "inputs": FilesContainedField, - "query": fields.String, - "message": MessageTextField, - "answer": fields.String, - }, -) - -# Message detail model that depends on multiple models -message_detail_model = console_ns.model( - "MessageDetail", - { - "id": fields.String, - "conversation_id": fields.String, - "inputs": FilesContainedField, - "query": fields.String, - "message": fields.Raw, - "message_tokens": fields.Integer, - "answer": fields.String(attribute="re_sign_file_url_answer"), - "answer_tokens": fields.Integer, - "provider_response_latency": fields.Float, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_account_id": fields.String, - "feedbacks": fields.List(fields.Nested(feedback_model)), - "workflow_run_id": fields.String, - "annotation": fields.Nested(annotation_model, allow_null=True), - "annotation_hit_history": fields.Nested(annotation_hit_history_model, allow_null=True), - "created_at": TimestampField, - "agent_thoughts": fields.List(fields.Nested(agent_thought_model)), - "message_files": fields.List(fields.Nested(message_file_model)), - "metadata": fields.Raw(attribute="message_metadata_dict"), - "status": fields.String, - "error": fields.String, - "parent_message_id": fields.String, - }, -) - -# Conversation models -conversation_fields_model = console_ns.model( - "Conversation", - { - "id": fields.String, - "status": fields.String, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_end_user_session_id": fields.String(), - "from_account_id": fields.String, - "from_account_name": fields.String, - "read_at": TimestampField, - "created_at": TimestampField, - "updated_at": TimestampField, - "annotation": fields.Nested(annotation_model, allow_null=True), - "model_config": fields.Nested(simple_model_config_model), - "user_feedback_stats": fields.Nested(feedback_stat_model), - "admin_feedback_stats": fields.Nested(feedback_stat_model), - "message": fields.Nested(simple_message_detail_model, attribute="first_message"), - }, -) - -conversation_pagination_model = console_ns.model( - "ConversationPagination", - { - "page": fields.Integer, - "limit": fields.Integer(attribute="per_page"), - "total": fields.Integer, - "has_more": fields.Boolean(attribute="has_next"), - "data": fields.List(fields.Nested(conversation_fields_model), attribute="items"), - }, -) - -conversation_message_detail_model = console_ns.model( - "ConversationMessageDetail", - { - "id": fields.String, - "status": fields.String, - 
"from_source": fields.String, - "from_end_user_id": fields.String, - "from_account_id": fields.String, - "created_at": TimestampField, - "model_config": fields.Nested(model_config_model), - "message": fields.Nested(message_detail_model, attribute="first_message"), - }, -) - -conversation_with_summary_model = console_ns.model( - "ConversationWithSummary", - { - "id": fields.String, - "status": fields.String, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_end_user_session_id": fields.String, - "from_account_id": fields.String, - "from_account_name": fields.String, - "name": fields.String, - "summary": fields.String(attribute="summary_or_query"), - "read_at": TimestampField, - "created_at": TimestampField, - "updated_at": TimestampField, - "annotated": fields.Boolean, - "model_config": fields.Nested(simple_model_config_model), - "message_count": fields.Integer, - "user_feedback_stats": fields.Nested(feedback_stat_model), - "admin_feedback_stats": fields.Nested(feedback_stat_model), - "status_count": fields.Nested(status_count_model), - }, -) - -conversation_with_summary_pagination_model = console_ns.model( - "ConversationWithSummaryPagination", - { - "page": fields.Integer, - "limit": fields.Integer(attribute="per_page"), - "total": fields.Integer, - "has_more": fields.Boolean(attribute="has_next"), - "data": fields.List(fields.Nested(conversation_with_summary_model), attribute="items"), - }, -) - -conversation_detail_model = console_ns.model( - "ConversationDetail", - { - "id": fields.String, - "status": fields.String, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_account_id": fields.String, - "created_at": TimestampField, - "updated_at": TimestampField, - "annotated": fields.Boolean, - "introduction": fields.String, - "model_config": fields.Nested(model_config_model), - "message_count": fields.Integer, - "user_feedback_stats": fields.Nested(feedback_stat_model), - "admin_feedback_stats": fields.Nested(feedback_stat_model), - }, +register_schema_models( + console_ns, + CompletionConversationQuery, + ChatConversationQuery, + ConversationResponse, + ConversationPaginationResponse, + ConversationMessageDetailResponse, + ConversationWithSummaryPaginationResponse, + ConversationDetailResponse, + ResultResponse, ) @@ -332,13 +98,12 @@ class CompletionConversationApi(Resource): @console_ns.doc(description="Get completion conversations with pagination and filtering") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[CompletionConversationQuery.__name__]) - @console_ns.response(200, "Success", conversation_pagination_model) + @console_ns.response(200, "Success", console_ns.models[ConversationPaginationResponse.__name__]) @console_ns.response(403, "Insufficient permissions") @setup_required @login_required @account_initialization_required @get_app_model(mode=AppMode.COMPLETION) - @marshal_with(conversation_pagination_model) @edit_permission_required def get(self, app_model): current_user, _ = current_account_with_tenant() @@ -394,7 +159,9 @@ class CompletionConversationApi(Resource): conversations = db.paginate(query, page=args.page, per_page=args.limit, error_out=False) - return conversations + return ConversationPaginationResponse.model_validate(conversations, from_attributes=True).model_dump( + mode="json" + ) @console_ns.route("/apps//completion-conversations/") @@ -402,19 +169,19 @@ class CompletionConversationDetailApi(Resource): @console_ns.doc("get_completion_conversation") 
@console_ns.doc(description="Get completion conversation details with messages") @console_ns.doc(params={"app_id": "Application ID", "conversation_id": "Conversation ID"}) - @console_ns.response(200, "Success", conversation_message_detail_model) + @console_ns.response(200, "Success", console_ns.models[ConversationMessageDetailResponse.__name__]) @console_ns.response(403, "Insufficient permissions") @console_ns.response(404, "Conversation not found") @setup_required @login_required @account_initialization_required @get_app_model(mode=AppMode.COMPLETION) - @marshal_with(conversation_message_detail_model) @edit_permission_required def get(self, app_model, conversation_id): conversation_id = str(conversation_id) - - return _get_conversation(app_model, conversation_id) + return ConversationMessageDetailResponse.model_validate( + _get_conversation(app_model, conversation_id), from_attributes=True + ).model_dump(mode="json") @console_ns.doc("delete_completion_conversation") @console_ns.doc(description="Delete a completion conversation") @@ -436,7 +203,7 @@ class CompletionConversationDetailApi(Resource): except ConversationNotExistsError: raise NotFound("Conversation Not Exists.") - return {"result": "success"}, 204 + return ResultResponse(result="success").model_dump(mode="json"), 204 @console_ns.route("/apps//chat-conversations") @@ -445,13 +212,12 @@ class ChatConversationApi(Resource): @console_ns.doc(description="Get chat conversations with pagination, filtering and summary") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[ChatConversationQuery.__name__]) - @console_ns.response(200, "Success", conversation_with_summary_pagination_model) + @console_ns.response(200, "Success", console_ns.models[ConversationWithSummaryPaginationResponse.__name__]) @console_ns.response(403, "Insufficient permissions") @setup_required @login_required @account_initialization_required @get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]) - @marshal_with(conversation_with_summary_pagination_model) @edit_permission_required def get(self, app_model): current_user, _ = current_account_with_tenant() @@ -546,7 +312,9 @@ class ChatConversationApi(Resource): conversations = db.paginate(query, page=args.page, per_page=args.limit, error_out=False) - return conversations + return ConversationWithSummaryPaginationResponse.model_validate(conversations, from_attributes=True).model_dump( + mode="json" + ) @console_ns.route("/apps//chat-conversations/") @@ -554,19 +322,19 @@ class ChatConversationDetailApi(Resource): @console_ns.doc("get_chat_conversation") @console_ns.doc(description="Get chat conversation details") @console_ns.doc(params={"app_id": "Application ID", "conversation_id": "Conversation ID"}) - @console_ns.response(200, "Success", conversation_detail_model) + @console_ns.response(200, "Success", console_ns.models[ConversationDetailResponse.__name__]) @console_ns.response(403, "Insufficient permissions") @console_ns.response(404, "Conversation not found") @setup_required @login_required @account_initialization_required @get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]) - @marshal_with(conversation_detail_model) @edit_permission_required def get(self, app_model, conversation_id): conversation_id = str(conversation_id) - - return _get_conversation(app_model, conversation_id) + return ConversationDetailResponse.model_validate( + _get_conversation(app_model, conversation_id), from_attributes=True + ).model_dump(mode="json") 
@console_ns.doc("delete_chat_conversation") @console_ns.doc(description="Delete a chat conversation") @@ -588,7 +356,7 @@ class ChatConversationDetailApi(Resource): except ConversationNotExistsError: raise NotFound("Conversation Not Exists.") - return {"result": "success"}, 204 + return ResultResponse(result="success").model_dump(mode="json"), 204 def _get_conversation(app_model, conversation_id): diff --git a/api/controllers/console/app/conversation_variables.py b/api/controllers/console/app/conversation_variables.py index 369c26a80c..cead33d14f 100644 --- a/api/controllers/console/app/conversation_variables.py +++ b/api/controllers/console/app/conversation_variables.py @@ -1,44 +1,86 @@ +from __future__ import annotations + +from datetime import datetime +from typing import Any + from flask import request -from flask_restx import Resource, fields, marshal_with -from pydantic import BaseModel, Field +from flask_restx import Resource +from pydantic import BaseModel, Field, field_validator from sqlalchemy import select from sqlalchemy.orm import sessionmaker +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.app.wraps import get_app_model from controllers.console.wraps import account_initialization_required, setup_required from extensions.ext_database import db -from fields.conversation_variable_fields import ( - conversation_variable_fields, - paginated_conversation_variable_fields, -) +from fields._value_type_serializer import serialize_value_type +from fields.base import ResponseModel from libs.login import login_required from models import ConversationVariable from models.model import AppMode -DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" - class ConversationVariablesQuery(BaseModel): conversation_id: str = Field(..., description="Conversation ID to filter variables") -console_ns.schema_model( - ConversationVariablesQuery.__name__, - ConversationVariablesQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), -) +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value -# Register models for flask_restx to avoid dict type issues in Swagger -# Register base model first -conversation_variable_model = console_ns.model("ConversationVariable", conversation_variable_fields) -# For nested models, need to replace nested dict with registered model -paginated_conversation_variable_fields_copy = paginated_conversation_variable_fields.copy() -paginated_conversation_variable_fields_copy["data"] = fields.List( - fields.Nested(conversation_variable_model), attribute="data" -) -paginated_conversation_variable_model = console_ns.model( - "PaginatedConversationVariable", paginated_conversation_variable_fields_copy +class ConversationVariableResponse(ResponseModel): + id: str + name: str + value_type: str + value: str | None = None + description: str | None = None + created_at: int | None = None + updated_at: int | None = None + + @field_validator("value_type", mode="before") + @classmethod + def _normalize_value_type(cls, value: Any) -> str: + exposed_type = getattr(value, "exposed_type", None) + if callable(exposed_type): + return str(exposed_type().value) + if isinstance(value, str): + return value + try: + return serialize_value_type(value) + except Exception: + return serialize_value_type({"value_type": value}) + + @field_validator("value", mode="before") + @classmethod + def _normalize_value(cls, value: Any 
| None) -> str | None: + if value is None: + return None + if isinstance(value, str): + return value + return str(value) + + @field_validator("created_at", "updated_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class PaginatedConversationVariableResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[ConversationVariableResponse] + + +register_schema_models( + console_ns, + ConversationVariablesQuery, + ConversationVariableResponse, + PaginatedConversationVariableResponse, ) @@ -48,12 +90,15 @@ class ConversationVariablesApi(Resource): @console_ns.doc(description="Get conversation variables for an application") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[ConversationVariablesQuery.__name__]) - @console_ns.response(200, "Conversation variables retrieved successfully", paginated_conversation_variable_model) + @console_ns.response( + 200, + "Conversation variables retrieved successfully", + console_ns.models[PaginatedConversationVariableResponse.__name__], + ) @setup_required @login_required @account_initialization_required @get_app_model(mode=AppMode.ADVANCED_CHAT) - @marshal_with(paginated_conversation_variable_model) def get(self, app_model): args = ConversationVariablesQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore @@ -72,17 +117,22 @@ class ConversationVariablesApi(Resource): with sessionmaker(db.engine, expire_on_commit=False).begin() as session: rows = session.scalars(stmt).all() - return { - "page": page, - "limit": page_size, - "total": len(rows), - "has_more": False, - "data": [ - { - "created_at": row.created_at, - "updated_at": row.updated_at, - **row.to_variable().model_dump(), - } - for row in rows - ], - } + response = PaginatedConversationVariableResponse.model_validate( + { + "page": page, + "limit": page_size, + "total": len(rows), + "has_more": False, + "data": [ + ConversationVariableResponse.model_validate( + { + "created_at": row.created_at, + "updated_at": row.updated_at, + **row.to_variable().model_dump(), + } + ) + for row in rows + ], + } + ) + return response.model_dump(mode="json") diff --git a/api/controllers/console/app/generator.py b/api/controllers/console/app/generator.py index 7101d5df7b..c720a5e074 100644 --- a/api/controllers/console/app/generator.py +++ b/api/controllers/console/app/generator.py @@ -1,7 +1,6 @@ from collections.abc import Sequence from flask_restx import Resource -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field from controllers.console import console_ns @@ -20,6 +19,7 @@ from core.helper.code_executor.python3.python3_code_provider import Python3CodeP from core.llm_generator.entities import RuleCodeGeneratePayload, RuleGeneratePayload, RuleStructuredOutputPayload from core.llm_generator.llm_generator import LLMGenerator from extensions.ext_database import db +from graphon.model_runtime.errors.invoke import InvokeError from libs.login import current_account_with_tenant, login_required from models import App from services.workflow_service import WorkflowService diff --git a/api/controllers/console/app/mcp_server.py b/api/controllers/console/app/mcp_server.py index 9454d28bcf..d517f695b8 100644 --- a/api/controllers/console/app/mcp_server.py +++ b/api/controllers/console/app/mcp_server.py @@ -18,37 +18,37 @@ from models.enums import AppMCPServerStatus from models.model import AppMCPServer 
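Both `conversation_variables.py` above and `mcp_server.py` below carry the same private `_to_timestamp` helper wired into `mode="before"` validators, so that ORM rows holding `datetime` objects serialize to the integer timestamps the Swagger schema promises. The sketch below is illustrative only: `ResponseModel` is stubbed as a plain Pydantic `BaseModel`, and the field names are invented for the example.

```python
# Illustrative sketch only: the real base class is fields.base.ResponseModel,
# stubbed here as a plain pydantic.BaseModel so the snippet runs standalone.
from datetime import datetime, timezone

from pydantic import BaseModel, field_validator


def _to_timestamp(value: datetime | int | None) -> int | None:
    # ORM rows expose datetime objects; the documented schema expects Unix seconds.
    if isinstance(value, datetime):
        return int(value.timestamp())
    return value


class VariableOut(BaseModel):
    name: str
    created_at: int | None = None

    @field_validator("created_at", mode="before")
    @classmethod
    def _normalize_timestamp(cls, value: datetime | int | None) -> int | None:
        return _to_timestamp(value)


row = {"name": "topic", "created_at": datetime(2024, 1, 1, tzinfo=timezone.utc)}
print(VariableOut.model_validate(row).model_dump(mode="json"))
# {'name': 'topic', 'created_at': 1704067200}
```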
-def _to_timestamp(value: datetime | int | None) -> int | None: - if isinstance(value, datetime): - return int(value.timestamp()) - return value - - class MCPServerCreatePayload(BaseModel): description: str | None = Field(default=None, description="Server description") - parameters: dict = Field(..., description="Server parameters configuration") + parameters: dict[str, Any] = Field(..., description="Server parameters configuration") class MCPServerUpdatePayload(BaseModel): id: str = Field(..., description="Server ID") description: str | None = Field(default=None, description="Server description") - parameters: dict = Field(..., description="Server parameters configuration") + parameters: dict[str, Any] = Field(..., description="Server parameters configuration") status: str | None = Field(default=None, description="Server status") +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + class AppMCPServerResponse(ResponseModel): id: str name: str server_code: str description: str - status: str + status: AppMCPServerStatus parameters: dict[str, Any] | list[Any] | str created_at: int | None = None updated_at: int | None = None @field_validator("parameters", mode="before") @classmethod - def _parse_json_string(cls, value: Any) -> Any: + def _normalize_parameters(cls, value: Any) -> Any: if isinstance(value, str): try: return json.loads(value) @@ -70,7 +70,9 @@ class AppMCPServerController(Resource): @console_ns.doc("get_app_mcp_server") @console_ns.doc(description="Get MCP server configuration for an application") @console_ns.doc(params={"app_id": "Application ID"}) - @console_ns.response(200, "Server configuration", console_ns.models[AppMCPServerResponse.__name__]) + @console_ns.response( + 200, "MCP server configuration retrieved successfully", console_ns.models[AppMCPServerResponse.__name__] + ) @login_required @account_initialization_required @setup_required @@ -85,7 +87,9 @@ class AppMCPServerController(Resource): @console_ns.doc(description="Create MCP server configuration for an application") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[MCPServerCreatePayload.__name__]) - @console_ns.response(200, "Server created", console_ns.models[AppMCPServerResponse.__name__]) + @console_ns.response( + 201, "MCP server configuration created successfully", console_ns.models[AppMCPServerResponse.__name__] + ) @console_ns.response(403, "Insufficient permissions") @account_initialization_required @get_app_model @@ -111,13 +115,15 @@ class AppMCPServerController(Resource): ) db.session.add(server) db.session.commit() - return AppMCPServerResponse.model_validate(server, from_attributes=True).model_dump(mode="json") + return AppMCPServerResponse.model_validate(server, from_attributes=True).model_dump(mode="json"), 201 @console_ns.doc("update_app_mcp_server") @console_ns.doc(description="Update MCP server configuration for an application") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[MCPServerUpdatePayload.__name__]) - @console_ns.response(200, "Server updated", console_ns.models[AppMCPServerResponse.__name__]) + @console_ns.response( + 200, "MCP server configuration updated successfully", console_ns.models[AppMCPServerResponse.__name__] + ) @console_ns.response(403, "Insufficient permissions") @console_ns.response(404, "Server not found") @get_app_model @@ -154,7 +160,7 @@ class AppMCPServerRefreshController(Resource): 
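The `_normalize_parameters` validator in `AppMCPServerResponse` above accepts either the JSON string stored in the database column or an already-decoded value. A small standalone sketch of that behavior follows; the fallback on parse failure is an assumption, since the `except` branch sits outside the visible hunk.

```python
# Standalone sketch of the parameters normalizer; the except-branch behaviour
# (returning the raw string when it is not valid JSON) is an assumption.
import json
from typing import Any

from pydantic import BaseModel, field_validator


class ServerOut(BaseModel):
    parameters: dict[str, Any] | list[Any] | str

    @field_validator("parameters", mode="before")
    @classmethod
    def _normalize_parameters(cls, value: Any) -> Any:
        if isinstance(value, str):
            try:
                return json.loads(value)
            except json.JSONDecodeError:
                return value
        return value


print(ServerOut(parameters='{"tool": "search"}').parameters)  # {'tool': 'search'}
print(ServerOut(parameters={"tool": "search"}).parameters)    # {'tool': 'search'}
print(ServerOut(parameters="not json").parameters)            # 'not json'
```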
@console_ns.doc("refresh_app_mcp_server") @console_ns.doc(description="Refresh MCP server configuration and regenerate server code") @console_ns.doc(params={"server_id": "Server ID"}) - @console_ns.response(200, "Server refreshed", console_ns.models[AppMCPServerResponse.__name__]) + @console_ns.response(200, "MCP server refreshed successfully", console_ns.models[AppMCPServerResponse.__name__]) @console_ns.response(403, "Insufficient permissions") @console_ns.response(404, "Server not found") @setup_required diff --git a/api/controllers/console/app/message.py b/api/controllers/console/app/message.py index 5a19544eab..44e19b57db 100644 --- a/api/controllers/console/app/message.py +++ b/api/controllers/console/app/message.py @@ -1,9 +1,9 @@ import logging +from datetime import datetime from typing import Literal from flask import request -from flask_restx import Resource, fields, marshal_with -from graphon.model_runtime.errors.invoke import InvokeError +from flask_restx import Resource from pydantic import BaseModel, Field, field_validator from sqlalchemy import exists, func, select from werkzeug.exceptions import InternalServerError, NotFound @@ -25,10 +25,22 @@ from controllers.console.wraps import ( setup_required, ) from core.app.entities.app_invoke_entities import InvokeFrom +from core.entities.execution_extra_content import ExecutionExtraContentDomainModel from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError from extensions.ext_database import db -from fields.raws import FilesContainedField -from libs.helper import TimestampField, uuid_value +from fields.base import ResponseModel +from fields.conversation_fields import ( + AgentThought, + ConversationAnnotation, + ConversationAnnotationHitHistory, + Feedback, + JSONValue, + MessageFile, + format_files_contained, + to_timestamp, +) +from graphon.model_runtime.errors.invoke import InvokeError +from libs.helper import uuid_value from libs.infinite_scroll_pagination import InfiniteScrollPagination from libs.login import current_account_with_tenant, login_required from models.enums import FeedbackFromSource, FeedbackRating @@ -98,6 +110,51 @@ class SuggestedQuestionsResponse(BaseModel): data: list[str] = Field(description="Suggested question") +class MessageDetailResponse(ResponseModel): + id: str + conversation_id: str + inputs: dict[str, JSONValue] + query: str + message: JSONValue | None = None + message_tokens: int | None = None + answer: str = Field(validation_alias="re_sign_file_url_answer") + answer_tokens: int | None = None + provider_response_latency: float | None = None + from_source: str + from_end_user_id: str | None = None + from_account_id: str | None = None + feedbacks: list[Feedback] = Field(default_factory=list) + workflow_run_id: str | None = None + annotation: ConversationAnnotation | None = None + annotation_hit_history: ConversationAnnotationHitHistory | None = None + created_at: int | None = None + agent_thoughts: list[AgentThought] = Field(default_factory=list) + message_files: list[MessageFile] = Field(default_factory=list) + extra_contents: list[ExecutionExtraContentDomainModel] = Field(default_factory=list) + metadata: JSONValue | None = Field(default=None, validation_alias="message_metadata_dict") + status: str + error: str | None = None + parent_message_id: str | None = None + + @field_validator("inputs", mode="before") + @classmethod + def _normalize_inputs(cls, value: JSONValue) -> JSONValue: + return format_files_contained(value) + + 
@field_validator("created_at", mode="before") + @classmethod + def _normalize_created_at(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return to_timestamp(value) + return value + + +class MessageInfiniteScrollPaginationResponse(ResponseModel): + limit: int + has_more: bool + data: list[MessageDetailResponse] + + register_schema_models( console_ns, ChatMessagesQuery, @@ -105,124 +162,8 @@ register_schema_models( FeedbackExportQuery, AnnotationCountResponse, SuggestedQuestionsResponse, -) - -# Register models for flask_restx to avoid dict type issues in Swagger -# Register in dependency order: base models first, then dependent models - -# Base models -simple_account_model = console_ns.model( - "SimpleAccount", - { - "id": fields.String, - "name": fields.String, - "email": fields.String, - }, -) - -message_file_model = console_ns.model( - "MessageFile", - { - "id": fields.String, - "filename": fields.String, - "type": fields.String, - "url": fields.String, - "mime_type": fields.String, - "size": fields.Integer, - "transfer_method": fields.String, - "belongs_to": fields.String(default="user"), - "upload_file_id": fields.String(default=None), - }, -) - -agent_thought_model = console_ns.model( - "AgentThought", - { - "id": fields.String, - "chain_id": fields.String, - "message_id": fields.String, - "position": fields.Integer, - "thought": fields.String, - "tool": fields.String, - "tool_labels": fields.Raw, - "tool_input": fields.String, - "created_at": TimestampField, - "observation": fields.String, - "files": fields.List(fields.String), - }, -) - -# Models that depend on simple_account_model -feedback_model = console_ns.model( - "Feedback", - { - "rating": fields.String, - "content": fields.String, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_account": fields.Nested(simple_account_model, allow_null=True), - }, -) - -annotation_model = console_ns.model( - "Annotation", - { - "id": fields.String, - "question": fields.String, - "content": fields.String, - "account": fields.Nested(simple_account_model, allow_null=True), - "created_at": TimestampField, - }, -) - -annotation_hit_history_model = console_ns.model( - "AnnotationHitHistory", - { - "annotation_id": fields.String(attribute="id"), - "annotation_create_account": fields.Nested(simple_account_model, allow_null=True), - "created_at": TimestampField, - }, -) - -# Message detail model that depends on multiple models -message_detail_model = console_ns.model( - "MessageDetail", - { - "id": fields.String, - "conversation_id": fields.String, - "inputs": FilesContainedField, - "query": fields.String, - "message": fields.Raw, - "message_tokens": fields.Integer, - "answer": fields.String(attribute="re_sign_file_url_answer"), - "answer_tokens": fields.Integer, - "provider_response_latency": fields.Float, - "from_source": fields.String, - "from_end_user_id": fields.String, - "from_account_id": fields.String, - "feedbacks": fields.List(fields.Nested(feedback_model)), - "workflow_run_id": fields.String, - "annotation": fields.Nested(annotation_model, allow_null=True), - "annotation_hit_history": fields.Nested(annotation_hit_history_model, allow_null=True), - "created_at": TimestampField, - "agent_thoughts": fields.List(fields.Nested(agent_thought_model)), - "message_files": fields.List(fields.Nested(message_file_model)), - "extra_contents": fields.List(fields.Raw), - "metadata": fields.Raw(attribute="message_metadata_dict"), - "status": fields.String, - "error": fields.String, - 
"parent_message_id": fields.String, - }, -) - -# Message infinite scroll pagination model -message_infinite_scroll_pagination_model = console_ns.model( - "MessageInfiniteScrollPagination", - { - "limit": fields.Integer, - "has_more": fields.Boolean, - "data": fields.List(fields.Nested(message_detail_model)), - }, + MessageDetailResponse, + MessageInfiniteScrollPaginationResponse, ) @@ -232,13 +173,12 @@ class ChatMessageListApi(Resource): @console_ns.doc(description="Get chat messages for a conversation with pagination") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[ChatMessagesQuery.__name__]) - @console_ns.response(200, "Success", message_infinite_scroll_pagination_model) + @console_ns.response(200, "Success", console_ns.models[MessageInfiniteScrollPaginationResponse.__name__]) @console_ns.response(404, "Conversation not found") @login_required @account_initialization_required @setup_required @get_app_model(mode=[AppMode.CHAT, AppMode.AGENT_CHAT, AppMode.ADVANCED_CHAT]) - @marshal_with(message_infinite_scroll_pagination_model) @edit_permission_required def get(self, app_model): args = ChatMessagesQuery.model_validate(request.args.to_dict()) @@ -298,7 +238,10 @@ class ChatMessageListApi(Resource): history_messages = list(reversed(history_messages)) attach_message_extra_contents(history_messages) - return InfiniteScrollPagination(data=history_messages, limit=args.limit, has_more=has_more) + return MessageInfiniteScrollPaginationResponse.model_validate( + InfiniteScrollPagination(data=history_messages, limit=args.limit, has_more=has_more), + from_attributes=True, + ).model_dump(mode="json") @console_ns.route("/apps//feedbacks") @@ -468,13 +411,12 @@ class MessageApi(Resource): @console_ns.doc("get_message") @console_ns.doc(description="Get message details by ID") @console_ns.doc(params={"app_id": "Application ID", "message_id": "Message ID"}) - @console_ns.response(200, "Message retrieved successfully", message_detail_model) + @console_ns.response(200, "Message retrieved successfully", console_ns.models[MessageDetailResponse.__name__]) @console_ns.response(404, "Message not found") @get_app_model @setup_required @login_required @account_initialization_required - @marshal_with(message_detail_model) def get(self, app_model, message_id: str): message_id = str(message_id) @@ -486,4 +428,4 @@ class MessageApi(Resource): raise NotFound("Message Not Exists.") attach_message_extra_contents([message]) - return message + return MessageDetailResponse.model_validate(message, from_attributes=True).model_dump(mode="json") diff --git a/api/controllers/console/app/workflow.py b/api/controllers/console/app/workflow.py index da8d25c2eb..478f783eb0 100644 --- a/api/controllers/console/app/workflow.py +++ b/api/controllers/console/app/workflow.py @@ -4,11 +4,7 @@ from collections.abc import Sequence from typing import Any from flask import abort, request -from flask_restx import Resource, fields, marshal_with -from graphon.enums import NodeType -from graphon.file import File -from graphon.graph_engine.manager import GraphEngineManager -from graphon.model_runtime.utils.encoders import jsonable_encoder +from flask_restx import Resource, fields, marshal, marshal_with from pydantic import BaseModel, Field, ValidationError, field_validator from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import BadRequest, Forbidden, InternalServerError, NotFound @@ -39,7 +35,13 @@ from extensions.ext_database import db from extensions.ext_redis import redis_client from 
factories import file_factory, variable_factory from fields.member_fields import simple_account_fields +from fields.online_user_fields import online_user_list_fields from fields.workflow_fields import workflow_fields, workflow_pagination_fields +from graphon.enums import NodeType +from graphon.file import File +from graphon.file import helpers as file_helpers +from graphon.graph_engine.manager import GraphEngineManager +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs import helper from libs.datetime_utils import naive_utc_now from libs.helper import TimestampField, uuid_value @@ -47,6 +49,7 @@ from libs.login import current_account_with_tenant, login_required from models import App from models.model import AppMode from models.workflow import Workflow +from repositories.workflow_collaboration_repository import WORKFLOW_ONLINE_USERS_PREFIX from services.app_generate_service import AppGenerateService from services.errors.app import IsDraftWorkflowError, WorkflowHashNotEqualError, WorkflowNotFoundError from services.errors.llm import InvokeRateLimitError @@ -57,6 +60,7 @@ _file_access_controller = DatabaseFileAccessController() LISTENING_RETRY_IN = 2000 DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" RESTORE_SOURCE_WORKFLOW_MUST_BE_PUBLISHED_MESSAGE = "source workflow must be published" +MAX_WORKFLOW_ONLINE_USERS_QUERY_IDS = 50 # Register models for flask_restx to avoid dict type issues in Swagger # Register in dependency order: base models first, then dependent models @@ -150,6 +154,14 @@ class ConvertToWorkflowPayload(BaseModel): icon_background: str | None = None +class WorkflowFeaturesPayload(BaseModel): + features: dict[str, Any] = Field(..., description="Workflow feature configuration") + + +class WorkflowOnlineUsersQuery(BaseModel): + app_ids: str = Field(..., description="Comma-separated app IDs") + + class DraftWorkflowTriggerRunPayload(BaseModel): node_id: str @@ -173,6 +185,8 @@ reg(DefaultBlockConfigQuery) reg(ConvertToWorkflowPayload) reg(WorkflowListQuery) reg(WorkflowUpdatePayload) +reg(WorkflowFeaturesPayload) +reg(WorkflowOnlineUsersQuery) reg(DraftWorkflowTriggerRunPayload) reg(DraftWorkflowTriggerRunAllPayload) @@ -931,6 +945,32 @@ class ConvertToWorkflowApi(Resource): } +@console_ns.route("/apps//workflows/draft/features") +class WorkflowFeaturesApi(Resource): + """Update draft workflow features.""" + + @console_ns.expect(console_ns.models[WorkflowFeaturesPayload.__name__]) + @console_ns.doc("update_workflow_features") + @console_ns.doc(description="Update draft workflow features") + @console_ns.doc(params={"app_id": "Application ID"}) + @console_ns.response(200, "Workflow features updated successfully") + @setup_required + @login_required + @account_initialization_required + @get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW]) + @edit_permission_required + def post(self, app_model: App): + current_user, _ = current_account_with_tenant() + + args = WorkflowFeaturesPayload.model_validate(console_ns.payload or {}) + features = args.features + + workflow_service = WorkflowService() + workflow_service.update_draft_workflow_features(app_model=app_model, features=features, account=current_user) + + return {"result": "success"} + + @console_ns.route("/apps//workflows") class PublishedAllWorkflowApi(Resource): @console_ns.expect(console_ns.models[WorkflowListQuery.__name__]) @@ -942,7 +982,6 @@ class PublishedAllWorkflowApi(Resource): @login_required @account_initialization_required @get_app_model(mode=[AppMode.ADVANCED_CHAT, 
AppMode.WORKFLOW]) - @marshal_with(workflow_pagination_model) @edit_permission_required def get(self, app_model: App): """ @@ -970,9 +1009,10 @@ class PublishedAllWorkflowApi(Resource): user_id=user_id, named_only=named_only, ) + serialized_workflows = marshal(workflows, workflow_fields_copy) return { - "items": workflows, + "items": serialized_workflows, "page": page, "limit": limit, "has_more": has_more, @@ -1340,3 +1380,62 @@ class DraftWorkflowTriggerRunAllApi(Resource): "status": "error", } ), 400 + + +@console_ns.route("/apps/workflows/online-users") +class WorkflowOnlineUsersApi(Resource): + @console_ns.expect(console_ns.models[WorkflowOnlineUsersQuery.__name__]) + @console_ns.doc("get_workflow_online_users") + @console_ns.doc(description="Get workflow online users") + @setup_required + @login_required + @account_initialization_required + @marshal_with(online_user_list_fields) + def get(self): + args = WorkflowOnlineUsersQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore + + app_ids = list(dict.fromkeys(app_id.strip() for app_id in args.app_ids.split(",") if app_id.strip())) + if len(app_ids) > MAX_WORKFLOW_ONLINE_USERS_QUERY_IDS: + raise BadRequest(f"Maximum {MAX_WORKFLOW_ONLINE_USERS_QUERY_IDS} app_ids are allowed per request.") + + if not app_ids: + return {"data": []} + + _, current_tenant_id = current_account_with_tenant() + workflow_service = WorkflowService() + accessible_app_ids = workflow_service.get_accessible_app_ids(app_ids, current_tenant_id) + + results = [] + for app_id in app_ids: + if app_id not in accessible_app_ids: + continue + + users_json = redis_client.hgetall(f"{WORKFLOW_ONLINE_USERS_PREFIX}{app_id}") + + users = [] + for _, user_info_json in users_json.items(): + try: + user_info = json.loads(user_info_json) + except Exception: + continue + + if not isinstance(user_info, dict): + continue + + avatar = user_info.get("avatar") + if isinstance(avatar, str) and avatar and not avatar.startswith(("http://", "https://")): + try: + user_info["avatar"] = file_helpers.get_signed_file_url(avatar) + except Exception as exc: + logger.warning( + "Failed to sign workflow online user avatar; using original value. 
" + "app_id=%s avatar=%s error=%s", + app_id, + avatar, + exc, + ) + + users.append(user_info) + results.append({"app_id": app_id, "users": users}) + + return {"data": results} diff --git a/api/controllers/console/app/workflow_app_log.py b/api/controllers/console/app/workflow_app_log.py index 8ae6a78a62..4b39590235 100644 --- a/api/controllers/console/app/workflow_app_log.py +++ b/api/controllers/console/app/workflow_app_log.py @@ -1,27 +1,26 @@ from datetime import datetime +from typing import Any from dateutil.parser import isoparse from flask import request -from flask_restx import Resource, marshal_with -from graphon.enums import WorkflowExecutionStatus +from flask_restx import Resource from pydantic import BaseModel, Field, field_validator from sqlalchemy.orm import sessionmaker +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.app.wraps import get_app_model from controllers.console.wraps import account_initialization_required, setup_required from extensions.ext_database import db -from fields.workflow_app_log_fields import ( - build_workflow_app_log_pagination_model, - build_workflow_archived_log_pagination_model, -) +from fields.base import ResponseModel +from fields.end_user_fields import SimpleEndUser +from fields.member_fields import SimpleAccount +from graphon.enums import WorkflowExecutionStatus from libs.login import login_required from models import App from models.model import AppMode from services.workflow_app_service import WorkflowAppService -DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" - class WorkflowAppLogQuery(BaseModel): keyword: str | None = Field(default=None, description="Search keyword for filtering logs") @@ -58,13 +57,113 @@ class WorkflowAppLogQuery(BaseModel): raise ValueError("Invalid boolean value for detail") -console_ns.schema_model( - WorkflowAppLogQuery.__name__, WorkflowAppLogQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0) -) +class WorkflowRunForLogResponse(ResponseModel): + id: str + version: str | None = None + status: str | None = None + triggered_from: str | None = None + error: str | None = None + elapsed_time: float | None = None + total_tokens: int | None = None + total_steps: int | None = None + created_at: int | None = None + finished_at: int | None = None + exceptions_count: int | None = None -# Register model for flask_restx to avoid dict type issues in Swagger -workflow_app_log_pagination_model = build_workflow_app_log_pagination_model(console_ns) -workflow_archived_log_pagination_model = build_workflow_archived_log_pagination_model(console_ns) + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None: + return None + if isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class WorkflowRunForArchivedLogResponse(ResponseModel): + id: str + status: str | None = None + triggered_from: str | None = None + elapsed_time: float | None = None + total_tokens: int | None = None + + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None: + return None + if isinstance(value, str): + return value + return 
str(getattr(value, "value", value)) + + +class WorkflowAppLogPartialResponse(ResponseModel): + id: str + workflow_run: WorkflowRunForLogResponse | None = None + details: Any = None + created_from: str | None = None + created_by_role: str | None = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + created_at: int | None = None + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class WorkflowArchivedLogPartialResponse(ResponseModel): + id: str + workflow_run: WorkflowRunForArchivedLogResponse | None = None + trigger_metadata: Any = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + created_at: int | None = None + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class WorkflowAppLogPaginationResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[WorkflowAppLogPartialResponse] + + +class WorkflowArchivedLogPaginationResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[WorkflowArchivedLogPartialResponse] + + +register_schema_models( + console_ns, + WorkflowAppLogQuery, + WorkflowRunForLogResponse, + WorkflowRunForArchivedLogResponse, + WorkflowAppLogPartialResponse, + WorkflowArchivedLogPartialResponse, + WorkflowAppLogPaginationResponse, + WorkflowArchivedLogPaginationResponse, +) @console_ns.route("/apps//workflow-app-logs") @@ -73,12 +172,15 @@ class WorkflowAppLogApi(Resource): @console_ns.doc(description="Get workflow application execution logs") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[WorkflowAppLogQuery.__name__]) - @console_ns.response(200, "Workflow app logs retrieved successfully", workflow_app_log_pagination_model) + @console_ns.response( + 200, + "Workflow app logs retrieved successfully", + console_ns.models[WorkflowAppLogPaginationResponse.__name__], + ) @setup_required @login_required @account_initialization_required @get_app_model(mode=[AppMode.WORKFLOW]) - @marshal_with(workflow_app_log_pagination_model) def get(self, app_model: App): """ Get workflow app logs @@ -102,7 +204,9 @@ class WorkflowAppLogApi(Resource): created_by_account=args.created_by_account, ) - return workflow_app_log_pagination + return WorkflowAppLogPaginationResponse.model_validate( + workflow_app_log_pagination, from_attributes=True + ).model_dump(mode="json") @console_ns.route("/apps//workflow-archived-logs") @@ -111,12 +215,15 @@ class WorkflowArchivedLogApi(Resource): @console_ns.doc(description="Get workflow archived execution logs") @console_ns.doc(params={"app_id": "Application ID"}) @console_ns.expect(console_ns.models[WorkflowAppLogQuery.__name__]) - @console_ns.response(200, "Workflow archived logs retrieved successfully", workflow_archived_log_pagination_model) + @console_ns.response( + 200, + "Workflow archived logs retrieved successfully", + console_ns.models[WorkflowArchivedLogPaginationResponse.__name__], + ) @setup_required @login_required @account_initialization_required @get_app_model(mode=[AppMode.WORKFLOW]) - @marshal_with(workflow_archived_log_pagination_model) def get(self, app_model: App): """ Get 
workflow archived logs @@ -132,4 +239,6 @@ class WorkflowArchivedLogApi(Resource): limit=args.limit, ) - return workflow_app_log_pagination + return WorkflowArchivedLogPaginationResponse.model_validate( + workflow_app_log_pagination, from_attributes=True + ).model_dump(mode="json") diff --git a/api/controllers/console/app/workflow_comment.py b/api/controllers/console/app/workflow_comment.py new file mode 100644 index 0000000000..e7c3e982a6 --- /dev/null +++ b/api/controllers/console/app/workflow_comment.py @@ -0,0 +1,335 @@ +import logging + +from flask_restx import Resource, marshal_with +from pydantic import BaseModel, Field, TypeAdapter + +from controllers.common.schema import register_schema_models +from controllers.console import console_ns +from controllers.console.app.wraps import get_app_model +from controllers.console.wraps import account_initialization_required, edit_permission_required, setup_required +from fields.member_fields import AccountWithRole +from fields.workflow_comment_fields import ( + workflow_comment_basic_fields, + workflow_comment_create_fields, + workflow_comment_detail_fields, + workflow_comment_reply_create_fields, + workflow_comment_reply_update_fields, + workflow_comment_resolve_fields, + workflow_comment_update_fields, +) +from libs.login import current_user, login_required +from models import App +from services.account_service import TenantService +from services.workflow_comment_service import WorkflowCommentService + +logger = logging.getLogger(__name__) +DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" + + +class WorkflowCommentCreatePayload(BaseModel): + content: str = Field(..., description="Comment content") + position_x: float = Field(..., description="Comment X position") + position_y: float = Field(..., description="Comment Y position") + mentioned_user_ids: list[str] = Field(default_factory=list, description="Mentioned user IDs") + + +class WorkflowCommentUpdatePayload(BaseModel): + content: str = Field(..., description="Comment content") + position_x: float | None = Field(default=None, description="Comment X position") + position_y: float | None = Field(default=None, description="Comment Y position") + mentioned_user_ids: list[str] | None = Field( + default=None, + description="Mentioned user IDs. 
Omit to keep existing mentions.", + ) + + +class WorkflowCommentReplyPayload(BaseModel): + content: str = Field(..., description="Reply content") + mentioned_user_ids: list[str] = Field(default_factory=list, description="Mentioned user IDs") + + +class WorkflowCommentMentionUsersPayload(BaseModel): + users: list[AccountWithRole] + + +for model in ( + WorkflowCommentCreatePayload, + WorkflowCommentUpdatePayload, + WorkflowCommentReplyPayload, +): + console_ns.schema_model(model.__name__, model.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)) +register_schema_models(console_ns, AccountWithRole, WorkflowCommentMentionUsersPayload) + +workflow_comment_basic_model = console_ns.model("WorkflowCommentBasic", workflow_comment_basic_fields) +workflow_comment_detail_model = console_ns.model("WorkflowCommentDetail", workflow_comment_detail_fields) +workflow_comment_create_model = console_ns.model("WorkflowCommentCreate", workflow_comment_create_fields) +workflow_comment_update_model = console_ns.model("WorkflowCommentUpdate", workflow_comment_update_fields) +workflow_comment_resolve_model = console_ns.model("WorkflowCommentResolve", workflow_comment_resolve_fields) +workflow_comment_reply_create_model = console_ns.model( + "WorkflowCommentReplyCreate", workflow_comment_reply_create_fields +) +workflow_comment_reply_update_model = console_ns.model( + "WorkflowCommentReplyUpdate", workflow_comment_reply_update_fields +) + + +@console_ns.route("/apps//workflow/comments") +class WorkflowCommentListApi(Resource): + """API for listing and creating workflow comments.""" + + @console_ns.doc("list_workflow_comments") + @console_ns.doc(description="Get all comments for a workflow") + @console_ns.doc(params={"app_id": "Application ID"}) + @console_ns.response(200, "Comments retrieved successfully", workflow_comment_basic_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_basic_model, envelope="data") + def get(self, app_model: App): + """Get all comments for a workflow.""" + comments = WorkflowCommentService.get_comments(tenant_id=current_user.current_tenant_id, app_id=app_model.id) + + return comments + + @console_ns.doc("create_workflow_comment") + @console_ns.doc(description="Create a new workflow comment") + @console_ns.doc(params={"app_id": "Application ID"}) + @console_ns.expect(console_ns.models[WorkflowCommentCreatePayload.__name__]) + @console_ns.response(201, "Comment created successfully", workflow_comment_create_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_create_model) + @edit_permission_required + def post(self, app_model: App): + """Create a new workflow comment.""" + payload = WorkflowCommentCreatePayload.model_validate(console_ns.payload or {}) + + result = WorkflowCommentService.create_comment( + tenant_id=current_user.current_tenant_id, + app_id=app_model.id, + created_by=current_user.id, + content=payload.content, + position_x=payload.position_x, + position_y=payload.position_y, + mentioned_user_ids=payload.mentioned_user_ids, + ) + + return result, 201 + + +@console_ns.route("/apps//workflow/comments/") +class WorkflowCommentDetailApi(Resource): + """API for managing individual workflow comments.""" + + @console_ns.doc("get_workflow_comment") + @console_ns.doc(description="Get a specific workflow comment") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"}) + 
@console_ns.response(200, "Comment retrieved successfully", workflow_comment_detail_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_detail_model) + def get(self, app_model: App, comment_id: str): + """Get a specific workflow comment.""" + comment = WorkflowCommentService.get_comment( + tenant_id=current_user.current_tenant_id, app_id=app_model.id, comment_id=comment_id + ) + + return comment + + @console_ns.doc("update_workflow_comment") + @console_ns.doc(description="Update a workflow comment") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"}) + @console_ns.expect(console_ns.models[WorkflowCommentUpdatePayload.__name__]) + @console_ns.response(200, "Comment updated successfully", workflow_comment_update_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_update_model) + @edit_permission_required + def put(self, app_model: App, comment_id: str): + """Update a workflow comment.""" + payload = WorkflowCommentUpdatePayload.model_validate(console_ns.payload or {}) + + result = WorkflowCommentService.update_comment( + tenant_id=current_user.current_tenant_id, + app_id=app_model.id, + comment_id=comment_id, + user_id=current_user.id, + content=payload.content, + position_x=payload.position_x, + position_y=payload.position_y, + mentioned_user_ids=payload.mentioned_user_ids, + ) + + return result + + @console_ns.doc("delete_workflow_comment") + @console_ns.doc(description="Delete a workflow comment") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"}) + @console_ns.response(204, "Comment deleted successfully") + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @edit_permission_required + def delete(self, app_model: App, comment_id: str): + """Delete a workflow comment.""" + WorkflowCommentService.delete_comment( + tenant_id=current_user.current_tenant_id, + app_id=app_model.id, + comment_id=comment_id, + user_id=current_user.id, + ) + + return {"result": "success"}, 204 + + +@console_ns.route("/apps//workflow/comments//resolve") +class WorkflowCommentResolveApi(Resource): + """API for resolving and reopening workflow comments.""" + + @console_ns.doc("resolve_workflow_comment") + @console_ns.doc(description="Resolve a workflow comment") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"}) + @console_ns.response(200, "Comment resolved successfully", workflow_comment_resolve_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_resolve_model) + @edit_permission_required + def post(self, app_model: App, comment_id: str): + """Resolve a workflow comment.""" + comment = WorkflowCommentService.resolve_comment( + tenant_id=current_user.current_tenant_id, + app_id=app_model.id, + comment_id=comment_id, + user_id=current_user.id, + ) + + return comment + + +@console_ns.route("/apps//workflow/comments//replies") +class WorkflowCommentReplyApi(Resource): + """API for managing comment replies.""" + + @console_ns.doc("create_workflow_comment_reply") + @console_ns.doc(description="Add a reply to a workflow comment") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID"}) + @console_ns.expect(console_ns.models[WorkflowCommentReplyPayload.__name__]) + @console_ns.response(201, "Reply created successfully", 
workflow_comment_reply_create_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_reply_create_model) + @edit_permission_required + def post(self, app_model: App, comment_id: str): + """Add a reply to a workflow comment.""" + # Validate comment access first + WorkflowCommentService.validate_comment_access( + comment_id=comment_id, tenant_id=current_user.current_tenant_id, app_id=app_model.id + ) + + payload = WorkflowCommentReplyPayload.model_validate(console_ns.payload or {}) + + result = WorkflowCommentService.create_reply( + comment_id=comment_id, + content=payload.content, + created_by=current_user.id, + mentioned_user_ids=payload.mentioned_user_ids, + ) + + return result, 201 + + +@console_ns.route("/apps//workflow/comments//replies/") +class WorkflowCommentReplyDetailApi(Resource): + """API for managing individual comment replies.""" + + @console_ns.doc("update_workflow_comment_reply") + @console_ns.doc(description="Update a comment reply") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID", "reply_id": "Reply ID"}) + @console_ns.expect(console_ns.models[WorkflowCommentReplyPayload.__name__]) + @console_ns.response(200, "Reply updated successfully", workflow_comment_reply_update_model) + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @marshal_with(workflow_comment_reply_update_model) + @edit_permission_required + def put(self, app_model: App, comment_id: str, reply_id: str): + """Update a comment reply.""" + # Validate comment access first + WorkflowCommentService.validate_comment_access( + comment_id=comment_id, tenant_id=current_user.current_tenant_id, app_id=app_model.id + ) + + payload = WorkflowCommentReplyPayload.model_validate(console_ns.payload or {}) + + reply = WorkflowCommentService.update_reply( + tenant_id=current_user.current_tenant_id, + app_id=app_model.id, + comment_id=comment_id, + reply_id=reply_id, + user_id=current_user.id, + content=payload.content, + mentioned_user_ids=payload.mentioned_user_ids, + ) + + return reply + + @console_ns.doc("delete_workflow_comment_reply") + @console_ns.doc(description="Delete a comment reply") + @console_ns.doc(params={"app_id": "Application ID", "comment_id": "Comment ID", "reply_id": "Reply ID"}) + @console_ns.response(204, "Reply deleted successfully") + @login_required + @setup_required + @account_initialization_required + @get_app_model() + @edit_permission_required + def delete(self, app_model: App, comment_id: str, reply_id: str): + """Delete a comment reply.""" + # Validate comment access first + WorkflowCommentService.validate_comment_access( + comment_id=comment_id, tenant_id=current_user.current_tenant_id, app_id=app_model.id + ) + + WorkflowCommentService.delete_reply( + tenant_id=current_user.current_tenant_id, + app_id=app_model.id, + comment_id=comment_id, + reply_id=reply_id, + user_id=current_user.id, + ) + + return {"result": "success"}, 204 + + +@console_ns.route("/apps//workflow/comments/mention-users") +class WorkflowCommentMentionUsersApi(Resource): + """API for getting mentionable users for workflow comments.""" + + @console_ns.doc("workflow_comment_mention_users") + @console_ns.doc(description="Get all users in current tenant for mentions") + @console_ns.doc(params={"app_id": "Application ID"}) + @console_ns.response( + 200, "Mentionable users retrieved successfully", console_ns.models[WorkflowCommentMentionUsersPayload.__name__] + ) + @login_required + 
@setup_required + @account_initialization_required + @get_app_model() + def get(self, app_model: App): + """Get all users in current tenant for mentions.""" + members = TenantService.get_tenant_members(current_user.current_tenant) + users = TypeAdapter(list[AccountWithRole]).validate_python(members, from_attributes=True) + response = WorkflowCommentMentionUsersPayload(users=users) + return response.model_dump(mode="json"), 200 diff --git a/api/controllers/console/app/workflow_draft_variable.py b/api/controllers/console/app/workflow_draft_variable.py index 657e794ac4..f6319573e0 100644 --- a/api/controllers/console/app/workflow_draft_variable.py +++ b/api/controllers/console/app/workflow_draft_variable.py @@ -5,10 +5,6 @@ from typing import Any, TypedDict from flask import Response, request from flask_restx import Resource, fields, marshal, marshal_with -from graphon.file import helpers as file_helpers -from graphon.variables.segment_group import SegmentGroup -from graphon.variables.segments import ArrayFileSegment, FileSegment, Segment -from graphon.variables.types import SegmentType from pydantic import BaseModel, Field from sqlalchemy.orm import sessionmaker @@ -22,8 +18,13 @@ from controllers.web.error import InvalidArgumentError, NotFoundError from core.app.file_access import DatabaseFileAccessController from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, SYSTEM_VARIABLE_NODE_ID from extensions.ext_database import db +from factories import variable_factory from factories.file_factory import build_from_mapping, build_from_mappings from factories.variable_factory import build_segment_with_type +from graphon.file import helpers as file_helpers +from graphon.variables.segment_group import SegmentGroup +from graphon.variables.segments import ArrayFileSegment, FileSegment, Segment +from graphon.variables.types import SegmentType from libs.login import current_user, login_required from models import App, AppMode from models.workflow import WorkflowDraftVariable @@ -45,6 +46,16 @@ class WorkflowDraftVariableUpdatePayload(BaseModel): value: Any | None = Field(default=None, description="Variable value") +class ConversationVariableUpdatePayload(BaseModel): + conversation_variables: list[dict[str, Any]] = Field( + ..., description="Conversation variables for the draft workflow" + ) + + +class EnvironmentVariableUpdatePayload(BaseModel): + environment_variables: list[dict[str, Any]] = Field(..., description="Environment variables for the draft workflow") + + console_ns.schema_model( WorkflowDraftVariableListQuery.__name__, WorkflowDraftVariableListQuery.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), @@ -53,6 +64,14 @@ console_ns.schema_model( WorkflowDraftVariableUpdatePayload.__name__, WorkflowDraftVariableUpdatePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), ) +console_ns.schema_model( + ConversationVariableUpdatePayload.__name__, + ConversationVariableUpdatePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), +) +console_ns.schema_model( + EnvironmentVariableUpdatePayload.__name__, + EnvironmentVariableUpdatePayload.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), +) def _convert_values_to_json_serializable_object(value: Segment): @@ -510,6 +529,34 @@ class ConversationVariableCollectionApi(Resource): db.session.commit() return _get_variable_list(app_model, CONVERSATION_VARIABLE_NODE_ID) + @console_ns.expect(console_ns.models[ConversationVariableUpdatePayload.__name__]) + 
@console_ns.doc("update_conversation_variables") + @console_ns.doc(description="Update conversation variables for workflow draft") + @console_ns.doc(params={"app_id": "Application ID"}) + @console_ns.response(200, "Conversation variables updated successfully") + @setup_required + @login_required + @account_initialization_required + @edit_permission_required + @get_app_model(mode=AppMode.ADVANCED_CHAT) + def post(self, app_model: App): + payload = ConversationVariableUpdatePayload.model_validate(console_ns.payload or {}) + + workflow_service = WorkflowService() + + conversation_variables_list = payload.conversation_variables + conversation_variables = [ + variable_factory.build_conversation_variable_from_mapping(obj) for obj in conversation_variables_list + ] + + workflow_service.update_draft_workflow_conversation_variables( + app_model=app_model, + account=current_user, + conversation_variables=conversation_variables, + ) + + return {"result": "success"} + @console_ns.route("/apps//workflows/draft/system-variables") class SystemVariableCollectionApi(Resource): @@ -561,3 +608,31 @@ class EnvironmentVariableCollectionApi(Resource): ) return {"items": env_vars_list} + + @console_ns.expect(console_ns.models[EnvironmentVariableUpdatePayload.__name__]) + @console_ns.doc("update_environment_variables") + @console_ns.doc(description="Update environment variables for workflow draft") + @console_ns.doc(params={"app_id": "Application ID"}) + @console_ns.response(200, "Environment variables updated successfully") + @setup_required + @login_required + @account_initialization_required + @edit_permission_required + @get_app_model(mode=[AppMode.ADVANCED_CHAT, AppMode.WORKFLOW]) + def post(self, app_model: App): + payload = EnvironmentVariableUpdatePayload.model_validate(console_ns.payload or {}) + + workflow_service = WorkflowService() + + environment_variables_list = payload.environment_variables + environment_variables = [ + variable_factory.build_environment_variable_from_mapping(obj) for obj in environment_variables_list + ] + + workflow_service.update_draft_workflow_environment_variables( + app_model=app_model, + account=current_user, + environment_variables=environment_variables, + ) + + return {"result": "success"} diff --git a/api/controllers/console/app/workflow_run.py b/api/controllers/console/app/workflow_run.py index a1a075be71..6748d95d6b 100644 --- a/api/controllers/console/app/workflow_run.py +++ b/api/controllers/console/app/workflow_run.py @@ -3,8 +3,6 @@ from typing import Literal, TypedDict, cast from flask import request from flask_restx import Resource, fields, marshal_with -from graphon.entities.pause_reason import HumanInputRequired -from graphon.enums import WorkflowExecutionStatus from pydantic import BaseModel, Field, field_validator from sqlalchemy import select from sqlalchemy.orm import sessionmaker @@ -28,6 +26,8 @@ from fields.workflow_run_fields import ( workflow_run_node_execution_list_fields, workflow_run_pagination_fields, ) +from graphon.entities.pause_reason import HumanInputRequired +from graphon.enums import WorkflowExecutionStatus from libs.archive_storage import ArchiveStorageNotConfiguredError, get_archive_storage from libs.custom_inputs import time_duration from libs.helper import uuid_value diff --git a/api/controllers/console/app/workflow_trigger.py b/api/controllers/console/app/workflow_trigger.py index c457684c15..a6715fa200 100644 --- a/api/controllers/console/app/workflow_trigger.py +++ b/api/controllers/console/app/workflow_trigger.py @@ -1,16 +1,17 @@ 
import logging +from datetime import datetime from flask import request -from flask_restx import Resource, fields, marshal_with -from pydantic import BaseModel +from flask_restx import Resource +from pydantic import BaseModel, field_validator from sqlalchemy import select from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import NotFound from configs import dify_config -from controllers.common.schema import get_or_create_model +from controllers.common.schema import register_schema_models from extensions.ext_database import db -from fields.workflow_trigger_fields import trigger_fields, triggers_list_fields, webhook_trigger_fields +from fields.base import ResponseModel from libs.login import current_user, login_required from models.enums import AppTriggerStatus from models.model import Account, App, AppMode @@ -21,15 +22,6 @@ from ..app.wraps import get_app_model from ..wraps import account_initialization_required, edit_permission_required, setup_required logger = logging.getLogger(__name__) -DEFAULT_REF_TEMPLATE_SWAGGER_2_0 = "#/definitions/{model}" - -trigger_model = get_or_create_model("WorkflowTrigger", trigger_fields) - -triggers_list_fields_copy = triggers_list_fields.copy() -triggers_list_fields_copy["data"] = fields.List(fields.Nested(trigger_model)) -triggers_list_model = get_or_create_model("WorkflowTriggerList", triggers_list_fields_copy) - -webhook_trigger_model = get_or_create_model("WebhookTrigger", webhook_trigger_fields) class Parser(BaseModel): @@ -41,10 +33,52 @@ class ParserEnable(BaseModel): enable_trigger: bool -console_ns.schema_model(Parser.__name__, Parser.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)) +class WorkflowTriggerResponse(ResponseModel): + id: str + trigger_type: str + title: str + node_id: str + provider_name: str + icon: str + status: str + created_at: datetime | None = None + updated_at: datetime | None = None -console_ns.schema_model( - ParserEnable.__name__, ParserEnable.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0) + @field_validator("id", "trigger_type", "title", "node_id", "provider_name", "icon", "status", mode="before") + @classmethod + def _normalize_string_fields(cls, value: object) -> str: + if isinstance(value, str): + return value + return str(value) + + +class WorkflowTriggerListResponse(ResponseModel): + data: list[WorkflowTriggerResponse] + + +class WebhookTriggerResponse(ResponseModel): + id: str + webhook_id: str + webhook_url: str + webhook_debug_url: str + node_id: str + created_at: datetime | None = None + + @field_validator("id", "webhook_id", "webhook_url", "webhook_debug_url", "node_id", mode="before") + @classmethod + def _normalize_string_fields(cls, value: object) -> str: + if isinstance(value, str): + return value + return str(value) + + +register_schema_models( + console_ns, + Parser, + ParserEnable, + WorkflowTriggerResponse, + WorkflowTriggerListResponse, + WebhookTriggerResponse, ) @@ -57,7 +91,7 @@ class WebhookTriggerApi(Resource): @login_required @account_initialization_required @get_app_model(mode=AppMode.WORKFLOW) - @marshal_with(webhook_trigger_model) + @console_ns.response(200, "Success", console_ns.models[WebhookTriggerResponse.__name__]) def get(self, app_model: App): """Get webhook trigger for a node""" args = Parser.model_validate(request.args.to_dict(flat=True)) # type: ignore @@ -78,7 +112,7 @@ class WebhookTriggerApi(Resource): if not webhook_trigger: raise NotFound("Webhook trigger not found for this node") - return webhook_trigger + return 
WebhookTriggerResponse.model_validate(webhook_trigger, from_attributes=True).model_dump(mode="json") @console_ns.route("/apps//triggers") @@ -89,7 +123,7 @@ class AppTriggersApi(Resource): @login_required @account_initialization_required @get_app_model(mode=AppMode.WORKFLOW) - @marshal_with(triggers_list_model) + @console_ns.response(200, "Success", console_ns.models[WorkflowTriggerListResponse.__name__]) def get(self, app_model: App): """Get app triggers list""" assert isinstance(current_user, Account) @@ -118,7 +152,9 @@ class AppTriggersApi(Resource): else: trigger.icon = "" # type: ignore - return {"data": triggers} + return WorkflowTriggerListResponse.model_validate({"data": triggers}, from_attributes=True).model_dump( + mode="json" + ) @console_ns.route("/apps//trigger-enable") @@ -129,7 +165,7 @@ class AppTriggerEnableApi(Resource): @account_initialization_required @edit_permission_required @get_app_model(mode=AppMode.WORKFLOW) - @marshal_with(trigger_model) + @console_ns.response(200, "Success", console_ns.models[WorkflowTriggerResponse.__name__]) def post(self, app_model: App): """Update app trigger (enable/disable)""" args = ParserEnable.model_validate(console_ns.payload) @@ -160,4 +196,4 @@ class AppTriggerEnableApi(Resource): else: trigger.icon = "" # type: ignore - return trigger + return WorkflowTriggerResponse.model_validate(trigger, from_attributes=True).model_dump(mode="json") diff --git a/api/controllers/console/auth/activate.py b/api/controllers/console/auth/activate.py index e6316bfd62..f7061f820f 100644 --- a/api/controllers/console/auth/activate.py +++ b/api/controllers/console/auth/activate.py @@ -1,3 +1,5 @@ +from typing import Any + from flask import request from flask_restx import Resource from pydantic import BaseModel, Field, field_validator @@ -40,7 +42,7 @@ class ActivatePayload(BaseModel): class ActivationCheckResponse(BaseModel): is_valid: bool = Field(description="Whether token is valid") - data: dict | None = Field(default=None, description="Activation data if valid") + data: dict[str, Any] | None = Field(default=None, description="Activation data if valid") class ActivationResponse(BaseModel): diff --git a/api/controllers/console/auth/oauth_server.py b/api/controllers/console/auth/oauth_server.py index b55cda4244..727428c8e7 100644 --- a/api/controllers/console/auth/oauth_server.py +++ b/api/controllers/console/auth/oauth_server.py @@ -5,11 +5,11 @@ from typing import Concatenate from flask import jsonify, request from flask.typing import ResponseReturnValue from flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel from werkzeug.exceptions import BadRequest, NotFound from controllers.console.wraps import account_initialization_required, setup_required +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.login import current_account_with_tenant, login_required from models import Account from models.model import OAuthProviderApp diff --git a/api/controllers/console/datasets/datasets.py b/api/controllers/console/datasets/datasets.py index b2a905366a..ea0fdef0a7 100644 --- a/api/controllers/console/datasets/datasets.py +++ b/api/controllers/console/datasets/datasets.py @@ -2,7 +2,6 @@ from typing import Any, cast from flask import request from flask_restx import Resource, fields, marshal, marshal_with -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel, Field, field_validator from sqlalchemy import func, select from 
werkzeug.exceptions import Forbidden, NotFound @@ -49,6 +48,7 @@ from fields.dataset_fields import ( weighted_score_fields, ) from fields.document_fields import document_status_fields +from graphon.model_runtime.entities.model_entities import ModelType from libs.login import current_account_with_tenant, login_required from models import ApiToken, Dataset, Document, DocumentSegment, UploadFile from models.dataset import DatasetPermission, DatasetPermissionEnum diff --git a/api/controllers/console/datasets/datasets_document.py b/api/controllers/console/datasets/datasets_document.py index de8fe1c0e2..3372a967d9 100644 --- a/api/controllers/console/datasets/datasets_document.py +++ b/api/controllers/console/datasets/datasets_document.py @@ -3,20 +3,19 @@ import logging from argparse import ArgumentTypeError from collections.abc import Sequence from contextlib import ExitStack +from datetime import datetime from typing import Any, Literal, cast import sqlalchemy as sa from flask import request, send_file -from flask_restx import Resource, fields, marshal, marshal_with -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError -from pydantic import BaseModel, Field +from flask_restx import Resource, marshal +from pydantic import BaseModel, Field, field_validator from sqlalchemy import asc, desc, func, select from werkzeug.exceptions import Forbidden, NotFound import services from controllers.common.controller_schemas import DocumentBatchDownloadZipPayload -from controllers.common.schema import get_or_create_model, register_schema_models +from controllers.common.schema import register_schema_models from controllers.console import console_ns from core.errors.error import ( LLMBadRequestError, @@ -31,14 +30,14 @@ from core.rag.extractor.entity.datasource_type import DatasourceType from core.rag.extractor.entity.extract_setting import ExtractSetting, NotionInfo, WebsiteInfo from core.rag.index_processor.constant.index_type import IndexTechniqueType from extensions.ext_database import db -from fields.dataset_fields import dataset_fields +from fields.base import ResponseModel from fields.document_fields import ( - dataset_and_document_fields, document_fields, - document_metadata_fields, document_status_fields, document_with_segments_fields, ) +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from libs.datetime_utils import naive_utc_now from libs.login import current_account_with_tenant, login_required from models import DatasetProcessRule, Document, DocumentSegment, UploadFile @@ -72,27 +71,100 @@ from ..wraps import ( logger = logging.getLogger(__name__) -# Register models for flask_restx to avoid dict type issues in Swagger -dataset_model = get_or_create_model("Dataset", dataset_fields) +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value -document_metadata_model = get_or_create_model("DocumentMetadata", document_metadata_fields) -document_fields_copy = document_fields.copy() -document_fields_copy["doc_metadata"] = fields.List( - fields.Nested(document_metadata_model), attribute="doc_metadata_details" -) -document_model = get_or_create_model("Document", document_fields_copy) +def _normalize_enum(value: Any) -> Any: + if isinstance(value, str) or value is None: + return value + return getattr(value, "value", value) 
-document_with_segments_fields_copy = document_with_segments_fields.copy() -document_with_segments_fields_copy["doc_metadata"] = fields.List( - fields.Nested(document_metadata_model), attribute="doc_metadata_details" -) -document_with_segments_model = get_or_create_model("DocumentWithSegments", document_with_segments_fields_copy) -dataset_and_document_fields_copy = dataset_and_document_fields.copy() -dataset_and_document_fields_copy["dataset"] = fields.Nested(dataset_model) -dataset_and_document_fields_copy["documents"] = fields.List(fields.Nested(document_model)) -dataset_and_document_model = get_or_create_model("DatasetAndDocument", dataset_and_document_fields_copy) +class DatasetResponse(ResponseModel): + id: str + name: str + description: str | None = None + permission: str | None = None + data_source_type: str | None = None + indexing_technique: str | None = None + created_by: str | None = None + created_at: int | None = None + + @field_validator("data_source_type", "indexing_technique", mode="before") + @classmethod + def _normalize_enum_fields(cls, value: Any) -> Any: + return _normalize_enum(value) + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class DocumentMetadataResponse(ResponseModel): + id: str + name: str + type: str + value: str | None = None + + +class DocumentResponse(ResponseModel): + id: str + position: int | None = None + data_source_type: str | None = None + data_source_info: Any = Field(default=None, validation_alias="data_source_info_dict") + data_source_detail_dict: Any = None + dataset_process_rule_id: str | None = None + name: str + created_from: str | None = None + created_by: str | None = None + created_at: int | None = None + tokens: int | None = None + indexing_status: str | None = None + error: str | None = None + enabled: bool | None = None + disabled_at: int | None = None + disabled_by: str | None = None + archived: bool | None = None + display_status: str | None = None + word_count: int | None = None + hit_count: int | None = None + doc_form: str | None = None + doc_metadata: list[DocumentMetadataResponse] = Field(default_factory=list, validation_alias="doc_metadata_details") + summary_index_status: str | None = None + need_summary: bool | None = None + + @field_validator("data_source_type", "indexing_status", "display_status", "doc_form", mode="before") + @classmethod + def _normalize_enum_fields(cls, value: Any) -> Any: + return _normalize_enum(value) + + @field_validator("doc_metadata", mode="before") + @classmethod + def _normalize_doc_metadata(cls, value: Any) -> list[Any]: + if value is None: + return [] + return value + + @field_validator("created_at", "disabled_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class DocumentWithSegmentsResponse(DocumentResponse): + process_rule_dict: Any = None + completed_segments: int | None = None + total_segments: int | None = None + + +class DatasetAndDocumentResponse(ResponseModel): + dataset: DatasetResponse + documents: list[DocumentResponse] + batch: str class DocumentRetryPayload(BaseModel): @@ -107,6 +179,11 @@ class GenerateSummaryPayload(BaseModel): document_list: list[str] +class DocumentMetadataUpdatePayload(BaseModel): + doc_type: str | None = None + doc_metadata: Any = None + + class DocumentDatasetListParam(BaseModel): page: int = Field(1, title="Page", description="Page 
number.") limit: int = Field(20, title="Limit", description="Page size.") @@ -124,7 +201,13 @@ register_schema_models( DocumentRetryPayload, DocumentRenamePayload, GenerateSummaryPayload, + DocumentMetadataUpdatePayload, DocumentBatchDownloadZipPayload, + DatasetResponse, + DocumentMetadataResponse, + DocumentResponse, + DocumentWithSegmentsResponse, + DatasetAndDocumentResponse, ) @@ -357,10 +440,10 @@ class DatasetDocumentListApi(Resource): @setup_required @login_required @account_initialization_required - @marshal_with(dataset_and_document_model) @cloud_edition_billing_resource_check("vector_space") @cloud_edition_billing_rate_limit_check("knowledge") @console_ns.expect(console_ns.models[KnowledgeConfig.__name__]) + @console_ns.response(200, "Documents created successfully", console_ns.models[DatasetAndDocumentResponse.__name__]) def post(self, dataset_id): current_user, _ = current_account_with_tenant() dataset_id = str(dataset_id) @@ -398,7 +481,9 @@ class DatasetDocumentListApi(Resource): except ModelCurrentlyNotSupportError: raise ProviderModelCurrentlyNotSupportError() - return {"dataset": dataset, "documents": documents, "batch": batch} + return DatasetAndDocumentResponse.model_validate( + {"dataset": dataset, "documents": documents, "batch": batch}, from_attributes=True + ).model_dump(mode="json") @setup_required @login_required @@ -426,12 +511,13 @@ class DatasetInitApi(Resource): @console_ns.doc("init_dataset") @console_ns.doc(description="Initialize dataset with documents") @console_ns.expect(console_ns.models[KnowledgeConfig.__name__]) - @console_ns.response(201, "Dataset initialized successfully", dataset_and_document_model) + @console_ns.response( + 201, "Dataset initialized successfully", console_ns.models[DatasetAndDocumentResponse.__name__] + ) @console_ns.response(400, "Invalid request parameters") @setup_required @login_required @account_initialization_required - @marshal_with(dataset_and_document_model) @cloud_edition_billing_resource_check("vector_space") @cloud_edition_billing_rate_limit_check("knowledge") def post(self): @@ -479,9 +565,9 @@ class DatasetInitApi(Resource): except ModelCurrentlyNotSupportError: raise ProviderModelCurrentlyNotSupportError() - response = {"dataset": dataset, "documents": documents, "batch": batch} - - return response + return DatasetAndDocumentResponse.model_validate( + {"dataset": dataset, "documents": documents, "batch": batch}, from_attributes=True + ).model_dump(mode="json") @console_ns.route("/datasets//documents//indexing-estimate") @@ -988,15 +1074,7 @@ class DocumentMetadataApi(DocumentResource): @console_ns.doc("update_document_metadata") @console_ns.doc(description="Update document metadata") @console_ns.doc(params={"dataset_id": "Dataset ID", "document_id": "Document ID"}) - @console_ns.expect( - console_ns.model( - "UpdateDocumentMetadataRequest", - { - "doc_type": fields.String(description="Document type"), - "doc_metadata": fields.Raw(description="Document metadata"), - }, - ) - ) + @console_ns.expect(console_ns.models[DocumentMetadataUpdatePayload.__name__]) @console_ns.response(200, "Document metadata updated successfully") @console_ns.response(404, "Document not found") @console_ns.response(403, "Permission denied") @@ -1009,10 +1087,10 @@ class DocumentMetadataApi(DocumentResource): document_id = str(document_id) document = self.get_document(dataset_id, document_id) - req_data = request.get_json() + req_data = DocumentMetadataUpdatePayload.model_validate(request.get_json() or {}) - doc_type = req_data.get("doc_type") 
- doc_metadata = req_data.get("doc_metadata") + doc_type = req_data.doc_type + doc_metadata = req_data.doc_metadata # The role of the current user in the ta table must be admin, owner, dataset_operator, or editor if not current_user.is_dataset_editor: @@ -1026,7 +1104,7 @@ class DocumentMetadataApi(DocumentResource): if not isinstance(doc_metadata, dict): raise ValueError("doc_metadata must be a dictionary.") - metadata_schema: dict = cast(dict, DocumentService.DOCUMENT_METADATA_SCHEMA[doc_type]) + metadata_schema: dict[str, Any] = cast(dict[str, Any], DocumentService.DOCUMENT_METADATA_SCHEMA[doc_type]) document.doc_metadata = {} if doc_type == "others": @@ -1194,7 +1272,7 @@ class DocumentRenameApi(DocumentResource): @setup_required @login_required @account_initialization_required - @marshal_with(document_model) + @console_ns.response(200, "Document renamed successfully", console_ns.models[DocumentResponse.__name__]) @console_ns.expect(console_ns.models[DocumentRenamePayload.__name__]) def post(self, dataset_id, document_id): # The role of the current user in the ta table must be admin, owner, editor, or dataset_operator @@ -1212,7 +1290,7 @@ class DocumentRenameApi(DocumentResource): except services.errors.document.DocumentIndexingError: raise DocumentIndexingError("Cannot delete document during indexing.") - return document + return DocumentResponse.model_validate(document, from_attributes=True).model_dump(mode="json") @console_ns.route("/datasets//documents//website-sync") diff --git a/api/controllers/console/datasets/datasets_segments.py b/api/controllers/console/datasets/datasets_segments.py index 354c299bef..2647bb1f5a 100644 --- a/api/controllers/console/datasets/datasets_segments.py +++ b/api/controllers/console/datasets/datasets_segments.py @@ -2,7 +2,6 @@ import uuid from flask import request from flask_restx import Resource, marshal -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel, Field from sqlalchemy import String, cast, func, or_, select from sqlalchemy.dialects.postgresql import JSONB @@ -32,6 +31,7 @@ from core.rag.index_processor.constant.index_type import IndexTechniqueType from extensions.ext_database import db from extensions.ext_redis import redis_client from fields.segment_fields import child_chunk_fields, segment_fields +from graphon.model_runtime.entities.model_entities import ModelType from libs.helper import escape_like_pattern from libs.login import current_account_with_tenant, login_required from models.dataset import ChildChunk, DocumentSegment diff --git a/api/controllers/console/datasets/hit_testing.py b/api/controllers/console/datasets/hit_testing.py index e62be13c2f..36a7a4bb0e 100644 --- a/api/controllers/console/datasets/hit_testing.py +++ b/api/controllers/console/datasets/hit_testing.py @@ -1,13 +1,13 @@ -from flask_restx import Resource, fields +from __future__ import annotations -from controllers.common.schema import register_schema_model -from fields.hit_testing_fields import ( - child_chunk_fields, - document_fields, - files_fields, - hit_testing_record_fields, - segment_fields, -) +from datetime import datetime +from typing import Any + +from flask_restx import Resource +from pydantic import Field, field_validator + +from controllers.common.schema import register_schema_models +from fields.base import ResponseModel from libs.login import login_required from .. 
import console_ns @@ -18,39 +18,92 @@ from ..wraps import ( setup_required, ) -register_schema_model(console_ns, HitTestingPayload) + +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value -def _get_or_create_model(model_name: str, field_def): - """Get or create a flask_restx model to avoid dict type issues in Swagger.""" - existing = console_ns.models.get(model_name) - if existing is None: - existing = console_ns.model(model_name, field_def) - return existing +class HitTestingDocument(ResponseModel): + id: str | None = None + data_source_type: str | None = None + name: str | None = None + doc_type: str | None = None + doc_metadata: Any | None = None -# Register models for flask_restx to avoid dict type issues in Swagger -document_model = _get_or_create_model("HitTestingDocument", document_fields) +class HitTestingSegment(ResponseModel): + id: str | None = None + position: int | None = None + document_id: str | None = None + content: str | None = None + sign_content: str | None = None + answer: str | None = None + word_count: int | None = None + tokens: int | None = None + keywords: list[str] = Field(default_factory=list) + index_node_id: str | None = None + index_node_hash: str | None = None + hit_count: int | None = None + enabled: bool | None = None + disabled_at: int | None = None + disabled_by: str | None = None + status: str | None = None + created_by: str | None = None + created_at: int | None = None + indexing_at: int | None = None + completed_at: int | None = None + error: str | None = None + stopped_at: int | None = None + document: HitTestingDocument | None = None -segment_fields_copy = segment_fields.copy() -segment_fields_copy["document"] = fields.Nested(document_model) -segment_model = _get_or_create_model("HitTestingSegment", segment_fields_copy) + @field_validator("disabled_at", "created_at", "indexing_at", "completed_at", "stopped_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) -child_chunk_model = _get_or_create_model("HitTestingChildChunk", child_chunk_fields) -files_model = _get_or_create_model("HitTestingFile", files_fields) -hit_testing_record_fields_copy = hit_testing_record_fields.copy() -hit_testing_record_fields_copy["segment"] = fields.Nested(segment_model) -hit_testing_record_fields_copy["child_chunks"] = fields.List(fields.Nested(child_chunk_model)) -hit_testing_record_fields_copy["files"] = fields.List(fields.Nested(files_model)) -hit_testing_record_model = _get_or_create_model("HitTestingRecord", hit_testing_record_fields_copy) +class HitTestingChildChunk(ResponseModel): + id: str | None = None + content: str | None = None + position: int | None = None + score: float | None = None -# Response model for hit testing API -hit_testing_response_fields = { - "query": fields.String, - "records": fields.List(fields.Nested(hit_testing_record_model)), -} -hit_testing_response_model = _get_or_create_model("HitTestingResponse", hit_testing_response_fields) + +class HitTestingFile(ResponseModel): + id: str | None = None + name: str | None = None + size: int | None = None + extension: str | None = None + mime_type: str | None = None + source_url: str | None = None + + +class HitTestingRecord(ResponseModel): + segment: HitTestingSegment | None = None + child_chunks: list[HitTestingChildChunk] = Field(default_factory=list) + score: float | None = None + tsne_position: Any | None = None + files: 
list[HitTestingFile] = Field(default_factory=list) + summary: str | None = None + + +class HitTestingResponse(ResponseModel): + query: str + records: list[HitTestingRecord] = Field(default_factory=list) + + +register_schema_models( + console_ns, + HitTestingPayload, + HitTestingDocument, + HitTestingSegment, + HitTestingChildChunk, + HitTestingFile, + HitTestingRecord, + HitTestingResponse, +) @console_ns.route("/datasets//hit-testing") @@ -59,7 +112,11 @@ class HitTestingApi(Resource, DatasetsHitTestingBase): @console_ns.doc(description="Test dataset knowledge retrieval") @console_ns.doc(params={"dataset_id": "Dataset ID"}) @console_ns.expect(console_ns.models[HitTestingPayload.__name__]) - @console_ns.response(200, "Hit testing completed successfully", model=hit_testing_response_model) + @console_ns.response( + 200, + "Hit testing completed successfully", + model=console_ns.models[HitTestingResponse.__name__], + ) @console_ns.response(404, "Dataset not found") @console_ns.response(400, "Invalid parameters") @setup_required @@ -74,4 +131,4 @@ class HitTestingApi(Resource, DatasetsHitTestingBase): args = payload.model_dump(exclude_none=True) self.hit_testing_args_check(args) - return self.perform_hit_testing(dataset, args) + return HitTestingResponse.model_validate(self.perform_hit_testing(dataset, args)).model_dump(mode="json") diff --git a/api/controllers/console/datasets/hit_testing_base.py b/api/controllers/console/datasets/hit_testing_base.py index 8fb3699849..699fa599c8 100644 --- a/api/controllers/console/datasets/hit_testing_base.py +++ b/api/controllers/console/datasets/hit_testing_base.py @@ -2,7 +2,6 @@ import logging from typing import Any from flask_restx import marshal -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field from werkzeug.exceptions import Forbidden, InternalServerError, NotFound @@ -21,6 +20,7 @@ from core.errors.error import ( QuotaExceededError, ) from fields.hit_testing_fields import hit_testing_record_fields +from graphon.model_runtime.errors.invoke import InvokeError from libs.login import current_user from models.account import Account from services.dataset_service import DatasetService diff --git a/api/controllers/console/datasets/rag_pipeline/datasource_auth.py b/api/controllers/console/datasets/rag_pipeline/datasource_auth.py index bdf83b991e..fd0a8b33bc 100644 --- a/api/controllers/console/datasets/rag_pipeline/datasource_auth.py +++ b/api/controllers/console/datasets/rag_pipeline/datasource_auth.py @@ -2,8 +2,6 @@ from typing import Any from flask import make_response, redirect, request from flask_restx import Resource -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field from werkzeug.exceptions import Forbidden, NotFound @@ -12,6 +10,8 @@ from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, edit_permission_required, setup_required from core.plugin.impl.oauth import OAuthHandler +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.login import current_account_with_tenant, login_required from models.provider_ids import DatasourceProviderID from services.datasource_provider_service import DatasourceProviderService diff --git 
a/api/controllers/console/datasets/rag_pipeline/rag_pipeline_draft_variable.py b/api/controllers/console/datasets/rag_pipeline/rag_pipeline_draft_variable.py index 3549f9542d..b31d73f27d 100644 --- a/api/controllers/console/datasets/rag_pipeline/rag_pipeline_draft_variable.py +++ b/api/controllers/console/datasets/rag_pipeline/rag_pipeline_draft_variable.py @@ -4,7 +4,6 @@ from typing import Any, NoReturn from flask import Response, request from flask_restx import Resource, marshal, marshal_with -from graphon.variables.types import SegmentType from pydantic import BaseModel, Field from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import Forbidden @@ -28,6 +27,7 @@ from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, SYSTE from extensions.ext_database import db from factories.file_factory import build_from_mapping, build_from_mappings from factories.variable_factory import build_segment_with_type +from graphon.variables.types import SegmentType from libs.login import current_user, login_required from models import Account from models.dataset import Pipeline diff --git a/api/controllers/console/datasets/rag_pipeline/rag_pipeline_workflow.py b/api/controllers/console/datasets/rag_pipeline/rag_pipeline_workflow.py index a8077d9eb0..ee146e8287 100644 --- a/api/controllers/console/datasets/rag_pipeline/rag_pipeline_workflow.py +++ b/api/controllers/console/datasets/rag_pipeline/rag_pipeline_workflow.py @@ -4,7 +4,6 @@ from typing import Any, Literal, cast from flask import abort, request from flask_restx import Resource, marshal_with # type: ignore -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field, ValidationError from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import BadRequest, Forbidden, InternalServerError, NotFound @@ -41,6 +40,7 @@ from core.app.apps.pipeline.pipeline_generator import PipelineGenerator from core.app.entities.app_invoke_entities import InvokeFrom from extensions.ext_database import db from factories import variable_factory +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs import helper from libs.helper import TimestampField, UUIDStrOrEmpty from libs.login import current_account_with_tenant, current_user, login_required diff --git a/api/controllers/console/explore/audio.py b/api/controllers/console/explore/audio.py index a37077af42..ab660d9dc3 100644 --- a/api/controllers/console/explore/audio.py +++ b/api/controllers/console/explore/audio.py @@ -1,7 +1,6 @@ import logging from flask import request -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import InternalServerError import services @@ -20,6 +19,7 @@ from controllers.console.app.error import ( ) from controllers.console.explore.wraps import InstalledAppResource from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from services.audio_service import AudioService from services.errors.audio import ( AudioTooLargeServiceError, diff --git a/api/controllers/console/explore/completion.py b/api/controllers/console/explore/completion.py index eacd7332fe..ccdccceaa6 100644 --- a/api/controllers/console/explore/completion.py +++ b/api/controllers/console/explore/completion.py @@ -2,7 +2,6 @@ import logging from typing import Any, Literal from uuid import UUID -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import 
BaseModel, Field, field_validator from werkzeug.exceptions import InternalServerError, NotFound @@ -26,6 +25,7 @@ from core.errors.error import ( QuotaExceededError, ) from extensions.ext_database import db +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.datetime_utils import naive_utc_now from libs.login import current_user diff --git a/api/controllers/console/explore/installed_app.py b/api/controllers/console/explore/installed_app.py index 0740dd0e24..2d9a997fbf 100644 --- a/api/controllers/console/explore/installed_app.py +++ b/api/controllers/console/explore/installed_app.py @@ -1,21 +1,24 @@ import logging +from datetime import datetime from typing import Any from flask import request -from flask_restx import Resource, fields, marshal_with -from pydantic import BaseModel, Field +from flask_restx import Resource +from pydantic import BaseModel, Field, computed_field, field_validator from sqlalchemy import and_, select from werkzeug.exceptions import BadRequest, Forbidden, NotFound -from controllers.common.schema import get_or_create_model +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.explore.wraps import InstalledAppResource from controllers.console.wraps import account_initialization_required, cloud_edition_billing_resource_check from extensions.ext_database import db -from fields.installed_app_fields import app_fields, installed_app_fields, installed_app_list_fields +from fields.base import ResponseModel +from graphon.file import helpers as file_helpers from libs.datetime_utils import naive_utc_now from libs.login import current_account_with_tenant, login_required from models import App, InstalledApp, RecommendedApp +from models.model import IconType from services.account_service import TenantService from services.enterprise.enterprise_service import EnterpriseService from services.feature_service import FeatureService @@ -36,22 +39,97 @@ class InstalledAppsListQuery(BaseModel): logger = logging.getLogger(__name__) -app_model = get_or_create_model("InstalledAppInfo", app_fields) +def _build_icon_url(icon_type: str | IconType | None, icon: str | None) -> str | None: + if icon is None or icon_type is None: + return None + icon_type_value = icon_type.value if isinstance(icon_type, IconType) else str(icon_type) + if icon_type_value.lower() != IconType.IMAGE: + return None + return file_helpers.get_signed_file_url(icon) -installed_app_fields_copy = installed_app_fields.copy() -installed_app_fields_copy["app"] = fields.Nested(app_model) -installed_app_model = get_or_create_model("InstalledApp", installed_app_fields_copy) -installed_app_list_fields_copy = installed_app_list_fields.copy() -installed_app_list_fields_copy["installed_apps"] = fields.List(fields.Nested(installed_app_model)) -installed_app_list_model = get_or_create_model("InstalledAppList", installed_app_list_fields_copy) +def _safe_primitive(value: Any) -> Any: + if value is None or isinstance(value, (str, int, float, bool, datetime)): + return value + return None + + +class InstalledAppInfoResponse(ResponseModel): + id: str + name: str | None = None + mode: str | None = None + icon_type: str | None = None + icon: str | None = None + icon_background: str | None = None + use_icon_as_answer_icon: bool | None = None + + @field_validator("mode", "icon_type", mode="before") + @classmethod + def _normalize_enum_like(cls, value: Any) -> str | None: + if value is None: + return None + if isinstance(value, 
str): + return value + return str(getattr(value, "value", value)) + + @computed_field(return_type=str | None) # type: ignore[prop-decorator] + @property + def icon_url(self) -> str | None: + return _build_icon_url(self.icon_type, self.icon) + + +class InstalledAppResponse(ResponseModel): + id: str + app: InstalledAppInfoResponse + app_owner_tenant_id: str + is_pinned: bool + last_used_at: int | None = None + editable: bool + uninstallable: bool + + @field_validator("app", mode="before") + @classmethod + def _normalize_app(cls, value: Any) -> Any: + if isinstance(value, dict): + return value + return { + "id": _safe_primitive(getattr(value, "id", "")) or "", + "name": _safe_primitive(getattr(value, "name", None)), + "mode": _safe_primitive(getattr(value, "mode", None)), + "icon_type": _safe_primitive(getattr(value, "icon_type", None)), + "icon": _safe_primitive(getattr(value, "icon", None)), + "icon_background": _safe_primitive(getattr(value, "icon_background", None)), + "use_icon_as_answer_icon": _safe_primitive(getattr(value, "use_icon_as_answer_icon", None)), + } + + @field_validator("last_used_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class InstalledAppListResponse(ResponseModel): + installed_apps: list[InstalledAppResponse] + + +register_schema_models( + console_ns, + InstalledAppCreatePayload, + InstalledAppUpdatePayload, + InstalledAppsListQuery, + InstalledAppInfoResponse, + InstalledAppResponse, + InstalledAppListResponse, +) @console_ns.route("/installed-apps") class InstalledAppsListApi(Resource): @login_required @account_initialization_required - @marshal_with(installed_app_list_model) + @console_ns.response(200, "Success", console_ns.models[InstalledAppListResponse.__name__]) def get(self): query = InstalledAppsListQuery.model_validate(request.args.to_dict()) current_user, current_tenant_id = current_account_with_tenant() @@ -125,7 +203,9 @@ class InstalledAppsListApi(Resource): ) ) - return {"installed_apps": installed_app_list} + return InstalledAppListResponse.model_validate( + {"installed_apps": installed_app_list}, from_attributes=True + ).model_dump(mode="json") @login_required @account_initialization_required diff --git a/api/controllers/console/explore/message.py b/api/controllers/console/explore/message.py index 64d55d7ca3..209667d1d0 100644 --- a/api/controllers/console/explore/message.py +++ b/api/controllers/console/explore/message.py @@ -2,7 +2,6 @@ import logging from typing import Literal from flask import request -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, TypeAdapter from werkzeug.exceptions import InternalServerError, NotFound @@ -25,6 +24,7 @@ from core.app.entities.app_invoke_entities import InvokeFrom from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError from fields.conversation_fields import ResultResponse from fields.message_fields import MessageInfiniteScrollPagination, MessageListItem, SuggestedQuestionsResponse +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.login import current_account_with_tenant from models.enums import FeedbackRating diff --git a/api/controllers/console/explore/recommended_app.py b/api/controllers/console/explore/recommended_app.py index c9920c97cf..55bd679b48 100644 --- a/api/controllers/console/explore/recommended_app.py +++ 
b/api/controllers/console/explore/recommended_app.py @@ -1,66 +1,83 @@ +from typing import Any + from flask import request -from flask_restx import Resource, fields, marshal_with -from pydantic import BaseModel, Field +from flask_restx import Resource +from pydantic import BaseModel, Field, computed_field, field_validator from constants.languages import languages -from controllers.common.schema import get_or_create_model +from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required -from libs.helper import AppIconUrlField +from fields.base import ResponseModel +from libs.helper import build_icon_url from libs.login import current_user, login_required from services.recommended_app_service import RecommendedAppService -app_fields = { - "id": fields.String, - "name": fields.String, - "mode": fields.String, - "icon": fields.String, - "icon_type": fields.String, - "icon_url": AppIconUrlField, - "icon_background": fields.String, -} - -app_model = get_or_create_model("RecommendedAppInfo", app_fields) - -recommended_app_fields = { - "app": fields.Nested(app_model, attribute="app"), - "app_id": fields.String, - "description": fields.String(attribute="description"), - "copyright": fields.String, - "privacy_policy": fields.String, - "custom_disclaimer": fields.String, - "category": fields.String, - "position": fields.Integer, - "is_listed": fields.Boolean, - "can_trial": fields.Boolean, -} - -recommended_app_model = get_or_create_model("RecommendedApp", recommended_app_fields) - -recommended_app_list_fields = { - "recommended_apps": fields.List(fields.Nested(recommended_app_model)), - "categories": fields.List(fields.String), -} - -recommended_app_list_model = get_or_create_model("RecommendedAppList", recommended_app_list_fields) - class RecommendedAppsQuery(BaseModel): language: str | None = Field(default=None) -console_ns.schema_model( - RecommendedAppsQuery.__name__, - RecommendedAppsQuery.model_json_schema(ref_template="#/definitions/{model}"), +class RecommendedAppInfoResponse(ResponseModel): + id: str + name: str | None = None + mode: str | None = None + icon: str | None = None + icon_type: str | None = None + icon_background: str | None = None + + @staticmethod + def _normalize_enum_like(value: Any) -> str | None: + if value is None: + return None + if isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @field_validator("mode", "icon_type", mode="before") + @classmethod + def _normalize_enum_fields(cls, value: Any) -> str | None: + return cls._normalize_enum_like(value) + + @computed_field(return_type=str | None) # type: ignore[prop-decorator] + @property + def icon_url(self) -> str | None: + return build_icon_url(self.icon_type, self.icon) + + +class RecommendedAppResponse(ResponseModel): + app: RecommendedAppInfoResponse | None = None + app_id: str + description: str | None = None + copyright: str | None = None + privacy_policy: str | None = None + custom_disclaimer: str | None = None + category: str | None = None + position: int | None = None + is_listed: bool | None = None + can_trial: bool | None = None + + +class RecommendedAppListResponse(ResponseModel): + recommended_apps: list[RecommendedAppResponse] + categories: list[str] + + +register_schema_models( + console_ns, + RecommendedAppsQuery, + RecommendedAppInfoResponse, + RecommendedAppResponse, + RecommendedAppListResponse, ) @console_ns.route("/explore/apps") class 
RecommendedAppListApi(Resource): @console_ns.expect(console_ns.models[RecommendedAppsQuery.__name__]) + @console_ns.response(200, "Success", console_ns.models[RecommendedAppListResponse.__name__]) @login_required @account_initialization_required - @marshal_with(recommended_app_list_model) def get(self): # language args args = RecommendedAppsQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore @@ -72,7 +89,10 @@ class RecommendedAppListApi(Resource): else: language_prefix = languages[0] - return RecommendedAppService.get_recommended_apps_and_categories(language_prefix) + return RecommendedAppListResponse.model_validate( + RecommendedAppService.get_recommended_apps_and_categories(language_prefix), + from_attributes=True, + ).model_dump(mode="json") @console_ns.route("/explore/apps/") diff --git a/api/controllers/console/explore/trial.py b/api/controllers/console/explore/trial.py index e432574434..1456301a24 100644 --- a/api/controllers/console/explore/trial.py +++ b/api/controllers/console/explore/trial.py @@ -3,8 +3,6 @@ from typing import Any, Literal, cast from flask import request from flask_restx import Resource, fields, marshal, marshal_with -from graphon.graph_engine.manager import GraphEngineManager -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel from sqlalchemy import select from werkzeug.exceptions import Forbidden, InternalServerError, NotFound @@ -61,6 +59,8 @@ from fields.workflow_fields import ( workflow_fields, workflow_partial_fields, ) +from graphon.graph_engine.manager import GraphEngineManager +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.helper import uuid_value from libs.login import current_user @@ -169,6 +169,7 @@ console_ns.schema_model( class TrialAppWorkflowRunApi(TrialAppResource): + @trial_feature_enable @console_ns.expect(console_ns.models[WorkflowRunRequest.__name__]) def post(self, trial_app): """ @@ -210,6 +211,7 @@ class TrialAppWorkflowRunApi(TrialAppResource): class TrialAppWorkflowTaskStopApi(TrialAppResource): + @trial_feature_enable def post(self, trial_app, task_id: str): """ Stop workflow task @@ -290,7 +292,6 @@ class TrialChatApi(TrialAppResource): class TrialMessageSuggestedQuestionApi(TrialAppResource): - @trial_feature_enable def get(self, trial_app, message_id): app_model = trial_app app_mode = AppMode.value_of(app_model.mode) @@ -470,7 +471,6 @@ class TrialCompletionApi(TrialAppResource): class TrialSitApi(Resource): """Resource for trial app sites.""" - @trial_feature_enable @get_app_model_with_trial(None) def get(self, app_model): """Retrieve app site info. 
@@ -492,7 +492,6 @@ class TrialSitApi(Resource): class TrialAppParameterApi(Resource): """Resource for app variables.""" - @trial_feature_enable @get_app_model_with_trial(None) def get(self, app_model): """Retrieve app parameters.""" @@ -521,7 +520,6 @@ class TrialAppParameterApi(Resource): class AppApi(Resource): - @trial_feature_enable @get_app_model_with_trial(None) @marshal_with(app_detail_with_site_model) def get(self, app_model): @@ -534,7 +532,6 @@ class AppApi(Resource): class AppWorkflowApi(Resource): - @trial_feature_enable @get_app_model_with_trial(None) @marshal_with(workflow_model) def get(self, app_model): @@ -547,7 +544,6 @@ class AppWorkflowApi(Resource): class DatasetListApi(Resource): - @trial_feature_enable @get_app_model_with_trial(None) def get(self, app_model): page = request.args.get("page", default=1, type=int) diff --git a/api/controllers/console/explore/workflow.py b/api/controllers/console/explore/workflow.py index da88de6776..438cce4fd8 100644 --- a/api/controllers/console/explore/workflow.py +++ b/api/controllers/console/explore/workflow.py @@ -1,7 +1,5 @@ import logging -from graphon.graph_engine.manager import GraphEngineManager -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import InternalServerError from controllers.common.controller_schemas import WorkflowRunPayload @@ -23,6 +21,8 @@ from core.errors.error import ( QuotaExceededError, ) from extensions.ext_redis import redis_client +from graphon.graph_engine.manager import GraphEngineManager +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.login import current_account_with_tenant from models.model import AppMode, InstalledApp diff --git a/api/controllers/console/extension.py b/api/controllers/console/extension.py index efa46c9779..7a6356d052 100644 --- a/api/controllers/console/extension.py +++ b/api/controllers/console/extension.py @@ -1,15 +1,18 @@ +from datetime import datetime +from typing import Any + from flask import request -from flask_restx import Resource, fields, marshal_with -from pydantic import BaseModel, Field +from flask_restx import Resource +from pydantic import BaseModel, Field, TypeAdapter, field_validator from constants import HIDDEN_VALUE -from fields.api_based_extension_fields import api_based_extension_fields +from fields.base import ResponseModel from libs.login import current_account_with_tenant, login_required from models.api_based_extension import APIBasedExtension from services.api_based_extension_service import APIBasedExtensionService from services.code_based_extension_service import CodeBasedExtensionService -from ..common.schema import register_schema_models +from ..common.schema import DEFAULT_REF_TEMPLATE_SWAGGER_2_0, register_schema_models from . 
import console_ns from .wraps import account_initialization_required, setup_required @@ -24,12 +27,52 @@ class APIBasedExtensionPayload(BaseModel): api_key: str = Field(description="API key for authentication") -register_schema_models(console_ns, APIBasedExtensionPayload) +class CodeBasedExtensionResponse(ResponseModel): + module: str = Field(description="Module name") + data: Any = Field(description="Extension data") -api_based_extension_model = console_ns.model("ApiBasedExtensionModel", api_based_extension_fields) +def _mask_api_key(api_key: str) -> str: + if not api_key: + return api_key + if len(api_key) <= 8: + return api_key[0] + "******" + api_key[-1] + return api_key[:3] + "******" + api_key[-3:] -api_based_extension_list_model = fields.List(fields.Nested(api_based_extension_model)) + +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class APIBasedExtensionResponse(ResponseModel): + id: str + name: str + api_endpoint: str + api_key: str + created_at: int | None = None + + @field_validator("api_key", mode="before") + @classmethod + def _normalize_api_key(cls, value: str) -> str: + return _mask_api_key(value) + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_created_at(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +register_schema_models(console_ns, APIBasedExtensionPayload, CodeBasedExtensionResponse, APIBasedExtensionResponse) +console_ns.schema_model( + "APIBasedExtensionListResponse", + TypeAdapter(list[APIBasedExtensionResponse]).json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0), +) + + +def _serialize_api_based_extension(extension: APIBasedExtension) -> dict[str, Any]: + return APIBasedExtensionResponse.model_validate(extension, from_attributes=True).model_dump(mode="json") @console_ns.route("/code-based-extension") @@ -40,10 +83,7 @@ class CodeBasedExtensionAPI(Resource): @console_ns.response( 200, "Success", - console_ns.model( - "CodeBasedExtensionResponse", - {"module": fields.String(description="Module name"), "data": fields.Raw(description="Extension data")}, - ), + console_ns.models[CodeBasedExtensionResponse.__name__], ) @setup_required @login_required @@ -51,30 +91,34 @@ class CodeBasedExtensionAPI(Resource): def get(self): query = CodeBasedExtensionQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore - return {"module": query.module, "data": CodeBasedExtensionService.get_code_based_extension(query.module)} + return CodeBasedExtensionResponse( + module=query.module, + data=CodeBasedExtensionService.get_code_based_extension(query.module), + ).model_dump(mode="json") @console_ns.route("/api-based-extension") class APIBasedExtensionAPI(Resource): @console_ns.doc("get_api_based_extensions") @console_ns.doc(description="Get all API-based extensions for current tenant") - @console_ns.response(200, "Success", api_based_extension_list_model) + @console_ns.response(200, "Success", console_ns.models["APIBasedExtensionListResponse"]) @setup_required @login_required @account_initialization_required - @marshal_with(api_based_extension_model) def get(self): _, tenant_id = current_account_with_tenant() - return APIBasedExtensionService.get_all_by_tenant_id(tenant_id) + return [ + _serialize_api_based_extension(extension) + for extension in APIBasedExtensionService.get_all_by_tenant_id(tenant_id) + ] @console_ns.doc("create_api_based_extension") @console_ns.doc(description="Create a new 
API-based extension") @console_ns.expect(console_ns.models[APIBasedExtensionPayload.__name__]) - @console_ns.response(201, "Extension created successfully", api_based_extension_model) + @console_ns.response(201, "Extension created successfully", console_ns.models[APIBasedExtensionResponse.__name__]) @setup_required @login_required @account_initialization_required - @marshal_with(api_based_extension_model) def post(self): payload = APIBasedExtensionPayload.model_validate(console_ns.payload or {}) _, current_tenant_id = current_account_with_tenant() @@ -86,7 +130,7 @@ class APIBasedExtensionAPI(Resource): api_key=payload.api_key, ) - return APIBasedExtensionService.save(extension_data) + return _serialize_api_based_extension(APIBasedExtensionService.save(extension_data)) @console_ns.route("/api-based-extension/") @@ -94,26 +138,26 @@ class APIBasedExtensionDetailAPI(Resource): @console_ns.doc("get_api_based_extension") @console_ns.doc(description="Get API-based extension by ID") @console_ns.doc(params={"id": "Extension ID"}) - @console_ns.response(200, "Success", api_based_extension_model) + @console_ns.response(200, "Success", console_ns.models[APIBasedExtensionResponse.__name__]) @setup_required @login_required @account_initialization_required - @marshal_with(api_based_extension_model) def get(self, id): api_based_extension_id = str(id) _, tenant_id = current_account_with_tenant() - return APIBasedExtensionService.get_with_tenant_id(tenant_id, api_based_extension_id) + return _serialize_api_based_extension( + APIBasedExtensionService.get_with_tenant_id(tenant_id, api_based_extension_id) + ) @console_ns.doc("update_api_based_extension") @console_ns.doc(description="Update API-based extension") @console_ns.doc(params={"id": "Extension ID"}) @console_ns.expect(console_ns.models[APIBasedExtensionPayload.__name__]) - @console_ns.response(200, "Extension updated successfully", api_based_extension_model) + @console_ns.response(200, "Extension updated successfully", console_ns.models[APIBasedExtensionResponse.__name__]) @setup_required @login_required @account_initialization_required - @marshal_with(api_based_extension_model) def post(self, id): api_based_extension_id = str(id) _, current_tenant_id = current_account_with_tenant() @@ -128,7 +172,7 @@ class APIBasedExtensionDetailAPI(Resource): if payload.api_key != HIDDEN_VALUE: extension_data_from_db.api_key = payload.api_key - return APIBasedExtensionService.save(extension_data_from_db) + return _serialize_api_based_extension(APIBasedExtensionService.save(extension_data_from_db)) @console_ns.doc("delete_api_based_extension") @console_ns.doc(description="Delete API-based extension") diff --git a/api/controllers/console/notification.py b/api/controllers/console/notification.py index 180167402a..5d46470173 100644 --- a/api/controllers/console/notification.py +++ b/api/controllers/console/notification.py @@ -1,3 +1,4 @@ +from collections.abc import Mapping from typing import TypedDict from flask import request @@ -13,6 +14,14 @@ from services.billing_service import BillingService _FALLBACK_LANG = "en-US" +class NotificationLangContent(TypedDict, total=False): + lang: str + title: str + subtitle: str + body: str + titlePicUrl: str + + class NotificationItemDict(TypedDict): notification_id: str | None frequency: str | None @@ -28,9 +37,11 @@ class NotificationResponseDict(TypedDict): notifications: list[NotificationItemDict] -def _pick_lang_content(contents: dict, lang: str) -> dict: +def _pick_lang_content(contents: Mapping[str, 
NotificationLangContent], lang: str) -> NotificationLangContent: """Return the single LangContent for *lang*, falling back to English.""" - return contents.get(lang) or contents.get(_FALLBACK_LANG) or next(iter(contents.values()), {}) + return ( + contents.get(lang) or contents.get(_FALLBACK_LANG) or next(iter(contents.values()), NotificationLangContent()) + ) class DismissNotificationPayload(BaseModel): @@ -71,7 +82,7 @@ class NotificationApi(Resource): notifications: list[NotificationItemDict] = [] for notification in result.get("notifications") or []: - contents: dict = notification.get("contents") or {} + contents: Mapping[str, NotificationLangContent] = notification.get("contents") or {} lang_content = _pick_lang_content(contents, lang) item: NotificationItemDict = { "notification_id": notification.get("notificationId"), diff --git a/api/controllers/console/remote_files.py b/api/controllers/console/remote_files.py index 551c86fd82..2a46d2250a 100644 --- a/api/controllers/console/remote_files.py +++ b/api/controllers/console/remote_files.py @@ -2,7 +2,6 @@ import urllib.parse import httpx from flask_restx import Resource -from graphon.file import helpers as file_helpers from pydantic import BaseModel, Field import services @@ -16,6 +15,7 @@ from controllers.console import console_ns from core.helper import ssrf_proxy from extensions.ext_database import db from fields.file_fields import FileWithSignedUrl, RemoteFileInfo +from graphon.file import helpers as file_helpers from libs.login import current_account_with_tenant, login_required from services.file_service import FileService diff --git a/api/controllers/console/socketio/__init__.py b/api/controllers/console/socketio/__init__.py new file mode 100644 index 0000000000..8b13789179 --- /dev/null +++ b/api/controllers/console/socketio/__init__.py @@ -0,0 +1 @@ + diff --git a/api/controllers/console/socketio/workflow.py b/api/controllers/console/socketio/workflow.py new file mode 100644 index 0000000000..b4f03593fd --- /dev/null +++ b/api/controllers/console/socketio/workflow.py @@ -0,0 +1,108 @@ +import logging +from collections.abc import Callable +from typing import cast + +from flask import Request as FlaskRequest + +from extensions.ext_socketio import sio +from libs.passport import PassportService +from libs.token import extract_access_token +from repositories.workflow_collaboration_repository import WorkflowCollaborationRepository +from services.account_service import AccountService +from services.workflow_collaboration_service import WorkflowCollaborationService + +repository = WorkflowCollaborationRepository() +collaboration_service = WorkflowCollaborationService(repository, sio) + + +def _sio_on(event: str) -> Callable[[Callable[..., object]], Callable[..., object]]: + return cast(Callable[[Callable[..., object]], Callable[..., object]], sio.on(event)) + + +@_sio_on("connect") +def socket_connect(sid, environ, auth): + """ + WebSocket connect event, do authentication here. 
+ """ + try: + request_environ = FlaskRequest(environ) + token = extract_access_token(request_environ) + except Exception: + logging.exception("Failed to extract token") + token = None + + if not token: + logging.warning("Socket connect rejected: missing token (sid=%s)", sid) + return False + + try: + decoded = PassportService().verify(token) + user_id = decoded.get("user_id") + if not user_id: + logging.warning("Socket connect rejected: missing user_id (sid=%s)", sid) + return False + + with sio.app.app_context(): + user = AccountService.load_logged_in_account(account_id=user_id) + if not user: + logging.warning("Socket connect rejected: user not found (user_id=%s, sid=%s)", user_id, sid) + return False + if not user.has_edit_permission: + logging.warning("Socket connect rejected: no edit permission (user_id=%s, sid=%s)", user_id, sid) + return False + + collaboration_service.save_socket_identity(sid, user) + return True + + except Exception: + logging.exception("Socket authentication failed") + return False + + +@_sio_on("user_connect") +def handle_user_connect(sid, data): + """ + Handle user connect event. Each session (tab) is treated as an independent collaborator. + """ + workflow_id = data.get("workflow_id") + if not workflow_id: + return {"msg": "workflow_id is required"}, 400 + + result = collaboration_service.authorize_and_join_workflow_room(workflow_id, sid) + if not result: + return {"msg": "unauthorized"}, 401 + + user_id, is_leader = result + return {"msg": "connected", "user_id": user_id, "sid": sid, "isLeader": is_leader} + + +@_sio_on("disconnect") +def handle_disconnect(sid): + """ + Handle session disconnect event. Remove the specific session from online users. + """ + collaboration_service.disconnect_session(sid) + + +@_sio_on("collaboration_event") +def handle_collaboration_event(sid, data): + """ + Handle general collaboration events, include: + 1. mouse_move + 2. vars_and_features_update + 3. sync_request (ask leader to update graph) + 4. app_state_update + 5. mcp_server_update + 6. workflow_update + 7. comments_update + 8. node_panel_presence + """ + return collaboration_service.relay_collaboration_event(sid, data) + + +@_sio_on("graph_event") +def handle_graph_event(sid, data): + """ + Handle graph events - simple broadcast relay. 
+ """ + return collaboration_service.relay_graph_event(sid, data) diff --git a/api/controllers/console/tag/tags.py b/api/controllers/console/tag/tags.py index 39b84d3869..614bf03ea5 100644 --- a/api/controllers/console/tag/tags.py +++ b/api/controllers/console/tag/tags.py @@ -1,13 +1,14 @@ from typing import Literal from flask import request -from flask_restx import Namespace, Resource, fields, marshal_with -from pydantic import BaseModel, Field +from flask_restx import Resource +from pydantic import BaseModel, Field, field_validator from werkzeug.exceptions import Forbidden from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, edit_permission_required, setup_required +from fields.base import ResponseModel from libs.login import current_account_with_tenant, login_required from models.enums import TagType from services.tag_service import ( @@ -18,17 +19,6 @@ from services.tag_service import ( UpdateTagPayload, ) -dataset_tag_fields = { - "id": fields.String, - "name": fields.String, - "type": fields.String, - "binding_count": fields.String, -} - - -def build_dataset_tag_fields(api_or_ns: Namespace): - return api_or_ns.model("DataSetTag", dataset_tag_fields) - class TagBasePayload(BaseModel): name: str = Field(description="Tag name", min_length=1, max_length=50) @@ -52,12 +42,36 @@ class TagListQueryParam(BaseModel): keyword: str | None = Field(None, description="Search keyword") +class TagResponse(ResponseModel): + id: str + name: str + type: str | None = None + binding_count: str | None = None + + @field_validator("type", mode="before") + @classmethod + def normalize_type(cls, value: TagType | str | None) -> str | None: + if value is None: + return None + if isinstance(value, TagType): + return value.value + return value + + @field_validator("binding_count", mode="before") + @classmethod + def normalize_binding_count(cls, value: int | str | None) -> str | None: + if value is None: + return None + return str(value) + + register_schema_models( console_ns, TagBasePayload, TagBindingPayload, TagBindingRemovePayload, TagListQueryParam, + TagResponse, ) @@ -69,14 +83,18 @@ class TagListApi(Resource): @console_ns.doc( params={"type": 'Tag type filter. 
Can be "knowledge" or "app".', "keyword": "Search keyword for tag name."} ) - @marshal_with(dataset_tag_fields) + @console_ns.doc(responses={200: ("Success", [console_ns.models[TagResponse.__name__]])}) def get(self): _, current_tenant_id = current_account_with_tenant() raw_args = request.args.to_dict() param = TagListQueryParam.model_validate(raw_args) tags = TagService.get_tags(param.type, current_tenant_id, param.keyword) - return tags, 200 + serialized_tags = [ + TagResponse.model_validate(tag, from_attributes=True).model_dump(mode="json") for tag in tags + ] + + return serialized_tags, 200 @console_ns.expect(console_ns.models[TagBasePayload.__name__]) @setup_required @@ -91,7 +109,9 @@ class TagListApi(Resource): payload = TagBasePayload.model_validate(console_ns.payload or {}) tag = TagService.save_tags(SaveTagPayload(name=payload.name, type=payload.type)) - response = {"id": tag.id, "name": tag.name, "type": tag.type, "binding_count": 0} + response = TagResponse.model_validate( + {"id": tag.id, "name": tag.name, "type": tag.type, "binding_count": 0} + ).model_dump(mode="json") return response, 200 @@ -114,7 +134,9 @@ class TagUpdateDeleteApi(Resource): binding_count = TagService.get_tag_binding_count(tag_id) - response = {"id": tag.id, "name": tag.name, "type": tag.type, "binding_count": binding_count} + response = TagResponse.model_validate( + {"id": tag.id, "name": tag.name, "type": tag.type, "binding_count": binding_count} + ).model_dump(mode="json") return response, 200 diff --git a/api/controllers/console/workspace/__init__.py b/api/controllers/console/workspace/__init__.py index 60f712e476..59dd29fdac 100644 --- a/api/controllers/console/workspace/__init__.py +++ b/api/controllers/console/workspace/__init__.py @@ -35,22 +35,24 @@ def plugin_permission_required( return view(*args, **kwargs) if install_required: - if permission.install_permission == TenantPluginPermission.InstallPermission.NOBODY: - raise Forbidden() - if permission.install_permission == TenantPluginPermission.InstallPermission.ADMINS: - if not user.is_admin_or_owner: + match permission.install_permission: + case TenantPluginPermission.InstallPermission.NOBODY: raise Forbidden() - if permission.install_permission == TenantPluginPermission.InstallPermission.EVERYONE: - pass + case TenantPluginPermission.InstallPermission.ADMINS: + if not user.is_admin_or_owner: + raise Forbidden() + case TenantPluginPermission.InstallPermission.EVERYONE: + pass if debug_required: - if permission.debug_permission == TenantPluginPermission.DebugPermission.NOBODY: - raise Forbidden() - if permission.debug_permission == TenantPluginPermission.DebugPermission.ADMINS: - if not user.is_admin_or_owner: + match permission.debug_permission: + case TenantPluginPermission.DebugPermission.NOBODY: raise Forbidden() - if permission.debug_permission == TenantPluginPermission.DebugPermission.EVERYONE: - pass + case TenantPluginPermission.DebugPermission.ADMINS: + if not user.is_admin_or_owner: + raise Forbidden() + case TenantPluginPermission.DebugPermission.EVERYONE: + pass return view(*args, **kwargs) diff --git a/api/controllers/console/workspace/account.py b/api/controllers/console/workspace/account.py index af25669ae0..44404005b2 100644 --- a/api/controllers/console/workspace/account.py +++ b/api/controllers/console/workspace/account.py @@ -1,11 +1,11 @@ from __future__ import annotations from datetime import datetime -from typing import Literal +from typing import Any, Literal import pytz from flask import request -from flask_restx import 
Resource, fields, marshal_with +from flask_restx import Resource from pydantic import BaseModel, Field, field_validator, model_validator from sqlalchemy import select @@ -37,9 +37,11 @@ from controllers.console.wraps import ( setup_required, ) from extensions.ext_database import db +from fields.base import ResponseModel from fields.member_fields import Account as AccountResponse +from graphon.file import helpers as file_helpers from libs.datetime_utils import naive_utc_now -from libs.helper import EmailStr, TimestampField, extract_remote_ip, timezone +from libs.helper import EmailStr, extract_remote_ip, timezone from libs.login import current_account_with_tenant, login_required from models import AccountIntegrate, InvitationCode from models.account import AccountStatus, InvitationCodeStatus @@ -74,6 +76,10 @@ class AccountAvatarPayload(BaseModel): avatar: str +class AccountAvatarQuery(BaseModel): + avatar: str = Field(..., description="Avatar file ID") + + class AccountInterfaceLanguagePayload(BaseModel): interface_language: str @@ -159,6 +165,7 @@ def reg(cls: type[BaseModel]): reg(AccountInitPayload) reg(AccountNamePayload) reg(AccountAvatarPayload) +reg(AccountAvatarQuery) reg(AccountInterfaceLanguagePayload) reg(AccountInterfaceThemePayload) reg(AccountTimezonePayload) @@ -174,21 +181,61 @@ reg(CheckEmailUniquePayload) register_schema_models(console_ns, AccountResponse) -def _serialize_account(account) -> dict: +def _serialize_account(account) -> dict[str, Any]: return AccountResponse.model_validate(account, from_attributes=True).model_dump(mode="json") -integrate_fields = { - "provider": fields.String, - "created_at": TimestampField, - "is_bound": fields.Boolean, - "link": fields.String, -} +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value -integrate_model = console_ns.model("AccountIntegrate", integrate_fields) -integrate_list_model = console_ns.model( - "AccountIntegrateList", - {"data": fields.List(fields.Nested(integrate_model))}, + +class AccountIntegrateResponse(ResponseModel): + provider: str + created_at: int | None = None + is_bound: bool + link: str | None = None + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_created_at(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class AccountIntegrateListResponse(ResponseModel): + data: list[AccountIntegrateResponse] + + +class EducationVerifyResponse(ResponseModel): + token: str | None = None + + +class EducationStatusResponse(ResponseModel): + result: bool | None = None + is_student: bool | None = None + expire_at: int | None = None + allow_refresh: bool | None = None + + @field_validator("expire_at", mode="before") + @classmethod + def _normalize_expire_at(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class EducationAutocompleteResponse(ResponseModel): + data: list[str] = Field(default_factory=list) + curr_page: int | None = None + has_next: bool | None = None + + +register_schema_models( + console_ns, + AccountIntegrateResponse, + AccountIntegrateListResponse, + EducationVerifyResponse, + EducationStatusResponse, + EducationAutocompleteResponse, ) @@ -268,6 +315,18 @@ class AccountNameApi(Resource): @console_ns.route("/account/avatar") class AccountAvatarApi(Resource): + @console_ns.expect(console_ns.models[AccountAvatarQuery.__name__]) + @console_ns.doc("get_account_avatar") + @console_ns.doc(description="Get account avatar url") 
+ @setup_required + @login_required + @account_initialization_required + def get(self): + args = AccountAvatarQuery.model_validate(request.args.to_dict(flat=True)) # type: ignore + + avatar_url = file_helpers.get_signed_file_url(args.avatar) + return {"avatar_url": avatar_url} + @console_ns.expect(console_ns.models[AccountAvatarPayload.__name__]) @setup_required @login_required @@ -359,7 +418,7 @@ class AccountIntegrateApi(Resource): @setup_required @login_required @account_initialization_required - @marshal_with(integrate_list_model) + @console_ns.response(200, "Success", console_ns.models[AccountIntegrateListResponse.__name__]) def get(self): account, _ = current_account_with_tenant() @@ -395,7 +454,9 @@ class AccountIntegrateApi(Resource): } ) - return {"data": integrate_data} + return AccountIntegrateListResponse( + data=[AccountIntegrateResponse.model_validate(item) for item in integrate_data] + ).model_dump(mode="json") @console_ns.route("/account/delete/verify") @@ -447,31 +508,22 @@ class AccountDeleteUpdateFeedbackApi(Resource): @console_ns.route("/account/education/verify") class EducationVerifyApi(Resource): - verify_fields = { - "token": fields.String, - } - @setup_required @login_required @account_initialization_required @only_edition_cloud @cloud_edition_billing_enabled - @marshal_with(verify_fields) + @console_ns.response(200, "Success", console_ns.models[EducationVerifyResponse.__name__]) def get(self): account, _ = current_account_with_tenant() - return BillingService.EducationIdentity.verify(account.id, account.email) + return EducationVerifyResponse.model_validate( + BillingService.EducationIdentity.verify(account.id, account.email) or {} + ).model_dump(mode="json") @console_ns.route("/account/education") class EducationApi(Resource): - status_fields = { - "result": fields.Boolean, - "is_student": fields.Boolean, - "expire_at": TimestampField, - "allow_refresh": fields.Boolean, - } - @console_ns.expect(console_ns.models[EducationActivatePayload.__name__]) @setup_required @login_required @@ -491,37 +543,33 @@ class EducationApi(Resource): @account_initialization_required @only_edition_cloud @cloud_edition_billing_enabled - @marshal_with(status_fields) + @console_ns.response(200, "Success", console_ns.models[EducationStatusResponse.__name__]) def get(self): account, _ = current_account_with_tenant() - res = BillingService.EducationIdentity.status(account.id) + res = BillingService.EducationIdentity.status(account.id) or {} # convert expire_at to UTC timestamp from isoformat if res and "expire_at" in res: res["expire_at"] = datetime.fromisoformat(res["expire_at"]).astimezone(pytz.utc) - return res + return EducationStatusResponse.model_validate(res).model_dump(mode="json") @console_ns.route("/account/education/autocomplete") class EducationAutoCompleteApi(Resource): - data_fields = { - "data": fields.List(fields.String), - "curr_page": fields.Integer, - "has_next": fields.Boolean, - } - @console_ns.expect(console_ns.models[EducationAutocompleteQuery.__name__]) @setup_required @login_required @account_initialization_required @only_edition_cloud @cloud_edition_billing_enabled - @marshal_with(data_fields) + @console_ns.response(200, "Success", console_ns.models[EducationAutocompleteResponse.__name__]) def get(self): payload = request.args.to_dict(flat=True) args = EducationAutocompleteQuery.model_validate(payload) - return BillingService.EducationIdentity.autocomplete(args.keywords, args.page, args.limit) + return EducationAutocompleteResponse.model_validate( + 
BillingService.EducationIdentity.autocomplete(args.keywords, args.page, args.limit) or {} + ).model_dump(mode="json") @console_ns.route("/account/change-email") diff --git a/api/controllers/console/workspace/agent_providers.py b/api/controllers/console/workspace/agent_providers.py index 3fdcbc4710..764f488755 100644 --- a/api/controllers/console/workspace/agent_providers.py +++ b/api/controllers/console/workspace/agent_providers.py @@ -1,8 +1,8 @@ from flask_restx import Resource, fields -from graphon.model_runtime.utils.encoders import jsonable_encoder from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, setup_required +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.login import current_account_with_tenant, login_required from services.agent_service import AgentService diff --git a/api/controllers/console/workspace/endpoint.py b/api/controllers/console/workspace/endpoint.py index b6b9deb1f9..f45b72f390 100644 --- a/api/controllers/console/workspace/endpoint.py +++ b/api/controllers/console/workspace/endpoint.py @@ -2,13 +2,13 @@ from typing import Any from flask import request from flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, is_admin_or_owner_required, setup_required from core.plugin.impl.exc import PluginPermissionDeniedError +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.login import current_account_with_tenant, login_required from services.plugin.endpoint_service import EndpointService diff --git a/api/controllers/console/workspace/load_balancing_config.py b/api/controllers/console/workspace/load_balancing_config.py index e4cfca9fa4..2a6f37aec8 100644 --- a/api/controllers/console/workspace/load_balancing_config.py +++ b/api/controllers/console/workspace/load_balancing_config.py @@ -1,12 +1,12 @@ from flask_restx import Resource -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from pydantic import BaseModel from werkzeug.exceptions import Forbidden from controllers.common.schema import register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, setup_required +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from libs.login import current_account_with_tenant, login_required from models import TenantAccountRole from services.model_load_balancing_service import ModelLoadBalancingService diff --git a/api/controllers/console/workspace/model_providers.py b/api/controllers/console/workspace/model_providers.py index cbb9677309..4b10561fdb 100644 --- a/api/controllers/console/workspace/model_providers.py +++ b/api/controllers/console/workspace/model_providers.py @@ -3,13 +3,13 @@ from typing import Any, Literal from flask import request, send_file from flask_restx import Resource -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field, 
field_validator from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, is_admin_or_owner_required, setup_required +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.helper import uuid_value from libs.login import current_account_with_tenant, login_required from services.billing_service import BillingService diff --git a/api/controllers/console/workspace/models.py b/api/controllers/console/workspace/models.py index 9182dbb510..b2d07ff8f9 100644 --- a/api/controllers/console/workspace/models.py +++ b/api/controllers/console/workspace/models.py @@ -3,14 +3,14 @@ from typing import Any, cast from flask import request from flask_restx import Resource -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field, field_validator from controllers.common.schema import register_enum_models, register_schema_models from controllers.console import console_ns from controllers.console.wraps import account_initialization_required, is_admin_or_owner_required, setup_required +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.helper import uuid_value from libs.login import current_account_with_tenant, login_required from services.model_load_balancing_service import ModelLoadBalancingService @@ -465,7 +465,7 @@ class ModelProviderModelDisableApi(Resource): class ParserValidate(BaseModel): model: str model_type: ModelType - credentials: dict + credentials: dict[str, Any] console_ns.schema_model( diff --git a/api/controllers/console/workspace/plugin.py b/api/controllers/console/workspace/plugin.py index aa674a63b3..b3e344ccea 100644 --- a/api/controllers/console/workspace/plugin.py +++ b/api/controllers/console/workspace/plugin.py @@ -4,7 +4,6 @@ from typing import Any, Literal from flask import request, send_file from flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field from werkzeug.datastructures import FileStorage from werkzeug.exceptions import Forbidden @@ -15,6 +14,7 @@ from controllers.console import console_ns from controllers.console.workspace import plugin_permission_required from controllers.console.wraps import account_initialization_required, is_admin_or_owner_required, setup_required from core.plugin.impl.exc import PluginDaemonClientSideError +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.login import current_account_with_tenant, login_required from models.account import TenantPluginAutoUpgradeStrategy, TenantPluginPermission from services.plugin.plugin_auto_upgrade_service import PluginAutoUpgradeService diff --git a/api/controllers/console/workspace/tool_providers.py b/api/controllers/console/workspace/tool_providers.py index c9956501e2..471594f349 100644 --- a/api/controllers/console/workspace/tool_providers.py +++ b/api/controllers/console/workspace/tool_providers.py @@ -5,7 +5,6 @@ from urllib.parse import urlparse from flask import make_response, redirect, request, send_file from 
flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field, HttpUrl, field_validator, model_validator from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import Forbidden @@ -28,6 +27,7 @@ from core.plugin.entities.plugin_daemon import CredentialType from core.plugin.impl.oauth import OAuthHandler from core.tools.entities.tool_entities import ApiProviderSchemaType, WorkflowToolParameterConfiguration from extensions.ext_database import db +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.helper import alphanumeric, uuid_value from libs.login import current_account_with_tenant, login_required from models.provider_ids import ToolProviderID diff --git a/api/controllers/console/workspace/trigger_providers.py b/api/controllers/console/workspace/trigger_providers.py index 7a28a09861..d11b66244f 100644 --- a/api/controllers/console/workspace/trigger_providers.py +++ b/api/controllers/console/workspace/trigger_providers.py @@ -3,7 +3,6 @@ from typing import Any from flask import make_response, redirect, request from flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, model_validator from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import BadRequest, Forbidden @@ -16,6 +15,7 @@ from core.plugin.impl.oauth import OAuthHandler from core.trigger.entities.entities import SubscriptionBuilderUpdater from core.trigger.trigger_manager import TriggerManager from extensions.ext_database import db +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.login import current_user, login_required from models.account import Account from models.provider_ids import TriggerProviderID diff --git a/api/controllers/console/workspace/workspace.py b/api/controllers/console/workspace/workspace.py index 42874e6033..565099db61 100644 --- a/api/controllers/console/workspace/workspace.py +++ b/api/controllers/console/workspace/workspace.py @@ -1,8 +1,9 @@ import logging +from datetime import datetime from flask import request -from flask_restx import Resource, fields, marshal, marshal_with -from pydantic import BaseModel, Field +from flask_restx import Resource, fields, marshal +from pydantic import BaseModel, Field, field_validator from sqlalchemy import select from werkzeug.exceptions import Unauthorized @@ -26,6 +27,7 @@ from controllers.console.wraps import ( ) from enums.cloud_plan import CloudPlan from extensions.ext_database import db +from fields.base import ResponseModel from libs.helper import TimestampField from libs.login import current_account_with_tenant, login_required from models.account import Tenant, TenantCustomConfigDict, TenantStatus @@ -58,6 +60,37 @@ class WorkspaceInfoPayload(BaseModel): name: str +class TenantInfoResponse(ResponseModel): + id: str + name: str | None = None + plan: str | None = None + status: str | None = None + created_at: int | None = None + role: str | None = None + in_trial: bool | None = None + trial_end_reason: str | None = None + custom_config: dict | None = None + trial_credits: int | None = None + trial_credits_used: int | None = None + next_credit_reset_date: int | None = None + + @field_validator("plan", "status", "trial_end_reason", mode="before") + @classmethod + def _normalize_enum_like(cls, value): + if value is None: + return None + if isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + 
@field_validator("created_at", mode="before") + @classmethod + def _normalize_created_at(cls, value: datetime | int | None): + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + def reg(cls: type[BaseModel]): console_ns.schema_model(cls.__name__, cls.model_json_schema(ref_template=DEFAULT_REF_TEMPLATE_SWAGGER_2_0)) @@ -66,6 +99,7 @@ reg(WorkspaceListQuery) reg(SwitchWorkspacePayload) reg(WorkspaceCustomConfigPayload) reg(WorkspaceInfoPayload) +reg(TenantInfoResponse) provider_fields = { "provider_name": fields.String, @@ -180,7 +214,7 @@ class TenantApi(Resource): @setup_required @login_required @account_initialization_required - @marshal_with(tenant_fields) + @console_ns.response(200, "Success", console_ns.models[TenantInfoResponse.__name__]) def post(self): if request.path == "/info": logger.warning("Deprecated URL /info was used.") @@ -200,7 +234,13 @@ class TenantApi(Resource): else: raise Unauthorized("workspace is archived") - return WorkspaceService.get_tenant_info(tenant), 200 + return ( + TenantInfoResponse.model_validate( + WorkspaceService.get_tenant_info(tenant), + from_attributes=True, + ).model_dump(mode="json"), + 200, + ) @console_ns.route("/workspaces/switch") diff --git a/api/controllers/inner_api/app/dsl.py b/api/controllers/inner_api/app/dsl.py index 6c15f9aa8b..915a11dcdd 100644 --- a/api/controllers/inner_api/app/dsl.py +++ b/api/controllers/inner_api/app/dsl.py @@ -9,7 +9,7 @@ from flask import request from flask_restx import Resource from pydantic import BaseModel, Field from sqlalchemy import select -from sqlalchemy.orm import sessionmaker +from sqlalchemy.orm import Session from controllers.common.schema import register_schema_model from controllers.console.wraps import setup_required @@ -56,7 +56,7 @@ class EnterpriseAppDSLImport(Resource): account.set_tenant_id(workspace_id) - with sessionmaker(db.engine).begin() as session: + with Session(db.engine, expire_on_commit=False) as session: dsl_service = AppDslService(session) result = dsl_service.import_app( account=account, @@ -65,6 +65,10 @@ class EnterpriseAppDSLImport(Resource): name=args.name, description=args.description, ) + if result.status == ImportStatus.FAILED: + session.rollback() + else: + session.commit() if result.status == ImportStatus.FAILED: return result.model_dump(mode="json"), 400 diff --git a/api/controllers/inner_api/plugin/plugin.py b/api/controllers/inner_api/plugin/plugin.py index 83c8fa02fe..72cab3de73 100644 --- a/api/controllers/inner_api/plugin/plugin.py +++ b/api/controllers/inner_api/plugin/plugin.py @@ -1,5 +1,4 @@ from flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from controllers.console.wraps import setup_required from controllers.inner_api import inner_api_ns @@ -30,6 +29,7 @@ from core.plugin.entities.request import ( ) from core.tools.entities.tool_entities import ToolProviderType from core.tools.signature import get_signed_file_url_for_plugin +from graphon.model_runtime.utils.encoders import jsonable_encoder from libs.helper import length_prefixed_response from models import Account, Tenant from models.model import EndUser diff --git a/api/controllers/mcp/mcp.py b/api/controllers/mcp/mcp.py index d2ce0ea543..f652bbc581 100644 --- a/api/controllers/mcp/mcp.py +++ b/api/controllers/mcp/mcp.py @@ -2,7 +2,6 @@ from typing import Any, Union from flask import Response from flask_restx import Resource -from graphon.variables.input_entities import VariableEntity from pydantic import BaseModel, Field, 
ValidationError from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker @@ -12,6 +11,7 @@ from controllers.mcp import mcp_ns from core.mcp import types as mcp_types from core.mcp.server.streamable_http import handle_mcp_request from extensions.ext_database import db +from graphon.variables.input_entities import VariableEntity, VariableEntityType from libs import helper from models.enums import AppMCPServerStatus from models.model import App, AppMCPServer, AppMode, EndUser @@ -158,14 +158,20 @@ class MCPAppApi(Resource): except ValidationError as e: raise MCPRequestError(mcp_types.INVALID_PARAMS, f"Invalid user_input_form: {str(e)}") - def _convert_user_input_form(self, raw_form: list[dict]) -> list[VariableEntity]: + def _convert_user_input_form(self, raw_form: list[dict[str, Any]]) -> list[VariableEntity]: """Convert raw user input form to VariableEntity objects""" return [self._create_variable_entity(item) for item in raw_form] - def _create_variable_entity(self, item: dict) -> VariableEntity: + def _create_variable_entity(self, item: dict[str, Any]) -> VariableEntity: """Create a single VariableEntity from raw form item""" - variable_type = item.get("type", "") or list(item.keys())[0] - variable = item[variable_type] + variable_type_raw: str = item.get("type", "") or list(item.keys())[0] + try: + variable_type = VariableEntityType(variable_type_raw) + except ValueError as e: + raise MCPRequestError( + mcp_types.INVALID_PARAMS, f"Invalid user_input_form variable type: {variable_type_raw}" + ) from e + variable = item[variable_type_raw] return VariableEntity( type=variable_type, @@ -178,7 +184,7 @@ class MCPAppApi(Resource): json_schema=variable.get("json_schema"), ) - def _parse_mcp_request(self, args: dict) -> mcp_types.ClientRequest | mcp_types.ClientNotification: + def _parse_mcp_request(self, args: dict[str, Any]) -> mcp_types.ClientRequest | mcp_types.ClientNotification: """Parse and validate MCP request""" try: return mcp_types.ClientRequest.model_validate(args) diff --git a/api/controllers/service_api/app/audio.py b/api/controllers/service_api/app/audio.py index 907dd1b06d..e818573b8f 100644 --- a/api/controllers/service_api/app/audio.py +++ b/api/controllers/service_api/app/audio.py @@ -2,7 +2,6 @@ import logging from flask import request from flask_restx import Resource -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import InternalServerError import services @@ -22,6 +21,7 @@ from controllers.service_api.app.error import ( ) from controllers.service_api.wraps import FetchUserArg, WhereisUserArg, validate_app_token from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from models.model import App, EndUser from services.audio_service import AudioService from services.errors.audio import ( diff --git a/api/controllers/service_api/app/completion.py b/api/controllers/service_api/app/completion.py index 3142e5118e..31f2797d66 100644 --- a/api/controllers/service_api/app/completion.py +++ b/api/controllers/service_api/app/completion.py @@ -4,7 +4,6 @@ from uuid import UUID from flask import request from flask_restx import Resource -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field, field_validator from werkzeug.exceptions import BadRequest, InternalServerError, NotFound @@ -29,6 +28,7 @@ from core.errors.error import ( QuotaExceededError, ) from 
core.helper.trace_id_helper import get_external_trace_id +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.helper import UUIDStrOrEmpty from models.model import App, AppMode, EndUser diff --git a/api/controllers/service_api/app/conversation.py b/api/controllers/service_api/app/conversation.py index 1ec289e2a2..c4353ca7b8 100644 --- a/api/controllers/service_api/app/conversation.py +++ b/api/controllers/service_api/app/conversation.py @@ -1,3 +1,4 @@ +from datetime import datetime from typing import Any, Literal from flask import request @@ -14,14 +15,13 @@ from controllers.service_api.app.error import NotChatAppError from controllers.service_api.wraps import FetchUserArg, WhereisUserArg, validate_app_token from core.app.entities.app_invoke_entities import InvokeFrom from extensions.ext_database import db +from fields._value_type_serializer import serialize_value_type +from fields.base import ResponseModel from fields.conversation_fields import ( ConversationInfiniteScrollPagination, SimpleConversation, ) -from fields.conversation_variable_fields import ( - build_conversation_variable_infinite_scroll_pagination_model, - build_conversation_variable_model, -) +from graphon.variables.types import SegmentType from libs.helper import UUIDStrOrEmpty from models.model import App, AppMode, EndUser from services.conversation_service import ConversationService @@ -70,12 +70,70 @@ class ConversationVariableUpdatePayload(BaseModel): value: Any +class ConversationVariableResponse(ResponseModel): + id: str + name: str + value_type: str + value: str | None = None + description: str | None = None + created_at: int | None = None + updated_at: int | None = None + + @field_validator("value_type", mode="before") + @classmethod + def normalize_value_type(cls, value: Any) -> str: + exposed_type = getattr(value, "exposed_type", None) + if callable(exposed_type): + return str(exposed_type().value) + if isinstance(value, str): + try: + return str(SegmentType(value).exposed_type().value) + except ValueError: + return value + try: + return serialize_value_type(value) + except (AttributeError, TypeError, ValueError): + pass + + try: + return serialize_value_type({"value_type": value}) + except (AttributeError, TypeError, ValueError): + value_attr = getattr(value, "value", None) + if value_attr is not None: + return str(value_attr) + return str(value) + + @field_validator("value", mode="before") + @classmethod + def normalize_value(cls, value: Any | None) -> str | None: + if value is None: + return None + if isinstance(value, str): + return value + return str(value) + + @field_validator("created_at", "updated_at", mode="before") + @classmethod + def normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class ConversationVariableInfiniteScrollPaginationResponse(ResponseModel): + limit: int + has_more: bool + data: list[ConversationVariableResponse] + + register_schema_models( service_api_ns, ConversationListQuery, ConversationRenamePayload, ConversationVariablesQuery, ConversationVariableUpdatePayload, + ConversationVariableResponse, + ConversationVariableInfiniteScrollPaginationResponse, ) @@ -204,8 +262,12 @@ class ConversationVariablesApi(Resource): 404: "Conversation not found", } ) + @service_api_ns.response( + 200, + "Variables retrieved successfully", + service_api_ns.models[ConversationVariableInfiniteScrollPaginationResponse.__name__], + ) 
@validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.QUERY)) - @service_api_ns.marshal_with(build_conversation_variable_infinite_scroll_pagination_model(service_api_ns)) def get(self, app_model: App, end_user: EndUser, c_id): """List all variables for a conversation. @@ -222,9 +284,12 @@ class ConversationVariablesApi(Resource): last_id = str(query_args.last_id) if query_args.last_id else None try: - return ConversationService.get_conversational_variable( + pagination = ConversationService.get_conversational_variable( app_model, conversation_id, end_user, query_args.limit, last_id, query_args.variable_name ) + return ConversationVariableInfiniteScrollPaginationResponse.model_validate( + pagination, from_attributes=True + ).model_dump(mode="json") except services.errors.conversation.ConversationNotExistsError: raise NotFound("Conversation Not Exists.") @@ -243,8 +308,12 @@ class ConversationVariableDetailApi(Resource): 404: "Conversation or variable not found", } ) + @service_api_ns.response( + 200, + "Variable updated successfully", + service_api_ns.models[ConversationVariableResponse.__name__], + ) @validate_app_token(fetch_user_arg=FetchUserArg(fetch_from=WhereisUserArg.JSON)) - @service_api_ns.marshal_with(build_conversation_variable_model(service_api_ns)) def put(self, app_model: App, end_user: EndUser, c_id, variable_id): """Update a conversation variable's value. @@ -261,9 +330,10 @@ class ConversationVariableDetailApi(Resource): payload = ConversationVariableUpdatePayload.model_validate(service_api_ns.payload or {}) try: - return ConversationService.update_conversation_variable( + variable = ConversationService.update_conversation_variable( app_model, conversation_id, variable_id, end_user, payload.value ) + return ConversationVariableResponse.model_validate(variable, from_attributes=True).model_dump(mode="json") except services.errors.conversation.ConversationNotExistsError: raise NotFound("Conversation Not Exists.") except services.errors.conversation.ConversationVariableNotExistsError: diff --git a/api/controllers/service_api/app/workflow.py b/api/controllers/service_api/app/workflow.py index e0a64ffe26..cc763fa89c 100644 --- a/api/controllers/service_api/app/workflow.py +++ b/api/controllers/service_api/app/workflow.py @@ -1,13 +1,12 @@ import logging +from collections.abc import Mapping +from datetime import datetime from typing import Literal from dateutil.parser import isoparse from flask import request -from flask_restx import Namespace, Resource, fields -from graphon.enums import WorkflowExecutionStatus -from graphon.graph_engine.manager import GraphEngineManager -from graphon.model_runtime.errors.invoke import InvokeError -from pydantic import BaseModel, Field +from flask_restx import Resource, fields +from pydantic import BaseModel, Field, field_validator from sqlalchemy.orm import sessionmaker from werkzeug.exceptions import BadRequest, InternalServerError, NotFound @@ -33,9 +32,13 @@ from core.errors.error import ( from core.helper.trace_id_helper import get_external_trace_id from extensions.ext_database import db from extensions.ext_redis import redis_client -from fields.workflow_app_log_fields import build_workflow_app_log_pagination_model +from fields.base import ResponseModel +from fields.end_user_fields import SimpleEndUser +from fields.member_fields import SimpleAccount +from graphon.enums import WorkflowExecutionStatus +from graphon.graph_engine.manager import GraphEngineManager +from graphon.model_runtime.errors.invoke import InvokeError 
from libs import helper -from libs.helper import OptionalTimestampField, TimestampField from models.model import App, AppMode, EndUser from models.workflow import WorkflowRun from repositories.factory import DifyAPIRepositoryFactory @@ -65,38 +68,142 @@ class WorkflowLogQuery(BaseModel): register_schema_models(service_api_ns, WorkflowRunPayload, WorkflowLogQuery) +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +def _enum_value(value): + return getattr(value, "value", value) + + class WorkflowRunStatusField(fields.Raw): def output(self, key, obj: WorkflowRun, **kwargs): - return obj.status.value + return _enum_value(obj.status) class WorkflowRunOutputsField(fields.Raw): def output(self, key, obj: WorkflowRun, **kwargs): - if obj.status == WorkflowExecutionStatus.PAUSED: + status = _enum_value(obj.status) + if status == WorkflowExecutionStatus.PAUSED.value: return {} outputs = obj.outputs_dict return outputs or {} -workflow_run_fields = { - "id": fields.String, - "workflow_id": fields.String, - "status": WorkflowRunStatusField, - "inputs": fields.Raw, - "outputs": WorkflowRunOutputsField, - "error": fields.String, - "total_steps": fields.Integer, - "total_tokens": fields.Integer, - "created_at": TimestampField, - "finished_at": OptionalTimestampField, - "elapsed_time": fields.Float, -} +class WorkflowRunResponse(ResponseModel): + id: str + workflow_id: str + status: str + inputs: dict | list | str | int | float | bool | None = None + outputs: dict = Field(default_factory=dict) + error: str | None = None + total_steps: int | None = None + total_tokens: int | None = None + created_at: int | None = None + finished_at: int | None = None + elapsed_time: float | int | None = None + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) -def build_workflow_run_model(api_or_ns: Namespace): - """Build the workflow run model for the API or Namespace.""" - return api_or_ns.model("WorkflowRun", workflow_run_fields) +class WorkflowRunForLogResponse(ResponseModel): + id: str + version: str | None = None + status: str | None = None + triggered_from: str | None = None + error: str | None = None + elapsed_time: float | int | None = None + total_tokens: int | None = None + total_steps: int | None = None + created_at: int | None = None + finished_at: int | None = None + exceptions_count: int | None = None + + @field_validator("status", "triggered_from", mode="before") + @classmethod + def _normalize_enum(cls, value): + return _enum_value(value) + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class WorkflowAppLogPartialResponse(ResponseModel): + id: str + workflow_run: WorkflowRunForLogResponse | None = None + details: dict | list | str | int | float | bool | None = None + created_from: str | None = None + created_by_role: str | None = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + created_at: int | None = None + + @field_validator("created_from", "created_by_role", mode="before") + @classmethod + def _normalize_enum(cls, value): + return _enum_value(value) + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) 
-> int | None: + return _to_timestamp(value) + + +class WorkflowAppLogPaginationResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[WorkflowAppLogPartialResponse] + + +register_schema_models( + service_api_ns, + WorkflowRunResponse, + WorkflowRunForLogResponse, + WorkflowAppLogPartialResponse, + WorkflowAppLogPaginationResponse, +) + + +def _serialize_workflow_run(workflow_run: WorkflowRun) -> dict: + status = _enum_value(workflow_run.status) + raw_outputs = workflow_run.outputs_dict + if status == WorkflowExecutionStatus.PAUSED.value or raw_outputs is None: + outputs: dict = {} + elif isinstance(raw_outputs, dict): + outputs = raw_outputs + elif isinstance(raw_outputs, Mapping): + outputs = dict(raw_outputs) + else: + outputs = {} + return WorkflowRunResponse.model_validate( + { + "id": workflow_run.id, + "workflow_id": workflow_run.workflow_id, + "status": status, + "inputs": workflow_run.inputs, + "outputs": outputs, + "error": workflow_run.error, + "total_steps": workflow_run.total_steps, + "total_tokens": workflow_run.total_tokens, + "created_at": workflow_run.created_at, + "finished_at": workflow_run.finished_at, + "elapsed_time": workflow_run.elapsed_time, + } + ).model_dump(mode="json") + + +def _serialize_workflow_log_pagination(pagination) -> dict: + return WorkflowAppLogPaginationResponse.model_validate(pagination, from_attributes=True).model_dump(mode="json") @service_api_ns.route("/workflows/run/") @@ -112,7 +219,11 @@ class WorkflowRunDetailApi(Resource): } ) @validate_app_token - @service_api_ns.marshal_with(build_workflow_run_model(service_api_ns)) + @service_api_ns.response( + 200, + "Workflow run details retrieved successfully", + service_api_ns.models[WorkflowRunResponse.__name__], + ) def get(self, app_model: App, workflow_run_id: str): """Get a workflow task running detail. @@ -133,7 +244,7 @@ class WorkflowRunDetailApi(Resource): ) if not workflow_run: raise NotFound("Workflow run not found.") - return workflow_run + return _serialize_workflow_run(workflow_run) @service_api_ns.route("/workflows/run") @@ -299,7 +410,11 @@ class WorkflowAppLogApi(Resource): } ) @validate_app_token - @service_api_ns.marshal_with(build_workflow_app_log_pagination_model(service_api_ns)) + @service_api_ns.response( + 200, + "Logs retrieved successfully", + service_api_ns.models[WorkflowAppLogPaginationResponse.__name__], + ) def get(self, app_model: App): """Get workflow app logs. 
@@ -327,4 +442,4 @@ class WorkflowAppLogApi(Resource): created_by_account=args.created_by_account, ) - return workflow_app_log_pagination + return _serialize_workflow_log_pagination(workflow_app_log_pagination) diff --git a/api/controllers/service_api/dataset/dataset.py b/api/controllers/service_api/dataset/dataset.py index fd954be6b1..76519cad0a 100644 --- a/api/controllers/service_api/dataset/dataset.py +++ b/api/controllers/service_api/dataset/dataset.py @@ -2,7 +2,6 @@ from typing import Any, Literal, cast from flask import request from flask_restx import marshal -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel, Field, TypeAdapter, field_validator from werkzeug.exceptions import Forbidden, NotFound @@ -19,6 +18,7 @@ from core.plugin.impl.model_runtime_factory import create_plugin_provider_manage from core.rag.index_processor.constant.index_type import IndexTechniqueType from fields.dataset_fields import dataset_detail_fields from fields.tag_fields import DataSetTag +from graphon.model_runtime.entities.model_entities import ModelType from libs.login import current_user from models.account import Account from models.dataset import DatasetPermissionEnum diff --git a/api/controllers/service_api/dataset/segment.py b/api/controllers/service_api/dataset/segment.py index 9ad999b93e..5992fa7410 100644 --- a/api/controllers/service_api/dataset/segment.py +++ b/api/controllers/service_api/dataset/segment.py @@ -2,7 +2,6 @@ from typing import Any from flask import request from flask_restx import marshal -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel, Field from sqlalchemy import select from werkzeug.exceptions import NotFound @@ -23,6 +22,7 @@ from core.model_manager import ModelManager from core.rag.index_processor.constant.index_type import IndexTechniqueType from extensions.ext_database import db from fields.segment_fields import child_chunk_fields, segment_fields +from graphon.model_runtime.entities.model_entities import ModelType from libs.login import current_account_with_tenant from models.dataset import Dataset from services.dataset_service import DatasetService, DocumentService, SegmentService @@ -33,25 +33,25 @@ from services.errors.chunk import ChildChunkIndexingError as ChildChunkIndexingS from services.summary_index_service import SummaryIndexService -def _marshal_segment_with_summary(segment, dataset_id: str) -> dict: +def _marshal_segment_with_summary(segment, dataset_id: str) -> dict[str, Any]: """Marshal a single segment and enrich it with summary content.""" - segment_dict = dict(marshal(segment, segment_fields)) # type: ignore[arg-type] + segment_dict: dict[str, Any] = dict(marshal(segment, segment_fields)) # type: ignore[arg-type] summary = SummaryIndexService.get_segment_summary(segment_id=segment.id, dataset_id=dataset_id) segment_dict["summary"] = summary.summary_content if summary else None return segment_dict -def _marshal_segments_with_summary(segments, dataset_id: str) -> list[dict]: +def _marshal_segments_with_summary(segments, dataset_id: str) -> list[dict[str, Any]]: """Marshal multiple segments and enrich them with summary content (batch query).""" segment_ids = [segment.id for segment in segments] - summaries: dict = {} + summaries: dict[str, str | None] = {} if segment_ids: summary_records = SummaryIndexService.get_segments_summaries(segment_ids=segment_ids, dataset_id=dataset_id) summaries = {chunk_id: record.summary_content for chunk_id, record in 
summary_records.items()} - result = [] + result: list[dict[str, Any]] = [] for segment in segments: - segment_dict = dict(marshal(segment, segment_fields)) # type: ignore[arg-type] + segment_dict: dict[str, Any] = dict(marshal(segment, segment_fields)) # type: ignore[arg-type] segment_dict["summary"] = summaries.get(segment.id) result.append(segment_dict) return result diff --git a/api/controllers/service_api/workspace/models.py b/api/controllers/service_api/workspace/models.py index c0a6cb0a76..5ac65fc4e6 100644 --- a/api/controllers/service_api/workspace/models.py +++ b/api/controllers/service_api/workspace/models.py @@ -1,9 +1,9 @@ from flask_login import current_user from flask_restx import Resource -from graphon.model_runtime.utils.encoders import jsonable_encoder from controllers.service_api import service_api_ns from controllers.service_api.wraps import validate_dataset_token +from graphon.model_runtime.utils.encoders import jsonable_encoder from services.model_provider_service import ModelProviderService diff --git a/api/controllers/web/audio.py b/api/controllers/web/audio.py index 0ef4471018..3ad595f1f4 100644 --- a/api/controllers/web/audio.py +++ b/api/controllers/web/audio.py @@ -2,7 +2,6 @@ import logging from flask import request from flask_restx import fields, marshal_with -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import field_validator from werkzeug.exceptions import InternalServerError @@ -22,6 +21,7 @@ from controllers.web.error import ( ) from controllers.web.wraps import WebApiResource from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from libs.helper import uuid_value from models.model import App from services.audio_service import AudioService diff --git a/api/controllers/web/completion.py b/api/controllers/web/completion.py index e37f9af5f0..0528184d79 100644 --- a/api/controllers/web/completion.py +++ b/api/controllers/web/completion.py @@ -1,7 +1,6 @@ import logging from typing import Any, Literal -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field, field_validator from werkzeug.exceptions import InternalServerError, NotFound @@ -26,6 +25,7 @@ from core.errors.error import ( ProviderTokenNotInitError, QuotaExceededError, ) +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from libs.helper import uuid_value from models.model import AppMode diff --git a/api/controllers/web/human_input_form.py b/api/controllers/web/human_input_form.py index 2ce96abd52..44876f8303 100644 --- a/api/controllers/web/human_input_form.py +++ b/api/controllers/web/human_input_form.py @@ -5,6 +5,7 @@ Web App Human Input Form APIs. 
import json import logging from datetime import datetime +from typing import Any, NotRequired, TypedDict from flask import Response, request from flask_restx import Resource @@ -58,10 +59,19 @@ def _to_timestamp(value: datetime) -> int: return int(value.timestamp()) +class FormDefinitionPayload(TypedDict): + form_content: Any + inputs: Any + resolved_default_values: dict[str, str] + user_actions: Any + expiration_time: int + site: NotRequired[dict] + + def _jsonify_form_definition(form: Form, site_payload: dict | None = None) -> Response: """Return the form payload (optionally with site) as a JSON response.""" definition_payload = form.get_definition().model_dump() - payload = { + payload: FormDefinitionPayload = { "form_content": definition_payload["rendered_content"], "inputs": definition_payload["inputs"], "resolved_default_values": _stringify_default_values(definition_payload["default_values"]), diff --git a/api/controllers/web/message.py b/api/controllers/web/message.py index 39afdd843f..07ecf8035b 100644 --- a/api/controllers/web/message.py +++ b/api/controllers/web/message.py @@ -2,7 +2,6 @@ import logging from typing import Literal from flask import request -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import BaseModel, Field, TypeAdapter from werkzeug.exceptions import InternalServerError, NotFound @@ -24,6 +23,7 @@ from core.app.entities.app_invoke_entities import InvokeFrom from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError from fields.conversation_fields import ResultResponse from fields.message_fields import SuggestedQuestionsResponse, WebMessageInfiniteScrollPagination, WebMessageListItem +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from models.enums import FeedbackRating from models.model import AppMode diff --git a/api/controllers/web/passport.py b/api/controllers/web/passport.py index 66082893b8..0293df74b0 100644 --- a/api/controllers/web/passport.py +++ b/api/controllers/web/passport.py @@ -1,5 +1,6 @@ import uuid from datetime import UTC, datetime, timedelta +from typing import Any from flask import make_response, request from flask_restx import Resource @@ -103,21 +104,23 @@ class PassportResource(Resource): return response -def decode_enterprise_webapp_user_id(jwt_token: str | None): +def decode_enterprise_webapp_user_id(jwt_token: str | None) -> dict[str, Any] | None: """ Decode the enterprise user session from the Authorization header. """ if not jwt_token: return None - decoded = PassportService().verify(jwt_token) + decoded: dict[str, Any] = PassportService().verify(jwt_token) source = decoded.get("token_source") if not source or source != "webapp_login_token": raise Unauthorized("Invalid token source. Expected 'webapp_login_token'.") return decoded -def exchange_token_for_existing_web_user(app_code: str, enterprise_user_decoded: dict, auth_type: WebAppAuthType): +def exchange_token_for_existing_web_user( + app_code: str, enterprise_user_decoded: dict[str, Any], auth_type: WebAppAuthType +): """ Exchange a token for an existing web user session. 
""" diff --git a/api/controllers/web/remote_files.py b/api/controllers/web/remote_files.py index 38aeccc642..fe31e9d4ac 100644 --- a/api/controllers/web/remote_files.py +++ b/api/controllers/web/remote_files.py @@ -1,7 +1,6 @@ import urllib.parse import httpx -from graphon.file import helpers as file_helpers from pydantic import BaseModel, Field, HttpUrl import services @@ -14,6 +13,7 @@ from controllers.common.errors import ( from core.helper import ssrf_proxy from extensions.ext_database import db from fields.file_fields import FileWithSignedUrl, RemoteFileInfo +from graphon.file import helpers as file_helpers from services.file_service import FileService from ..common.schema import register_schema_models diff --git a/api/controllers/web/site.py b/api/controllers/web/site.py index 1a0c6d4252..7d2080dd91 100644 --- a/api/controllers/web/site.py +++ b/api/controllers/web/site.py @@ -1,4 +1,4 @@ -from typing import cast +from typing import Any, cast from flask_restx import fields, marshal, marshal_with from sqlalchemy import select @@ -113,12 +113,12 @@ class AppSiteInfo: } -def serialize_site(site: Site) -> dict: +def serialize_site(site: Site) -> dict[str, Any]: """Serialize Site model using the same schema as AppSiteApi.""" - return cast(dict, marshal(site, AppSiteApi.site_fields)) + return cast(dict[str, Any], marshal(site, AppSiteApi.site_fields)) -def serialize_app_site_payload(app_model: App, site: Site, end_user_id: str | None) -> dict: +def serialize_app_site_payload(app_model: App, site: Site, end_user_id: str | None) -> dict[str, Any]: can_replace_logo = FeatureService.get_features(app_model.tenant_id).can_replace_logo app_site_info = AppSiteInfo(app_model.tenant, app_model, site, end_user_id, can_replace_logo) - return cast(dict, marshal(app_site_info, AppSiteApi.app_fields)) + return cast(dict[str, Any], marshal(app_site_info, AppSiteApi.app_fields)) diff --git a/api/controllers/web/workflow.py b/api/controllers/web/workflow.py index 796e090976..98211193a0 100644 --- a/api/controllers/web/workflow.py +++ b/api/controllers/web/workflow.py @@ -1,7 +1,5 @@ import logging -from graphon.graph_engine.manager import GraphEngineManager -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import InternalServerError from controllers.common.controller_schemas import WorkflowRunPayload @@ -24,6 +22,8 @@ from core.errors.error import ( QuotaExceededError, ) from extensions.ext_redis import redis_client +from graphon.graph_engine.manager import GraphEngineManager +from graphon.model_runtime.errors.invoke import InvokeError from libs import helper from models.model import App, AppMode, EndUser from services.app_generate_service import AppGenerateService diff --git a/api/core/agent/base_agent_runner.py b/api/core/agent/base_agent_runner.py index 06c746990d..790602ef5d 100644 --- a/api/core/agent/base_agent_runner.py +++ b/api/core/agent/base_agent_runner.py @@ -4,20 +4,6 @@ import uuid from decimal import Decimal from typing import Union, cast -from graphon.file import file_manager -from graphon.model_runtime.entities import ( - AssistantPromptMessage, - LLMUsage, - PromptMessage, - PromptMessageTool, - SystemPromptMessage, - TextPromptMessageContent, - ToolPromptMessage, - UserPromptMessage, -) -from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes -from graphon.model_runtime.entities.model_entities import ModelFeature -from graphon.model_runtime.model_providers.__base.large_language_model import 
LargeLanguageModel from sqlalchemy import func, select from core.agent.entities import AgentEntity, AgentToolEntity @@ -43,6 +29,20 @@ from core.tools.tool_manager import ToolManager from core.tools.utils.dataset_retriever_tool import DatasetRetrieverTool from extensions.ext_database import db from factories import file_factory +from graphon.file import file_manager +from graphon.model_runtime.entities import ( + AssistantPromptMessage, + LLMUsage, + PromptMessage, + PromptMessageTool, + SystemPromptMessage, + TextPromptMessageContent, + ToolPromptMessage, + UserPromptMessage, +) +from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes +from graphon.model_runtime.entities.model_entities import ModelFeature +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from models.enums import CreatorUserRole from models.model import Conversation, Message, MessageAgentThought, MessageFile diff --git a/api/core/agent/cot_agent_runner.py b/api/core/agent/cot_agent_runner.py index f07ac64498..0bc93ad34d 100644 --- a/api/core/agent/cot_agent_runner.py +++ b/api/core/agent/cot_agent_runner.py @@ -4,15 +4,6 @@ from abc import ABC, abstractmethod from collections.abc import Generator, Mapping, Sequence from typing import Any, TypedDict -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - PromptMessage, - PromptMessageTool, - ToolPromptMessage, - UserPromptMessage, -) - from core.agent.base_agent_runner import BaseAgentRunner from core.agent.entities import AgentScratchpadUnit from core.agent.errors import AgentMaxIterationError @@ -24,6 +15,14 @@ from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransfo from core.tools.__base.tool import Tool from core.tools.entities.tool_entities import ToolInvokeMeta from core.tools.tool_engine import ToolEngine +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + PromptMessage, + PromptMessageTool, + ToolPromptMessage, + UserPromptMessage, +) from models.model import Message logger = logging.getLogger(__name__) diff --git a/api/core/agent/cot_chat_agent_runner.py b/api/core/agent/cot_chat_agent_runner.py index 2b2e26987e..a2186be100 100644 --- a/api/core/agent/cot_chat_agent_runner.py +++ b/api/core/agent/cot_chat_agent_runner.py @@ -1,5 +1,6 @@ import json +from core.agent.cot_agent_runner import CotAgentRunner from graphon.file import file_manager from graphon.model_runtime.entities import ( AssistantPromptMessage, @@ -11,8 +12,6 @@ from graphon.model_runtime.entities import ( from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes from graphon.model_runtime.utils.encoders import jsonable_encoder -from core.agent.cot_agent_runner import CotAgentRunner - class CotChatAgentRunner(CotAgentRunner): def _organize_system_prompt(self) -> SystemPromptMessage: diff --git a/api/core/agent/cot_completion_agent_runner.py b/api/core/agent/cot_completion_agent_runner.py index d4c52a8eb1..51a30998ae 100644 --- a/api/core/agent/cot_completion_agent_runner.py +++ b/api/core/agent/cot_completion_agent_runner.py @@ -1,5 +1,6 @@ import json +from core.agent.cot_agent_runner import CotAgentRunner from 
graphon.model_runtime.entities.message_entities import ( AssistantPromptMessage, PromptMessage, @@ -8,8 +9,6 @@ from graphon.model_runtime.entities.message_entities import ( ) from graphon.model_runtime.utils.encoders import jsonable_encoder -from core.agent.cot_agent_runner import CotAgentRunner - class CotCompletionAgentRunner(CotAgentRunner): def _organize_instruction_prompt(self) -> str: diff --git a/api/core/agent/fc_agent_runner.py b/api/core/agent/fc_agent_runner.py index fdffde85d0..d38d24d1e7 100644 --- a/api/core/agent/fc_agent_runner.py +++ b/api/core/agent/fc_agent_runner.py @@ -4,6 +4,13 @@ from collections.abc import Generator from copy import deepcopy from typing import Any, Union +from core.agent.base_agent_runner import BaseAgentRunner +from core.agent.errors import AgentMaxIterationError +from core.app.apps.base_app_queue_manager import PublishFrom +from core.app.entities.queue_entities import QueueAgentThoughtEvent, QueueMessageEndEvent, QueueMessageFileEvent +from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform +from core.tools.entities.tool_entities import ToolInvokeMeta +from core.tools.tool_engine import ToolEngine from graphon.file import file_manager from graphon.model_runtime.entities import ( AssistantPromptMessage, @@ -19,14 +26,6 @@ from graphon.model_runtime.entities import ( UserPromptMessage, ) from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes - -from core.agent.base_agent_runner import BaseAgentRunner -from core.agent.errors import AgentMaxIterationError -from core.app.apps.base_app_queue_manager import PublishFrom -from core.app.entities.queue_entities import QueueAgentThoughtEvent, QueueMessageEndEvent, QueueMessageFileEvent -from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform -from core.tools.entities.tool_entities import ToolInvokeMeta -from core.tools.tool_engine import ToolEngine from models.model import Message logger = logging.getLogger(__name__) diff --git a/api/core/agent/output_parser/cot_output_parser.py b/api/core/agent/output_parser/cot_output_parser.py index 46c1f1230d..f341ca5a0b 100644 --- a/api/core/agent/output_parser/cot_output_parser.py +++ b/api/core/agent/output_parser/cot_output_parser.py @@ -1,17 +1,16 @@ import json import re from collections.abc import Generator -from typing import Union - -from graphon.model_runtime.entities.llm_entities import LLMResultChunk +from typing import Any, Union from core.agent.entities import AgentScratchpadUnit +from graphon.model_runtime.entities.llm_entities import LLMResultChunk class CotAgentOutputParser: @classmethod def handle_react_stream_output( - cls, llm_response: Generator[LLMResultChunk, None, None], usage_dict: dict + cls, llm_response: Generator[LLMResultChunk, None, None], usage_dict: dict[str, Any] ) -> Generator[Union[str, AgentScratchpadUnit.Action], None, None]: def parse_action(action) -> Union[str, AgentScratchpadUnit.Action]: action_name = None diff --git a/api/core/agent/plugin_entities.py b/api/core/agent/plugin_entities.py index 90aa7b5fd4..8d25863a91 100644 --- a/api/core/agent/plugin_entities.py +++ b/api/core/agent/plugin_entities.py @@ -84,7 +84,7 @@ class AgentStrategyEntity(BaseModel): identity: AgentStrategyIdentity parameters: list[AgentStrategyParameter] = Field(default_factory=list) description: I18nObject = Field(..., description="The description of the agent strategy") - output_schema: dict | None = None + output_schema: dict[str, Any] | 
None = None features: list[AgentFeature] | None = None meta_version: str | None = None # pydantic configs diff --git a/api/core/app/app_config/common/sensitive_word_avoidance/manager.py b/api/core/app/app_config/common/sensitive_word_avoidance/manager.py index 7d1b11c008..c8ec7cb44d 100644 --- a/api/core/app/app_config/common/sensitive_word_avoidance/manager.py +++ b/api/core/app/app_config/common/sensitive_word_avoidance/manager.py @@ -22,8 +22,8 @@ class SensitiveWordAvoidanceConfigManager: @classmethod def validate_and_set_defaults( - cls, tenant_id: str, config: dict, only_structure_validate: bool = False - ) -> tuple[dict, list[str]]: + cls, tenant_id: str, config: dict[str, Any], only_structure_validate: bool = False + ) -> tuple[dict[str, Any], list[str]]: if not config.get("sensitive_word_avoidance"): config["sensitive_word_avoidance"] = {"enabled": False} diff --git a/api/core/app/app_config/easy_ui_based_app/dataset/manager.py b/api/core/app/app_config/easy_ui_based_app/dataset/manager.py index f04a8df119..3d857a4e9c 100644 --- a/api/core/app/app_config/easy_ui_based_app/dataset/manager.py +++ b/api/core/app/app_config/easy_ui_based_app/dataset/manager.py @@ -138,7 +138,9 @@ class DatasetConfigManager: ) @classmethod - def validate_and_set_defaults(cls, tenant_id: str, app_mode: AppMode, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults( + cls, tenant_id: str, app_mode: AppMode, config: dict[str, Any] + ) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for dataset feature @@ -172,7 +174,7 @@ class DatasetConfigManager: return config, ["agent_mode", "dataset_configs", "dataset_query_variable"] @classmethod - def extract_dataset_config_for_legacy_compatibility(cls, tenant_id: str, app_mode: AppMode, config: dict): + def extract_dataset_config_for_legacy_compatibility(cls, tenant_id: str, app_mode: AppMode, config: dict[str, Any]): """ Extract dataset config for legacy compatibility diff --git a/api/core/app/app_config/easy_ui_based_app/model_config/converter.py b/api/core/app/app_config/easy_ui_based_app/model_config/converter.py index b7dd55632e..dbd7527fc6 100644 --- a/api/core/app/app_config/easy_ui_based_app/model_config/converter.py +++ b/api/core/app/app_config/easy_ui_based_app/model_config/converter.py @@ -1,14 +1,13 @@ from typing import cast -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel - from core.app.app_config.entities import EasyUIBasedAppConfig from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.entities.model_entities import ModelStatus from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError from core.plugin.impl.model_runtime_factory import create_plugin_provider_manager +from graphon.model_runtime.entities.llm_entities import LLMMode +from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel class ModelConfigConverter: diff --git a/api/core/app/app_config/easy_ui_based_app/model_config/manager.py b/api/core/app/app_config/easy_ui_based_app/model_config/manager.py index 5cc385c378..02498c23e1 100644 --- a/api/core/app/app_config/easy_ui_based_app/model_config/manager.py +++ 
b/api/core/app/app_config/easy_ui_based_app/model_config/manager.py @@ -1,10 +1,9 @@ from collections.abc import Mapping from typing import Any -from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType - from core.app.app_config.entities import ModelConfigEntity from core.plugin.impl.model_runtime_factory import create_plugin_model_assembly +from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType from models.model import AppModelConfigDict from models.provider_ids import ModelProviderID @@ -41,7 +40,7 @@ class ModelConfigManager: ) @classmethod - def validate_and_set_defaults(cls, tenant_id: str, config: Mapping[str, Any]) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, tenant_id: str, config: Mapping[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for model config @@ -108,7 +107,7 @@ class ModelConfigManager: return dict(config), ["model"] @classmethod - def validate_model_completion_params(cls, cp: dict): + def validate_model_completion_params(cls, cp: dict[str, Any]): # model.completion_params if not isinstance(cp, dict): raise ValueError("model.completion_params must be of object type") diff --git a/api/core/app/app_config/easy_ui_based_app/prompt_template/manager.py b/api/core/app/app_config/easy_ui_based_app/prompt_template/manager.py index 76196e7034..4c07445df3 100644 --- a/api/core/app/app_config/easy_ui_based_app/prompt_template/manager.py +++ b/api/core/app/app_config/easy_ui_based_app/prompt_template/manager.py @@ -1,7 +1,5 @@ from typing import Any -from graphon.model_runtime.entities.message_entities import PromptMessageRole - from core.app.app_config.entities import ( AdvancedChatMessageEntity, AdvancedChatPromptTemplateEntity, @@ -9,6 +7,7 @@ from core.app.app_config.entities import ( PromptTemplateEntity, ) from core.prompt.simple_prompt_transform import ModelMode +from graphon.model_runtime.entities.message_entities import PromptMessageRole from models.model import AppMode, AppModelConfigDict @@ -65,7 +64,7 @@ class PromptTemplateConfigManager: ) @classmethod - def validate_and_set_defaults(cls, app_mode: AppMode, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, app_mode: AppMode, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate pre_prompt and set defaults for prompt feature depending on the config['model'] @@ -130,7 +129,7 @@ class PromptTemplateConfigManager: return config, ["prompt_type", "pre_prompt", "chat_prompt_config", "completion_prompt_config"] @classmethod - def validate_post_prompt_and_set_defaults(cls, config: dict): + def validate_post_prompt_and_set_defaults(cls, config: dict[str, Any]): """ Validate post_prompt and set defaults for prompt feature diff --git a/api/core/app/app_config/easy_ui_based_app/variables/manager.py b/api/core/app/app_config/easy_ui_based_app/variables/manager.py index f0b71c5801..ddb500cccf 100644 --- a/api/core/app/app_config/easy_ui_based_app/variables/manager.py +++ b/api/core/app/app_config/easy_ui_based_app/variables/manager.py @@ -1,10 +1,9 @@ import re -from typing import cast - -from graphon.variables.input_entities import VariableEntity, VariableEntityType +from typing import Any, cast from core.app.app_config.entities import ExternalDataVariableEntity from core.external_data_tool.factory import ExternalDataToolFactory +from graphon.variables.input_entities import VariableEntity, VariableEntityType from models.model import AppModelConfigDict 
_ALLOWED_VARIABLE_ENTITY_TYPE = frozenset( @@ -82,7 +81,7 @@ class BasicVariablesConfigManager: return variable_entities, external_data_variables @classmethod - def validate_and_set_defaults(cls, tenant_id: str, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, tenant_id: str, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for user input form @@ -99,7 +98,7 @@ class BasicVariablesConfigManager: return config, related_config_keys @classmethod - def validate_variables_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_variables_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for user input form @@ -164,7 +163,9 @@ class BasicVariablesConfigManager: return config, ["user_input_form"] @classmethod - def validate_external_data_tools_and_set_defaults(cls, tenant_id: str, config: dict) -> tuple[dict, list[str]]: + def validate_external_data_tools_and_set_defaults( + cls, tenant_id: str, config: dict[str, Any] + ) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for external data fetch feature diff --git a/api/core/app/app_config/entities.py b/api/core/app/app_config/entities.py index 819aca864c..53563dc5da 100644 --- a/api/core/app/app_config/entities.py +++ b/api/core/app/app_config/entities.py @@ -1,14 +1,14 @@ from enum import StrEnum, auto from typing import Any, Literal -from graphon.file import FileUploadConfig -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.entities.message_entities import PromptMessageRole -from graphon.variables.input_entities import VariableEntity as WorkflowVariableEntity from pydantic import BaseModel, Field from core.rag.data_post_processor.data_post_processor import RerankingModelDict, WeightsDict from core.rag.entities import MetadataFilteringCondition +from graphon.file import FileUploadConfig +from graphon.model_runtime.entities.llm_entities import LLMMode +from graphon.model_runtime.entities.message_entities import PromptMessageRole +from graphon.variables.input_entities import VariableEntity as WorkflowVariableEntity from models.model import AppMode diff --git a/api/core/app/app_config/features/file_upload/manager.py b/api/core/app/app_config/features/file_upload/manager.py index e96517c426..8f20ef2ff9 100644 --- a/api/core/app/app_config/features/file_upload/manager.py +++ b/api/core/app/app_config/features/file_upload/manager.py @@ -1,9 +1,8 @@ from collections.abc import Mapping from typing import Any -from graphon.file import FileUploadConfig - from constants import DEFAULT_FILE_NUMBER_LIMITS +from graphon.file import FileUploadConfig class FileUploadConfigManager: @@ -30,7 +29,7 @@ class FileUploadConfigManager: return FileUploadConfig.model_validate(file_upload_dict) @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for file upload feature diff --git a/api/core/app/app_config/features/more_like_this/manager.py b/api/core/app/app_config/features/more_like_this/manager.py index ef71bb348a..b167c04ab5 100644 --- a/api/core/app/app_config/features/more_like_this/manager.py +++ b/api/core/app/app_config/features/more_like_this/manager.py @@ -1,3 +1,5 @@ +from typing import Any + from pydantic import BaseModel, ConfigDict, Field, ValidationError @@ -13,7 +15,7 @@ class 
AppConfigModel(BaseModel): class MoreLikeThisConfigManager: @classmethod - def convert(cls, config: dict) -> bool: + def convert(cls, config: dict[str, Any]) -> bool: """ Convert model config to model config @@ -23,7 +25,7 @@ class MoreLikeThisConfigManager: return AppConfigModel.model_validate(validated_config).more_like_this.enabled @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: try: return AppConfigModel.model_validate(config).model_dump(), ["more_like_this"] except ValidationError: diff --git a/api/core/app/app_config/features/opening_statement/manager.py b/api/core/app/app_config/features/opening_statement/manager.py index 92b4185abf..33f5aec183 100644 --- a/api/core/app/app_config/features/opening_statement/manager.py +++ b/api/core/app/app_config/features/opening_statement/manager.py @@ -1,6 +1,9 @@ +from typing import Any + + class OpeningStatementConfigManager: @classmethod - def convert(cls, config: dict) -> tuple[str, list]: + def convert(cls, config: dict[str, Any]) -> tuple[str, list[str]]: """ Convert model config to model config @@ -15,7 +18,7 @@ class OpeningStatementConfigManager: return opening_statement, suggested_questions_list @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for opening statement feature diff --git a/api/core/app/app_config/features/retrieval_resource/manager.py b/api/core/app/app_config/features/retrieval_resource/manager.py index d098abac2f..8157fb41db 100644 --- a/api/core/app/app_config/features/retrieval_resource/manager.py +++ b/api/core/app/app_config/features/retrieval_resource/manager.py @@ -1,6 +1,9 @@ +from typing import Any + + class RetrievalResourceConfigManager: @classmethod - def convert(cls, config: dict) -> bool: + def convert(cls, config: dict[str, Any]) -> bool: show_retrieve_source = False retriever_resource_dict = config.get("retriever_resource") if retriever_resource_dict: @@ -10,7 +13,7 @@ class RetrievalResourceConfigManager: return show_retrieve_source @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for retriever resource feature diff --git a/api/core/app/app_config/features/speech_to_text/manager.py b/api/core/app/app_config/features/speech_to_text/manager.py index e10ae03e04..679b8c343b 100644 --- a/api/core/app/app_config/features/speech_to_text/manager.py +++ b/api/core/app/app_config/features/speech_to_text/manager.py @@ -1,6 +1,9 @@ +from typing import Any + + class SpeechToTextConfigManager: @classmethod - def convert(cls, config: dict) -> bool: + def convert(cls, config: dict[str, Any]) -> bool: """ Convert model config to model config @@ -15,7 +18,7 @@ class SpeechToTextConfigManager: return speech_to_text @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for speech to text feature diff --git a/api/core/app/app_config/features/suggested_questions_after_answer/manager.py b/api/core/app/app_config/features/suggested_questions_after_answer/manager.py index 9ac5114d12..2dddce349c 
100644 --- a/api/core/app/app_config/features/suggested_questions_after_answer/manager.py +++ b/api/core/app/app_config/features/suggested_questions_after_answer/manager.py @@ -1,6 +1,9 @@ +from typing import Any + + class SuggestedQuestionsAfterAnswerConfigManager: @classmethod - def convert(cls, config: dict) -> bool: + def convert(cls, config: dict[str, Any]) -> bool: """ Convert model config to model config @@ -15,7 +18,7 @@ class SuggestedQuestionsAfterAnswerConfigManager: return suggested_questions_after_answer @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for suggested questions feature diff --git a/api/core/app/app_config/features/text_to_speech/manager.py b/api/core/app/app_config/features/text_to_speech/manager.py index 1c75981785..ca84ec9c3b 100644 --- a/api/core/app/app_config/features/text_to_speech/manager.py +++ b/api/core/app/app_config/features/text_to_speech/manager.py @@ -1,9 +1,11 @@ +from typing import Any + from core.app.app_config.entities import TextToSpeechEntity class TextToSpeechConfigManager: @classmethod - def convert(cls, config: dict): + def convert(cls, config: dict[str, Any]): """ Convert model config to model config @@ -22,7 +24,7 @@ class TextToSpeechConfigManager: return text_to_speech @classmethod - def validate_and_set_defaults(cls, config: dict) -> tuple[dict, list[str]]: + def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]: """ Validate and set defaults for text to speech feature diff --git a/api/core/app/app_config/workflow_ui_based_app/variables/manager.py b/api/core/app/app_config/workflow_ui_based_app/variables/manager.py index 62e0c31d1a..13ace32fd6 100644 --- a/api/core/app/app_config/workflow_ui_based_app/variables/manager.py +++ b/api/core/app/app_config/workflow_ui_based_app/variables/manager.py @@ -1,8 +1,7 @@ import re -from graphon.variables.input_entities import VariableEntity - from core.app.app_config.entities import RagPipelineVariableEntity +from graphon.variables.input_entities import VariableEntity from models.workflow import Workflow diff --git a/api/core/app/apps/advanced_chat/app_generator.py b/api/core/app/apps/advanced_chat/app_generator.py index 985ded0f74..9e64b471cb 100644 --- a/api/core/app/apps/advanced_chat/app_generator.py +++ b/api/core/app/apps/advanced_chat/app_generator.py @@ -18,11 +18,6 @@ from constants import UUID_NIL if TYPE_CHECKING: from controllers.console.app.workflow import LoopNodeRunPayload -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError -from graphon.runtime import GraphRuntimeState -from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader - from core.app.app_config.features.file_upload.manager import FileUploadConfigManager from core.app.apps.advanced_chat.app_config_manager import AdvancedChatAppConfigManager from core.app.apps.advanced_chat.app_runner import AdvancedChatAppRunner @@ -48,6 +43,10 @@ from core.repositories import DifyCoreRepositoryFactory from core.repositories.factory import WorkflowExecutionRepository, WorkflowNodeExecutionRepository from extensions.ext_database import db from factories import file_factory +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError +from graphon.runtime import 
GraphRuntimeState +from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader from libs.flask_utils import preserve_flask_contexts from models import Account, App, Conversation, EndUser, Message, Workflow, WorkflowNodeExecutionTriggeredFrom from models.enums import WorkflowRunTriggeredFrom diff --git a/api/core/app/apps/advanced_chat/app_runner.py b/api/core/app/apps/advanced_chat/app_runner.py index 7b4cb98bd4..4e57b4dedc 100644 --- a/api/core/app/apps/advanced_chat/app_runner.py +++ b/api/core/app/apps/advanced_chat/app_runner.py @@ -3,12 +3,6 @@ import time from collections.abc import Mapping, Sequence from typing import Any, cast -from graphon.enums import WorkflowType -from graphon.graph_engine.command_channels import RedisChannel -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.runtime import GraphRuntimeState, VariablePool -from graphon.variable_loader import VariableLoader -from graphon.variables.variables import Variable from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker @@ -43,6 +37,12 @@ from core.workflow.workflow_entry import WorkflowEntry from extensions.ext_database import db from extensions.ext_redis import redis_client from extensions.otel import WorkflowAppRunnerHandler, trace_span +from graphon.enums import WorkflowType +from graphon.graph_engine.command_channels import RedisChannel +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variable_loader import VariableLoader +from graphon.variables.variables import Variable from models import Workflow from models.model import App, Conversation, Message, MessageAnnotation from models.workflow import ConversationVariable diff --git a/api/core/app/apps/advanced_chat/generate_response_converter.py b/api/core/app/apps/advanced_chat/generate_response_converter.py index 5c9bc43992..fe2702ed69 100644 --- a/api/core/app/apps/advanced_chat/generate_response_converter.py +++ b/api/core/app/apps/advanced_chat/generate_response_converter.py @@ -57,7 +57,7 @@ class AdvancedChatAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, Any, None]: + ) -> Generator[dict[str, Any] | str, Any, None]: """ Convert stream full response. :param stream_response: stream response @@ -88,7 +88,7 @@ class AdvancedChatAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, Any, None]: + ) -> Generator[dict[str, Any] | str, Any, None]: """ Convert stream simple response. 
:param stream_response: stream response diff --git a/api/core/app/apps/advanced_chat/generate_task_pipeline.py b/api/core/app/apps/advanced_chat/generate_task_pipeline.py index 0ce9ddce9e..78b582bdf5 100644 --- a/api/core/app/apps/advanced_chat/generate_task_pipeline.py +++ b/api/core/app/apps/advanced_chat/generate_task_pipeline.py @@ -9,12 +9,6 @@ from datetime import datetime from threading import Thread from typing import Any, Union -from graphon.entities.pause_reason import HumanInputRequired -from graphon.enums import WorkflowExecutionStatus -from graphon.model_runtime.entities.llm_entities import LLMUsage -from graphon.model_runtime.utils.encoders import jsonable_encoder -from graphon.nodes import BuiltinNodeTypes -from graphon.runtime import GraphRuntimeState from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker @@ -77,6 +71,12 @@ from core.repositories.human_input_repository import HumanInputFormRepositoryImp from core.workflow.file_reference import resolve_file_record_id from core.workflow.system_variables import build_system_variables from extensions.ext_database import db +from graphon.entities.pause_reason import HumanInputRequired +from graphon.enums import WorkflowExecutionStatus +from graphon.model_runtime.entities.llm_entities import LLMUsage +from graphon.model_runtime.utils.encoders import jsonable_encoder +from graphon.nodes import BuiltinNodeTypes +from graphon.runtime import GraphRuntimeState from libs.datetime_utils import naive_utc_now from models import Account, Conversation, EndUser, Message, MessageFile from models.enums import CreatorUserRole, MessageFileBelongsTo, MessageStatus diff --git a/api/core/app/apps/agent_chat/app_generator.py b/api/core/app/apps/agent_chat/app_generator.py index 5872f6b264..5cdc477028 100644 --- a/api/core/app/apps/agent_chat/app_generator.py +++ b/api/core/app/apps/agent_chat/app_generator.py @@ -6,7 +6,6 @@ from collections.abc import Generator, Mapping from typing import Any, Literal, overload from flask import Flask, current_app -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from pydantic import ValidationError from configs import dify_config @@ -24,6 +23,7 @@ from core.app.entities.app_invoke_entities import AgentChatAppGenerateEntity, In from core.ops.ops_trace_manager import TraceQueueManager from extensions.ext_database import db from factories import file_factory +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from libs.flask_utils import preserve_flask_contexts from models import Account, App, EndUser from services.conversation_service import ConversationService diff --git a/api/core/app/apps/agent_chat/app_runner.py b/api/core/app/apps/agent_chat/app_runner.py index a20d3f3c38..09ddce327e 100644 --- a/api/core/app/apps/agent_chat/app_runner.py +++ b/api/core/app/apps/agent_chat/app_runner.py @@ -1,9 +1,6 @@ import logging from typing import cast -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.entities.model_entities import ModelFeature, ModelPropertyKey -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from sqlalchemy import select from core.agent.cot_chat_agent_runner import CotChatAgentRunner @@ -19,6 +16,9 @@ from core.memory.token_buffer_memory import TokenBufferMemory from core.model_manager import ModelInstance from core.moderation.base import ModerationError from extensions.ext_database import db +from graphon.model_runtime.entities.llm_entities 
import LLMMode +from graphon.model_runtime.entities.model_entities import ModelFeature, ModelPropertyKey +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from models.model import App, Conversation, Message logger = logging.getLogger(__name__) diff --git a/api/core/app/apps/agent_chat/generate_response_converter.py b/api/core/app/apps/agent_chat/generate_response_converter.py index 0c146c388f..731c6ee12e 100644 --- a/api/core/app/apps/agent_chat/generate_response_converter.py +++ b/api/core/app/apps/agent_chat/generate_response_converter.py @@ -1,5 +1,5 @@ from collections.abc import Generator -from typing import cast +from typing import Any, cast from core.app.apps.base_app_generate_response_converter import AppGenerateResponseConverter from core.app.entities.task_entities import ( @@ -56,7 +56,7 @@ class AgentChatAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream full response. :param stream_response: stream response @@ -87,7 +87,7 @@ class AgentChatAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream simple response. :param stream_response: stream response diff --git a/api/core/app/apps/base_app_generate_response_converter.py b/api/core/app/apps/base_app_generate_response_converter.py index 6e5a86505c..d5edfaeb25 100644 --- a/api/core/app/apps/base_app_generate_response_converter.py +++ b/api/core/app/apps/base_app_generate_response_converter.py @@ -3,11 +3,10 @@ from abc import ABC, abstractmethod from collections.abc import Generator, Mapping from typing import Any, Union -from graphon.model_runtime.errors.invoke import InvokeError - from core.app.entities.app_invoke_entities import InvokeFrom from core.app.entities.task_entities import AppBlockingResponse, AppStreamResponse from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError logger = logging.getLogger(__name__) @@ -24,7 +23,7 @@ class AppGenerateResponseConverter(ABC): return cls.convert_blocking_full_response(response) else: - def _generate_full_response() -> Generator[dict | str, Any, None]: + def _generate_full_response() -> Generator[dict[str, Any] | str, Any, None]: yield from cls.convert_stream_full_response(response) return _generate_full_response() @@ -33,7 +32,7 @@ class AppGenerateResponseConverter(ABC): return cls.convert_blocking_simple_response(response) else: - def _generate_simple_response() -> Generator[dict | str, Any, None]: + def _generate_simple_response() -> Generator[dict[str, Any] | str, Any, None]: yield from cls.convert_stream_simple_response(response) return _generate_simple_response() @@ -52,14 +51,14 @@ class AppGenerateResponseConverter(ABC): @abstractmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: raise NotImplementedError @classmethod @abstractmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, 
None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: raise NotImplementedError @classmethod diff --git a/api/core/app/apps/base_app_generator.py b/api/core/app/apps/base_app_generator.py index 7eccd59d17..8e8ccf2b90 100644 --- a/api/core/app/apps/base_app_generator.py +++ b/api/core/app/apps/base_app_generator.py @@ -2,9 +2,6 @@ from collections.abc import Generator, Mapping, Sequence from contextlib import AbstractContextManager, nullcontext from typing import TYPE_CHECKING, Any, Union, final -from graphon.enums import NodeType -from graphon.file import File, FileUploadConfig -from graphon.variables.input_entities import VariableEntityType from sqlalchemy.orm import Session from core.app.apps.draft_variable_saver import ( @@ -16,6 +13,9 @@ from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.app.file_access import DatabaseFileAccessController, FileAccessScope, bind_file_access_scope from extensions.ext_database import db from factories import file_factory +from graphon.enums import NodeType +from graphon.file import File, FileUploadConfig +from graphon.variables.input_entities import VariableEntityType from libs.orjson import orjson_dumps from models import Account, EndUser from services.workflow_draft_variable_service import DraftVariableSaver as DraftVariableSaverImpl diff --git a/api/core/app/apps/base_app_queue_manager.py b/api/core/app/apps/base_app_queue_manager.py index 20bf81aeec..d1771452c5 100644 --- a/api/core/app/apps/base_app_queue_manager.py +++ b/api/core/app/apps/base_app_queue_manager.py @@ -7,7 +7,6 @@ from enum import IntEnum, auto from typing import Any from cachetools import TTLCache, cachedmethod -from graphon.runtime import GraphRuntimeState from redis.exceptions import RedisError from sqlalchemy.orm import DeclarativeMeta @@ -22,6 +21,7 @@ from core.app.entities.queue_entities import ( WorkflowQueueMessage, ) from extensions.ext_redis import redis_client +from graphon.runtime import GraphRuntimeState logger = logging.getLogger(__name__) diff --git a/api/core/app/apps/base_app_runner.py b/api/core/app/apps/base_app_runner.py index 4aebc0cb30..1251b397e2 100644 --- a/api/core/app/apps/base_app_runner.py +++ b/api/core/app/apps/base_app_runner.py @@ -5,17 +5,6 @@ from collections.abc import Generator, Mapping, Sequence from mimetypes import guess_extension from typing import TYPE_CHECKING, Any, Union -from graphon.file import FileTransferMethod, FileType -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - ImagePromptMessageContent, - PromptMessage, - TextPromptMessageContent, -) -from graphon.model_runtime.entities.model_entities import ModelPropertyKey -from graphon.model_runtime.errors.invoke import InvokeBadRequestError - from core.app.app_config.entities import ExternalDataVariableEntity, PromptTemplateEntity from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom from core.app.entities.app_invoke_entities import ( @@ -41,6 +30,16 @@ from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, Comp from core.prompt.simple_prompt_transform import ModelMode, SimplePromptTransform from core.tools.tool_file_manager import ToolFileManager from extensions.ext_database import db +from graphon.file import FileTransferMethod, FileType +from graphon.model_runtime.entities.llm_entities import LLMResult, 
LLMResultChunk, LLMResultChunkDelta, LLMUsage +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + ImagePromptMessageContent, + PromptMessage, + TextPromptMessageContent, +) +from graphon.model_runtime.entities.model_entities import ModelPropertyKey +from graphon.model_runtime.errors.invoke import InvokeBadRequestError from models.enums import CreatorUserRole, MessageFileBelongsTo from models.model import App, AppMode, Message, MessageAnnotation, MessageFile diff --git a/api/core/app/apps/chat/app_generator.py b/api/core/app/apps/chat/app_generator.py index 891dcece73..58afefe296 100644 --- a/api/core/app/apps/chat/app_generator.py +++ b/api/core/app/apps/chat/app_generator.py @@ -6,7 +6,6 @@ from collections.abc import Generator, Mapping from typing import Any, Literal, overload from flask import Flask, copy_current_request_context, current_app -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from pydantic import ValidationError from configs import dify_config @@ -24,6 +23,7 @@ from core.app.entities.app_invoke_entities import ChatAppGenerateEntity, InvokeF from core.ops.ops_trace_manager import TraceQueueManager from extensions.ext_database import db from factories import file_factory +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from models import Account from models.model import App, EndUser from services.conversation_service import ConversationService diff --git a/api/core/app/apps/chat/app_runner.py b/api/core/app/apps/chat/app_runner.py index 050f763e95..077c5239f3 100644 --- a/api/core/app/apps/chat/app_runner.py +++ b/api/core/app/apps/chat/app_runner.py @@ -1,8 +1,6 @@ import logging from typing import cast -from graphon.file import File -from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent from sqlalchemy import select from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom @@ -18,6 +16,8 @@ from core.model_manager import ModelInstance from core.moderation.base import ModerationError from core.rag.retrieval.dataset_retrieval import DatasetRetrieval from extensions.ext_database import db +from graphon.file import File +from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent from models.model import App, Conversation, Message logger = logging.getLogger(__name__) diff --git a/api/core/app/apps/chat/generate_response_converter.py b/api/core/app/apps/chat/generate_response_converter.py index f23ee7f89f..3d0375151d 100644 --- a/api/core/app/apps/chat/generate_response_converter.py +++ b/api/core/app/apps/chat/generate_response_converter.py @@ -1,5 +1,5 @@ from collections.abc import Generator -from typing import cast +from typing import Any, cast from core.app.apps.base_app_generate_response_converter import AppGenerateResponseConverter from core.app.entities.task_entities import ( @@ -56,7 +56,7 @@ class ChatAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream full response. 
:param stream_response: stream response @@ -87,7 +87,7 @@ class ChatAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream simple response. :param stream_response: stream response diff --git a/api/core/app/apps/common/graph_runtime_state_support.py b/api/core/app/apps/common/graph_runtime_state_support.py index ab277857fe..2a90fbdad0 100644 --- a/api/core/app/apps/common/graph_runtime_state_support.py +++ b/api/core/app/apps/common/graph_runtime_state_support.py @@ -4,9 +4,8 @@ from __future__ import annotations from typing import TYPE_CHECKING -from graphon.runtime import GraphRuntimeState - from core.workflow.system_variables import SystemVariableKey, get_system_text +from graphon.runtime import GraphRuntimeState if TYPE_CHECKING: from core.app.task_pipeline.based_generate_task_pipeline import BasedGenerateTaskPipeline diff --git a/api/core/app/apps/common/workflow_response_converter.py b/api/core/app/apps/common/workflow_response_converter.py index a515531616..bd685d5189 100644 --- a/api/core/app/apps/common/workflow_response_converter.py +++ b/api/core/app/apps/common/workflow_response_converter.py @@ -6,19 +6,6 @@ from dataclasses import dataclass from datetime import datetime from typing import Any, NewType, TypedDict, Union -from graphon.entities import WorkflowStartReason -from graphon.entities.pause_reason import HumanInputRequired -from graphon.enums import ( - BuiltinNodeTypes, - WorkflowExecutionStatus, - WorkflowNodeExecutionMetadataKey, - WorkflowNodeExecutionStatus, -) -from graphon.file import FILE_MODEL_IDENTITY, File -from graphon.runtime import GraphRuntimeState -from graphon.variables.segments import ArrayFileSegment, FileSegment, Segment -from graphon.variables.variables import Variable -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy import select from sqlalchemy.orm import Session @@ -68,6 +55,19 @@ from core.workflow.human_input_forms import load_form_tokens_by_form_id from core.workflow.system_variables import SystemVariableKey, system_variables_to_mapping from core.workflow.workflow_entry import WorkflowEntry from extensions.ext_database import db +from graphon.entities import WorkflowStartReason +from graphon.entities.pause_reason import HumanInputRequired +from graphon.enums import ( + BuiltinNodeTypes, + WorkflowExecutionStatus, + WorkflowNodeExecutionMetadataKey, + WorkflowNodeExecutionStatus, +) +from graphon.file import FILE_MODEL_IDENTITY, File +from graphon.runtime import GraphRuntimeState +from graphon.variables.segments import ArrayFileSegment, FileSegment, Segment +from graphon.variables.variables import Variable +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from libs.datetime_utils import naive_utc_now from models import Account, EndUser from models.human_input import HumanInputForm diff --git a/api/core/app/apps/completion/app_generator.py b/api/core/app/apps/completion/app_generator.py index 61339b316a..423bfdac51 100644 --- a/api/core/app/apps/completion/app_generator.py +++ b/api/core/app/apps/completion/app_generator.py @@ -6,7 +6,6 @@ from collections.abc import Generator, Mapping from typing import Any, Literal, overload from flask import Flask, copy_current_request_context, current_app -from graphon.model_runtime.errors.invoke import 
InvokeAuthorizationError from pydantic import ValidationError from sqlalchemy import select @@ -24,6 +23,7 @@ from core.app.entities.app_invoke_entities import CompletionAppGenerateEntity, I from core.ops.ops_trace_manager import TraceQueueManager from extensions.ext_database import db from factories import file_factory +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from models import Account, App, EndUser, Message from services.errors.app import MoreLikeThisDisabledError from services.errors.message import MessageNotExistsError diff --git a/api/core/app/apps/completion/app_runner.py b/api/core/app/apps/completion/app_runner.py index b216f7cf7b..6bb1ecdcb1 100644 --- a/api/core/app/apps/completion/app_runner.py +++ b/api/core/app/apps/completion/app_runner.py @@ -1,8 +1,6 @@ import logging from typing import cast -from graphon.file import File -from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent from sqlalchemy import select from core.app.apps.base_app_queue_manager import AppQueueManager @@ -16,6 +14,8 @@ from core.model_manager import ModelInstance from core.moderation.base import ModerationError from core.rag.retrieval.dataset_retrieval import DatasetRetrieval from extensions.ext_database import db +from graphon.file import File +from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent from models.model import App, Message logger = logging.getLogger(__name__) diff --git a/api/core/app/apps/completion/generate_response_converter.py b/api/core/app/apps/completion/generate_response_converter.py index a4f574642d..71886b39ba 100644 --- a/api/core/app/apps/completion/generate_response_converter.py +++ b/api/core/app/apps/completion/generate_response_converter.py @@ -1,5 +1,5 @@ from collections.abc import Generator -from typing import cast +from typing import Any, cast from core.app.apps.base_app_generate_response_converter import AppGenerateResponseConverter from core.app.entities.task_entities import ( @@ -55,7 +55,7 @@ class CompletionAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream full response. :param stream_response: stream response @@ -85,7 +85,7 @@ class CompletionAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream simple response. 
:param stream_response: stream response diff --git a/api/core/app/apps/pipeline/generate_response_converter.py b/api/core/app/apps/pipeline/generate_response_converter.py index cfacd8640d..02b3160b7c 100644 --- a/api/core/app/apps/pipeline/generate_response_converter.py +++ b/api/core/app/apps/pipeline/generate_response_converter.py @@ -1,5 +1,5 @@ from collections.abc import Generator -from typing import cast +from typing import Any, cast from core.app.apps.base_app_generate_response_converter import AppGenerateResponseConverter from core.app.entities.task_entities import ( @@ -17,7 +17,7 @@ class WorkflowAppGenerateResponseConverter(AppGenerateResponseConverter): _blocking_response_type = WorkflowAppBlockingResponse @classmethod - def convert_blocking_full_response(cls, blocking_response: WorkflowAppBlockingResponse) -> dict: # type: ignore[override] + def convert_blocking_full_response(cls, blocking_response: WorkflowAppBlockingResponse) -> dict[str, Any]: # type: ignore[override] """ Convert blocking full response. :param blocking_response: blocking response @@ -26,7 +26,7 @@ class WorkflowAppGenerateResponseConverter(AppGenerateResponseConverter): return dict(blocking_response.model_dump()) @classmethod - def convert_blocking_simple_response(cls, blocking_response: WorkflowAppBlockingResponse) -> dict: # type: ignore[override] + def convert_blocking_simple_response(cls, blocking_response: WorkflowAppBlockingResponse) -> dict[str, Any]: # type: ignore[override] """ Convert blocking simple response. :param blocking_response: blocking response @@ -37,7 +37,7 @@ class WorkflowAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream full response. :param stream_response: stream response @@ -66,7 +66,7 @@ class WorkflowAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream simple response. 
:param stream_response: stream response diff --git a/api/core/app/apps/pipeline/pipeline_config_manager.py b/api/core/app/apps/pipeline/pipeline_config_manager.py index 72b7f4bef6..8bbd745538 100644 --- a/api/core/app/apps/pipeline/pipeline_config_manager.py +++ b/api/core/app/apps/pipeline/pipeline_config_manager.py @@ -1,3 +1,5 @@ +from typing import Any + from core.app.app_config.base_app_config_manager import BaseAppConfigManager from core.app.app_config.common.sensitive_word_avoidance.manager import SensitiveWordAvoidanceConfigManager from core.app.app_config.entities import RagPipelineVariableEntity, WorkflowUIBasedAppConfig @@ -34,7 +36,9 @@ class PipelineConfigManager(BaseAppConfigManager): return pipeline_config @classmethod - def config_validate(cls, tenant_id: str, config: dict, only_structure_validate: bool = False) -> dict: + def config_validate( + cls, tenant_id: str, config: dict[str, Any], only_structure_validate: bool = False + ) -> dict[str, Any]: """ Validate for pipeline config diff --git a/api/core/app/apps/pipeline/pipeline_generator.py b/api/core/app/apps/pipeline/pipeline_generator.py index 139c7e73e0..4b2f17189b 100644 --- a/api/core/app/apps/pipeline/pipeline_generator.py +++ b/api/core/app/apps/pipeline/pipeline_generator.py @@ -10,8 +10,6 @@ from collections.abc import Generator, Mapping from typing import Any, Literal, cast, overload from flask import Flask, current_app -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError -from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader from pydantic import ValidationError from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker @@ -43,6 +41,8 @@ from core.repositories.factory import ( WorkflowNodeExecutionRepository, ) from extensions.ext_database import db +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError +from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader from libs.flask_utils import preserve_flask_contexts from models import Account, EndUser, Workflow, WorkflowNodeExecutionTriggeredFrom from models.dataset import Document, DocumentPipelineExecutionLog, Pipeline @@ -782,7 +782,7 @@ class PipelineGenerator(BaseAppGenerator): user_id: str, all_files: list, datasource_info: Mapping[str, Any], - next_page_parameters: dict | None = None, + next_page_parameters: dict[str, Any] | None = None, ): """ Get files in a folder. 
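The hunks in this part of the patch repeatedly apply the same two annotation moves: bare `dict` parameters and returns become `dict[str, Any]`, and the stream converters are typed as `Generator[dict[str, Any] | str, None, None]`. A minimal, self-contained sketch of that annotation style is below; the names `ExampleConfigManager` and `convert_stream` are hypothetical and are not part of this repository.

```python
from collections.abc import Generator
from typing import Any


class ExampleConfigManager:
    """Hypothetical manager showing the dict[str, Any] parameter/return style."""

    @classmethod
    def validate_and_set_defaults(cls, config: dict[str, Any]) -> tuple[dict[str, Any], list[str]]:
        # Fill in a default instead of failing when the key is absent.
        config.setdefault("more_like_this", {"enabled": False})
        return config, ["more_like_this"]


def convert_stream(chunks: list[dict[str, Any]]) -> Generator[dict[str, Any] | str, None, None]:
    """Yield either structured events or plain strings, mirroring the converter signatures."""
    for chunk in chunks:
        yield chunk
    yield "done"


if __name__ == "__main__":
    cfg, keys = ExampleConfigManager.validate_and_set_defaults({})
    print(cfg, keys)
    print(list(convert_stream([{"event": "message"}])))
```

Under strict type-checker settings that flag bare generics, `dict[str, Any]` documents the key type explicitly while leaving the values unconstrained, which fits heterogeneous config payloads like the ones these managers validate.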
diff --git a/api/core/app/apps/pipeline/pipeline_runner.py b/api/core/app/apps/pipeline/pipeline_runner.py index 36daaf09e9..2ee0ae27eb 100644 --- a/api/core/app/apps/pipeline/pipeline_runner.py +++ b/api/core/app/apps/pipeline/pipeline_runner.py @@ -2,12 +2,6 @@ import logging import time from typing import cast -from graphon.enums import WorkflowType -from graphon.graph import Graph -from graphon.graph_events import GraphEngineEvent, GraphRunFailedEvent -from graphon.runtime import GraphRuntimeState, VariablePool -from graphon.variable_loader import VariableLoader -from graphon.variables.variables import RAGPipelineVariable, RAGPipelineVariableInput from sqlalchemy import select from core.app.apps.base_app_queue_manager import AppQueueManager @@ -26,6 +20,12 @@ from core.workflow.system_variables import build_bootstrap_variables, build_syst from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool from core.workflow.workflow_entry import WorkflowEntry from extensions.ext_database import db +from graphon.enums import WorkflowType +from graphon.graph import Graph +from graphon.graph_events import GraphEngineEvent, GraphRunFailedEvent +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variable_loader import VariableLoader +from graphon.variables.variables import RAGPipelineVariable, RAGPipelineVariableInput from models.dataset import Document, Pipeline from models.model import EndUser from models.workflow import Workflow diff --git a/api/core/app/apps/workflow/app_generator.py b/api/core/app/apps/workflow/app_generator.py index 6074e81d1e..6937014a06 100644 --- a/api/core/app/apps/workflow/app_generator.py +++ b/api/core/app/apps/workflow/app_generator.py @@ -8,10 +8,6 @@ from collections.abc import Generator, Mapping, Sequence from typing import TYPE_CHECKING, Any, Literal, overload from flask import Flask, current_app -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError -from graphon.runtime import GraphRuntimeState -from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader from pydantic import ValidationError from sqlalchemy import select from sqlalchemy.orm import sessionmaker @@ -38,6 +34,10 @@ from core.repositories import DifyCoreRepositoryFactory from core.repositories.factory import WorkflowExecutionRepository, WorkflowNodeExecutionRepository from extensions.ext_database import db from factories import file_factory +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError +from graphon.runtime import GraphRuntimeState +from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader from libs.flask_utils import preserve_flask_contexts from models.account import Account from models.enums import WorkflowRunTriggeredFrom diff --git a/api/core/app/apps/workflow/app_runner.py b/api/core/app/apps/workflow/app_runner.py index 2cb8088971..cfb9208486 100644 --- a/api/core/app/apps/workflow/app_runner.py +++ b/api/core/app/apps/workflow/app_runner.py @@ -3,12 +3,6 @@ import time from collections.abc import Sequence from typing import cast -from graphon.enums import WorkflowType -from graphon.graph_engine.command_channels import RedisChannel -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.runtime import GraphRuntimeState, VariablePool -from graphon.variable_loader import VariableLoader - from core.app.apps.base_app_queue_manager 
import AppQueueManager from core.app.apps.workflow.app_config_manager import WorkflowAppConfig from core.app.apps.workflow_app_runner import WorkflowBasedAppRunner @@ -21,6 +15,11 @@ from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add from core.workflow.workflow_entry import WorkflowEntry from extensions.ext_redis import redis_client from extensions.otel import WorkflowAppRunnerHandler, trace_span +from graphon.enums import WorkflowType +from graphon.graph_engine.command_channels import RedisChannel +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variable_loader import VariableLoader from libs.datetime_utils import naive_utc_now from models.workflow import Workflow diff --git a/api/core/app/apps/workflow/generate_response_converter.py b/api/core/app/apps/workflow/generate_response_converter.py index c64f44a603..c69826cbef 100644 --- a/api/core/app/apps/workflow/generate_response_converter.py +++ b/api/core/app/apps/workflow/generate_response_converter.py @@ -1,5 +1,5 @@ from collections.abc import Generator -from typing import cast +from typing import Any, cast from core.app.apps.base_app_generate_response_converter import AppGenerateResponseConverter from core.app.entities.task_entities import ( @@ -37,7 +37,7 @@ class WorkflowAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_full_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream full response. :param stream_response: stream response @@ -66,7 +66,7 @@ class WorkflowAppGenerateResponseConverter(AppGenerateResponseConverter): @classmethod def convert_stream_simple_response( cls, stream_response: Generator[AppStreamResponse, None, None] - ) -> Generator[dict | str, None, None]: + ) -> Generator[dict[str, Any] | str, None, None]: """ Convert stream simple response. 
:param stream_response: stream response
diff --git a/api/core/app/apps/workflow/generate_task_pipeline.py b/api/core/app/apps/workflow/generate_task_pipeline.py
index f1b8b08eaa..15645add57 100644
--- a/api/core/app/apps/workflow/generate_task_pipeline.py
+++ b/api/core/app/apps/workflow/generate_task_pipeline.py
@@ -4,9 +4,6 @@ from collections.abc import Callable, Generator
 from contextlib import contextmanager
 from typing import Union
 
-from graphon.entities import WorkflowStartReason
-from graphon.enums import WorkflowExecutionStatus
-from graphon.runtime import GraphRuntimeState
 from sqlalchemy.orm import Session, sessionmaker
 
 from constants.tts_auto_play_timeout import TTS_AUTO_PLAY_TIMEOUT, TTS_AUTO_PLAY_YIELD_CPU_TIME
@@ -61,6 +58,9 @@ from core.base.tts import AppGeneratorTTSPublisher, AudioTrunk
 from core.ops.ops_trace_manager import TraceQueueManager
 from core.workflow.system_variables import build_system_variables
 from extensions.ext_database import db
+from graphon.entities import WorkflowStartReason
+from graphon.enums import WorkflowExecutionStatus
+from graphon.runtime import GraphRuntimeState
 from models import Account
 from models.enums import CreatorUserRole
 from models.model import EndUser
@@ -682,15 +682,16 @@ class WorkflowAppGenerateTaskPipeline(GraphRuntimeStateSupport):
 
     def _save_workflow_app_log(self, *, session: Session, workflow_run_id: str | None):
         invoke_from = self._application_generate_entity.invoke_from
-        if invoke_from == InvokeFrom.SERVICE_API:
-            created_from = WorkflowAppLogCreatedFrom.SERVICE_API
-        elif invoke_from == InvokeFrom.EXPLORE:
-            created_from = WorkflowAppLogCreatedFrom.INSTALLED_APP
-        elif invoke_from == InvokeFrom.WEB_APP:
-            created_from = WorkflowAppLogCreatedFrom.WEB_APP
-        else:
-            # not save log for debugging
-            return
+        match invoke_from:
+            case InvokeFrom.SERVICE_API:
+                created_from = WorkflowAppLogCreatedFrom.SERVICE_API
+            case InvokeFrom.EXPLORE:
+                created_from = WorkflowAppLogCreatedFrom.INSTALLED_APP
+            case InvokeFrom.WEB_APP:
+                created_from = WorkflowAppLogCreatedFrom.WEB_APP
+            case InvokeFrom.DEBUGGER | InvokeFrom.TRIGGER | InvokeFrom.PUBLISHED_PIPELINE | InvokeFrom.VALIDATION:
+                # not save log for debugging
+                return
 
         if not workflow_run_id:
             return
diff --git a/api/core/app/apps/workflow_app_runner.py b/api/core/app/apps/workflow_app_runner.py
index 437432611d..047b54c86c 100644
--- a/api/core/app/apps/workflow_app_runner.py
+++ b/api/core/app/apps/workflow_app_runner.py
@@ -3,39 +3,6 @@ import time
 from collections.abc import Mapping, Sequence
 from typing import Any, cast
 
-from graphon.entities.graph_config import NodeConfigDictAdapter
-from graphon.entities.pause_reason import HumanInputRequired
-from graphon.graph import Graph
-from graphon.graph_engine.layers import GraphEngineLayer
-from graphon.graph_events import (
-    GraphEngineEvent,
-    GraphRunAbortedEvent,
-    GraphRunFailedEvent,
-    GraphRunPartialSucceededEvent,
-    GraphRunPausedEvent,
-    GraphRunStartedEvent,
-    GraphRunSucceededEvent,
-    NodeRunAgentLogEvent,
-    NodeRunExceptionEvent,
-    NodeRunFailedEvent,
-    NodeRunHumanInputFormFilledEvent,
-    NodeRunHumanInputFormTimeoutEvent,
-    NodeRunIterationFailedEvent,
-    NodeRunIterationNextEvent,
-    NodeRunIterationStartedEvent,
-    NodeRunIterationSucceededEvent,
-    NodeRunLoopFailedEvent,
-    NodeRunLoopNextEvent,
-    NodeRunLoopStartedEvent,
-    NodeRunLoopSucceededEvent,
-    NodeRunRetrieverResourceEvent,
-    NodeRunRetryEvent,
-    NodeRunStartedEvent,
-    NodeRunStreamChunkEvent,
-    NodeRunSucceededEvent,
-)
-from graphon.runtime import
GraphRuntimeState, VariablePool -from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader, load_into_variable_pool from pydantic import ValidationError from core.app.apps.base_app_queue_manager import AppQueueManager, PublishFrom @@ -82,6 +49,39 @@ from core.workflow.system_variables import ( from core.workflow.variable_pool_initializer import add_variables_to_pool from core.workflow.workflow_entry import WorkflowEntry from core.workflow.workflow_run_outputs import project_node_outputs_for_workflow_run +from graphon.entities.graph_config import NodeConfigDictAdapter +from graphon.entities.pause_reason import HumanInputRequired +from graphon.graph import Graph +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.graph_events import ( + GraphEngineEvent, + GraphRunAbortedEvent, + GraphRunFailedEvent, + GraphRunPartialSucceededEvent, + GraphRunPausedEvent, + GraphRunStartedEvent, + GraphRunSucceededEvent, + NodeRunAgentLogEvent, + NodeRunExceptionEvent, + NodeRunFailedEvent, + NodeRunHumanInputFormFilledEvent, + NodeRunHumanInputFormTimeoutEvent, + NodeRunIterationFailedEvent, + NodeRunIterationNextEvent, + NodeRunIterationStartedEvent, + NodeRunIterationSucceededEvent, + NodeRunLoopFailedEvent, + NodeRunLoopNextEvent, + NodeRunLoopStartedEvent, + NodeRunLoopSucceededEvent, + NodeRunRetrieverResourceEvent, + NodeRunRetryEvent, + NodeRunStartedEvent, + NodeRunStreamChunkEvent, + NodeRunSucceededEvent, +) +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader, load_into_variable_pool from models.workflow import Workflow from tasks.mail_human_input_delivery_task import dispatch_human_input_email_task diff --git a/api/core/app/entities/app_invoke_entities.py b/api/core/app/entities/app_invoke_entities.py index a3fb7b4c5d..09992f4bbf 100644 --- a/api/core/app/entities/app_invoke_entities.py +++ b/api/core/app/entities/app_invoke_entities.py @@ -2,13 +2,13 @@ from collections.abc import Mapping, Sequence from enum import StrEnum from typing import TYPE_CHECKING, Any -from graphon.file import File, FileUploadConfig -from graphon.model_runtime.entities.model_entities import AIModelEntity from pydantic import BaseModel, ConfigDict, Field, ValidationInfo, field_validator from constants import UUID_NIL from core.app.app_config.entities import EasyUIBasedAppConfig, WorkflowUIBasedAppConfig from core.entities.provider_configuration import ProviderModelBundle +from graphon.file import File, FileUploadConfig +from graphon.model_runtime.entities.model_entities import AIModelEntity if TYPE_CHECKING: from core.ops.ops_trace_manager import TraceQueueManager diff --git a/api/core/app/entities/queue_entities.py b/api/core/app/entities/queue_entities.py index 482f995d8e..221b7fb058 100644 --- a/api/core/app/entities/queue_entities.py +++ b/api/core/app/entities/queue_entities.py @@ -3,14 +3,14 @@ from datetime import datetime from enum import StrEnum, auto from typing import Any -from graphon.entities import WorkflowStartReason -from graphon.entities.pause_reason import PauseReason -from graphon.enums import NodeType, WorkflowNodeExecutionMetadataKey -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk from pydantic import BaseModel, ConfigDict, Field from core.app.entities.agent_strategy import AgentStrategyInfo from core.rag.entities import RetrievalSourceMetadata +from graphon.entities import WorkflowStartReason +from graphon.entities.pause_reason import PauseReason 
+from graphon.enums import NodeType, WorkflowNodeExecutionMetadataKey +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk class QueueEvent(StrEnum): diff --git a/api/core/app/entities/task_entities.py b/api/core/app/entities/task_entities.py index 62df85b13f..6e4ca69cf0 100644 --- a/api/core/app/entities/task_entities.py +++ b/api/core/app/entities/task_entities.py @@ -2,14 +2,14 @@ from collections.abc import Mapping, Sequence from enum import StrEnum from typing import Any -from graphon.entities import WorkflowStartReason -from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage -from graphon.nodes.human_input.entities import FormInput, UserAction from pydantic import BaseModel, ConfigDict, Field from core.app.entities.agent_strategy import AgentStrategyInfo from core.rag.entities import RetrievalSourceMetadata +from graphon.entities import WorkflowStartReason +from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage +from graphon.nodes.human_input.entities import FormInput, UserAction class AnnotationReplyAccount(BaseModel): @@ -521,7 +521,7 @@ class IterationNodeStartStreamResponse(StreamResponse): node_type: str title: str created_at: int - extras: dict = Field(default_factory=dict) + extras: dict[str, Any] = Field(default_factory=dict) metadata: Mapping = {} inputs: Mapping = {} inputs_truncated: bool = False @@ -547,7 +547,7 @@ class IterationNodeNextStreamResponse(StreamResponse): title: str index: int created_at: int - extras: dict = Field(default_factory=dict) + extras: dict[str, Any] = Field(default_factory=dict) event: StreamEvent = StreamEvent.ITERATION_NEXT workflow_run_id: str @@ -571,7 +571,7 @@ class IterationNodeCompletedStreamResponse(StreamResponse): outputs: Mapping | None = None outputs_truncated: bool = False created_at: int - extras: dict | None = None + extras: dict[str, Any] | None = None inputs: Mapping | None = None inputs_truncated: bool = False status: WorkflowNodeExecutionStatus @@ -602,7 +602,7 @@ class LoopNodeStartStreamResponse(StreamResponse): node_type: str title: str created_at: int - extras: dict = Field(default_factory=dict) + extras: dict[str, Any] = Field(default_factory=dict) metadata: Mapping = {} inputs: Mapping = {} inputs_truncated: bool = False @@ -653,7 +653,7 @@ class LoopNodeCompletedStreamResponse(StreamResponse): outputs: Mapping | None = None outputs_truncated: bool = False created_at: int - extras: dict | None = None + extras: dict[str, Any] | None = None inputs: Mapping | None = None inputs_truncated: bool = False status: WorkflowNodeExecutionStatus diff --git a/api/core/app/features/hosting_moderation/hosting_moderation.py b/api/core/app/features/hosting_moderation/hosting_moderation.py index d2d2fea4fb..d59f5125e3 100644 --- a/api/core/app/features/hosting_moderation/hosting_moderation.py +++ b/api/core/app/features/hosting_moderation/hosting_moderation.py @@ -1,9 +1,8 @@ import logging -from graphon.model_runtime.entities.message_entities import PromptMessage - from core.app.entities.app_invoke_entities import EasyUIBasedAppGenerateEntity from core.helper import moderation +from graphon.model_runtime.entities.message_entities import PromptMessage logger = logging.getLogger(__name__) diff --git 
a/api/core/app/layers/conversation_variable_persist_layer.py b/api/core/app/layers/conversation_variable_persist_layer.py index e09869f5f8..d5e6b04a4a 100644 --- a/api/core/app/layers/conversation_variable_persist_layer.py +++ b/api/core/app/layers/conversation_variable_persist_layer.py @@ -9,11 +9,10 @@ scope updates that matter to chat applications. import logging -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.graph_events import GraphEngineEvent, NodeRunVariableUpdatedEvent - from core.workflow.system_variables import SystemVariableKey, get_system_text from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.graph_events import GraphEngineEvent, NodeRunVariableUpdatedEvent from services.conversation_variable_updater import ConversationVariableUpdater logger = logging.getLogger(__name__) diff --git a/api/core/app/layers/pause_state_persist_layer.py b/api/core/app/layers/pause_state_persist_layer.py index c027f42788..9811f9f830 100644 --- a/api/core/app/layers/pause_state_persist_layer.py +++ b/api/core/app/layers/pause_state_persist_layer.py @@ -1,14 +1,14 @@ from dataclasses import dataclass from typing import Annotated, Literal, Self -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.graph_events import GraphEngineEvent, GraphRunPausedEvent from pydantic import BaseModel, Field from sqlalchemy import Engine from sqlalchemy.orm import Session, sessionmaker from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, WorkflowAppGenerateEntity from core.workflow.system_variables import SystemVariableKey, get_system_text +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.graph_events import GraphEngineEvent, GraphRunPausedEvent from models.model import AppMode from repositories.api_workflow_run_repository import APIWorkflowRunRepository from repositories.factory import DifyAPIRepositoryFactory diff --git a/api/core/app/layers/timeslice_layer.py b/api/core/app/layers/timeslice_layer.py index 8c8daf8712..bb9fc1b6fa 100644 --- a/api/core/app/layers/timeslice_layer.py +++ b/api/core/app/layers/timeslice_layer.py @@ -3,10 +3,10 @@ import uuid from typing import ClassVar from apscheduler.schedulers.background import BackgroundScheduler # type: ignore + from graphon.graph_engine.entities.commands import CommandType, GraphEngineCommand from graphon.graph_engine.layers import GraphEngineLayer from graphon.graph_events import GraphEngineEvent - from services.workflow.entities import WorkflowScheduleCFSPlanEntity from services.workflow.scheduler import CFSPlanScheduler, SchedulerCommand diff --git a/api/core/app/layers/trigger_post_layer.py b/api/core/app/layers/trigger_post_layer.py index 77c7bec67e..b60fe82ffe 100644 --- a/api/core/app/layers/trigger_post_layer.py +++ b/api/core/app/layers/trigger_post_layer.py @@ -2,12 +2,12 @@ import logging from datetime import UTC, datetime from typing import Any, ClassVar -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.graph_events import GraphEngineEvent, GraphRunFailedEvent, GraphRunPausedEvent, GraphRunSucceededEvent from pydantic import TypeAdapter from core.db.session_factory import session_factory from core.workflow.system_variables import SystemVariableKey, get_system_text +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.graph_events import GraphEngineEvent, GraphRunFailedEvent, GraphRunPausedEvent, GraphRunSucceededEvent from 
models.enums import WorkflowTriggerStatus from repositories.sqlalchemy_workflow_trigger_log_repository import SQLAlchemyWorkflowTriggerLogRepository from tasks.workflow_cfs_scheduler.cfs_scheduler import AsyncWorkflowCFSPlanEntity diff --git a/api/core/app/llm/model_access.py b/api/core/app/llm/model_access.py index 278d0cb30b..c49c4eb0ac 100644 --- a/api/core/app/llm/model_access.py +++ b/api/core/app/llm/model_access.py @@ -2,16 +2,15 @@ from __future__ import annotations from typing import Any -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.nodes.llm.entities import ModelConfig -from graphon.nodes.llm.exc import LLMModeRequiredError, ModelNotExistError -from graphon.nodes.llm.protocols import CredentialsProvider - from core.app.entities.app_invoke_entities import DifyRunContext, ModelConfigWithCredentialsEntity from core.errors.error import ProviderTokenNotInitError from core.model_manager import ModelInstance, ModelManager from core.plugin.impl.model_runtime_factory import create_plugin_provider_manager from core.provider_manager import ProviderManager +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.nodes.llm.entities import ModelConfig +from graphon.nodes.llm.exc import LLMModeRequiredError, ModelNotExistError +from graphon.nodes.llm.protocols import CredentialsProvider class DifyCredentialsProvider: diff --git a/api/core/app/llm/quota.py b/api/core/app/llm/quota.py index 0bb10190c4..b6039e1e4e 100644 --- a/api/core/app/llm/quota.py +++ b/api/core/app/llm/quota.py @@ -1,4 +1,3 @@ -from graphon.model_runtime.entities.llm_entities import LLMUsage from sqlalchemy import update from sqlalchemy.orm import sessionmaker @@ -8,6 +7,7 @@ from core.entities.provider_entities import ProviderQuotaType, QuotaUnit from core.errors.error import QuotaExceededError from core.model_manager import ModelInstance from extensions.ext_database import db +from graphon.model_runtime.entities.llm_entities import LLMUsage from libs.datetime_utils import naive_utc_now from models.provider import Provider, ProviderType from models.provider_ids import ModelProviderID diff --git a/api/core/app/task_pipeline/based_generate_task_pipeline.py b/api/core/app/task_pipeline/based_generate_task_pipeline.py index 10b9c36d3e..9e688589db 100644 --- a/api/core/app/task_pipeline/based_generate_task_pipeline.py +++ b/api/core/app/task_pipeline/based_generate_task_pipeline.py @@ -1,7 +1,6 @@ import logging import time -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from sqlalchemy import select from sqlalchemy.orm import Session @@ -18,6 +17,7 @@ from core.app.entities.task_entities import ( ) from core.errors.error import QuotaExceededError from core.moderation.output_moderation import ModerationRule, OutputModeration +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from models.enums import MessageStatus from models.model import Message diff --git a/api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py b/api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py index 6bb177fe02..dfe6133cb6 100644 --- a/api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py +++ b/api/core/app/task_pipeline/easy_ui_based_generate_task_pipeline.py @@ -4,13 +4,6 @@ from collections.abc import Generator from threading import Thread from typing import Any, cast -from graphon.file import FileTransferMethod -from graphon.model_runtime.entities.llm_entities import LLMResult, 
LLMResultChunk, LLMResultChunkDelta, LLMUsage -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - TextPromptMessageContent, -) -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker @@ -60,6 +53,13 @@ from core.prompt.utils.prompt_message_util import PromptMessageUtil from core.prompt.utils.prompt_template_parser import PromptTemplateParser from events.message_event import message_was_created from extensions.ext_database import db +from graphon.file import FileTransferMethod +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + TextPromptMessageContent, +) +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from libs.datetime_utils import naive_utc_now from models.model import AppMode, Conversation, Message, MessageAgentThought, MessageFile, UploadFile diff --git a/api/core/app/task_pipeline/message_file_utils.py b/api/core/app/task_pipeline/message_file_utils.py index 77310baf74..1dd713821f 100644 --- a/api/core/app/task_pipeline/message_file_utils.py +++ b/api/core/app/task_pipeline/message_file_utils.py @@ -1,9 +1,8 @@ from typing import TypedDict +from core.tools.signature import sign_tool_file from graphon.file import FileTransferMethod from graphon.file import helpers as file_helpers - -from core.tools.signature import sign_tool_file from models.model import MessageFile, UploadFile MAX_TOOL_FILE_EXTENSION_LENGTH = 10 diff --git a/api/core/app/workflow/file_runtime.py b/api/core/app/workflow/file_runtime.py index 8604235ef2..68e5e5f0c8 100644 --- a/api/core/app/workflow/file_runtime.py +++ b/api/core/app/workflow/file_runtime.py @@ -9,10 +9,6 @@ import urllib.parse from collections.abc import Generator from typing import TYPE_CHECKING, Literal -from graphon.file import FileTransferMethod -from graphon.file.protocols import HttpResponseProtocol, WorkflowFileRuntimeProtocol -from graphon.file.runtime import set_workflow_file_runtime - from configs import dify_config from core.app.file_access import DatabaseFileAccessController, FileAccessControllerProtocol from core.db.session_factory import session_factory @@ -20,6 +16,9 @@ from core.helper.ssrf_proxy import ssrf_proxy from core.tools.signature import sign_tool_file from core.workflow.file_reference import parse_file_reference from extensions.ext_storage import storage +from graphon.file import FileTransferMethod +from graphon.file.protocols import HttpResponseProtocol, WorkflowFileRuntimeProtocol +from graphon.file.runtime import set_workflow_file_runtime if TYPE_CHECKING: from graphon.file import File diff --git a/api/core/app/workflow/layers/llm_quota.py b/api/core/app/workflow/layers/llm_quota.py index c577ce0754..4a7918032e 100644 --- a/api/core/app/workflow/layers/llm_quota.py +++ b/api/core/app/workflow/layers/llm_quota.py @@ -7,17 +7,16 @@ This layer centralizes model-quota deduction outside node implementations. 
import logging from typing import TYPE_CHECKING, cast, final, override +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext +from core.app.llm import deduct_llm_quota, ensure_llm_quota_available +from core.errors.error import QuotaExceededError +from core.model_manager import ModelInstance from graphon.enums import BuiltinNodeTypes from graphon.graph_engine.entities.commands import AbortCommand, CommandType from graphon.graph_engine.layers import GraphEngineLayer from graphon.graph_events import GraphEngineEvent, GraphNodeEventBase, NodeRunSucceededEvent from graphon.nodes.base.node import Node -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext -from core.app.llm import deduct_llm_quota, ensure_llm_quota_available -from core.errors.error import QuotaExceededError -from core.model_manager import ModelInstance - if TYPE_CHECKING: from graphon.nodes.llm.node import LLMNode from graphon.nodes.parameter_extractor.parameter_extractor_node import ParameterExtractorNode diff --git a/api/core/app/workflow/layers/observability.py b/api/core/app/workflow/layers/observability.py index 99e8015c0b..8b5a5b9d7f 100644 --- a/api/core/app/workflow/layers/observability.py +++ b/api/core/app/workflow/layers/observability.py @@ -12,10 +12,6 @@ from contextvars import Token from dataclasses import dataclass from typing import cast, final, override -from graphon.enums import BuiltinNodeTypes, NodeType -from graphon.graph_engine.layers import GraphEngineLayer -from graphon.graph_events import GraphNodeEventBase -from graphon.nodes.base.node import Node from opentelemetry import context as context_api from opentelemetry.trace import Span, SpanKind, Tracer, get_tracer, set_span_in_context @@ -28,6 +24,10 @@ from extensions.otel.parser import ( ToolNodeOTelParser, ) from extensions.otel.runtime import is_instrument_flag_enabled +from graphon.enums import BuiltinNodeTypes, NodeType +from graphon.graph_engine.layers import GraphEngineLayer +from graphon.graph_events import GraphNodeEventBase +from graphon.nodes.base.node import Node logger = logging.getLogger(__name__) diff --git a/api/core/app/workflow/layers/persistence.py b/api/core/app/workflow/layers/persistence.py index ada065a943..87f005a250 100644 --- a/api/core/app/workflow/layers/persistence.py +++ b/api/core/app/workflow/layers/persistence.py @@ -14,6 +14,13 @@ from dataclasses import dataclass from datetime import datetime from typing import Any, Union +from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, WorkflowAppGenerateEntity +from core.ops.entities.trace_entity import TraceTaskName +from core.ops.ops_trace_manager import TraceQueueManager, TraceTask +from core.repositories.factory import WorkflowExecutionRepository, WorkflowNodeExecutionRepository +from core.workflow.system_variables import SystemVariableKey +from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID +from core.workflow.workflow_run_outputs import project_node_outputs_for_workflow_run from graphon.entities import WorkflowExecution, WorkflowNodeExecution from graphon.enums import ( WorkflowExecutionStatus, @@ -38,14 +45,6 @@ from graphon.graph_events import ( NodeRunSucceededEvent, ) from graphon.node_events import NodeRunResult - -from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, WorkflowAppGenerateEntity -from core.ops.entities.trace_entity import TraceTaskName -from core.ops.ops_trace_manager import TraceQueueManager, TraceTask -from 
core.repositories.factory import WorkflowExecutionRepository, WorkflowNodeExecutionRepository -from core.workflow.system_variables import SystemVariableKey -from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID -from core.workflow.workflow_run_outputs import project_node_outputs_for_workflow_run from libs.datetime_utils import naive_utc_now diff --git a/api/core/base/tts/app_generator_tts_publisher.py b/api/core/base/tts/app_generator_tts_publisher.py index 3d8a7a54f3..9e3c187210 100644 --- a/api/core/base/tts/app_generator_tts_publisher.py +++ b/api/core/base/tts/app_generator_tts_publisher.py @@ -6,9 +6,6 @@ import re import threading from collections.abc import Iterable -from graphon.model_runtime.entities.message_entities import TextPromptMessageContent -from graphon.model_runtime.entities.model_entities import ModelType - from core.app.entities.queue_entities import ( MessageQueueMessage, QueueAgentMessageEvent, @@ -18,6 +15,8 @@ from core.app.entities.queue_entities import ( WorkflowQueueMessage, ) from core.model_manager import ModelInstance, ModelManager +from graphon.model_runtime.entities.message_entities import TextPromptMessageContent +from graphon.model_runtime.entities.model_entities import ModelType class AudioTrunk: diff --git a/api/core/datasource/datasource_manager.py b/api/core/datasource/datasource_manager.py index a5297fa33a..dc831e5cac 100644 --- a/api/core/datasource/datasource_manager.py +++ b/api/core/datasource/datasource_manager.py @@ -3,9 +3,6 @@ from collections.abc import Generator from threading import Lock from typing import Any, cast -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus -from graphon.file import File, FileTransferMethod, FileType, get_file_type_by_mime_type -from graphon.node_events import NodeRunResult, StreamChunkEvent, StreamCompletedEvent from sqlalchemy import select import contexts @@ -31,6 +28,9 @@ from core.plugin.impl.datasource import PluginDatasourceManager from core.workflow.file_reference import build_file_reference from core.workflow.nodes.datasource.entities import DatasourceParameter, OnlineDriveDownloadFileParam from factories import file_factory +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus +from graphon.file import File, FileTransferMethod, FileType, get_file_type_by_mime_type +from graphon.node_events import NodeRunResult, StreamChunkEvent, StreamCompletedEvent from models.model import UploadFile from models.tools import ToolFile from services.datasource_provider_service import DatasourceProviderService diff --git a/api/core/datasource/entities/api_entities.py b/api/core/datasource/entities/api_entities.py index 890f1ca319..352e6bfd49 100644 --- a/api/core/datasource/entities/api_entities.py +++ b/api/core/datasource/entities/api_entities.py @@ -1,10 +1,10 @@ from typing import Any, Literal, TypedDict -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field, field_validator from core.datasource.entities.datasource_entities import DatasourceParameter from core.tools.entities.common_entities import I18nObject, I18nObjectDict +from graphon.model_runtime.utils.encoders import jsonable_encoder class DatasourceApiEntity(BaseModel): @@ -14,7 +14,7 @@ class DatasourceApiEntity(BaseModel): description: I18nObject parameters: list[DatasourceParameter] | None = None labels: list[str] = Field(default_factory=list) - output_schema: dict | None = None + output_schema: dict[str, Any] | None = None 
ToolProviderTypeApiLiteral = Literal["builtin", "api", "workflow"] | None @@ -30,7 +30,7 @@ class DatasourceProviderApiEntityDict(TypedDict): icon: str | dict label: I18nObjectDict type: str - team_credentials: dict | None + team_credentials: dict[str, Any] | None is_team_authorization: bool allow_delete: bool datasources: list[Any] @@ -45,8 +45,8 @@ class DatasourceProviderApiEntity(BaseModel): icon: str | dict label: I18nObject # label type: str - masked_credentials: dict | None = None - original_credentials: dict | None = None + masked_credentials: dict[str, Any] | None = None + original_credentials: dict[str, Any] | None = None is_team_authorization: bool = False allow_delete: bool = True plugin_id: str | None = Field(default="", description="The plugin id of the datasource") diff --git a/api/core/datasource/utils/message_transformer.py b/api/core/datasource/utils/message_transformer.py index c012e128f4..6a3f9e684a 100644 --- a/api/core/datasource/utils/message_transformer.py +++ b/api/core/datasource/utils/message_transformer.py @@ -2,11 +2,10 @@ import logging from collections.abc import Generator from mimetypes import guess_extension, guess_type -from graphon.file import File, FileTransferMethod, FileType - from core.datasource.entities.datasource_entities import DatasourceMessage from core.tools.tool_file_manager import ToolFileManager from core.workflow.file_reference import parse_file_reference +from graphon.file import File, FileTransferMethod, FileType from models.tools import ToolFile logger = logging.getLogger(__name__) diff --git a/api/core/entities/execution_extra_content.py b/api/core/entities/execution_extra_content.py index d304c982cd..04ae193396 100644 --- a/api/core/entities/execution_extra_content.py +++ b/api/core/entities/execution_extra_content.py @@ -3,9 +3,9 @@ from __future__ import annotations from collections.abc import Mapping, Sequence from typing import Any, TypeAlias -from graphon.nodes.human_input.entities import FormInput, UserAction from pydantic import BaseModel, ConfigDict, Field +from graphon.nodes.human_input.entities import FormInput, UserAction from models.execution_extra_content import ExecutionContentType diff --git a/api/core/entities/knowledge_entities.py b/api/core/entities/knowledge_entities.py index b1ba3c3e2a..a13938f3fb 100644 --- a/api/core/entities/knowledge_entities.py +++ b/api/core/entities/knowledge_entities.py @@ -1,3 +1,5 @@ +from typing import Any + from pydantic import BaseModel, Field, field_validator @@ -37,7 +39,7 @@ class PipelineDocument(BaseModel): id: str position: int data_source_type: str - data_source_info: dict | None = None + data_source_info: dict[str, Any] | None = None name: str indexing_status: str error: str | None = None diff --git a/api/core/entities/mcp_provider.py b/api/core/entities/mcp_provider.py index a440829b46..bfa4f56915 100644 --- a/api/core/entities/mcp_provider.py +++ b/api/core/entities/mcp_provider.py @@ -6,7 +6,6 @@ from enum import StrEnum from typing import TYPE_CHECKING, Any from urllib.parse import urlparse -from graphon.file import helpers as file_helpers from pydantic import BaseModel from configs import dify_config @@ -16,6 +15,7 @@ from core.helper.provider_cache import NoOpProviderCredentialCache from core.mcp.types import OAuthClientInformation, OAuthClientMetadata, OAuthTokens from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolProviderType +from graphon.file import helpers as file_helpers if TYPE_CHECKING: from models.tools 
import MCPToolProvider diff --git a/api/core/entities/model_entities.py b/api/core/entities/model_entities.py index 84d95c38c6..e99a131500 100644 --- a/api/core/entities/model_entities.py +++ b/api/core/entities/model_entities.py @@ -1,10 +1,11 @@ from collections.abc import Sequence from enum import StrEnum, auto +from pydantic import BaseModel, ConfigDict + from graphon.model_runtime.entities.common_entities import I18nObject from graphon.model_runtime.entities.model_entities import ModelType, ProviderModel from graphon.model_runtime.entities.provider_entities import ProviderEntity -from pydantic import BaseModel, ConfigDict class ModelStatus(StrEnum): diff --git a/api/core/entities/provider_configuration.py b/api/core/entities/provider_configuration.py index f3b2c31465..1ab66cceee 100644 --- a/api/core/entities/provider_configuration.py +++ b/api/core/entities/provider_configuration.py @@ -6,17 +6,8 @@ import re from collections import defaultdict from collections.abc import Iterator, Sequence from json import JSONDecodeError +from typing import Any -from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType -from graphon.model_runtime.entities.provider_entities import ( - ConfigurateMethod, - CredentialFormSchema, - FormType, - ProviderEntity, -) -from graphon.model_runtime.model_providers.__base.ai_model import AIModel -from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory -from graphon.model_runtime.runtime import ModelRuntime from pydantic import BaseModel, ConfigDict, Field, PrivateAttr, model_validator from sqlalchemy import func, select from sqlalchemy.orm import Session @@ -33,6 +24,16 @@ from core.entities.provider_entities import ( from core.helper import encrypter from core.helper.model_provider_cache import ProviderCredentialsCache, ProviderCredentialsCacheType from core.plugin.impl.model_runtime_factory import create_plugin_model_provider_factory +from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType +from graphon.model_runtime.entities.provider_entities import ( + ConfigurateMethod, + CredentialFormSchema, + FormType, + ProviderEntity, +) +from graphon.model_runtime.model_providers.__base.ai_model import AIModel +from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory +from graphon.model_runtime.runtime import ModelRuntime from libs.datetime_utils import naive_utc_now from models.engine import db from models.enums import CredentialSourceType @@ -111,7 +112,7 @@ class ProviderConfiguration(BaseModel): return ModelProviderFactory(model_runtime=self._bound_model_runtime) return create_plugin_model_provider_factory(tenant_id=self.tenant_id) - def get_current_credentials(self, model_type: ModelType, model: str) -> dict | None: + def get_current_credentials(self, model_type: ModelType, model: str) -> dict[str, Any] | None: """ Get current credentials. @@ -233,7 +234,7 @@ class ProviderConfiguration(BaseModel): return session.execute(stmt).scalar_one_or_none() - def _get_specific_provider_credential(self, credential_id: str) -> dict | None: + def _get_specific_provider_credential(self, credential_id: str) -> dict[str, Any] | None: """ Get a specific provider credential by ID. 
:param credential_id: Credential ID @@ -297,7 +298,7 @@ class ProviderConfiguration(BaseModel): stmt = stmt.where(ProviderCredential.id != exclude_id) return session.execute(stmt).scalar_one_or_none() is not None - def get_provider_credential(self, credential_id: str | None = None) -> dict | None: + def get_provider_credential(self, credential_id: str | None = None) -> dict[str, Any] | None: """ Get provider credentials. @@ -317,7 +318,9 @@ class ProviderConfiguration(BaseModel): else [], ) - def validate_provider_credentials(self, credentials: dict, credential_id: str = "", session: Session | None = None): + def validate_provider_credentials( + self, credentials: dict[str, Any], credential_id: str = "", session: Session | None = None + ): """ Validate custom credentials. :param credentials: provider credentials @@ -447,7 +450,7 @@ class ProviderConfiguration(BaseModel): provider_names.append(model_provider_id.provider_name) return provider_names - def create_provider_credential(self, credentials: dict, credential_name: str | None): + def create_provider_credential(self, credentials: dict[str, Any], credential_name: str | None): """ Add custom provider credentials. :param credentials: provider credentials @@ -515,7 +518,7 @@ class ProviderConfiguration(BaseModel): def update_provider_credential( self, - credentials: dict, + credentials: dict[str, Any], credential_id: str, credential_name: str | None, ): @@ -760,7 +763,7 @@ class ProviderConfiguration(BaseModel): def _get_specific_custom_model_credential( self, model_type: ModelType, model: str, credential_id: str - ) -> dict | None: + ) -> dict[str, Any] | None: """ Get a specific provider credential by ID. :param credential_id: Credential ID @@ -832,7 +835,9 @@ class ProviderConfiguration(BaseModel): stmt = stmt.where(ProviderModelCredential.id != exclude_id) return session.execute(stmt).scalar_one_or_none() is not None - def get_custom_model_credential(self, model_type: ModelType, model: str, credential_id: str | None) -> dict | None: + def get_custom_model_credential( + self, model_type: ModelType, model: str, credential_id: str | None + ) -> dict[str, Any] | None: """ Get custom model credentials. @@ -872,7 +877,7 @@ class ProviderConfiguration(BaseModel): self, model_type: ModelType, model: str, - credentials: dict, + credentials: dict[str, Any], credential_id: str = "", session: Session | None = None, ): @@ -939,7 +944,7 @@ class ProviderConfiguration(BaseModel): return _validate(new_session) def create_custom_model_credential( - self, model_type: ModelType, model: str, credentials: dict, credential_name: str | None + self, model_type: ModelType, model: str, credentials: dict[str, Any], credential_name: str | None ) -> None: """ Create a custom model credential. @@ -1002,7 +1007,12 @@ class ProviderConfiguration(BaseModel): raise def update_custom_model_credential( - self, model_type: ModelType, model: str, credentials: dict, credential_name: str | None, credential_id: str + self, + model_type: ModelType, + model: str, + credentials: dict[str, Any], + credential_name: str | None, + credential_id: str, ) -> None: """ Update a custom model credential. 
@@ -1412,7 +1422,9 @@ class ProviderConfiguration(BaseModel): # Get model instance of LLM return model_provider_factory.get_model_type_instance(provider=self.provider.provider, model_type=model_type) - def get_model_schema(self, model_type: ModelType, model: str, credentials: dict | None) -> AIModelEntity | None: + def get_model_schema( + self, model_type: ModelType, model: str, credentials: dict[str, Any] | None + ) -> AIModelEntity | None: """ Get model schema """ @@ -1471,7 +1483,7 @@ class ProviderConfiguration(BaseModel): return secret_input_form_variables - def obfuscated_credentials(self, credentials: dict, credential_form_schemas: list[CredentialFormSchema]): + def obfuscated_credentials(self, credentials: dict[str, Any], credential_form_schemas: list[CredentialFormSchema]): """ Obfuscated credentials. diff --git a/api/core/entities/provider_entities.py b/api/core/entities/provider_entities.py index 2c8767a32b..72b29c2277 100644 --- a/api/core/entities/provider_entities.py +++ b/api/core/entities/provider_entities.py @@ -1,9 +1,8 @@ from __future__ import annotations from enum import StrEnum, auto -from typing import Union +from typing import Any, Union -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel, ConfigDict, Field from core.entities.parameter_entities import ( @@ -13,6 +12,7 @@ from core.entities.parameter_entities import ( ToolSelectorScope, ) from core.tools.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import ModelType class ProviderQuotaType(StrEnum): @@ -88,7 +88,7 @@ class SystemConfiguration(BaseModel): enabled: bool current_quota_type: ProviderQuotaType | None = None quota_configurations: list[QuotaConfiguration] = [] - credentials: dict | None = None + credentials: dict[str, Any] | None = None class CustomProviderConfiguration(BaseModel): @@ -96,7 +96,7 @@ class CustomProviderConfiguration(BaseModel): Model class for provider custom configuration. """ - credentials: dict + credentials: dict[str, Any] current_credential_id: str | None = None current_credential_name: str | None = None available_credentials: list[CredentialConfiguration] = [] @@ -109,7 +109,7 @@ class CustomModelConfiguration(BaseModel): model: str model_type: ModelType - credentials: dict | None + credentials: dict[str, Any] | None current_credential_id: str | None = None current_credential_name: str | None = None available_model_credentials: list[CredentialConfiguration] = [] @@ -145,7 +145,7 @@ class ModelLoadBalancingConfiguration(BaseModel): id: str name: str - credentials: dict + credentials: dict[str, Any] credential_source_type: str | None = None credential_id: str | None = None diff --git a/api/core/extension/api_based_extension_requestor.py b/api/core/extension/api_based_extension_requestor.py index f9e6099049..01139d07e2 100644 --- a/api/core/extension/api_based_extension_requestor.py +++ b/api/core/extension/api_based_extension_requestor.py @@ -1,4 +1,4 @@ -from typing import cast +from typing import Any, cast import httpx @@ -14,7 +14,7 @@ class APIBasedExtensionRequestor: self.api_endpoint = api_endpoint self.api_key = api_key - def request(self, point: APIBasedExtensionPoint, params: dict): + def request(self, point: APIBasedExtensionPoint, params: dict[str, Any]) -> dict[str, Any]: """ Request the api. 
@@ -49,4 +49,4 @@ class APIBasedExtensionRequestor: if response.status_code != 200: raise ValueError(f"request error, status_code: {response.status_code}, content: {response.text[:100]}") - return cast(dict, response.json()) + return cast(dict[str, Any], response.json()) diff --git a/api/core/extension/extensible.py b/api/core/extension/extensible.py index b79dbeb7e0..c08e319aac 100644 --- a/api/core/extension/extensible.py +++ b/api/core/extension/extensible.py @@ -21,8 +21,8 @@ class ExtensionModule(StrEnum): class ModuleExtension(BaseModel): extension_class: Any | None = None name: str - label: dict | None = None - form_schema: list | None = None + label: dict[str, Any] | None = None + form_schema: list[dict[str, Any]] | None = None builtin: bool = True position: int | None = None diff --git a/api/core/external_data_tool/factory.py b/api/core/external_data_tool/factory.py index 6c542d681b..f404aa7286 100644 --- a/api/core/external_data_tool/factory.py +++ b/api/core/external_data_tool/factory.py @@ -6,14 +6,14 @@ from extensions.ext_code_based_extension import code_based_extension class ExternalDataToolFactory: - def __init__(self, name: str, tenant_id: str, app_id: str, variable: str, config: dict): + def __init__(self, name: str, tenant_id: str, app_id: str, variable: str, config: dict[str, Any]): extension_class = code_based_extension.extension_class(ExtensionModule.EXTERNAL_DATA_TOOL, name) self.__extension_instance = extension_class( tenant_id=tenant_id, app_id=app_id, variable=variable, config=config ) @classmethod - def validate_config(cls, name: str, tenant_id: str, config: dict): + def validate_config(cls, name: str, tenant_id: str, config: dict[str, Any]) -> None: """ Validate the incoming form config data. diff --git a/api/core/helper/code_executor/code_executor.py b/api/core/helper/code_executor/code_executor.py index 35bfcfb6a5..951e065b2c 100644 --- a/api/core/helper/code_executor/code_executor.py +++ b/api/core/helper/code_executor/code_executor.py @@ -4,7 +4,6 @@ from threading import Lock from typing import Any import httpx -from graphon.nodes.code.entities import CodeLanguage from pydantic import BaseModel from yarl import URL @@ -14,6 +13,7 @@ from core.helper.code_executor.jinja2.jinja2_transformer import Jinja2TemplateTr from core.helper.code_executor.python3.python3_transformer import Python3TemplateTransformer from core.helper.code_executor.template_transformer import TemplateTransformer from core.helper.http_client_pooling import get_pooled_http_client +from graphon.nodes.code.entities import CodeLanguage logger = logging.getLogger(__name__) code_execution_endpoint_url = URL(str(dify_config.CODE_EXECUTION_ENDPOINT)) diff --git a/api/core/helper/moderation.py b/api/core/helper/moderation.py index a1e782a094..dc37a36943 100644 --- a/api/core/helper/moderation.py +++ b/api/core/helper/moderation.py @@ -2,14 +2,13 @@ import logging import secrets from typing import cast -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.invoke import InvokeBadRequestError -from graphon.model_runtime.model_providers.__base.moderation_model import ModerationModel - from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.entities import DEFAULT_PLUGIN_ID from core.plugin.impl.model_runtime_factory import create_plugin_model_provider_factory from extensions.ext_hosting_provider import hosting_configuration +from graphon.model_runtime.entities.model_entities import ModelType +from 
graphon.model_runtime.errors.invoke import InvokeBadRequestError +from graphon.model_runtime.model_providers.__base.moderation_model import ModerationModel from models.provider import ProviderType logger = logging.getLogger(__name__) diff --git a/api/core/hosting_configuration.py b/api/core/hosting_configuration.py index 60f5434bc1..8bcb899b23 100644 --- a/api/core/hosting_configuration.py +++ b/api/core/hosting_configuration.py @@ -1,10 +1,12 @@ +from typing import Any + from flask import Flask -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel from configs import dify_config from core.entities import DEFAULT_PLUGIN_ID from core.entities.provider_entities import ProviderQuotaType, QuotaUnit, RestrictModel +from graphon.model_runtime.entities.model_entities import ModelType class HostingQuota(BaseModel): @@ -28,7 +30,7 @@ class FreeHostingQuota(HostingQuota): class HostingProvider(BaseModel): enabled: bool = False - credentials: dict | None = None + credentials: dict[str, Any] | None = None quota_unit: QuotaUnit | None = None quotas: list[HostingQuota] = [] diff --git a/api/core/indexing_runner.py b/api/core/indexing_runner.py index b8d5ca2f50..b6e33396d1 100644 --- a/api/core/indexing_runner.py +++ b/api/core/indexing_runner.py @@ -9,7 +9,6 @@ from collections.abc import Mapping from typing import Any from flask import Flask, current_app -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import delete, func, select, update from sqlalchemy.orm.exc import ObjectDeletedError @@ -35,6 +34,7 @@ from core.tools.utils.web_reader_tool import get_image_upload_file_ids from extensions.ext_database import db from extensions.ext_redis import redis_client from extensions.ext_storage import storage +from graphon.model_runtime.entities.model_entities import ModelType from libs import helper from libs.datetime_utils import naive_utc_now from models import Account @@ -735,7 +735,9 @@ class IndexingRunner: @staticmethod def _update_document_index_status( - document_id: str, after_indexing_status: IndexingStatus, extra_update_params: dict | None = None + document_id: str, + after_indexing_status: IndexingStatus, + extra_update_params: Mapping[Any, Any] | None = None, ): """ Update the document indexing status. @@ -762,7 +764,7 @@ class IndexingRunner: db.session.commit() @staticmethod - def _update_segments_by_document(dataset_document_id: str, update_params: dict): + def _update_segments_by_document(dataset_document_id: str, update_params: Mapping[Any, Any]): """ Update the document segment by document id. 
""" diff --git a/api/core/llm_generator/llm_generator.py b/api/core/llm_generator/llm_generator.py index aa258c9f89..348526b0ef 100644 --- a/api/core/llm_generator/llm_generator.py +++ b/api/core/llm_generator/llm_generator.py @@ -2,14 +2,9 @@ import json import logging import re from collections.abc import Sequence -from typing import Protocol, TypedDict, cast +from typing import Any, Protocol, TypedDict, cast import json_repair -from graphon.enums import WorkflowNodeExecutionMetadataKey -from graphon.model_runtime.entities.llm_entities import LLMResult -from graphon.model_runtime.entities.message_entities import PromptMessage, SystemPromptMessage, UserPromptMessage -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from sqlalchemy import select from core.app.app_config.entities import ModelConfig @@ -35,6 +30,11 @@ from core.ops.utils import measure_time from core.prompt.utils.prompt_template_parser import PromptTemplateParser from extensions.ext_database import db from extensions.ext_storage import storage +from graphon.enums import WorkflowNodeExecutionMetadataKey +from graphon.model_runtime.entities.llm_entities import LLMResult +from graphon.model_runtime.entities.message_entities import PromptMessage, SystemPromptMessage, UserPromptMessage +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from models import App, Message, WorkflowNodeExecutionModel from models.workflow import Workflow @@ -533,7 +533,7 @@ class LLMGenerator: def __instruction_modify_common( tenant_id: str, model_config: ModelConfig, - last_run: dict | None, + last_run: dict[str, Any] | None, current: str | None, error_message: str | None, instruction: str, diff --git a/api/core/llm_generator/output_parser/structured_output.py b/api/core/llm_generator/output_parser/structured_output.py index a1710f11ac..d2e375626f 100644 --- a/api/core/llm_generator/output_parser/structured_output.py +++ b/api/core/llm_generator/output_parser/structured_output.py @@ -5,6 +5,11 @@ from enum import StrEnum from typing import Any, Literal, cast, overload import json_repair +from pydantic import TypeAdapter, ValidationError + +from core.llm_generator.output_parser.errors import OutputParserError +from core.llm_generator.prompts import STRUCTURED_OUTPUT_PROMPT +from core.model_manager import ModelInstance from graphon.model_runtime.callbacks.base_callback import Callback from graphon.model_runtime.entities.llm_entities import ( LLMResult, @@ -21,11 +26,6 @@ from graphon.model_runtime.entities.message_entities import ( TextPromptMessageContent, ) from graphon.model_runtime.entities.model_entities import AIModelEntity, ParameterRule -from pydantic import TypeAdapter, ValidationError - -from core.llm_generator.output_parser.errors import OutputParserError -from core.llm_generator.prompts import STRUCTURED_OUTPUT_PROMPT -from core.model_manager import ModelInstance class ResponseFormat(StrEnum): @@ -200,9 +200,9 @@ def _handle_native_json_schema( provider: str, model_schema: AIModelEntity, structured_output_schema: Mapping, - model_parameters: dict, + model_parameters: dict[str, Any], rules: list[ParameterRule], -): +) -> dict[str, Any]: """ Handle structured output for models with native JSON schema support. 
@@ -224,7 +224,7 @@ def _handle_native_json_schema( return model_parameters -def _set_response_format(model_parameters: dict, rules: list): +def _set_response_format(model_parameters: dict[str, Any], rules: list[ParameterRule]) -> None: """ Set the appropriate response format parameter based on model rules. @@ -326,7 +326,7 @@ def _prepare_schema_for_model(provider: str, model_schema: AIModelEntity, schema return {"schema": processed_schema, "name": "llm_response"} -def remove_additional_properties(schema: dict): +def remove_additional_properties(schema: dict[str, Any]) -> None: """ Remove additionalProperties fields from JSON schema. Used for models like Gemini that don't support this property. @@ -349,7 +349,7 @@ def remove_additional_properties(schema: dict): remove_additional_properties(item) -def convert_boolean_to_string(schema: dict): +def convert_boolean_to_string(schema: dict[str, Any]) -> None: """ Convert boolean type specifications to string in JSON schema. diff --git a/api/core/mcp/server/streamable_http.py b/api/core/mcp/server/streamable_http.py index 72171d1536..884610ca82 100644 --- a/api/core/mcp/server/streamable_http.py +++ b/api/core/mcp/server/streamable_http.py @@ -3,12 +3,11 @@ import logging from collections.abc import Mapping from typing import Any, NotRequired, TypedDict, cast -from graphon.variables.input_entities import VariableEntity, VariableEntityType - from configs import dify_config from core.app.entities.app_invoke_entities import InvokeFrom from core.app.features.rate_limiting.rate_limit import RateLimitGenerator from core.mcp import types as mcp_types +from graphon.variables.input_entities import VariableEntity, VariableEntityType from models.model import App, AppMCPServer, AppMode, EndUser from services.app_generate_service import AppGenerateService diff --git a/api/core/mcp/utils.py b/api/core/mcp/utils.py index 7e35044176..7b5a7635f1 100644 --- a/api/core/mcp/utils.py +++ b/api/core/mcp/utils.py @@ -4,11 +4,11 @@ from contextlib import AbstractContextManager import httpx import httpx_sse -from graphon.model_runtime.utils.encoders import jsonable_encoder from httpx_sse import connect_sse from configs import dify_config from core.mcp.types import ErrorData, JSONRPCError +from graphon.model_runtime.utils.encoders import jsonable_encoder HTTP_REQUEST_NODE_SSL_VERIFY = dify_config.HTTP_REQUEST_NODE_SSL_VERIFY diff --git a/api/core/memory/token_buffer_memory.py b/api/core/memory/token_buffer_memory.py index 5809d6f74a..d840ee213c 100644 --- a/api/core/memory/token_buffer_memory.py +++ b/api/core/memory/token_buffer_memory.py @@ -1,5 +1,14 @@ from collections.abc import Sequence +from sqlalchemy import select +from sqlalchemy.orm import sessionmaker + +from core.app.app_config.features.file_upload.manager import FileUploadConfigManager +from core.app.file_access import DatabaseFileAccessController +from core.model_manager import ModelInstance +from core.prompt.utils.extract_thread_messages import extract_thread_messages +from extensions.ext_database import db +from factories import file_factory from graphon.file import file_manager from graphon.model_runtime.entities import ( AssistantPromptMessage, @@ -10,15 +19,6 @@ from graphon.model_runtime.entities import ( UserPromptMessage, ) from graphon.model_runtime.entities.message_entities import PromptMessageContentUnionTypes -from sqlalchemy import select -from sqlalchemy.orm import sessionmaker - -from core.app.app_config.features.file_upload.manager import FileUploadConfigManager -from core.app.file_access 
import DatabaseFileAccessController -from core.model_manager import ModelInstance -from core.prompt.utils.extract_thread_messages import extract_thread_messages -from extensions.ext_database import db -from factories import file_factory from models.model import AppMode, Conversation, Message, MessageFile from models.workflow import Workflow from repositories.api_workflow_run_repository import APIWorkflowRunRepository diff --git a/api/core/model_manager.py b/api/core/model_manager.py index 86d042de3e..d8d8dfedd8 100644 --- a/api/core/model_manager.py +++ b/api/core/model_manager.py @@ -2,6 +2,15 @@ import logging from collections.abc import Callable, Generator, Iterable, Mapping, Sequence from typing import IO, Any, Literal, Optional, Union, cast, overload +from configs import dify_config +from core.entities import PluginCredentialType +from core.entities.embedding_type import EmbeddingInputType +from core.entities.provider_configuration import ProviderConfiguration, ProviderModelBundle +from core.entities.provider_entities import ModelLoadBalancingConfiguration +from core.errors.error import ProviderTokenNotInitError +from core.plugin.impl.model_runtime_factory import create_plugin_provider_manager +from core.provider_manager import ProviderManager +from extensions.ext_redis import redis_client from graphon.model_runtime.callbacks.base_callback import Callback from graphon.model_runtime.entities.llm_entities import LLMResult from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool @@ -15,16 +24,6 @@ from graphon.model_runtime.model_providers.__base.rerank_model import RerankMode from graphon.model_runtime.model_providers.__base.speech2text_model import Speech2TextModel from graphon.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel from graphon.model_runtime.model_providers.__base.tts_model import TTSModel - -from configs import dify_config -from core.entities import PluginCredentialType -from core.entities.embedding_type import EmbeddingInputType -from core.entities.provider_configuration import ProviderConfiguration, ProviderModelBundle -from core.entities.provider_entities import ModelLoadBalancingConfiguration -from core.errors.error import ProviderTokenNotInitError -from core.plugin.impl.model_runtime_factory import create_plugin_provider_manager -from core.provider_manager import ProviderManager -from extensions.ext_redis import redis_client from models.provider import ProviderType logger = logging.getLogger(__name__) @@ -77,7 +76,7 @@ class ModelInstance: @staticmethod def _get_load_balancing_manager( - configuration: ProviderConfiguration, model_type: ModelType, model: str, credentials: dict + configuration: ProviderConfiguration, model_type: ModelType, model: str, credentials: dict[str, Any] ) -> Optional["LBModelManager"]: """ Get load balancing model credentials @@ -115,7 +114,7 @@ class ModelInstance: def invoke_llm( self, prompt_messages: Sequence[PromptMessage], - model_parameters: dict | None = None, + model_parameters: dict[str, Any] | None = None, tools: Sequence[PromptMessageTool] | None = None, stop: list[str] | None = None, stream: Literal[True] = True, @@ -126,7 +125,7 @@ class ModelInstance: def invoke_llm( self, prompt_messages: list[PromptMessage], - model_parameters: dict | None = None, + model_parameters: dict[str, Any] | None = None, tools: Sequence[PromptMessageTool] | None = None, stop: list[str] | None = None, stream: Literal[False] = False, @@ -137,7 +136,7 @@ class ModelInstance: def 
invoke_llm( self, prompt_messages: list[PromptMessage], - model_parameters: dict | None = None, + model_parameters: dict[str, Any] | None = None, tools: Sequence[PromptMessageTool] | None = None, stop: list[str] | None = None, stream: bool = True, @@ -147,7 +146,7 @@ class ModelInstance: def invoke_llm( self, prompt_messages: Sequence[PromptMessage], - model_parameters: dict | None = None, + model_parameters: dict[str, Any] | None = None, tools: Sequence[PromptMessageTool] | None = None, stop: Sequence[str] | None = None, stream: bool = True, @@ -528,7 +527,7 @@ class LBModelManager: model_type: ModelType, model: str, load_balancing_configs: list[ModelLoadBalancingConfiguration], - managed_credentials: dict | None = None, + managed_credentials: dict[str, Any] | None = None, ): """ Load balancing model manager diff --git a/api/core/moderation/api/api.py b/api/core/moderation/api/api.py index 2d72b17a04..28165592fc 100644 --- a/api/core/moderation/api/api.py +++ b/api/core/moderation/api/api.py @@ -1,3 +1,5 @@ +from typing import Any + from pydantic import BaseModel, Field from sqlalchemy import select @@ -10,7 +12,7 @@ from models.api_based_extension import APIBasedExtension class ModerationInputParams(BaseModel): app_id: str = "" - inputs: dict = Field(default_factory=dict) + inputs: dict[str, Any] = Field(default_factory=dict) query: str = "" @@ -23,7 +25,7 @@ class ApiModeration(Moderation): name: str = "api" @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): """ Validate the incoming form config data. @@ -41,7 +43,7 @@ class ApiModeration(Moderation): if not extension: raise ValueError("API-based Extension not found. Please check it again.") - def moderation_for_inputs(self, inputs: dict, query: str = "") -> ModerationInputsResult: + def moderation_for_inputs(self, inputs: dict[str, Any], query: str = "") -> ModerationInputsResult: flagged = False preset_response = "" if self.config is None: @@ -73,7 +75,7 @@ class ApiModeration(Moderation): flagged=flagged, action=ModerationAction.DIRECT_OUTPUT, preset_response=preset_response ) - def _get_config_by_requestor(self, extension_point: APIBasedExtensionPoint, params: dict): + def _get_config_by_requestor(self, extension_point: APIBasedExtensionPoint, params: dict[str, Any]): if self.config is None: raise ValueError("The config is not set.") extension = self._get_api_based_extension(self.tenant_id, self.config.get("api_based_extension_id", "")) diff --git a/api/core/moderation/base.py b/api/core/moderation/base.py index 31dd0d5568..e090ee89ad 100644 --- a/api/core/moderation/base.py +++ b/api/core/moderation/base.py @@ -1,5 +1,6 @@ from abc import ABC, abstractmethod from enum import StrEnum, auto +from typing import Any from pydantic import BaseModel, Field @@ -15,7 +16,7 @@ class ModerationInputsResult(BaseModel): flagged: bool = False action: ModerationAction preset_response: str = "" - inputs: dict = Field(default_factory=dict) + inputs: dict[str, Any] = Field(default_factory=dict) query: str = "" @@ -33,13 +34,13 @@ class Moderation(Extensible, ABC): module: ExtensionModule = ExtensionModule.MODERATION - def __init__(self, app_id: str, tenant_id: str, config: dict | None = None): + def __init__(self, app_id: str, tenant_id: str, config: dict[str, Any] | None = None): super().__init__(tenant_id, config) self.app_id = app_id @classmethod @abstractmethod - def validate_config(cls, tenant_id: str, config: dict) -> None: + def validate_config(cls, 
tenant_id: str, config: dict[str, Any]) -> None: """ Validate the incoming form config data. @@ -50,7 +51,7 @@ class Moderation(Extensible, ABC): raise NotImplementedError @abstractmethod - def moderation_for_inputs(self, inputs: dict, query: str = "") -> ModerationInputsResult: + def moderation_for_inputs(self, inputs: dict[str, Any], query: str = "") -> ModerationInputsResult: """ Moderation for inputs. After the user inputs, this method will be called to perform sensitive content review @@ -75,7 +76,7 @@ class Moderation(Extensible, ABC): raise NotImplementedError @classmethod - def _validate_inputs_and_outputs_config(cls, config: dict, is_preset_response_required: bool): + def _validate_inputs_and_outputs_config(cls, config: dict[str, Any], is_preset_response_required: bool): # inputs_config inputs_config = config.get("inputs_config") if not isinstance(inputs_config, dict): diff --git a/api/core/moderation/factory.py b/api/core/moderation/factory.py index c2c8be6d6d..c22306ac94 100644 --- a/api/core/moderation/factory.py +++ b/api/core/moderation/factory.py @@ -1,3 +1,5 @@ +from typing import Any + from core.extension.extensible import ExtensionModule from core.moderation.base import Moderation, ModerationInputsResult, ModerationOutputsResult from extensions.ext_code_based_extension import code_based_extension @@ -6,12 +8,12 @@ from extensions.ext_code_based_extension import code_based_extension class ModerationFactory: __extension_instance: Moderation - def __init__(self, name: str, app_id: str, tenant_id: str, config: dict): + def __init__(self, name: str, app_id: str, tenant_id: str, config: dict[str, Any]): extension_class = code_based_extension.extension_class(ExtensionModule.MODERATION, name) self.__extension_instance = extension_class(app_id, tenant_id, config) @classmethod - def validate_config(cls, name: str, tenant_id: str, config: dict): + def validate_config(cls, name: str, tenant_id: str, config: dict[str, Any]): """ Validate the incoming form config data. @@ -24,7 +26,7 @@ class ModerationFactory: # FIXME: mypy error, try to fix it instead of using type: ignore extension_class.validate_config(tenant_id, config) # type: ignore - def moderation_for_inputs(self, inputs: dict, query: str = "") -> ModerationInputsResult: + def moderation_for_inputs(self, inputs: dict[str, Any], query: str = "") -> ModerationInputsResult: """ Moderation for inputs. After the user inputs, this method will be called to perform sensitive content review diff --git a/api/core/moderation/keywords/keywords.py b/api/core/moderation/keywords/keywords.py index 8d8d153743..7d80d3a53c 100644 --- a/api/core/moderation/keywords/keywords.py +++ b/api/core/moderation/keywords/keywords.py @@ -8,7 +8,7 @@ class KeywordsModeration(Moderation): name: str = "keywords" @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): """ Validate the incoming form config data. 
@@ -28,7 +28,7 @@ class KeywordsModeration(Moderation): if len(keywords_row_len) > 100: raise ValueError("the number of rows for the keywords must be less than 100") - def moderation_for_inputs(self, inputs: dict, query: str = "") -> ModerationInputsResult: + def moderation_for_inputs(self, inputs: dict[str, Any], query: str = "") -> ModerationInputsResult: flagged = False preset_response = "" if self.config is None: @@ -66,7 +66,7 @@ class KeywordsModeration(Moderation): flagged=flagged, action=ModerationAction.DIRECT_OUTPUT, preset_response=preset_response ) - def _is_violated(self, inputs: dict, keywords_list: list) -> bool: + def _is_violated(self, inputs: dict[str, Any], keywords_list: list[str]) -> bool: return any(self._check_keywords_in_value(keywords_list, value) for value in inputs.values()) def _check_keywords_in_value(self, keywords_list: Sequence[str], value: Any) -> bool: diff --git a/api/core/moderation/openai_moderation/openai_moderation.py b/api/core/moderation/openai_moderation/openai_moderation.py index dd038c77f1..6e6e94502c 100644 --- a/api/core/moderation/openai_moderation/openai_moderation.py +++ b/api/core/moderation/openai_moderation/openai_moderation.py @@ -1,14 +1,15 @@ -from graphon.model_runtime.entities.model_entities import ModelType +from typing import Any from core.model_manager import ModelManager from core.moderation.base import Moderation, ModerationAction, ModerationInputsResult, ModerationOutputsResult +from graphon.model_runtime.entities.model_entities import ModelType class OpenAIModeration(Moderation): name: str = "openai_moderation" @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): """ Validate the incoming form config data. 
@@ -18,7 +19,7 @@ class OpenAIModeration(Moderation): """ cls._validate_inputs_and_outputs_config(config, True) - def moderation_for_inputs(self, inputs: dict, query: str = "") -> ModerationInputsResult: + def moderation_for_inputs(self, inputs: dict[str, Any], query: str = "") -> ModerationInputsResult: flagged = False preset_response = "" if self.config is None: @@ -49,7 +50,7 @@ class OpenAIModeration(Moderation): flagged=flagged, action=ModerationAction.DIRECT_OUTPUT, preset_response=preset_response ) - def _is_violated(self, inputs: dict): + def _is_violated(self, inputs: dict[str, Any]): text = "\n".join(str(inputs.values())) model_manager = ModelManager.for_tenant(tenant_id=self.tenant_id) model_instance = model_manager.get_model_instance( diff --git a/api/core/ops/aliyun_trace/aliyun_trace.py b/api/core/ops/aliyun_trace/aliyun_trace.py index 70aaf2a07b..76e81242f4 100644 --- a/api/core/ops/aliyun_trace/aliyun_trace.py +++ b/api/core/ops/aliyun_trace/aliyun_trace.py @@ -1,8 +1,6 @@ import logging from collections.abc import Sequence -from graphon.entities import WorkflowNodeExecution -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from opentelemetry.trace import SpanKind from sqlalchemy.orm import sessionmaker @@ -60,6 +58,8 @@ from core.ops.entities.trace_entity import ( ) from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.entities import WorkflowNodeExecution +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from models import WorkflowNodeExecutionTriggeredFrom logger = logging.getLogger(__name__) diff --git a/api/core/ops/aliyun_trace/utils.py b/api/core/ops/aliyun_trace/utils.py index aa35ac74c2..2e02a186cc 100644 --- a/api/core/ops/aliyun_trace/utils.py +++ b/api/core/ops/aliyun_trace/utils.py @@ -2,8 +2,6 @@ import json from collections.abc import Mapping from typing import Any, TypedDict -from graphon.entities import WorkflowNodeExecution -from graphon.enums import WorkflowNodeExecutionStatus from opentelemetry.trace import Link, Status, StatusCode from core.ops.aliyun_trace.entities.semconv import ( @@ -17,6 +15,8 @@ from core.ops.aliyun_trace.entities.semconv import ( ) from core.rag.models.document import Document from extensions.ext_database import db +from graphon.entities import WorkflowNodeExecution +from graphon.enums import WorkflowNodeExecutionStatus from models import EndUser # Constants diff --git a/api/core/ops/arize_phoenix_trace/arize_phoenix_trace.py b/api/core/ops/arize_phoenix_trace/arize_phoenix_trace.py index 66933cea28..78516e1a22 100644 --- a/api/core/ops/arize_phoenix_trace/arize_phoenix_trace.py +++ b/api/core/ops/arize_phoenix_trace/arize_phoenix_trace.py @@ -6,7 +6,6 @@ from datetime import datetime, timedelta from typing import Any, Union, cast from urllib.parse import urlparse -from graphon.enums import WorkflowNodeExecutionStatus from openinference.semconv.trace import ( MessageAttributes, OpenInferenceMimeTypeValues, @@ -41,6 +40,7 @@ from core.ops.entities.trace_entity import ( from core.ops.utils import JSON_DICT_ADAPTER from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.enums import WorkflowNodeExecutionStatus from models.model import EndUser, MessageFile from models.workflow import WorkflowNodeExecutionTriggeredFrom @@ -778,7 +778,7 @@ class ArizePhoenixDataTrace(BaseTraceInstance): logger.info("[Arize/Phoenix] Failed to construct project URL: %s", str(e), exc_info=True) 
raise ValueError(f"[Arize/Phoenix] Failed to construct project URL: {str(e)}") - def _construct_llm_attributes(self, prompts: dict | list | str | None) -> dict[str, str]: + def _construct_llm_attributes(self, prompts: dict[str, Any] | list[Any] | str | None) -> dict[str, str]: """Construct LLM attributes with passed prompts for Arize/Phoenix.""" attributes: dict[str, str] = {} @@ -797,7 +797,9 @@ class ArizePhoenixDataTrace(BaseTraceInstance): path = f"{SpanAttributes.LLM_INPUT_MESSAGES}.{message_index}.{key}" set_attribute(path, value) - def set_tool_call_attributes(message_index: int, tool_index: int, tool_call: dict | object | None) -> None: + def set_tool_call_attributes( + message_index: int, tool_index: int, tool_call: dict[str, Any] | object | None + ) -> None: """Extract and assign tool call details safely.""" if not tool_call: return diff --git a/api/core/ops/langfuse_trace/langfuse_trace.py b/api/core/ops/langfuse_trace/langfuse_trace.py index d53aa84aed..7eacc2be46 100644 --- a/api/core/ops/langfuse_trace/langfuse_trace.py +++ b/api/core/ops/langfuse_trace/langfuse_trace.py @@ -3,7 +3,6 @@ import os import uuid from datetime import UTC, datetime, timedelta -from graphon.enums import BuiltinNodeTypes from langfuse import Langfuse from langfuse.api import ( CreateGenerationBody, @@ -40,6 +39,7 @@ from core.ops.langfuse_trace.entities.langfuse_trace_entity import ( from core.ops.utils import filter_none_values from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.enums import BuiltinNodeTypes from models import EndUser, WorkflowNodeExecutionTriggeredFrom from models.enums import MessageStatus diff --git a/api/core/ops/langsmith_trace/langsmith_trace.py b/api/core/ops/langsmith_trace/langsmith_trace.py index 490c64af84..d960038f15 100644 --- a/api/core/ops/langsmith_trace/langsmith_trace.py +++ b/api/core/ops/langsmith_trace/langsmith_trace.py @@ -4,7 +4,6 @@ import uuid from datetime import datetime, timedelta from typing import cast -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from langsmith import Client from langsmith.schemas import RunBase from sqlalchemy.orm import sessionmaker @@ -30,6 +29,7 @@ from core.ops.langsmith_trace.entities.langsmith_trace_entity import ( from core.ops.utils import filter_none_values, generate_dotted_order from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from models import EndUser, MessageFile, WorkflowNodeExecutionTriggeredFrom logger = logging.getLogger(__name__) diff --git a/api/core/ops/mlflow_trace/mlflow_trace.py b/api/core/ops/mlflow_trace/mlflow_trace.py index 3d8c1dd038..87fcaeabcc 100644 --- a/api/core/ops/mlflow_trace/mlflow_trace.py +++ b/api/core/ops/mlflow_trace/mlflow_trace.py @@ -4,7 +4,6 @@ from datetime import datetime, timedelta from typing import Any, cast import mlflow -from graphon.enums import BuiltinNodeTypes from mlflow.entities import Document, Span, SpanEvent, SpanStatusCode, SpanType from mlflow.tracing.constant import SpanAttributeKey, TokenUsageKey, TraceMetadataKey from mlflow.tracing.fluent import start_span_no_context, update_current_trace @@ -26,6 +25,7 @@ from core.ops.entities.trace_entity import ( ) from core.ops.utils import JSON_DICT_ADAPTER from extensions.ext_database import db +from graphon.enums import BuiltinNodeTypes from models import EndUser from models.workflow import WorkflowNodeExecutionModel 
@@ -242,7 +242,7 @@ class MLflowDataTrace(BaseTraceInstance): return inputs, attributes - def _parse_knowledge_retrieval_outputs(self, outputs: dict): + def _parse_knowledge_retrieval_outputs(self, outputs: dict[str, Any]): """Parse KR outputs and attributes from KR workflow node""" retrieved = outputs.get("result", []) @@ -319,7 +319,7 @@ class MLflowDataTrace(BaseTraceInstance): end_time_ns=datetime_to_nanoseconds(trace_info.end_time), ) - def _get_message_user_id(self, metadata: dict) -> str | None: + def _get_message_user_id(self, metadata: dict[str, Any]) -> str | None: if (end_user_id := metadata.get("from_end_user_id")) and ( end_user_data := db.session.get(EndUser, end_user_id) ): @@ -468,7 +468,7 @@ class MLflowDataTrace(BaseTraceInstance): } return node_type_mapping.get(node_type, "CHAIN") # type: ignore[arg-type,call-overload] - def _set_trace_metadata(self, span: Span, metadata: dict): + def _set_trace_metadata(self, span: Span, metadata: dict[str, Any]): token = None try: # NB: Set span in context such that we can use update_current_trace() API @@ -490,7 +490,7 @@ class MLflowDataTrace(BaseTraceInstance): return messages return prompts # Fallback to original format - def _parse_single_message(self, item: dict): + def _parse_single_message(self, item: dict[str, Any]): """Postprocess single message format to be standard chat message""" role = item.get("role", "user") msg = {"role": role, "content": item.get("text", "")} diff --git a/api/core/ops/opik_trace/opik_trace.py b/api/core/ops/opik_trace/opik_trace.py index 2215bdeb33..672efe45bd 100644 --- a/api/core/ops/opik_trace/opik_trace.py +++ b/api/core/ops/opik_trace/opik_trace.py @@ -3,9 +3,8 @@ import logging import os import uuid from datetime import datetime, timedelta -from typing import cast +from typing import Any, cast -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from opik import Opik, Trace from opik.id_helpers import uuid4_to_uuid7 from sqlalchemy.orm import sessionmaker @@ -25,6 +24,7 @@ from core.ops.entities.trace_entity import ( ) from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from models import EndUser, MessageFile, WorkflowNodeExecutionTriggeredFrom logger = logging.getLogger(__name__) @@ -436,7 +436,7 @@ class OpikDataTrace(BaseTraceInstance): self.add_span(span_data) - def add_trace(self, opik_trace_data: dict) -> Trace: + def add_trace(self, opik_trace_data: dict[str, Any]) -> Trace: try: trace = self.opik_client.trace(**opik_trace_data) logger.debug("Opik Trace created successfully") @@ -444,7 +444,7 @@ class OpikDataTrace(BaseTraceInstance): except Exception as e: raise ValueError(f"Opik Failed to create trace: {str(e)}") - def add_span(self, opik_span_data: dict): + def add_span(self, opik_span_data: dict[str, Any]): try: self.opik_client.span(**opik_span_data) logger.debug("Opik Span created successfully") diff --git a/api/core/ops/tencent_trace/span_builder.py b/api/core/ops/tencent_trace/span_builder.py index f79095d966..36878dc58f 100644 --- a/api/core/ops/tencent_trace/span_builder.py +++ b/api/core/ops/tencent_trace/span_builder.py @@ -6,8 +6,6 @@ import json import logging from datetime import datetime -from graphon.entities import WorkflowNodeExecution -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from opentelemetry.trace import Status, StatusCode from core.ops.entities.trace_entity import ( @@ -43,6 
+41,8 @@ from core.ops.tencent_trace.entities.semconv import ( from core.ops.tencent_trace.entities.tencent_trace_entity import SpanData from core.ops.tencent_trace.utils import TencentTraceUtils from core.rag.models.document import Document +from graphon.entities import WorkflowNodeExecution +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus logger = logging.getLogger(__name__) diff --git a/api/core/ops/tencent_trace/tencent_trace.py b/api/core/ops/tencent_trace/tencent_trace.py index 84f54d8a5a..d681b9da80 100644 --- a/api/core/ops/tencent_trace/tencent_trace.py +++ b/api/core/ops/tencent_trace/tencent_trace.py @@ -4,10 +4,6 @@ Tencent APM tracing implementation with separated concerns import logging -from graphon.entities.workflow_node_execution import ( - WorkflowNodeExecution, -) -from graphon.nodes import BuiltinNodeTypes from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker @@ -29,6 +25,10 @@ from core.ops.tencent_trace.span_builder import TencentSpanBuilder from core.ops.tencent_trace.utils import TencentTraceUtils from core.repositories import SQLAlchemyWorkflowNodeExecutionRepository from extensions.ext_database import db +from graphon.entities.workflow_node_execution import ( + WorkflowNodeExecution, +) +from graphon.nodes import BuiltinNodeTypes from models import Account, App, TenantAccountJoin, WorkflowNodeExecutionTriggeredFrom logger = logging.getLogger(__name__) diff --git a/api/core/ops/weave_trace/weave_trace.py b/api/core/ops/weave_trace/weave_trace.py index 8d9ba4694d..f79544f1c7 100644 --- a/api/core/ops/weave_trace/weave_trace.py +++ b/api/core/ops/weave_trace/weave_trace.py @@ -6,7 +6,6 @@ from typing import Any, cast import wandb import weave -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from sqlalchemy.orm import sessionmaker from weave.trace_server.trace_server_interface import ( CallEndReq, @@ -33,6 +32,7 @@ from core.ops.entities.trace_entity import ( from core.ops.weave_trace.entities.weave_trace_entity import WeaveTraceModel from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from models import EndUser, MessageFile, WorkflowNodeExecutionTriggeredFrom logger = logging.getLogger(__name__) diff --git a/api/core/plugin/backwards_invocation/model.py b/api/core/plugin/backwards_invocation/model.py index c715b9171c..c92438960a 100644 --- a/api/core/plugin/backwards_invocation/model.py +++ b/api/core/plugin/backwards_invocation/model.py @@ -1,20 +1,7 @@ import tempfile from binascii import hexlify, unhexlify from collections.abc import Generator - -from graphon.model_runtime.entities.llm_entities import ( - LLMResult, - LLMResultChunk, - LLMResultChunkDelta, - LLMResultChunkWithStructuredOutput, - LLMResultWithStructuredOutput, -) -from graphon.model_runtime.entities.message_entities import ( - PromptMessage, - SystemPromptMessage, - UserPromptMessage, -) -from graphon.model_runtime.entities.model_entities import ModelType +from typing import Any from core.app.llm import deduct_llm_quota from core.llm_generator.output_parser.structured_output import invoke_llm_with_structured_output @@ -32,6 +19,19 @@ from core.plugin.entities.request import ( ) from core.tools.entities.tool_entities import ToolProviderType from core.tools.utils.model_invocation_utils import ModelInvocationUtils +from graphon.model_runtime.entities.llm_entities import ( + LLMResult, + 
LLMResultChunk, + LLMResultChunkDelta, + LLMResultChunkWithStructuredOutput, + LLMResultWithStructuredOutput, +) +from graphon.model_runtime.entities.message_entities import ( + PromptMessage, + SystemPromptMessage, + UserPromptMessage, +) +from graphon.model_runtime.entities.model_entities import ModelType from models.account import Tenant @@ -226,7 +226,7 @@ class PluginModelBackwardsInvocation(BaseBackwardsInvocation): # invoke model response = model_instance.invoke_tts(content_text=payload.content_text, voice=payload.voice) - def handle() -> Generator[dict, None, None]: + def handle() -> Generator[dict[str, Any], None, None]: for chunk in response: yield {"result": hexlify(chunk).decode("utf-8")} diff --git a/api/core/plugin/backwards_invocation/node.py b/api/core/plugin/backwards_invocation/node.py index 9478997494..9550e49992 100644 --- a/api/core/plugin/backwards_invocation/node.py +++ b/api/core/plugin/backwards_invocation/node.py @@ -1,3 +1,4 @@ +from core.plugin.backwards_invocation.base import BaseBackwardsInvocation from graphon.enums import BuiltinNodeTypes from graphon.nodes.llm.entities import ModelConfig as LLMModelConfig from graphon.nodes.parameter_extractor.entities import ( @@ -8,8 +9,6 @@ from graphon.nodes.question_classifier.entities import ( ClassConfig, QuestionClassifierNodeData, ) - -from core.plugin.backwards_invocation.base import BaseBackwardsInvocation from services.workflow_service import WorkflowService diff --git a/api/core/plugin/entities/endpoint.py b/api/core/plugin/entities/endpoint.py index e5bca140f8..6419963668 100644 --- a/api/core/plugin/entities/endpoint.py +++ b/api/core/plugin/entities/endpoint.py @@ -1,4 +1,5 @@ from datetime import datetime +from typing import Any from pydantic import BaseModel, Field, model_validator @@ -31,7 +32,7 @@ class EndpointEntity(BasePluginEntity): entity of an endpoint """ - settings: dict + settings: dict[str, Any] tenant_id: str plugin_id: str expired_at: datetime diff --git a/api/core/plugin/entities/marketplace.py b/api/core/plugin/entities/marketplace.py index 2177e8af90..03398873e3 100644 --- a/api/core/plugin/entities/marketplace.py +++ b/api/core/plugin/entities/marketplace.py @@ -1,10 +1,12 @@ -from graphon.model_runtime.entities.provider_entities import ProviderEntity +from typing import Any + from pydantic import BaseModel, Field, computed_field, model_validator from core.plugin.entities.endpoint import EndpointProviderDeclaration from core.plugin.entities.plugin import PluginResourceRequirements from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolProviderEntity +from graphon.model_runtime.entities.provider_entities import ProviderEntity class MarketplacePluginDeclaration(BaseModel): @@ -40,7 +42,7 @@ class MarketplacePluginDeclaration(BaseModel): @model_validator(mode="before") @classmethod - def transform_declaration(cls, data: dict): + def transform_declaration(cls, data: dict[str, Any]) -> dict[str, Any]: if "endpoint" in data and not data["endpoint"]: del data["endpoint"] if "model" in data and not data["model"]: diff --git a/api/core/plugin/entities/plugin.py b/api/core/plugin/entities/plugin.py index b095b4998d..89e0e8881c 100644 --- a/api/core/plugin/entities/plugin.py +++ b/api/core/plugin/entities/plugin.py @@ -3,7 +3,6 @@ from collections.abc import Mapping from enum import StrEnum, auto from typing import Any -from graphon.model_runtime.entities.provider_entities import ProviderEntity from packaging.version import InvalidVersion, 
Version from pydantic import BaseModel, Field, field_validator, model_validator @@ -14,6 +13,7 @@ from core.plugin.entities.endpoint import EndpointProviderDeclaration from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolProviderEntity from core.trigger.entities.entities import TriggerProviderEntity +from graphon.model_runtime.entities.provider_entities import ProviderEntity class PluginInstallationSource(StrEnum): @@ -123,7 +123,7 @@ class PluginDeclaration(BaseModel): @model_validator(mode="before") @classmethod - def validate_category(cls, values: dict): + def validate_category(cls, values: dict[str, Any]) -> dict[str, Any]: # auto detect category if values.get("tool"): values["category"] = PluginCategory.Tool diff --git a/api/core/plugin/entities/plugin_daemon.py b/api/core/plugin/entities/plugin_daemon.py index b57180690e..257638ad77 100644 --- a/api/core/plugin/entities/plugin_daemon.py +++ b/api/core/plugin/entities/plugin_daemon.py @@ -6,8 +6,6 @@ from datetime import datetime from enum import StrEnum from typing import Any -from graphon.model_runtime.entities.model_entities import AIModelEntity -from graphon.model_runtime.entities.provider_entities import ProviderEntity from pydantic import BaseModel, ConfigDict, Field from core.agent.plugin_entities import AgentProviderEntityWithPlugin @@ -18,6 +16,8 @@ from core.plugin.entities.plugin import PluginDeclaration, PluginEntity from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolProviderEntityWithPlugin from core.trigger.entities.entities import TriggerProviderEntity +from graphon.model_runtime.entities.model_entities import AIModelEntity +from graphon.model_runtime.entities.provider_entities import ProviderEntity class PluginDaemonBasicResponse[T: BaseModel | dict | list | bool | str](BaseModel): @@ -73,7 +73,7 @@ class PluginBasicBooleanResponse(BaseModel): """ result: bool - credentials: dict | None = None + credentials: dict[str, Any] | None = None class PluginModelSchemaEntity(BaseModel): diff --git a/api/core/plugin/entities/request.py b/api/core/plugin/entities/request.py index 059f3fa9be..1474883204 100644 --- a/api/core/plugin/entities/request.py +++ b/api/core/plugin/entities/request.py @@ -4,6 +4,10 @@ from collections.abc import Mapping from typing import Any, Literal from flask import Response +from pydantic import BaseModel, ConfigDict, Field, field_validator + +from core.entities.provider_entities import BasicProviderConfig +from core.plugin.utils.http_parser import deserialize_response from graphon.model_runtime.entities.message_entities import ( AssistantPromptMessage, PromptMessage, @@ -21,10 +25,6 @@ from graphon.nodes.parameter_extractor.entities import ( from graphon.nodes.question_classifier.entities import ( ClassConfig, ) -from pydantic import BaseModel, ConfigDict, Field, field_validator - -from core.entities.provider_entities import BasicProviderConfig -from core.plugin.utils.http_parser import deserialize_response class InvokeCredentials(BaseModel): @@ -49,7 +49,7 @@ class RequestInvokeTool(BaseModel): tool_type: Literal["builtin", "workflow", "api", "mcp"] provider: str tool: str - tool_parameters: dict + tool_parameters: dict[str, Any] credential_id: str | None = None @@ -209,7 +209,7 @@ class RequestInvokeEncrypt(BaseModel): opt: Literal["encrypt", "decrypt", "clear"] namespace: Literal["endpoint"] identity: str - data: dict = Field(default_factory=dict) + data: dict[str, Any] = 
Field(default_factory=dict) config: list[BasicProviderConfig] = Field(default_factory=list) diff --git a/api/core/plugin/impl/base.py b/api/core/plugin/impl/base.py index 7f36560b49..9ee8469892 100644 --- a/api/core/plugin/impl/base.py +++ b/api/core/plugin/impl/base.py @@ -5,14 +5,6 @@ from collections.abc import Callable, Generator from typing import Any, cast import httpx -from graphon.model_runtime.errors.invoke import ( - InvokeAuthorizationError, - InvokeBadRequestError, - InvokeConnectionError, - InvokeRateLimitError, - InvokeServerUnavailableError, -) -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from pydantic import BaseModel from yarl import URL @@ -37,6 +29,14 @@ from core.trigger.errors import ( TriggerPluginInvokeError, TriggerProviderCredentialValidationError, ) +from graphon.model_runtime.errors.invoke import ( + InvokeAuthorizationError, + InvokeBadRequestError, + InvokeConnectionError, + InvokeRateLimitError, + InvokeServerUnavailableError, +) +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError plugin_daemon_inner_api_baseurl = URL(str(dify_config.PLUGIN_DAEMON_URL)) _plugin_daemon_timeout_config = cast( diff --git a/api/core/plugin/impl/datasource.py b/api/core/plugin/impl/datasource.py index ce1ef71494..56c08addba 100644 --- a/api/core/plugin/impl/datasource.py +++ b/api/core/plugin/impl/datasource.py @@ -26,7 +26,7 @@ class PluginDatasourceManager(BasePluginClient): Fetch datasource providers for the given tenant. """ - def transformer(json_response: dict[str, Any]) -> dict: + def transformer(json_response: dict[str, Any]) -> dict[str, Any]: if json_response.get("data"): for provider in json_response.get("data", []): declaration = provider.get("declaration", {}) or {} @@ -68,7 +68,7 @@ class PluginDatasourceManager(BasePluginClient): Fetch datasource providers for the given tenant. """ - def transformer(json_response: dict[str, Any]) -> dict: + def transformer(json_response: dict[str, Any]) -> dict[str, Any]: if json_response.get("data"): for provider in json_response.get("data", []): declaration = provider.get("declaration", {}) or {} @@ -110,7 +110,7 @@ class PluginDatasourceManager(BasePluginClient): tool_provider_id = DatasourceProviderID(provider_id) - def transformer(json_response: dict[str, Any]) -> dict: + def transformer(json_response: dict[str, Any]) -> dict[str, Any]: data = json_response.get("data") if data: for datasource in data.get("declaration", {}).get("datasources", []): diff --git a/api/core/plugin/impl/endpoint.py b/api/core/plugin/impl/endpoint.py index 2db5185a2c..b335b42763 100644 --- a/api/core/plugin/impl/endpoint.py +++ b/api/core/plugin/impl/endpoint.py @@ -1,3 +1,5 @@ +from typing import Any + from core.plugin.entities.endpoint import EndpointEntityWithInstance from core.plugin.impl.base import BasePluginClient from core.plugin.impl.exc import PluginDaemonInternalServerError @@ -5,7 +7,12 @@ from core.plugin.impl.exc import PluginDaemonInternalServerError class PluginEndpointClient(BasePluginClient): def create_endpoint( - self, tenant_id: str, user_id: str, plugin_unique_identifier: str, name: str, settings: dict + self, + tenant_id: str, + user_id: str, + plugin_unique_identifier: str, + name: str, + settings: dict[str, Any], ) -> bool: """ Create an endpoint for the given plugin. 
@@ -49,7 +56,9 @@ class PluginEndpointClient(BasePluginClient): params={"plugin_id": plugin_id, "page": page, "page_size": page_size}, ) - def update_endpoint(self, tenant_id: str, user_id: str, endpoint_id: str, name: str, settings: dict): + def update_endpoint( + self, tenant_id: str, user_id: str, endpoint_id: str, name: str, settings: dict[str, Any] + ) -> bool: """ Update the settings of the given endpoint. """ diff --git a/api/core/plugin/impl/model.py b/api/core/plugin/impl/model.py index 1e38c24717..47608bdfa6 100644 --- a/api/core/plugin/impl/model.py +++ b/api/core/plugin/impl/model.py @@ -2,13 +2,6 @@ import binascii from collections.abc import Generator, Sequence from typing import IO, Any -from graphon.model_runtime.entities.llm_entities import LLMResultChunk -from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool -from graphon.model_runtime.entities.model_entities import AIModelEntity -from graphon.model_runtime.entities.rerank_entities import MultimodalRerankInput, RerankResult -from graphon.model_runtime.entities.text_embedding_entities import EmbeddingResult -from graphon.model_runtime.utils.encoders import jsonable_encoder - from core.plugin.entities.plugin_daemon import ( PluginBasicBooleanResponse, PluginDaemonInnerError, @@ -20,6 +13,12 @@ from core.plugin.entities.plugin_daemon import ( PluginVoicesResponse, ) from core.plugin.impl.base import BasePluginClient +from graphon.model_runtime.entities.llm_entities import LLMResultChunk +from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool +from graphon.model_runtime.entities.model_entities import AIModelEntity +from graphon.model_runtime.entities.rerank_entities import MultimodalRerankInput, RerankResult +from graphon.model_runtime.entities.text_embedding_entities import EmbeddingResult +from graphon.model_runtime.utils.encoders import jsonable_encoder class PluginModelClient(BasePluginClient): @@ -50,7 +49,7 @@ class PluginModelClient(BasePluginClient): provider: str, model_type: str, model: str, - credentials: dict, + credentials: dict[str, Any], ) -> AIModelEntity | None: """ Get model schema @@ -80,7 +79,7 @@ class PluginModelClient(BasePluginClient): return None def validate_provider_credentials( - self, tenant_id: str, user_id: str | None, plugin_id: str, provider: str, credentials: dict + self, tenant_id: str, user_id: str | None, plugin_id: str, provider: str, credentials: dict[str, Any] ) -> bool: """ validate the credentials of the provider @@ -118,7 +117,7 @@ class PluginModelClient(BasePluginClient): provider: str, model_type: str, model: str, - credentials: dict, + credentials: dict[str, Any], ) -> bool: """ validate the credentials of the provider @@ -157,9 +156,9 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], prompt_messages: list[PromptMessage], - model_parameters: dict | None = None, + model_parameters: dict[str, Any] | None = None, tools: list[PromptMessageTool] | None = None, stop: list[str] | None = None, stream: bool = True, @@ -206,7 +205,7 @@ class PluginModelClient(BasePluginClient): provider: str, model_type: str, model: str, - credentials: dict, + credentials: dict[str, Any], prompt_messages: list[PromptMessage], tools: list[PromptMessageTool] | None = None, ) -> int: @@ -248,7 +247,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], 
texts: list[str], input_type: str, ) -> EmbeddingResult: @@ -290,7 +289,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], documents: list[dict], input_type: str, ) -> EmbeddingResult: @@ -332,7 +331,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], texts: list[str], ) -> list[int]: """ @@ -372,7 +371,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], query: str, docs: list[str], score_threshold: float | None = None, @@ -418,7 +417,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], query: MultimodalRerankInput, docs: list[MultimodalRerankInput], score_threshold: float | None = None, @@ -463,7 +462,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], content_text: str, voice: str, ) -> Generator[bytes, None, None]: @@ -508,7 +507,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], language: str | None = None, ): """ @@ -552,7 +551,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], file: IO[bytes], ) -> str: """ @@ -592,7 +591,7 @@ class PluginModelClient(BasePluginClient): plugin_id: str, provider: str, model: str, - credentials: dict, + credentials: dict[str, Any], text: str, ) -> bool: """ diff --git a/api/core/plugin/impl/model_runtime.py b/api/core/plugin/impl/model_runtime.py index 22c846b6de..e3fba4ef3a 100644 --- a/api/core/plugin/impl/model_runtime.py +++ b/api/core/plugin/impl/model_runtime.py @@ -6,13 +6,6 @@ from collections.abc import Generator, Iterable, Sequence from threading import Lock from typing import IO, Any, Union -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk -from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool -from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelType -from graphon.model_runtime.entities.provider_entities import ProviderEntity -from graphon.model_runtime.entities.rerank_entities import MultimodalRerankInput, RerankResult -from graphon.model_runtime.entities.text_embedding_entities import EmbeddingInputType, EmbeddingResult -from graphon.model_runtime.runtime import ModelRuntime from pydantic import ValidationError from redis import RedisError @@ -21,6 +14,13 @@ from core.plugin.entities.plugin_daemon import PluginModelProviderEntity from core.plugin.impl.asset import PluginAssetManager from core.plugin.impl.model import PluginModelClient from extensions.ext_redis import redis_client +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk +from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageTool +from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelType +from graphon.model_runtime.entities.provider_entities import ProviderEntity +from graphon.model_runtime.entities.rerank_entities import MultimodalRerankInput, RerankResult +from graphon.model_runtime.entities.text_embedding_entities import EmbeddingInputType, EmbeddingResult +from 
graphon.model_runtime.runtime import ModelRuntime from models.provider_ids import ModelProviderID logger = logging.getLogger(__name__) diff --git a/api/core/plugin/impl/model_runtime_factory.py b/api/core/plugin/impl/model_runtime_factory.py index 4b29a6fc56..35abd2ae8c 100644 --- a/api/core/plugin/impl/model_runtime_factory.py +++ b/api/core/plugin/impl/model_runtime_factory.py @@ -2,9 +2,8 @@ from __future__ import annotations from typing import TYPE_CHECKING -from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory - from core.plugin.impl.model import PluginModelClient +from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory if TYPE_CHECKING: from core.model_manager import ModelManager diff --git a/api/core/plugin/impl/plugin.py b/api/core/plugin/impl/plugin.py index c75c30a98a..8a7175bb51 100644 --- a/api/core/plugin/impl/plugin.py +++ b/api/core/plugin/impl/plugin.py @@ -1,4 +1,5 @@ from collections.abc import Sequence +from typing import Any from requests import HTTPError @@ -263,7 +264,7 @@ class PluginInstaller(BasePluginClient): original_plugin_unique_identifier: str, new_plugin_unique_identifier: str, source: PluginInstallationSource, - meta: dict, + meta: dict[str, Any], ) -> PluginInstallTaskStartResponse: """ Upgrade a plugin. diff --git a/api/core/plugin/utils/converter.py b/api/core/plugin/utils/converter.py index 90350f8400..12d8e282b2 100644 --- a/api/core/plugin/utils/converter.py +++ b/api/core/plugin/utils/converter.py @@ -1,8 +1,7 @@ from typing import Any -from graphon.file import File - from core.tools.entities.tool_entities import ToolSelector +from graphon.file import File def convert_parameters_to_plugin_format(parameters: dict[str, Any]) -> dict[str, Any]: diff --git a/api/core/prompt/advanced_prompt_transform.py b/api/core/prompt/advanced_prompt_transform.py index 19b5e9223a..24e05ef865 100644 --- a/api/core/prompt/advanced_prompt_transform.py +++ b/api/core/prompt/advanced_prompt_transform.py @@ -1,6 +1,13 @@ from collections.abc import Mapping, Sequence from typing import cast +from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity +from core.helper.code_executor.jinja2.jinja2_formatter import Jinja2Formatter +from core.memory.token_buffer_memory import TokenBufferMemory +from core.model_manager import ModelInstance +from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, CompletionModelPromptTemplate, MemoryConfig +from core.prompt.prompt_transform import PromptTransform +from core.prompt.utils.prompt_template_parser import PromptTemplateParser from graphon.file import File, file_manager from graphon.model_runtime.entities import ( AssistantPromptMessage, @@ -13,14 +20,6 @@ from graphon.model_runtime.entities import ( from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent, PromptMessageContentUnionTypes from graphon.runtime import VariablePool -from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity -from core.helper.code_executor.jinja2.jinja2_formatter import Jinja2Formatter -from core.memory.token_buffer_memory import TokenBufferMemory -from core.model_manager import ModelInstance -from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, CompletionModelPromptTemplate, MemoryConfig -from core.prompt.prompt_transform import PromptTransform -from core.prompt.utils.prompt_template_parser import PromptTemplateParser - class AdvancedPromptTransform(PromptTransform): 
""" diff --git a/api/core/prompt/agent_history_prompt_transform.py b/api/core/prompt/agent_history_prompt_transform.py index 9be70199b7..8f1d51f08a 100644 --- a/api/core/prompt/agent_history_prompt_transform.py +++ b/api/core/prompt/agent_history_prompt_transform.py @@ -1,17 +1,16 @@ from typing import cast -from graphon.model_runtime.entities.message_entities import ( - PromptMessage, - SystemPromptMessage, - UserPromptMessage, -) -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel - from core.app.entities.app_invoke_entities import ( ModelConfigWithCredentialsEntity, ) from core.memory.token_buffer_memory import TokenBufferMemory from core.prompt.prompt_transform import PromptTransform +from graphon.model_runtime.entities.message_entities import ( + PromptMessage, + SystemPromptMessage, + UserPromptMessage, +) +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel class AgentHistoryPromptTransform(PromptTransform): diff --git a/api/core/prompt/prompt_transform.py b/api/core/prompt/prompt_transform.py index 4539ae9f11..6ff2f44cdc 100644 --- a/api/core/prompt/prompt_transform.py +++ b/api/core/prompt/prompt_transform.py @@ -1,12 +1,11 @@ from typing import Any -from graphon.model_runtime.entities.message_entities import PromptMessage -from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelPropertyKey - from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.memory.token_buffer_memory import TokenBufferMemory from core.model_manager import ModelInstance from core.prompt.entities.advanced_prompt_entities import MemoryConfig +from graphon.model_runtime.entities.message_entities import PromptMessage +from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelPropertyKey class PromptTransform: diff --git a/api/core/prompt/simple_prompt_transform.py b/api/core/prompt/simple_prompt_transform.py index 36fca60db3..1665bdeb52 100644 --- a/api/core/prompt/simple_prompt_transform.py +++ b/api/core/prompt/simple_prompt_transform.py @@ -4,6 +4,12 @@ from collections.abc import Mapping, Sequence from enum import StrEnum, auto from typing import TYPE_CHECKING, Any, TypedDict, cast +from core.app.app_config.entities import PromptTemplateEntity +from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity +from core.memory.token_buffer_memory import TokenBufferMemory +from core.prompt.entities.advanced_prompt_entities import MemoryConfig +from core.prompt.prompt_transform import PromptTransform +from core.prompt.utils.prompt_template_parser import PromptTemplateParser from graphon.file import file_manager from graphon.model_runtime.entities.message_entities import ( ImagePromptMessageContent, @@ -13,13 +19,6 @@ from graphon.model_runtime.entities.message_entities import ( TextPromptMessageContent, UserPromptMessage, ) - -from core.app.app_config.entities import PromptTemplateEntity -from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity -from core.memory.token_buffer_memory import TokenBufferMemory -from core.prompt.entities.advanced_prompt_entities import MemoryConfig -from core.prompt.prompt_transform import PromptTransform -from core.prompt.utils.prompt_template_parser import PromptTemplateParser from models.model import AppMode if TYPE_CHECKING: @@ -96,11 +95,11 @@ class SimplePromptTransform(PromptTransform): app_mode: AppMode, model_config: ModelConfigWithCredentialsEntity, pre_prompt: 
str, - inputs: dict, + inputs: dict[str, Any], query: str | None = None, context: str | None = None, histories: str | None = None, - ) -> tuple[str, dict]: + ) -> tuple[str, dict[str, Any]]: # get prompt template prompt_template_config = self.get_prompt_template( app_mode=app_mode, @@ -187,7 +186,7 @@ class SimplePromptTransform(PromptTransform): self, app_mode: AppMode, pre_prompt: str, - inputs: dict, + inputs: dict[str, Any], query: str, context: str | None, files: Sequence["File"], @@ -234,7 +233,7 @@ class SimplePromptTransform(PromptTransform): self, app_mode: AppMode, pre_prompt: str, - inputs: dict, + inputs: dict[str, Any], query: str, context: str | None, files: Sequence["File"], @@ -313,7 +312,7 @@ class SimplePromptTransform(PromptTransform): return prompt_message - def _get_prompt_rule(self, app_mode: AppMode, provider: str, model: str): + def _get_prompt_rule(self, app_mode: AppMode, provider: str, model: str) -> dict[str, Any]: """ Get simple prompt rule. :param app_mode: app mode @@ -325,7 +324,7 @@ class SimplePromptTransform(PromptTransform): # Check if the prompt file is already loaded if prompt_file_name in prompt_file_contents: - return cast(dict, prompt_file_contents[prompt_file_name]) + return cast(dict[str, Any], prompt_file_contents[prompt_file_name]) # Get the absolute path of the subdirectory prompt_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), "prompt_templates") @@ -338,7 +337,7 @@ class SimplePromptTransform(PromptTransform): # Store the content of the prompt file prompt_file_contents[prompt_file_name] = content - return cast(dict, content) + return cast(dict[str, Any], content) def _prompt_file_name(self, app_mode: AppMode, provider: str, model: str) -> str: # baichuan diff --git a/api/core/prompt/utils/prompt_message_util.py b/api/core/prompt/utils/prompt_message_util.py index dbda749925..ba76eb0c4e 100644 --- a/api/core/prompt/utils/prompt_message_util.py +++ b/api/core/prompt/utils/prompt_message_util.py @@ -1,6 +1,7 @@ from collections.abc import Sequence from typing import Any, cast +from core.prompt.simple_prompt_transform import ModelMode from graphon.model_runtime.entities import ( AssistantPromptMessage, AudioPromptMessageContent, @@ -11,8 +12,6 @@ from graphon.model_runtime.entities import ( TextPromptMessageContent, ) -from core.prompt.simple_prompt_transform import ModelMode - class PromptMessageUtil: @staticmethod diff --git a/api/core/provider_manager.py b/api/core/provider_manager.py index e3b3f83c20..c3bbe8fc09 100644 --- a/api/core/provider_manager.py +++ b/api/core/provider_manager.py @@ -6,14 +6,6 @@ from collections.abc import Sequence from json import JSONDecodeError from typing import TYPE_CHECKING, Any -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.entities.provider_entities import ( - ConfigurateMethod, - CredentialFormSchema, - FormType, - ProviderEntity, -) -from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory from pydantic import TypeAdapter from sqlalchemy import select from sqlalchemy.exc import IntegrityError @@ -41,6 +33,14 @@ from core.helper.position_helper import is_filtered from extensions import ext_hosting_provider from extensions.ext_database import db from extensions.ext_redis import redis_client +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.entities.provider_entities import ( + ConfigurateMethod, + CredentialFormSchema, + FormType, + ProviderEntity, +) +from 
graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory from models.provider import ( LoadBalancingModelConfig, Provider, @@ -856,7 +856,7 @@ class ProviderManager: secret_variables: list[str], cache_type: ProviderCredentialsCacheType, is_provider: bool = False, - ) -> dict: + ) -> dict[str, Any]: """Get and decrypt credentials with caching.""" credentials_cache = ProviderCredentialsCache( tenant_id=tenant_id, diff --git a/api/core/rag/data_post_processor/data_post_processor.py b/api/core/rag/data_post_processor/data_post_processor.py index 9ce91f52ff..ca530748ed 100644 --- a/api/core/rag/data_post_processor/data_post_processor.py +++ b/api/core/rag/data_post_processor/data_post_processor.py @@ -1,8 +1,5 @@ from typing import TypedDict -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError - from core.model_manager import ModelInstance, ModelManager from core.rag.data_post_processor.reorder import ReorderRunner from core.rag.index_processor.constant.query_type import QueryType @@ -11,6 +8,8 @@ from core.rag.rerank.entity.weight import KeywordSetting, VectorSetting, Weights from core.rag.rerank.rerank_base import BaseRerankRunner from core.rag.rerank.rerank_factory import RerankRunnerFactory from core.rag.rerank.rerank_type import RerankMode +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError class RerankingModelDict(TypedDict): diff --git a/api/core/rag/datasource/retrieval_service.py b/api/core/rag/datasource/retrieval_service.py index c1654ac130..7e71d67ec0 100644 --- a/api/core/rag/datasource/retrieval_service.py +++ b/api/core/rag/datasource/retrieval_service.py @@ -4,7 +4,6 @@ from concurrent.futures import ThreadPoolExecutor from typing import Any, NotRequired, TypedDict from flask import Flask, current_app -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import select from sqlalchemy.orm import Session, load_only @@ -24,6 +23,7 @@ from core.rag.rerank.rerank_type import RerankMode from core.rag.retrieval.retrieval_methods import RetrievalMethod from core.tools.signature import sign_upload_file from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType from models.dataset import ( ChildChunk, Dataset, @@ -174,8 +174,8 @@ class RetrievalService: cls, dataset_id: str, query: str, - external_retrieval_model: dict | None = None, - metadata_filtering_conditions: dict | None = None, + external_retrieval_model: dict[str, Any] | None = None, + metadata_filtering_conditions: dict[str, Any] | None = None, ): stmt = select(Dataset).where(Dataset.id == dataset_id) dataset = db.session.scalar(stmt) @@ -195,6 +195,23 @@ class RetrievalService: ) return all_documents + @classmethod + def _filter_documents_by_vector_score_threshold( + cls, documents: list[Document], score_threshold: float | None + ) -> list[Document]: + """Keep documents whose stored retrieval score meets the threshold. + + Used when hybrid search skips early vector thresholding but no rerank + runner applies a threshold afterward (same rule as ``calculate_vector_score``). 
+ """ + if score_threshold is None: + return documents + return [ + document + for document in documents + if document.metadata and document.metadata.get("score", 0) >= score_threshold + ] + @classmethod def _deduplicate_documents(cls, documents: list[Document]) -> list[Document]: """Deduplicate documents in O(n) while preserving first-seen order. @@ -294,13 +311,20 @@ class RetrievalService: vector = Vector(dataset=dataset) documents = [] + # Hybrid search merges keyword / full-text / vector hits and then reranks + # (weighted fusion or reranking model). Applying the user score threshold at + # vector retrieval time uses embedding similarity, which is not comparable to + # reranked or fused scores and incorrectly drops high-quality chunks (#35233). + embedding_score_threshold = ( + 0.0 if retrieval_method == RetrievalMethod.HYBRID_SEARCH else score_threshold + ) if query_type == QueryType.TEXT_QUERY: documents.extend( vector.search_by_vector( query, search_type="similarity_score_threshold", top_k=top_k, - score_threshold=score_threshold, + score_threshold=embedding_score_threshold, filter={"group_id": [dataset.id]}, document_ids_filter=document_ids_filter, ) @@ -312,7 +336,7 @@ class RetrievalService: vector.search_by_file( file_id=query, top_k=top_k, - score_threshold=score_threshold, + score_threshold=embedding_score_threshold, filter={"group_id": [dataset.id]}, document_ids_filter=document_ids_filter, ) @@ -844,6 +868,10 @@ class RetrievalService: top_n=top_k, query_type=QueryType.TEXT_QUERY if query else QueryType.IMAGE_QUERY, ) + if not data_post_processor.rerank_runner and score_threshold: + all_documents_item = self._filter_documents_by_vector_score_threshold( + all_documents_item, score_threshold + ) all_documents.extend(all_documents_item) diff --git a/api/core/rag/datasource/vdb/vector_factory.py b/api/core/rag/datasource/vdb/vector_factory.py index dddd5fc994..59d7f3c3c4 100644 --- a/api/core/rag/datasource/vdb/vector_factory.py +++ b/api/core/rag/datasource/vdb/vector_factory.py @@ -4,7 +4,6 @@ import time from abc import ABC, abstractmethod from typing import Any -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import select from configs import dify_config @@ -19,6 +18,7 @@ from core.rag.models.document import Document from extensions.ext_database import db from extensions.ext_redis import redis_client from extensions.ext_storage import storage +from graphon.model_runtime.entities.model_entities import ModelType from models.dataset import Dataset, Whitelist from models.model import UploadFile diff --git a/api/core/rag/docstore/dataset_docstore.py b/api/core/rag/docstore/dataset_docstore.py index 8e9ebdd17a..f4699f6869 100644 --- a/api/core/rag/docstore/dataset_docstore.py +++ b/api/core/rag/docstore/dataset_docstore.py @@ -3,13 +3,13 @@ from __future__ import annotations from collections.abc import Sequence from typing import Any -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import delete, func, select from core.model_manager import ModelManager from core.rag.index_processor.constant.index_type import IndexTechniqueType from core.rag.models.document import AttachmentDocument, Document from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType from models.dataset import ChildChunk, Dataset, DocumentSegment, SegmentAttachmentBinding diff --git a/api/core/rag/embedding/cached_embedding.py b/api/core/rag/embedding/cached_embedding.py index 
8d1c0da392..4926f44f16 100644 --- a/api/core/rag/embedding/cached_embedding.py +++ b/api/core/rag/embedding/cached_embedding.py @@ -4,8 +4,6 @@ import pickle from typing import Any, cast import numpy as np -from graphon.model_runtime.entities.model_entities import ModelPropertyKey -from graphon.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel from sqlalchemy import select from sqlalchemy.exc import IntegrityError @@ -15,6 +13,8 @@ from core.model_manager import ModelInstance from core.rag.embedding.embedding_base import Embeddings from extensions.ext_database import db from extensions.ext_redis import redis_client +from graphon.model_runtime.entities.model_entities import ModelPropertyKey +from graphon.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel from libs import helper from models.dataset import Embedding @@ -106,7 +106,7 @@ class CacheEmbedding(Embeddings): return text_embeddings - def embed_multimodal_documents(self, multimodel_documents: list[dict]) -> list[list[float]]: + def embed_multimodal_documents(self, multimodel_documents: list[dict[str, Any]]) -> list[list[float]]: """Embed file documents.""" # use doc embedding cache or store if not exists multimodel_embeddings: list[Any] = [None for _ in range(len(multimodel_documents))] @@ -232,7 +232,7 @@ class CacheEmbedding(Embeddings): return embedding_results # type: ignore - def embed_multimodal_query(self, multimodel_document: dict) -> list[float]: + def embed_multimodal_query(self, multimodel_document: dict[str, Any]) -> list[float]: """Embed multimodal documents.""" # use doc embedding cache or store if not exists file_id = multimodel_document["file_id"] diff --git a/api/core/rag/embedding/embedding_base.py b/api/core/rag/embedding/embedding_base.py index 1be55bda80..7ae5c09ab7 100644 --- a/api/core/rag/embedding/embedding_base.py +++ b/api/core/rag/embedding/embedding_base.py @@ -1,4 +1,5 @@ from abc import ABC, abstractmethod +from typing import Any class Embeddings(ABC): @@ -10,7 +11,7 @@ class Embeddings(ABC): raise NotImplementedError @abstractmethod - def embed_multimodal_documents(self, multimodel_documents: list[dict]) -> list[list[float]]: + def embed_multimodal_documents(self, multimodel_documents: list[dict[str, Any]]) -> list[list[float]]: """Embed file documents.""" raise NotImplementedError @@ -20,7 +21,7 @@ class Embeddings(ABC): raise NotImplementedError @abstractmethod - def embed_multimodal_query(self, multimodel_document: dict) -> list[float]: + def embed_multimodal_query(self, multimodel_document: dict[str, Any]) -> list[float]: """Embed multimodal query.""" raise NotImplementedError diff --git a/api/core/rag/extractor/extract_processor.py b/api/core/rag/extractor/extract_processor.py index 449be6a448..fbd2a6db93 100644 --- a/api/core/rag/extractor/extract_processor.py +++ b/api/core/rag/extractor/extract_processor.py @@ -95,9 +95,9 @@ class ExtractProcessor: ) -> list[Document]: if extract_setting.datasource_type == DatasourceType.FILE: with tempfile.TemporaryDirectory() as temp_dir: + upload_file = extract_setting.upload_file if not file_path: - assert extract_setting.upload_file is not None, "upload_file is required" - upload_file: UploadFile = extract_setting.upload_file + assert upload_file is not None, "upload_file is required" suffix = Path(upload_file.key).suffix # FIXME mypy: Cannot determine type of 'tempfile._get_candidate_names' better not use it here file_path = f"{temp_dir}/{next(tempfile._get_candidate_names())}{suffix}" # 
type: ignore @@ -113,6 +113,7 @@ class ExtractProcessor: if file_extension in {".xlsx", ".xls"}: extractor = ExcelExtractor(file_path) elif file_extension == ".pdf": + assert upload_file is not None extractor = PdfExtractor(file_path, upload_file.tenant_id, upload_file.created_by) elif file_extension in {".md", ".markdown", ".mdx"}: extractor = ( @@ -123,6 +124,7 @@ class ExtractProcessor: elif file_extension in {".htm", ".html"}: extractor = HtmlExtractor(file_path) elif file_extension == ".docx": + assert upload_file is not None extractor = WordExtractor(file_path, upload_file.tenant_id, upload_file.created_by) elif file_extension == ".doc": extractor = UnstructuredWordExtractor(file_path, unstructured_api_url, unstructured_api_key) @@ -149,12 +151,14 @@ class ExtractProcessor: if file_extension in {".xlsx", ".xls"}: extractor = ExcelExtractor(file_path) elif file_extension == ".pdf": + assert upload_file is not None extractor = PdfExtractor(file_path, upload_file.tenant_id, upload_file.created_by) elif file_extension in {".md", ".markdown", ".mdx"}: extractor = MarkdownExtractor(file_path, autodetect_encoding=True) elif file_extension in {".htm", ".html"}: extractor = HtmlExtractor(file_path) elif file_extension == ".docx": + assert upload_file is not None extractor = WordExtractor(file_path, upload_file.tenant_id, upload_file.created_by) elif file_extension == ".csv": extractor = CSVExtractor(file_path, autodetect_encoding=True) diff --git a/api/core/rag/extractor/firecrawl/firecrawl_app.py b/api/core/rag/extractor/firecrawl/firecrawl_app.py index 89bdd56a6c..556158cf00 100644 --- a/api/core/rag/extractor/firecrawl/firecrawl_app.py +++ b/api/core/rag/extractor/firecrawl/firecrawl_app.py @@ -174,21 +174,25 @@ class FirecrawlApp: return f"{self.base_url.rstrip('/')}/{path.lstrip('/')}" def _post_request(self, url, data, headers, retries=3, backoff_factor=0.5) -> httpx.Response: + response: httpx.Response | None = None for attempt in range(retries): response = httpx.post(url, headers=headers, json=data) if response.status_code == 502: time.sleep(backoff_factor * (2**attempt)) else: return response + assert response is not None, "retries must be at least 1" return response def _get_request(self, url, headers, retries=3, backoff_factor=0.5) -> httpx.Response: + response: httpx.Response | None = None for attempt in range(retries): response = httpx.get(url, headers=headers) if response.status_code == 502: time.sleep(backoff_factor * (2**attempt)) else: return response + assert response is not None, "retries must be at least 1" return response def _handle_error(self, response, action): diff --git a/api/core/rag/index_processor/processor/paragraph_index_processor.py b/api/core/rag/index_processor/processor/paragraph_index_processor.py index a487c49053..f8242efe31 100644 --- a/api/core/rag/index_processor/processor/paragraph_index_processor.py +++ b/api/core/rag/index_processor/processor/paragraph_index_processor.py @@ -7,16 +7,6 @@ from typing import Any, TypedDict, cast logger = logging.getLogger(__name__) -from graphon.file import File, FileTransferMethod, FileType, file_manager -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage -from graphon.model_runtime.entities.message_entities import ( - ImagePromptMessageContent, - PromptMessage, - PromptMessageContentUnionTypes, - TextPromptMessageContent, - UserPromptMessage, -) -from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType from sqlalchemy import select from core.app.file_access import 
DatabaseFileAccessController @@ -43,6 +33,16 @@ from core.tools.utils.text_processing_utils import remove_leading_symbols from core.workflow.file_reference import build_file_reference from extensions.ext_database import db from factories.file_factory import build_from_mapping +from graphon.file import File, FileTransferMethod, FileType, file_manager +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage +from graphon.model_runtime.entities.message_entities import ( + ImagePromptMessageContent, + PromptMessage, + PromptMessageContentUnionTypes, + TextPromptMessageContent, + UserPromptMessage, +) +from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType from libs import helper from models import UploadFile from models.account import Account diff --git a/api/core/rag/models/document.py b/api/core/rag/models/document.py index 087736d0b0..4ebf095904 100644 --- a/api/core/rag/models/document.py +++ b/api/core/rag/models/document.py @@ -2,9 +2,10 @@ from abc import ABC, abstractmethod from collections.abc import Sequence from typing import Any -from graphon.file import File from pydantic import BaseModel, Field +from graphon.file import File + class ChildDocument(BaseModel): """Class for storing a piece of text and associated metadata.""" diff --git a/api/core/rag/rerank/rerank_model.py b/api/core/rag/rerank/rerank_model.py index a8d37845a5..bce08f998f 100644 --- a/api/core/rag/rerank/rerank_model.py +++ b/api/core/rag/rerank/rerank_model.py @@ -1,8 +1,5 @@ import base64 -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.entities.rerank_entities import MultimodalRerankInput, RerankResult - from core.model_manager import ModelInstance, ModelManager from core.rag.index_processor.constant.doc_type import DocType from core.rag.index_processor.constant.query_type import QueryType @@ -10,6 +7,8 @@ from core.rag.models.document import Document from core.rag.rerank.rerank_base import BaseRerankRunner from extensions.ext_database import db from extensions.ext_storage import storage +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.entities.rerank_entities import MultimodalRerankInput, RerankResult from models.model import UploadFile diff --git a/api/core/rag/rerank/weight_rerank.py b/api/core/rag/rerank/weight_rerank.py index 49123e13d0..d0732b269a 100644 --- a/api/core/rag/rerank/weight_rerank.py +++ b/api/core/rag/rerank/weight_rerank.py @@ -2,7 +2,6 @@ import math from collections import Counter import numpy as np -from graphon.model_runtime.entities.model_entities import ModelType from core.model_manager import ModelManager from core.rag.datasource.keyword.jieba.jieba_keyword_table_handler import JiebaKeywordTableHandler @@ -12,6 +11,7 @@ from core.rag.index_processor.constant.query_type import QueryType from core.rag.models.document import Document from core.rag.rerank.entity.weight import VectorSetting, Weights from core.rag.rerank.rerank_base import BaseRerankRunner +from graphon.model_runtime.entities.model_entities import ModelType class WeightRerankRunner(BaseRerankRunner): diff --git a/api/core/rag/retrieval/dataset_retrieval.py b/api/core/rag/retrieval/dataset_retrieval.py index b681ff5db1..1453fe020b 100644 --- a/api/core/rag/retrieval/dataset_retrieval.py +++ b/api/core/rag/retrieval/dataset_retrieval.py @@ -9,11 +9,6 @@ from collections.abc import Generator, Mapping from typing import Any, Union, cast from flask import Flask, current_app -from 
graphon.file import File, FileTransferMethod, FileType -from graphon.model_runtime.entities.llm_entities import LLMMode, LLMResult, LLMUsage -from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageRole, PromptMessageTool -from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from sqlalchemy import and_, func, literal, or_, select, update from sqlalchemy.orm import sessionmaker @@ -69,6 +64,11 @@ from core.workflow.nodes.knowledge_retrieval.retrieval import ( ) from extensions.ext_database import db from extensions.ext_redis import redis_client +from graphon.file import File, FileTransferMethod, FileType +from graphon.model_runtime.entities.llm_entities import LLMMode, LLMResult, LLMUsage +from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageRole, PromptMessageTool +from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from libs.helper import parse_uuid_str_or_none from libs.json_in_md_parser import parse_and_check_json_markdown from models import UploadFile @@ -875,7 +875,11 @@ class DatasetRetrieval: return retrieval_resource_list def _on_retrieval_end( - self, flask_app: Flask, documents: list[Document], message_id: str | None = None, timer: dict | None = None + self, + flask_app: Flask, + documents: list[Document], + message_id: str | None = None, + timer: dict[str, Any] | None = None, ): """Handle retrieval end.""" with flask_app.app_context(): @@ -980,7 +984,7 @@ class DatasetRetrieval: self._send_trace_task(message_id, documents, timer) - def _send_trace_task(self, message_id: str | None, documents: list[Document], timer: dict | None): + def _send_trace_task(self, message_id: str | None, documents: list[Document], timer: dict[str, Any] | None): """Send trace task if trace manager is available.""" trace_manager: TraceQueueManager | None = ( self.application_generate_entity.trace_manager if self.application_generate_entity else None @@ -1142,7 +1146,7 @@ class DatasetRetrieval: invoke_from: InvokeFrom, hit_callback: DatasetIndexToolCallbackHandler, user_id: str, - inputs: dict, + inputs: dict[str, Any], ) -> list[DatasetRetrieverBaseTool] | None: """ A dataset tool is a tool that can be used to retrieve information from a dataset @@ -1337,7 +1341,7 @@ class DatasetRetrieval: metadata_filtering_mode: str, metadata_model_config: ModelConfig, metadata_filtering_conditions: MetadataFilteringCondition | None, - inputs: dict, + inputs: dict[str, Any], ) -> tuple[dict[str, list[str]] | None, MetadataFilteringCondition | None]: document_query = select(DatasetDocument).where( DatasetDocument.dataset_id.in_(dataset_ids), @@ -1417,7 +1421,7 @@ class DatasetRetrieval: metadata_filter_document_ids[document.dataset_id].append(document.id) # type: ignore return metadata_filter_document_ids, metadata_condition - def _replace_metadata_filter_value(self, text: str, inputs: dict) -> str: + def _replace_metadata_filter_value(self, text: str, inputs: dict[str, Any]) -> str: if not inputs: return text diff --git a/api/core/rag/retrieval/output_parser/react_output.py b/api/core/rag/retrieval/output_parser/react_output.py index 9a14d41716..29abae4280 100644 --- a/api/core/rag/retrieval/output_parser/react_output.py +++ b/api/core/rag/retrieval/output_parser/react_output.py @@ -1,7 +1,7 @@ from 
__future__ import annotations from dataclasses import dataclass -from typing import NamedTuple, Union +from typing import Any, NamedTuple, Union @dataclass @@ -10,7 +10,7 @@ class ReactAction: tool: str """The name of the Tool to execute.""" - tool_input: Union[str, dict] + tool_input: Union[str, dict[str, Any]] """The input to pass in to the Tool.""" log: str """Additional information to log about the action.""" @@ -19,7 +19,7 @@ class ReactAction: class ReactFinish(NamedTuple): """The final return value of an ReactFinish.""" - return_values: dict + return_values: dict[str, Any] """Dictionary of return values.""" log: str """Additional information to log about the return value""" diff --git a/api/core/rag/retrieval/router/multi_dataset_function_call_router.py b/api/core/rag/retrieval/router/multi_dataset_function_call_router.py index dce7b6226c..e617a9660e 100644 --- a/api/core/rag/retrieval/router/multi_dataset_function_call_router.py +++ b/api/core/rag/retrieval/router/multi_dataset_function_call_router.py @@ -1,10 +1,9 @@ from typing import Union -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage -from graphon.model_runtime.entities.message_entities import PromptMessageTool, SystemPromptMessage, UserPromptMessage - from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.model_manager import ModelInstance +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage +from graphon.model_runtime.entities.message_entities import PromptMessageTool, SystemPromptMessage, UserPromptMessage class FunctionCallMultiDatasetRouter: diff --git a/api/core/rag/retrieval/router/multi_dataset_react_route.py b/api/core/rag/retrieval/router/multi_dataset_react_route.py index dd280cdf6a..21a9d04f7f 100644 --- a/api/core/rag/retrieval/router/multi_dataset_react_route.py +++ b/api/core/rag/retrieval/router/multi_dataset_react_route.py @@ -1,9 +1,5 @@ from collections.abc import Generator, Sequence -from typing import Union - -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage -from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageRole, PromptMessageTool -from graphon.model_runtime.entities.model_entities import ModelType +from typing import Any, Union from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.app.llm import deduct_llm_quota @@ -12,6 +8,9 @@ from core.prompt.advanced_prompt_transform import AdvancedPromptTransform from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, CompletionModelPromptTemplate from core.rag.retrieval.output_parser.react_output import ReactAction from core.rag.retrieval.output_parser.structured_chat import StructuredChatOutputParser +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage +from graphon.model_runtime.entities.message_entities import PromptMessage, PromptMessageRole, PromptMessageTool +from graphon.model_runtime.entities.model_entities import ModelType PREFIX = """Respond to the human as helpfully and accurately as possible. 
You have access to the following tools:""" @@ -139,7 +138,7 @@ class ReactMultiDatasetRouter: def _invoke_llm( self, - completion_param: dict, + completion_param: dict[str, Any], model_instance: ModelInstance, prompt_messages: list[PromptMessage], stop: list[str], diff --git a/api/core/rag/splitter/fixed_text_splitter.py b/api/core/rag/splitter/fixed_text_splitter.py index 3383c7f3bd..2581c354dd 100644 --- a/api/core/rag/splitter/fixed_text_splitter.py +++ b/api/core/rag/splitter/fixed_text_splitter.py @@ -7,10 +7,9 @@ import re from collections.abc import Collection from typing import Any, Literal -from graphon.model_runtime.model_providers.__base.tokenizers.gpt2_tokenizer import GPT2Tokenizer - from core.model_manager import ModelInstance from core.rag.splitter.text_splitter import RecursiveCharacterTextSplitter +from graphon.model_runtime.model_providers.__base.tokenizers.gpt2_tokenizer import GPT2Tokenizer class EnhanceRecursiveCharacterTextSplitter(RecursiveCharacterTextSplitter): diff --git a/api/core/rag/splitter/text_splitter.py b/api/core/rag/splitter/text_splitter.py index 8977611f93..7f2117e2dd 100644 --- a/api/core/rag/splitter/text_splitter.py +++ b/api/core/rag/splitter/text_splitter.py @@ -63,7 +63,7 @@ class TextSplitter(BaseDocumentTransformer, ABC): def split_text(self, text: str) -> list[str]: """Split text into multiple components.""" - def create_documents(self, texts: list[str], metadatas: list[dict] | None = None) -> list[Document]: + def create_documents(self, texts: list[str], metadatas: list[dict[str, Any]] | None = None) -> list[Document]: """Create documents from a list of texts.""" _metadatas = metadatas or [{}] * len(texts) documents = [] diff --git a/api/core/repositories/celery_workflow_execution_repository.py b/api/core/repositories/celery_workflow_execution_repository.py index b07c63fdf0..e87d1cd6b2 100644 --- a/api/core/repositories/celery_workflow_execution_repository.py +++ b/api/core/repositories/celery_workflow_execution_repository.py @@ -7,11 +7,11 @@ providing improved performance by offloading database operations to background w import logging -from graphon.entities import WorkflowExecution from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker from core.repositories.factory import WorkflowExecutionRepository +from graphon.entities import WorkflowExecution from libs.helper import extract_tenant_id from models import Account, CreatorUserRole, EndUser from models.enums import WorkflowRunTriggeredFrom diff --git a/api/core/repositories/celery_workflow_node_execution_repository.py b/api/core/repositories/celery_workflow_node_execution_repository.py index cdb3af01a8..2451563317 100644 --- a/api/core/repositories/celery_workflow_node_execution_repository.py +++ b/api/core/repositories/celery_workflow_node_execution_repository.py @@ -8,7 +8,6 @@ providing improved performance by offloading database operations to background w import logging from collections.abc import Sequence -from graphon.entities import WorkflowNodeExecution from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker @@ -16,6 +15,7 @@ from core.repositories.factory import ( OrderConfig, WorkflowNodeExecutionRepository, ) +from graphon.entities import WorkflowNodeExecution from libs.helper import extract_tenant_id from models import Account, CreatorUserRole, EndUser from models.workflow import WorkflowNodeExecutionTriggeredFrom diff --git a/api/core/repositories/factory.py b/api/core/repositories/factory.py index ce3ad15759..4e83e70799 100644 --- 
a/api/core/repositories/factory.py +++ b/api/core/repositories/factory.py @@ -9,11 +9,11 @@ from collections.abc import Sequence from dataclasses import dataclass from typing import Literal, Protocol -from graphon.entities import WorkflowExecution, WorkflowNodeExecution from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker from configs import dify_config +from graphon.entities import WorkflowExecution, WorkflowNodeExecution from libs.module_loading import import_string from models import Account, EndUser from models.enums import WorkflowRunTriggeredFrom diff --git a/api/core/repositories/human_input_repository.py b/api/core/repositories/human_input_repository.py index 72d9394149..02625e242f 100644 --- a/api/core/repositories/human_input_repository.py +++ b/api/core/repositories/human_input_repository.py @@ -4,8 +4,6 @@ from collections.abc import Mapping, Sequence from datetime import datetime from typing import Any, Protocol -from graphon.nodes.human_input.entities import FormDefinition, HumanInputNodeData -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from sqlalchemy import select from sqlalchemy.orm import Session, selectinload @@ -19,6 +17,8 @@ from core.workflow.human_input_compat import ( InteractiveSurfaceDeliveryMethod, is_human_input_webapp_enabled, ) +from graphon.nodes.human_input.entities import FormDefinition, HumanInputNodeData +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from libs.datetime_utils import naive_utc_now from libs.uuid_utils import uuidv7 from models.account import Account, TenantAccountJoin diff --git a/api/core/repositories/sqlalchemy_workflow_execution_repository.py b/api/core/repositories/sqlalchemy_workflow_execution_repository.py index d74cc8f231..6be3902317 100644 --- a/api/core/repositories/sqlalchemy_workflow_execution_repository.py +++ b/api/core/repositories/sqlalchemy_workflow_execution_repository.py @@ -5,13 +5,13 @@ SQLAlchemy implementation of the WorkflowExecutionRepository. 
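The firecrawl_app.py hunk above initializes `response` to `None` before the retry loop and asserts it is non-`None` on the fall-through path. A minimal, self-contained sketch of that pattern follows; the function and parameter names are illustrative, not the exact Dify implementation.

```python
# Sketch of the 502-retry pattern: exponential backoff plus an assert that
# narrows `httpx.Response | None` back to `httpx.Response` for type checkers.
import time

import httpx


def post_with_retry(
    url: str,
    data: dict[str, object],
    headers: dict[str, str],
    retries: int = 3,
    backoff_factor: float = 0.5,
) -> httpx.Response:
    response: httpx.Response | None = None
    for attempt in range(retries):
        response = httpx.post(url, headers=headers, json=data)
        if response.status_code == 502:
            # transient gateway error: sleep 0.5s, 1s, 2s, ... then retry
            time.sleep(backoff_factor * (2**attempt))
        else:
            return response
    # only reachable when every attempt returned 502; retries < 1 would leave
    # response as None, which the assert makes explicit
    assert response is not None, "retries must be at least 1"
    return response
```

The assert both documents the `retries >= 1` precondition and lets static analysis accept the final `return response` without a cast.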
import json import logging -from graphon.entities import WorkflowExecution -from graphon.enums import WorkflowExecutionStatus, WorkflowType -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker from core.repositories.factory import WorkflowExecutionRepository +from graphon.entities import WorkflowExecution +from graphon.enums import WorkflowExecutionStatus, WorkflowType +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from libs.helper import extract_tenant_id from models import ( Account, diff --git a/api/core/repositories/sqlalchemy_workflow_node_execution_repository.py b/api/core/repositories/sqlalchemy_workflow_node_execution_repository.py index 13e885672a..b036687bc9 100644 --- a/api/core/repositories/sqlalchemy_workflow_node_execution_repository.py +++ b/api/core/repositories/sqlalchemy_workflow_node_execution_repository.py @@ -10,10 +10,6 @@ from concurrent.futures import ThreadPoolExecutor from typing import Any import psycopg2.errors -from graphon.entities import WorkflowNodeExecution -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus -from graphon.model_runtime.utils.encoders import jsonable_encoder -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy import UnaryExpression, asc, desc, select from sqlalchemy.engine import Engine from sqlalchemy.exc import IntegrityError @@ -23,6 +19,10 @@ from tenacity import before_sleep_log, retry, retry_if_exception, stop_after_att from configs import dify_config from core.repositories.factory import OrderConfig, WorkflowNodeExecutionRepository from extensions.ext_storage import storage +from graphon.entities import WorkflowNodeExecution +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus +from graphon.model_runtime.utils.encoders import jsonable_encoder +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from libs.helper import extract_tenant_id from libs.uuid_utils import uuidv7 from models import ( diff --git a/api/core/schemas/resolver.py b/api/core/schemas/resolver.py index 6e26664ac2..e267c1abd9 100644 --- a/api/core/schemas/resolver.py +++ b/api/core/schemas/resolver.py @@ -254,7 +254,7 @@ def resolve_dify_schema_refs( return resolver.resolve(schema) -def _remove_metadata_fields(schema: dict) -> dict: +def _remove_metadata_fields(schema: dict[str, Any]) -> dict[str, Any]: """ Remove metadata fields from schema that shouldn't be included in resolved output diff --git a/api/core/telemetry/gateway.py b/api/core/telemetry/gateway.py index 7b013d0563..812edeeb14 100644 --- a/api/core/telemetry/gateway.py +++ b/api/core/telemetry/gateway.py @@ -89,7 +89,7 @@ def _get_case_routing() -> dict[TelemetryCase, CaseRoute]: return _case_routing -def __getattr__(name: str) -> dict: +def __getattr__(name: str) -> Any: """Lazy module-level access to routing tables.""" if name == "CASE_ROUTING": return _get_case_routing() diff --git a/api/core/tools/__base/tool.py b/api/core/tools/__base/tool.py index 7bb2cdb876..ab0f73a9a2 100644 --- a/api/core/tools/__base/tool.py +++ b/api/core/tools/__base/tool.py @@ -198,7 +198,7 @@ class Tool(ABC): message=ToolInvokeMessage.TextMessage(text=text), ) - def create_blob_message(self, blob: bytes, meta: dict | None = None) -> ToolInvokeMessage: + def create_blob_message(self, blob: bytes, meta: dict[str, Any] | None = None) -> ToolInvokeMessage: """ create a blob message @@ 
-212,7 +212,7 @@ class Tool(ABC): meta=meta, ) - def create_json_message(self, object: dict, suppress_output: bool = False) -> ToolInvokeMessage: + def create_json_message(self, object: dict[str, Any], suppress_output: bool = False) -> ToolInvokeMessage: """ create a json message """ diff --git a/api/core/tools/builtin_tool/providers/audio/tools/asr.py b/api/core/tools/builtin_tool/providers/audio/tools/asr.py index e539074303..95660ab93b 100644 --- a/api/core/tools/builtin_tool/providers/audio/tools/asr.py +++ b/api/core/tools/builtin_tool/providers/audio/tools/asr.py @@ -2,15 +2,14 @@ import io from collections.abc import Generator from typing import Any -from graphon.file import FileType -from graphon.file.file_manager import download -from graphon.model_runtime.entities.model_entities import ModelType - from core.model_manager import ModelManager from core.plugin.entities.parameters import PluginParameterOption from core.tools.builtin_tool.tool import BuiltinTool from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolInvokeMessage, ToolParameter +from graphon.file import FileType +from graphon.file.file_manager import download +from graphon.model_runtime.entities.model_entities import ModelType from services.model_provider_service import ModelProviderService diff --git a/api/core/tools/builtin_tool/providers/audio/tools/tts.py b/api/core/tools/builtin_tool/providers/audio/tools/tts.py index f49c669fe0..ac3820f1ab 100644 --- a/api/core/tools/builtin_tool/providers/audio/tools/tts.py +++ b/api/core/tools/builtin_tool/providers/audio/tools/tts.py @@ -2,13 +2,12 @@ import io from collections.abc import Generator from typing import Any -from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType - from core.model_manager import ModelManager from core.plugin.entities.parameters import PluginParameterOption from core.tools.builtin_tool.tool import BuiltinTool from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolInvokeMessage, ToolParameter +from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType from services.model_provider_service import ModelProviderService diff --git a/api/core/tools/builtin_tool/tool.py b/api/core/tools/builtin_tool/tool.py index 14af63a962..d41503e1e6 100644 --- a/api/core/tools/builtin_tool/tool.py +++ b/api/core/tools/builtin_tool/tool.py @@ -1,12 +1,11 @@ from __future__ import annotations -from graphon.model_runtime.entities.llm_entities import LLMResult -from graphon.model_runtime.entities.message_entities import PromptMessage, SystemPromptMessage, UserPromptMessage - from core.tools.__base.tool import Tool from core.tools.__base.tool_runtime import ToolRuntime from core.tools.entities.tool_entities import ToolProviderType from core.tools.utils.model_invocation_utils import ModelInvocationUtils +from graphon.model_runtime.entities.llm_entities import LLMResult +from graphon.model_runtime.entities.message_entities import PromptMessage, SystemPromptMessage, UserPromptMessage _SUMMARY_PROMPT = """You are a professional language researcher, you are interested in the language and you can quickly aimed at the main point of an webpage and reproduce it in your own words but diff --git a/api/core/tools/custom_tool/tool.py b/api/core/tools/custom_tool/tool.py index 0a2c37c563..168e5f4493 100644 --- a/api/core/tools/custom_tool/tool.py +++ b/api/core/tools/custom_tool/tool.py @@ -6,7 +6,6 @@ from typing import 
Any, Union from urllib.parse import urlencode import httpx -from graphon.file.file_manager import download from core.helper import ssrf_proxy from core.tools.__base.tool import Tool @@ -14,6 +13,7 @@ from core.tools.__base.tool_runtime import ToolRuntime from core.tools.entities.tool_bundle import ApiToolBundle from core.tools.entities.tool_entities import ToolEntity, ToolInvokeMessage, ToolProviderType from core.tools.errors import ToolInvokeError, ToolParameterValidationError, ToolProviderCredentialValidationError +from graphon.file.file_manager import download API_TOOL_DEFAULT_TIMEOUT = ( int(getenv("API_TOOL_DEFAULT_CONNECT_TIMEOUT", "10")), diff --git a/api/core/tools/entities/api_entities.py b/api/core/tools/entities/api_entities.py index 410ec72baf..42a88c0003 100644 --- a/api/core/tools/entities/api_entities.py +++ b/api/core/tools/entities/api_entities.py @@ -2,7 +2,6 @@ from collections.abc import Mapping from datetime import datetime from typing import Any, Literal -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, Field, field_validator from core.entities.mcp_provider import MCPAuthentication, MCPConfiguration @@ -10,6 +9,7 @@ from core.plugin.entities.plugin_daemon import CredentialType from core.tools.__base.tool import ToolParameter from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolProviderType +from graphon.model_runtime.utils.encoders import jsonable_encoder class ToolApiEntity(BaseModel): diff --git a/api/core/tools/entities/tool_bundle.py b/api/core/tools/entities/tool_bundle.py index 10710c4376..4e07b7157a 100644 --- a/api/core/tools/entities/tool_bundle.py +++ b/api/core/tools/entities/tool_bundle.py @@ -1,4 +1,5 @@ from collections.abc import Mapping +from typing import Any from pydantic import BaseModel, Field @@ -26,6 +27,6 @@ class ApiToolBundle(BaseModel): # icon icon: str | None = None # openapi operation - openapi: dict + openapi: dict[str, Any] # output schema output_schema: Mapping[str, object] = Field(default_factory=dict) diff --git a/api/core/tools/entities/tool_entities.py b/api/core/tools/entities/tool_entities.py index b4253652f9..0c77693dde 100644 --- a/api/core/tools/entities/tool_entities.py +++ b/api/core/tools/entities/tool_entities.py @@ -149,7 +149,7 @@ class ToolInvokeMessage(BaseModel): text: str class JsonMessage(BaseModel): - json_object: dict | list + json_object: dict[str, Any] | list[Any] suppress_output: bool = Field(default=False, description="Whether to suppress JSON output in result string") class BlobMessage(BaseModel): @@ -337,7 +337,7 @@ class ToolParameter(PluginParameter): form: ToolParameterForm = Field(..., description="The form of the parameter, schema/form/llm") llm_description: str | None = None # MCP object and array type parameters use this field to store the schema - input_schema: dict | None = None + input_schema: dict[str, Any] | None = None @classmethod def get_simple_instance( @@ -463,7 +463,7 @@ class ToolInvokeMeta(BaseModel): time_cost: float = Field(..., description="The time cost of the tool invoke") error: str | None = None - tool_config: dict | None = None + tool_config: dict[str, Any] | None = None @classmethod def empty(cls) -> ToolInvokeMeta: diff --git a/api/core/tools/errors.py b/api/core/tools/errors.py index 4c3efd6ff9..2b26832b44 100644 --- a/api/core/tools/errors.py +++ b/api/core/tools/errors.py @@ -38,6 +38,17 @@ class ToolCredentialPolicyViolationError(ValueError): pass +class 
ApiToolProviderNotFoundError(ValueError): + error_code = "api_tool_provider_not_found" + provider_name: str + tenant_id: str + + def __init__(self, provider_name: str, tenant_id: str): + self.provider_name = provider_name + self.tenant_id = tenant_id + super().__init__(f"api provider {provider_name} does not exist") + + class WorkflowToolHumanInputNotSupportedError(BaseHTTPException): error_code = "workflow_tool_human_input_not_supported" description = "Workflow with Human Input nodes cannot be published as a workflow tool." diff --git a/api/core/tools/mcp_tool/tool.py b/api/core/tools/mcp_tool/tool.py index f6d09472b3..00fc8a8282 100644 --- a/api/core/tools/mcp_tool/tool.py +++ b/api/core/tools/mcp_tool/tool.py @@ -6,8 +6,6 @@ import logging from collections.abc import Generator, Mapping from typing import Any, cast -from graphon.model_runtime.entities.llm_entities import LLMUsage, LLMUsageMetadata - from core.mcp.auth_client import MCPClientWithAuthRetry from core.mcp.error import MCPConnectionError from core.mcp.types import ( @@ -23,6 +21,7 @@ from core.tools.__base.tool import Tool from core.tools.__base.tool_runtime import ToolRuntime from core.tools.entities.tool_entities import ToolEntity, ToolInvokeMessage, ToolProviderType from core.tools.errors import ToolInvokeError +from graphon.model_runtime.entities.llm_entities import LLMUsage, LLMUsageMetadata logger = logging.getLogger(__name__) diff --git a/api/core/tools/tool_engine.py b/api/core/tools/tool_engine.py index d1e333f502..3caacb8706 100644 --- a/api/core/tools/tool_engine.py +++ b/api/core/tools/tool_engine.py @@ -7,7 +7,6 @@ from datetime import UTC, datetime from mimetypes import guess_type from typing import Any, Union, cast -from graphon.file import FileTransferMethod, FileType from yarl import URL from core.app.entities.app_invoke_entities import InvokeFrom @@ -33,6 +32,7 @@ from core.tools.errors import ( from core.tools.utils.message_transformer import ToolFileMessageTransformer, safe_json_value from core.tools.workflow_as_tool.tool import WorkflowTool from extensions.ext_database import db +from graphon.file import FileTransferMethod, FileType from models.enums import CreatorUserRole, MessageFileBelongsTo from models.model import Message, MessageFile @@ -47,7 +47,7 @@ class ToolEngine: @staticmethod def agent_invoke( tool: Tool, - tool_parameters: Union[str, dict], + tool_parameters: Union[str, dict[str, Any]], user_id: str, tenant_id: str, message: Message, @@ -85,7 +85,8 @@ class ToolEngine: invocation_meta_dict: dict[str, ToolInvokeMeta] = {} def message_callback( - invocation_meta_dict: dict, messages: Generator[ToolInvokeMessage | ToolInvokeMeta, None, None] + invocation_meta_dict: dict[str, ToolInvokeMeta], + messages: Generator[ToolInvokeMessage | ToolInvokeMeta, None, None], ): for message in messages: if isinstance(message, ToolInvokeMeta): @@ -200,7 +201,7 @@ class ToolEngine: @staticmethod def _invoke( tool: Tool, - tool_parameters: dict, + tool_parameters: dict[str, Any], user_id: str, conversation_id: str | None = None, app_id: str | None = None, diff --git a/api/core/tools/tool_file_manager.py b/api/core/tools/tool_file_manager.py index d8674b3af9..b3424cd9a5 100644 --- a/api/core/tools/tool_file_manager.py +++ b/api/core/tools/tool_file_manager.py @@ -9,7 +9,6 @@ from mimetypes import guess_extension, guess_type from uuid import uuid4 import httpx -from graphon.file import File, FileTransferMethod, get_file_type_by_mime_type from sqlalchemy import select from configs import dify_config @@ -17,6 +16,7 
@@ from core.db.session_factory import session_factory from core.helper import ssrf_proxy from core.workflow.file_reference import build_file_reference from extensions.ext_storage import storage +from graphon.file import File, FileTransferMethod, get_file_type_by_mime_type from models.model import MessageFile from models.tools import ToolFile diff --git a/api/core/tools/tool_label_manager.py b/api/core/tools/tool_label_manager.py index 58190d1089..d8969a3391 100644 --- a/api/core/tools/tool_label_manager.py +++ b/api/core/tools/tool_label_manager.py @@ -1,4 +1,5 @@ from sqlalchemy import delete, select +from sqlalchemy.orm import Session, sessionmaker from core.tools.__base.tool_provider import ToolProviderController from core.tools.builtin_tool.provider import BuiltinToolProviderController @@ -19,10 +20,18 @@ class ToolLabelManager: return list(set(tool_labels)) @classmethod - def update_tool_labels(cls, controller: ToolProviderController, labels: list[str]): + def update_tool_labels( + cls, controller: ToolProviderController, labels: list[str], session: Session | None = None + ) -> None: """ Update tool labels + + :param controller: tool provider controller + :param labels: list of tool labels + :param session: database session, if None, a new session will be created + :return: None """ + labels = cls.filter_tool_labels(labels) if isinstance(controller, ApiToolProviderController | WorkflowToolProviderController): @@ -30,26 +39,46 @@ class ToolLabelManager: else: raise ValueError("Unsupported tool type") + if session is not None: + cls._update_tool_labels_logics(session, provider_id, controller, labels) + else: + with sessionmaker(db.engine).begin() as _session: + cls._update_tool_labels_logics(_session, provider_id, controller, labels) + + @classmethod + def _update_tool_labels_logics( + cls, session: Session, provider_id: str, controller: ToolProviderController, labels: list[str] + ) -> None: + """ + Update tool labels logics + + :param session: database session + :param provider_id: tool provider ID + :param controller: tool provider controller + :param labels: list of tool labels + :return: None + """ + # delete old labels - db.session.execute(delete(ToolLabelBinding).where(ToolLabelBinding.tool_id == provider_id)) + _ = session.execute( + delete(ToolLabelBinding).where( + ToolLabelBinding.tool_id == provider_id, ToolLabelBinding.tool_type == controller.provider_type + ) + ) # insert new labels for label in labels: - db.session.add( - ToolLabelBinding( - tool_id=provider_id, - tool_type=controller.provider_type, - label_name=label, - ) - ) - - db.session.commit() + session.add(ToolLabelBinding(tool_id=provider_id, tool_type=controller.provider_type, label_name=label)) @classmethod def get_tool_labels(cls, controller: ToolProviderController) -> list[str]: """ Get tool labels + + :param controller: tool provider controller + :return: list of tool labels (str) """ + if isinstance(controller, ApiToolProviderController | WorkflowToolProviderController): provider_id = controller.provider_id elif isinstance(controller, BuiltinToolProviderController): @@ -60,9 +89,11 @@ class ToolLabelManager: ToolLabelBinding.tool_id == provider_id, ToolLabelBinding.tool_type == controller.provider_type, ) - labels = db.session.scalars(stmt).all() - return list(labels) + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + labels: list[str] = list(_session.scalars(stmt).all()) + + return labels @classmethod def get_tools_labels(cls, tool_providers: list[ToolProviderController]) -> 
dict[str, list[str]]: @@ -78,16 +109,22 @@ class ToolLabelManager: if not tool_providers: return {} + provider_ids: list[str] = [] + provider_types: set[str] = set() + for controller in tool_providers: if not isinstance(controller, ApiToolProviderController | WorkflowToolProviderController): raise ValueError("Unsupported tool type") - - provider_ids = [] - for controller in tool_providers: - assert isinstance(controller, ApiToolProviderController | WorkflowToolProviderController) provider_ids.append(controller.provider_id) + provider_types.add(controller.provider_type) - labels = db.session.scalars(select(ToolLabelBinding).where(ToolLabelBinding.tool_id.in_(provider_ids))).all() + labels: list[ToolLabelBinding] = [] + + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + stmt = select(ToolLabelBinding).where( + ToolLabelBinding.tool_id.in_(provider_ids), ToolLabelBinding.tool_type.in_(list(provider_types)) + ) + labels = list(_session.scalars(stmt).all()) tool_labels: dict[str, list[str]] = {label.tool_id: [] for label in labels} diff --git a/api/core/tools/tool_manager.py b/api/core/tools/tool_manager.py index be13d40f3e..f4588904d3 100644 --- a/api/core/tools/tool_manager.py +++ b/api/core/tools/tool_manager.py @@ -8,7 +8,6 @@ from threading import Lock from typing import TYPE_CHECKING, Any, Literal, Protocol, cast import sqlalchemy as sa -from graphon.runtime import VariablePool from pydantic import TypeAdapter from sqlalchemy import select from sqlalchemy.orm import Session @@ -29,14 +28,13 @@ from core.tools.plugin_tool.tool import PluginTool from core.tools.utils.uuid_utils import is_valid_uuid from core.tools.workflow_as_tool.provider import WorkflowToolProviderController from extensions.ext_database import db +from graphon.runtime import VariablePool from models.provider_ids import ToolProviderID from services.tools.mcp_tools_manage_service import MCPToolManageService if TYPE_CHECKING: pass -from graphon.model_runtime.utils.encoders import jsonable_encoder - from core.agent.entities import AgentToolEntity from core.app.entities.app_invoke_entities import InvokeFrom from core.helper.module_import_helper import load_single_subclass_from_source @@ -62,6 +60,7 @@ from core.tools.tool_label_manager import ToolLabelManager from core.tools.utils.configuration import ToolParameterConfigurationManager from core.tools.utils.encryption import create_provider_encrypter, create_tool_provider_encrypter from core.tools.workflow_as_tool.tool import WorkflowTool +from graphon.model_runtime.utils.encoders import jsonable_encoder from models.tools import ApiToolProvider, BuiltinToolProvider, WorkflowToolProvider from services.tools.tools_transform_service import ToolTransformService diff --git a/api/core/tools/utils/dataset_retriever/dataset_multi_retriever_tool.py b/api/core/tools/utils/dataset_retriever/dataset_multi_retriever_tool.py index 03e3c5918d..b6890b2611 100644 --- a/api/core/tools/utils/dataset_retriever/dataset_multi_retriever_tool.py +++ b/api/core/tools/utils/dataset_retriever/dataset_multi_retriever_tool.py @@ -1,7 +1,6 @@ import threading from flask import Flask, current_app -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import BaseModel, Field from sqlalchemy import select @@ -15,6 +14,7 @@ from core.rag.rerank.rerank_model import RerankModelRunner from core.rag.retrieval.retrieval_methods import RetrievalMethod from core.tools.utils.dataset_retriever.dataset_retriever_base_tool import DatasetRetrieverBaseTool from 
extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType from models.dataset import Dataset, Document, DocumentSegment default_retrieval_model: DefaultRetrievalModelDict = { diff --git a/api/core/tools/utils/dataset_retriever/dataset_retriever_tool.py b/api/core/tools/utils/dataset_retriever/dataset_retriever_tool.py index 6a189fa6aa..0d1dc7273b 100644 --- a/api/core/tools/utils/dataset_retriever/dataset_retriever_tool.py +++ b/api/core/tools/utils/dataset_retriever/dataset_retriever_tool.py @@ -1,4 +1,4 @@ -from typing import cast +from typing import Any, cast from pydantic import BaseModel, Field from sqlalchemy import select @@ -39,7 +39,7 @@ class DatasetRetrieverTool(DatasetRetrieverBaseTool): dataset_id: str user_id: str | None = None retrieve_config: DatasetRetrieveConfigEntity - inputs: dict + inputs: dict[str, Any] @classmethod def from_dataset(cls, dataset: Dataset, **kwargs): diff --git a/api/core/tools/utils/dataset_retriever_tool.py b/api/core/tools/utils/dataset_retriever_tool.py index fca6e6f1c7..0bdc3df869 100644 --- a/api/core/tools/utils/dataset_retriever_tool.py +++ b/api/core/tools/utils/dataset_retriever_tool.py @@ -33,7 +33,7 @@ class DatasetRetrieverTool(Tool): invoke_from: InvokeFrom, hit_callback: DatasetIndexToolCallbackHandler, user_id: str, - inputs: dict, + inputs: dict[str, Any], ) -> list["DatasetRetrieverTool"]: """ get dataset tool diff --git a/api/core/tools/utils/message_transformer.py b/api/core/tools/utils/message_transformer.py index 81c85bc90d..79d0c114d4 100644 --- a/api/core/tools/utils/message_transformer.py +++ b/api/core/tools/utils/message_transformer.py @@ -9,11 +9,11 @@ from uuid import UUID import numpy as np import pytz -from graphon.file import File, FileTransferMethod, FileType from core.tools.entities.tool_entities import ToolInvokeMessage from core.tools.tool_file_manager import ToolFileManager from core.workflow.file_reference import parse_file_reference +from graphon.file import File, FileTransferMethod, FileType from libs.login import current_user from models import Account diff --git a/api/core/tools/utils/model_invocation_utils.py b/api/core/tools/utils/model_invocation_utils.py index 8d6f83dc07..9e1d41cb39 100644 --- a/api/core/tools/utils/model_invocation_utils.py +++ b/api/core/tools/utils/model_invocation_utils.py @@ -8,6 +8,9 @@ import json from decimal import Decimal from typing import cast +from core.model_manager import ModelManager +from core.tools.entities.tool_entities import ToolProviderType +from extensions.ext_database import db from graphon.model_runtime.entities.llm_entities import LLMResult from graphon.model_runtime.entities.message_entities import PromptMessage from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType @@ -20,10 +23,6 @@ from graphon.model_runtime.errors.invoke import ( ) from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from graphon.model_runtime.utils.encoders import jsonable_encoder - -from core.model_manager import ModelManager -from core.tools.entities.tool_entities import ToolProviderType -from extensions.ext_database import db from models.tools import ToolModelInvoke diff --git a/api/core/tools/utils/workflow_configuration_sync.py b/api/core/tools/utils/workflow_configuration_sync.py index 2159eb8638..45718cadb6 100644 --- a/api/core/tools/utils/workflow_configuration_sync.py +++ b/api/core/tools/utils/workflow_configuration_sync.py @@ -1,13 +1,12 @@ from collections.abc 
import Mapping, Sequence from typing import Any +from core.tools.entities.tool_entities import WorkflowToolParameterConfiguration +from core.tools.errors import WorkflowToolHumanInputNotSupportedError from graphon.enums import BuiltinNodeTypes from graphon.nodes.base.entities import OutputVariableEntity from graphon.variables.input_entities import VariableEntity -from core.tools.entities.tool_entities import WorkflowToolParameterConfiguration -from core.tools.errors import WorkflowToolHumanInputNotSupportedError - class WorkflowToolConfigurationUtils: @classmethod diff --git a/api/core/tools/workflow_as_tool/provider.py b/api/core/tools/workflow_as_tool/provider.py index a01004448a..5905fd919e 100644 --- a/api/core/tools/workflow_as_tool/provider.py +++ b/api/core/tools/workflow_as_tool/provider.py @@ -2,7 +2,6 @@ from __future__ import annotations from collections.abc import Mapping -from graphon.variables.input_entities import VariableEntity, VariableEntityType from pydantic import Field from sqlalchemy import select from sqlalchemy.orm import Session @@ -25,6 +24,7 @@ from core.tools.entities.tool_entities import ( from core.tools.utils.workflow_configuration_sync import WorkflowToolConfigurationUtils from core.tools.workflow_as_tool.tool import WorkflowTool from extensions.ext_database import db +from graphon.variables.input_entities import VariableEntity, VariableEntityType from models.account import Account from models.model import App, AppMode from models.tools import WorkflowToolProvider diff --git a/api/core/tools/workflow_as_tool/tool.py b/api/core/tools/workflow_as_tool/tool.py index a17b7f108d..52ab605963 100644 --- a/api/core/tools/workflow_as_tool/tool.py +++ b/api/core/tools/workflow_as_tool/tool.py @@ -5,8 +5,6 @@ import logging from collections.abc import Generator, Mapping, Sequence from typing import Any, cast -from graphon.file import FILE_MODEL_IDENTITY, File, FileTransferMethod -from graphon.model_runtime.entities.llm_entities import LLMUsage, LLMUsageMetadata from sqlalchemy import select from core.app.file_access import DatabaseFileAccessController @@ -22,6 +20,8 @@ from core.tools.entities.tool_entities import ( from core.tools.errors import ToolInvokeError from core.workflow.file_reference import resolve_file_record_id from factories.file_factory import build_from_mapping +from graphon.file import FILE_MODEL_IDENTITY, File, FileTransferMethod +from graphon.model_runtime.entities.llm_entities import LLMUsage, LLMUsageMetadata from models import Account, Tenant from models.model import App, EndUser from models.utils.file_input_compat import build_file_from_stored_mapping @@ -277,7 +277,7 @@ class WorkflowTool(Tool): session.expunge(app) return app - def _transform_args(self, tool_parameters: dict) -> tuple[dict, list[dict]]: + def _transform_args(self, tool_parameters: dict[str, Any]) -> tuple[dict[str, Any], list[dict[str, str | None]]]: """ transform the tool parameters @@ -323,7 +323,7 @@ class WorkflowTool(Tool): return parameters_result, files - def _extract_files(self, outputs: dict) -> tuple[dict, list[File]]: + def _extract_files(self, outputs: dict[str, Any]) -> tuple[dict[str, Any], list[File]]: """ extract files from the result @@ -355,7 +355,7 @@ class WorkflowTool(Tool): return result, files - def _update_file_mapping(self, file_dict: dict): + def _update_file_mapping(self, file_dict: dict[str, Any]) -> dict[str, Any]: file_id = resolve_file_record_id(file_dict.get("reference") or file_dict.get("related_id")) transfer_method = 
FileTransferMethod.value_of(file_dict.get("transfer_method")) match transfer_method: diff --git a/api/core/trigger/debug/event_selectors.py b/api/core/trigger/debug/event_selectors.py index 61d1cd8540..24c1271488 100644 --- a/api/core/trigger/debug/event_selectors.py +++ b/api/core/trigger/debug/event_selectors.py @@ -8,7 +8,6 @@ from collections.abc import Mapping from datetime import datetime from typing import Any -from graphon.entities.graph_config import NodeConfigDict from pydantic import BaseModel from core.plugin.entities.request import TriggerInvokeEventResponse @@ -28,6 +27,7 @@ from core.trigger.debug.events import ( from core.workflow.nodes.trigger_plugin.entities import TriggerEventNodeData from core.workflow.nodes.trigger_schedule.entities import ScheduleConfig from extensions.ext_redis import redis_client +from graphon.entities.graph_config import NodeConfigDict from libs.datetime_utils import ensure_naive_utc, naive_utc_now from libs.schedule_utils import calculate_next_run_at from models.model import App diff --git a/api/core/workflow/human_input_compat.py b/api/core/workflow/human_input_compat.py index c95516a240..75a0a0c202 100644 --- a/api/core/workflow/human_input_compat.py +++ b/api/core/workflow/human_input_compat.py @@ -14,12 +14,13 @@ from typing import Annotated, Any, ClassVar, Literal import bleach import markdown +from markdown.extensions.tables import TableExtension +from pydantic import AliasChoices, BaseModel, ConfigDict, Field, TypeAdapter + from graphon.enums import BuiltinNodeTypes from graphon.nodes.base.variable_template_parser import VariableTemplateParser from graphon.runtime import VariablePool from graphon.variables.consts import SELECTORS_LENGTH -from markdown.extensions.tables import TableExtension -from pydantic import AliasChoices, BaseModel, ConfigDict, Field, TypeAdapter class DeliveryMethodType(enum.StrEnum): diff --git a/api/core/workflow/node_factory.py b/api/core/workflow/node_factory.py index b04ac7da3d..351da3444f 100644 --- a/api/core/workflow/node_factory.py +++ b/api/core/workflow/node_factory.py @@ -5,22 +5,6 @@ from dataclasses import dataclass from functools import lru_cache from typing import TYPE_CHECKING, Any, cast, final, override -from graphon.entities.base_node_data import BaseNodeData -from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter -from graphon.enums import BuiltinNodeTypes, NodeType -from graphon.file.file_manager import file_manager -from graphon.graph.graph import NodeFactory -from graphon.model_runtime.memory import PromptMessageMemory -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel -from graphon.nodes.base.node import Node -from graphon.nodes.code.code_node import WorkflowCodeExecutor -from graphon.nodes.code.entities import CodeLanguage -from graphon.nodes.code.limits import CodeNodeLimits -from graphon.nodes.document_extractor import UnstructuredApiConfig -from graphon.nodes.http_request import build_http_request_config -from graphon.nodes.llm.entities import LLMNodeData -from graphon.nodes.parameter_extractor.entities import ParameterExtractorNodeData -from graphon.nodes.question_classifier.entities import QuestionClassifierNodeData from sqlalchemy import select from sqlalchemy.orm import Session @@ -56,6 +40,22 @@ from core.workflow.nodes.agent.runtime_support import AgentRuntimeSupport from core.workflow.system_variables import SystemVariableKey, get_system_text, system_variable_selector from core.workflow.template_rendering import 
CodeExecutorJinja2TemplateRenderer from extensions.ext_database import db +from graphon.entities.base_node_data import BaseNodeData +from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter +from graphon.enums import BuiltinNodeTypes, NodeType +from graphon.file.file_manager import file_manager +from graphon.graph.graph import NodeFactory +from graphon.model_runtime.memory import PromptMessageMemory +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel +from graphon.nodes.base.node import Node +from graphon.nodes.code.code_node import WorkflowCodeExecutor +from graphon.nodes.code.entities import CodeLanguage +from graphon.nodes.code.limits import CodeNodeLimits +from graphon.nodes.document_extractor import UnstructuredApiConfig +from graphon.nodes.http_request import build_http_request_config +from graphon.nodes.llm.entities import LLMNodeData +from graphon.nodes.parameter_extractor.entities import ParameterExtractorNodeData +from graphon.nodes.question_classifier.entities import QuestionClassifierNodeData from models.model import Conversation if TYPE_CHECKING: diff --git a/api/core/workflow/node_runtime.py b/api/core/workflow/node_runtime.py index 19cb3a7b0a..2e632e56f0 100644 --- a/api/core/workflow/node_runtime.py +++ b/api/core/workflow/node_runtime.py @@ -4,6 +4,32 @@ from collections.abc import Callable, Generator, Mapping, Sequence from dataclasses import dataclass from typing import TYPE_CHECKING, Any, cast +from sqlalchemy import select +from sqlalchemy.orm import Session + +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext +from core.app.file_access import DatabaseFileAccessController +from core.callback_handler.workflow_tool_callback_handler import DifyWorkflowCallbackHandler +from core.llm_generator.output_parser.errors import OutputParserError +from core.llm_generator.output_parser.structured_output import invoke_llm_with_structured_output +from core.model_manager import ModelInstance +from core.plugin.impl.exc import PluginDaemonClientSideError, PluginInvokeError +from core.plugin.impl.plugin import PluginInstaller +from core.prompt.utils.prompt_message_util import PromptMessageUtil +from core.repositories.human_input_repository import ( + FormCreateParams, + HumanInputFormRepository, + HumanInputFormRepositoryImpl, +) +from core.tools.entities.tool_entities import ToolProviderType as CoreToolProviderType +from core.tools.errors import ToolInvokeError +from core.tools.tool_engine import ToolEngine +from core.tools.tool_file_manager import ToolFileManager +from core.tools.tool_manager import ToolManager +from core.tools.utils.message_transformer import ToolFileMessageTransformer +from core.workflow.file_reference import build_file_reference +from extensions.ext_database import db +from factories import file_factory from graphon.file import FileTransferMethod, FileType from graphon.model_runtime.entities import LLMMode from graphon.model_runtime.entities.llm_entities import ( @@ -34,32 +60,6 @@ from graphon.nodes.tool_runtime_entities import ( ToolRuntimeMessage, ToolRuntimeParameter, ) -from sqlalchemy import select -from sqlalchemy.orm import Session - -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext -from core.app.file_access import DatabaseFileAccessController -from core.callback_handler.workflow_tool_callback_handler import DifyWorkflowCallbackHandler -from core.llm_generator.output_parser.errors import OutputParserError -from 
core.llm_generator.output_parser.structured_output import invoke_llm_with_structured_output -from core.model_manager import ModelInstance -from core.plugin.impl.exc import PluginDaemonClientSideError, PluginInvokeError -from core.plugin.impl.plugin import PluginInstaller -from core.prompt.utils.prompt_message_util import PromptMessageUtil -from core.repositories.human_input_repository import ( - FormCreateParams, - HumanInputFormRepository, - HumanInputFormRepositoryImpl, -) -from core.tools.entities.tool_entities import ToolProviderType as CoreToolProviderType -from core.tools.errors import ToolInvokeError -from core.tools.tool_engine import ToolEngine -from core.tools.tool_file_manager import ToolFileManager -from core.tools.tool_manager import ToolManager -from core.tools.utils.message_transformer import ToolFileMessageTransformer -from core.workflow.file_reference import build_file_reference -from extensions.ext_database import db -from factories import file_factory from models.dataset import SegmentAttachmentBinding from models.model import UploadFile from services.tools.builtin_tools_manage_service import BuiltinToolManageService @@ -76,13 +76,12 @@ from .human_input_compat import ( from .system_variables import SystemVariableKey, get_system_text if TYPE_CHECKING: + from core.tools.__base.tool import Tool + from core.tools.entities.tool_entities import ToolInvokeMessage as CoreToolInvokeMessage from graphon.file import File from graphon.nodes.llm.file_saver import LLMFileSaver from graphon.nodes.tool.entities import ToolNodeData - from core.tools.__base.tool import Tool - from core.tools.entities.tool_entities import ToolInvokeMessage as CoreToolInvokeMessage - _file_access_controller = DatabaseFileAccessController() diff --git a/api/core/workflow/nodes/agent/agent_node.py b/api/core/workflow/nodes/agent/agent_node.py index bfd5536e4a..7b000101b0 100644 --- a/api/core/workflow/nodes/agent/agent_node.py +++ b/api/core/workflow/nodes/agent/agent_node.py @@ -3,15 +3,14 @@ from __future__ import annotations from collections.abc import Generator, Mapping, Sequence from typing import TYPE_CHECKING, Any +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext +from core.workflow.system_variables import SystemVariableKey, get_system_text from graphon.entities.graph_config import NodeConfigDict from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus from graphon.node_events import NodeEventBase, NodeRunResult, StreamCompletedEvent from graphon.nodes.base.node import Node from graphon.nodes.base.variable_template_parser import VariableTemplateParser -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext -from core.workflow.system_variables import SystemVariableKey, get_system_text - from .entities import AgentNodeData from .exceptions import ( AgentInvocationError, diff --git a/api/core/workflow/nodes/agent/entities.py b/api/core/workflow/nodes/agent/entities.py index c52aad150b..51452c29a3 100644 --- a/api/core/workflow/nodes/agent/entities.py +++ b/api/core/workflow/nodes/agent/entities.py @@ -1,12 +1,12 @@ from enum import IntEnum, StrEnum, auto from typing import Any, Literal, Union -from graphon.entities.base_node_data import BaseNodeData -from graphon.enums import BuiltinNodeTypes, NodeType from pydantic import BaseModel from core.prompt.entities.advanced_prompt_entities import MemoryConfig from core.tools.entities.tool_entities import ToolSelector +from graphon.entities.base_node_data import BaseNodeData 
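The tool_label_manager.py hunks earlier in this patch change `update_tool_labels` to accept an optional SQLAlchemy `Session` and fall back to a short-lived `sessionmaker(...).begin()` block when none is supplied. A hedged sketch of that shape is below; the table and helper names are stand-ins, not the actual Dify models.

```python
# Optional-session pattern: reuse the caller's session when provided, otherwise
# open a transactional block that commits on success and rolls back on error.
from sqlalchemy import Engine, String, delete
from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column, sessionmaker


class Base(DeclarativeBase):
    pass


class LabelBinding(Base):
    # stand-in for a label binding row such as ToolLabelBinding; columns illustrative
    __tablename__ = "label_bindings"
    id: Mapped[int] = mapped_column(primary_key=True)
    tool_id: Mapped[str] = mapped_column(String(64))
    label_name: Mapped[str] = mapped_column(String(64))


def update_labels(
    engine: Engine, tool_id: str, labels: list[str], session: Session | None = None
) -> None:
    if session is not None:
        _update_labels(session, tool_id, labels)
    else:
        with sessionmaker(engine).begin() as _session:
            _update_labels(_session, tool_id, labels)


def _update_labels(session: Session, tool_id: str, labels: list[str]) -> None:
    # delete-then-insert keeps binding rows in sync with the new label list
    session.execute(delete(LabelBinding).where(LabelBinding.tool_id == tool_id))
    for label in labels:
        session.add(LabelBinding(tool_id=tool_id, label_name=label))
```

Threading the session through lets a caller compose the label update with other writes in one transaction, while standalone callers keep the previous one-shot behavior.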
+from graphon.enums import BuiltinNodeTypes, NodeType class AgentNodeData(BaseNodeData): diff --git a/api/core/workflow/nodes/agent/message_transformer.py b/api/core/workflow/nodes/agent/message_transformer.py index db74590ed7..f44681377d 100644 --- a/api/core/workflow/nodes/agent/message_transformer.py +++ b/api/core/workflow/nodes/agent/message_transformer.py @@ -3,6 +3,14 @@ from __future__ import annotations from collections.abc import Generator, Mapping from typing import Any, cast +from sqlalchemy import select +from sqlalchemy.orm import Session + +from core.app.file_access import DatabaseFileAccessController +from core.tools.entities.tool_entities import ToolInvokeMessage +from core.tools.utils.message_transformer import ToolFileMessageTransformer +from extensions.ext_database import db +from factories import file_factory from graphon.enums import BuiltinNodeTypes, NodeType, WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from graphon.file import File, FileTransferMethod, get_file_type_by_mime_type from graphon.model_runtime.entities.llm_entities import LLMUsage, LLMUsageMetadata @@ -15,14 +23,6 @@ from graphon.node_events import ( StreamCompletedEvent, ) from graphon.variables.segments import ArrayFileSegment -from sqlalchemy import select -from sqlalchemy.orm import Session - -from core.app.file_access import DatabaseFileAccessController -from core.tools.entities.tool_entities import ToolInvokeMessage -from core.tools.utils.message_transformer import ToolFileMessageTransformer -from extensions.ext_database import db -from factories import file_factory from models import ToolFile from services.tools.builtin_tools_manage_service import BuiltinToolManageService diff --git a/api/core/workflow/nodes/agent/runtime_support.py b/api/core/workflow/nodes/agent/runtime_support.py index be50edbc4d..a872774c98 100644 --- a/api/core/workflow/nodes/agent/runtime_support.py +++ b/api/core/workflow/nodes/agent/runtime_support.py @@ -4,8 +4,6 @@ import json from collections.abc import Sequence from typing import Any, cast -from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelType -from graphon.runtime import VariablePool from packaging.version import Version from pydantic import ValidationError from sqlalchemy import select @@ -21,6 +19,8 @@ from core.tools.entities.tool_entities import ToolIdentity, ToolParameter, ToolP from core.tools.tool_manager import ToolManager from core.workflow.system_variables import SystemVariableKey, get_system_text from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelType +from graphon.runtime import VariablePool from models.model import Conversation from .entities import AgentNodeData, AgentOldVersionModelFeatures, ParamsAutoGenerated diff --git a/api/core/workflow/nodes/datasource/datasource_node.py b/api/core/workflow/nodes/datasource/datasource_node.py index d9247b2593..e4f6b3b470 100644 --- a/api/core/workflow/nodes/datasource/datasource_node.py +++ b/api/core/workflow/nodes/datasource/datasource_node.py @@ -1,6 +1,12 @@ from collections.abc import Generator, Mapping, Sequence from typing import TYPE_CHECKING, Any +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext +from core.datasource.datasource_manager import DatasourceManager +from core.datasource.entities.datasource_entities import DatasourceProviderType +from core.plugin.impl.exc import PluginDaemonClientSideError +from core.workflow.file_reference import 
resolve_file_record_id +from core.workflow.system_variables import SystemVariableKey, get_system_segment from graphon.entities.graph_config import NodeConfigDict from graphon.enums import ( BuiltinNodeTypes, @@ -12,13 +18,6 @@ from graphon.node_events import NodeRunResult, StreamCompletedEvent from graphon.nodes.base.node import Node from graphon.nodes.base.variable_template_parser import VariableTemplateParser -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext -from core.datasource.datasource_manager import DatasourceManager -from core.datasource.entities.datasource_entities import DatasourceProviderType -from core.plugin.impl.exc import PluginDaemonClientSideError -from core.workflow.file_reference import resolve_file_record_id -from core.workflow.system_variables import SystemVariableKey, get_system_segment - from .entities import DatasourceNodeData, DatasourceParameter, OnlineDriveDownloadFileParam from .exc import DatasourceNodeError diff --git a/api/core/workflow/nodes/datasource/entities.py b/api/core/workflow/nodes/datasource/entities.py index cad32f8d5b..28966f2392 100644 --- a/api/core/workflow/nodes/datasource/entities.py +++ b/api/core/workflow/nodes/datasource/entities.py @@ -1,9 +1,10 @@ from typing import Any, Literal, Union +from pydantic import BaseModel, field_validator +from pydantic_core.core_schema import ValidationInfo + from graphon.entities.base_node_data import BaseNodeData from graphon.enums import BuiltinNodeTypes, NodeType -from pydantic import BaseModel, field_validator -from pydantic_core.core_schema import ValidationInfo class DatasourceEntity(BaseModel): diff --git a/api/core/workflow/nodes/knowledge_index/entities.py b/api/core/workflow/nodes/knowledge_index/entities.py index 04a10f9257..260881e49c 100644 --- a/api/core/workflow/nodes/knowledge_index/entities.py +++ b/api/core/workflow/nodes/knowledge_index/entities.py @@ -1,13 +1,13 @@ from typing import Union -from graphon.entities.base_node_data import BaseNodeData -from graphon.enums import NodeType from pydantic import BaseModel from core.rag.entities import RerankingModelConfig, WeightedScoreConfig from core.rag.index_processor.index_processor_base import SummaryIndexSettingDict from core.rag.retrieval.retrieval_methods import RetrievalMethod from core.workflow.nodes.knowledge_index import KNOWLEDGE_INDEX_NODE_TYPE +from graphon.entities.base_node_data import BaseNodeData +from graphon.enums import NodeType class RetrievalSetting(BaseModel): diff --git a/api/core/workflow/nodes/knowledge_index/knowledge_index_node.py b/api/core/workflow/nodes/knowledge_index/knowledge_index_node.py index bb72fe3881..d5cab05dbe 100644 --- a/api/core/workflow/nodes/knowledge_index/knowledge_index_node.py +++ b/api/core/workflow/nodes/knowledge_index/knowledge_index_node.py @@ -2,17 +2,16 @@ import logging from collections.abc import Mapping from typing import TYPE_CHECKING, Any -from graphon.entities.graph_config import NodeConfigDict -from graphon.enums import NodeExecutionType, WorkflowNodeExecutionStatus -from graphon.node_events import NodeRunResult -from graphon.nodes.base.node import Node -from graphon.nodes.base.template import Template - from core.rag.index_processor.index_processor import IndexProcessor from core.rag.index_processor.index_processor_base import SummaryIndexSettingDict from core.rag.summary_index.summary_index import SummaryIndex from core.workflow.nodes.knowledge_index import KNOWLEDGE_INDEX_NODE_TYPE from core.workflow.system_variables import 
SystemVariableKey, get_system_segment, get_system_text +from graphon.entities.graph_config import NodeConfigDict +from graphon.enums import NodeExecutionType, WorkflowNodeExecutionStatus +from graphon.node_events import NodeRunResult +from graphon.nodes.base.node import Node +from graphon.nodes.base.template import Template from .entities import KnowledgeIndexNodeData from .exc import ( diff --git a/api/core/workflow/nodes/knowledge_index/protocols.py b/api/core/workflow/nodes/knowledge_index/protocols.py index 6668f0c98e..d04e79c2a8 100644 --- a/api/core/workflow/nodes/knowledge_index/protocols.py +++ b/api/core/workflow/nodes/knowledge_index/protocols.py @@ -43,15 +43,20 @@ class IndexProcessorProtocol(Protocol): original_document_id: str, chunks: Mapping[str, Any], batch: Any, - summary_index_setting: dict | None = None, + summary_index_setting: dict[str, Any] | None = None, ) -> IndexingResultDict: ... def get_preview_output( - self, chunks: Any, dataset_id: str, document_id: str, chunk_structure: str, summary_index_setting: dict | None + self, + chunks: Any, + dataset_id: str, + document_id: str, + chunk_structure: str, + summary_index_setting: dict[str, Any] | None, ) -> Preview: ... class SummaryIndexServiceProtocol(Protocol): def generate_and_vectorize_summary( - self, dataset_id: str, document_id: str, is_preview: bool, summary_index_setting: dict | None = None + self, dataset_id: str, document_id: str, is_preview: bool, summary_index_setting: dict[str, Any] | None = None ) -> None: ... diff --git a/api/core/workflow/nodes/knowledge_retrieval/entities.py b/api/core/workflow/nodes/knowledge_retrieval/entities.py index 460ec693ce..3825f526a2 100644 --- a/api/core/workflow/nodes/knowledge_retrieval/entities.py +++ b/api/core/workflow/nodes/knowledge_retrieval/entities.py @@ -1,11 +1,11 @@ from typing import Literal -from graphon.entities.base_node_data import BaseNodeData -from graphon.enums import BuiltinNodeTypes, NodeType -from graphon.nodes.llm.entities import ModelConfig, VisionConfig from pydantic import BaseModel, Field from core.rag.entities import Condition, MetadataFilteringCondition, RerankingModelConfig, WeightedScoreConfig +from graphon.entities.base_node_data import BaseNodeData +from graphon.enums import BuiltinNodeTypes, NodeType +from graphon.nodes.llm.entities import ModelConfig, VisionConfig __all__ = ["Condition"] diff --git a/api/core/workflow/nodes/knowledge_retrieval/knowledge_retrieval_node.py b/api/core/workflow/nodes/knowledge_retrieval/knowledge_retrieval_node.py index 13624b27b3..47ad14b499 100644 --- a/api/core/workflow/nodes/knowledge_retrieval/knowledge_retrieval_node.py +++ b/api/core/workflow/nodes/knowledge_retrieval/knowledge_retrieval_node.py @@ -8,6 +8,11 @@ import logging from collections.abc import Mapping, Sequence from typing import TYPE_CHECKING, Any, Literal +from core.app.app_config.entities import DatasetRetrieveConfigEntity +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext +from core.rag.data_post_processor.data_post_processor import RerankingModelDict, WeightsDict +from core.rag.retrieval.dataset_retrieval import DatasetRetrieval +from core.workflow.file_reference import parse_file_reference from graphon.entities import GraphInitParams from graphon.entities.graph_config import NodeConfigDict from graphon.enums import ( @@ -27,12 +32,6 @@ from graphon.variables import ( ) from graphon.variables.segments import ArrayObjectSegment -from core.app.app_config.entities import DatasetRetrieveConfigEntity 
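The `knowledge_index/protocols.py` hunk above also tightens bare `dict` annotations to `dict[str, Any]`, a change repeated throughout this diff. A small self-contained sketch of why the parameterized form is preferred under a strict type checker (the `apply_summary_setting` function is hypothetical):

```python
from typing import Any


def apply_summary_setting(summary_index_setting: dict[str, Any] | None = None) -> int:
    # A bare `dict` annotation says nothing about keys or values; the
    # parameterized `dict[str, Any]` documents the expected shape and keeps
    # strict type checkers from flagging the signature.
    if not summary_index_setting:
        return 0
    return len(summary_index_setting)


if __name__ == "__main__":
    print(apply_summary_setting({"enabled": True, "top_k": 3}))  # prints 2
```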
-from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext -from core.rag.data_post_processor.data_post_processor import RerankingModelDict, WeightsDict -from core.rag.retrieval.dataset_retrieval import DatasetRetrieval -from core.workflow.file_reference import parse_file_reference - from .entities import ( Condition, KnowledgeRetrievalNodeData, diff --git a/api/core/workflow/nodes/knowledge_retrieval/retrieval.py b/api/core/workflow/nodes/knowledge_retrieval/retrieval.py index 39e2008a2c..ea45dcf5c2 100644 --- a/api/core/workflow/nodes/knowledge_retrieval/retrieval.py +++ b/api/core/workflow/nodes/knowledge_retrieval/retrieval.py @@ -1,10 +1,10 @@ from typing import Any, Literal, Protocol -from graphon.model_runtime.entities import LLMUsage -from graphon.nodes.llm.entities import ModelConfig from pydantic import BaseModel, Field from core.rag.data_post_processor.data_post_processor import RerankingModelDict, WeightsDict +from graphon.model_runtime.entities import LLMUsage +from graphon.nodes.llm.entities import ModelConfig from .entities import MetadataFilteringCondition diff --git a/api/core/workflow/nodes/trigger_plugin/entities.py b/api/core/workflow/nodes/trigger_plugin/entities.py index bf5be2379a..23ed2cd408 100644 --- a/api/core/workflow/nodes/trigger_plugin/entities.py +++ b/api/core/workflow/nodes/trigger_plugin/entities.py @@ -1,12 +1,12 @@ from collections.abc import Mapping from typing import Any, Literal, Union -from graphon.entities.base_node_data import BaseNodeData -from graphon.enums import NodeType from pydantic import BaseModel, Field, ValidationInfo, field_validator from core.trigger.constants import TRIGGER_PLUGIN_NODE_TYPE from core.trigger.entities.entities import EventParameter +from graphon.entities.base_node_data import BaseNodeData +from graphon.enums import NodeType from .exc import TriggerEventParameterError diff --git a/api/core/workflow/nodes/trigger_plugin/trigger_event_node.py b/api/core/workflow/nodes/trigger_plugin/trigger_event_node.py index e50de11bb9..c848a86255 100644 --- a/api/core/workflow/nodes/trigger_plugin/trigger_event_node.py +++ b/api/core/workflow/nodes/trigger_plugin/trigger_event_node.py @@ -1,13 +1,12 @@ from collections.abc import Mapping from typing import Any +from core.trigger.constants import TRIGGER_PLUGIN_NODE_TYPE +from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID from graphon.enums import NodeExecutionType, WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from graphon.node_events import NodeRunResult from graphon.nodes.base.node import Node -from core.trigger.constants import TRIGGER_PLUGIN_NODE_TYPE -from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID - from .entities import TriggerEventNodeData diff --git a/api/core/workflow/nodes/trigger_schedule/entities.py b/api/core/workflow/nodes/trigger_schedule/entities.py index f14ca893c9..683c8d420f 100644 --- a/api/core/workflow/nodes/trigger_schedule/entities.py +++ b/api/core/workflow/nodes/trigger_schedule/entities.py @@ -1,10 +1,10 @@ -from typing import Literal, Union +from typing import Any, Literal, Union -from graphon.entities.base_node_data import BaseNodeData -from graphon.enums import NodeType from pydantic import BaseModel, Field from core.trigger.constants import TRIGGER_SCHEDULE_NODE_TYPE +from graphon.entities.base_node_data import BaseNodeData +from graphon.enums import NodeType class TriggerScheduleNodeData(BaseNodeData): @@ -16,7 +16,7 @@ class TriggerScheduleNodeData(BaseNodeData): mode: 
str = Field(default="visual", description="Schedule mode: visual or cron") frequency: str | None = Field(default=None, description="Frequency for visual mode: hourly, daily, weekly, monthly") cron_expression: str | None = Field(default=None, description="Cron expression for cron mode") - visual_config: dict | None = Field(default=None, description="Visual configuration details") + visual_config: dict[str, Any] | None = Field(default=None, description="Visual configuration details") timezone: str = Field(default="UTC", description="Timezone for schedule execution") diff --git a/api/core/workflow/nodes/trigger_schedule/trigger_schedule_node.py b/api/core/workflow/nodes/trigger_schedule/trigger_schedule_node.py index a9753ab387..b46cc76a6e 100644 --- a/api/core/workflow/nodes/trigger_schedule/trigger_schedule_node.py +++ b/api/core/workflow/nodes/trigger_schedule/trigger_schedule_node.py @@ -1,11 +1,10 @@ from collections.abc import Mapping -from graphon.enums import NodeExecutionType, WorkflowNodeExecutionStatus -from graphon.node_events import NodeRunResult -from graphon.nodes.base.node import Node - from core.trigger.constants import TRIGGER_SCHEDULE_NODE_TYPE from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID +from graphon.enums import NodeExecutionType, WorkflowNodeExecutionStatus +from graphon.node_events import NodeRunResult +from graphon.nodes.base.node import Node from .entities import TriggerScheduleNodeData diff --git a/api/core/workflow/nodes/trigger_webhook/entities.py b/api/core/workflow/nodes/trigger_webhook/entities.py index a30f877e4b..b261039448 100644 --- a/api/core/workflow/nodes/trigger_webhook/entities.py +++ b/api/core/workflow/nodes/trigger_webhook/entities.py @@ -1,12 +1,12 @@ from collections.abc import Sequence from enum import StrEnum -from graphon.entities.base_node_data import BaseNodeData -from graphon.enums import NodeType -from graphon.variables.types import SegmentType from pydantic import BaseModel, Field, field_validator from core.trigger.constants import TRIGGER_WEBHOOK_NODE_TYPE +from graphon.entities.base_node_data import BaseNodeData +from graphon.enums import NodeType +from graphon.variables.types import SegmentType _WEBHOOK_HEADER_ALLOWED_TYPES: frozenset[SegmentType] = frozenset((SegmentType.STRING,)) diff --git a/api/core/workflow/nodes/trigger_webhook/node.py b/api/core/workflow/nodes/trigger_webhook/node.py index 8c866aea81..13c4f05bfd 100644 --- a/api/core/workflow/nodes/trigger_webhook/node.py +++ b/api/core/workflow/nodes/trigger_webhook/node.py @@ -2,6 +2,10 @@ import logging from collections.abc import Mapping from typing import Any +from core.trigger.constants import TRIGGER_WEBHOOK_NODE_TYPE +from core.workflow.file_reference import resolve_file_record_id +from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID +from factories.variable_factory import build_segment_with_type from graphon.enums import NodeExecutionType, WorkflowNodeExecutionStatus from graphon.file import FileTransferMethod from graphon.node_events import NodeRunResult @@ -10,11 +14,6 @@ from graphon.nodes.protocols import FileReferenceFactoryProtocol from graphon.variables.types import SegmentType from graphon.variables.variables import FileVariable -from core.trigger.constants import TRIGGER_WEBHOOK_NODE_TYPE -from core.workflow.file_reference import resolve_file_record_id -from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID -from factories.variable_factory import build_segment_with_type - from .entities import ContentType, 
WebhookData logger = logging.getLogger(__name__) @@ -75,7 +74,7 @@ class TriggerWebhookNode(Node[WebhookData]): outputs=outputs, ) - def generate_file_var(self, param_name: str, file: dict): + def generate_file_var(self, param_name: str, file: dict[str, Any]): file_id = resolve_file_record_id(file.get("reference") or file.get("related_id")) transfer_method_value = file.get("transfer_method") if transfer_method_value: @@ -147,7 +146,7 @@ class TriggerWebhookNode(Node[WebhookData]): outputs[param_name] = str(webhook_data.get("body", {}).get("raw", "")) continue elif self.node_data.content_type == ContentType.BINARY: - raw_data: dict = webhook_data.get("body", {}).get("raw", {}) + raw_data: dict[str, Any] = webhook_data.get("body", {}).get("raw", {}) file_var = self.generate_file_var(param_name, raw_data) if file_var: outputs[param_name] = file_var diff --git a/api/core/workflow/template_rendering.py b/api/core/workflow/template_rendering.py index d51cfadd09..b4ffb37549 100644 --- a/api/core/workflow/template_rendering.py +++ b/api/core/workflow/template_rendering.py @@ -3,11 +3,10 @@ from __future__ import annotations from collections.abc import Mapping from typing import Any +from core.helper.code_executor.code_executor import CodeExecutionError, CodeExecutor from graphon.nodes.code.entities import CodeLanguage from graphon.template_rendering import Jinja2TemplateRenderer, TemplateRenderError -from core.helper.code_executor.code_executor import CodeExecutionError, CodeExecutor - class CodeExecutorJinja2TemplateRenderer(Jinja2TemplateRenderer): """Sandbox-backed Jinja2 renderer for workflow-owned node composition.""" diff --git a/api/core/workflow/workflow_entry.py b/api/core/workflow/workflow_entry.py index f0a5fbb400..4e2f603e5b 100644 --- a/api/core/workflow/workflow_entry.py +++ b/api/core/workflow/workflow_entry.py @@ -3,20 +3,6 @@ import time from collections.abc import Generator, Mapping, Sequence from typing import Any, TypedDict -from graphon.entities import GraphInitParams -from graphon.entities.graph_config import NodeConfigDictAdapter -from graphon.errors import WorkflowNodeRunFailedError -from graphon.file import File -from graphon.graph import Graph -from graphon.graph_engine import GraphEngine, GraphEngineConfig -from graphon.graph_engine.command_channels import CommandChannel, InMemoryChannel -from graphon.graph_engine.layers import DebugLoggingLayer, ExecutionLimitsLayer -from graphon.graph_events import GraphEngineEvent, GraphNodeEventBase, GraphRunFailedEvent -from graphon.nodes import BuiltinNodeTypes -from graphon.nodes.base.node import Node -from graphon.runtime import ChildGraphNotFoundError, GraphRuntimeState, VariablePool -from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader, load_into_variable_pool - from configs import dify_config from context import capture_current_context from core.app.apps.exc import GenerateTaskStoppedError @@ -40,6 +26,19 @@ from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add from core.workflow.variable_prefixes import ENVIRONMENT_VARIABLE_NODE_ID from extensions.otel.runtime import is_instrument_flag_enabled from factories import file_factory +from graphon.entities import GraphInitParams +from graphon.entities.graph_config import NodeConfigDictAdapter +from graphon.errors import WorkflowNodeRunFailedError +from graphon.file import File +from graphon.graph import Graph +from graphon.graph_engine import GraphEngine, GraphEngineConfig +from graphon.graph_engine.command_channels import 
CommandChannel, InMemoryChannel +from graphon.graph_engine.layers import DebugLoggingLayer, ExecutionLimitsLayer +from graphon.graph_events import GraphEngineEvent, GraphNodeEventBase, GraphRunFailedEvent +from graphon.nodes import BuiltinNodeTypes +from graphon.nodes.base.node import Node +from graphon.runtime import ChildGraphNotFoundError, GraphRuntimeState, VariablePool +from graphon.variable_loader import DUMMY_VARIABLE_LOADER, VariableLoader, load_into_variable_pool from models.workflow import Workflow logger = logging.getLogger(__name__) diff --git a/api/docker/entrypoint.sh b/api/docker/entrypoint.sh index 6b904b7d0d..fc118df5bc 100755 --- a/api/docker/entrypoint.sh +++ b/api/docker/entrypoint.sh @@ -35,10 +35,10 @@ if [[ "${MODE}" == "worker" ]]; then if [[ -z "${CELERY_QUEUES}" ]]; then if [[ "${EDITION}" == "CLOUD" ]]; then # Cloud edition: separate queues for dataset and trigger tasks - DEFAULT_QUEUES="api_token,dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow_professional,workflow_team,workflow_sandbox,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention,workflow_based_app_execution" + DEFAULT_QUEUES="api_token,dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow_professional,workflow_team,workflow_sandbox,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_publisher,trigger_refresh_executor,retention,workflow_based_app_execution" else # Community edition (SELF_HOSTED): dataset, pipeline and workflow have separate queues - DEFAULT_QUEUES="api_token,dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_executor,retention,workflow_based_app_execution" + DEFAULT_QUEUES="api_token,dataset,dataset_summary,priority_dataset,priority_pipeline,pipeline,mail,ops_trace,app_deletion,plugin,workflow_storage,conversation,workflow,schedule_poller,schedule_executor,triggered_workflow_dispatcher,trigger_refresh_publisher,trigger_refresh_executor,retention,workflow_based_app_execution" fi else DEFAULT_QUEUES="${CELERY_QUEUES}" @@ -119,14 +119,16 @@ elif [[ "${MODE}" == "job" ]]; then else if [[ "${DEBUG}" == "true" ]]; then - exec flask run --host=${DIFY_BIND_ADDRESS:-0.0.0.0} --port=${DIFY_PORT:-5001} --debug + export HOST=${DIFY_BIND_ADDRESS:-0.0.0.0} + export PORT=${DIFY_PORT:-5001} + exec python -m app else exec gunicorn \ --bind "${DIFY_BIND_ADDRESS:-0.0.0.0}:${DIFY_PORT:-5001}" \ --workers ${SERVER_WORKER_AMOUNT:-1} \ - --worker-class ${SERVER_WORKER_CLASS:-gevent} \ + --worker-class ${SERVER_WORKER_CLASS:-geventwebsocket.gunicorn.workers.GeventWebSocketWorker} \ --worker-connections ${SERVER_WORKER_CONNECTIONS:-10} \ --timeout ${GUNICORN_TIMEOUT:-200} \ - app:app + app:socketio_app fi fi diff --git a/api/enterprise/telemetry/draft_trace.py b/api/enterprise/telemetry/draft_trace.py index 5a8d0ee6f4..dff558988c 100644 --- a/api/enterprise/telemetry/draft_trace.py +++ b/api/enterprise/telemetry/draft_trace.py @@ -3,10 +3,9 @@ from __future__ import annotations from collections.abc import Mapping from typing import Any -from graphon.enums import WorkflowNodeExecutionMetadataKey - from core.telemetry import TelemetryContext, TelemetryEvent, TraceTaskName from core.telemetry 
import emit as telemetry_emit +from graphon.enums import WorkflowNodeExecutionMetadataKey from models.workflow import WorkflowNodeExecutionModel diff --git a/api/enterprise/telemetry/metric_handler.py b/api/enterprise/telemetry/metric_handler.py index 9cda0bf90a..c564ace584 100644 --- a/api/enterprise/telemetry/metric_handler.py +++ b/api/enterprise/telemetry/metric_handler.py @@ -329,7 +329,7 @@ class EnterpriseMetricHandler: return include_content = exporter.include_content - attrs: dict = { + attrs: dict[str, Any] = { "dify.message.id": payload.get("message_id"), "dify.tenant_id": envelope.tenant_id, "dify.event.id": envelope.event_id, diff --git a/api/events/event_handlers/create_document_index.py b/api/events/event_handlers/create_document_index.py index b7e7a6e60f..0c535a1c5b 100644 --- a/api/events/event_handlers/create_document_index.py +++ b/api/events/event_handlers/create_document_index.py @@ -6,9 +6,9 @@ import click from sqlalchemy import select from werkzeug.exceptions import NotFound +from core.db.session_factory import session_factory from core.indexing_runner import DocumentIsPausedError, IndexingRunner from events.document_index_event import document_index_created -from extensions.ext_database import db from libs.datetime_utils import naive_utc_now from models.dataset import Document from models.enums import IndexingStatus @@ -22,24 +22,25 @@ def handle(sender, **kwargs): document_ids = kwargs.get("document_ids", []) documents = [] start_at = time.perf_counter() - for document_id in document_ids: - logger.info(click.style(f"Start process document: {document_id}", fg="green")) + with session_factory.create_session() as session: + for document_id in document_ids: + logger.info(click.style(f"Start process document: {document_id}", fg="green")) - document = db.session.scalar( - select(Document).where( - Document.id == document_id, - Document.dataset_id == dataset_id, + document = session.scalar( + select(Document).where( + Document.id == document_id, + Document.dataset_id == dataset_id, + ) ) - ) - if not document: - raise NotFound("Document not found") + if not document: + raise NotFound("Document not found") - document.indexing_status = IndexingStatus.PARSING - document.processing_started_at = naive_utc_now() - documents.append(document) - db.session.add(document) - db.session.commit() + document.indexing_status = IndexingStatus.PARSING + document.processing_started_at = naive_utc_now() + documents.append(document) + session.add(document) + session.commit() with contextlib.suppress(Exception): try: diff --git a/api/events/event_handlers/create_installed_app_when_app_created.py b/api/events/event_handlers/create_installed_app_when_app_created.py index 57412cc4ad..38e102d5fd 100644 --- a/api/events/event_handlers/create_installed_app_when_app_created.py +++ b/api/events/event_handlers/create_installed_app_when_app_created.py @@ -1,5 +1,5 @@ +from core.db.session_factory import session_factory from events.app_event import app_was_created -from extensions.ext_database import db from models.model import InstalledApp @@ -12,5 +12,6 @@ def handle(sender, **kwargs): app_id=app.id, app_owner_tenant_id=app.tenant_id, ) - db.session.add(installed_app) - db.session.commit() + with session_factory.create_session() as session: + session.add(installed_app) + session.commit() diff --git a/api/events/event_handlers/create_site_record_when_app_created.py b/api/events/event_handlers/create_site_record_when_app_created.py index 84be592b1a..5e2a456dce 100644 --- 
a/api/events/event_handlers/create_site_record_when_app_created.py +++ b/api/events/event_handlers/create_site_record_when_app_created.py @@ -1,5 +1,5 @@ +from core.db.session_factory import session_factory from events.app_event import app_was_created -from extensions.ext_database import db from models.enums import CustomizeTokenStrategy from models.model import Site @@ -22,6 +22,6 @@ def handle(sender, **kwargs): created_by=app.created_by, updated_by=app.updated_by, ) - - db.session.add(site) - db.session.commit() + with session_factory.create_session() as session: + session.add(site) + session.commit() diff --git a/api/events/event_handlers/delete_tool_parameters_cache_when_sync_draft_workflow.py b/api/events/event_handlers/delete_tool_parameters_cache_when_sync_draft_workflow.py index 7bd8e88231..ba9758175f 100644 --- a/api/events/event_handlers/delete_tool_parameters_cache_when_sync_draft_workflow.py +++ b/api/events/event_handlers/delete_tool_parameters_cache_when_sync_draft_workflow.py @@ -1,12 +1,11 @@ import logging -from graphon.nodes import BuiltinNodeTypes -from graphon.nodes.tool.entities import ToolEntity - from core.tools.entities.tool_entities import ToolProviderType from core.tools.tool_manager import ToolManager from core.tools.utils.configuration import ToolParameterConfigurationManager from events.app_event import app_draft_workflow_was_synced +from graphon.nodes import BuiltinNodeTypes +from graphon.nodes.tool.entities import ToolEntity logger = logging.getLogger(__name__) diff --git a/api/events/event_handlers/update_app_dataset_join_when_app_published_workflow_updated.py b/api/events/event_handlers/update_app_dataset_join_when_app_published_workflow_updated.py index 86b5b2bbf0..6769b94cde 100644 --- a/api/events/event_handlers/update_app_dataset_join_when_app_published_workflow_updated.py +++ b/api/events/event_handlers/update_app_dataset_join_when_app_published_workflow_updated.py @@ -1,11 +1,11 @@ from typing import cast -from graphon.nodes import BuiltinNodeTypes from sqlalchemy import delete, select from core.workflow.nodes.knowledge_retrieval.entities import KnowledgeRetrievalNodeData from events.app_event import app_published_workflow_was_updated from extensions.ext_database import db +from graphon.nodes import BuiltinNodeTypes from models.dataset import AppDatasetJoin from models.workflow import Workflow diff --git a/api/extensions/ext_celery.py b/api/extensions/ext_celery.py index 86b0550187..340f514fcc 100644 --- a/api/extensions/ext_celery.py +++ b/api/extensions/ext_celery.py @@ -9,6 +9,7 @@ from typing_extensions import TypedDict from configs import dify_config from dify_app import DifyApp +from extensions.redis_names import normalize_redis_key_prefix class _CelerySentinelKwargsDict(TypedDict): @@ -16,9 +17,10 @@ class _CelerySentinelKwargsDict(TypedDict): password: str | None -class CelerySentinelTransportDict(TypedDict): +class CelerySentinelTransportDict(TypedDict, total=False): master_name: str | None sentinel_kwargs: _CelerySentinelKwargsDict + global_keyprefix: str class CelerySSLOptionsDict(TypedDict): @@ -61,15 +63,31 @@ def get_celery_ssl_options() -> CelerySSLOptionsDict | None: def get_celery_broker_transport_options() -> CelerySentinelTransportDict | dict[str, Any]: """Get broker transport options (e.g. 
Redis Sentinel) for Celery connections.""" + transport_options: CelerySentinelTransportDict | dict[str, Any] if dify_config.CELERY_USE_SENTINEL: - return CelerySentinelTransportDict( + transport_options = CelerySentinelTransportDict( master_name=dify_config.CELERY_SENTINEL_MASTER_NAME, sentinel_kwargs=_CelerySentinelKwargsDict( socket_timeout=dify_config.CELERY_SENTINEL_SOCKET_TIMEOUT, password=dify_config.CELERY_SENTINEL_PASSWORD, ), ) - return {} + else: + transport_options = {} + + global_keyprefix = get_celery_redis_global_keyprefix() + if global_keyprefix: + transport_options["global_keyprefix"] = global_keyprefix + + return transport_options + + +def get_celery_redis_global_keyprefix() -> str | None: + """Return the Redis transport prefix for Celery when namespace isolation is enabled.""" + normalized_prefix = normalize_redis_key_prefix(dify_config.REDIS_KEY_PREFIX) + if not normalized_prefix: + return None + return f"{normalized_prefix}:" def init_app(app: DifyApp) -> Celery: diff --git a/api/extensions/ext_redis.py b/api/extensions/ext_redis.py index 20f05b8b9e..9f7f73765e 100644 --- a/api/extensions/ext_redis.py +++ b/api/extensions/ext_redis.py @@ -3,7 +3,7 @@ import logging import ssl from collections.abc import Callable from datetime import timedelta -from typing import TYPE_CHECKING, Any, Union +from typing import Any, Union, cast import redis from redis import RedisError @@ -18,17 +18,26 @@ from typing_extensions import TypedDict from configs import dify_config from dify_app import DifyApp +from extensions.redis_names import ( + normalize_redis_key_prefix, + serialize_redis_name, + serialize_redis_name_arg, + serialize_redis_name_args, +) from libs.broadcast_channel.channel import BroadcastChannel as BroadcastChannelProtocol from libs.broadcast_channel.redis.channel import BroadcastChannel as RedisBroadcastChannel from libs.broadcast_channel.redis.sharded_channel import ShardedRedisBroadcastChannel from libs.broadcast_channel.redis.streams_channel import StreamsBroadcastChannel -if TYPE_CHECKING: - from redis.lock import Lock - logger = logging.getLogger(__name__) +_normalize_redis_key_prefix = normalize_redis_key_prefix +_serialize_redis_name = serialize_redis_name +_serialize_redis_name_arg = serialize_redis_name_arg +_serialize_redis_name_args = serialize_redis_name_args + + class RedisClientWrapper: """ A wrapper class for the Redis client that addresses the issue where the global @@ -59,68 +68,148 @@ class RedisClientWrapper: if self._client is None: self._client = client - if TYPE_CHECKING: - # Type hints for IDE support and static analysis - # These are not executed at runtime but provide type information - def get(self, name: str | bytes) -> Any: ... - - def set( - self, - name: str | bytes, - value: Any, - ex: int | None = None, - px: int | None = None, - nx: bool = False, - xx: bool = False, - keepttl: bool = False, - get: bool = False, - exat: int | None = None, - pxat: int | None = None, - ) -> Any: ... - - def setex(self, name: str | bytes, time: int | timedelta, value: Any) -> Any: ... - def setnx(self, name: str | bytes, value: Any) -> Any: ... - def delete(self, *names: str | bytes) -> Any: ... - def incr(self, name: str | bytes, amount: int = 1) -> Any: ... - def expire( - self, - name: str | bytes, - time: int | timedelta, - nx: bool = False, - xx: bool = False, - gt: bool = False, - lt: bool = False, - ) -> Any: ... 
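The `global_keyprefix` wiring above and the rewritten `RedisClientWrapper` methods below both route key names through the prefix helpers added in `api/extensions/redis_names.py` later in this diff. A standalone sketch of that behaviour, assuming `REDIS_KEY_PREFIX="dify"`; the helpers are re-declared here only so the example runs on its own:

```python
def normalize_redis_key_prefix(prefix: str | None) -> str:
    # Mirrors the patch: None and whitespace-only prefixes disable prefixing.
    return "" if prefix is None else prefix.strip()


def serialize_redis_name(name: str, prefix: str | None = None) -> str:
    # A logical name becomes "<prefix>:<name>" only when a prefix is configured.
    normalized = normalize_redis_key_prefix(prefix)
    return name if not normalized else f"{normalized}:{name}"


def serialize_redis_name_arg(name: str | bytes, prefix: str | None = None) -> str | bytes:
    # Bytes names pass through untouched, exactly as in the patch.
    return name if isinstance(name, bytes) else serialize_redis_name(name, prefix)


# With REDIS_KEY_PREFIX="dify", keys land in a shared namespace, and the Celery
# broker transport gets the matching "dify:" global_keyprefix.
assert serialize_redis_name("workflow:lock:123", "dify") == "dify:workflow:lock:123"
assert serialize_redis_name_arg(b"raw-key", "dify") == b"raw-key"
assert serialize_redis_name("plain", "") == "plain"
```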
- def lock( - self, - name: str, - timeout: float | None = None, - sleep: float = 0.1, - blocking: bool = True, - blocking_timeout: float | None = None, - thread_local: bool = True, - ) -> Lock: ... - def zadd( - self, - name: str | bytes, - mapping: dict[str | bytes | int | float, float | int | str | bytes], - nx: bool = False, - xx: bool = False, - ch: bool = False, - incr: bool = False, - gt: bool = False, - lt: bool = False, - ) -> Any: ... - def zremrangebyscore(self, name: str | bytes, min: float | str, max: float | str) -> Any: ... - def zcard(self, name: str | bytes) -> Any: ... - def getdel(self, name: str | bytes) -> Any: ... - def pubsub(self) -> PubSub: ... - def pipeline(self, transaction: bool = True, shard_hint: str | None = None) -> Any: ... - - def __getattr__(self, item: str) -> Any: + def _require_client(self) -> redis.Redis | RedisCluster: if self._client is None: raise RuntimeError("Redis client is not initialized. Call init_app first.") - return getattr(self._client, item) + return self._client + + def _get_prefix(self) -> str: + return dify_config.REDIS_KEY_PREFIX + + def get(self, name: str | bytes) -> Any: + return self._require_client().get(_serialize_redis_name_arg(name, self._get_prefix())) + + def set( + self, + name: str | bytes, + value: Any, + ex: int | None = None, + px: int | None = None, + nx: bool = False, + xx: bool = False, + keepttl: bool = False, + get: bool = False, + exat: int | None = None, + pxat: int | None = None, + ) -> Any: + return self._require_client().set( + _serialize_redis_name_arg(name, self._get_prefix()), + value, + ex=ex, + px=px, + nx=nx, + xx=xx, + keepttl=keepttl, + get=get, + exat=exat, + pxat=pxat, + ) + + def setex(self, name: str | bytes, time: int | timedelta, value: Any) -> Any: + return self._require_client().setex(_serialize_redis_name_arg(name, self._get_prefix()), time, value) + + def setnx(self, name: str | bytes, value: Any) -> Any: + return self._require_client().setnx(_serialize_redis_name_arg(name, self._get_prefix()), value) + + def delete(self, *names: str | bytes) -> Any: + return self._require_client().delete(*_serialize_redis_name_args(names, self._get_prefix())) + + def incr(self, name: str | bytes, amount: int = 1) -> Any: + return self._require_client().incr(_serialize_redis_name_arg(name, self._get_prefix()), amount) + + def expire( + self, + name: str | bytes, + time: int | timedelta, + nx: bool = False, + xx: bool = False, + gt: bool = False, + lt: bool = False, + ) -> Any: + return self._require_client().expire( + _serialize_redis_name_arg(name, self._get_prefix()), + time, + nx=nx, + xx=xx, + gt=gt, + lt=lt, + ) + + def exists(self, *names: str | bytes) -> Any: + return self._require_client().exists(*_serialize_redis_name_args(names, self._get_prefix())) + + def ttl(self, name: str | bytes) -> Any: + return self._require_client().ttl(_serialize_redis_name_arg(name, self._get_prefix())) + + def getdel(self, name: str | bytes) -> Any: + return self._require_client().getdel(_serialize_redis_name_arg(name, self._get_prefix())) + + def lock( + self, + name: str, + timeout: float | None = None, + sleep: float = 0.1, + blocking: bool = True, + blocking_timeout: float | None = None, + thread_local: bool = True, + ) -> Any: + return self._require_client().lock( + _serialize_redis_name(name, self._get_prefix()), + timeout=timeout, + sleep=sleep, + blocking=blocking, + blocking_timeout=blocking_timeout, + thread_local=thread_local, + ) + + def hset(self, name: str | bytes, *args: Any, **kwargs: Any) -> Any: + 
return self._require_client().hset(_serialize_redis_name_arg(name, self._get_prefix()), *args, **kwargs) + + def hgetall(self, name: str | bytes) -> Any: + return self._require_client().hgetall(_serialize_redis_name_arg(name, self._get_prefix())) + + def hdel(self, name: str | bytes, *keys: str | bytes) -> Any: + return self._require_client().hdel(_serialize_redis_name_arg(name, self._get_prefix()), *keys) + + def hlen(self, name: str | bytes) -> Any: + return self._require_client().hlen(_serialize_redis_name_arg(name, self._get_prefix())) + + def zadd( + self, + name: str | bytes, + mapping: dict[str | bytes | int | float, float | int | str | bytes], + nx: bool = False, + xx: bool = False, + ch: bool = False, + incr: bool = False, + gt: bool = False, + lt: bool = False, + ) -> Any: + return self._require_client().zadd( + _serialize_redis_name_arg(name, self._get_prefix()), + cast(Any, mapping), + nx=nx, + xx=xx, + ch=ch, + incr=incr, + gt=gt, + lt=lt, + ) + + def zremrangebyscore(self, name: str | bytes, min: float | str, max: float | str) -> Any: + return self._require_client().zremrangebyscore(_serialize_redis_name_arg(name, self._get_prefix()), min, max) + + def zcard(self, name: str | bytes) -> Any: + return self._require_client().zcard(_serialize_redis_name_arg(name, self._get_prefix())) + + def pubsub(self) -> PubSub: + return self._require_client().pubsub() + + def pipeline(self, transaction: bool = True, shard_hint: str | None = None) -> Any: + return self._require_client().pipeline(transaction=transaction, shard_hint=shard_hint) + + def __getattr__(self, item: str) -> Any: + return getattr(self._require_client(), item) redis_client: RedisClientWrapper = RedisClientWrapper() diff --git a/api/extensions/ext_sentry.py b/api/extensions/ext_sentry.py index 5cc58f27c4..69d1f1ab07 100644 --- a/api/extensions/ext_sentry.py +++ b/api/extensions/ext_sentry.py @@ -5,11 +5,12 @@ from dify_app import DifyApp def init_app(app: DifyApp): if dify_config.SENTRY_DSN: import sentry_sdk - from graphon.model_runtime.errors.invoke import InvokeRateLimitError from sentry_sdk.integrations.celery import CeleryIntegration from sentry_sdk.integrations.flask import FlaskIntegration from werkzeug.exceptions import HTTPException + from graphon.model_runtime.errors.invoke import InvokeRateLimitError + try: from langfuse._utils import parse_error diff --git a/api/extensions/ext_socketio.py b/api/extensions/ext_socketio.py new file mode 100644 index 0000000000..5ed82bac8d --- /dev/null +++ b/api/extensions/ext_socketio.py @@ -0,0 +1,5 @@ +import socketio # type: ignore[reportMissingTypeStubs] + +from configs import dify_config + +sio = socketio.Server(async_mode="gevent", cors_allowed_origins=dify_config.CONSOLE_CORS_ALLOW_ORIGINS) diff --git a/api/extensions/logstore/repositories/logstore_api_workflow_node_execution_repository.py b/api/extensions/logstore/repositories/logstore_api_workflow_node_execution_repository.py index db599c5d49..64ff0f0674 100644 --- a/api/extensions/logstore/repositories/logstore_api_workflow_node_execution_repository.py +++ b/api/extensions/logstore/repositories/logstore_api_workflow_node_execution_repository.py @@ -11,12 +11,12 @@ from collections.abc import Sequence from datetime import datetime from typing import Any -from graphon.enums import WorkflowNodeExecutionStatus from sqlalchemy.orm import sessionmaker from extensions.logstore.aliyun_logstore import AliyunLogStore from extensions.logstore.repositories import safe_float, safe_int from extensions.logstore.sql_escape import 
escape_identifier, escape_logstore_query_value +from graphon.enums import WorkflowNodeExecutionStatus from models.enums import CreatorUserRole from models.workflow import WorkflowNodeExecutionModel, WorkflowNodeExecutionTriggeredFrom from repositories.api_workflow_node_execution_repository import DifyAPIWorkflowNodeExecutionRepository diff --git a/api/extensions/logstore/repositories/logstore_api_workflow_run_repository.py b/api/extensions/logstore/repositories/logstore_api_workflow_run_repository.py index 2745141431..7f77a0437a 100644 --- a/api/extensions/logstore/repositories/logstore_api_workflow_run_repository.py +++ b/api/extensions/logstore/repositories/logstore_api_workflow_run_repository.py @@ -20,12 +20,12 @@ from collections.abc import Sequence from datetime import datetime from typing import Any, cast -from graphon.enums import WorkflowExecutionStatus from sqlalchemy.orm import sessionmaker from extensions.logstore.aliyun_logstore import AliyunLogStore from extensions.logstore.repositories import safe_float, safe_int from extensions.logstore.sql_escape import escape_identifier, escape_logstore_query_value, escape_sql_string +from graphon.enums import WorkflowExecutionStatus from libs.infinite_scroll_pagination import InfiniteScrollPagination from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom from models.workflow import WorkflowRun, WorkflowType diff --git a/api/extensions/logstore/repositories/logstore_workflow_execution_repository.py b/api/extensions/logstore/repositories/logstore_workflow_execution_repository.py index d0f3e2e244..544109276d 100644 --- a/api/extensions/logstore/repositories/logstore_workflow_execution_repository.py +++ b/api/extensions/logstore/repositories/logstore_workflow_execution_repository.py @@ -3,14 +3,14 @@ import logging import os import time -from graphon.entities import WorkflowExecution -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker from core.repositories.factory import WorkflowExecutionRepository from core.repositories.sqlalchemy_workflow_execution_repository import SQLAlchemyWorkflowExecutionRepository from extensions.logstore.aliyun_logstore import AliyunLogStore +from graphon.entities import WorkflowExecution +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from libs.helper import extract_tenant_id from models import ( Account, diff --git a/api/extensions/logstore/repositories/logstore_workflow_node_execution_repository.py b/api/extensions/logstore/repositories/logstore_workflow_node_execution_repository.py index 37952d6464..dc7654a25c 100644 --- a/api/extensions/logstore/repositories/logstore_workflow_node_execution_repository.py +++ b/api/extensions/logstore/repositories/logstore_workflow_node_execution_repository.py @@ -13,10 +13,6 @@ from collections.abc import Sequence from datetime import datetime from typing import Any -from graphon.entities import WorkflowNodeExecution -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus -from graphon.model_runtime.utils.encoders import jsonable_encoder -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker @@ -26,6 +22,10 @@ from core.repositories.factory import OrderConfig, WorkflowNodeExecutionReposito from extensions.logstore.aliyun_logstore import AliyunLogStore from extensions.logstore.repositories import safe_float, safe_int from 
extensions.logstore.sql_escape import escape_identifier +from graphon.entities import WorkflowNodeExecution +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus +from graphon.model_runtime.utils.encoders import jsonable_encoder +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from libs.helper import extract_tenant_id from models import ( Account, diff --git a/api/extensions/otel/parser/base.py b/api/extensions/otel/parser/base.py index 23d324f9ea..fbf379b3e5 100644 --- a/api/extensions/otel/parser/base.py +++ b/api/extensions/otel/parser/base.py @@ -10,17 +10,17 @@ Gate is only active in EE (``ENTERPRISE_ENABLED=True``) when import json from typing import Any, Protocol -from graphon.enums import BuiltinNodeTypes -from graphon.file import File -from graphon.graph_events import GraphNodeEventBase -from graphon.nodes.base.node import Node -from graphon.variables import Segment from opentelemetry.trace import Span from opentelemetry.trace.status import Status, StatusCode from pydantic import BaseModel from configs import dify_config from extensions.otel.semconv.gen_ai import ChainAttributes, GenAIAttributes +from graphon.enums import BuiltinNodeTypes +from graphon.file import File +from graphon.graph_events import GraphNodeEventBase +from graphon.nodes.base.node import Node +from graphon.variables import Segment def should_include_content() -> bool: diff --git a/api/extensions/otel/parser/llm.py b/api/extensions/otel/parser/llm.py index 335c5cc29e..ec3c78a12d 100644 --- a/api/extensions/otel/parser/llm.py +++ b/api/extensions/otel/parser/llm.py @@ -6,12 +6,12 @@ import logging from collections.abc import Mapping from typing import Any -from graphon.graph_events import GraphNodeEventBase -from graphon.nodes.base.node import Node from opentelemetry.trace import Span from extensions.otel.parser.base import DefaultNodeOTelParser, safe_json_dumps from extensions.otel.semconv.gen_ai import LLMAttributes +from graphon.graph_events import GraphNodeEventBase +from graphon.nodes.base.node import Node logger = logging.getLogger(__name__) diff --git a/api/extensions/otel/parser/retrieval.py b/api/extensions/otel/parser/retrieval.py index 6df5f62c15..56672d1fd4 100644 --- a/api/extensions/otel/parser/retrieval.py +++ b/api/extensions/otel/parser/retrieval.py @@ -6,13 +6,13 @@ import logging from collections.abc import Sequence from typing import Any -from graphon.graph_events import GraphNodeEventBase -from graphon.nodes.base.node import Node -from graphon.variables import Segment from opentelemetry.trace import Span from extensions.otel.parser.base import DefaultNodeOTelParser, safe_json_dumps from extensions.otel.semconv.gen_ai import RetrieverAttributes +from graphon.graph_events import GraphNodeEventBase +from graphon.nodes.base.node import Node +from graphon.variables import Segment logger = logging.getLogger(__name__) diff --git a/api/extensions/otel/parser/tool.py b/api/extensions/otel/parser/tool.py index b9fdd9e1ca..75ddbba448 100644 --- a/api/extensions/otel/parser/tool.py +++ b/api/extensions/otel/parser/tool.py @@ -2,14 +2,14 @@ Parser for tool nodes that captures tool-specific metadata. 
""" -from graphon.enums import WorkflowNodeExecutionMetadataKey -from graphon.graph_events import GraphNodeEventBase -from graphon.nodes.base.node import Node -from graphon.nodes.tool.entities import ToolNodeData from opentelemetry.trace import Span from extensions.otel.parser.base import DefaultNodeOTelParser, safe_json_dumps from extensions.otel.semconv.gen_ai import ToolAttributes +from graphon.enums import WorkflowNodeExecutionMetadataKey +from graphon.graph_events import GraphNodeEventBase +from graphon.nodes.base.node import Node +from graphon.nodes.tool.entities import ToolNodeData class ToolNodeOTelParser: diff --git a/api/extensions/redis_names.py b/api/extensions/redis_names.py new file mode 100644 index 0000000000..9e63416daf --- /dev/null +++ b/api/extensions/redis_names.py @@ -0,0 +1,32 @@ +from configs import dify_config + + +def normalize_redis_key_prefix(prefix: str | None) -> str: + """Normalize the configured Redis key prefix for consistent runtime use.""" + if prefix is None: + return "" + return prefix.strip() + + +def get_redis_key_prefix() -> str: + """Read and normalize the current Redis key prefix from config.""" + return normalize_redis_key_prefix(dify_config.REDIS_KEY_PREFIX) + + +def serialize_redis_name(name: str, prefix: str | None = None) -> str: + """Convert a logical Redis name into the physical name used in Redis.""" + normalized_prefix = get_redis_key_prefix() if prefix is None else normalize_redis_key_prefix(prefix) + if not normalized_prefix: + return name + return f"{normalized_prefix}:{name}" + + +def serialize_redis_name_arg(name: str | bytes, prefix: str | None = None) -> str | bytes: + """Prefix string Redis names while preserving bytes inputs unchanged.""" + if isinstance(name, bytes): + return name + return serialize_redis_name(name, prefix) + + +def serialize_redis_name_args(names: tuple[str | bytes, ...], prefix: str | None = None) -> tuple[str | bytes, ...]: + return tuple(serialize_redis_name_arg(name, prefix) for name in names) diff --git a/api/extensions/storage/clickzetta_volume/clickzetta_volume_storage.py b/api/extensions/storage/clickzetta_volume/clickzetta_volume_storage.py index 18eed4e481..05492327c8 100644 --- a/api/extensions/storage/clickzetta_volume/clickzetta_volume_storage.py +++ b/api/extensions/storage/clickzetta_volume/clickzetta_volume_storage.py @@ -10,6 +10,7 @@ import tempfile from collections.abc import Generator from io import BytesIO from pathlib import Path +from typing import Any import clickzetta from pydantic import BaseModel, model_validator @@ -39,7 +40,7 @@ class ClickZettaVolumeConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): """Validate the configuration values. 
This method will first try to use CLICKZETTA_VOLUME_* environment variables, diff --git a/api/extensions/storage/clickzetta_volume/file_lifecycle.py b/api/extensions/storage/clickzetta_volume/file_lifecycle.py index 86b1bba544..1cb940b797 100644 --- a/api/extensions/storage/clickzetta_volume/file_lifecycle.py +++ b/api/extensions/storage/clickzetta_volume/file_lifecycle.py @@ -65,7 +65,7 @@ class FileMetadata: return data @classmethod - def from_dict(cls, data: dict) -> FileMetadata: + def from_dict(cls, data: dict[str, Any]) -> FileMetadata: """Create instance from dictionary""" data = data.copy() data["created_at"] = datetime.fromisoformat(data["created_at"]) @@ -459,7 +459,7 @@ class FileLifecycleManager: newest_file=None, ) - def _create_version_backup(self, filename: str, metadata: dict): + def _create_version_backup(self, filename: str, metadata: dict[str, Any]): """Create version backup""" try: # Read current file content @@ -487,7 +487,7 @@ class FileLifecycleManager: logger.warning("Failed to load metadata: %s", e) return {} - def _save_metadata(self, metadata_dict: dict): + def _save_metadata(self, metadata_dict: dict[str, Any]): """Save metadata file""" try: metadata_content = json.dumps(metadata_dict, indent=2, ensure_ascii=False) diff --git a/api/extensions/storage/opendal_storage.py b/api/extensions/storage/opendal_storage.py index 96f5915ff0..cd7f7db295 100644 --- a/api/extensions/storage/opendal_storage.py +++ b/api/extensions/storage/opendal_storage.py @@ -2,6 +2,7 @@ import logging import os from collections.abc import Generator from pathlib import Path +from typing import Any import opendal from dotenv import dotenv_values @@ -19,7 +20,7 @@ def _get_opendal_kwargs(*, scheme: str, env_file_path: str = ".env", prefix: str if key.startswith(config_prefix): kwargs[key[len(config_prefix) :].lower()] = value - file_env_vars: dict = dotenv_values(env_file_path) or {} + file_env_vars: dict[str, Any] = dotenv_values(env_file_path) or {} for key, value in file_env_vars.items(): if key.startswith(config_prefix) and key[len(config_prefix) :].lower() not in kwargs and value: kwargs[key[len(config_prefix) :].lower()] = value diff --git a/api/factories/file_factory/builders.py b/api/factories/file_factory/builders.py index 7516d18c8e..288d37d265 100644 --- a/api/factories/file_factory/builders.py +++ b/api/factories/file_factory/builders.py @@ -7,12 +7,12 @@ import uuid from collections.abc import Mapping, Sequence from typing import Any -from graphon.file import File, FileTransferMethod, FileType, FileUploadConfig, helpers, standardize_file_type from sqlalchemy import select from core.app.file_access import FileAccessControllerProtocol from core.workflow.file_reference import build_file_reference from extensions.ext_database import db +from graphon.file import File, FileTransferMethod, FileType, FileUploadConfig, helpers, standardize_file_type from models import ToolFile, UploadFile from .common import resolve_mapping_file_id diff --git a/api/factories/file_factory/message_files.py b/api/factories/file_factory/message_files.py index 5582b85c95..4b3d514238 100644 --- a/api/factories/file_factory/message_files.py +++ b/api/factories/file_factory/message_files.py @@ -4,9 +4,8 @@ from __future__ import annotations from collections.abc import Sequence -from graphon.file import File, FileBelongsTo, FileTransferMethod, FileUploadConfig - from core.app.file_access import FileAccessControllerProtocol +from graphon.file import File, FileBelongsTo, FileTransferMethod, FileUploadConfig from 
models import MessageFile from .builders import build_from_mapping diff --git a/api/factories/file_factory/storage_keys.py b/api/factories/file_factory/storage_keys.py index db3a7f3015..dba4c84407 100644 --- a/api/factories/file_factory/storage_keys.py +++ b/api/factories/file_factory/storage_keys.py @@ -5,12 +5,12 @@ from __future__ import annotations import uuid from collections.abc import Mapping, Sequence -from graphon.file import File, FileTransferMethod from sqlalchemy import select from sqlalchemy.orm import Session from core.app.file_access import FileAccessControllerProtocol from core.workflow.file_reference import build_file_reference, parse_file_reference +from graphon.file import File, FileTransferMethod from models import ToolFile, UploadFile diff --git a/api/factories/variable_factory.py b/api/factories/variable_factory.py index 57205b5739..fd7acb14d3 100644 --- a/api/factories/variable_factory.py +++ b/api/factories/variable_factory.py @@ -8,6 +8,11 @@ shared conversion functions for legacy callers and tests. from collections.abc import Mapping, Sequence from typing import Any, cast +from configs import dify_config +from core.workflow.variable_prefixes import ( + CONVERSATION_VARIABLE_NODE_ID, + ENVIRONMENT_VARIABLE_NODE_ID, +) from graphon.variables.exc import VariableError from graphon.variables.factory import ( TypeMismatchError, @@ -31,12 +36,6 @@ from graphon.variables.variables import ( VariableBase, ) -from configs import dify_config -from core.workflow.variable_prefixes import ( - CONVERSATION_VARIABLE_NODE_ID, - ENVIRONMENT_VARIABLE_NODE_ID, -) - __all__ = [ "TypeMismatchError", "UnsupportedSegmentTypeError", diff --git a/api/fields/conversation_fields.py b/api/fields/conversation_fields.py index 1afcbdb5b9..bf5c9ffcb1 100644 --- a/api/fields/conversation_fields.py +++ b/api/fields/conversation_fields.py @@ -3,10 +3,10 @@ from __future__ import annotations from datetime import datetime from typing import Any -from graphon.file import File from pydantic import Field, field_validator, model_validator from fields.base import ResponseModel +from graphon.file import File type JSONValue = Any @@ -96,7 +96,7 @@ class ConversationAnnotation(ResponseModel): class ConversationAnnotationHitHistory(ResponseModel): - annotation_id: str + annotation_id: str = Field(validation_alias="id") annotation_create_account: SimpleAccount | None = None created_at: int | None = None @@ -143,7 +143,7 @@ class MessageDetail(ResponseModel): query: str message: JSONValue message_tokens: int - answer: str + answer: str = Field(validation_alias="re_sign_file_url_answer") answer_tokens: int provider_response_latency: float from_source: str @@ -156,7 +156,7 @@ class MessageDetail(ResponseModel): created_at: int | None = None agent_thoughts: list[AgentThought] message_files: list[MessageFile] - metadata: JSONValue + metadata: JSONValue = Field(validation_alias="message_metadata_dict") status: str error: str | None = None parent_message_id: str | None = None @@ -196,7 +196,7 @@ class ModelConfig(ResponseModel): class SimpleModelConfig(ResponseModel): - model: JSONValue | None = None + model: JSONValue | None = Field(default=None, validation_alias="model_dict") pre_prompt: str | None = None @@ -211,6 +211,11 @@ class SimpleMessageDetail(ResponseModel): def _normalize_inputs(cls, value: JSONValue) -> JSONValue: return format_files_contained(value) + @field_validator("message", mode="before") + @classmethod + def _normalize_message(cls, value: JSONValue) -> str: + return message_text(value) + class 
Conversation(ResponseModel): id: str @@ -227,15 +232,22 @@ class Conversation(ResponseModel): model_config_: SimpleModelConfig | None = Field(default=None, alias="model_config") user_feedback_stats: FeedbackStat | None = None admin_feedback_stats: FeedbackStat | None = None - message: SimpleMessageDetail | None = None + message: SimpleMessageDetail | None = Field(default=None, validation_alias="first_message") + + @field_validator("read_at", "created_at", "updated_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return to_timestamp(value) + return value class ConversationPagination(ResponseModel): page: int - limit: int + limit: int = Field(validation_alias="per_page") total: int - has_more: bool - data: list[Conversation] + has_more: bool = Field(validation_alias="has_next") + data: list[Conversation] = Field(validation_alias="items") class ConversationMessageDetail(ResponseModel): @@ -246,7 +258,14 @@ class ConversationMessageDetail(ResponseModel): from_account_id: str | None = None created_at: int | None = None model_config_: ModelConfig | None = Field(default=None, alias="model_config") - message: MessageDetail | None = None + message: MessageDetail | None = Field(default=None, validation_alias="first_message") + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_created_at(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return to_timestamp(value) + return value class ConversationWithSummary(ResponseModel): @@ -258,7 +277,7 @@ class ConversationWithSummary(ResponseModel): from_account_id: str | None = None from_account_name: str | None = None name: str - summary: str + summary: str = Field(validation_alias="summary_or_query") read_at: int | None = None created_at: int | None = None updated_at: int | None = None @@ -269,13 +288,20 @@ class ConversationWithSummary(ResponseModel): admin_feedback_stats: FeedbackStat | None = None status_count: StatusCount | None = None + @field_validator("read_at", "created_at", "updated_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return to_timestamp(value) + return value + class ConversationWithSummaryPagination(ResponseModel): page: int - limit: int + limit: int = Field(validation_alias="per_page") total: int - has_more: bool - data: list[ConversationWithSummary] + has_more: bool = Field(validation_alias="has_next") + data: list[ConversationWithSummary] = Field(validation_alias="items") class ConversationDetail(ResponseModel): @@ -293,6 +319,13 @@ class ConversationDetail(ResponseModel): user_feedback_stats: FeedbackStat | None = None admin_feedback_stats: FeedbackStat | None = None + @field_validator("created_at", "updated_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return to_timestamp(value) + return value + def to_timestamp(value: datetime | None) -> int | None: if value is None: diff --git a/api/fields/conversation_variable_fields.py b/api/fields/conversation_variable_fields.py index c55014a368..cf4a71d545 100644 --- a/api/fields/conversation_variable_fields.py +++ b/api/fields/conversation_variable_fields.py @@ -1,5 +1,13 @@ -from flask_restx import Namespace, fields +from __future__ import annotations +from datetime import datetime +from typing import Any + +from flask_restx import 
Namespace, fields +from pydantic import field_validator + +from fields.base import ResponseModel +from graphon.variables.types import SegmentType from libs.helper import TimestampField from ._value_type_serializer import serialize_value_type @@ -29,6 +37,74 @@ conversation_variable_infinite_scroll_pagination_fields = { } +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class ConversationVariableResponse(ResponseModel): + id: str + name: str + value_type: str + value: str | None = None + description: str | None = None + created_at: int | None = None + updated_at: int | None = None + + @field_validator("value_type", mode="before") + @classmethod + def _normalize_value_type(cls, value: Any) -> str: + exposed_type = getattr(value, "exposed_type", None) + if callable(exposed_type): + return str(exposed_type().value) + if isinstance(value, str): + try: + return str(SegmentType(value).exposed_type().value) + except ValueError: + return value + try: + return serialize_value_type(value) + except (AttributeError, TypeError, ValueError): + pass + + try: + return serialize_value_type({"value_type": value}) + except (AttributeError, TypeError, ValueError): + value_attr = getattr(value, "value", None) + if value_attr is not None: + return str(value_attr) + return str(value) + + @field_validator("value", mode="before") + @classmethod + def _normalize_value(cls, value: Any | None) -> str | None: + if value is None: + return None + if isinstance(value, str): + return value + return str(value) + + @field_validator("created_at", "updated_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class PaginatedConversationVariableResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[ConversationVariableResponse] + + +class ConversationVariableInfiniteScrollPaginationResponse(ResponseModel): + limit: int + has_more: bool + data: list[ConversationVariableResponse] + + def build_conversation_variable_model(api_or_ns: Namespace): """Build the conversation variable model for the API or Namespace.""" return api_or_ns.model("ConversationVariable", conversation_variable_fields) diff --git a/api/fields/member_fields.py b/api/fields/member_fields.py index cfe0015918..67b320beaa 100644 --- a/api/fields/member_fields.py +++ b/api/fields/member_fields.py @@ -3,10 +3,10 @@ from __future__ import annotations from datetime import datetime from flask_restx import fields -from graphon.file import helpers as file_helpers from pydantic import computed_field, field_validator from fields.base import ResponseModel +from graphon.file import helpers as file_helpers simple_account_fields = { "id": fields.String, diff --git a/api/fields/message_fields.py b/api/fields/message_fields.py index 1a871204a0..ca18f1c203 100644 --- a/api/fields/message_fields.py +++ b/api/fields/message_fields.py @@ -3,12 +3,12 @@ from __future__ import annotations from datetime import datetime from uuid import uuid4 -from graphon.file import File from pydantic import Field, field_validator from core.entities.execution_extra_content import ExecutionExtraContentDomainModel from fields.base import ResponseModel from fields.conversation_fields import AgentThought, JSONValue, MessageFile +from graphon.file import File type JSONValueType = JSONValue diff --git a/api/fields/online_user_fields.py b/api/fields/online_user_fields.py new file 
mode 100644 index 0000000000..bdbe19679c --- /dev/null +++ b/api/fields/online_user_fields.py @@ -0,0 +1,16 @@ +from flask_restx import fields + +online_user_partial_fields = { + "user_id": fields.String, + "username": fields.String, + "avatar": fields.String, +} + +workflow_online_users_fields = { + "app_id": fields.String, + "users": fields.List(fields.Nested(online_user_partial_fields)), +} + +online_user_list_fields = { + "data": fields.List(fields.Nested(workflow_online_users_fields)), +} diff --git a/api/fields/raws.py b/api/fields/raws.py index 4c65cdab7a..ee6f53b360 100644 --- a/api/fields/raws.py +++ b/api/fields/raws.py @@ -1,4 +1,5 @@ from flask_restx import fields + from graphon.file import File diff --git a/api/fields/workflow_app_log_fields.py b/api/fields/workflow_app_log_fields.py index d0e762f62b..1b2c71255d 100644 --- a/api/fields/workflow_app_log_fields.py +++ b/api/fields/workflow_app_log_fields.py @@ -1,8 +1,17 @@ -from flask_restx import Namespace, fields +from __future__ import annotations -from fields.end_user_fields import simple_end_user_fields -from fields.member_fields import simple_account_fields +from datetime import datetime +from typing import Any + +from flask_restx import Namespace, fields +from pydantic import field_validator + +from fields.base import ResponseModel +from fields.end_user_fields import SimpleEndUser, simple_end_user_fields +from fields.member_fields import SimpleAccount, simple_account_fields from fields.workflow_run_fields import ( + WorkflowRunForArchivedLogResponse, + WorkflowRunForLogResponse, build_workflow_run_for_archived_log_model, build_workflow_run_for_log_model, workflow_run_for_archived_log_fields, @@ -85,3 +94,55 @@ def build_workflow_archived_log_pagination_model(api_or_ns: Namespace): copied_fields = workflow_archived_log_pagination_fields.copy() copied_fields["data"] = fields.List(fields.Nested(workflow_archived_log_partial_model)) return api_or_ns.model("WorkflowArchivedLogPagination", copied_fields) + + +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class WorkflowAppLogPartialResponse(ResponseModel): + id: str + workflow_run: WorkflowRunForLogResponse | None = None + details: Any = None + created_from: str | None = None + created_by_role: str | None = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + created_at: int | None = None + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class WorkflowArchivedLogPartialResponse(ResponseModel): + id: str + workflow_run: WorkflowRunForArchivedLogResponse | None = None + trigger_metadata: Any = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + created_at: int | None = None + + @field_validator("created_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class WorkflowAppLogPaginationResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[WorkflowAppLogPartialResponse] + + +class WorkflowArchivedLogPaginationResponse(ResponseModel): + page: int + limit: int + total: int + has_more: bool + data: list[WorkflowArchivedLogPartialResponse] diff --git a/api/fields/workflow_comment_fields.py 
b/api/fields/workflow_comment_fields.py new file mode 100644 index 0000000000..c708dd3460 --- /dev/null +++ b/api/fields/workflow_comment_fields.py @@ -0,0 +1,96 @@ +from flask_restx import fields + +from libs.helper import AvatarUrlField, TimestampField + +# basic account fields for comments +account_fields = { + "id": fields.String, + "name": fields.String, + "email": fields.String, + "avatar_url": AvatarUrlField, +} + +# Comment mention fields +workflow_comment_mention_fields = { + "mentioned_user_id": fields.String, + "mentioned_user_account": fields.Nested(account_fields, allow_null=True), + "reply_id": fields.String, +} + +# Comment reply fields +workflow_comment_reply_fields = { + "id": fields.String, + "content": fields.String, + "created_by": fields.String, + "created_by_account": fields.Nested(account_fields, allow_null=True), + "created_at": TimestampField, +} + +# Basic comment fields (for list views) +workflow_comment_basic_fields = { + "id": fields.String, + "position_x": fields.Float, + "position_y": fields.Float, + "content": fields.String, + "created_by": fields.String, + "created_by_account": fields.Nested(account_fields, allow_null=True), + "created_at": TimestampField, + "updated_at": TimestampField, + "resolved": fields.Boolean, + "resolved_at": TimestampField, + "resolved_by": fields.String, + "resolved_by_account": fields.Nested(account_fields, allow_null=True), + "reply_count": fields.Integer, + "mention_count": fields.Integer, + "participants": fields.List(fields.Nested(account_fields)), +} + +# Detailed comment fields (for single comment view) +workflow_comment_detail_fields = { + "id": fields.String, + "position_x": fields.Float, + "position_y": fields.Float, + "content": fields.String, + "created_by": fields.String, + "created_by_account": fields.Nested(account_fields, allow_null=True), + "created_at": TimestampField, + "updated_at": TimestampField, + "resolved": fields.Boolean, + "resolved_at": TimestampField, + "resolved_by": fields.String, + "resolved_by_account": fields.Nested(account_fields, allow_null=True), + "replies": fields.List(fields.Nested(workflow_comment_reply_fields)), + "mentions": fields.List(fields.Nested(workflow_comment_mention_fields)), +} + +# Comment creation response fields (simplified) +workflow_comment_create_fields = { + "id": fields.String, + "created_at": TimestampField, +} + +# Comment update response fields (simplified) +workflow_comment_update_fields = { + "id": fields.String, + "updated_at": TimestampField, +} + +# Comment resolve response fields +workflow_comment_resolve_fields = { + "id": fields.String, + "resolved": fields.Boolean, + "resolved_at": TimestampField, + "resolved_by": fields.String, +} + +# Reply creation response fields (simplified) +workflow_comment_reply_create_fields = { + "id": fields.String, + "created_at": TimestampField, +} + +# Reply update response fields +workflow_comment_reply_update_fields = { + "id": fields.String, + "updated_at": TimestampField, +} diff --git a/api/fields/workflow_fields.py b/api/fields/workflow_fields.py index b0b6cc0b48..f9b5e98936 100644 --- a/api/fields/workflow_fields.py +++ b/api/fields/workflow_fields.py @@ -1,8 +1,8 @@ from flask_restx import fields -from graphon.variables import SecretVariable, SegmentType, VariableBase from core.helper import encrypter from fields.member_fields import simple_account_fields +from graphon.variables import SecretVariable, SegmentType, VariableBase from libs.helper import TimestampField from ._value_type_serializer import serialize_value_type 
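The workflow comment field dictionaries above follow the same declarative flask_restx pattern used elsewhere in `api/fields/`. As a rough sketch of how they are typically consumed (the `serialize_comment` helper below is hypothetical and not part of this diff), a controller would hand a `WorkflowComment` ORM object to `marshal()` together with one of these dictionaries, letting `TimestampField`, `AvatarUrlField`, and the nested `account_fields` handle the conversion:

```python
from flask_restx import marshal

from fields.workflow_comment_fields import workflow_comment_basic_fields


def serialize_comment(comment):
    # Hypothetical helper: marshal() walks the field dict, so datetime columns pass
    # through TimestampField, avatars through AvatarUrlField, and the creator /
    # resolver accounts through the nested account_fields, yielding a plain
    # JSON-serializable dict for the list view.
    return marshal(comment, workflow_comment_basic_fields)
```
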
diff --git a/api/fields/workflow_run_fields.py b/api/fields/workflow_run_fields.py index 35bb442c59..8c659086ed 100644 --- a/api/fields/workflow_run_fields.py +++ b/api/fields/workflow_run_fields.py @@ -1,7 +1,14 @@ -from flask_restx import Namespace, fields +from __future__ import annotations -from fields.end_user_fields import simple_end_user_fields -from fields.member_fields import simple_account_fields +from datetime import datetime +from typing import Any + +from flask_restx import Namespace, fields +from pydantic import Field, field_validator + +from fields.base import ResponseModel +from fields.end_user_fields import SimpleEndUser, simple_end_user_fields +from fields.member_fields import SimpleAccount, simple_account_fields from libs.helper import TimestampField workflow_run_for_log_fields = { @@ -147,3 +154,174 @@ workflow_run_node_execution_fields = { workflow_run_node_execution_list_fields = { "data": fields.List(fields.Nested(workflow_run_node_execution_fields)), } + + +def _to_timestamp(value: datetime | int | None) -> int | None: + if isinstance(value, datetime): + return int(value.timestamp()) + return value + + +class WorkflowRunForLogResponse(ResponseModel): + id: str + version: str | None = None + status: str | None = None + triggered_from: str | None = None + error: str | None = None + elapsed_time: float | None = None + total_tokens: int | None = None + total_steps: int | None = None + created_at: int | None = None + finished_at: int | None = None + exceptions_count: int | None = None + + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None or isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class WorkflowRunForArchivedLogResponse(ResponseModel): + id: str + status: str | None = None + triggered_from: str | None = None + elapsed_time: float | None = None + total_tokens: int | None = None + + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None or isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + +class WorkflowRunForListResponse(ResponseModel): + id: str + version: str | None = None + status: str | None = None + elapsed_time: float | None = None + total_tokens: int | None = None + total_steps: int | None = None + created_by_account: SimpleAccount | None = None + created_at: int | None = None + finished_at: int | None = None + exceptions_count: int | None = None + retry_index: int | None = None + + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None or isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class AdvancedChatWorkflowRunForListResponse(WorkflowRunForListResponse): + conversation_id: str | None = None + message_id: str | None = None + + +class AdvancedChatWorkflowRunPaginationResponse(ResponseModel): + limit: int + has_more: bool + data: list[AdvancedChatWorkflowRunForListResponse] + + +class WorkflowRunPaginationResponse(ResponseModel): + limit: 
int + has_more: bool + data: list[WorkflowRunForListResponse] + + +class WorkflowRunCountResponse(ResponseModel): + total: int + running: int + succeeded: int + failed: int + stopped: int + partial_succeeded: int = Field(validation_alias="partial-succeeded") + + +class WorkflowRunDetailResponse(ResponseModel): + id: str + version: str | None = None + graph: Any = Field(validation_alias="graph_dict") + inputs: Any = Field(validation_alias="inputs_dict") + status: str | None = None + outputs: Any = Field(validation_alias="outputs_dict") + error: str | None = None + elapsed_time: float | None = None + total_tokens: int | None = None + total_steps: int | None = None + created_by_role: str | None = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + created_at: int | None = None + finished_at: int | None = None + exceptions_count: int | None = None + + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None or isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class WorkflowRunNodeExecutionResponse(ResponseModel): + id: str + index: int | None = None + predecessor_node_id: str | None = None + node_id: str | None = None + node_type: str | None = None + title: str | None = None + inputs: Any = Field(default=None, validation_alias="inputs_dict") + process_data: Any = Field(default=None, validation_alias="process_data_dict") + outputs: Any = Field(default=None, validation_alias="outputs_dict") + status: str | None = None + error: str | None = None + elapsed_time: float | None = None + execution_metadata: Any = Field(default=None, validation_alias="execution_metadata_dict") + extras: Any = None + created_at: int | None = None + created_by_role: str | None = None + created_by_account: SimpleAccount | None = None + created_by_end_user: SimpleEndUser | None = None + finished_at: int | None = None + inputs_truncated: bool | None = None + outputs_truncated: bool | None = None + process_data_truncated: bool | None = None + + @field_validator("status", mode="before") + @classmethod + def _normalize_status(cls, value: Any) -> str | None: + if value is None or isinstance(value, str): + return value + return str(getattr(value, "value", value)) + + @field_validator("created_at", "finished_at", mode="before") + @classmethod + def _normalize_timestamp(cls, value: datetime | int | None) -> int | None: + return _to_timestamp(value) + + +class WorkflowRunNodeExecutionListResponse(ResponseModel): + data: list[WorkflowRunNodeExecutionResponse] diff --git a/api/libs/broadcast_channel/redis/_subscription.py b/api/libs/broadcast_channel/redis/_subscription.py index 40027bc424..4db79a15a9 100644 --- a/api/libs/broadcast_channel/redis/_subscription.py +++ b/api/libs/broadcast_channel/redis/_subscription.py @@ -3,7 +3,7 @@ import queue import threading import types from collections.abc import Generator, Iterator -from typing import Self +from typing import Any, Self from libs.broadcast_channel.channel import Subscription from libs.broadcast_channel.exc import SubscriptionClosedError @@ -221,7 +221,7 @@ class RedisSubscriptionBase(Subscription): """Unsubscribe from the Redis topic using the appropriate command.""" raise NotImplementedError - def _get_message(self) -> dict | 
None: + def _get_message(self) -> dict[str, Any] | None: """Get a message from Redis using the appropriate method.""" raise NotImplementedError diff --git a/api/libs/broadcast_channel/redis/channel.py b/api/libs/broadcast_channel/redis/channel.py index bd6d58c53f..b76a23eb3c 100644 --- a/api/libs/broadcast_channel/redis/channel.py +++ b/api/libs/broadcast_channel/redis/channel.py @@ -1,5 +1,8 @@ from __future__ import annotations +from typing import Any + +from extensions.redis_names import serialize_redis_name from libs.broadcast_channel.channel import Producer, Subscriber, Subscription from redis import Redis, RedisCluster @@ -30,12 +33,13 @@ class Topic: def __init__(self, redis_client: Redis | RedisCluster, topic: str): self._client = redis_client self._topic = topic + self._redis_topic = serialize_redis_name(topic) def as_producer(self) -> Producer: return self def publish(self, payload: bytes) -> None: - self._client.publish(self._topic, payload) + self._client.publish(self._redis_topic, payload) def as_subscriber(self) -> Subscriber: return self @@ -44,7 +48,7 @@ class Topic: return _RedisSubscription( client=self._client, pubsub=self._client.pubsub(), - topic=self._topic, + topic=self._redis_topic, ) @@ -62,7 +66,7 @@ class _RedisSubscription(RedisSubscriptionBase): assert self._pubsub is not None self._pubsub.unsubscribe(self._topic) - def _get_message(self) -> dict | None: + def _get_message(self) -> dict[str, Any] | None: assert self._pubsub is not None return self._pubsub.get_message(ignore_subscribe_messages=True, timeout=1) diff --git a/api/libs/broadcast_channel/redis/sharded_channel.py b/api/libs/broadcast_channel/redis/sharded_channel.py index 20c43b8bbb..919d8d622e 100644 --- a/api/libs/broadcast_channel/redis/sharded_channel.py +++ b/api/libs/broadcast_channel/redis/sharded_channel.py @@ -1,5 +1,8 @@ from __future__ import annotations +from typing import Any + +from extensions.redis_names import serialize_redis_name from libs.broadcast_channel.channel import Producer, Subscriber, Subscription from redis import Redis, RedisCluster @@ -28,12 +31,13 @@ class ShardedTopic: def __init__(self, redis_client: Redis | RedisCluster, topic: str): self._client = redis_client self._topic = topic + self._redis_topic = serialize_redis_name(topic) def as_producer(self) -> Producer: return self def publish(self, payload: bytes) -> None: - self._client.spublish(self._topic, payload) # type: ignore[attr-defined,union-attr] + self._client.spublish(self._redis_topic, payload) # type: ignore[attr-defined,union-attr] def as_subscriber(self) -> Subscriber: return self @@ -42,7 +46,7 @@ class ShardedTopic: return _RedisShardedSubscription( client=self._client, pubsub=self._client.pubsub(), - topic=self._topic, + topic=self._redis_topic, ) @@ -60,7 +64,7 @@ class _RedisShardedSubscription(RedisSubscriptionBase): assert self._pubsub is not None self._pubsub.sunsubscribe(self._topic) # type: ignore[attr-defined] - def _get_message(self) -> dict | None: + def _get_message(self) -> dict[str, Any] | None: assert self._pubsub is not None # NOTE(QuantumGhost): this is an issue in # upstream code. 
If Sharded PubSub is used with Cluster, the diff --git a/api/libs/broadcast_channel/redis/streams_channel.py b/api/libs/broadcast_channel/redis/streams_channel.py index 983f785027..55ff6cd4f9 100644 --- a/api/libs/broadcast_channel/redis/streams_channel.py +++ b/api/libs/broadcast_channel/redis/streams_channel.py @@ -6,6 +6,7 @@ import threading from collections.abc import Iterator from typing import Self +from extensions.redis_names import serialize_redis_name from libs.broadcast_channel.channel import Producer, Subscriber, Subscription from libs.broadcast_channel.exc import SubscriptionClosedError from redis import Redis, RedisCluster @@ -35,7 +36,7 @@ class StreamsTopic: def __init__(self, redis_client: Redis | RedisCluster, topic: str, *, retention_seconds: int = 600): self._client = redis_client self._topic = topic - self._key = f"stream:{topic}" + self._key = serialize_redis_name(f"stream:{topic}") self._retention_seconds = retention_seconds self.max_length = 5000 diff --git a/api/libs/db_migration_lock.py b/api/libs/db_migration_lock.py index ca8956e397..b5fe38342a 100644 --- a/api/libs/db_migration_lock.py +++ b/api/libs/db_migration_lock.py @@ -103,7 +103,10 @@ class DbMigrationAutoRenewLock: timeout=self._ttl_seconds, thread_local=False, ) - acquired = bool(self._lock.acquire(*args, **kwargs)) + lock = self._lock + if lock is None: + raise RuntimeError("Redis lock initialization failed.") + acquired = bool(lock.acquire(*args, **kwargs)) self._acquired = acquired if acquired: self._start_heartbeat() diff --git a/api/libs/email_i18n.py b/api/libs/email_i18n.py index 0828cf80bf..1519f07bb1 100644 --- a/api/libs/email_i18n.py +++ b/api/libs/email_i18n.py @@ -37,6 +37,7 @@ class EmailType(StrEnum): ENTERPRISE_CUSTOM = auto() QUEUE_MONITOR_ALERT = auto() DOCUMENT_CLEAN_NOTIFY = auto() + WORKFLOW_COMMENT_MENTION = auto() EMAIL_REGISTER = auto() EMAIL_REGISTER_WHEN_ACCOUNT_EXIST = auto() RESET_PASSWORD_WHEN_ACCOUNT_NOT_EXIST_NO_REGISTER = auto() @@ -453,6 +454,18 @@ def create_default_email_config() -> EmailI18nConfig: branded_template_path="clean_document_job_mail_template_zh-CN.html", ), }, + EmailType.WORKFLOW_COMMENT_MENTION: { + EmailLanguage.EN_US: EmailTemplate( + subject="You were mentioned in a workflow comment", + template_path="workflow_comment_mention_template_en-US.html", + branded_template_path="without-brand/workflow_comment_mention_template_en-US.html", + ), + EmailLanguage.ZH_HANS: EmailTemplate( + subject="你在工作流评论中被提及", + template_path="workflow_comment_mention_template_zh-CN.html", + branded_template_path="without-brand/workflow_comment_mention_template_zh-CN.html", + ), + }, EmailType.TRIGGER_EVENTS_LIMIT_SANDBOX: { EmailLanguage.EN_US: EmailTemplate( subject="You’ve reached your Sandbox Trigger Events limit", diff --git a/api/libs/exception.py b/api/libs/exception.py index 73379dfded..1e4bbb44f6 100644 --- a/api/libs/exception.py +++ b/api/libs/exception.py @@ -1,9 +1,11 @@ +from typing import Any + from werkzeug.exceptions import HTTPException class BaseHTTPException(HTTPException): error_code: str = "unknown" - data: dict | None = None + data: dict[str, Any] | None = None def __init__(self, description=None, response=None): super().__init__(description, response) diff --git a/api/libs/helper.py b/api/libs/helper.py index e7decd43b3..ac69a11084 100644 --- a/api/libs/helper.py +++ b/api/libs/helper.py @@ -16,8 +16,6 @@ from zoneinfo import available_timezones from flask import Response, stream_with_context from flask_restx import fields -from graphon.file import 
helpers as file_helpers -from graphon.model_runtime.utils.encoders import jsonable_encoder from pydantic import BaseModel, TypeAdapter from pydantic.functional_validators import AfterValidator from typing_extensions import TypedDict @@ -25,6 +23,8 @@ from typing_extensions import TypedDict from configs import dify_config from core.app.features.rate_limiting.rate_limit import RateLimitGenerator from extensions.ext_redis import redis_client +from graphon.file import helpers as file_helpers +from graphon.model_runtime.utils.encoders import jsonable_encoder if TYPE_CHECKING: from models import Account @@ -120,10 +120,22 @@ class AppIconUrlField(fields.Raw): obj = obj["app"] if isinstance(obj, App | Site) and obj.icon_type == IconType.IMAGE: - return file_helpers.get_signed_file_url(obj.icon) + return build_icon_url(obj.icon_type, obj.icon) return None +def build_icon_url(icon_type: Any, icon: str | None) -> str | None: + if icon is None or icon_type is None: + return None + + from models.model import IconType + + icon_type_value = icon_type.value if isinstance(icon_type, IconType) else str(icon_type) + if icon_type_value.lower() != IconType.IMAGE: + return None + return file_helpers.get_signed_file_url(icon) + + class AvatarUrlField(fields.Raw): def output(self, key, obj, **kwargs): if obj is None: @@ -410,7 +422,7 @@ class TokenManager: token_type: str, account: "Account | None" = None, email: str | None = None, - additional_data: dict | None = None, + additional_data: dict[str, Any] | None = None, ) -> str: if account is None and email is None: raise ValueError("Account or email must be provided") diff --git a/api/libs/sendgrid.py b/api/libs/sendgrid.py index c047c54d06..0338641d11 100644 --- a/api/libs/sendgrid.py +++ b/api/libs/sendgrid.py @@ -1,4 +1,5 @@ import logging +from typing import Any import sendgrid from python_http_client.exceptions import ForbiddenError, UnauthorizedError @@ -12,7 +13,7 @@ class SendGridClient: self.sendgrid_api_key = sendgrid_api_key self._from = _from - def send(self, mail: dict): + def send(self, mail: dict[str, Any]): logger.debug("Sending email with SendGrid") _to = "" try: diff --git a/api/libs/smtp.py b/api/libs/smtp.py index 6f82f1440a..53906d1769 100644 --- a/api/libs/smtp.py +++ b/api/libs/smtp.py @@ -2,6 +2,7 @@ import logging import smtplib from email.mime.multipart import MIMEMultipart from email.mime.text import MIMEText +from typing import Any from configs import dify_config @@ -20,7 +21,7 @@ class SMTPClient: self.use_tls = use_tls self.opportunistic_tls = opportunistic_tls - def send(self, mail: dict): + def send(self, mail: dict[str, Any]): smtp: smtplib.SMTP | None = None local_host = dify_config.SMTP_LOCAL_HOSTNAME try: diff --git a/api/migrations/versions/2026_04_14_1500-8574b23a38fd_add_qdrant_endpoint_to_tidb_auth_bindings.py b/api/migrations/versions/2026_04_14_1500-8574b23a38fd_add_qdrant_endpoint_to_tidb_auth_bindings.py new file mode 100644 index 0000000000..0e188ec080 --- /dev/null +++ b/api/migrations/versions/2026_04_14_1500-8574b23a38fd_add_qdrant_endpoint_to_tidb_auth_bindings.py @@ -0,0 +1,26 @@ +"""add qdrant_endpoint to tidb_auth_bindings + +Revision ID: 8574b23a38fd +Revises: 6b5f9f8b1a2c +Create Date: 2026-04-14 15:00:00.000000 + +""" + +import sqlalchemy as sa +from alembic import op + +# revision identifiers, used by Alembic. 
+revision = "8574b23a38fd" +down_revision = "6b5f9f8b1a2c" +branch_labels = None +depends_on = None + + +def upgrade(): + with op.batch_alter_table("tidb_auth_bindings", schema=None) as batch_op: + batch_op.add_column(sa.Column("qdrant_endpoint", sa.String(length=512), nullable=True)) + + +def downgrade(): + with op.batch_alter_table("tidb_auth_bindings", schema=None) as batch_op: + batch_op.drop_column("qdrant_endpoint") diff --git a/api/migrations/versions/2026_04_15_1726-227822d22895_add_workflow_comments_table.py b/api/migrations/versions/2026_04_15_1726-227822d22895_add_workflow_comments_table.py new file mode 100644 index 0000000000..0548c932b5 --- /dev/null +++ b/api/migrations/versions/2026_04_15_1726-227822d22895_add_workflow_comments_table.py @@ -0,0 +1,90 @@ +"""Add workflow comments table + +Revision ID: 227822d22895 +Revises: 8574b23a38fd +Create Date: 2025-08-22 17:26:15.255980 + +""" +from alembic import op +import models as models +import sqlalchemy as sa + + +# revision identifiers, used by Alembic. +revision = '227822d22895' +down_revision = '8574b23a38fd' +branch_labels = None +depends_on = None + + +def upgrade(): + # ### commands auto generated by Alembic - please adjust! ### + op.create_table('workflow_comments', + sa.Column('id', models.types.StringUUID(), nullable=False), + sa.Column('tenant_id', models.types.StringUUID(), nullable=False), + sa.Column('app_id', models.types.StringUUID(), nullable=False), + sa.Column('position_x', sa.Float(), nullable=False), + sa.Column('position_y', sa.Float(), nullable=False), + sa.Column('content', sa.Text(), nullable=False), + sa.Column('created_by', models.types.StringUUID(), nullable=False), + sa.Column('created_at', sa.DateTime(), server_default=sa.text('CURRENT_TIMESTAMP'), nullable=False), + sa.Column('updated_at', sa.DateTime(), server_default=sa.text('CURRENT_TIMESTAMP'), nullable=False), + sa.Column('resolved', sa.Boolean(), server_default=sa.text('false'), nullable=False), + sa.Column('resolved_at', sa.DateTime(), nullable=True), + sa.Column('resolved_by', models.types.StringUUID(), nullable=True), + sa.PrimaryKeyConstraint('id', name='workflow_comments_pkey') + ) + with op.batch_alter_table('workflow_comments', schema=None) as batch_op: + batch_op.create_index('workflow_comments_app_idx', ['tenant_id', 'app_id'], unique=False) + batch_op.create_index('workflow_comments_created_at_idx', ['created_at'], unique=False) + + op.create_table('workflow_comment_replies', + sa.Column('id', models.types.StringUUID(), nullable=False), + sa.Column('comment_id', models.types.StringUUID(), nullable=False), + sa.Column('content', sa.Text(), nullable=False), + sa.Column('created_by', models.types.StringUUID(), nullable=False), + sa.Column('created_at', sa.DateTime(), server_default=sa.text('CURRENT_TIMESTAMP'), nullable=False), + sa.Column('updated_at', sa.DateTime(), server_default=sa.text('CURRENT_TIMESTAMP'), nullable=False), + sa.ForeignKeyConstraint(['comment_id'], ['workflow_comments.id'], name=op.f('workflow_comment_replies_comment_id_fkey'), ondelete='CASCADE'), + sa.PrimaryKeyConstraint('id', name='workflow_comment_replies_pkey') + ) + with op.batch_alter_table('workflow_comment_replies', schema=None) as batch_op: + batch_op.create_index('comment_replies_comment_idx', ['comment_id'], unique=False) + batch_op.create_index('comment_replies_created_at_idx', ['created_at'], unique=False) + + op.create_table('workflow_comment_mentions', + sa.Column('id', models.types.StringUUID(), nullable=False), + sa.Column('comment_id', 
models.types.StringUUID(), nullable=False), + sa.Column('reply_id', models.types.StringUUID(), nullable=True), + sa.Column('mentioned_user_id', models.types.StringUUID(), nullable=False), + sa.ForeignKeyConstraint(['comment_id'], ['workflow_comments.id'], name=op.f('workflow_comment_mentions_comment_id_fkey'), ondelete='CASCADE'), + sa.ForeignKeyConstraint(['reply_id'], ['workflow_comment_replies.id'], name=op.f('workflow_comment_mentions_reply_id_fkey'), ondelete='CASCADE'), + sa.PrimaryKeyConstraint('id', name='workflow_comment_mentions_pkey') + ) + with op.batch_alter_table('workflow_comment_mentions', schema=None) as batch_op: + batch_op.create_index('comment_mentions_comment_idx', ['comment_id'], unique=False) + batch_op.create_index('comment_mentions_reply_idx', ['reply_id'], unique=False) + batch_op.create_index('comment_mentions_user_idx', ['mentioned_user_id'], unique=False) + + # ### end Alembic commands ### + + +def downgrade(): + # ### commands auto generated by Alembic - please adjust! ### + with op.batch_alter_table('workflow_comment_mentions', schema=None) as batch_op: + batch_op.drop_index('comment_mentions_user_idx') + batch_op.drop_index('comment_mentions_reply_idx') + batch_op.drop_index('comment_mentions_comment_idx') + + op.drop_table('workflow_comment_mentions') + with op.batch_alter_table('workflow_comment_replies', schema=None) as batch_op: + batch_op.drop_index('comment_replies_created_at_idx') + batch_op.drop_index('comment_replies_comment_idx') + + op.drop_table('workflow_comment_replies') + with op.batch_alter_table('workflow_comments', schema=None) as batch_op: + batch_op.drop_index('workflow_comments_created_at_idx') + batch_op.drop_index('workflow_comments_app_idx') + + op.drop_table('workflow_comments') + # ### end Alembic commands ### diff --git a/api/models/__init__.py b/api/models/__init__.py index fcae07f948..85be9ca3bd 100644 --- a/api/models/__init__.py +++ b/api/models/__init__.py @@ -9,6 +9,11 @@ from .account import ( TenantStatus, ) from .api_based_extension import APIBasedExtension, APIBasedExtensionPoint +from .comment import ( + WorkflowComment, + WorkflowCommentMention, + WorkflowCommentReply, +) from .dataset import ( AppDatasetJoin, Dataset, @@ -208,6 +213,9 @@ __all__ = [ "WorkflowAppLog", "WorkflowAppLogCreatedFrom", "WorkflowArchiveLog", + "WorkflowComment", + "WorkflowCommentMention", + "WorkflowCommentReply", "WorkflowNodeExecutionModel", "WorkflowNodeExecutionOffload", "WorkflowNodeExecutionTriggeredFrom", diff --git a/api/models/comment.py b/api/models/comment.py new file mode 100644 index 0000000000..308339e6f6 --- /dev/null +++ b/api/models/comment.py @@ -0,0 +1,218 @@ +"""Workflow comment models.""" + +from datetime import datetime +from typing import Optional + +from sqlalchemy import Index, func +from sqlalchemy.orm import Mapped, mapped_column, relationship + +from .account import Account +from .base import Base +from .engine import db +from .types import StringUUID + + +class WorkflowComment(Base): + """Workflow comment model for canvas commenting functionality. + + Comments are associated with apps rather than specific workflow versions, + since an app has only one draft workflow at a time and comments should persist + across workflow version changes. 
+ + Attributes: + id: Comment ID + tenant_id: Workspace ID + app_id: App ID (primary association, comments belong to apps) + position_x: X coordinate on canvas + position_y: Y coordinate on canvas + content: Comment content + created_by: Creator account ID + created_at: Creation time + updated_at: Last update time + resolved: Whether comment is resolved + resolved_at: Resolution time + resolved_by: Resolver account ID + """ + + __tablename__ = "workflow_comments" + __table_args__ = ( + db.PrimaryKeyConstraint("id", name="workflow_comments_pkey"), + Index("workflow_comments_app_idx", "tenant_id", "app_id"), + Index("workflow_comments_created_at_idx", "created_at"), + ) + + id: Mapped[str] = mapped_column(StringUUID, server_default=db.text("uuidv7()")) + tenant_id: Mapped[str] = mapped_column(StringUUID, nullable=False) + app_id: Mapped[str] = mapped_column(StringUUID, nullable=False) + position_x: Mapped[float] = mapped_column(db.Float) + position_y: Mapped[float] = mapped_column(db.Float) + content: Mapped[str] = mapped_column(db.Text, nullable=False) + created_by: Mapped[str] = mapped_column(StringUUID, nullable=False) + created_at: Mapped[datetime] = mapped_column(db.DateTime, nullable=False, server_default=func.current_timestamp()) + updated_at: Mapped[datetime] = mapped_column( + db.DateTime, nullable=False, server_default=func.current_timestamp(), onupdate=func.current_timestamp() + ) + resolved: Mapped[bool] = mapped_column(db.Boolean, nullable=False, server_default=db.text("false")) + resolved_at: Mapped[datetime | None] = mapped_column(db.DateTime) + resolved_by: Mapped[str | None] = mapped_column(StringUUID) + + # Relationships + replies: Mapped[list["WorkflowCommentReply"]] = relationship( + "WorkflowCommentReply", back_populates="comment", cascade="all, delete-orphan" + ) + mentions: Mapped[list["WorkflowCommentMention"]] = relationship( + "WorkflowCommentMention", back_populates="comment", cascade="all, delete-orphan" + ) + + @property + def created_by_account(self): + """Get creator account.""" + if hasattr(self, "_created_by_account_cache"): + return self._created_by_account_cache + return db.session.get(Account, self.created_by) + + def cache_created_by_account(self, account: Account | None) -> None: + """Cache creator account to avoid extra queries.""" + self._created_by_account_cache = account + + @property + def resolved_by_account(self): + """Get resolver account.""" + if hasattr(self, "_resolved_by_account_cache"): + return self._resolved_by_account_cache + if self.resolved_by: + return db.session.get(Account, self.resolved_by) + return None + + def cache_resolved_by_account(self, account: Account | None) -> None: + """Cache resolver account to avoid extra queries.""" + self._resolved_by_account_cache = account + + @property + def reply_count(self): + """Get reply count.""" + return len(self.replies) + + @property + def mention_count(self): + """Get mention count.""" + return len(self.mentions) + + @property + def participants(self): + """Get all participants (creator + repliers + mentioned users).""" + participant_ids: set[str] = set() + participants: list[Account] = [] + + # Use account properties to reuse preloaded caches and avoid hidden N+1. 
+ if self.created_by not in participant_ids: + participant_ids.add(self.created_by) + created_by_account = self.created_by_account + if created_by_account: + participants.append(created_by_account) + + for reply in self.replies: + if reply.created_by in participant_ids: + continue + participant_ids.add(reply.created_by) + reply_account = reply.created_by_account + if reply_account: + participants.append(reply_account) + + for mention in self.mentions: + if mention.mentioned_user_id in participant_ids: + continue + participant_ids.add(mention.mentioned_user_id) + mentioned_account = mention.mentioned_user_account + if mentioned_account: + participants.append(mentioned_account) + + return participants + + +class WorkflowCommentReply(Base): + """Workflow comment reply model. + + Attributes: + id: Reply ID + comment_id: Parent comment ID + content: Reply content + created_by: Creator account ID + created_at: Creation time + """ + + __tablename__ = "workflow_comment_replies" + __table_args__ = ( + db.PrimaryKeyConstraint("id", name="workflow_comment_replies_pkey"), + Index("comment_replies_comment_idx", "comment_id"), + Index("comment_replies_created_at_idx", "created_at"), + ) + + id: Mapped[str] = mapped_column(StringUUID, server_default=db.text("uuidv7()")) + comment_id: Mapped[str] = mapped_column( + StringUUID, db.ForeignKey("workflow_comments.id", ondelete="CASCADE"), nullable=False + ) + content: Mapped[str] = mapped_column(db.Text, nullable=False) + created_by: Mapped[str] = mapped_column(StringUUID, nullable=False) + created_at: Mapped[datetime] = mapped_column(db.DateTime, nullable=False, server_default=func.current_timestamp()) + updated_at: Mapped[datetime] = mapped_column( + db.DateTime, nullable=False, server_default=func.current_timestamp(), onupdate=func.current_timestamp() + ) + # Relationships + comment: Mapped["WorkflowComment"] = relationship("WorkflowComment", back_populates="replies") + + @property + def created_by_account(self): + """Get creator account.""" + if hasattr(self, "_created_by_account_cache"): + return self._created_by_account_cache + return db.session.get(Account, self.created_by) + + def cache_created_by_account(self, account: Account | None) -> None: + """Cache creator account to avoid extra queries.""" + self._created_by_account_cache = account + + +class WorkflowCommentMention(Base): + """Workflow comment mention model. + + Mentions are only for internal accounts since end users + cannot access workflow canvas and commenting features. 
+ + Attributes: + id: Mention ID + comment_id: Parent comment ID + mentioned_user_id: Mentioned account ID + """ + + __tablename__ = "workflow_comment_mentions" + __table_args__ = ( + db.PrimaryKeyConstraint("id", name="workflow_comment_mentions_pkey"), + Index("comment_mentions_comment_idx", "comment_id"), + Index("comment_mentions_reply_idx", "reply_id"), + Index("comment_mentions_user_idx", "mentioned_user_id"), + ) + + id: Mapped[str] = mapped_column(StringUUID, server_default=db.text("uuidv7()")) + comment_id: Mapped[str] = mapped_column( + StringUUID, db.ForeignKey("workflow_comments.id", ondelete="CASCADE"), nullable=False + ) + reply_id: Mapped[str | None] = mapped_column( + StringUUID, db.ForeignKey("workflow_comment_replies.id", ondelete="CASCADE"), nullable=True + ) + mentioned_user_id: Mapped[str] = mapped_column(StringUUID, nullable=False) + + # Relationships + comment: Mapped["WorkflowComment"] = relationship("WorkflowComment", back_populates="mentions") + reply: Mapped[Optional["WorkflowCommentReply"]] = relationship("WorkflowCommentReply") + + @property + def mentioned_user_account(self): + """Get mentioned account.""" + if hasattr(self, "_mentioned_user_account_cache"): + return self._mentioned_user_account_cache + return db.session.get(Account, self.mentioned_user_id) + + def cache_mentioned_user_account(self, account: Account | None) -> None: + """Cache mentioned account to avoid extra queries.""" + self._mentioned_user_account_cache = account diff --git a/api/models/dataset.py b/api/models/dataset.py index a48afa7ea7..50301dd2d7 100644 --- a/api/models/dataset.py +++ b/api/models/dataset.py @@ -1305,6 +1305,7 @@ class TidbAuthBinding(TypeBase): ) account: Mapped[str] = mapped_column(String(255), nullable=False) password: Mapped[str] = mapped_column(String(255), nullable=False) + qdrant_endpoint: Mapped[str | None] = mapped_column(String(512), nullable=True, default=None) created_at: Mapped[datetime] = mapped_column( DateTime, nullable=False, server_default=func.current_timestamp(), init=False ) @@ -1551,7 +1552,7 @@ class PipelineBuiltInTemplate(TypeBase): name: Mapped[str] = mapped_column(sa.String(255), nullable=False) description: Mapped[str] = mapped_column(LongText, nullable=False) chunk_structure: Mapped[str] = mapped_column(sa.String(255), nullable=False) - icon: Mapped[dict] = mapped_column(sa.JSON, nullable=False) + icon: Mapped[dict[str, Any]] = mapped_column(sa.JSON, nullable=False) yaml_content: Mapped[str] = mapped_column(LongText, nullable=False) copyright: Mapped[str] = mapped_column(sa.String(255), nullable=False) privacy_policy: Mapped[str] = mapped_column(sa.String(255), nullable=False) @@ -1584,7 +1585,7 @@ class PipelineCustomizedTemplate(TypeBase): name: Mapped[str] = mapped_column(sa.String(255), nullable=False) description: Mapped[str] = mapped_column(LongText, nullable=False) chunk_structure: Mapped[str] = mapped_column(sa.String(255), nullable=False) - icon: Mapped[dict] = mapped_column(sa.JSON, nullable=False) + icon: Mapped[dict[str, Any]] = mapped_column(sa.JSON, nullable=False) position: Mapped[int] = mapped_column(sa.Integer, nullable=False) yaml_content: Mapped[str] = mapped_column(LongText, nullable=False) install_count: Mapped[int] = mapped_column(sa.Integer, nullable=False) @@ -1657,7 +1658,7 @@ class DocumentPipelineExecutionLog(TypeBase): datasource_type: Mapped[str] = mapped_column(sa.String(255), nullable=False) datasource_info: Mapped[str] = mapped_column(LongText, nullable=False) datasource_node_id: Mapped[str] = 
mapped_column(sa.String(255), nullable=False) - input_data: Mapped[dict] = mapped_column(sa.JSON, nullable=False) + input_data: Mapped[dict[str, Any]] = mapped_column(sa.JSON, nullable=False) created_by: Mapped[str | None] = mapped_column(StringUUID, nullable=True) created_at: Mapped[datetime] = mapped_column( sa.DateTime, nullable=False, server_default=func.current_timestamp(), init=False diff --git a/api/models/human_input.py b/api/models/human_input.py index 79c5d62f6a..b4c7a634b6 100644 --- a/api/models/human_input.py +++ b/api/models/human_input.py @@ -3,11 +3,11 @@ from enum import StrEnum from typing import Annotated, Literal, Self, final import sqlalchemy as sa -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from pydantic import BaseModel, Field from sqlalchemy.orm import Mapped, mapped_column, relationship from core.workflow.human_input_compat import DeliveryMethodType +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from libs.helper import generate_string from .base import Base, DefaultFieldsMixin diff --git a/api/models/model.py b/api/models/model.py index 47b096d0bf..7fe0731098 100644 --- a/api/models/model.py +++ b/api/models/model.py @@ -14,9 +14,6 @@ from uuid import uuid4 import sqlalchemy as sa from flask import request from flask_login import UserMixin # type: ignore[import-untyped] -from graphon.enums import WorkflowExecutionStatus -from graphon.file import FILE_MODEL_IDENTITY, File, FileTransferMethod, FileType -from graphon.file import helpers as file_helpers from sqlalchemy import BigInteger, Float, Index, PrimaryKeyConstraint, String, exists, func, select, text from sqlalchemy.orm import Mapped, Session, mapped_column, sessionmaker @@ -24,6 +21,9 @@ from configs import dify_config from constants import DEFAULT_FILE_NUMBER_LIMITS from core.tools.signature import sign_tool_file from extensions.storage.storage_type import StorageType +from graphon.enums import WorkflowExecutionStatus +from graphon.file import FILE_MODEL_IDENTITY, File, FileTransferMethod, FileType +from graphon.file import helpers as file_helpers from libs.helper import generate_string # type: ignore[import-not-found] from libs.uuid_utils import uuidv7 from models.utils.file_input_compat import build_file_from_input_mapping @@ -1007,7 +1007,7 @@ class OAuthProviderApp(TypeBase): app_icon: Mapped[str] = mapped_column(String(255), nullable=False) client_id: Mapped[str] = mapped_column(String(255), nullable=False) client_secret: Mapped[str] = mapped_column(String(255), nullable=False) - app_label: Mapped[dict] = mapped_column(sa.JSON, nullable=False, default_factory=dict) + app_label: Mapped[dict[str, Any]] = mapped_column(sa.JSON, nullable=False, default_factory=dict) redirect_uris: Mapped[list] = mapped_column(sa.JSON, nullable=False, default_factory=list) scope: Mapped[str] = mapped_column( String(255), @@ -2495,7 +2495,7 @@ class TraceAppConfig(TypeBase): ) app_id: Mapped[str] = mapped_column(StringUUID, nullable=False) tracing_provider: Mapped[str | None] = mapped_column(String(255), nullable=True) - tracing_config: Mapped[dict | None] = mapped_column(sa.JSON, nullable=True) + tracing_config: Mapped[dict[str, Any] | None] = mapped_column(sa.JSON, nullable=True) created_at: Mapped[datetime] = mapped_column( sa.DateTime, nullable=False, server_default=func.current_timestamp(), init=False ) diff --git a/api/models/oauth.py b/api/models/oauth.py index 1db2552469..bd04d890d3 100644 --- a/api/models/oauth.py +++ b/api/models/oauth.py 
@@ -1,4 +1,5 @@ from datetime import datetime +from typing import Any import sqlalchemy as sa from sqlalchemy import func @@ -22,7 +23,7 @@ class DatasourceOauthParamConfig(TypeBase): ) plugin_id: Mapped[str] = mapped_column(sa.String(255), nullable=False) provider: Mapped[str] = mapped_column(sa.String(255), nullable=False) - system_credentials: Mapped[dict] = mapped_column(AdjustedJSON, nullable=False) + system_credentials: Mapped[dict[str, Any]] = mapped_column(AdjustedJSON, nullable=False) class DatasourceProvider(TypeBase): @@ -40,7 +41,7 @@ class DatasourceProvider(TypeBase): provider: Mapped[str] = mapped_column(sa.String(128), nullable=False) plugin_id: Mapped[str] = mapped_column(sa.String(255), nullable=False) auth_type: Mapped[str] = mapped_column(sa.String(255), nullable=False) - encrypted_credentials: Mapped[dict] = mapped_column(AdjustedJSON, nullable=False) + encrypted_credentials: Mapped[dict[str, Any]] = mapped_column(AdjustedJSON, nullable=False) avatar_url: Mapped[str] = mapped_column(LongText, nullable=True, default="default") is_default: Mapped[bool] = mapped_column(sa.Boolean, nullable=False, server_default=sa.text("false"), default=False) expires_at: Mapped[int] = mapped_column(sa.Integer, nullable=False, server_default="-1", default=-1) @@ -70,7 +71,7 @@ class DatasourceOauthTenantParamConfig(TypeBase): tenant_id: Mapped[str] = mapped_column(StringUUID, nullable=False) provider: Mapped[str] = mapped_column(sa.String(255), nullable=False) plugin_id: Mapped[str] = mapped_column(sa.String(255), nullable=False) - client_params: Mapped[dict] = mapped_column(AdjustedJSON, nullable=False, default_factory=dict) + client_params: Mapped[dict[str, Any]] = mapped_column(AdjustedJSON, nullable=False, default_factory=dict) enabled: Mapped[bool] = mapped_column(sa.Boolean, nullable=False, default=False) created_at: Mapped[datetime] = mapped_column( diff --git a/api/models/provider.py b/api/models/provider.py index 8270961b31..2bb67d605b 100644 --- a/api/models/provider.py +++ b/api/models/provider.py @@ -6,10 +6,10 @@ from functools import cached_property from uuid import uuid4 import sqlalchemy as sa -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import DateTime, String, func, select, text from sqlalchemy.orm import Mapped, mapped_column +from graphon.model_runtime.entities.model_entities import ModelType from libs.uuid_utils import uuidv7 from .base import TypeBase diff --git a/api/models/source.py b/api/models/source.py index 8078b32f8c..8fce7df205 100644 --- a/api/models/source.py +++ b/api/models/source.py @@ -25,7 +25,7 @@ class DataSourceOauthBinding(TypeBase): tenant_id: Mapped[str] = mapped_column(StringUUID, nullable=False) access_token: Mapped[str] = mapped_column(String(255), nullable=False) provider: Mapped[str] = mapped_column(String(255), nullable=False) - source_info: Mapped[dict] = mapped_column(AdjustedJSON, nullable=False) + source_info: Mapped[dict[str, Any]] = mapped_column(AdjustedJSON, nullable=False) created_at: Mapped[datetime] = mapped_column( DateTime, nullable=False, server_default=func.current_timestamp(), init=False ) diff --git a/api/models/types.py b/api/models/types.py index c1d9c3845a..4f35c31a27 100644 --- a/api/models/types.py +++ b/api/models/types.py @@ -103,10 +103,14 @@ class AdjustedJSON(TypeDecorator[dict | list | None]): else: return dialect.type_descriptor(sa.JSON()) - def process_bind_param(self, value: dict | list | None, dialect: Dialect) -> dict | list | None: + def process_bind_param( + self, 
value: dict[str, Any] | list[Any] | None, dialect: Dialect + ) -> dict[str, Any] | list[Any] | None: return value - def process_result_value(self, value: dict | list | None, dialect: Dialect) -> dict | list | None: + def process_result_value( + self, value: dict[str, Any] | list[Any] | None, dialect: Dialect + ) -> dict[str, Any] | list[Any] | None: return value diff --git a/api/models/utils/file_input_compat.py b/api/models/utils/file_input_compat.py index 8b767779ce..a2dc8f6157 100644 --- a/api/models/utils/file_input_compat.py +++ b/api/models/utils/file_input_compat.py @@ -4,9 +4,8 @@ from collections.abc import Callable, Mapping from functools import lru_cache from typing import Any -from graphon.file import File, FileTransferMethod - from core.workflow.file_reference import parse_file_reference +from graphon.file import File, FileTransferMethod @lru_cache(maxsize=1) diff --git a/api/models/workflow.py b/api/models/workflow.py index 63abf8c3b6..dfda03c2ee 100644 --- a/api/models/workflow.py +++ b/api/models/workflow.py @@ -8,19 +8,6 @@ from typing import TYPE_CHECKING, Any, Optional, TypedDict, cast from uuid import uuid4 import sqlalchemy as sa -from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter -from graphon.entities.pause_reason import HumanInputRequired, PauseReason, PauseReasonType, SchedulingPause -from graphon.enums import ( - BuiltinNodeTypes, - NodeType, - WorkflowExecutionStatus, - WorkflowNodeExecutionMetadataKey, - WorkflowNodeExecutionStatus, -) -from graphon.file import File -from graphon.file.constants import maybe_file_object -from graphon.variables import utils as variable_utils -from graphon.variables.variables import FloatVariable, IntegerVariable, RAGPipelineVariable, StringVariable from sqlalchemy import ( DateTime, Index, @@ -44,6 +31,19 @@ from core.workflow.variable_prefixes import ( ) from extensions.ext_storage import Storage from factories.variable_factory import TypeMismatchError, build_segment_with_type +from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter +from graphon.entities.pause_reason import HumanInputRequired, PauseReason, PauseReasonType, SchedulingPause +from graphon.enums import ( + BuiltinNodeTypes, + NodeType, + WorkflowExecutionStatus, + WorkflowNodeExecutionMetadataKey, + WorkflowNodeExecutionStatus, +) +from graphon.file import File +from graphon.file.constants import maybe_file_object +from graphon.variables import utils as variable_utils +from graphon.variables.variables import FloatVariable, IntegerVariable, RAGPipelineVariable, StringVariable from libs.datetime_utils import naive_utc_now from libs.uuid_utils import uuidv7 @@ -53,11 +53,10 @@ if TYPE_CHECKING: from .model import AppMode, UploadFile -from graphon.variables import SecretVariable, Segment, SegmentType, VariableBase - from constants import DEFAULT_FILE_NUMBER_LIMITS, HIDDEN_VALUE from core.helper import encrypter from factories import variable_factory +from graphon.variables import SecretVariable, Segment, SegmentType, VariableBase from libs import helper from .account import Account @@ -490,7 +489,7 @@ class Workflow(Base): # bug :return: hash """ - entity = {"graph": self.graph_dict, "features": self.features_dict} + entity = {"graph": self.graph_dict} return helper.generate_text_hash(json.dumps(entity, sort_keys=True)) diff --git a/api/providers/vdb/vdb-alibabacloud-mysql/src/dify_vdb_alibabacloud_mysql/alibabacloud_mysql_vector.py 
b/api/providers/vdb/vdb-alibabacloud-mysql/src/dify_vdb_alibabacloud_mysql/alibabacloud_mysql_vector.py index 6e76827a42..37ffd11063 100644 --- a/api/providers/vdb/vdb-alibabacloud-mysql/src/dify_vdb_alibabacloud_mysql/alibabacloud_mysql_vector.py +++ b/api/providers/vdb/vdb-alibabacloud-mysql/src/dify_vdb_alibabacloud_mysql/alibabacloud_mysql_vector.py @@ -35,7 +35,7 @@ class AlibabaCloudMySQLVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values.get("host"): raise ValueError("config ALIBABACLOUD_MYSQL_HOST is required") if not values.get("port"): diff --git a/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_openapi.py b/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_openapi.py index 726ee8c050..f13d9c0817 100644 --- a/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_openapi.py +++ b/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_openapi.py @@ -34,7 +34,7 @@ class AnalyticdbVectorOpenAPIConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["access_key_id"]: raise ValueError("config ANALYTICDB_KEY_ID is required") if not values["access_key_secret"]: diff --git a/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_sql.py b/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_sql.py index 41c33a3ab1..b2908ebdae 100644 --- a/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_sql.py +++ b/api/providers/vdb/vdb-analyticdb/src/dify_vdb_analyticdb/analyticdb_vector_sql.py @@ -24,7 +24,7 @@ class AnalyticdbVectorBySqlConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config ANALYTICDB_HOST is required") if not values["port"]: diff --git a/api/providers/vdb/vdb-baidu/src/dify_vdb_baidu/baidu_vector.py b/api/providers/vdb/vdb-baidu/src/dify_vdb_baidu/baidu_vector.py index 99ab0d82f2..bdd5a42c87 100644 --- a/api/providers/vdb/vdb-baidu/src/dify_vdb_baidu/baidu_vector.py +++ b/api/providers/vdb/vdb-baidu/src/dify_vdb_baidu/baidu_vector.py @@ -59,7 +59,7 @@ class BaiduConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["endpoint"]: raise ValueError("config BAIDU_VECTOR_DB_ENDPOINT is required") if not values["account"]: diff --git a/api/providers/vdb/vdb-clickzetta/src/dify_vdb_clickzetta/clickzetta_vector.py b/api/providers/vdb/vdb-clickzetta/src/dify_vdb_clickzetta/clickzetta_vector.py index a4dddc68f0..72b8c5e9eb 100644 --- a/api/providers/vdb/vdb-clickzetta/src/dify_vdb_clickzetta/clickzetta_vector.py +++ b/api/providers/vdb/vdb-clickzetta/src/dify_vdb_clickzetta/clickzetta_vector.py @@ -51,7 +51,7 @@ class ClickzettaConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): """ Validate the configuration values. 
""" diff --git a/api/providers/vdb/vdb-couchbase/src/dify_vdb_couchbase/couchbase_vector.py b/api/providers/vdb/vdb-couchbase/src/dify_vdb_couchbase/couchbase_vector.py index 9a4a65cf6f..815ac30c0b 100644 --- a/api/providers/vdb/vdb-couchbase/src/dify_vdb_couchbase/couchbase_vector.py +++ b/api/providers/vdb/vdb-couchbase/src/dify_vdb_couchbase/couchbase_vector.py @@ -36,7 +36,7 @@ class CouchbaseConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values.get("connection_string"): raise ValueError("config COUCHBASE_CONNECTION_STRING is required") if not values.get("user"): diff --git a/api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_ja_vector.py b/api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_ja_vector.py index 87b9d813ec..e2f390402a 100644 --- a/api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_ja_vector.py +++ b/api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_ja_vector.py @@ -23,7 +23,7 @@ class ElasticSearchJaVector(ElasticSearchVector): self, embeddings: list[list[float]], metadatas: list[dict[Any, Any]] | None = None, - index_params: dict | None = None, + index_params: dict[str, Any] | None = None, ): lock_name = f"vector_indexing_lock_{self._collection_name}" with redis_client.lock(lock_name, timeout=20): diff --git a/api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_vector.py b/api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_vector.py index 1470713b88..11463b6c58 100644 --- a/api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_vector.py +++ b/api/providers/vdb/vdb-elasticsearch/src/dify_vdb_elasticsearch/elasticsearch_vector.py @@ -43,7 +43,7 @@ class ElasticSearchConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): use_cloud = values.get("use_cloud", False) cloud_url = values.get("cloud_url") @@ -258,7 +258,7 @@ class ElasticSearchVector(BaseVector): self, embeddings: list[list[float]], metadatas: list[dict[Any, Any]] | None = None, - index_params: dict | None = None, + index_params: dict[str, Any] | None = None, ): lock_name = f"vector_indexing_lock_{self._collection_name}" with redis_client.lock(lock_name, timeout=20): diff --git a/api/providers/vdb/vdb-hologres/src/dify_vdb_hologres/hologres_vector.py b/api/providers/vdb/vdb-hologres/src/dify_vdb_hologres/hologres_vector.py index 2509260d41..80c0ed582e 100644 --- a/api/providers/vdb/vdb-hologres/src/dify_vdb_hologres/hologres_vector.py +++ b/api/providers/vdb/vdb-hologres/src/dify_vdb_hologres/hologres_vector.py @@ -43,7 +43,7 @@ class HologresVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values.get("host"): raise ValueError("config HOLOGRES_HOST is required") if not values.get("database"): diff --git a/api/providers/vdb/vdb-huawei-cloud/src/dify_vdb_huawei_cloud/huawei_cloud_vector.py b/api/providers/vdb/vdb-huawei-cloud/src/dify_vdb_huawei_cloud/huawei_cloud_vector.py index 90d6d98c63..d51075d2e8 100644 --- a/api/providers/vdb/vdb-huawei-cloud/src/dify_vdb_huawei_cloud/huawei_cloud_vector.py +++ b/api/providers/vdb/vdb-huawei-cloud/src/dify_vdb_huawei_cloud/huawei_cloud_vector.py @@ -44,7 +44,7 @@ 
class HuaweiCloudVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["hosts"]: raise ValueError("config HOSTS is required") return values @@ -169,7 +169,7 @@ class HuaweiCloudVector(BaseVector): self, embeddings: list[list[float]], metadatas: list[dict[Any, Any]] | None = None, - index_params: dict | None = None, + index_params: dict[str, Any] | None = None, ): lock_name = f"vector_indexing_lock_{self._collection_name}" with redis_client.lock(lock_name, timeout=20): diff --git a/api/providers/vdb/vdb-lindorm/src/dify_vdb_lindorm/lindorm_vector.py b/api/providers/vdb/vdb-lindorm/src/dify_vdb_lindorm/lindorm_vector.py index fbe0bcad02..9187ca943d 100644 --- a/api/providers/vdb/vdb-lindorm/src/dify_vdb_lindorm/lindorm_vector.py +++ b/api/providers/vdb/vdb-lindorm/src/dify_vdb_lindorm/lindorm_vector.py @@ -44,7 +44,7 @@ class LindormVectorStoreConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["hosts"]: raise ValueError("config URL is required") if not values["username"]: @@ -336,7 +336,10 @@ class LindormVectorStore(BaseVector): return docs def create_collection( - self, embeddings: list, metadatas: list[dict] | None = None, index_params: dict | None = None + self, + embeddings: list[list[float]], + metadatas: list[dict[str, Any]] | None = None, + index_params: dict[str, Any] | None = None, ): if not embeddings: raise ValueError(f"Embeddings list cannot be empty for collection create '{self._collection_name}'") diff --git a/api/providers/vdb/vdb-matrixone/src/dify_vdb_matrixone/matrixone_vector.py b/api/providers/vdb/vdb-matrixone/src/dify_vdb_matrixone/matrixone_vector.py index c6ebccd204..75fb54e6f4 100644 --- a/api/providers/vdb/vdb-matrixone/src/dify_vdb_matrixone/matrixone_vector.py +++ b/api/providers/vdb/vdb-matrixone/src/dify_vdb_matrixone/matrixone_vector.py @@ -43,7 +43,7 @@ class MatrixoneConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config host is required") if not values["port"]: diff --git a/api/providers/vdb/vdb-milvus/src/dify_vdb_milvus/milvus_vector.py b/api/providers/vdb/vdb-milvus/src/dify_vdb_milvus/milvus_vector.py index 7cdb2d3a99..46f3224a95 100644 --- a/api/providers/vdb/vdb-milvus/src/dify_vdb_milvus/milvus_vector.py +++ b/api/providers/vdb/vdb-milvus/src/dify_vdb_milvus/milvus_vector.py @@ -45,7 +45,7 @@ class MilvusConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): """ Validate the configuration values. Raises ValueError if required fields are missing. @@ -302,7 +302,10 @@ class MilvusVector(BaseVector): ) def create_collection( - self, embeddings: list, metadatas: list[dict] | None = None, index_params: dict | None = None + self, + embeddings: list[list[float]], + metadatas: list[dict[str, Any]] | None = None, + index_params: dict[str, Any] | None = None, ): """ Create a new collection in Milvus with the specified schema and index parameters. 
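The `values: dict` → `values: dict[str, Any]` change above repeats across every vdb provider config in this diff, alongside fully parameterized `create_collection(embeddings: list[list[float]], metadatas: list[dict[str, Any]] | None, index_params: dict[str, Any] | None)` signatures. A minimal sketch of the validator pattern, assuming Pydantic v2 semantics; `ExampleVectorConfig` and its fields are illustrative only and not taken from any provider:

```python
from typing import Any

from pydantic import BaseModel, model_validator


class ExampleVectorConfig(BaseModel):
    host: str
    port: int

    @model_validator(mode="before")
    @classmethod
    def validate_config(cls, values: dict[str, Any]) -> dict[str, Any]:
        # `values` is the raw input mapping; parameterizing it as dict[str, Any]
        # lets type checkers verify the .get() calls and the returned value.
        if not values.get("host"):
            raise ValueError("config EXAMPLE_HOST is required")
        if not values.get("port"):
            raise ValueError("config EXAMPLE_PORT is required")
        return values


config = ExampleVectorConfig.model_validate({"host": "127.0.0.1", "port": 19530})
```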
diff --git a/api/providers/vdb/vdb-oceanbase/src/dify_vdb_oceanbase/oceanbase_vector.py b/api/providers/vdb/vdb-oceanbase/src/dify_vdb_oceanbase/oceanbase_vector.py index 82f419871c..69dc42169a 100644 --- a/api/providers/vdb/vdb-oceanbase/src/dify_vdb_oceanbase/oceanbase_vector.py +++ b/api/providers/vdb/vdb-oceanbase/src/dify_vdb_oceanbase/oceanbase_vector.py @@ -49,7 +49,7 @@ class OceanBaseVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config OCEANBASE_VECTOR_HOST is required") if not values["port"]: diff --git a/api/providers/vdb/vdb-opengauss/src/dify_vdb_opengauss/opengauss.py b/api/providers/vdb/vdb-opengauss/src/dify_vdb_opengauss/opengauss.py index f9dbfbeeaf..acd2471cf6 100644 --- a/api/providers/vdb/vdb-opengauss/src/dify_vdb_opengauss/opengauss.py +++ b/api/providers/vdb/vdb-opengauss/src/dify_vdb_opengauss/opengauss.py @@ -29,7 +29,7 @@ class OpenGaussConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config OPENGAUSS_HOST is required") if not values["port"]: diff --git a/api/providers/vdb/vdb-opensearch/src/dify_vdb_opensearch/opensearch_vector.py b/api/providers/vdb/vdb-opensearch/src/dify_vdb_opensearch/opensearch_vector.py index 50d18cdc4c..843c495d82 100644 --- a/api/providers/vdb/vdb-opensearch/src/dify_vdb_opensearch/opensearch_vector.py +++ b/api/providers/vdb/vdb-opensearch/src/dify_vdb_opensearch/opensearch_vector.py @@ -49,7 +49,7 @@ class OpenSearchConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values.get("host"): raise ValueError("config OPENSEARCH_HOST is required") if not values.get("port"): @@ -252,7 +252,10 @@ class OpenSearchVector(BaseVector): return docs def create_collection( - self, embeddings: list, metadatas: list[dict] | None = None, index_params: dict | None = None + self, + embeddings: list[list[float]], + metadatas: list[dict[str, Any]] | None = None, + index_params: dict[str, Any] | None = None, ): lock_name = f"vector_indexing_lock_{self._collection_name.lower()}" with redis_client.lock(lock_name, timeout=20): diff --git a/api/providers/vdb/vdb-oracle/src/dify_vdb_oracle/oraclevector.py b/api/providers/vdb/vdb-oracle/src/dify_vdb_oracle/oraclevector.py index cb05c22b55..70377c82c8 100644 --- a/api/providers/vdb/vdb-oracle/src/dify_vdb_oracle/oraclevector.py +++ b/api/providers/vdb/vdb-oracle/src/dify_vdb_oracle/oraclevector.py @@ -36,7 +36,7 @@ class OracleVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["user"]: raise ValueError("config ORACLE_USER is required") if not values["password"]: diff --git a/api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/collection.py b/api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/collection.py index c335bc610d..e087ec30a5 100644 --- a/api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/collection.py +++ b/api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/collection.py @@ -1,3 +1,4 @@ +from typing import Any from uuid import UUID from numpy import ndarray @@ -8,5 +9,5 @@ class CollectionORM(DeclarativeBase): __tablename__: str id: Mapped[UUID] 
text: Mapped[str] - meta: Mapped[dict] + meta: Mapped[dict[str, Any]] vector: Mapped[ndarray] diff --git a/api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/pgvecto_rs.py b/api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/pgvecto_rs.py index 2f52af5681..9c721c8bde 100644 --- a/api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/pgvecto_rs.py +++ b/api/providers/vdb/vdb-pgvecto-rs/src/dify_vdb_pgvecto_rs/pgvecto_rs.py @@ -33,7 +33,7 @@ class PgvectoRSConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config PGVECTO_RS_HOST is required") if not values["port"]: @@ -67,7 +67,7 @@ class PGVectoRS(BaseVector): primary_key=True, ) text: Mapped[str] - meta: Mapped[dict] = mapped_column(postgresql.JSONB) + meta: Mapped[dict[str, Any]] = mapped_column(postgresql.JSONB) vector: Mapped[ndarray] = mapped_column(VECTOR(dim)) self._table = _Table diff --git a/api/providers/vdb/vdb-pgvector/src/dify_vdb_pgvector/pgvector.py b/api/providers/vdb/vdb-pgvector/src/dify_vdb_pgvector/pgvector.py index 0615b8312c..b1bdce0ad4 100644 --- a/api/providers/vdb/vdb-pgvector/src/dify_vdb_pgvector/pgvector.py +++ b/api/providers/vdb/vdb-pgvector/src/dify_vdb_pgvector/pgvector.py @@ -34,7 +34,7 @@ class PGVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config PGVECTOR_HOST is required") if not values["port"]: diff --git a/api/providers/vdb/vdb-relyt/src/dify_vdb_relyt/relyt_vector.py b/api/providers/vdb/vdb-relyt/src/dify_vdb_relyt/relyt_vector.py index 64b45bf28b..336c2d3c8a 100644 --- a/api/providers/vdb/vdb-relyt/src/dify_vdb_relyt/relyt_vector.py +++ b/api/providers/vdb/vdb-relyt/src/dify_vdb_relyt/relyt_vector.py @@ -38,7 +38,7 @@ class RelytConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config RELYT_HOST is required") if not values["port"]: @@ -239,7 +239,7 @@ class RelytVector(BaseVector): self, embedding: list[float], k: int = 4, - filter: dict | None = None, + filter: dict[str, Any] | None = None, ) -> list[tuple[Document, float]]: # Add the filter if provided diff --git a/api/providers/vdb/vdb-tablestore/src/dify_vdb_tablestore/tablestore_vector.py b/api/providers/vdb/vdb-tablestore/src/dify_vdb_tablestore/tablestore_vector.py index 4a734232ec..f9deac11e5 100644 --- a/api/providers/vdb/vdb-tablestore/src/dify_vdb_tablestore/tablestore_vector.py +++ b/api/providers/vdb/vdb-tablestore/src/dify_vdb_tablestore/tablestore_vector.py @@ -30,7 +30,7 @@ class TableStoreConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["access_key_id"]: raise ValueError("config ACCESS_KEY_ID is required") if not values["access_key_secret"]: diff --git a/api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_on_qdrant_vector.py b/api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_on_qdrant_vector.py index bb8a580ebf..abca55f540 100644 --- a/api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_on_qdrant_vector.py +++ 
b/api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_on_qdrant_vector.py @@ -1,4 +1,5 @@ import json +import logging import os import uuid from collections.abc import Generator, Iterable, Sequence @@ -7,6 +8,8 @@ from typing import TYPE_CHECKING, Any import httpx import qdrant_client + +logger = logging.getLogger(__name__) from flask import current_app from httpx import DigestAuth from pydantic import BaseModel @@ -421,13 +424,16 @@ class TidbOnQdrantVector(BaseVector): class TidbOnQdrantVectorFactory(AbstractVectorFactory): def init_vector(self, dataset: Dataset, attributes: list, embeddings: Embeddings) -> TidbOnQdrantVector: + logger.info("init_vector: tenant_id=%s, dataset_id=%s", dataset.tenant_id, dataset.id) stmt = select(TidbAuthBinding).where(TidbAuthBinding.tenant_id == dataset.tenant_id) tidb_auth_binding = db.session.scalars(stmt).one_or_none() if not tidb_auth_binding: + logger.info("No existing TidbAuthBinding for tenant %s, acquiring lock", dataset.tenant_id) with redis_client.lock("create_tidb_serverless_cluster_lock", timeout=900): stmt = select(TidbAuthBinding).where(TidbAuthBinding.tenant_id == dataset.tenant_id) tidb_auth_binding = db.session.scalars(stmt).one_or_none() if tidb_auth_binding: + logger.info("Found binding after lock: cluster_id=%s", tidb_auth_binding.cluster_id) TIDB_ON_QDRANT_API_KEY = f"{tidb_auth_binding.account}:{tidb_auth_binding.password}" else: @@ -437,11 +443,18 @@ class TidbOnQdrantVectorFactory(AbstractVectorFactory): .limit(1) ) if idle_tidb_auth_binding: + logger.info( + "Assigning idle cluster %s to tenant %s", + idle_tidb_auth_binding.cluster_id, + dataset.tenant_id, + ) idle_tidb_auth_binding.active = True idle_tidb_auth_binding.tenant_id = dataset.tenant_id db.session.commit() + tidb_auth_binding = idle_tidb_auth_binding TIDB_ON_QDRANT_API_KEY = f"{idle_tidb_auth_binding.account}:{idle_tidb_auth_binding.password}" else: + logger.info("No idle clusters available, creating new cluster for tenant %s", dataset.tenant_id) new_cluster = TidbService.create_tidb_serverless_cluster( dify_config.TIDB_PROJECT_ID or "", dify_config.TIDB_API_URL or "", @@ -450,21 +463,39 @@ class TidbOnQdrantVectorFactory(AbstractVectorFactory): dify_config.TIDB_PRIVATE_KEY or "", dify_config.TIDB_REGION or "", ) + logger.info( + "New cluster created: cluster_id=%s, qdrant_endpoint=%s", + new_cluster["cluster_id"], + new_cluster.get("qdrant_endpoint"), + ) new_tidb_auth_binding = TidbAuthBinding( cluster_id=new_cluster["cluster_id"], cluster_name=new_cluster["cluster_name"], account=new_cluster["account"], password=new_cluster["password"], + qdrant_endpoint=new_cluster.get("qdrant_endpoint"), tenant_id=dataset.tenant_id, active=True, status=TidbAuthBindingStatus.ACTIVE, ) db.session.add(new_tidb_auth_binding) db.session.commit() + tidb_auth_binding = new_tidb_auth_binding TIDB_ON_QDRANT_API_KEY = f"{new_tidb_auth_binding.account}:{new_tidb_auth_binding.password}" else: + logger.info("Existing binding found: cluster_id=%s", tidb_auth_binding.cluster_id) TIDB_ON_QDRANT_API_KEY = f"{tidb_auth_binding.account}:{tidb_auth_binding.password}" + qdrant_url = ( + (tidb_auth_binding.qdrant_endpoint if tidb_auth_binding else None) or dify_config.TIDB_ON_QDRANT_URL or "" + ) + logger.info( + "Using qdrant endpoint: %s (from_binding=%s, fallback_global=%s)", + qdrant_url, + tidb_auth_binding.qdrant_endpoint if tidb_auth_binding else None, + dify_config.TIDB_ON_QDRANT_URL, + ) + if dataset.index_struct_dict: class_prefix: str = 
dataset.index_struct_dict["vector_store"]["class_prefix"] collection_name = class_prefix @@ -479,7 +510,7 @@ class TidbOnQdrantVectorFactory(AbstractVectorFactory): collection_name=collection_name, group_id=dataset.id, config=TidbOnQdrantConfig( - endpoint=dify_config.TIDB_ON_QDRANT_URL or "", + endpoint=qdrant_url, api_key=TIDB_ON_QDRANT_API_KEY, root_path=str(config.root_path), timeout=dify_config.TIDB_ON_QDRANT_CLIENT_TIMEOUT, diff --git a/api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_service.py b/api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_service.py index 37114be6e7..ece061db67 100644 --- a/api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_service.py +++ b/api/providers/vdb/vdb-tidb-on-qdrant/src/dify_vdb_tidb_on_qdrant/tidb_service.py @@ -1,3 +1,4 @@ +import logging import time import uuid from collections.abc import Sequence @@ -12,6 +13,8 @@ from extensions.ext_redis import redis_client from models.dataset import TidbAuthBinding from models.enums import TidbAuthBindingStatus +logger = logging.getLogger(__name__) + # Reuse a pooled HTTP client for all TiDB Cloud requests to minimize connection churn _tidb_http_client: httpx.Client = get_pooled_http_client( "tidb:cloud", @@ -20,6 +23,46 @@ _tidb_http_client: httpx.Client = get_pooled_http_client( class TidbService: + @staticmethod + def extract_qdrant_endpoint(cluster_response: dict) -> str | None: + """Extract the qdrant endpoint URL from a Get Cluster API response. + + Reads ``endpoints.public.host`` (e.g. ``gateway01.xx.tidbcloud.com``), + prepends ``qdrant-`` and wraps it as an ``https://`` URL. + """ + endpoints = cluster_response.get("endpoints") or {} + public = endpoints.get("public") or {} + host = public.get("host") + if host: + return f"https://qdrant-{host}" + return None + + @staticmethod + def fetch_qdrant_endpoint(api_url: str, public_key: str, private_key: str, cluster_id: str) -> str | None: + """Call Get Cluster API and extract the qdrant endpoint. + + Use ``extract_qdrant_endpoint`` instead when you already have + the cluster response to avoid a redundant API call. 
+ """ + try: + logger.info("Fetching qdrant endpoint for cluster %s", cluster_id) + cluster_response = TidbService.get_tidb_serverless_cluster(api_url, public_key, private_key, cluster_id) + if not cluster_response: + logger.warning("Empty response from Get Cluster API for cluster %s", cluster_id) + return None + qdrant_url = TidbService.extract_qdrant_endpoint(cluster_response) + if qdrant_url: + logger.info("Resolved qdrant endpoint for cluster %s: %s", cluster_id, qdrant_url) + return qdrant_url + logger.warning( + "No endpoints.public.host found for cluster %s, response keys: %s", + cluster_id, + list(cluster_response.keys()), + ) + except Exception: + logger.exception("Failed to fetch qdrant endpoint for cluster %s", cluster_id) + return None + @staticmethod def create_tidb_serverless_cluster( project_id: str, api_url: str, iam_url: str, public_key: str, private_key: str, region: str @@ -57,6 +100,7 @@ class TidbService: "rootPassword": password, } + logger.info("Creating TiDB serverless cluster: display_name=%s, region=%s", display_name, region) response = _tidb_http_client.post( f"{api_url}/clusters", json=cluster_data, auth=DigestAuth(public_key, private_key) ) @@ -64,21 +108,39 @@ class TidbService: if response.status_code == 200: response_data = response.json() cluster_id = response_data["clusterId"] + logger.info("Cluster created, cluster_id=%s, waiting for ACTIVE state", cluster_id) retry_count = 0 max_retries = 30 while retry_count < max_retries: cluster_response = TidbService.get_tidb_serverless_cluster(api_url, public_key, private_key, cluster_id) if cluster_response["state"] == "ACTIVE": user_prefix = cluster_response["userPrefix"] + qdrant_endpoint = TidbService.extract_qdrant_endpoint(cluster_response) + logger.info( + "Cluster %s is ACTIVE, user_prefix=%s, qdrant_endpoint=%s", + cluster_id, + user_prefix, + qdrant_endpoint, + ) return { "cluster_id": cluster_id, "cluster_name": display_name, "account": f"{user_prefix}.root", "password": password, + "qdrant_endpoint": qdrant_endpoint, } - time.sleep(30) # wait 30 seconds before retrying + logger.info( + "Cluster %s state=%s, retry %d/%d", + cluster_id, + cluster_response["state"], + retry_count + 1, + max_retries, + ) + time.sleep(30) retry_count += 1 + logger.error("Cluster %s did not become ACTIVE after %d retries", cluster_id, max_retries) else: + logger.error("Failed to create cluster: status=%d, body=%s", response.status_code, response.text) response.raise_for_status() @staticmethod @@ -243,19 +305,29 @@ class TidbService: if response.status_code == 200: response_data = response.json() cluster_infos = [] + logger.info("Batch created %d clusters", len(response_data.get("clusters", []))) for item in response_data["clusters"]: cache_key = f"tidb_serverless_cluster_password:{item['displayName']}" cached_password = redis_client.get(cache_key) if not cached_password: + logger.warning("No cached password for cluster %s, skipping", item["displayName"]) continue + qdrant_endpoint = TidbService.fetch_qdrant_endpoint(api_url, public_key, private_key, item["clusterId"]) + logger.info( + "Batch cluster %s: qdrant_endpoint=%s", + item["clusterId"], + qdrant_endpoint, + ) cluster_info = { "cluster_id": item["clusterId"], "cluster_name": item["displayName"], "account": "root", "password": cached_password.decode("utf-8"), + "qdrant_endpoint": qdrant_endpoint, } cluster_infos.append(cluster_info) return cluster_infos else: + logger.error("Batch create failed: status=%d, body=%s", response.status_code, response.text) 
response.raise_for_status() return [] diff --git a/api/providers/vdb/vdb-tidb-on-qdrant/tests/unit_tests/test_tidb_on_qdrant_vector.py b/api/providers/vdb/vdb-tidb-on-qdrant/tests/unit_tests/test_tidb_on_qdrant_vector.py index 3e9229fea5..76802de62e 100644 --- a/api/providers/vdb/vdb-tidb-on-qdrant/tests/unit_tests/test_tidb_on_qdrant_vector.py +++ b/api/providers/vdb/vdb-tidb-on-qdrant/tests/unit_tests/test_tidb_on_qdrant_vector.py @@ -114,14 +114,12 @@ class TestTidbOnQdrantVectorDeleteByIds: assert exc_info.value.status_code == 500 - def test_delete_by_ids_with_large_batch(self, vector_instance): - """Test deletion with a large batch of IDs.""" - # Create 1000 IDs + def test_delete_by_ids_with_exactly_1000(self, vector_instance): + """Test deletion with exactly 1000 IDs triggers a single batch.""" ids = [f"doc_{i}" for i in range(1000)] vector_instance.delete_by_ids(ids) - # Verify single delete call with all IDs vector_instance._client.delete.assert_called_once() call_args = vector_instance._client.delete.call_args @@ -129,11 +127,28 @@ class TestTidbOnQdrantVectorDeleteByIds: filter_obj = filter_selector.filter field_condition = filter_obj.must[0] - # Verify all 1000 IDs are in the batch assert len(field_condition.match.any) == 1000 assert "doc_0" in field_condition.match.any assert "doc_999" in field_condition.match.any + def test_delete_by_ids_splits_into_batches(self, vector_instance): + """Test deletion with >1000 IDs triggers multiple batched calls.""" + ids = [f"doc_{i}" for i in range(2500)] + + vector_instance.delete_by_ids(ids) + + assert vector_instance._client.delete.call_count == 3 + + batches = [] + for call in vector_instance._client.delete.call_args_list: + filter_selector = call[1]["points_selector"] + field_condition = filter_selector.filter.must[0] + batches.append(field_condition.match.any) + + assert len(batches[0]) == 1000 + assert len(batches[1]) == 1000 + assert len(batches[2]) == 500 + def test_delete_by_ids_filter_structure(self, vector_instance): """Test that the filter structure is correctly constructed.""" ids = ["doc1", "doc2"] @@ -157,3 +172,57 @@ class TestTidbOnQdrantVectorDeleteByIds: # Verify MatchAny structure assert isinstance(field_condition.match, rest.MatchAny) assert field_condition.match.any == ids + + +class TestInitVectorEndpointSelection: + """Test that init_vector selects the correct qdrant endpoint. + + We avoid importing the full module (which triggers Flask app context) + by testing the endpoint selection logic directly on TidbOnQdrantConfig. 
+ """ + + def test_uses_binding_endpoint_when_present(self): + binding_endpoint = "https://qdrant-custom.tidb.com" + global_url = "https://qdrant-global.tidb.com" + + qdrant_url = binding_endpoint or global_url or "" + + assert qdrant_url == "https://qdrant-custom.tidb.com" + config = TidbOnQdrantConfig(endpoint=qdrant_url) + assert config.endpoint == "https://qdrant-custom.tidb.com" + + def test_falls_back_to_global_when_binding_endpoint_is_none(self): + binding_endpoint = None + global_url = "https://qdrant-global.tidb.com" + + qdrant_url = binding_endpoint or global_url or "" + + assert qdrant_url == "https://qdrant-global.tidb.com" + config = TidbOnQdrantConfig(endpoint=qdrant_url) + assert config.endpoint == "https://qdrant-global.tidb.com" + + def test_falls_back_to_empty_when_both_none(self): + binding_endpoint = None + global_url = None + + qdrant_url = binding_endpoint or global_url or "" + + assert qdrant_url == "" + config = TidbOnQdrantConfig(endpoint=qdrant_url) + assert config.endpoint == "" + + def test_binding_endpoint_takes_precedence_over_global(self): + binding_endpoint = "https://qdrant-ap-southeast.tidb.com" + global_url = "https://qdrant-us-east.tidb.com" + + qdrant_url = binding_endpoint or global_url or "" + + assert qdrant_url == "https://qdrant-ap-southeast.tidb.com" + + def test_empty_string_binding_endpoint_falls_back_to_global(self): + binding_endpoint = "" + global_url = "https://qdrant-global.tidb.com" + + qdrant_url = binding_endpoint or global_url or "" + + assert qdrant_url == "https://qdrant-global.tidb.com" diff --git a/api/providers/vdb/vdb-tidb-on-qdrant/tests/unit_tests/test_tidb_service.py b/api/providers/vdb/vdb-tidb-on-qdrant/tests/unit_tests/test_tidb_service.py new file mode 100644 index 0000000000..c1ffbacbbc --- /dev/null +++ b/api/providers/vdb/vdb-tidb-on-qdrant/tests/unit_tests/test_tidb_service.py @@ -0,0 +1,218 @@ +from unittest.mock import MagicMock, patch + +import pytest +from dify_vdb_tidb_on_qdrant.tidb_service import TidbService + + +class TestExtractQdrantEndpoint: + """Unit tests for TidbService.extract_qdrant_endpoint.""" + + def test_returns_endpoint_when_host_present(self): + response = {"endpoints": {"public": {"host": "gateway01.us-east-1.tidbcloud.com", "port": 4000}}} + result = TidbService.extract_qdrant_endpoint(response) + assert result == "https://qdrant-gateway01.us-east-1.tidbcloud.com" + + def test_returns_none_when_host_missing(self): + response = {"endpoints": {"public": {}}} + assert TidbService.extract_qdrant_endpoint(response) is None + + def test_returns_none_when_public_missing(self): + response = {"endpoints": {}} + assert TidbService.extract_qdrant_endpoint(response) is None + + def test_returns_none_when_endpoints_missing(self): + assert TidbService.extract_qdrant_endpoint({}) is None + + +class TestFetchQdrantEndpoint: + """Unit tests for TidbService.fetch_qdrant_endpoint.""" + + @patch.object(TidbService, "get_tidb_serverless_cluster") + def test_returns_endpoint_when_host_present(self, mock_get_cluster): + mock_get_cluster.return_value = { + "endpoints": {"public": {"host": "gateway01.us-east-1.tidbcloud.com", "port": 4000}} + } + result = TidbService.fetch_qdrant_endpoint("url", "pub", "priv", "c-123") + assert result == "https://qdrant-gateway01.us-east-1.tidbcloud.com" + + @patch.object(TidbService, "get_tidb_serverless_cluster") + def test_returns_none_when_cluster_response_is_none(self, mock_get_cluster): + mock_get_cluster.return_value = None + assert TidbService.fetch_qdrant_endpoint("url", "pub", 
"priv", "c-123") is None + + @patch.object(TidbService, "get_tidb_serverless_cluster") + def test_returns_none_when_host_missing(self, mock_get_cluster): + mock_get_cluster.return_value = {"endpoints": {"public": {}}} + assert TidbService.fetch_qdrant_endpoint("url", "pub", "priv", "c-123") is None + + @patch.object(TidbService, "get_tidb_serverless_cluster") + def test_returns_none_when_endpoints_missing(self, mock_get_cluster): + mock_get_cluster.return_value = {} + assert TidbService.fetch_qdrant_endpoint("url", "pub", "priv", "c-123") is None + + @patch.object(TidbService, "get_tidb_serverless_cluster") + def test_returns_none_on_exception(self, mock_get_cluster): + mock_get_cluster.side_effect = RuntimeError("network error") + assert TidbService.fetch_qdrant_endpoint("url", "pub", "priv", "c-123") is None + + +class TestCreateTidbServerlessClusterQdrantEndpoint: + """Verify that create_tidb_serverless_cluster includes qdrant_endpoint in its result.""" + + @patch.object(TidbService, "get_tidb_serverless_cluster") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_result_contains_qdrant_endpoint(self, mock_config, mock_http, mock_get_cluster): + mock_config.TIDB_SPEND_LIMIT = 10 + mock_http.post.return_value = MagicMock(status_code=200, json=lambda: {"clusterId": "c-1"}) + mock_get_cluster.return_value = { + "state": "ACTIVE", + "userPrefix": "pfx", + "endpoints": {"public": {"host": "gw.tidbcloud.com", "port": 4000}}, + } + + result = TidbService.create_tidb_serverless_cluster("proj", "url", "iam", "pub", "priv", "us-east-1") + + assert result is not None + assert result["qdrant_endpoint"] == "https://qdrant-gw.tidbcloud.com" + + @patch.object(TidbService, "get_tidb_serverless_cluster") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_result_qdrant_endpoint_none_when_no_endpoints(self, mock_config, mock_http, mock_get_cluster): + mock_config.TIDB_SPEND_LIMIT = 10 + mock_http.post.return_value = MagicMock(status_code=200, json=lambda: {"clusterId": "c-1"}) + mock_get_cluster.return_value = {"state": "ACTIVE", "userPrefix": "pfx"} + + result = TidbService.create_tidb_serverless_cluster("proj", "url", "iam", "pub", "priv", "us-east-1") + + assert result is not None + assert result["qdrant_endpoint"] is None + + +class TestBatchCreateTidbServerlessClusterQdrantEndpoint: + """Verify that batch_create includes qdrant_endpoint per cluster.""" + + @patch.object(TidbService, "fetch_qdrant_endpoint", return_value="https://qdrant-gw.tidbcloud.com") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.redis_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_batch_result_contains_qdrant_endpoint(self, mock_config, mock_http, mock_redis, mock_fetch_ep): + mock_config.TIDB_SPEND_LIMIT = 10 + cluster_name = "abc123" + mock_http.post.return_value = MagicMock( + status_code=200, + json=lambda: {"clusters": [{"clusterId": "c-1", "displayName": cluster_name}]}, + ) + mock_redis.setex = MagicMock() + mock_redis.get.return_value = b"password123" + + result = TidbService.batch_create_tidb_serverless_cluster( + batch_size=1, + project_id="proj", + api_url="url", + iam_url="iam", + public_key="pub", + private_key="priv", + region="us-east-1", + ) + + assert len(result) == 1 + assert result[0]["qdrant_endpoint"] == 
"https://qdrant-gw.tidbcloud.com" + + +class TestCreateTidbServerlessClusterRetry: + """Cover retry/logging paths in create_tidb_serverless_cluster.""" + + @patch.object(TidbService, "get_tidb_serverless_cluster") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_polls_until_active(self, mock_config, mock_http, mock_get_cluster): + mock_config.TIDB_SPEND_LIMIT = 10 + mock_http.post.return_value = MagicMock(status_code=200, json=lambda: {"clusterId": "c-1"}) + mock_get_cluster.side_effect = [ + {"state": "CREATING", "userPrefix": ""}, + {"state": "ACTIVE", "userPrefix": "pfx", "endpoints": {"public": {"host": "gw.tidb.com"}}}, + ] + + with patch("dify_vdb_tidb_on_qdrant.tidb_service.time.sleep"): + result = TidbService.create_tidb_serverless_cluster("proj", "url", "iam", "pub", "priv", "us-east-1") + + assert result is not None + assert result["qdrant_endpoint"] == "https://qdrant-gw.tidb.com" + assert mock_get_cluster.call_count == 2 + + @patch.object(TidbService, "get_tidb_serverless_cluster") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_returns_none_after_max_retries(self, mock_config, mock_http, mock_get_cluster): + mock_config.TIDB_SPEND_LIMIT = 10 + mock_http.post.return_value = MagicMock(status_code=200, json=lambda: {"clusterId": "c-1"}) + mock_get_cluster.return_value = {"state": "CREATING", "userPrefix": ""} + + with patch("dify_vdb_tidb_on_qdrant.tidb_service.time.sleep"): + result = TidbService.create_tidb_serverless_cluster("proj", "url", "iam", "pub", "priv", "us-east-1") + + assert result is None + + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_raises_on_post_failure(self, mock_config, mock_http): + mock_config.TIDB_SPEND_LIMIT = 10 + mock_response = MagicMock(status_code=400, text="Bad Request") + mock_response.raise_for_status.side_effect = Exception("HTTP 400") + mock_http.post.return_value = mock_response + + with pytest.raises(Exception, match="HTTP 400"): + TidbService.create_tidb_serverless_cluster("proj", "url", "iam", "pub", "priv", "us-east-1") + + +class TestBatchCreateEdgeCases: + """Cover logging/edge-case branches in batch_create.""" + + @patch.object(TidbService, "fetch_qdrant_endpoint", return_value=None) + @patch("dify_vdb_tidb_on_qdrant.tidb_service.redis_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_skips_cluster_when_no_cached_password(self, mock_config, mock_http, mock_redis, mock_fetch_ep): + mock_config.TIDB_SPEND_LIMIT = 10 + mock_http.post.return_value = MagicMock( + status_code=200, + json=lambda: {"clusters": [{"clusterId": "c-1", "displayName": "name1"}]}, + ) + mock_redis.setex = MagicMock() + mock_redis.get.return_value = None + + result = TidbService.batch_create_tidb_serverless_cluster( + batch_size=1, + project_id="proj", + api_url="url", + iam_url="iam", + public_key="pub", + private_key="priv", + region="us-east-1", + ) + + assert len(result) == 0 + mock_fetch_ep.assert_not_called() + + @patch("dify_vdb_tidb_on_qdrant.tidb_service.redis_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service._tidb_http_client") + @patch("dify_vdb_tidb_on_qdrant.tidb_service.dify_config") + def test_raises_on_post_failure(self, mock_config, mock_http, mock_redis): + 
mock_config.TIDB_SPEND_LIMIT = 10 + mock_response = MagicMock(status_code=500, text="Server Error") + mock_response.raise_for_status.side_effect = Exception("HTTP 500") + mock_http.post.return_value = mock_response + mock_redis.setex = MagicMock() + + with pytest.raises(Exception, match="HTTP 500"): + TidbService.batch_create_tidb_serverless_cluster( + batch_size=1, + project_id="proj", + api_url="url", + iam_url="iam", + public_key="pub", + private_key="priv", + region="us-east-1", + ) diff --git a/api/providers/vdb/vdb-tidb-vector/src/dify_vdb_tidb_vector/tidb_vector.py b/api/providers/vdb/vdb-tidb-vector/src/dify_vdb_tidb_vector/tidb_vector.py index e321681093..c696a685dd 100644 --- a/api/providers/vdb/vdb-tidb-vector/src/dify_vdb_tidb_vector/tidb_vector.py +++ b/api/providers/vdb/vdb-tidb-vector/src/dify_vdb_tidb_vector/tidb_vector.py @@ -31,7 +31,7 @@ class TiDBVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config TIDB_VECTOR_HOST is required") if not values["port"]: diff --git a/api/providers/vdb/vdb-upstash/src/dify_vdb_upstash/upstash_vector.py b/api/providers/vdb/vdb-upstash/src/dify_vdb_upstash/upstash_vector.py index 289d971853..75d70a1964 100644 --- a/api/providers/vdb/vdb-upstash/src/dify_vdb_upstash/upstash_vector.py +++ b/api/providers/vdb/vdb-upstash/src/dify_vdb_upstash/upstash_vector.py @@ -20,7 +20,7 @@ class UpstashVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["url"]: raise ValueError("Upstash URL is required") if not values["token"]: diff --git a/api/providers/vdb/vdb-vastbase/src/dify_vdb_vastbase/vastbase_vector.py b/api/providers/vdb/vdb-vastbase/src/dify_vdb_vastbase/vastbase_vector.py index d080e8da58..ab00f9db28 100644 --- a/api/providers/vdb/vdb-vastbase/src/dify_vdb_vastbase/vastbase_vector.py +++ b/api/providers/vdb/vdb-vastbase/src/dify_vdb_vastbase/vastbase_vector.py @@ -28,7 +28,7 @@ class VastbaseVectorConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict): + def validate_config(cls, values: dict[str, Any]): if not values["host"]: raise ValueError("config VASTBASE_HOST is required") if not values["port"]: diff --git a/api/providers/vdb/vdb-weaviate/src/dify_vdb_weaviate/weaviate_vector.py b/api/providers/vdb/vdb-weaviate/src/dify_vdb_weaviate/weaviate_vector.py index 25b65b82a9..902e6a03a8 100644 --- a/api/providers/vdb/vdb-weaviate/src/dify_vdb_weaviate/weaviate_vector.py +++ b/api/providers/vdb/vdb-weaviate/src/dify_vdb_weaviate/weaviate_vector.py @@ -20,7 +20,7 @@ from pydantic import BaseModel, model_validator from weaviate.classes.data import DataObject from weaviate.classes.init import Auth from weaviate.classes.query import Filter, MetadataQuery -from weaviate.exceptions import UnexpectedStatusCodeError +from weaviate.exceptions import UnexpectedStatusCodeError, WeaviateQueryError from configs import dify_config from core.rag.datasource.vdb.field import Field @@ -82,7 +82,7 @@ class WeaviateConfig(BaseModel): @model_validator(mode="before") @classmethod - def validate_config(cls, values: dict) -> dict: + def validate_config(cls, values: dict[str, Any]) -> dict[str, Any]: """Validates that required configuration values are present.""" if not values["endpoint"]: raise ValueError("config WEAVIATE_ENDPOINT is required") 
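Stepping back to the TiDB-on-Qdrant changes above: `init_vector` now resolves the Qdrant endpoint from the tenant's `TidbAuthBinding.qdrant_endpoint` first and only falls back to the global `TIDB_ON_QDRANT_URL`, while `TidbService.extract_qdrant_endpoint` derives that endpoint from `endpoints.public.host`. A short usage sketch, assuming the response shape used in the new unit tests rather than a verified TiDB Cloud payload:

```python
from dify_vdb_tidb_on_qdrant.tidb_service import TidbService

# Shape mirrors the new tests; a real Get Cluster response may carry more fields.
cluster_response = {
    "endpoints": {"public": {"host": "gateway01.us-east-1.tidbcloud.com", "port": 4000}}
}
assert (
    TidbService.extract_qdrant_endpoint(cluster_response)
    == "https://qdrant-gateway01.us-east-1.tidbcloud.com"
)

# Endpoint selection as done in init_vector: binding endpoint first, then global URL.
binding_endpoint = None  # e.g. tidb_auth_binding.qdrant_endpoint
global_url = "https://qdrant-global.tidb.com"  # e.g. dify_config.TIDB_ON_QDRANT_URL
qdrant_url = binding_endpoint or global_url or ""
assert qdrant_url == "https://qdrant-global.tidb.com"
```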
@@ -230,6 +230,8 @@ class WeaviateVector(BaseVector): wc.Property(name="doc_id", data_type=wc.DataType.TEXT), wc.Property(name="doc_type", data_type=wc.DataType.TEXT), wc.Property(name="chunk_index", data_type=wc.DataType.INT), + wc.Property(name="is_summary", data_type=wc.DataType.BOOL), + wc.Property(name="original_chunk_id", data_type=wc.DataType.TEXT), ], vector_config=wc.Configure.Vectors.self_provided(), ) @@ -262,6 +264,10 @@ class WeaviateVector(BaseVector): to_add.append(wc.Property(name="doc_type", data_type=wc.DataType.TEXT)) if "chunk_index" not in existing: to_add.append(wc.Property(name="chunk_index", data_type=wc.DataType.INT)) + if "is_summary" not in existing: + to_add.append(wc.Property(name="is_summary", data_type=wc.DataType.BOOL)) + if "original_chunk_id" not in existing: + to_add.append(wc.Property(name="original_chunk_id", data_type=wc.DataType.TEXT)) for prop in to_add: try: @@ -400,15 +406,27 @@ class WeaviateVector(BaseVector): top_k = int(kwargs.get("top_k", 4)) score_threshold = float(kwargs.get("score_threshold") or 0.0) - res = col.query.near_vector( - near_vector=query_vector, - limit=top_k, - return_properties=props, - return_metadata=MetadataQuery(distance=True), - include_vector=False, - filters=where, - target_vector="default", - ) + try: + res = col.query.near_vector( + near_vector=query_vector, + limit=top_k, + return_properties=props, + return_metadata=MetadataQuery(distance=True), + include_vector=False, + filters=where, + target_vector="default", + ) + except WeaviateQueryError: + self._ensure_properties() + res = col.query.near_vector( + near_vector=query_vector, + limit=top_k, + return_properties=props, + return_metadata=MetadataQuery(distance=True), + include_vector=False, + filters=where, + target_vector="default", + ) docs: list[Document] = [] for obj in res.objects: @@ -446,14 +464,25 @@ class WeaviateVector(BaseVector): top_k = int(kwargs.get("top_k", 4)) - res = col.query.bm25( - query=query, - query_properties=[Field.TEXT_KEY.value], - limit=top_k, - return_properties=props, - include_vector=True, - filters=where, - ) + try: + res = col.query.bm25( + query=query, + query_properties=[Field.TEXT_KEY.value], + limit=top_k, + return_properties=props, + include_vector=True, + filters=where, + ) + except WeaviateQueryError: + self._ensure_properties() + res = col.query.bm25( + query=query, + query_properties=[Field.TEXT_KEY.value], + limit=top_k, + return_properties=props, + include_vector=True, + filters=where, + ) docs: list[Document] = [] for obj in res.objects: diff --git a/api/providers/vdb/vdb-weaviate/tests/unit_tests/test_weaviate_vector.py b/api/providers/vdb/vdb-weaviate/tests/unit_tests/test_weaviate_vector.py index b43a4a20c8..b40f7e52ca 100644 --- a/api/providers/vdb/vdb-weaviate/tests/unit_tests/test_weaviate_vector.py +++ b/api/providers/vdb/vdb-weaviate/tests/unit_tests/test_weaviate_vector.py @@ -326,7 +326,7 @@ class TestWeaviateVector(unittest.TestCase): add_calls = mock_col.config.add_property.call_args_list added_names = [call.args[0].name for call in add_calls] - assert added_names == ["document_id", "doc_id", "doc_type", "chunk_index"] + assert added_names == ["document_id", "doc_id", "doc_type", "chunk_index", "is_summary", "original_chunk_id"] @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_ensure_properties_skips_existing_doc_type(self, mock_weaviate_module): @@ -346,6 +346,8 @@ class TestWeaviateVector(unittest.TestCase): SimpleNamespace(name="doc_id"), SimpleNamespace(name="doc_type"), 
SimpleNamespace(name="chunk_index"), + SimpleNamespace(name="is_summary"), + SimpleNamespace(name="original_chunk_id"), ] mock_cfg = MagicMock() mock_cfg.properties = existing_props @@ -383,7 +385,7 @@ class TestWeaviateVector(unittest.TestCase): with patch.object(weaviate_vector_module.logger, "warning") as mock_warning: wv._ensure_properties() - assert mock_warning.call_count == 4 + assert mock_warning.call_count == 6 @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_search_by_vector_returns_doc_type_in_metadata(self, mock_weaviate_module): @@ -484,6 +486,56 @@ class TestWeaviateVector(unittest.TestCase): assert wv.search_by_vector(query_vector=[0.1] * 3) == [] + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") + def test_search_by_vector_retries_on_weaviate_query_error(self, mock_weaviate_module): + """Test that search_by_vector catches WeaviateQueryError, calls _ensure_properties, and retries.""" + from weaviate.exceptions import WeaviateQueryError + + mock_client = MagicMock() + mock_client.is_ready.return_value = True + mock_weaviate_module.connect_to_custom.return_value = mock_client + + mock_client.collections.exists.return_value = True + mock_col = MagicMock() + mock_client.collections.use.return_value = mock_col + + # First call raises WeaviateQueryError, second call succeeds + mock_obj = MagicMock() + mock_obj.properties = {"text": "retry result", "document_id": "doc-1"} + mock_obj.metadata.distance = 0.2 + + mock_result = MagicMock() + mock_result.objects = [mock_obj] + + mock_col.query.near_vector.side_effect = [ + WeaviateQueryError("missing property", "gRPC"), + mock_result, + ] + + # Mock _ensure_properties dependencies + mock_cfg = MagicMock() + mock_cfg.properties = [ + SimpleNamespace(name="text"), + SimpleNamespace(name="document_id"), + SimpleNamespace(name="doc_id"), + SimpleNamespace(name="doc_type"), + SimpleNamespace(name="chunk_index"), + SimpleNamespace(name="is_summary"), + SimpleNamespace(name="original_chunk_id"), + ] + mock_col.config.get.return_value = mock_cfg + + wv = WeaviateVector( + collection_name=self.collection_name, + config=self.config, + attributes=self.attributes, + ) + docs = wv.search_by_vector(query_vector=[0.1] * 3, top_k=1) + + assert mock_col.query.near_vector.call_count == 2 + assert len(docs) == 1 + assert docs[0].metadata["score"] == pytest.approx(0.8) + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_search_by_full_text_returns_doc_type_in_metadata(self, mock_weaviate_module): """Test that search_by_full_text also returns doc_type in document metadata.""" @@ -569,6 +621,56 @@ class TestWeaviateVector(unittest.TestCase): assert wv.search_by_full_text(query="missing") == [] + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") + def test_search_by_full_text_retries_on_weaviate_query_error(self, mock_weaviate_module): + """Test that search_by_full_text catches WeaviateQueryError, calls _ensure_properties, and retries.""" + from weaviate.exceptions import WeaviateQueryError + + mock_client = MagicMock() + mock_client.is_ready.return_value = True + mock_weaviate_module.connect_to_custom.return_value = mock_client + + mock_client.collections.exists.return_value = True + mock_col = MagicMock() + mock_client.collections.use.return_value = mock_col + + # First call raises WeaviateQueryError, second call succeeds + mock_obj = MagicMock() + mock_obj.properties = {"text": "retry bm25 result", "doc_id": "segment-1"} + mock_obj.vector = {"default": [0.5, 0.6]} + + mock_result = MagicMock() + mock_result.objects = 
[mock_obj] + + mock_col.query.bm25.side_effect = [ + WeaviateQueryError("missing property", "gRPC"), + mock_result, + ] + + # Mock _ensure_properties dependencies + mock_cfg = MagicMock() + mock_cfg.properties = [ + SimpleNamespace(name="text"), + SimpleNamespace(name="document_id"), + SimpleNamespace(name="doc_id"), + SimpleNamespace(name="doc_type"), + SimpleNamespace(name="chunk_index"), + SimpleNamespace(name="is_summary"), + SimpleNamespace(name="original_chunk_id"), + ] + mock_col.config.get.return_value = mock_cfg + + wv = WeaviateVector( + collection_name=self.collection_name, + config=self.config, + attributes=self.attributes, + ) + docs = wv.search_by_full_text(query="retry", top_k=1) + + assert mock_col.query.bm25.call_count == 2 + assert len(docs) == 1 + assert docs[0].page_content == "retry bm25 result" + @patch("dify_vdb_weaviate.weaviate_vector.weaviate") def test_add_texts_stores_doc_type_in_properties(self, mock_weaviate_module): """Test that add_texts includes doc_type from document metadata in stored properties.""" diff --git a/api/pyproject.toml b/api/pyproject.toml index 3b7e5f8e1f..a1ceea181e 100644 --- a/api/pyproject.toml +++ b/api/pyproject.toml @@ -4,92 +4,55 @@ version = "1.13.3" requires-python = "~=3.12.0" dependencies = [ - "aliyun-log-python-sdk~=0.9.44", - "arize-phoenix-otel~=0.15.0", - "azure-identity==1.25.3", - "beautifulsoup4==4.14.3", - "boto3==1.42.88", - "bs4~=0.0.1", - "cachetools~=7.0.5", - "celery~=5.6.3", - "charset-normalizer>=3.4.7", - "flask~=3.1.3", - "flask-compress>=1.24,<1.25", - "flask-cors~=6.0.2", - "flask-login~=0.6.3", - "flask-migrate~=4.1.0", - "flask-orjson~=2.0.0", - "flask-sqlalchemy~=3.1.1", - "gevent~=26.4.0", - "gmpy2~=2.3.0", - "google-api-core>=2.30.3", - "google-api-python-client==2.194.0", - "google-auth>=2.49.2", - "google-auth-httplib2==0.3.1", - "google-cloud-aiplatform>=1.147.0", - "googleapis-common-protos>=1.74.0", - "graphon>=0.1.2", - "gunicorn~=25.3.0", - "httpx[socks]~=0.28.1", - "jieba==0.42.1", - "json-repair>=0.59.2", - "langfuse>=4.2.0,<5.0.0", - "langsmith~=0.7.30", - "markdown~=3.10.2", - "mlflow-skinny>=3.11.1", - "numpy~=2.4.4", - "openpyxl~=3.1.5", - "opik~=1.11.2", - "litellm==1.83.0", # Pinned to avoid madoka dependency issue - "opentelemetry-api==1.41.0", - "opentelemetry-distro==0.62b0", - "opentelemetry-exporter-otlp==1.41.0", - "opentelemetry-exporter-otlp-proto-common==1.41.0", - "opentelemetry-exporter-otlp-proto-grpc==1.41.0", - "opentelemetry-exporter-otlp-proto-http==1.41.0", - "opentelemetry-instrumentation==0.62b0", - "opentelemetry-instrumentation-celery==0.62b0", - "opentelemetry-instrumentation-flask==0.62b0", - "opentelemetry-instrumentation-httpx==0.62b0", - "opentelemetry-instrumentation-redis==0.62b0", - "opentelemetry-instrumentation-sqlalchemy==0.62b0", - "opentelemetry-propagator-b3==1.41.0", - "opentelemetry-proto==1.41.0", - "opentelemetry-sdk==1.41.0", - "opentelemetry-semantic-conventions==0.62b0", - "opentelemetry-util-http==0.62b0", - "pandas[excel,output-formatting,performance]~=3.0.2", - "psycogreen~=1.0.2", - "psycopg2-binary~=2.9.11", - "pycryptodome==3.23.0", - "pydantic~=2.12.5", - "pydantic-settings~=2.13.1", - "pyjwt~=2.12.1", - "pypdfium2==5.6.0", - "python-docx~=1.2.0", - "python-dotenv==1.2.2", - "pyyaml~=6.0.1", - "readabilipy~=0.3.0", - "redis[hiredis]~=7.4.0", - "resend~=2.27.0", - "sentry-sdk[flask]~=2.57.0", - "sqlalchemy~=2.0.49", - "starlette==1.0.0", - "tiktoken~=0.12.0", - "transformers~=5.3.0", - "unstructured[docx,epub,md,ppt,pptx]~=0.21.5", - 
"pypandoc~=1.13", - "yarl~=1.23.0", - "sseclient-py~=1.9.0", - "httpx-sse~=0.4.0", - "sendgrid~=6.12.5", - "flask-restx~=1.3.2", - "packaging~=26.0", + # Legacy: mature and widely deployed + "bleach>=6.3.0", + "boto3>=1.42.88", + "celery>=5.6.3", "croniter>=6.2.2", - "apscheduler>=3.11.2", - "weave>=0.52.36", - "fastopenapi[flask]>=0.7.0", - "bleach~=6.3.0", + "flask-cors>=6.0.2", + "gevent>=26.4.0", + "gevent-websocket>=0.10.1", + "gmpy2>=2.3.0", + "google-api-python-client>=2.194.0", + "gunicorn>=25.3.0", + "psycogreen>=1.0.2", + "psycopg2-binary>=2.9.11", + "python-socketio>=5.13.0", + "redis[hiredis]>=7.4.0", + "sendgrid>=6.12.5", + "sseclient-py>=1.8.0", + + # Stable: production-proven, cap below the next major + "aliyun-log-python-sdk>=0.9.44,<1.0.0", + "azure-identity>=1.25.3,<2.0.0", + "flask-compress>=1.24,<2.0.0", + "flask-login>=0.6.3,<1.0.0", + "flask-migrate>=4.1.0,<5.0.0", + "flask-orjson>=2.0.0,<3.0.0", + "flask-restx>=1.3.2,<2.0.0", + "google-cloud-aiplatform>=1.147.0,<2.0.0", + "httpx[socks]>=0.28.1,<1.0.0", + "langfuse>=4.2.0,<5.0.0", + "langsmith>=0.7.31,<1.0.0", + "mlflow-skinny>=3.11.1,<4.0.0", + "opentelemetry-distro>=0.62b0,<1.0.0", + "opentelemetry-instrumentation-celery>=0.62b0,<1.0.0", + "opentelemetry-instrumentation-flask>=0.62b0,<1.0.0", + "opentelemetry-instrumentation-httpx>=0.62b0,<1.0.0", + "opentelemetry-instrumentation-redis>=0.62b0,<1.0.0", + "opentelemetry-instrumentation-sqlalchemy>=0.62b0,<1.0.0", + "opentelemetry-propagator-b3>=1.41.0,<2.0.0", + "readabilipy>=0.3.0,<1.0.0", + "resend>=2.27.0,<3.0.0", + "weave>=0.52.36,<1.0.0", + + # Emerging: newer and fast-moving, use compatible pins + "arize-phoenix-otel~=0.15.0", + "fastopenapi[flask]~=0.7.0", + "graphon~=0.1.2", + "httpx-sse~=0.4.0", + "json-repair~=0.59.2", + "opik~=1.11.2", ] # Before adding new dependency, consider place it in # alphabet order (a-z) and suitable group. 
@@ -136,6 +99,9 @@ dify-vdb-weaviate = { workspace = true } [tool.uv] default-groups = ["storage", "tools", "vdb-all"] package = false +override-dependencies = [ + "pyarrow>=18.0.0", +] [dependency-groups] @@ -144,46 +110,46 @@ package = false # Required for development and running tests ############################################################ dev = [ - "coverage~=7.13.4", - "dotenv-linter~=0.7.0", - "faker~=40.13.0", - "lxml-stubs~=0.5.1", - "basedpyright~=1.39.0", - "ruff~=0.15.10", - "pytest~=9.0.3", - "pytest-benchmark~=5.2.3", - "pytest-cov~=7.1.0", - "pytest-env~=1.6.0", - "pytest-mock~=3.15.1", - "testcontainers~=4.14.2", - "types-aiofiles~=25.1.0", - "types-beautifulsoup4~=4.12.0", - "types-cachetools~=6.2.0", - "types-colorama~=0.4.15", - "types-defusedxml~=0.7.0", - "types-deprecated~=1.3.1", - "types-docutils~=0.22.3", - "types-flask-cors~=6.0.0", - "types-flask-migrate~=4.1.0", - "types-gevent~=26.4.0", - "types-greenlet~=3.4.0", - "types-html5lib~=1.1.11", - "types-markdown~=3.10.2", - "types-oauthlib~=3.3.0", - "types-objgraph~=3.6.0", - "types-olefile~=0.47.0", - "types-openpyxl~=3.1.5", - "types-pexpect~=4.9.0", - "types-protobuf~=7.34.1", - "types-psutil~=7.2.2", - "types-psycopg2~=2.9.21", - "types-pygments~=2.20.0", - "types-pymysql~=1.1.0", - "types-python-dateutil~=2.9.0", - "types-pywin32~=311.0.0", - "types-pyyaml~=6.0.12", - "types-regex~=2026.4.4", - "types-shapely~=2.1.0", + "coverage>=7.13.4", + "dotenv-linter>=0.7.0", + "faker>=20.1.0", + "lxml-stubs>=0.5.1", + "basedpyright>=1.39.0", + "ruff>=0.15.10", + "pytest>=9.0.3", + "pytest-benchmark>=5.2.3", + "pytest-cov>=7.1.0", + "pytest-env>=1.6.0", + "pytest-mock>=3.15.1", + "testcontainers>=4.14.2", + "types-aiofiles>=25.1.0", + "types-beautifulsoup4>=4.12.0", + "types-cachetools>=6.2.0", + "types-colorama>=0.4.15", + "types-defusedxml>=0.7.0", + "types-deprecated>=1.3.1", + "types-docutils>=0.22.3", + "types-flask-cors>=6.0.0", + "types-flask-migrate>=4.1.0", + "types-gevent>=26.4.0", + "types-greenlet>=3.4.0", + "types-html5lib>=1.1.11", + "types-markdown>=3.10.2", + "types-oauthlib>=3.3.0", + "types-objgraph>=3.6.0", + "types-olefile>=0.47.0", + "types-openpyxl>=3.1.5", + "types-pexpect>=4.9.0", + "types-protobuf>=7.34.1", + "types-psutil>=7.2.2", + "types-psycopg2>=2.9.21", + "types-pygments>=2.20.0", + "types-pymysql>=1.1.0", + "types-python-dateutil>=2.9.0", + "types-pywin32>=311.0.0", + "types-pyyaml>=6.0.12", + "types-regex>=2026.4.4", + "types-shapely>=2.1.0", "types-simplejson>=3.20.0.20260408", "types-six>=1.17.0.20260408", "types-tensorflow>=2.18.0.20260408", @@ -195,19 +161,18 @@ dev = [ "types_pyOpenSSL>=24.1.0", "types_cffi>=2.0.0.20260408", "types_setuptools>=82.0.0.20260408", - "pandas-stubs~=3.0.0", + "pandas-stubs>=3.0.0", "scipy-stubs>=1.15.3.0", "types-python-http-client>=3.3.7.20260408", "import-linter>=2.3", "types-redis>=4.6.0.20241004", "celery-types>=0.23.0", - "mypy~=1.20.1", + "mypy>=1.20.1", # "locust>=2.40.4", # Temporarily removed due to compatibility issues. Uncomment when resolved. 
- "sseclient-py>=1.8.0", "pytest-timeout>=2.4.0", "pytest-xdist>=3.8.0", "pyrefly>=0.60.0", - "xinference-client~=2.4.0", + "xinference-client>=2.4.0", ] ############################################################ @@ -215,21 +180,21 @@ dev = [ # Required for storage clients ############################################################ storage = [ - "azure-storage-blob==12.28.0", - "bce-python-sdk~=0.9.69", - "cos-python-sdk-v5==1.9.41", - "esdk-obs-python==3.26.2", + "azure-storage-blob>=12.28.0", + "bce-python-sdk>=0.9.69", + "cos-python-sdk-v5>=1.9.41", + "esdk-obs-python>=3.22.2", "google-cloud-storage>=3.10.1", - "opendal~=0.46.0", - "oss2==2.19.1", - "supabase~=2.18.1", - "tos~=2.9.0", + "opendal>=0.46.0", + "oss2>=2.19.1", + "supabase>=2.18.1", + "tos>=2.9.0", ] ############################################################ # [ Tools ] dependency group ############################################################ -tools = ["cloudscraper~=1.2.71", "nltk~=3.9.1"] +tools = ["cloudscraper>=1.2.71", "nltk>=3.9.1"] ############################################################ # [ VDB ] workspace plugins — hollow packages under providers/vdb/* @@ -299,7 +264,7 @@ vdb-vastbase = ["dify-vdb-vastbase"] vdb-vikingdb = ["dify-vdb-vikingdb"] vdb-weaviate = ["dify-vdb-weaviate"] # Optional client used by some tests / integrations (not a vector backend plugin) -vdb-xinference = ["xinference-client~=2.4.0"] +vdb-xinference = ["xinference-client>=2.4.0"] [tool.pyrefly] project-includes = ["."] diff --git a/api/repositories/api_workflow_run_repository.py b/api/repositories/api_workflow_run_repository.py index 100589804c..72b38e7906 100644 --- a/api/repositories/api_workflow_run_repository.py +++ b/api/repositories/api_workflow_run_repository.py @@ -38,11 +38,11 @@ from collections.abc import Callable, Sequence from datetime import datetime from typing import Protocol, TypedDict -from graphon.entities.pause_reason import PauseReason -from graphon.enums import WorkflowType from sqlalchemy.orm import Session from core.repositories.factory import WorkflowExecutionRepository +from graphon.entities.pause_reason import PauseReason +from graphon.enums import WorkflowType from libs.infinite_scroll_pagination import InfiniteScrollPagination from models.enums import WorkflowRunTriggeredFrom from models.workflow import WorkflowAppLog, WorkflowArchiveLog, WorkflowPause, WorkflowPauseReason, WorkflowRun diff --git a/api/repositories/sqlalchemy_api_workflow_node_execution_repository.py b/api/repositories/sqlalchemy_api_workflow_node_execution_repository.py index d5c6a203b1..44735eb769 100644 --- a/api/repositories/sqlalchemy_api_workflow_node_execution_repository.py +++ b/api/repositories/sqlalchemy_api_workflow_node_execution_repository.py @@ -10,11 +10,11 @@ from collections.abc import Sequence from datetime import datetime from typing import Protocol, cast -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from sqlalchemy import asc, delete, desc, func, select from sqlalchemy.engine import CursorResult from sqlalchemy.orm import Session, sessionmaker +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from models.workflow import WorkflowNodeExecutionModel, WorkflowNodeExecutionOffload from repositories.api_workflow_node_execution_repository import ( DifyAPIWorkflowNodeExecutionRepository, diff --git a/api/repositories/sqlalchemy_api_workflow_run_repository.py b/api/repositories/sqlalchemy_api_workflow_run_repository.py index 
b760696c5e..474b200fc5 100644 --- a/api/repositories/sqlalchemy_api_workflow_run_repository.py +++ b/api/repositories/sqlalchemy_api_workflow_run_repository.py @@ -28,15 +28,15 @@ from decimal import Decimal from typing import Any, cast import sqlalchemy as sa -from graphon.entities.pause_reason import HumanInputRequired, PauseReason, PauseReasonType, SchedulingPause -from graphon.enums import WorkflowExecutionStatus, WorkflowType -from graphon.nodes.human_input.entities import FormDefinition from pydantic import ValidationError from sqlalchemy import and_, delete, func, null, or_, select, tuple_ from sqlalchemy.engine import CursorResult from sqlalchemy.orm import Session, selectinload, sessionmaker from extensions.ext_storage import storage +from graphon.entities.pause_reason import HumanInputRequired, PauseReason, PauseReasonType, SchedulingPause +from graphon.enums import WorkflowExecutionStatus, WorkflowType +from graphon.nodes.human_input.entities import FormDefinition from libs.datetime_utils import naive_utc_now from libs.helper import convert_datetime_to_date from libs.infinite_scroll_pagination import InfiniteScrollPagination diff --git a/api/repositories/sqlalchemy_execution_extra_content_repository.py b/api/repositories/sqlalchemy_execution_extra_content_repository.py index feba5f7eb6..67f8795d3f 100644 --- a/api/repositories/sqlalchemy_execution_extra_content_repository.py +++ b/api/repositories/sqlalchemy_execution_extra_content_repository.py @@ -7,9 +7,6 @@ from collections import defaultdict from collections.abc import Sequence from typing import Any -from graphon.nodes.human_input.entities import FormDefinition -from graphon.nodes.human_input.enums import HumanInputFormStatus -from graphon.nodes.human_input.human_input_node import HumanInputNode from sqlalchemy import select from sqlalchemy.orm import Session, selectinload, sessionmaker @@ -21,6 +18,9 @@ from core.entities.execution_extra_content import ( from core.entities.execution_extra_content import ( HumanInputContent as HumanInputContentDomainModel, ) +from graphon.nodes.human_input.entities import FormDefinition +from graphon.nodes.human_input.enums import HumanInputFormStatus +from graphon.nodes.human_input.human_input_node import HumanInputNode from models.execution_extra_content import ( ExecutionExtraContent as ExecutionExtraContentModel, ) diff --git a/api/repositories/workflow_collaboration_repository.py b/api/repositories/workflow_collaboration_repository.py new file mode 100644 index 0000000000..000f80496d --- /dev/null +++ b/api/repositories/workflow_collaboration_repository.py @@ -0,0 +1,147 @@ +from __future__ import annotations + +import json +from typing import TypedDict + +from extensions.ext_redis import redis_client + +SESSION_STATE_TTL_SECONDS = 3600 +WORKFLOW_ONLINE_USERS_PREFIX = "workflow_online_users:" +WORKFLOW_LEADER_PREFIX = "workflow_leader:" +WS_SID_MAP_PREFIX = "ws_sid_map:" + + +class WorkflowSessionInfo(TypedDict): + user_id: str + username: str + avatar: str | None + sid: str + connected_at: int + + +class SidMapping(TypedDict): + workflow_id: str + user_id: str + + +class WorkflowCollaborationRepository: + def __init__(self) -> None: + self._redis = redis_client + + def __repr__(self) -> str: + return f"{self.__class__.__name__}(redis_client={self._redis})" + + @staticmethod + def workflow_key(workflow_id: str) -> str: + return f"{WORKFLOW_ONLINE_USERS_PREFIX}{workflow_id}" + + @staticmethod + def leader_key(workflow_id: str) -> str: + return f"{WORKFLOW_LEADER_PREFIX}{workflow_id}" + 
+ @staticmethod + def sid_key(sid: str) -> str: + return f"{WS_SID_MAP_PREFIX}{sid}" + + @staticmethod + def _decode(value: str | bytes | None) -> str | None: + if value is None: + return None + if isinstance(value, bytes): + return value.decode("utf-8") + return value + + def refresh_session_state(self, workflow_id: str, sid: str) -> None: + workflow_key = self.workflow_key(workflow_id) + sid_key = self.sid_key(sid) + if self._redis.exists(workflow_key): + self._redis.expire(workflow_key, SESSION_STATE_TTL_SECONDS) + if self._redis.exists(sid_key): + self._redis.expire(sid_key, SESSION_STATE_TTL_SECONDS) + + def set_session_info(self, workflow_id: str, session_info: WorkflowSessionInfo) -> None: + workflow_key = self.workflow_key(workflow_id) + self._redis.hset(workflow_key, session_info["sid"], json.dumps(session_info)) + self._redis.set( + self.sid_key(session_info["sid"]), + json.dumps({"workflow_id": workflow_id, "user_id": session_info["user_id"]}), + ex=SESSION_STATE_TTL_SECONDS, + ) + self.refresh_session_state(workflow_id, session_info["sid"]) + + def get_sid_mapping(self, sid: str) -> SidMapping | None: + raw = self._redis.get(self.sid_key(sid)) + if not raw: + return None + value = self._decode(raw) + if not value: + return None + try: + return json.loads(value) + except (TypeError, json.JSONDecodeError): + return None + + def delete_session(self, workflow_id: str, sid: str) -> None: + self._redis.hdel(self.workflow_key(workflow_id), sid) + self._redis.delete(self.sid_key(sid)) + + def session_exists(self, workflow_id: str, sid: str) -> bool: + return bool(self._redis.hexists(self.workflow_key(workflow_id), sid)) + + def sid_mapping_exists(self, sid: str) -> bool: + return bool(self._redis.exists(self.sid_key(sid))) + + def get_session_sids(self, workflow_id: str) -> list[str]: + raw_sids = self._redis.hkeys(self.workflow_key(workflow_id)) + decoded_sids: list[str] = [] + for sid in raw_sids: + decoded = self._decode(sid) + if decoded: + decoded_sids.append(decoded) + return decoded_sids + + def list_sessions(self, workflow_id: str) -> list[WorkflowSessionInfo]: + sessions_json = self._redis.hgetall(self.workflow_key(workflow_id)) + users: list[WorkflowSessionInfo] = [] + + for session_info_json in sessions_json.values(): + value = self._decode(session_info_json) + if not value: + continue + try: + session_info = json.loads(value) + except (TypeError, json.JSONDecodeError): + continue + + if not isinstance(session_info, dict): + continue + if "user_id" not in session_info or "username" not in session_info or "sid" not in session_info: + continue + + users.append( + { + "user_id": str(session_info["user_id"]), + "username": str(session_info["username"]), + "avatar": session_info.get("avatar"), + "sid": str(session_info["sid"]), + "connected_at": int(session_info.get("connected_at") or 0), + } + ) + + return users + + def get_current_leader(self, workflow_id: str) -> str | None: + raw = self._redis.get(self.leader_key(workflow_id)) + return self._decode(raw) + + def set_leader_if_absent(self, workflow_id: str, sid: str) -> bool: + return bool(self._redis.set(self.leader_key(workflow_id), sid, nx=True, ex=SESSION_STATE_TTL_SECONDS)) + + def set_leader(self, workflow_id: str, sid: str) -> None: + self._redis.set(self.leader_key(workflow_id), sid, ex=SESSION_STATE_TTL_SECONDS) + + def delete_leader(self, workflow_id: str) -> None: + self._redis.delete(self.leader_key(workflow_id)) + + def expire_leader(self, workflow_id: str) -> None: + 
self._redis.expire(self.leader_key(workflow_id), SESSION_STATE_TTL_SECONDS) diff --git a/api/schedule/create_tidb_serverless_task.py b/api/schedule/create_tidb_serverless_task.py index c4c203c150..e242b0c667 100644 --- a/api/schedule/create_tidb_serverless_task.py +++ b/api/schedule/create_tidb_serverless_task.py @@ -57,6 +57,7 @@ def create_clusters(batch_size): cluster_name=new_cluster["cluster_name"], account=new_cluster["account"], password=new_cluster["password"], + qdrant_endpoint=new_cluster.get("qdrant_endpoint"), active=False, status=TidbAuthBindingStatus.CREATING, ) diff --git a/api/services/app_dsl_service.py b/api/services/app_dsl_service.py index 40e1e5f8ab..78806927bc 100644 --- a/api/services/app_dsl_service.py +++ b/api/services/app_dsl_service.py @@ -3,19 +3,13 @@ import hashlib import logging import uuid from collections.abc import Mapping -from typing import cast +from typing import Any, cast from urllib.parse import urlparse from uuid import uuid4 import yaml from Crypto.Cipher import AES from Crypto.Util.Padding import pad, unpad -from graphon.enums import BuiltinNodeTypes -from graphon.model_runtime.utils.encoders import jsonable_encoder -from graphon.nodes.llm.entities import LLMNodeData -from graphon.nodes.parameter_extractor.entities import ParameterExtractorNodeData -from graphon.nodes.question_classifier.entities import QuestionClassifierNodeData -from graphon.nodes.tool.entities import ToolNodeData from packaging import version from packaging.version import parse as parse_version from pydantic import BaseModel @@ -35,6 +29,12 @@ from core.workflow.nodes.trigger_schedule.trigger_schedule_node import TriggerSc from events.app_event import app_model_config_was_updated, app_was_created from extensions.ext_redis import redis_client from factories import variable_factory +from graphon.enums import BuiltinNodeTypes +from graphon.model_runtime.utils.encoders import jsonable_encoder +from graphon.nodes.llm.entities import LLMNodeData +from graphon.nodes.parameter_extractor.entities import ParameterExtractorNodeData +from graphon.nodes.question_classifier.entities import QuestionClassifierNodeData +from graphon.nodes.tool.entities import ToolNodeData from libs.datetime_utils import naive_utc_now from models import Account, App, AppMode from models.model import AppModelConfig, AppModelConfigDict, IconType @@ -400,7 +400,7 @@ class AppDslService: self, *, app: App | None, - data: dict, + data: dict[str, Any], account: Account, name: str | None = None, description: str | None = None, @@ -455,7 +455,7 @@ class AppDslService: app.updated_by = account.id self._session.add(app) - self._session.commit() + self._session.flush() app_was_created.send(app, account=account) # save dependencies @@ -567,7 +567,7 @@ class AppDslService: @classmethod def _append_workflow_export_data( - cls, *, export_data: dict, app_model: App, include_secret: bool, workflow_id: str | None = None + cls, *, export_data: dict[str, Any], app_model: App, include_secret: bool, workflow_id: str | None = None ): """ Append workflow export data @@ -620,7 +620,7 @@ class AppDslService: ] @classmethod - def _append_model_config_export_data(cls, export_data: dict, app_model: App): + def _append_model_config_export_data(cls, export_data: dict[str, Any], app_model: App): """ Append model config export data :param export_data: export data diff --git a/api/services/app_model_config_service.py b/api/services/app_model_config_service.py index 2013c869af..8252de7753 100644 --- a/api/services/app_model_config_service.py +++ 
b/api/services/app_model_config_service.py @@ -1,3 +1,5 @@ +from typing import Any + from core.app.apps.agent_chat.app_config_manager import AgentChatAppConfigManager from core.app.apps.chat.app_config_manager import ChatAppConfigManager from core.app.apps.completion.app_config_manager import CompletionAppConfigManager @@ -6,7 +8,7 @@ from models.model import AppMode, AppModelConfigDict class AppModelConfigService: @classmethod - def validate_configuration(cls, tenant_id: str, config: dict, app_mode: AppMode) -> AppModelConfigDict: + def validate_configuration(cls, tenant_id: str, config: dict[str, Any], app_mode: AppMode) -> AppModelConfigDict: match app_mode: case AppMode.CHAT: return ChatAppConfigManager.config_validate(tenant_id, config) diff --git a/api/services/app_service.py b/api/services/app_service.py index 87d52a3159..afd98e2975 100644 --- a/api/services/app_service.py +++ b/api/services/app_service.py @@ -4,8 +4,6 @@ from typing import Any, TypedDict, cast import sqlalchemy as sa from flask_sqlalchemy.pagination import Pagination -from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from sqlalchemy import select from configs import dify_config @@ -17,6 +15,8 @@ from core.tools.tool_manager import ToolManager from core.tools.utils.configuration import ToolParameterConfigurationManager from events.app_event import app_was_created, app_was_deleted, app_was_updated from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelPropertyKey, ModelType +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from libs.datetime_utils import naive_utc_now from libs.login import current_user from models import Account @@ -32,7 +32,7 @@ logger = logging.getLogger(__name__) class AppService: - def get_paginate_apps(self, user_id: str, tenant_id: str, args: dict) -> Pagination | None: + def get_paginate_apps(self, user_id: str, tenant_id: str, args: dict[str, Any]) -> Pagination | None: """ Get app list with pagination :param user_id: user id @@ -78,7 +78,7 @@ class AppService: return app_models - def create_app(self, tenant_id: str, args: dict, account: Account) -> App: + def create_app(self, tenant_id: str, args: dict[str, Any], account: Account) -> App: """ Create app :param tenant_id: tenant id @@ -389,7 +389,7 @@ class AppService: """ app_mode = AppMode.value_of(app_model.mode) - meta: dict = {"tool_icons": {}} + meta: dict[str, Any] = {"tool_icons": {}} if app_mode in {AppMode.ADVANCED_CHAT, AppMode.WORKFLOW}: workflow = app_model.workflow diff --git a/api/services/app_task_service.py b/api/services/app_task_service.py index 0842e9d3e7..6e9d6b1c73 100644 --- a/api/services/app_task_service.py +++ b/api/services/app_task_service.py @@ -5,11 +5,10 @@ like stopping tasks, handling both legacy Redis flag mechanism and new GraphEngine command channel mechanism. 
""" -from graphon.graph_engine.manager import GraphEngineManager - from core.app.apps.base_app_queue_manager import AppQueueManager from core.app.entities.app_invoke_entities import InvokeFrom from extensions.ext_redis import redis_client +from graphon.graph_engine.manager import GraphEngineManager from models.model import AppMode diff --git a/api/services/audio_service.py b/api/services/audio_service.py index 1c7027efb4..60948e652b 100644 --- a/api/services/audio_service.py +++ b/api/services/audio_service.py @@ -5,12 +5,12 @@ from collections.abc import Generator from typing import cast from flask import Response, stream_with_context -from graphon.model_runtime.entities.model_entities import ModelType from werkzeug.datastructures import FileStorage from constants import AUDIO_EXTENSIONS from core.model_manager import ModelManager from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType from models.enums import MessageStatus from models.model import App, AppMode, Message from services.errors.audio import ( diff --git a/api/services/auth/api_key_auth_service.py b/api/services/auth/api_key_auth_service.py index 3282dcfb11..36b1517056 100644 --- a/api/services/auth/api_key_auth_service.py +++ b/api/services/auth/api_key_auth_service.py @@ -1,4 +1,5 @@ import json +from typing import Any from sqlalchemy import select @@ -19,7 +20,7 @@ class ApiKeyAuthService: return data_source_api_key_bindings @staticmethod - def create_provider_auth(tenant_id: str, args: dict): + def create_provider_auth(tenant_id: str, args: dict[str, Any]): auth_result = ApiKeyAuthFactory(args["provider"], args["credentials"]).validate_credentials() if auth_result: # Encrypt the api key diff --git a/api/services/billing_service.py b/api/services/billing_service.py index 75dd3519ad..c0e23cdc6f 100644 --- a/api/services/billing_service.py +++ b/api/services/billing_service.py @@ -2,7 +2,7 @@ import json import logging import os from collections.abc import Sequence -from typing import Literal, NotRequired, TypedDict +from typing import Any, Literal, NotRequired, TypedDict import httpx from pydantic import TypeAdapter @@ -637,7 +637,7 @@ class BillingService: start_time / end_time: RFC3339 strings (e.g. "2026-03-01T00:00:00Z"), optional. Returns {"notification_id": str}. 
""" - payload: dict = { + payload: dict[str, Any] = { "contents": contents, "frequency": frequency, "status": status, diff --git a/api/services/clear_free_plan_tenant_expired_logs.py b/api/services/clear_free_plan_tenant_expired_logs.py index ea12e40420..dcc93b4b0f 100644 --- a/api/services/clear_free_plan_tenant_expired_logs.py +++ b/api/services/clear_free_plan_tenant_expired_logs.py @@ -6,7 +6,6 @@ from concurrent.futures import ThreadPoolExecutor import click from flask import Flask, current_app -from graphon.model_runtime.utils.encoders import jsonable_encoder from sqlalchemy import delete, func, select from sqlalchemy.orm import Session, sessionmaker @@ -14,6 +13,7 @@ from configs import dify_config from enums.cloud_plan import CloudPlan from extensions.ext_database import db from extensions.ext_storage import storage +from graphon.model_runtime.utils.encoders import jsonable_encoder from models.account import Tenant from models.model import ( App, diff --git a/api/services/conversation_service.py b/api/services/conversation_service.py index f5085af59b..ee8a1c4edd 100644 --- a/api/services/conversation_service.py +++ b/api/services/conversation_service.py @@ -3,7 +3,6 @@ import logging from collections.abc import Callable, Sequence from typing import Any -from graphon.variables.types import SegmentType from sqlalchemy import asc, desc, func, or_, select from sqlalchemy.orm import Session @@ -13,6 +12,7 @@ from core.db.session_factory import session_factory from core.llm_generator.llm_generator import LLMGenerator from extensions.ext_database import db from factories import variable_factory +from graphon.variables.types import SegmentType from libs.datetime_utils import naive_utc_now from libs.infinite_scroll_pagination import InfiniteScrollPagination from models import Account, ConversationVariable diff --git a/api/services/conversation_variable_updater.py b/api/services/conversation_variable_updater.py index 95a8951951..287d513f48 100644 --- a/api/services/conversation_variable_updater.py +++ b/api/services/conversation_variable_updater.py @@ -1,7 +1,7 @@ -from graphon.variables.variables import VariableBase from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker +from graphon.variables.variables import VariableBase from models import ConversationVariable diff --git a/api/services/dataset_service.py b/api/services/dataset_service.py index e07e01ad42..e6f5f80a6d 100644 --- a/api/services/dataset_service.py +++ b/api/services/dataset_service.py @@ -10,9 +10,6 @@ from collections.abc import Sequence from typing import Any, Literal, TypedDict, cast import sqlalchemy as sa -from graphon.file import helpers as file_helpers -from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType -from graphon.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel from redis.exceptions import LockNotOwnedError from sqlalchemy import delete, exists, func, select, update from sqlalchemy.orm import Session, sessionmaker @@ -31,6 +28,9 @@ from events.dataset_event import dataset_was_deleted from events.document_event import document_was_deleted from extensions.ext_database import db from extensions.ext_redis import redis_client +from graphon.file import helpers as file_helpers +from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType +from graphon.model_runtime.model_providers.__base.text_embedding_model import TextEmbeddingModel from libs import helper from libs.datetime_utils import naive_utc_now from 
libs.login import current_user @@ -233,7 +233,7 @@ class DatasetService: embedding_model_provider: str | None = None, embedding_model_name: str | None = None, retrieval_model: RetrievalModel | None = None, - summary_index_setting: dict | None = None, + summary_index_setting: dict[str, Any] | None = None, ): # check if dataset name already exists if db.session.scalar(select(Dataset).where(Dataset.name == name, Dataset.tenant_id == tenant_id).limit(1)): @@ -2493,7 +2493,7 @@ class DocumentService: data_source_type: str, document_form: str, document_language: str, - data_source_info: dict, + data_source_info: dict[str, Any], created_from: str, position: int, account: Account, @@ -2850,7 +2850,7 @@ class DocumentService: raise ValueError("Process rule segmentation max_tokens is invalid") @classmethod - def estimate_args_validate(cls, args: dict): + def estimate_args_validate(cls, args: dict[str, Any]): if "info_list" not in args or not args["info_list"]: raise ValueError("Data source info is required") @@ -3132,7 +3132,7 @@ class DocumentService: class SegmentService: @classmethod - def segment_create_args_validate(cls, args: dict, document: Document): + def segment_create_args_validate(cls, args: dict[str, Any], document: Document): if document.doc_form == IndexStructureType.QA_INDEX: if "answer" not in args or not args["answer"]: raise ValueError("Answer is required") @@ -3149,7 +3149,7 @@ class SegmentService: raise ValueError(f"Exceeded maximum attachment limit of {single_chunk_attachment_limit}") @classmethod - def create_segment(cls, args: dict, document: Document, dataset: Dataset): + def create_segment(cls, args: dict[str, Any], document: Document, dataset: Dataset): assert isinstance(current_user, Account) assert current_user.current_tenant_id is not None diff --git a/api/services/datasource_provider_service.py b/api/services/datasource_provider_service.py index 9e7de36593..416bc8cef9 100644 --- a/api/services/datasource_provider_service.py +++ b/api/services/datasource_provider_service.py @@ -3,7 +3,6 @@ import time from collections.abc import Mapping from typing import Any -from graphon.model_runtime.entities.provider_entities import FormType from sqlalchemy import delete, func, select, update from sqlalchemy.orm import Session, sessionmaker @@ -18,6 +17,7 @@ from core.plugin.impl.oauth import OAuthHandler from core.tools.utils.encryption import ProviderConfigCache, ProviderConfigEncrypter, create_provider_encrypter from extensions.ext_database import db from extensions.ext_redis import redis_client +from graphon.model_runtime.entities.provider_entities import FormType from models.oauth import DatasourceOauthParamConfig, DatasourceOauthTenantParamConfig, DatasourceProvider from models.provider_ids import DatasourceProviderID from services.plugin.plugin_service import PluginService @@ -318,7 +318,7 @@ class DatasourceProviderService: self, tenant_id: str, datasource_provider_id: DatasourceProviderID, - client_params: dict | None, + client_params: dict[str, Any] | None, enabled: bool | None, ): """ @@ -352,7 +352,7 @@ class DatasourceProviderService: original_params = ( encrypter.decrypt(tenant_oauth_client_params.client_params) if tenant_oauth_client_params else {} ) - new_params: dict = { + new_params: dict[str, Any] = { key: value if value != HIDDEN_VALUE else original_params.get(key, UNKNOWN_VALUE) for key, value in client_params.items() } @@ -500,7 +500,7 @@ class DatasourceProviderService: provider_id: DatasourceProviderID, avatar_url: str | None, expire_at: int, - credentials: 
dict, + credentials: dict[str, Any], credential_id: str, ) -> None: """ @@ -566,7 +566,7 @@ class DatasourceProviderService: provider_id: DatasourceProviderID, avatar_url: str | None, expire_at: int, - credentials: dict, + credentials: dict[str, Any], ) -> None: """ add datasource oauth provider @@ -634,7 +634,7 @@ class DatasourceProviderService: name: str | None, tenant_id: str, provider_id: DatasourceProviderID, - credentials: dict, + credentials: dict[str, Any], ) -> None: """ validate datasource provider credentials. @@ -947,7 +947,13 @@ class DatasourceProviderService: return copy_credentials_list def update_datasource_credentials( - self, tenant_id: str, auth_id: str, provider: str, plugin_id: str, credentials: dict | None, name: str | None + self, + tenant_id: str, + auth_id: str, + provider: str, + plugin_id: str, + credentials: dict[str, Any] | None, + name: str | None, ) -> None: """ update datasource credentials. diff --git a/api/services/entities/external_knowledge_entities/external_knowledge_entities.py b/api/services/entities/external_knowledge_entities/external_knowledge_entities.py index c9fb1c9e21..110dbe5a5e 100644 --- a/api/services/entities/external_knowledge_entities/external_knowledge_entities.py +++ b/api/services/entities/external_knowledge_entities/external_knowledge_entities.py @@ -1,4 +1,4 @@ -from typing import Literal, Union +from typing import Any, Literal, Union from pydantic import BaseModel @@ -22,5 +22,5 @@ class ProcessStatusSetting(BaseModel): class ExternalKnowledgeApiSetting(BaseModel): url: str request_method: str - headers: dict | None = None - params: dict | None = None + headers: dict[str, Any] | None = None + params: dict[str, Any] | None = None diff --git a/api/services/entities/knowledge_entities/knowledge_entities.py b/api/services/entities/knowledge_entities/knowledge_entities.py index aee6004bff..b1fe352861 100644 --- a/api/services/entities/knowledge_entities/knowledge_entities.py +++ b/api/services/entities/knowledge_entities/knowledge_entities.py @@ -1,4 +1,4 @@ -from typing import Literal +from typing import Any, Literal from pydantic import BaseModel, field_validator @@ -87,7 +87,7 @@ class RetrievalModel(BaseModel): class MetaDataConfig(BaseModel): doc_type: str - doc_metadata: dict + doc_metadata: dict[str, Any] class KnowledgeConfig(BaseModel): @@ -97,7 +97,7 @@ class KnowledgeConfig(BaseModel): data_source: DataSource | None = None process_rule: ProcessRule | None = None retrieval_model: RetrievalModel | None = None - summary_index_setting: dict | None = None + summary_index_setting: dict[str, Any] | None = None doc_form: str = "text_model" doc_language: str = "English" embedding_model: str | None = None diff --git a/api/services/entities/knowledge_entities/rag_pipeline_entities.py b/api/services/entities/knowledge_entities/rag_pipeline_entities.py index 2afe9e1aa1..7fb7ed12bf 100644 --- a/api/services/entities/knowledge_entities/rag_pipeline_entities.py +++ b/api/services/entities/knowledge_entities/rag_pipeline_entities.py @@ -1,4 +1,4 @@ -from typing import Literal +from typing import Any, Literal from pydantic import BaseModel, field_validator @@ -73,7 +73,7 @@ class KnowledgeConfiguration(BaseModel): keyword_number: int | None = 10 retrieval_model: RetrievalSetting # add summary index setting - summary_index_setting: dict | None = None + summary_index_setting: dict[str, Any] | None = None @field_validator("embedding_model_provider", mode="before") @classmethod diff --git a/api/services/entities/model_provider_entities.py 
b/api/services/entities/model_provider_entities.py index a944ef6acd..6679c08ebd 100644 --- a/api/services/entities/model_provider_entities.py +++ b/api/services/entities/model_provider_entities.py @@ -1,15 +1,6 @@ from collections.abc import Sequence from enum import StrEnum -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.entities.provider_entities import ( - ConfigurateMethod, - ModelCredentialSchema, - ProviderCredentialSchema, - ProviderHelpEntity, - SimpleProviderEntity, -) from pydantic import BaseModel, ConfigDict, model_validator from configs import dify_config @@ -24,6 +15,15 @@ from core.entities.provider_entities import ( QuotaConfiguration, UnaddedModelConfiguration, ) +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.entities.provider_entities import ( + ConfigurateMethod, + ModelCredentialSchema, + ProviderCredentialSchema, + ProviderHelpEntity, + SimpleProviderEntity, +) from models.provider import ProviderType diff --git a/api/services/external_knowledge_service.py b/api/services/external_knowledge_service.py index 96db644d44..60b457ecd0 100644 --- a/api/services/external_knowledge_service.py +++ b/api/services/external_knowledge_service.py @@ -4,13 +4,13 @@ from typing import Any, cast from urllib.parse import urlparse import httpx -from graphon.nodes.http_request.exc import InvalidHttpMethodError from sqlalchemy import func, select from constants import HIDDEN_VALUE from core.helper import ssrf_proxy from core.rag.entities import MetadataFilteringCondition from extensions.ext_database import db +from graphon.nodes.http_request.exc import InvalidHttpMethodError from libs.datetime_utils import naive_utc_now from models.dataset import ( Dataset, @@ -47,7 +47,7 @@ class ExternalDatasetService: return external_knowledge_apis.items, external_knowledge_apis.total @classmethod - def validate_api_list(cls, api_settings: dict): + def validate_api_list(cls, api_settings: dict[str, Any]): if not api_settings: raise ValueError("api list is empty") if not api_settings.get("endpoint"): @@ -56,7 +56,7 @@ class ExternalDatasetService: raise ValueError("api_key is required") @staticmethod - def create_external_knowledge_api(tenant_id: str, user_id: str, args: dict) -> ExternalKnowledgeApis: + def create_external_knowledge_api(tenant_id: str, user_id: str, args: dict[str, Any]) -> ExternalKnowledgeApis: settings = args.get("settings") if settings is None: raise ValueError("settings is required") @@ -75,7 +75,7 @@ class ExternalDatasetService: return external_knowledge_api @staticmethod - def check_endpoint_and_api_key(settings: dict): + def check_endpoint_and_api_key(settings: dict[str, Any]): if "endpoint" not in settings or not settings["endpoint"]: raise ValueError("endpoint is required") if "api_key" not in settings or not settings["api_key"]: @@ -178,7 +178,9 @@ class ExternalDatasetService: return external_knowledge_binding @staticmethod - def document_create_args_validate(tenant_id: str, external_knowledge_api_id: str, process_parameter: dict): + def document_create_args_validate( + tenant_id: str, external_knowledge_api_id: str, process_parameter: dict[str, Any] + ): external_knowledge_api = db.session.scalar( select(ExternalKnowledgeApis) .where(ExternalKnowledgeApis.id == external_knowledge_api_id, ExternalKnowledgeApis.tenant_id == tenant_id) @@ 
-222,7 +224,7 @@ class ExternalDatasetService: return response @staticmethod - def assembling_headers(authorization: Authorization, headers: dict | None = None) -> dict[str, Any]: + def assembling_headers(authorization: Authorization, headers: dict[str, Any] | None = None) -> dict[str, Any]: authorization = deepcopy(authorization) if headers: headers = deepcopy(headers) @@ -248,11 +250,11 @@ class ExternalDatasetService: return headers @staticmethod - def get_external_knowledge_api_settings(settings: dict) -> ExternalKnowledgeApiSetting: + def get_external_knowledge_api_settings(settings: dict[str, Any]) -> ExternalKnowledgeApiSetting: return ExternalKnowledgeApiSetting.model_validate(settings) @staticmethod - def create_external_dataset(tenant_id: str, user_id: str, args: dict) -> Dataset: + def create_external_dataset(tenant_id: str, user_id: str, args: dict[str, Any]) -> Dataset: # check if dataset name already exists if db.session.scalar( select(Dataset).where(Dataset.name == args.get("name"), Dataset.tenant_id == tenant_id).limit(1) @@ -304,7 +306,7 @@ class ExternalDatasetService: tenant_id: str, dataset_id: str, query: str, - external_retrieval_parameters: dict, + external_retrieval_parameters: dict[str, Any], metadata_condition: MetadataFilteringCondition | None = None, ): external_knowledge_binding = db.session.scalar( diff --git a/api/services/feature_service.py b/api/services/feature_service.py index 9216a7fb99..e4eb9e7582 100644 --- a/api/services/feature_service.py +++ b/api/services/feature_service.py @@ -164,6 +164,7 @@ class SystemFeatureModel(BaseModel): enable_email_code_login: bool = False enable_email_password_login: bool = True enable_social_oauth_login: bool = False + enable_collaboration_mode: bool = False is_allow_register: bool = False is_allow_create_workspace: bool = False is_email_setup: bool = False @@ -244,6 +245,7 @@ class FeatureService: system_features.enable_email_code_login = dify_config.ENABLE_EMAIL_CODE_LOGIN system_features.enable_email_password_login = dify_config.ENABLE_EMAIL_PASSWORD_LOGIN system_features.enable_social_oauth_login = dify_config.ENABLE_SOCIAL_OAUTH_LOGIN + system_features.enable_collaboration_mode = dify_config.ENABLE_COLLABORATION_MODE system_features.is_allow_register = dify_config.ALLOW_REGISTER system_features.is_allow_create_workspace = dify_config.ALLOW_CREATE_WORKSPACE system_features.is_email_setup = dify_config.MAIL_TYPE is not None and dify_config.MAIL_TYPE != "" diff --git a/api/services/file_service.py b/api/services/file_service.py index 79a935de4b..52da2a7951 100644 --- a/api/services/file_service.py +++ b/api/services/file_service.py @@ -8,7 +8,6 @@ from tempfile import NamedTemporaryFile from typing import Literal from zipfile import ZIP_DEFLATED, ZipFile -from graphon.file import helpers as file_helpers from sqlalchemy import Engine, select from sqlalchemy.orm import Session, sessionmaker from werkzeug.exceptions import NotFound @@ -24,6 +23,7 @@ from core.rag.extractor.extract_processor import ExtractProcessor from extensions.ext_database import db from extensions.ext_storage import storage from extensions.storage.storage_type import StorageType +from graphon.file import helpers as file_helpers from libs.datetime_utils import naive_utc_now from libs.helper import extract_tenant_id from models import Account diff --git a/api/services/hit_testing_service.py b/api/services/hit_testing_service.py index 4a21e3c5bd..ca84b2a3d8 100644 --- a/api/services/hit_testing_service.py +++ b/api/services/hit_testing_service.py @@ 
-3,8 +3,6 @@ import logging import time from typing import Any, TypedDict -from graphon.model_runtime.entities import LLMMode - from core.app.app_config.entities import ModelConfig from core.rag.datasource.retrieval_service import RetrievalService from core.rag.index_processor.constant.query_type import QueryType @@ -12,6 +10,7 @@ from core.rag.models.document import Document from core.rag.retrieval.dataset_retrieval import DatasetRetrieval from core.rag.retrieval.retrieval_methods import RetrievalMethod from extensions.ext_database import db +from graphon.model_runtime.entities import LLMMode from models import Account from models.dataset import Dataset, DatasetQuery from models.enums import CreatorUserRole, DatasetQuerySource @@ -45,7 +44,7 @@ class HitTestingService: query: str, account: Account, retrieval_model: dict[str, Any] | None, - external_retrieval_model: dict, + external_retrieval_model: dict[str, Any], attachment_ids: list | None = None, limit: int = 10, ): @@ -125,8 +124,8 @@ class HitTestingService: dataset: Dataset, query: str, account: Account, - external_retrieval_model: dict | None = None, - metadata_filtering_conditions: dict | None = None, + external_retrieval_model: dict[str, Any] | None = None, + metadata_filtering_conditions: dict[str, Any] | None = None, ): if dataset.provider != "external": return { diff --git a/api/services/human_input_delivery_test_service.py b/api/services/human_input_delivery_test_service.py index 77576fa4c0..68ef67dec1 100644 --- a/api/services/human_input_delivery_test_service.py +++ b/api/services/human_input_delivery_test_service.py @@ -4,7 +4,6 @@ from dataclasses import dataclass, field from enum import StrEnum from typing import Protocol -from graphon.runtime import VariablePool from sqlalchemy import Engine, select from sqlalchemy.orm import sessionmaker @@ -18,6 +17,7 @@ from core.workflow.human_input_compat import ( ) from extensions.ext_database import db from extensions.ext_mail import mail +from graphon.runtime import VariablePool from libs.email_template_renderer import render_email_template from models import Account, TenantAccountJoin from services.feature_service import FeatureService diff --git a/api/services/human_input_service.py b/api/services/human_input_service.py index 02a6620fc7..76598d31ac 100644 --- a/api/services/human_input_service.py +++ b/api/services/human_input_service.py @@ -3,12 +3,6 @@ from collections.abc import Mapping from datetime import datetime, timedelta from typing import Any -from graphon.nodes.human_input.entities import ( - FormDefinition, - HumanInputSubmissionValidationError, - validate_human_input_submission, -) -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from sqlalchemy import Engine, select from sqlalchemy.orm import Session, sessionmaker @@ -17,6 +11,12 @@ from core.repositories.human_input_repository import ( HumanInputFormRecord, HumanInputFormSubmissionRepository, ) +from graphon.nodes.human_input.entities import ( + FormDefinition, + HumanInputSubmissionValidationError, + validate_human_input_submission, +) +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from libs.datetime_utils import ensure_naive_utc, naive_utc_now from libs.exception import BaseHTTPException from models.human_input import RecipientType diff --git a/api/services/message_service.py b/api/services/message_service.py index 5b133b0c04..98f24dd6a6 100644 --- a/api/services/message_service.py +++ b/api/services/message_service.py @@ -1,6 +1,5 @@ 
from collections.abc import Sequence -from graphon.model_runtime.entities.model_entities import ModelType from pydantic import TypeAdapter from sqlalchemy import select from sqlalchemy.orm import sessionmaker @@ -14,6 +13,7 @@ from core.ops.entities.trace_entity import TraceTaskName from core.ops.ops_trace_manager import TraceQueueManager, TraceTask from core.ops.utils import measure_time from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType from libs.infinite_scroll_pagination import InfiniteScrollPagination from models import Account from models.enums import FeedbackFromSource, FeedbackRating diff --git a/api/services/model_load_balancing_service.py b/api/services/model_load_balancing_service.py index 41b6b885b2..c269346f5f 100644 --- a/api/services/model_load_balancing_service.py +++ b/api/services/model_load_balancing_service.py @@ -2,12 +2,6 @@ import json import logging from typing import Any, TypedDict -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.entities.provider_entities import ( - ModelCredentialSchema, - ProviderCredentialSchema, -) -from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory from sqlalchemy import or_, select from constants import HIDDEN_VALUE @@ -18,6 +12,12 @@ from core.model_manager import LBModelManager from core.plugin.impl.model_runtime_factory import create_plugin_model_assembly, create_plugin_provider_manager from core.provider_manager import ProviderManager from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.entities.provider_entities import ( + ModelCredentialSchema, + ProviderCredentialSchema, +) +from graphon.model_runtime.model_providers.model_provider_factory import ModelProviderFactory from libs.datetime_utils import naive_utc_now from models.enums import CredentialSourceType from models.provider import LoadBalancingModelConfig, ProviderCredential, ProviderModelCredential @@ -502,7 +502,7 @@ class ModelLoadBalancingService: provider: str, model: str, model_type: str, - credentials: dict, + credentials: dict[str, Any], config_id: str | None = None, ): """ @@ -561,7 +561,7 @@ class ModelLoadBalancingService: provider_configuration: ProviderConfiguration, model_type: ModelType, model: str, - credentials: dict, + credentials: dict[str, Any], load_balancing_model_config: LoadBalancingModelConfig | None = None, model_provider_factory: ModelProviderFactory | None = None, validate: bool = True, diff --git a/api/services/model_provider_service.py b/api/services/model_provider_service.py index 3f37c9b176..51cda79661 100644 --- a/api/services/model_provider_service.py +++ b/api/services/model_provider_service.py @@ -1,10 +1,10 @@ import logging - -from graphon.model_runtime.entities.model_entities import ModelType, ParameterRule +from typing import Any from core.entities.model_entities import ModelWithProviderEntity, ProviderModelWithStatusEntity from core.plugin.impl.model_runtime_factory import create_plugin_model_provider_factory, create_plugin_provider_manager from core.provider_manager import ProviderManager +from graphon.model_runtime.entities.model_entities import ModelType, ParameterRule from models.provider import ProviderType from services.entities.model_provider_entities import ( CustomConfigurationResponse, @@ -168,7 +168,9 @@ class ModelProviderService: model_name=model, ) - def get_provider_credential(self, tenant_id: str, 
provider: str, credential_id: str | None = None) -> dict | None: + def get_provider_credential( + self, tenant_id: str, provider: str, credential_id: str | None = None + ) -> dict[str, Any] | None: """ get provider credentials. @@ -180,7 +182,7 @@ class ModelProviderService: provider_configuration = self._get_provider_configuration(tenant_id, provider) return provider_configuration.get_provider_credential(credential_id=credential_id) - def validate_provider_credentials(self, tenant_id: str, provider: str, credentials: dict): + def validate_provider_credentials(self, tenant_id: str, provider: str, credentials: dict[str, Any]): """ validate provider credentials before saving. @@ -192,7 +194,7 @@ class ModelProviderService: provider_configuration.validate_provider_credentials(credentials) def create_provider_credential( - self, tenant_id: str, provider: str, credentials: dict, credential_name: str | None + self, tenant_id: str, provider: str, credentials: dict[str, Any], credential_name: str | None ) -> None: """ Create and save new provider credentials. @@ -210,7 +212,7 @@ class ModelProviderService: self, tenant_id: str, provider: str, - credentials: dict, + credentials: dict[str, Any], credential_id: str, credential_name: str | None, ) -> None: @@ -254,7 +256,7 @@ class ModelProviderService: def get_model_credential( self, tenant_id: str, provider: str, model_type: str, model: str, credential_id: str | None - ) -> dict | None: + ) -> dict[str, Any] | None: """ Retrieve model-specific credentials. @@ -270,7 +272,9 @@ class ModelProviderService: model_type=ModelType.value_of(model_type), model=model, credential_id=credential_id ) - def validate_model_credentials(self, tenant_id: str, provider: str, model_type: str, model: str, credentials: dict): + def validate_model_credentials( + self, tenant_id: str, provider: str, model_type: str, model: str, credentials: dict[str, Any] + ): """ validate model credentials. @@ -287,7 +291,13 @@ class ModelProviderService: ) def create_model_credential( - self, tenant_id: str, provider: str, model_type: str, model: str, credentials: dict, credential_name: str | None + self, + tenant_id: str, + provider: str, + model_type: str, + model: str, + credentials: dict[str, Any], + credential_name: str | None, ) -> None: """ create and save model credentials. 
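Background sketch (not from this changeset) for the recurring `dict` -> `dict[str, Any]` annotation change throughout these service files: a bare `dict` leaves both key and value types unconstrained for the type checker, while `dict[str, Any]` at least pins the key type. The function names below are illustrative only, and the flagged call assumes a checker such as mypy or basedpyright.

from typing import Any


def validate_loose(credentials: dict) -> None:
    # bare dict: seen as dict[Any, Any], so nothing about keys or values is checked
    ...


def validate_typed(credentials: dict[str, Any]) -> None:
    # keys must be str; values remain unconstrained
    ...


validate_loose({1: "x"})                   # accepted: key type is unconstrained
validate_typed({1: "x"})                   # flagged by the checker: int key vs str
validate_typed({"auth_type": "api_key"})   # accepted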
@@ -314,7 +324,7 @@ class ModelProviderService: provider: str, model_type: str, model: str, - credentials: dict, + credentials: dict[str, Any], credential_id: str, credential_name: str | None, ) -> None: diff --git a/api/services/plugin/oauth_service.py b/api/services/plugin/oauth_service.py index 88dec062a0..789b5fa5b7 100644 --- a/api/services/plugin/oauth_service.py +++ b/api/services/plugin/oauth_service.py @@ -1,5 +1,6 @@ import json import uuid +from typing import Any from core.plugin.impl.base import BasePluginClient from extensions.ext_redis import redis_client @@ -16,7 +17,7 @@ class OAuthProxyService(BasePluginClient): tenant_id: str, plugin_id: str, provider: str, - extra_data: dict = {}, + extra_data: dict[str, Any] = {}, credential_id: str | None = None, ): """ diff --git a/api/services/rag_pipeline/rag_pipeline.py b/api/services/rag_pipeline/rag_pipeline.py index 5fc5b412b3..968600d1bc 100644 --- a/api/services/rag_pipeline/rag_pipeline.py +++ b/api/services/rag_pipeline/rag_pipeline.py @@ -9,15 +9,6 @@ from typing import Any, cast from uuid import uuid4 from flask_login import current_user -from graphon.entities import WorkflowNodeExecution -from graphon.enums import BuiltinNodeTypes, ErrorStrategy, NodeType, WorkflowNodeExecutionStatus -from graphon.errors import WorkflowNodeRunFailedError -from graphon.graph_events import GraphNodeEventBase, NodeRunFailedEvent, NodeRunSucceededEvent -from graphon.node_events import NodeRunResult -from graphon.nodes.base.node import Node -from graphon.nodes.http_request import HTTP_REQUEST_CONFIG_FILTER_KEY, build_http_request_config -from graphon.runtime import VariablePool -from graphon.variables.variables import Variable, VariableBase from sqlalchemy import func, select from sqlalchemy.orm import Session, sessionmaker @@ -53,6 +44,15 @@ from core.workflow.variable_pool_initializer import add_variables_to_pool from core.workflow.workflow_entry import WorkflowEntry from enterprise.telemetry.draft_trace import enqueue_draft_node_execution_trace from extensions.ext_database import db +from graphon.entities import WorkflowNodeExecution +from graphon.enums import BuiltinNodeTypes, ErrorStrategy, NodeType, WorkflowNodeExecutionStatus +from graphon.errors import WorkflowNodeRunFailedError +from graphon.graph_events import GraphNodeEventBase, NodeRunFailedEvent, NodeRunSucceededEvent +from graphon.node_events import NodeRunResult +from graphon.nodes.base.node import Node +from graphon.nodes.http_request import HTTP_REQUEST_CONFIG_FILTER_KEY, build_http_request_config +from graphon.runtime import VariablePool +from graphon.variables.variables import Variable, VariableBase from libs.infinite_scroll_pagination import InfiniteScrollPagination from models import Account from models.dataset import ( # type: ignore @@ -104,7 +104,7 @@ class RagPipelineService: self._workflow_run_repo = DifyAPIRepositoryFactory.create_api_workflow_run_repository(session_maker) @classmethod - def get_pipeline_templates(cls, type: str = "built-in", language: str = "en-US") -> dict: + def get_pipeline_templates(cls, type: str = "built-in", language: str = "en-US") -> dict[str, Any]: if type == "built-in": mode = dify_config.HOSTED_FETCH_PIPELINE_TEMPLATES_MODE retrieval_instance = PipelineTemplateRetrievalFactory.get_pipeline_template_factory(mode)() @@ -120,7 +120,7 @@ class RagPipelineService: return result @classmethod - def get_pipeline_template_detail(cls, template_id: str, type: str = "built-in") -> dict | None: + def get_pipeline_template_detail(cls, template_id: 
str, type: str = "built-in") -> dict[str, Any] | None: """ Get pipeline template detail. @@ -131,7 +131,7 @@ class RagPipelineService: if type == "built-in": mode = dify_config.HOSTED_FETCH_PIPELINE_TEMPLATES_MODE retrieval_instance = PipelineTemplateRetrievalFactory.get_pipeline_template_factory(mode)() - built_in_result: dict | None = retrieval_instance.get_pipeline_template_detail(template_id) + built_in_result: dict[str, Any] | None = retrieval_instance.get_pipeline_template_detail(template_id) if built_in_result is None: logger.warning( "pipeline template retrieval returned empty result, template_id: %s, mode: %s", @@ -142,7 +142,7 @@ class RagPipelineService: else: mode = "customized" retrieval_instance = PipelineTemplateRetrievalFactory.get_pipeline_template_factory(mode)() - customized_result: dict | None = retrieval_instance.get_pipeline_template_detail(template_id) + customized_result: dict[str, Any] | None = retrieval_instance.get_pipeline_template_detail(template_id) return customized_result @classmethod @@ -297,7 +297,7 @@ class RagPipelineService: self, *, pipeline: Pipeline, - graph: dict, + graph: dict[str, Any], unique_hash: str | None, account: Account, environment_variables: Sequence[VariableBase], @@ -467,7 +467,9 @@ class RagPipelineService: return default_block_configs - def get_default_block_config(self, node_type: str, filters: dict | None = None) -> Mapping[str, object] | None: + def get_default_block_config( + self, node_type: str, filters: dict[str, Any] | None = None + ) -> Mapping[str, object] | None: """ Get default config of node. :param node_type: node type @@ -500,7 +502,7 @@ class RagPipelineService: return default_config def run_draft_workflow_node( - self, pipeline: Pipeline, node_id: str, user_inputs: dict, account: Account + self, pipeline: Pipeline, node_id: str, user_inputs: dict[str, Any], account: Account ) -> WorkflowNodeExecutionModel | None: """ Run draft workflow node @@ -582,7 +584,7 @@ class RagPipelineService: self, pipeline: Pipeline, node_id: str, - user_inputs: dict, + user_inputs: dict[str, Any], account: Account, datasource_type: str, is_published: bool, @@ -749,7 +751,7 @@ class RagPipelineService: self, pipeline: Pipeline, node_id: str, - user_inputs: dict, + user_inputs: dict[str, Any], account: Account, datasource_type: str, is_published: bool, @@ -979,7 +981,7 @@ class RagPipelineService: return workflow_node_execution def update_workflow( - self, *, session: Session, workflow_id: str, tenant_id: str, account_id: str, data: dict + self, *, session: Session, workflow_id: str, tenant_id: str, account_id: str, data: dict[str, Any] ) -> Workflow | None: """ Update workflow attributes @@ -1099,7 +1101,9 @@ class RagPipelineService: ] return datasource_provider_variables - def get_rag_pipeline_paginate_workflow_runs(self, pipeline: Pipeline, args: dict) -> InfiniteScrollPagination: + def get_rag_pipeline_paginate_workflow_runs( + self, pipeline: Pipeline, args: dict[str, Any] + ) -> InfiniteScrollPagination: """ Get debug workflow run list Only return triggered_from == debugging @@ -1169,7 +1173,7 @@ class RagPipelineService: return list(node_executions) @classmethod - def publish_customized_pipeline_template(cls, pipeline_id: str, args: dict): + def publish_customized_pipeline_template(cls, pipeline_id: str, args: dict[str, Any]): """ Publish customized pipeline template """ @@ -1259,7 +1263,7 @@ class RagPipelineService: ) return node_exec - def set_datasource_variables(self, pipeline: Pipeline, args: dict, current_user: Account): + 
def set_datasource_variables(self, pipeline: Pipeline, args: dict[str, Any], current_user: Account): """ Set datasource variables """ @@ -1346,7 +1350,7 @@ class RagPipelineService: ) return workflow_node_execution_db_model - def get_recommended_plugins(self, type: str) -> dict: + def get_recommended_plugins(self, type: str) -> dict[str, Any]: # Query active recommended plugins stmt = select(PipelineRecommendedPlugin).where(PipelineRecommendedPlugin.active == True) if type and type != "all": diff --git a/api/services/rag_pipeline/rag_pipeline_dsl_service.py b/api/services/rag_pipeline/rag_pipeline_dsl_service.py index 65bdf43af5..f315d053cb 100644 --- a/api/services/rag_pipeline/rag_pipeline_dsl_service.py +++ b/api/services/rag_pipeline/rag_pipeline_dsl_service.py @@ -5,7 +5,7 @@ import logging import uuid from collections.abc import Mapping from datetime import UTC, datetime -from typing import cast +from typing import Any, cast from urllib.parse import urlparse from uuid import uuid4 @@ -13,12 +13,6 @@ import yaml # type: ignore from Crypto.Cipher import AES from Crypto.Util.Padding import pad, unpad from flask_login import current_user -from graphon.enums import BuiltinNodeTypes -from graphon.model_runtime.utils.encoders import jsonable_encoder -from graphon.nodes.llm.entities import LLMNodeData -from graphon.nodes.parameter_extractor.entities import ParameterExtractorNodeData -from graphon.nodes.question_classifier.entities import QuestionClassifierNodeData -from graphon.nodes.tool.entities import ToolNodeData from packaging import version from pydantic import BaseModel from sqlalchemy import select @@ -33,6 +27,12 @@ from core.workflow.nodes.knowledge_index import KNOWLEDGE_INDEX_NODE_TYPE from core.workflow.nodes.knowledge_retrieval.entities import KnowledgeRetrievalNodeData from extensions.ext_redis import redis_client from factories import variable_factory +from graphon.enums import BuiltinNodeTypes +from graphon.model_runtime.utils.encoders import jsonable_encoder +from graphon.nodes.llm.entities import LLMNodeData +from graphon.nodes.parameter_extractor.entities import ParameterExtractorNodeData +from graphon.nodes.question_classifier.entities import QuestionClassifierNodeData +from graphon.nodes.tool.entities import ToolNodeData from models import Account from models.dataset import Dataset, DatasetCollectionBinding, Pipeline from models.enums import CollectionBindingType, DatasetRuntimeMode @@ -526,7 +526,7 @@ class RagPipelineDslService: self, *, pipeline: Pipeline | None, - data: dict, + data: dict[str, Any], account: Account, dependencies: list[PluginDependency] | None = None, ) -> Pipeline: @@ -660,7 +660,9 @@ class RagPipelineDslService: return yaml.dump(export_data, allow_unicode=True) # type: ignore - def _append_workflow_export_data(self, *, export_data: dict, pipeline: Pipeline, include_secret: bool) -> None: + def _append_workflow_export_data( + self, *, export_data: dict[str, Any], pipeline: Pipeline, include_secret: bool + ) -> None: """ Append workflow export data :param export_data: export data diff --git a/api/services/rag_pipeline/rag_pipeline_transform_service.py b/api/services/rag_pipeline/rag_pipeline_transform_service.py index c3b00fe109..f08ec7474b 100644 --- a/api/services/rag_pipeline/rag_pipeline_transform_service.py +++ b/api/services/rag_pipeline/rag_pipeline_transform_service.py @@ -2,6 +2,7 @@ import json import logging from datetime import UTC, datetime from pathlib import Path +from typing import Any from uuid import uuid4 import yaml @@ -154,7 
+155,7 @@ class RagPipelineTransformService: raise ValueError("Unsupported doc form") return pipeline_yaml - def _deal_file_extensions(self, node: dict): + def _deal_file_extensions(self, node: dict[str, Any]): file_extensions = node.get("data", {}).get("fileExtensions", []) if not file_extensions: return node @@ -167,7 +168,7 @@ class RagPipelineTransformService: dataset: Dataset, indexing_technique: str | None, retrieval_model: RetrievalSetting | None, - node: dict, + node: dict[str, Any], ): knowledge_configuration_dict = node.get("data", {}) @@ -191,7 +192,7 @@ class RagPipelineTransformService: def _create_pipeline( self, - data: dict, + data: dict[str, Any], ) -> Pipeline: """Create a new app or update an existing one.""" pipeline_data = data.get("rag_pipeline", {}) @@ -258,7 +259,7 @@ class RagPipelineTransformService: db.session.add(pipeline) return pipeline - def _deal_dependencies(self, pipeline_yaml: dict, tenant_id: str): + def _deal_dependencies(self, pipeline_yaml: dict[str, Any], tenant_id: str): installer_manager = PluginInstaller() installed_plugins = installer_manager.list_plugins(tenant_id) diff --git a/api/services/recommend_app/buildin/buildin_retrieval.py b/api/services/recommend_app/buildin/buildin_retrieval.py index 64751d186c..16dc66cd76 100644 --- a/api/services/recommend_app/buildin/buildin_retrieval.py +++ b/api/services/recommend_app/buildin/buildin_retrieval.py @@ -1,6 +1,7 @@ import json from os import path from pathlib import Path +from typing import Any from flask import current_app @@ -13,7 +14,7 @@ class BuildInRecommendAppRetrieval(RecommendAppRetrievalBase): Retrieval recommended app from buildin, the location is constants/recommended_apps.json """ - builtin_data: dict | None = None + builtin_data: dict[str, Any] | None = None def get_type(self) -> str: return RecommendAppType.BUILDIN @@ -53,7 +54,7 @@ class BuildInRecommendAppRetrieval(RecommendAppRetrievalBase): return builtin_data.get("recommended_apps", {}).get(language, {}) @classmethod - def fetch_recommended_app_detail_from_builtin(cls, app_id: str) -> dict | None: + def fetch_recommended_app_detail_from_builtin(cls, app_id: str) -> dict[str, Any] | None: """ Fetch recommended app detail from builtin. :param app_id: App ID diff --git a/api/services/recommend_app/remote/remote_retrieval.py b/api/services/recommend_app/remote/remote_retrieval.py index b217c9026a..5818be0480 100644 --- a/api/services/recommend_app/remote/remote_retrieval.py +++ b/api/services/recommend_app/remote/remote_retrieval.py @@ -1,4 +1,5 @@ import logging +from typing import Any import httpx @@ -35,7 +36,7 @@ class RemoteRecommendAppRetrieval(RecommendAppRetrievalBase): return RecommendAppType.REMOTE @classmethod - def fetch_recommended_app_detail_from_dify_official(cls, app_id: str) -> dict | None: + def fetch_recommended_app_detail_from_dify_official(cls, app_id: str) -> dict[str, Any] | None: """ Fetch recommended app detail from dify official. 
:param app_id: App ID @@ -46,7 +47,7 @@ class RemoteRecommendAppRetrieval(RecommendAppRetrievalBase): response = httpx.get(url, timeout=httpx.Timeout(10.0, connect=3.0)) if response.status_code != 200: return None - data: dict = response.json() + data: dict[str, Any] = response.json() return data @classmethod @@ -62,7 +63,7 @@ class RemoteRecommendAppRetrieval(RecommendAppRetrievalBase): if response.status_code != 200: raise ValueError(f"fetch recommended apps failed, status code: {response.status_code}") - result: dict = response.json() + result: dict[str, Any] = response.json() if "categories" in result: result["categories"] = sorted(result["categories"]) diff --git a/api/services/recommended_app_service.py b/api/services/recommended_app_service.py index 9819822103..134dd37a3e 100644 --- a/api/services/recommended_app_service.py +++ b/api/services/recommended_app_service.py @@ -1,3 +1,5 @@ +from typing import Any + from sqlalchemy import select from configs import dify_config @@ -37,7 +39,7 @@ class RecommendedAppService: return result @classmethod - def get_recommend_app_detail(cls, app_id: str) -> dict | None: + def get_recommend_app_detail(cls, app_id: str) -> dict[str, Any] | None: """ Get recommend app detail. :param app_id: app id @@ -45,7 +47,7 @@ class RecommendedAppService: """ mode = dify_config.HOSTED_FETCH_APP_TEMPLATES_MODE retrieval_instance = RecommendAppRetrievalFactory.get_recommend_app_factory(mode)() - result: dict = retrieval_instance.get_recommend_app_detail(app_id) + result: dict[str, Any] = retrieval_instance.get_recommend_app_detail(app_id) if FeatureService.get_system_features().enable_trial_app: app_id = result["id"] trial_app_model = db.session.scalar(select(TrialApp).where(TrialApp.app_id == app_id).limit(1)) diff --git a/api/services/retention/workflow_run/archive_paid_plan_workflow_run.py b/api/services/retention/workflow_run/archive_paid_plan_workflow_run.py index ab60986bfe..21be411bea 100644 --- a/api/services/retention/workflow_run/archive_paid_plan_workflow_run.py +++ b/api/services/retention/workflow_run/archive_paid_plan_workflow_run.py @@ -27,13 +27,13 @@ from dataclasses import dataclass, field from typing import Any, TypedDict import click -from graphon.enums import WorkflowType from sqlalchemy import inspect from sqlalchemy.orm import Session, sessionmaker from configs import dify_config from enums.cloud_plan import CloudPlan from extensions.ext_database import db +from graphon.enums import WorkflowType from libs.archive_storage import ( ArchiveStorage, ArchiveStorageNotConfiguredError, diff --git a/api/services/summary_index_service.py b/api/services/summary_index_service.py index c906e3bca3..a91f49e9e6 100644 --- a/api/services/summary_index_service.py +++ b/api/services/summary_index_service.py @@ -6,8 +6,6 @@ import uuid from datetime import UTC, datetime from typing import TypedDict, cast -from graphon.model_runtime.entities.llm_entities import LLMUsage -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import select from sqlalchemy.orm import Session @@ -18,6 +16,8 @@ from core.rag.index_processor.constant.doc_type import DocType from core.rag.index_processor.constant.index_type import IndexTechniqueType from core.rag.index_processor.index_processor_base import SummaryIndexSettingDict from core.rag.models.document import Document +from graphon.model_runtime.entities.llm_entities import LLMUsage +from graphon.model_runtime.entities.model_entities import ModelType from libs import helper from models.dataset 
import Dataset, DocumentSegment, DocumentSegmentSummary from models.dataset import Document as DatasetDocument diff --git a/api/services/tools/api_tools_manage_service.py b/api/services/tools/api_tools_manage_service.py index dfc0c2c63f..5ff2c21749 100644 --- a/api/services/tools/api_tools_manage_service.py +++ b/api/services/tools/api_tools_manage_service.py @@ -2,9 +2,9 @@ import json import logging from typing import Any, TypedDict, cast -from graphon.model_runtime.utils.encoders import jsonable_encoder from httpx import get from sqlalchemy import select +from sqlalchemy.orm import sessionmaker from core.entities.provider_entities import ProviderConfig from core.tools.__base.tool_runtime import ToolRuntime @@ -16,11 +16,13 @@ from core.tools.entities.tool_entities import ( ApiProviderAuthType, ApiProviderSchemaType, ) +from core.tools.errors import ApiToolProviderNotFoundError from core.tools.tool_label_manager import ToolLabelManager from core.tools.tool_manager import ToolManager from core.tools.utils.encryption import create_tool_provider_encrypter from core.tools.utils.parser import ApiBasedToolSchemaParser from extensions.ext_database import db +from graphon.model_runtime.utils.encoders import jsonable_encoder from models.tools import ApiToolProvider from services.tools.tools_transform_service import ToolTransformService @@ -92,7 +94,7 @@ class ApiToolManageService: @staticmethod def convert_schema_to_tool_bundles( - schema: str, extra_info: dict | None = None + schema: str, extra_info: dict[str, Any] | None = None ) -> tuple[list[ApiToolBundle], ApiProviderSchemaType]: """ convert schema to tool bundles @@ -109,78 +111,92 @@ class ApiToolManageService: user_id: str, tenant_id: str, provider_name: str, - icon: dict, - credentials: dict, + icon: dict[str, Any], + credentials: dict[str, Any], schema_type: ApiProviderSchemaType, schema: str, privacy_policy: str, custom_disclaimer: str, labels: list[str], - ): + ) -> dict[str, Any]: """ - create api tool provider + Create a new API tool provider. + + :param user_id: The ID of the user creating the provider. + :param tenant_id: The ID of the workspace/tenant. + :param provider_name: The name of the API tool provider. + :param icon: The icon configuration for the provider. + :param credentials: The credentials for the provider. + :param schema_type: The type of schema (e.g., OpenAPI). + :param schema: The raw schema string. + :param privacy_policy: The privacy policy URL or text. + :param custom_disclaimer: Custom disclaimer text. + :param labels: A list of labels for the provider. + :return: A dictionary indicating the result status. 
""" + provider_name = provider_name.strip() # check if the provider exists - provider = db.session.scalar( - select(ApiToolProvider) - .where( - ApiToolProvider.tenant_id == tenant_id, - ApiToolProvider.name == provider_name, + # Create new session with automatic transaction management + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + provider: ApiToolProvider | None = _session.scalar( + select(ApiToolProvider) + .where( + ApiToolProvider.tenant_id == tenant_id, + ApiToolProvider.name == provider_name, + ) + .limit(1) ) - .limit(1) - ) - if provider is not None: - raise ValueError(f"provider {provider_name} already exists") + if provider is not None: + raise ValueError(f"provider {provider_name} already exists") - # parse openapi to tool bundle - extra_info: dict[str, str] = {} - # extra info like description will be set here - tool_bundles, schema_type = ApiToolManageService.convert_schema_to_tool_bundles(schema, extra_info) + # parse openapi to tool bundle + extra_info: dict[str, str] = {} + # extra info like description will be set here + tool_bundles, schema_type = ApiToolManageService.convert_schema_to_tool_bundles(schema, extra_info) - if len(tool_bundles) > 100: - raise ValueError("the number of apis should be less than 100") + if len(tool_bundles) > 100: + raise ValueError("the number of apis should be less than 100") - # create db provider - db_provider = ApiToolProvider( - tenant_id=tenant_id, - user_id=user_id, - name=provider_name, - icon=json.dumps(icon), - schema=schema, - description=extra_info.get("description", ""), - schema_type_str=schema_type, - tools_str=json.dumps(jsonable_encoder(tool_bundles)), - credentials_str="{}", - privacy_policy=privacy_policy, - custom_disclaimer=custom_disclaimer, - ) + # create API tool provider + api_tool_provider = ApiToolProvider( + tenant_id=tenant_id, + user_id=user_id, + name=provider_name, + icon=json.dumps(icon), + schema=schema, + description=extra_info.get("description", ""), + schema_type_str=schema_type, + tools_str=json.dumps(jsonable_encoder(tool_bundles)), + credentials_str="{}", + privacy_policy=privacy_policy, + custom_disclaimer=custom_disclaimer, + ) - if "auth_type" not in credentials: - raise ValueError("auth_type is required") + if "auth_type" not in credentials: + raise ValueError("auth_type is required") - # get auth type, none or api key - auth_type = ApiProviderAuthType.value_of(credentials["auth_type"]) + # get auth type, none or api key + auth_type = ApiProviderAuthType.value_of(credentials["auth_type"]) - # create provider entity - provider_controller = ApiToolProviderController.from_db(db_provider, auth_type) - # load tools into provider entity - provider_controller.load_bundled_tools(tool_bundles) + # create provider entity + provider_controller = ApiToolProviderController.from_db(api_tool_provider, auth_type) + # load tools into provider entity + provider_controller.load_bundled_tools(tool_bundles) - # encrypt credentials - encrypter, _ = create_tool_provider_encrypter( - tenant_id=tenant_id, - controller=provider_controller, - ) - db_provider.credentials_str = json.dumps(encrypter.encrypt(credentials)) + # encrypt credentials + encrypter, _ = create_tool_provider_encrypter( + tenant_id=tenant_id, + controller=provider_controller, + ) + api_tool_provider.credentials_str = json.dumps(encrypter.encrypt(credentials)) - db.session.add(db_provider) - db.session.commit() + _session.add(api_tool_provider) - # update labels - ToolLabelManager.update_tool_labels(provider_controller, labels) 
+ # update labels + ToolLabelManager.update_tool_labels(provider_controller, labels, _session) return {"result": "success"} @@ -212,16 +228,25 @@ class ApiToolManageService: @staticmethod def list_api_tool_provider_tools(user_id: str, tenant_id: str, provider_name: str) -> list[ToolApiEntity]: """ - list api tool provider tools + List tools provided by a specific API tool provider. + + :param user_id: The ID of the user requesting the list. + :param tenant_id: The ID of the workspace/tenant. + :param provider_name: The name of the API tool provider. + :return: A list of ToolApiEntity objects. """ - provider: ApiToolProvider | None = db.session.scalar( - select(ApiToolProvider) - .where( - ApiToolProvider.tenant_id == tenant_id, - ApiToolProvider.name == provider_name, + + # create new session with automatic transaction management + provider: ApiToolProvider | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + provider = _session.scalar( + select(ApiToolProvider) + .where( + ApiToolProvider.tenant_id == tenant_id, + ApiToolProvider.name == provider_name, + ) + .limit(1) ) - .limit(1) - ) if provider is None: raise ValueError(f"you have not added provider {provider_name}") @@ -244,110 +269,140 @@ class ApiToolManageService: tenant_id: str, provider_name: str, original_provider: str, - icon: dict, - credentials: dict, + icon: dict[str, Any], + credentials: dict[str, Any], _schema_type: ApiProviderSchemaType, schema: str, privacy_policy: str | None, custom_disclaimer: str, labels: list[str], - ): + ) -> dict[str, Any]: """ - update api tool provider + Update an existing API tool provider. + + :param user_id: The ID of the user updating the provider. + :param tenant_id: The ID of the workspace/tenant. + :param provider_name: The new name of the API tool provider. + :param original_provider: The original name of the API tool provider. + :param icon: The icon configuration for the provider. + :param credentials: The credentials for the provider. + :param _schema_type: The type of schema (e.g., OpenAPI). + :param schema: The raw schema string. + :param privacy_policy: The privacy policy URL or text. + :param custom_disclaimer: Custom disclaimer text. + :param labels: A list of labels for the provider. + :return: A dictionary indicating the result status. 
""" + provider_name = provider_name.strip() # check if the provider exists - provider = db.session.scalar( - select(ApiToolProvider) - .where( - ApiToolProvider.tenant_id == tenant_id, - ApiToolProvider.name == original_provider, + # create new session with automatic transaction management + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + provider: ApiToolProvider | None = _session.scalar( + select(ApiToolProvider) + .where( + ApiToolProvider.tenant_id == tenant_id, + ApiToolProvider.name == original_provider, + ) + .limit(1) ) - .limit(1) - ) - if provider is None: - raise ValueError(f"api provider {provider_name} does not exists") - # parse openapi to tool bundle - extra_info: dict[str, str] = {} - # extra info like description will be set here - tool_bundles, schema_type = ApiToolManageService.convert_schema_to_tool_bundles(schema, extra_info) + if provider is None: + raise ApiToolProviderNotFoundError(provider_name=original_provider, tenant_id=tenant_id) - # update db provider - provider.name = provider_name - provider.icon = json.dumps(icon) - provider.schema = schema - provider.description = extra_info.get("description", "") - provider.schema_type_str = schema_type - provider.tools_str = json.dumps(jsonable_encoder(tool_bundles)) - provider.privacy_policy = privacy_policy - provider.custom_disclaimer = custom_disclaimer + # parse openapi to tool bundle + extra_info: dict[str, str] = {} + # extra info like description will be set here + tool_bundles, schema_type = ApiToolManageService.convert_schema_to_tool_bundles(schema, extra_info) - if "auth_type" not in credentials: - raise ValueError("auth_type is required") + # update db provider + provider.name = provider_name + provider.icon = json.dumps(icon) + provider.schema = schema + provider.description = extra_info.get("description", "") + provider.schema_type_str = schema_type + provider.tools_str = json.dumps(jsonable_encoder(tool_bundles)) + provider.privacy_policy = privacy_policy + provider.custom_disclaimer = custom_disclaimer - # get auth type, none or api key - auth_type = ApiProviderAuthType.value_of(credentials["auth_type"]) + if "auth_type" not in credentials: + raise ValueError("auth_type is required") - # create provider entity - provider_controller = ApiToolProviderController.from_db(provider, auth_type) - # load tools into provider entity - provider_controller.load_bundled_tools(tool_bundles) + # get auth type, none or api key + auth_type = ApiProviderAuthType.value_of(credentials["auth_type"]) - # get original credentials if exists - encrypter, cache = create_tool_provider_encrypter( - tenant_id=tenant_id, - controller=provider_controller, - ) + # create provider entity + provider_controller = ApiToolProviderController.from_db(provider, auth_type) + # load tools into provider entity + provider_controller.load_bundled_tools(tool_bundles) - original_credentials = encrypter.decrypt(provider.credentials) - masked_credentials = encrypter.mask_plugin_credentials(original_credentials) - # check if the credential has changed, save the original credential - for name, value in credentials.items(): - if name in masked_credentials and value == masked_credentials[name]: - credentials[name] = original_credentials[name] + # get original credentials if exists + encrypter, cache = create_tool_provider_encrypter( + tenant_id=tenant_id, + controller=provider_controller, + ) - credentials = dict(encrypter.encrypt(credentials)) - provider.credentials_str = json.dumps(credentials) + original_credentials = 
encrypter.decrypt(provider.credentials) + masked_credentials = encrypter.mask_plugin_credentials(original_credentials) - db.session.add(provider) - db.session.commit() + # check if the credential has changed, save the original credential + for name, value in credentials.items(): + if name in masked_credentials and value == masked_credentials[name]: + credentials[name] = original_credentials[name] + + credentials = dict(encrypter.encrypt(credentials)) + provider.credentials_str = json.dumps(credentials) + + _session.add(provider) + + # update labels + ToolLabelManager.update_tool_labels(provider_controller, labels, _session) # delete cache cache.delete() - # update labels - ToolLabelManager.update_tool_labels(provider_controller, labels) - return {"result": "success"} @staticmethod def delete_api_tool_provider(user_id: str, tenant_id: str, provider_name: str): """ - delete tool provider + Delete an API tool provider. + + :param user_id: The ID of the user performing the deletion operation. + :param tenant_id: The ID of the workspace/tenant where the provider belongs. + :param provider_name: The unique name of the API tool provider to be deleted. + :raises ValueError: If the specified provider does not exist in the tenant. + :return: A dictionary indicating the result status. """ - provider = db.session.scalar( - select(ApiToolProvider) - .where( - ApiToolProvider.tenant_id == tenant_id, - ApiToolProvider.name == provider_name, + + # create new session with automatic transaction management + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + provider: ApiToolProvider | None = _session.scalar( + select(ApiToolProvider) + .where( + ApiToolProvider.tenant_id == tenant_id, + ApiToolProvider.name == provider_name, + ) + .limit(1) ) - .limit(1) - ) - if provider is None: - raise ValueError(f"you have not added provider {provider_name}") + if provider is None: + raise ValueError(f"you have not added provider {provider_name}") - db.session.delete(provider) - db.session.commit() + _session.delete(provider) return {"result": "success"} @staticmethod - def get_api_tool_provider(user_id: str, tenant_id: str, provider: str): + def get_api_tool_provider(user_id: str, tenant_id: str, provider: str) -> dict[str, Any]: """ - get api tool provider + Get API tool provider details. + + :param user_id: The ID of the user requesting the provider. + :param tenant_id: The ID of the workspace/tenant. + :param provider: The name of the API tool provider. + :return: A dictionary containing the provider details. """ return ToolManager.user_get_api_provider(provider=provider, tenant_id=tenant_id) @@ -356,14 +411,24 @@ class ApiToolManageService: tenant_id: str, provider_name: str, tool_name: str, - credentials: dict, - parameters: dict, + credentials: dict[str, Any], + parameters: dict[str, Any], schema_type: ApiProviderSchemaType, schema: str, - ): + ) -> dict[str, Any]: """ - test api tool before adding api tool provider + Test an API tool before adding the API tool provider. + + :param tenant_id: The ID of the workspace/tenant. + :param provider_name: The name of the API tool provider. + :param tool_name: The name of the specific tool to test. + :param credentials: The credentials for the provider. + :param parameters: The parameters to pass to the tool. + :param schema_type: The type of schema (e.g., OpenAPI). + :param schema: The raw schema string. + :return: A dictionary containing the result or error message. 
""" + if schema_type not in [member.value for member in ApiProviderSchemaType]: raise ValueError(f"invalid schema type {schema_type}") @@ -377,18 +442,21 @@ class ApiToolManageService: if tool_bundle is None: raise ValueError(f"invalid tool name {tool_name}") - db_provider = db.session.scalar( - select(ApiToolProvider) - .where( - ApiToolProvider.tenant_id == tenant_id, - ApiToolProvider.name == provider_name, + # create new session with automatic transaction management to get the provider + provider: ApiToolProvider | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + provider = _session.scalar( + select(ApiToolProvider) + .where( + ApiToolProvider.tenant_id == tenant_id, + ApiToolProvider.name == provider_name, + ) + .limit(1) ) - .limit(1) - ) - if not db_provider: + if provider is None: # create a fake db provider - db_provider = ApiToolProvider( + provider = ApiToolProvider( tenant_id="", user_id="", name="", @@ -407,12 +475,12 @@ class ApiToolManageService: auth_type = ApiProviderAuthType.value_of(credentials["auth_type"]) # create provider entity - provider_controller = ApiToolProviderController.from_db(db_provider, auth_type) + provider_controller = ApiToolProviderController.from_db(provider, auth_type) # load tools into provider entity provider_controller.load_bundled_tools(tool_bundles) # decrypt credentials - if db_provider.id: + if provider.id: encrypter, _ = create_tool_provider_encrypter( tenant_id=tenant_id, controller=provider_controller, @@ -443,14 +511,21 @@ class ApiToolManageService: @staticmethod def list_api_tools(tenant_id: str) -> list[ToolProviderApiEntity]: """ - list api tools + List all API tools for a specific tenant. + + :param tenant_id: The ID of the workspace/tenant. + :return: A list of ToolProviderApiEntity objects. 
""" # get all api providers - db_providers = db.session.scalars(select(ApiToolProvider).where(ApiToolProvider.tenant_id == tenant_id)).all() + # create new session with automatic transaction management + providers: list[ApiToolProvider] = [] + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + providers = list( + _session.scalars(select(ApiToolProvider).where(ApiToolProvider.tenant_id == tenant_id)).all() + ) result: list[ToolProviderApiEntity] = [] - - for provider in db_providers: + for provider in providers: # convert provider controller to user provider provider_controller = ToolTransformService.api_provider_to_controller(db_provider=provider) labels = ToolLabelManager.get_tool_labels(provider_controller) diff --git a/api/services/tools/builtin_tools_manage_service.py b/api/services/tools/builtin_tools_manage_service.py index 202432007a..7bd056b8a0 100644 --- a/api/services/tools/builtin_tools_manage_service.py +++ b/api/services/tools/builtin_tools_manage_service.py @@ -147,7 +147,7 @@ class BuiltinToolManageService: tenant_id: str, provider: str, credential_id: str, - credentials: dict | None = None, + credentials: dict[str, Any] | None = None, name: str | None = None, ): """ @@ -177,7 +177,7 @@ class BuiltinToolManageService: ) original_credentials = encrypter.decrypt(db_provider.credentials) - new_credentials: dict = { + new_credentials: dict[str, Any] = { key: value if value != HIDDEN_VALUE else original_credentials.get(key, UNKNOWN_VALUE) for key, value in credentials.items() } @@ -216,7 +216,7 @@ class BuiltinToolManageService: api_type: CredentialType, tenant_id: str, provider: str, - credentials: dict, + credentials: dict[str, Any], expires_at: int = -1, name: str | None = None, ): @@ -657,7 +657,7 @@ class BuiltinToolManageService: def save_custom_oauth_client_params( tenant_id: str, provider: str, - client_params: dict | None = None, + client_params: dict[str, Any] | None = None, enable_oauth_custom_client: bool | None = None, ): """ diff --git a/api/services/tools/tools_transform_service.py b/api/services/tools/tools_transform_service.py index 72954a3102..47aca9b0af 100644 --- a/api/services/tools/tools_transform_service.py +++ b/api/services/tools/tools_transform_service.py @@ -69,7 +69,9 @@ class ToolTransformService: return "" @staticmethod - def repack_provider(tenant_id: str, provider: dict | ToolProviderApiEntity | PluginDatasourceProviderEntity): + def repack_provider( + tenant_id: str, provider: dict[str, Any] | ToolProviderApiEntity | PluginDatasourceProviderEntity + ): """ repack provider @@ -426,7 +428,7 @@ class ToolTransformService: @staticmethod def convert_builtin_provider_to_credential_entity( - provider: BuiltinToolProvider, credentials: dict + provider: BuiltinToolProvider, credentials: dict[str, Any] ) -> ToolProviderCredentialApiEntity: return ToolProviderCredentialApiEntity( id=provider.id, diff --git a/api/services/tools/workflow_tools_manage_service.py b/api/services/tools/workflow_tools_manage_service.py index f7c35fa64e..8f6600af03 100644 --- a/api/services/tools/workflow_tools_manage_service.py +++ b/api/services/tools/workflow_tools_manage_service.py @@ -1,8 +1,8 @@ import json import logging from datetime import datetime +from typing import Any -from graphon.model_runtime.utils.encoders import jsonable_encoder from sqlalchemy import delete, or_, select from sqlalchemy.orm import sessionmaker @@ -14,6 +14,7 @@ from core.tools.utils.workflow_configuration_sync import WorkflowToolConfigurati from 
core.tools.workflow_as_tool.provider import WorkflowToolProviderController from core.tools.workflow_as_tool.tool import WorkflowTool from extensions.ext_database import db +from graphon.model_runtime.utils.encoders import jsonable_encoder from models.model import App from models.tools import WorkflowToolProvider from models.workflow import Workflow @@ -35,7 +36,7 @@ class WorkflowToolManageService: workflow_app_id: str, name: str, label: str, - icon: dict, + icon: dict[str, Any], description: str, parameters: list[WorkflowToolParameterConfiguration], privacy_policy: str = "", @@ -117,7 +118,7 @@ class WorkflowToolManageService: workflow_tool_id: str, name: str, label: str, - icon: dict, + icon: dict[str, Any], description: str, parameters: list[WorkflowToolParameterConfiguration], privacy_policy: str = "", @@ -138,62 +139,82 @@ class WorkflowToolManageService: :param labels: labels :return: the updated tool """ - # check if the name is unique - existing_workflow_tool_provider = db.session.scalar( - select(WorkflowToolProvider) - .where( - WorkflowToolProvider.tenant_id == tenant_id, - WorkflowToolProvider.name == name, - WorkflowToolProvider.id != workflow_tool_id, - ) - .limit(1) - ) + existing_workflow_tool_provider: WorkflowToolProvider | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + # query if the name exists for other tools + existing_workflow_tool_provider = _session.scalar( + select(WorkflowToolProvider) + .where( + WorkflowToolProvider.tenant_id == tenant_id, + WorkflowToolProvider.name == name, + WorkflowToolProvider.id != workflow_tool_id, + ) + .limit(1) + ) + + # if the name exists raise error if existing_workflow_tool_provider is not None: raise ValueError(f"Tool with name {name} already exists") - workflow_tool_provider: WorkflowToolProvider | None = db.session.scalar( - select(WorkflowToolProvider) - .where(WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == workflow_tool_id) - .limit(1) - ) + # query the workflow tool provider + workflow_tool_provider: WorkflowToolProvider | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + workflow_tool_provider = _session.scalar( + select(WorkflowToolProvider) + .where(WorkflowToolProvider.tenant_id == tenant_id, WorkflowToolProvider.id == workflow_tool_id) + .limit(1) + ) + # if not found raise error if workflow_tool_provider is None: raise ValueError(f"Tool {workflow_tool_id} not found") - app: App | None = db.session.scalar( - select(App).where(App.id == workflow_tool_provider.app_id, App.tenant_id == tenant_id).limit(1) - ) + # query the app + app: App | None = None + with sessionmaker(db.engine, expire_on_commit=False).begin() as _session: + app = _session.scalar( + select(App).where(App.id == workflow_tool_provider.app_id, App.tenant_id == tenant_id).limit(1) + ) + # if not found raise error if app is None: raise ValueError(f"App {workflow_tool_provider.app_id} not found") + # query the workflow workflow: Workflow | None = app.workflow + + # if not found raise error if workflow is None: raise ValueError(f"Workflow not found for app {workflow_tool_provider.app_id}") + # check if workflow configuration is synced WorkflowToolConfigurationUtils.ensure_no_human_input_nodes(workflow.graph_dict) - workflow_tool_provider.name = name - workflow_tool_provider.label = label - workflow_tool_provider.icon = json.dumps(icon) - workflow_tool_provider.description = description - workflow_tool_provider.parameter_configuration = 
json.dumps([p.model_dump() for p in parameters]) - workflow_tool_provider.privacy_policy = privacy_policy - workflow_tool_provider.version = workflow.version - workflow_tool_provider.updated_at = datetime.now() + with sessionmaker(db.engine).begin() as _session: + _session.add(workflow_tool_provider) - try: - WorkflowToolProviderController.from_db(workflow_tool_provider) - except Exception as e: - raise ValueError(str(e)) + # update workflow tool provider + workflow_tool_provider.name = name + workflow_tool_provider.label = label + workflow_tool_provider.icon = json.dumps(icon) + workflow_tool_provider.description = description + workflow_tool_provider.parameter_configuration = json.dumps([p.model_dump() for p in parameters]) + workflow_tool_provider.privacy_policy = privacy_policy + workflow_tool_provider.version = workflow.version + workflow_tool_provider.updated_at = datetime.now() - db.session.commit() + try: + WorkflowToolProviderController.from_db(workflow_tool_provider) + except Exception as e: + raise ValueError(str(e)) - if labels is not None: - ToolLabelManager.update_tool_labels( - ToolTransformService.workflow_provider_to_controller(workflow_tool_provider), labels - ) + if labels is not None: + ToolLabelManager.update_tool_labels( + ToolTransformService.workflow_provider_to_controller(workflow_tool_provider), + labels, + session=_session, + ) return {"result": "success"} diff --git a/api/services/trigger/schedule_service.py b/api/services/trigger/schedule_service.py index 25e80770b8..a827222c1d 100644 --- a/api/services/trigger/schedule_service.py +++ b/api/services/trigger/schedule_service.py @@ -2,7 +2,6 @@ import json import logging from datetime import datetime -from graphon.entities.graph_config import NodeConfigDict from sqlalchemy import select from sqlalchemy.orm import Session @@ -14,6 +13,7 @@ from core.workflow.nodes.trigger_schedule.entities import ( VisualConfig, ) from core.workflow.nodes.trigger_schedule.exc import ScheduleConfigError, ScheduleNotFoundError +from graphon.entities.graph_config import NodeConfigDict from libs.schedule_utils import calculate_next_run_at, convert_12h_to_24h from models.account import Account, TenantAccountJoin from models.trigger import WorkflowSchedulePlan diff --git a/api/services/trigger/trigger_service.py b/api/services/trigger/trigger_service.py index 5a5d13b96d..911331e357 100644 --- a/api/services/trigger/trigger_service.py +++ b/api/services/trigger/trigger_service.py @@ -5,7 +5,6 @@ from collections.abc import Mapping from typing import Any from flask import Request, Response -from graphon.entities.graph_config import NodeConfigDict from pydantic import BaseModel from sqlalchemy import select from sqlalchemy.orm import sessionmaker @@ -21,6 +20,7 @@ from core.trigger.utils.encryption import create_trigger_provider_encrypter_for_ from core.workflow.nodes.trigger_plugin.entities import TriggerEventNodeData from extensions.ext_database import db from extensions.ext_redis import redis_client +from graphon.entities.graph_config import NodeConfigDict from models.model import App from models.provider_ids import TriggerProviderID from models.trigger import TriggerSubscription, WorkflowPluginTrigger diff --git a/api/services/trigger/webhook_service.py b/api/services/trigger/webhook_service.py index c782bffad4..d562220fa7 100644 --- a/api/services/trigger/webhook_service.py +++ b/api/services/trigger/webhook_service.py @@ -7,9 +7,6 @@ from typing import Any, NotRequired, TypedDict import orjson from flask import request -from 
graphon.entities.graph_config import NodeConfigDict -from graphon.file import FileTransferMethod -from graphon.variables.types import ArrayValidation, SegmentType from pydantic import BaseModel from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker @@ -31,6 +28,9 @@ from enums.quota_type import QuotaType from extensions.ext_database import db from extensions.ext_redis import redis_client from factories import file_factory +from graphon.entities.graph_config import NodeConfigDict +from graphon.file import FileTransferMethod +from graphon.variables.types import ArrayValidation, SegmentType from models.enums import AppTriggerStatus, AppTriggerType from models.model import App from models.trigger import AppTrigger, WorkflowWebhookTrigger diff --git a/api/services/variable_truncator.py b/api/services/variable_truncator.py index 4d58a9cf12..c96050ce13 100644 --- a/api/services/variable_truncator.py +++ b/api/services/variable_truncator.py @@ -5,6 +5,7 @@ from abc import ABC, abstractmethod from collections.abc import Mapping from typing import Any, overload +from configs import dify_config from graphon.file import File from graphon.nodes.variable_assigner.common.helpers import UpdatedVariable from graphon.variables.segments import ( @@ -21,8 +22,6 @@ from graphon.variables.segments import ( ) from graphon.variables.utils import dumps_with_segments -from configs import dify_config - _MAX_DEPTH = 100 diff --git a/api/services/vector_service.py b/api/services/vector_service.py index 9827c8dfbc..58193d75a9 100644 --- a/api/services/vector_service.py +++ b/api/services/vector_service.py @@ -1,6 +1,5 @@ import logging -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import delete, select from core.model_manager import ModelInstance, ModelManager @@ -13,6 +12,7 @@ from core.rag.index_processor.index_processor_base import BaseIndexProcessor from core.rag.index_processor.index_processor_factory import IndexProcessorFactory from core.rag.models.document import AttachmentDocument, Document from extensions.ext_database import db +from graphon.model_runtime.entities.model_entities import ModelType from models import UploadFile from models.dataset import ChildChunk, Dataset, DatasetProcessRule, DocumentSegment, SegmentAttachmentBinding from models.dataset import Document as DatasetDocument diff --git a/api/services/website_service.py b/api/services/website_service.py index 2471c2cee8..ea584088bb 100644 --- a/api/services/website_service.py +++ b/api/services/website_service.py @@ -91,7 +91,7 @@ class WebsiteCrawlApiRequest: return CrawlRequest(url=self.url, provider=self.provider, options=options) @classmethod - def from_args(cls, args: dict) -> WebsiteCrawlApiRequest: + def from_args(cls, args: dict[str, Any]) -> WebsiteCrawlApiRequest: """Create from Flask-RESTful parsed arguments.""" provider = args.get("provider") url = args.get("url") @@ -115,7 +115,7 @@ class WebsiteCrawlStatusApiRequest: job_id: str @classmethod - def from_args(cls, args: dict, job_id: str) -> WebsiteCrawlStatusApiRequest: + def from_args(cls, args: dict[str, Any], job_id: str) -> WebsiteCrawlStatusApiRequest: """Create from Flask-RESTful parsed arguments.""" provider = args.get("provider") if not provider: @@ -163,7 +163,7 @@ class WebsiteService: raise ValueError("Invalid provider") @classmethod - def _get_decrypted_api_key(cls, tenant_id: str, config: dict) -> str: + def _get_decrypted_api_key(cls, tenant_id: str, config: dict[str, Any]) -> str: """Decrypt and return the 
API key from config.""" api_key = config.get("api_key") if not api_key: @@ -171,7 +171,7 @@ class WebsiteService: return encrypter.decrypt_token(tenant_id=tenant_id, token=api_key) @classmethod - def document_create_args_validate(cls, args: dict): + def document_create_args_validate(cls, args: dict[str, Any]): """Validate arguments for document creation.""" try: WebsiteCrawlApiRequest.from_args(args) @@ -195,7 +195,7 @@ class WebsiteService: raise ValueError("Invalid provider") @classmethod - def _crawl_with_firecrawl(cls, request: CrawlRequest, api_key: str, config: dict) -> dict[str, Any]: + def _crawl_with_firecrawl(cls, request: CrawlRequest, api_key: str, config: dict[str, Any]) -> dict[str, Any]: firecrawl_app = FirecrawlApp(api_key=api_key, base_url=config.get("base_url")) params: dict[str, Any] @@ -225,7 +225,7 @@ class WebsiteService: return {"status": "active", "job_id": job_id} @classmethod - def _crawl_with_watercrawl(cls, request: CrawlRequest, api_key: str, config: dict) -> dict[str, Any]: + def _crawl_with_watercrawl(cls, request: CrawlRequest, api_key: str, config: dict[str, Any]) -> dict[str, Any]: # Convert CrawlOptions back to dict format for WaterCrawlProvider options = { "limit": request.options.limit, @@ -290,7 +290,7 @@ class WebsiteService: raise ValueError("Invalid provider") @classmethod - def _get_firecrawl_status(cls, job_id: str, api_key: str, config: dict) -> CrawlStatusDict: + def _get_firecrawl_status(cls, job_id: str, api_key: str, config: dict[str, Any]) -> CrawlStatusDict: firecrawl_app = FirecrawlApp(api_key=api_key, base_url=config.get("base_url")) result: CrawlStatusResponse = firecrawl_app.check_crawl_status(job_id) crawl_status_data: CrawlStatusDict = { @@ -364,7 +364,9 @@ class WebsiteService: raise ValueError("Invalid provider") @classmethod - def _get_firecrawl_url_data(cls, job_id: str, url: str, api_key: str, config: dict) -> dict[str, Any] | None: + def _get_firecrawl_url_data( + cls, job_id: str, url: str, api_key: str, config: dict[str, Any] + ) -> dict[str, Any] | None: crawl_data: list[FirecrawlDocumentData] | None = None file_key = "website_files/" + job_id + ".txt" if storage.exists(file_key): @@ -438,7 +440,7 @@ class WebsiteService: raise ValueError("Invalid provider") @classmethod - def _scrape_with_firecrawl(cls, request: ScrapeRequest, api_key: str, config: dict) -> dict[str, Any]: + def _scrape_with_firecrawl(cls, request: ScrapeRequest, api_key: str, config: dict[str, Any]) -> dict[str, Any]: firecrawl_app = FirecrawlApp(api_key=api_key, base_url=config.get("base_url")) params = {"onlyMainContent": request.only_main_content} return dict(firecrawl_app.scrape_url(url=request.url, params=params)) diff --git a/api/services/workflow/workflow_converter.py b/api/services/workflow/workflow_converter.py index 1582bcd46c..5dedb9e372 100644 --- a/api/services/workflow/workflow_converter.py +++ b/api/services/workflow/workflow_converter.py @@ -1,11 +1,6 @@ import json from typing import Any, TypedDict -from graphon.file import FileUploadConfig -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.utils.encoders import jsonable_encoder -from graphon.nodes import BuiltinNodeTypes -from graphon.variables.input_entities import VariableEntity from sqlalchemy import select from core.app.app_config.entities import ( @@ -24,6 +19,11 @@ from core.prompt.simple_prompt_transform import SimplePromptTransform from core.prompt.utils.prompt_template_parser import PromptTemplateParser from events.app_event import 
app_was_created from extensions.ext_database import db +from graphon.file import FileUploadConfig +from graphon.model_runtime.entities.llm_entities import LLMMode +from graphon.model_runtime.utils.encoders import jsonable_encoder +from graphon.nodes import BuiltinNodeTypes +from graphon.variables.input_entities import VariableEntity from models import Account from models.api_based_extension import APIBasedExtension, APIBasedExtensionPoint from models.model import App, AppMode, AppModelConfig, IconType diff --git a/api/services/workflow_app_service.py b/api/services/workflow_app_service.py index b5ab176ad2..59e02ec9b9 100644 --- a/api/services/workflow_app_service.py +++ b/api/services/workflow_app_service.py @@ -3,10 +3,10 @@ import uuid from datetime import datetime from typing import Any, TypedDict -from graphon.enums import WorkflowExecutionStatus from sqlalchemy import and_, func, or_, select from sqlalchemy.orm import Session +from graphon.enums import WorkflowExecutionStatus from models import Account, App, EndUser, TenantAccountJoin, WorkflowAppLog, WorkflowArchiveLog, WorkflowRun from models.enums import AppTriggerType, CreatorUserRole from models.trigger import WorkflowTriggerLog diff --git a/api/services/workflow_collaboration_service.py b/api/services/workflow_collaboration_service.py new file mode 100644 index 0000000000..cf2f509052 --- /dev/null +++ b/api/services/workflow_collaboration_service.py @@ -0,0 +1,295 @@ +from __future__ import annotations + +import logging +import time +from collections.abc import Mapping + +from sqlalchemy import select + +from core.db.session_factory import session_factory +from models.account import Account +from models.model import App +from repositories.workflow_collaboration_repository import WorkflowCollaborationRepository, WorkflowSessionInfo + +logger = logging.getLogger(__name__) + + +class WorkflowCollaborationService: + def __init__(self, repository: WorkflowCollaborationRepository, socketio) -> None: + self._repository = repository + self._socketio = socketio + + def __repr__(self) -> str: + return f"{self.__class__.__name__}(repository={self._repository})" + + def save_socket_identity(self, sid: str, user: Account) -> None: + """Persist the authenticated console user on the raw socket session.""" + self._socketio.save_session( + sid, + { + "user_id": user.id, + "username": user.name, + "avatar": user.avatar, + "tenant_id": user.current_tenant_id, + }, + ) + + def authorize_and_join_workflow_room(self, workflow_id: str, sid: str) -> tuple[str, bool] | None: + """ + Join a collaboration room only after validating the socket session and tenant-scoped app access. + + The Socket.IO payload still calls the room key `workflow_id`, but the identifier is the workflow app's + `App.id`. Returning `None` lets the controller reject the join before any Redis or room state is created. 
+ """ + session = self._socketio.get_session(sid) + user_id = session.get("user_id") + tenant_id = session.get("tenant_id") + if not user_id or not tenant_id: + return None + + if not self._can_access_workflow(workflow_id, str(tenant_id)): + logger.warning( + "Workflow collaboration join rejected: workflow_id=%s tenant_id=%s user_id=%s sid=%s", + workflow_id, + tenant_id, + user_id, + sid, + ) + return None + + session_info: WorkflowSessionInfo = { + "user_id": str(user_id), + "username": str(session.get("username", "Unknown")), + "avatar": session.get("avatar"), + "sid": sid, + "connected_at": int(time.time()), + } + + self._repository.set_session_info(workflow_id, session_info) + + leader_sid = self.get_or_set_leader(workflow_id, sid) + is_leader = leader_sid == sid + + self._socketio.enter_room(sid, workflow_id) + self.broadcast_online_users(workflow_id) + + self._socketio.emit("status", {"isLeader": is_leader}, room=sid) + + return str(user_id), is_leader + + def _can_access_workflow(self, workflow_id: str, tenant_id: str) -> bool: + """Check room access without relying on Flask's app-context-bound scoped session.""" + with session_factory.create_session() as session: + app_id = session.scalar(select(App.id).where(App.id == workflow_id, App.tenant_id == tenant_id).limit(1)) + return app_id is not None + + def disconnect_session(self, sid: str) -> None: + mapping = self._repository.get_sid_mapping(sid) + if not mapping: + return + + workflow_id = mapping["workflow_id"] + self._repository.delete_session(workflow_id, sid) + + self.handle_leader_disconnect(workflow_id, sid) + self.broadcast_online_users(workflow_id) + + def relay_collaboration_event(self, sid: str, data: Mapping[str, object]) -> tuple[dict[str, str], int]: + mapping = self._repository.get_sid_mapping(sid) + if not mapping: + return {"msg": "unauthorized"}, 401 + + workflow_id = mapping["workflow_id"] + user_id = mapping["user_id"] + self.refresh_session_state(workflow_id, sid) + + event_type = data.get("type") + event_data = data.get("data") + timestamp = data.get("timestamp", int(time.time())) + + if not event_type: + return {"msg": "invalid event type"}, 400 + + if event_type == "sync_request": + leader_sid = self._repository.get_current_leader(workflow_id) + target_sid: str | None + if leader_sid and self.is_session_active(workflow_id, leader_sid): + target_sid = leader_sid + else: + if leader_sid: + self._repository.delete_leader(workflow_id) + target_sid = self._select_graph_leader(workflow_id, preferred_sid=sid) + if target_sid: + self._repository.set_leader(workflow_id, target_sid) + self.broadcast_leader_change(workflow_id, target_sid) + + if not target_sid: + return {"msg": "no_active_leader"}, 200 + + self._socketio.emit( + "collaboration_update", + {"type": event_type, "userId": user_id, "data": event_data, "timestamp": timestamp}, + room=target_sid, + ) + + return {"msg": "sync_request_forwarded"}, 200 + + self._socketio.emit( + "collaboration_update", + {"type": event_type, "userId": user_id, "data": event_data, "timestamp": timestamp}, + room=workflow_id, + skip_sid=sid, + ) + + return {"msg": "event_broadcasted"}, 200 + + def relay_graph_event(self, sid: str, data: object) -> tuple[dict[str, str], int]: + mapping = self._repository.get_sid_mapping(sid) + if not mapping: + return {"msg": "unauthorized"}, 401 + + workflow_id = mapping["workflow_id"] + self.refresh_session_state(workflow_id, sid) + + self._socketio.emit("graph_update", data, room=workflow_id, skip_sid=sid) + + return {"msg": 
"graph_update_broadcasted"}, 200 + + def get_or_set_leader(self, workflow_id: str, sid: str) -> str: + current_leader = self._repository.get_current_leader(workflow_id) + + if current_leader: + if self.is_session_active(workflow_id, current_leader): + return current_leader + self._repository.delete_session(workflow_id, current_leader) + self._repository.delete_leader(workflow_id) + + was_set = self._repository.set_leader_if_absent(workflow_id, sid) + + if was_set: + if current_leader: + self.broadcast_leader_change(workflow_id, sid) + return sid + + current_leader = self._repository.get_current_leader(workflow_id) + if current_leader: + return current_leader + + return sid + + def handle_leader_disconnect(self, workflow_id: str, disconnected_sid: str) -> None: + current_leader = self._repository.get_current_leader(workflow_id) + if not current_leader: + return + + if current_leader != disconnected_sid: + return + + new_leader_sid = self._select_graph_leader(workflow_id) + if new_leader_sid: + self._repository.set_leader(workflow_id, new_leader_sid) + self.broadcast_leader_change(workflow_id, new_leader_sid) + else: + self._repository.delete_leader(workflow_id) + + def broadcast_leader_change(self, workflow_id: str, new_leader_sid: str | None) -> None: + for sid in self._repository.get_session_sids(workflow_id): + try: + is_leader = new_leader_sid is not None and sid == new_leader_sid + self._socketio.emit("status", {"isLeader": is_leader}, room=sid) + except Exception: + logging.exception("Failed to emit leader status to session %s", sid) + + def get_current_leader(self, workflow_id: str) -> str | None: + return self._repository.get_current_leader(workflow_id) + + def _prune_inactive_sessions(self, workflow_id: str) -> list[WorkflowSessionInfo]: + """Remove inactive sessions from storage and return active sessions only.""" + sessions = self._repository.list_sessions(workflow_id) + if not sessions: + return [] + + active_sessions: list[WorkflowSessionInfo] = [] + stale_sids: list[str] = [] + for session in sessions: + sid = session["sid"] + if self.is_session_active(workflow_id, sid): + active_sessions.append(session) + else: + stale_sids.append(sid) + + for sid in stale_sids: + self._repository.delete_session(workflow_id, sid) + + return active_sessions + + def broadcast_online_users(self, workflow_id: str) -> None: + users = self._prune_inactive_sessions(workflow_id) + users.sort(key=lambda x: x.get("connected_at") or 0) + + leader_sid = self.get_current_leader(workflow_id) + previous_leader = leader_sid + active_sids = {user["sid"] for user in users} + if leader_sid and leader_sid not in active_sids: + self._repository.delete_leader(workflow_id) + leader_sid = None + + if not leader_sid and users: + leader_sid = self._select_graph_leader(workflow_id) + if leader_sid: + self._repository.set_leader(workflow_id, leader_sid) + + if leader_sid != previous_leader: + self.broadcast_leader_change(workflow_id, leader_sid) + + self._socketio.emit( + "online_users", + {"workflow_id": workflow_id, "users": users, "leader": leader_sid}, + room=workflow_id, + ) + + def refresh_session_state(self, workflow_id: str, sid: str) -> None: + self._repository.refresh_session_state(workflow_id, sid) + self._ensure_leader(workflow_id, sid) + + def _ensure_leader(self, workflow_id: str, sid: str) -> None: + current_leader = self._repository.get_current_leader(workflow_id) + if current_leader and self.is_session_active(workflow_id, current_leader): + self._repository.expire_leader(workflow_id) + return + + if 
current_leader: + self._repository.delete_leader(workflow_id) + + self._repository.set_leader(workflow_id, sid) + self.broadcast_leader_change(workflow_id, sid) + + def _select_graph_leader(self, workflow_id: str, preferred_sid: str | None = None) -> str | None: + session_sids = [ + session["sid"] + for session in self._repository.list_sessions(workflow_id) + if session.get("graph_active", True) and self.is_session_active(workflow_id, session["sid"]) + ] + if not session_sids: + return None + if preferred_sid and preferred_sid in session_sids: + return preferred_sid + return session_sids[0] + + def is_session_active(self, workflow_id: str, sid: str) -> bool: + if not sid: + return False + + try: + if not self._socketio.manager.is_connected(sid, "/"): + return False + except AttributeError: + return False + + if not self._repository.session_exists(workflow_id, sid): + return False + + if not self._repository.sid_mapping_exists(sid): + return False + + return True diff --git a/api/services/workflow_comment_service.py b/api/services/workflow_comment_service.py new file mode 100644 index 0000000000..ff47e4f253 --- /dev/null +++ b/api/services/workflow_comment_service.py @@ -0,0 +1,564 @@ +import logging +from collections.abc import Sequence + +from sqlalchemy import desc, select +from sqlalchemy.orm import Session, selectinload +from werkzeug.exceptions import Forbidden, NotFound + +from configs import dify_config +from extensions.ext_database import db +from libs.datetime_utils import naive_utc_now +from libs.helper import uuid_value +from models import App, TenantAccountJoin, WorkflowComment, WorkflowCommentMention, WorkflowCommentReply +from models.account import Account +from tasks.mail_workflow_comment_task import send_workflow_comment_mention_email_task + +logger = logging.getLogger(__name__) + + +class WorkflowCommentService: + """Service for managing workflow comments.""" + + @staticmethod + def _validate_content(content: str) -> None: + if len(content.strip()) == 0: + raise ValueError("Comment content cannot be empty") + + if len(content) > 1000: + raise ValueError("Comment content cannot exceed 1000 characters") + + @staticmethod + def _filter_valid_mentioned_user_ids( + mentioned_user_ids: Sequence[str], *, session: Session, tenant_id: str + ) -> list[str]: + """Return deduplicated UUID user IDs that belong to the tenant, preserving input order.""" + unique_user_ids: list[str] = [] + seen: set[str] = set() + for user_id in mentioned_user_ids: + if not isinstance(user_id, str): + continue + if not uuid_value(user_id): + continue + if user_id in seen: + continue + seen.add(user_id) + unique_user_ids.append(user_id) + if not unique_user_ids: + return [] + + tenant_member_ids = { + str(account_id) + for account_id in session.scalars( + select(TenantAccountJoin.account_id).where( + TenantAccountJoin.tenant_id == tenant_id, + TenantAccountJoin.account_id.in_(unique_user_ids), + ) + ).all() + } + + return [user_id for user_id in unique_user_ids if user_id in tenant_member_ids] + + @staticmethod + def _format_comment_excerpt(content: str, max_length: int = 200) -> str: + """Trim comment content for email display.""" + trimmed = content.strip() + if len(trimmed) <= max_length: + return trimmed + if max_length <= 3: + return trimmed[:max_length] + return f"{trimmed[: max_length - 3].rstrip()}..." 
+ + @staticmethod + def _build_mention_email_payloads( + session: Session, + tenant_id: str, + app_id: str, + mentioner_id: str, + mentioned_user_ids: Sequence[str], + content: str, + ) -> list[dict[str, str]]: + """Prepare email payloads for mentioned users, including workflow app link.""" + if not mentioned_user_ids: + return [] + + candidate_user_ids = [user_id for user_id in mentioned_user_ids if user_id != mentioner_id] + if not candidate_user_ids: + return [] + + app_name_value = session.scalar(select(App.name).where(App.id == app_id, App.tenant_id == tenant_id)) + app_name = app_name_value if isinstance(app_name_value, str) and app_name_value else "Dify app" + commenter_name_value = session.scalar(select(Account.name).where(Account.id == mentioner_id)) + commenter_name = ( + commenter_name_value if isinstance(commenter_name_value, str) and commenter_name_value else "Dify user" + ) + comment_excerpt = WorkflowCommentService._format_comment_excerpt(content) + base_url = dify_config.CONSOLE_WEB_URL.rstrip("/") + app_url = f"{base_url}/app/{app_id}/workflow" + + accounts = session.scalars( + select(Account) + .join(TenantAccountJoin, TenantAccountJoin.account_id == Account.id) + .where(TenantAccountJoin.tenant_id == tenant_id, Account.id.in_(candidate_user_ids)) + ).all() + + payloads: list[dict[str, str]] = [] + for account in accounts: + email = account.email + if not isinstance(email, str) or not email: + continue + mentioned_name = account.name if isinstance(account.name, str) and account.name else email + language = ( + account.interface_language + if isinstance(account.interface_language, str) and account.interface_language + else "en-US" + ) + payloads.append( + { + "language": language, + "to": email, + "mentioned_name": mentioned_name, + "commenter_name": commenter_name, + "app_name": app_name, + "comment_content": comment_excerpt, + "app_url": app_url, + } + ) + return payloads + + @staticmethod + def _dispatch_mention_emails(payloads: Sequence[dict[str, str]]) -> None: + """Enqueue mention notification emails.""" + for payload in payloads: + send_workflow_comment_mention_email_task.delay(**payload) + + @staticmethod + def get_comments(tenant_id: str, app_id: str) -> Sequence[WorkflowComment]: + """Get all comments for a workflow.""" + with Session(db.engine) as session: + # Get all comments with eager loading + stmt = ( + select(WorkflowComment) + .options(selectinload(WorkflowComment.replies), selectinload(WorkflowComment.mentions)) + .where(WorkflowComment.tenant_id == tenant_id, WorkflowComment.app_id == app_id) + .order_by(desc(WorkflowComment.created_at)) + ) + + comments = session.scalars(stmt).all() + + # Batch preload all Account objects to avoid N+1 queries + WorkflowCommentService._preload_accounts(session, comments) + + return comments + + @staticmethod + def _preload_accounts(session: Session, comments: Sequence[WorkflowComment]) -> None: + """Batch preload Account objects for comments, replies, and mentions.""" + # Collect all user IDs + user_ids: set[str] = set() + for comment in comments: + user_ids.add(comment.created_by) + if comment.resolved_by: + user_ids.add(comment.resolved_by) + user_ids.update(reply.created_by for reply in comment.replies) + user_ids.update(mention.mentioned_user_id for mention in comment.mentions) + + if not user_ids: + return + + # Batch query all accounts + accounts = session.scalars(select(Account).where(Account.id.in_(user_ids))).all() + account_map = {str(account.id): account for account in accounts} + + # Cache accounts on 
objects + for comment in comments: + comment.cache_created_by_account(account_map.get(comment.created_by)) + comment.cache_resolved_by_account(account_map.get(comment.resolved_by) if comment.resolved_by else None) + for reply in comment.replies: + reply.cache_created_by_account(account_map.get(reply.created_by)) + for mention in comment.mentions: + mention.cache_mentioned_user_account(account_map.get(mention.mentioned_user_id)) + + @staticmethod + def get_comment(tenant_id: str, app_id: str, comment_id: str, session: Session | None = None) -> WorkflowComment: + """Get a specific comment.""" + + def _get_comment(session: Session) -> WorkflowComment: + stmt = ( + select(WorkflowComment) + .options(selectinload(WorkflowComment.replies), selectinload(WorkflowComment.mentions)) + .where( + WorkflowComment.id == comment_id, + WorkflowComment.tenant_id == tenant_id, + WorkflowComment.app_id == app_id, + ) + ) + comment = session.scalar(stmt) + + if not comment: + raise NotFound("Comment not found") + + # Preload accounts to avoid N+1 queries + WorkflowCommentService._preload_accounts(session, [comment]) + + return comment + + if session is not None: + return _get_comment(session) + else: + with Session(db.engine, expire_on_commit=False) as session: + return _get_comment(session) + + @staticmethod + def create_comment( + tenant_id: str, + app_id: str, + created_by: str, + content: str, + position_x: float, + position_y: float, + mentioned_user_ids: list[str] | None = None, + ) -> dict: + """Create a new workflow comment and send mention notification emails.""" + WorkflowCommentService._validate_content(content) + + with Session(db.engine) as session: + comment = WorkflowComment( + tenant_id=tenant_id, + app_id=app_id, + position_x=position_x, + position_y=position_y, + content=content, + created_by=created_by, + ) + + session.add(comment) + session.flush() # Get the comment ID for mentions + + # Create mentions if specified + mentioned_user_ids = WorkflowCommentService._filter_valid_mentioned_user_ids( + mentioned_user_ids or [], + session=session, + tenant_id=tenant_id, + ) + for user_id in mentioned_user_ids: + mention = WorkflowCommentMention( + comment_id=comment.id, + reply_id=None, # This is a comment mention, not reply mention + mentioned_user_id=user_id, + ) + session.add(mention) + + mention_email_payloads = WorkflowCommentService._build_mention_email_payloads( + session=session, + tenant_id=tenant_id, + app_id=app_id, + mentioner_id=created_by, + mentioned_user_ids=mentioned_user_ids, + content=content, + ) + + session.commit() + WorkflowCommentService._dispatch_mention_emails(mention_email_payloads) + + # Return only what we need - id and created_at + return {"id": comment.id, "created_at": comment.created_at} + + @staticmethod + def update_comment( + tenant_id: str, + app_id: str, + comment_id: str, + user_id: str, + content: str, + position_x: float | None = None, + position_y: float | None = None, + mentioned_user_ids: list[str] | None = None, + ) -> dict: + """Update a workflow comment and notify newly mentioned users. + + `mentioned_user_ids=None` means "leave mentions unchanged". + Passing an explicit list replaces the existing comment mentions, including clearing them with `[]`. 
+ """ + WorkflowCommentService._validate_content(content) + + with Session(db.engine, expire_on_commit=False) as session: + # Get comment with validation + stmt = select(WorkflowComment).where( + WorkflowComment.id == comment_id, + WorkflowComment.tenant_id == tenant_id, + WorkflowComment.app_id == app_id, + ) + comment = session.scalar(stmt) + + if not comment: + raise NotFound("Comment not found") + + # Only the creator can update the comment + if comment.created_by != user_id: + raise Forbidden("Only the comment creator can update it") + + # Update comment fields + comment.content = content + if position_x is not None: + comment.position_x = position_x + if position_y is not None: + comment.position_y = position_y + + mention_email_payloads: list[dict[str, str]] = [] + if mentioned_user_ids is not None: + # Replace comment mentions only when the client explicitly sends the mention list. + existing_mentions = session.scalars( + select(WorkflowCommentMention).where( + WorkflowCommentMention.comment_id == comment.id, + WorkflowCommentMention.reply_id.is_(None), # Only comment mentions, not reply mentions + ) + ).all() + existing_mentioned_user_ids = {mention.mentioned_user_id for mention in existing_mentions} + for mention in existing_mentions: + session.delete(mention) + + filtered_mentioned_user_ids = WorkflowCommentService._filter_valid_mentioned_user_ids( + mentioned_user_ids, + session=session, + tenant_id=tenant_id, + ) + new_mentioned_user_ids = [ + mentioned_user_id + for mentioned_user_id in filtered_mentioned_user_ids + if mentioned_user_id not in existing_mentioned_user_ids + ] + for mentioned_user_id in filtered_mentioned_user_ids: + mention = WorkflowCommentMention( + comment_id=comment.id, + reply_id=None, # This is a comment mention + mentioned_user_id=mentioned_user_id, + ) + session.add(mention) + + mention_email_payloads = WorkflowCommentService._build_mention_email_payloads( + session=session, + tenant_id=tenant_id, + app_id=app_id, + mentioner_id=user_id, + mentioned_user_ids=new_mentioned_user_ids, + content=content, + ) + + session.commit() + WorkflowCommentService._dispatch_mention_emails(mention_email_payloads) + + return {"id": comment.id, "updated_at": comment.updated_at} + + @staticmethod + def delete_comment(tenant_id: str, app_id: str, comment_id: str, user_id: str) -> None: + """Delete a workflow comment.""" + with Session(db.engine, expire_on_commit=False) as session: + comment = WorkflowCommentService.get_comment(tenant_id, app_id, comment_id, session) + + # Only the creator can delete the comment + if comment.created_by != user_id: + raise Forbidden("Only the comment creator can delete it") + + # Delete associated mentions (both comment and reply mentions) + mentions = session.scalars( + select(WorkflowCommentMention).where(WorkflowCommentMention.comment_id == comment_id) + ).all() + for mention in mentions: + session.delete(mention) + + # Delete associated replies + replies = session.scalars( + select(WorkflowCommentReply).where(WorkflowCommentReply.comment_id == comment_id) + ).all() + for reply in replies: + session.delete(reply) + + session.delete(comment) + session.commit() + + @staticmethod + def resolve_comment(tenant_id: str, app_id: str, comment_id: str, user_id: str) -> WorkflowComment: + """Resolve a workflow comment.""" + with Session(db.engine, expire_on_commit=False) as session: + comment = WorkflowCommentService.get_comment(tenant_id, app_id, comment_id, session) + if comment.resolved: + return comment + + comment.resolved = True + 
comment.resolved_at = naive_utc_now() + comment.resolved_by = user_id + session.commit() + + return comment + + @staticmethod + def create_reply( + comment_id: str, content: str, created_by: str, mentioned_user_ids: list[str] | None = None + ) -> dict: + """Add a reply to a workflow comment and notify mentioned users.""" + WorkflowCommentService._validate_content(content) + + with Session(db.engine, expire_on_commit=False) as session: + # Check if comment exists + comment = session.get(WorkflowComment, comment_id) + if not comment: + raise NotFound("Comment not found") + + reply = WorkflowCommentReply(comment_id=comment_id, content=content, created_by=created_by) + + session.add(reply) + session.flush() # Get the reply ID for mentions + + # Create mentions if specified + mentioned_user_ids = WorkflowCommentService._filter_valid_mentioned_user_ids( + mentioned_user_ids or [], + session=session, + tenant_id=comment.tenant_id, + ) + for user_id in mentioned_user_ids: + # Create mention linking to specific reply + mention = WorkflowCommentMention(comment_id=comment_id, reply_id=reply.id, mentioned_user_id=user_id) + session.add(mention) + + mention_email_payloads = WorkflowCommentService._build_mention_email_payloads( + session=session, + tenant_id=comment.tenant_id, + app_id=comment.app_id, + mentioner_id=created_by, + mentioned_user_ids=mentioned_user_ids, + content=content, + ) + + session.commit() + WorkflowCommentService._dispatch_mention_emails(mention_email_payloads) + + return {"id": reply.id, "created_at": reply.created_at} + + @staticmethod + def _get_reply_in_comment_scope( + *, + session: Session, + tenant_id: str, + app_id: str, + comment_id: str, + reply_id: str, + ) -> WorkflowCommentReply: + """Get a reply scoped to tenant/app/comment to prevent cross-thread mutations.""" + stmt = ( + select(WorkflowCommentReply) + .join(WorkflowComment, WorkflowComment.id == WorkflowCommentReply.comment_id) + .where( + WorkflowCommentReply.id == reply_id, + WorkflowCommentReply.comment_id == comment_id, + WorkflowComment.tenant_id == tenant_id, + WorkflowComment.app_id == app_id, + ) + .limit(1) + ) + reply = session.scalar(stmt) + if not reply: + raise NotFound("Reply not found") + return reply + + @staticmethod + def update_reply( + tenant_id: str, + app_id: str, + comment_id: str, + reply_id: str, + user_id: str, + content: str, + mentioned_user_ids: list[str] | None = None, + ) -> dict: + """Update a comment reply and notify newly mentioned users.""" + WorkflowCommentService._validate_content(content) + + with Session(db.engine, expire_on_commit=False) as session: + reply = WorkflowCommentService._get_reply_in_comment_scope( + session=session, + tenant_id=tenant_id, + app_id=app_id, + comment_id=comment_id, + reply_id=reply_id, + ) + + # Only the creator can update the reply + if reply.created_by != user_id: + raise Forbidden("Only the reply creator can update it") + + reply.content = content + + # Update mentions - first remove existing mentions for this reply + existing_mentions = session.scalars( + select(WorkflowCommentMention).where(WorkflowCommentMention.reply_id == reply.id) + ).all() + existing_mentioned_user_ids = {mention.mentioned_user_id for mention in existing_mentions} + for mention in existing_mentions: + session.delete(mention) + + # Add mentions + raw_mentioned_user_ids = mentioned_user_ids or [] + comment = session.get(WorkflowComment, reply.comment_id) + mentioned_user_ids = [] + if comment: + mentioned_user_ids = WorkflowCommentService._filter_valid_mentioned_user_ids( 
+ raw_mentioned_user_ids, + session=session, + tenant_id=comment.tenant_id, + ) + new_mentioned_user_ids = [ + user_id for user_id in mentioned_user_ids if user_id not in existing_mentioned_user_ids + ] + for user_id_str in mentioned_user_ids: + mention = WorkflowCommentMention( + comment_id=reply.comment_id, reply_id=reply.id, mentioned_user_id=user_id_str + ) + session.add(mention) + + mention_email_payloads: list[dict[str, str]] = [] + if comment: + mention_email_payloads = WorkflowCommentService._build_mention_email_payloads( + session=session, + tenant_id=comment.tenant_id, + app_id=comment.app_id, + mentioner_id=user_id, + mentioned_user_ids=new_mentioned_user_ids, + content=content, + ) + + session.commit() + session.refresh(reply) # Refresh to get updated timestamp + WorkflowCommentService._dispatch_mention_emails(mention_email_payloads) + + return {"id": reply.id, "updated_at": reply.updated_at} + + @staticmethod + def delete_reply(tenant_id: str, app_id: str, comment_id: str, reply_id: str, user_id: str) -> None: + """Delete a comment reply.""" + with Session(db.engine, expire_on_commit=False) as session: + reply = WorkflowCommentService._get_reply_in_comment_scope( + session=session, + tenant_id=tenant_id, + app_id=app_id, + comment_id=comment_id, + reply_id=reply_id, + ) + + # Only the creator can delete the reply + if reply.created_by != user_id: + raise Forbidden("Only the reply creator can delete it") + + # Delete associated mentions first + mentions = session.scalars( + select(WorkflowCommentMention).where(WorkflowCommentMention.reply_id == reply_id) + ).all() + for mention in mentions: + session.delete(mention) + + session.delete(reply) + session.commit() + + @staticmethod + def validate_comment_access(comment_id: str, tenant_id: str, app_id: str) -> WorkflowComment: + """Validate that a comment belongs to the specified tenant and app.""" + return WorkflowCommentService.get_comment(tenant_id, app_id, comment_id) diff --git a/api/services/workflow_draft_variable_service.py b/api/services/workflow_draft_variable_service.py index 2cc6e21574..5ec00ee336 100644 --- a/api/services/workflow_draft_variable_service.py +++ b/api/services/workflow_draft_variable_service.py @@ -7,19 +7,6 @@ from datetime import datetime from enum import StrEnum from typing import Any, ClassVar, NotRequired, TypedDict -from graphon.enums import NodeType -from graphon.file import File -from graphon.nodes import BuiltinNodeTypes -from graphon.nodes.variable_assigner.common.helpers import get_updated_variables -from graphon.variable_loader import VariableLoader -from graphon.variables import Segment, StringSegment, VariableBase -from graphon.variables.consts import SELECTORS_LENGTH -from graphon.variables.segments import ( - ArrayFileSegment, - FileSegment, -) -from graphon.variables.types import SegmentType -from graphon.variables.utils import dumps_with_segments from sqlalchemy import Engine, delete, orm, select from sqlalchemy.dialects.mysql import insert as mysql_insert from sqlalchemy.dialects.postgresql import insert as pg_insert @@ -40,6 +27,19 @@ from core.workflow.variable_prefixes import ( from extensions.ext_storage import storage from factories.file_factory import StorageKeyLoader from factories.variable_factory import build_segment, segment_to_variable +from graphon.enums import NodeType +from graphon.file import File +from graphon.nodes import BuiltinNodeTypes +from graphon.nodes.variable_assigner.common.helpers import get_updated_variables +from graphon.variable_loader import VariableLoader 
+from graphon.variables import Segment, StringSegment, VariableBase +from graphon.variables.consts import SELECTORS_LENGTH +from graphon.variables.segments import ( + ArrayFileSegment, + FileSegment, +) +from graphon.variables.types import SegmentType +from graphon.variables.utils import dumps_with_segments from libs.datetime_utils import naive_utc_now from libs.uuid_utils import uuidv7 from models import Account, App, Conversation diff --git a/api/services/workflow_event_snapshot_service.py b/api/services/workflow_event_snapshot_service.py index 601e9261fc..5fca444723 100644 --- a/api/services/workflow_event_snapshot_service.py +++ b/api/services/workflow_event_snapshot_service.py @@ -9,10 +9,6 @@ from collections.abc import Generator, Mapping, Sequence from dataclasses import dataclass from typing import Any -from graphon.entities import WorkflowStartReason -from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus -from graphon.runtime import GraphRuntimeState -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy import desc, select from sqlalchemy.orm import Session, sessionmaker @@ -26,6 +22,10 @@ from core.app.entities.task_entities import ( WorkflowStartStreamResponse, ) from core.app.layers.pause_state_persist_layer import WorkflowResumptionContext +from graphon.entities import WorkflowStartReason +from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus +from graphon.runtime import GraphRuntimeState +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from models.model import AppMode, Message from models.workflow import WorkflowNodeExecutionTriggeredFrom, WorkflowRun from repositories.api_workflow_node_execution_repository import WorkflowNodeExecutionSnapshot diff --git a/api/services/workflow_service.py b/api/services/workflow_service.py index 839b9e3319..d71223314e 100644 --- a/api/services/workflow_service.py +++ b/api/services/workflow_service.py @@ -5,31 +5,6 @@ import uuid from collections.abc import Callable, Generator, Mapping, Sequence from typing import Any, cast -from graphon.entities import WorkflowNodeExecution -from graphon.entities.graph_config import NodeConfigDict -from graphon.entities.pause_reason import HumanInputRequired -from graphon.enums import ( - ErrorStrategy, - NodeType, - WorkflowNodeExecutionMetadataKey, - WorkflowNodeExecutionStatus, -) -from graphon.errors import WorkflowNodeRunFailedError -from graphon.file import File -from graphon.graph_events import GraphNodeEventBase, NodeRunFailedEvent, NodeRunSucceededEvent -from graphon.node_events import NodeRunResult -from graphon.nodes import BuiltinNodeTypes -from graphon.nodes.base.node import Node -from graphon.nodes.http_request import HTTP_REQUEST_CONFIG_FILTER_KEY, build_http_request_config -from graphon.nodes.human_input.entities import HumanInputNodeData, validate_human_input_submission -from graphon.nodes.human_input.enums import HumanInputFormKind -from graphon.nodes.human_input.human_input_node import HumanInputNode -from graphon.nodes.start.entities import StartNodeData -from graphon.runtime import GraphRuntimeState, VariablePool -from graphon.variable_loader import load_into_variable_pool -from graphon.variables import VariableBase -from graphon.variables.input_entities import VariableEntityType -from graphon.variables.variables import Variable from sqlalchemy import exists, select from sqlalchemy.orm import Session, sessionmaker @@ -64,6 +39,31 @@ from events.app_event import 
app_draft_workflow_was_synced, app_published_workfl from extensions.ext_database import db from extensions.ext_storage import storage from factories.file_factory import build_from_mapping, build_from_mappings +from graphon.entities import WorkflowNodeExecution +from graphon.entities.graph_config import NodeConfigDict +from graphon.entities.pause_reason import HumanInputRequired +from graphon.enums import ( + ErrorStrategy, + NodeType, + WorkflowNodeExecutionMetadataKey, + WorkflowNodeExecutionStatus, +) +from graphon.errors import WorkflowNodeRunFailedError +from graphon.file import File +from graphon.graph_events import GraphNodeEventBase, NodeRunFailedEvent, NodeRunSucceededEvent +from graphon.node_events import NodeRunResult +from graphon.nodes import BuiltinNodeTypes +from graphon.nodes.base.node import Node +from graphon.nodes.http_request import HTTP_REQUEST_CONFIG_FILTER_KEY, build_http_request_config +from graphon.nodes.human_input.entities import HumanInputNodeData, validate_human_input_submission +from graphon.nodes.human_input.enums import HumanInputFormKind +from graphon.nodes.human_input.human_input_node import HumanInputNode +from graphon.nodes.start.entities import StartNodeData +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variable_loader import load_into_variable_pool +from graphon.variables import VariableBase +from graphon.variables.input_entities import VariableEntityType +from graphon.variables.variables import Variable from libs.datetime_utils import naive_utc_now from models import Account from models.human_input import HumanInputFormRecipient, RecipientType @@ -199,6 +199,16 @@ class WorkflowService: return workflow + def get_accessible_app_ids(self, app_ids: Sequence[str], tenant_id: str) -> set[str]: + """ + Return app IDs that belong to the given tenant. 
+ """ + if not app_ids: + return set() + + stmt = select(App.id).where(App.id.in_(app_ids), App.tenant_id == tenant_id) + return {str(app_id) for app_id in db.session.scalars(stmt).all()} + def get_all_published_workflow( self, *, @@ -241,8 +251,8 @@ class WorkflowService: self, *, app_model: App, - graph: dict, - features: dict, + graph: dict[str, Any], + features: dict[str, Any], unique_hash: str | None, account: Account, environment_variables: Sequence[VariableBase], @@ -296,6 +306,78 @@ class WorkflowService: # return draft workflow return workflow + def update_draft_workflow_environment_variables( + self, + *, + app_model: App, + environment_variables: Sequence[VariableBase], + account: Account, + ): + """ + Update draft workflow environment variables + """ + # fetch draft workflow by app_model + workflow = self.get_draft_workflow(app_model=app_model) + + if not workflow: + raise ValueError("No draft workflow found.") + + workflow.environment_variables = environment_variables + workflow.updated_by = account.id + workflow.updated_at = naive_utc_now() + + # commit db session changes + db.session.commit() + + def update_draft_workflow_conversation_variables( + self, + *, + app_model: App, + conversation_variables: Sequence[VariableBase], + account: Account, + ): + """ + Update draft workflow conversation variables + """ + # fetch draft workflow by app_model + workflow = self.get_draft_workflow(app_model=app_model) + + if not workflow: + raise ValueError("No draft workflow found.") + + workflow.conversation_variables = conversation_variables + workflow.updated_by = account.id + workflow.updated_at = naive_utc_now() + + # commit db session changes + db.session.commit() + + def update_draft_workflow_features( + self, + *, + app_model: App, + features: dict, + account: Account, + ): + """ + Update draft workflow features + """ + # fetch draft workflow by app_model + workflow = self.get_draft_workflow(app_model=app_model) + + if not workflow: + raise ValueError("No draft workflow found.") + + # validate features structure + self.validate_features_structure(app_model=app_model, features=features) + + workflow.features = json.dumps(features) + workflow.updated_by = account.id + workflow.updated_at = naive_utc_now() + + # commit db session changes + db.session.commit() + def restore_published_workflow_to_draft( self, *, @@ -576,7 +658,7 @@ class WorkflowService: except Exception as e: raise ValueError(f"Failed to validate default credential for tool provider {provider}: {str(e)}") - def _validate_load_balancing_credentials(self, workflow: Workflow, node_data: dict, node_id: str) -> None: + def _validate_load_balancing_credentials(self, workflow: Workflow, node_data: dict[str, Any], node_id: str) -> None: """ Validate load balancing credentials for a workflow node. 
@@ -1214,7 +1296,7 @@ class WorkflowService: return variable_pool def run_free_workflow_node( - self, node_data: dict, tenant_id: str, user_id: str, node_id: str, user_inputs: dict[str, Any] + self, node_data: dict[str, Any], tenant_id: str, user_id: str, node_id: str, user_inputs: dict[str, Any] ) -> WorkflowNodeExecution: """ Run free workflow node @@ -1361,7 +1443,7 @@ class WorkflowService: node_execution.status = WorkflowNodeExecutionStatus.FAILED node_execution.error = error - def convert_to_workflow(self, app_model: App, account: Account, args: dict) -> App: + def convert_to_workflow(self, app_model: App, account: Account, args: dict[str, Any]) -> App: """ Basic mode of chatbot app(expert mode) to workflow Completion App to Workflow App @@ -1421,7 +1503,7 @@ class WorkflowService: if node_type == BuiltinNodeTypes.HUMAN_INPUT: self._validate_human_input_node_data(node_data) - def validate_features_structure(self, app_model: App, features: dict): + def validate_features_structure(self, app_model: App, features: dict[str, Any]): match app_model.mode: case AppMode.ADVANCED_CHAT: return AdvancedChatAppConfigManager.config_validate( @@ -1434,7 +1516,7 @@ class WorkflowService: case _: raise ValueError(f"Invalid app mode: {app_model.mode}") - def _validate_human_input_node_data(self, node_data: dict) -> None: + def _validate_human_input_node_data(self, node_data: dict[str, Any]) -> None: """ Validate HumanInput node data format. @@ -1452,7 +1534,7 @@ class WorkflowService: raise ValueError(f"Invalid HumanInput node data: {str(e)}") def update_workflow( - self, *, session: Session, workflow_id: str, tenant_id: str, account_id: str, data: dict + self, *, session: Session, workflow_id: str, tenant_id: str, account_id: str, data: dict[str, Any] ) -> Workflow | None: """ Update workflow attributes diff --git a/api/tasks/app_generate/workflow_execute_task.py b/api/tasks/app_generate/workflow_execute_task.py index 8f2f5f261e..c22e7e9918 100644 --- a/api/tasks/app_generate/workflow_execute_task.py +++ b/api/tasks/app_generate/workflow_execute_task.py @@ -7,7 +7,6 @@ from typing import Annotated, Any from celery import shared_task from flask import current_app, json -from graphon.runtime import GraphRuntimeState from pydantic import BaseModel, Discriminator, Field, Tag from sqlalchemy import Engine, select from sqlalchemy.orm import Session, sessionmaker @@ -23,6 +22,7 @@ from core.app.entities.app_invoke_entities import ( from core.app.layers.pause_state_persist_layer import PauseStateLayerConfig, WorkflowResumptionContext from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database import db +from graphon.runtime import GraphRuntimeState from libs.flask_utils import set_login_user from models.account import Account from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom diff --git a/api/tasks/async_workflow_tasks.py b/api/tasks/async_workflow_tasks.py index 9ff34c7c48..5809268992 100644 --- a/api/tasks/async_workflow_tasks.py +++ b/api/tasks/async_workflow_tasks.py @@ -10,7 +10,6 @@ from datetime import UTC, datetime from typing import Any, NotRequired from celery import shared_task -from graphon.runtime import GraphRuntimeState from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker from typing_extensions import TypedDict @@ -24,6 +23,7 @@ from core.app.layers.trigger_post_layer import TriggerPostLayer from core.db.session_factory import session_factory from core.repositories import DifyCoreRepositoryFactory from extensions.ext_database 
import db +from graphon.runtime import GraphRuntimeState from models.account import Account from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom, WorkflowTriggerStatus from models.model import App, EndUser, Tenant diff --git a/api/tasks/batch_create_segment_to_index_task.py b/api/tasks/batch_create_segment_to_index_task.py index 77feea47a2..beb23d8354 100644 --- a/api/tasks/batch_create_segment_to_index_task.py +++ b/api/tasks/batch_create_segment_to_index_task.py @@ -3,11 +3,11 @@ import tempfile import time import uuid from pathlib import Path +from typing import Any import click import pandas as pd from celery import shared_task -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy import func, select from core.db.session_factory import session_factory @@ -15,6 +15,7 @@ from core.model_manager import ModelManager from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType from extensions.ext_redis import redis_client from extensions.ext_storage import storage +from graphon.model_runtime.entities.model_entities import ModelType from libs import helper from libs.datetime_utils import naive_utc_now from models.dataset import Dataset, Document, DocumentSegment @@ -51,8 +52,8 @@ def batch_create_segment_to_index_task( # Initialize variables with default values upload_file_key: str | None = None - dataset_config: dict | None = None - document_config: dict | None = None + dataset_config: dict[str, Any] | None = None + document_config: dict[str, Any] | None = None with session_factory.create_session() as session: try: diff --git a/api/tasks/human_input_timeout_tasks.py b/api/tasks/human_input_timeout_tasks.py index ca73b4d374..fd743205a1 100644 --- a/api/tasks/human_input_timeout_tasks.py +++ b/api/tasks/human_input_timeout_tasks.py @@ -2,8 +2,6 @@ import logging from datetime import timedelta from celery import shared_task -from graphon.enums import WorkflowExecutionStatus -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from sqlalchemy import or_, select from sqlalchemy.orm import sessionmaker @@ -11,6 +9,8 @@ from configs import dify_config from core.repositories.human_input_repository import HumanInputFormSubmissionRepository from extensions.ext_database import db from extensions.ext_storage import storage +from graphon.enums import WorkflowExecutionStatus +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from libs.datetime_utils import ensure_naive_utc, naive_utc_now from models.human_input import HumanInputForm from models.workflow import WorkflowPause, WorkflowRun diff --git a/api/tasks/mail_human_input_delivery_task.py b/api/tasks/mail_human_input_delivery_task.py index a316eec7b9..f8ae3f4b6e 100644 --- a/api/tasks/mail_human_input_delivery_task.py +++ b/api/tasks/mail_human_input_delivery_task.py @@ -6,7 +6,6 @@ from typing import Any import click from celery import shared_task -from graphon.runtime import GraphRuntimeState, VariablePool from sqlalchemy import select from sqlalchemy.orm import Session, sessionmaker @@ -15,6 +14,7 @@ from core.app.layers.pause_state_persist_layer import WorkflowResumptionContext from core.workflow.human_input_compat import EmailDeliveryConfig, EmailDeliveryMethod from extensions.ext_database import db from extensions.ext_mail import mail +from graphon.runtime import GraphRuntimeState, VariablePool from models.human_input import ( DeliveryMethodType, HumanInputDelivery, diff --git 
a/api/tasks/mail_workflow_comment_task.py b/api/tasks/mail_workflow_comment_task.py new file mode 100644 index 0000000000..36d51f0514 --- /dev/null +++ b/api/tasks/mail_workflow_comment_task.py @@ -0,0 +1,65 @@ +import logging +import time + +import click +from celery import shared_task + +from extensions.ext_mail import mail +from libs.email_i18n import EmailType, get_email_i18n_service + +logger = logging.getLogger(__name__) + + +@shared_task(queue="mail") +def send_workflow_comment_mention_email_task( + language: str, + to: str, + mentioned_name: str, + commenter_name: str, + app_name: str, + comment_content: str, + app_url: str, +): + """ + Send workflow comment mention email with internationalization support. + + Args: + language: Language code for email localization + to: Recipient email address + mentioned_name: Name of the mentioned user + commenter_name: Name of the comment author + app_name: Name of the app where the comment was made + comment_content: Comment content excerpt + app_url: Link to the app workflow page + """ + if not mail.is_inited(): + return + + logger.info(click.style(f"Start workflow comment mention mail to {to}", fg="green")) + start_at = time.perf_counter() + + try: + email_service = get_email_i18n_service() + email_service.send_email( + email_type=EmailType.WORKFLOW_COMMENT_MENTION, + language_code=language, + to=to, + template_context={ + "to": to, + "mentioned_name": mentioned_name, + "commenter_name": commenter_name, + "app_name": app_name, + "comment_content": comment_content, + "app_url": app_url, + }, + ) + + end_at = time.perf_counter() + logger.info( + click.style( + f"Send workflow comment mention mail to {to} succeeded: latency: {end_at - start_at}", + fg="green", + ) + ) + except Exception: + logger.exception("workflow comment mention email to %s failed", to) diff --git a/api/tasks/remove_app_and_related_data_task.py b/api/tasks/remove_app_and_related_data_task.py index 72d824b8c1..5f1f0952af 100644 --- a/api/tasks/remove_app_and_related_data_task.py +++ b/api/tasks/remove_app_and_related_data_task.py @@ -679,7 +679,7 @@ def _delete_workflow_trigger_logs(tenant_id: str, app_id: str): ) -def _delete_records(query_sql: str, params: dict, delete_func: Callable, name: str) -> None: +def _delete_records(query_sql: str, params: dict[str, Any], delete_func: Callable, name: str) -> None: while True: with session_factory.create_session() as session: rs = session.execute(sa.text(query_sql), params) diff --git a/api/tasks/trigger_processing_tasks.py b/api/tasks/trigger_processing_tasks.py index b9f382eccf..b0cbc54db3 100644 --- a/api/tasks/trigger_processing_tasks.py +++ b/api/tasks/trigger_processing_tasks.py @@ -12,7 +12,6 @@ from datetime import UTC, datetime from typing import Any from celery import shared_task -from graphon.enums import WorkflowExecutionStatus from sqlalchemy import func, select from sqlalchemy.orm import Session @@ -29,6 +28,7 @@ from core.trigger.provider import PluginTriggerProviderController from core.trigger.trigger_manager import TriggerManager from core.workflow.nodes.trigger_plugin.entities import TriggerEventNodeData from enums.quota_type import QuotaType +from graphon.enums import WorkflowExecutionStatus from models.enums import ( AppTriggerType, CreatorUserRole, diff --git a/api/tasks/workflow_execution_tasks.py b/api/tasks/workflow_execution_tasks.py index 0c7f74c180..5ca04fd7c2 100644 --- a/api/tasks/workflow_execution_tasks.py +++ b/api/tasks/workflow_execution_tasks.py @@ -7,13 +7,14 @@ improving performance by offloading 
storage operations to background workers. import json import logging +from typing import Any from celery import shared_task -from graphon.entities import WorkflowExecution -from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from sqlalchemy import select from core.db.session_factory import session_factory +from graphon.entities import WorkflowExecution +from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter from models import CreatorUserRole, WorkflowRun from models.enums import WorkflowRunTriggeredFrom @@ -23,7 +24,7 @@ logger = logging.getLogger(__name__) @shared_task(queue="workflow_storage", bind=True, max_retries=3, default_retry_delay=60) def save_workflow_execution_task( self, - execution_data: dict, + execution_data: dict[str, Any], tenant_id: str, app_id: str, triggered_from: str, diff --git a/api/tasks/workflow_node_execution_tasks.py b/api/tasks/workflow_node_execution_tasks.py index f25ebe3bae..0d5475a56d 100644 --- a/api/tasks/workflow_node_execution_tasks.py +++ b/api/tasks/workflow_node_execution_tasks.py @@ -7,15 +7,16 @@ improving performance by offloading storage operations to background workers. import json import logging +from typing import Any from celery import shared_task +from sqlalchemy import select + +from core.db.session_factory import session_factory from graphon.entities.workflow_node_execution import ( WorkflowNodeExecution, ) from graphon.workflow_type_encoder import WorkflowRuntimeTypeConverter -from sqlalchemy import select - -from core.db.session_factory import session_factory from models import CreatorUserRole, WorkflowNodeExecutionModel from models.workflow import WorkflowNodeExecutionTriggeredFrom @@ -25,7 +26,7 @@ logger = logging.getLogger(__name__) @shared_task(queue="workflow_storage", bind=True, max_retries=3, default_retry_delay=60) def save_workflow_node_execution_task( self, - execution_data: dict, + execution_data: dict[str, Any], tenant_id: str, app_id: str, triggered_from: str, diff --git a/api/templates/without-brand/workflow_comment_mention_template_en-US.html b/api/templates/without-brand/workflow_comment_mention_template_en-US.html new file mode 100644 index 0000000000..1ef8fe4e3f --- /dev/null +++ b/api/templates/without-brand/workflow_comment_mention_template_en-US.html @@ -0,0 +1,119 @@ + + + + + + + + +
+ [HTML body omitted; visible text: Dify logo; "You were mentioned in a workflow comment"; "Hi {{ mentioned_name }},"; "{{ commenter_name }} mentioned you in {{ app_name }}."; "{{ comment_content }}"; "Open {{ application_title }} to reply to the comment."]
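For orientation, the `{{ ... }}` placeholders in this template correspond to the `template_context` keys that `send_workflow_comment_mention_email_task` passes to the i18n email service; `application_title` is presumably filled in by that service rather than by the task. A minimal, hypothetical Jinja2 sketch of the substitution (a one-line stand-in body, not the shipped 119-line template):

from jinja2 import Template

# Stand-in body; the placeholder names mirror the real templates.
body = Template(
    'Hi {{ mentioned_name }}, {{ commenter_name }} mentioned you in {{ app_name }}: '
    '"{{ comment_content }}". Open {{ application_title }} to reply.'
)

# Hypothetical values for illustration only.
context = {
    "mentioned_name": "Alice",
    "commenter_name": "Bob",
    "app_name": "Order Triage Workflow",
    "comment_content": "Can you double-check this node?",
    "application_title": "Dify",  # assumed to be injected by the email service, not the task
}

print(body.render(context))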
diff --git a/api/templates/without-brand/workflow_comment_mention_template_zh-CN.html b/api/templates/without-brand/workflow_comment_mention_template_zh-CN.html new file mode 100644 index 0000000000..8b9b2dbe71 --- /dev/null +++ b/api/templates/without-brand/workflow_comment_mention_template_zh-CN.html @@ -0,0 +1,119 @@
+ [HTML body omitted; visible text: Dify logo; "你在工作流评论中被提及"; "你好,{{ mentioned_name }}:"; "{{ commenter_name }} 在 {{ app_name }} 中提及了你。"; "{{ comment_content }}"; "请在 {{ application_title }} 中查看并回复此评论。"]
diff --git a/api/templates/workflow_comment_mention_template_en-US.html b/api/templates/workflow_comment_mention_template_en-US.html new file mode 100644 index 0000000000..1ef8fe4e3f --- /dev/null +++ b/api/templates/workflow_comment_mention_template_en-US.html @@ -0,0 +1,119 @@
+ [HTML body omitted; visible text: Dify logo; "You were mentioned in a workflow comment"; "Hi {{ mentioned_name }},"; "{{ commenter_name }} mentioned you in {{ app_name }}."; "{{ comment_content }}"; "Open {{ application_title }} to reply to the comment."]
diff --git a/api/templates/workflow_comment_mention_template_zh-CN.html b/api/templates/workflow_comment_mention_template_zh-CN.html new file mode 100644 index 0000000000..8b9b2dbe71 --- /dev/null +++ b/api/templates/workflow_comment_mention_template_zh-CN.html @@ -0,0 +1,119 @@
+ [HTML body omitted; visible text: Dify logo; "你在工作流评论中被提及"; "你好,{{ mentioned_name }}:"; "{{ commenter_name }} 在 {{ app_name }} 中提及了你。"; "{{ comment_content }}"; "请在 {{ application_title }} 中查看并回复此评论。"]
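Downstream, the service layer is expected to hand one payload per mentioned user to this Celery task; the payload construction itself (`_build_mention_email_payloads` / `_dispatch_mention_emails`) is not shown in this diff, so the following is only a sketch under that assumption, with the recipient's language selecting between the en-US and zh-CN templates above:

from tasks.mail_workflow_comment_task import send_workflow_comment_mention_email_task

def dispatch_mention_emails(payloads: list[dict[str, str]]) -> None:
    # Hypothetical dispatcher; assumes each payload already carries the fields the task expects.
    for payload in payloads:
        send_workflow_comment_mention_email_task.delay(
            language=payload.get("language", "en-US"),  # falls back to the English templates
            to=payload["to"],
            mentioned_name=payload["mentioned_name"],
            commenter_name=payload["commenter_name"],
            app_name=payload["app_name"],
            comment_content=payload["comment_content"],
            app_url=payload["app_url"],
        )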
+ + + diff --git a/api/tests/integration_tests/.env.example b/api/tests/integration_tests/.env.example index f84d39aeb5..c07ab6d6bf 100644 --- a/api/tests/integration_tests/.env.example +++ b/api/tests/integration_tests/.env.example @@ -33,6 +33,7 @@ REDIS_USERNAME= REDIS_PASSWORD=difyai123456 REDIS_USE_SSL=false REDIS_DB=0 +REDIS_KEY_PREFIX= # PostgreSQL database configuration DB_USERNAME=postgres diff --git a/api/tests/integration_tests/conftest.py b/api/tests/integration_tests/conftest.py index b2e8dda443..09078d196d 100644 --- a/api/tests/integration_tests/conftest.py +++ b/api/tests/integration_tests/conftest.py @@ -48,7 +48,7 @@ os.environ["OPENDAL_FS_ROOT"] = "/tmp/dify-storage" os.environ.setdefault("STORAGE_TYPE", "opendal") os.environ.setdefault("OPENDAL_SCHEME", "fs") -_CACHED_APP = create_app() +_SIO_APP, _CACHED_APP = create_app() @pytest.fixture(scope="session") diff --git a/api/tests/integration_tests/controllers/console/app/test_workflow_draft_variable.py b/api/tests/integration_tests/controllers/console/app/test_workflow_draft_variable.py deleted file mode 100644 index 038f37af5f..0000000000 --- a/api/tests/integration_tests/controllers/console/app/test_workflow_draft_variable.py +++ /dev/null @@ -1,47 +0,0 @@ -import uuid -from unittest import mock - -from controllers.console.app import workflow_draft_variable as draft_variable_api -from controllers.console.app import wraps -from factories.variable_factory import build_segment -from models import App, AppMode -from models.workflow import WorkflowDraftVariable -from services.workflow_draft_variable_service import WorkflowDraftVariableList, WorkflowDraftVariableService - - -def _get_mock_srv_class() -> type[WorkflowDraftVariableService]: - return mock.create_autospec(WorkflowDraftVariableService) - - -class TestWorkflowDraftNodeVariableListApi: - def test_get(self, test_client, auth_header, monkeypatch): - srv_class = _get_mock_srv_class() - mock_app_model: App = App() - mock_app_model.id = str(uuid.uuid4()) - test_node_id = "test_node_id" - mock_app_model.mode = AppMode.ADVANCED_CHAT - mock_load_app_model = mock.Mock(return_value=mock_app_model) - - monkeypatch.setattr(draft_variable_api, "WorkflowDraftVariableService", srv_class) - monkeypatch.setattr(wraps, "_load_app_model", mock_load_app_model) - - var1 = WorkflowDraftVariable.new_node_variable( - app_id="test_app_1", - node_id="test_node_1", - name="str_var", - value=build_segment("str_value"), - node_execution_id=str(uuid.uuid4()), - ) - srv_instance = mock.create_autospec(WorkflowDraftVariableService, instance=True) - srv_class.return_value = srv_instance - srv_instance.list_node_variables.return_value = WorkflowDraftVariableList(variables=[var1]) - - response = test_client.get( - f"/console/api/apps/{mock_app_model.id}/workflows/draft/nodes/{test_node_id}/variables", - headers=auth_header, - ) - assert response.status_code == 200 - response_dict = response.json - assert isinstance(response_dict, dict) - assert "items" in response_dict - assert len(response_dict["items"]) == 1 diff --git a/api/tests/integration_tests/controllers/console/workspace/test_trigger_provider_permissions.py b/api/tests/integration_tests/controllers/console/workspace/test_trigger_provider_permissions.py deleted file mode 100644 index e55c12e678..0000000000 --- a/api/tests/integration_tests/controllers/console/workspace/test_trigger_provider_permissions.py +++ /dev/null @@ -1,244 +0,0 @@ -"""Integration tests for Trigger Provider subscription permission verification.""" - -import uuid -from 
unittest import mock - -import pytest -from flask.testing import FlaskClient - -from controllers.console.workspace import trigger_providers as trigger_providers_api -from libs.datetime_utils import naive_utc_now -from models import Tenant -from models.account import Account, TenantAccountJoin, TenantAccountRole - - -class TestTriggerProviderSubscriptionPermissions: - """Test permission verification for Trigger Provider subscription endpoints.""" - - @pytest.fixture - def mock_account(self, monkeypatch: pytest.MonkeyPatch): - """Create a mock Account for testing.""" - - account = Account(name="Test User", email="test@example.com") - account.id = str(uuid.uuid4()) - account.last_active_at = naive_utc_now() - account.created_at = naive_utc_now() - account.updated_at = naive_utc_now() - - # Create mock tenant - tenant = Tenant(name="Test Tenant") - tenant.id = str(uuid.uuid4()) - - mock_session_instance = mock.Mock() - - mock_tenant_join = TenantAccountJoin(role=TenantAccountRole.OWNER) - monkeypatch.setattr(mock_session_instance, "scalar", mock.Mock(return_value=mock_tenant_join)) - - mock_scalars_result = mock.Mock() - mock_scalars_result.one.return_value = tenant - monkeypatch.setattr(mock_session_instance, "scalars", mock.Mock(return_value=mock_scalars_result)) - - mock_session_context = mock.Mock() - mock_session_context.__enter__.return_value = mock_session_instance - monkeypatch.setattr("models.account.Session", lambda _, expire_on_commit: mock_session_context) - - account.current_tenant = tenant - account.current_tenant_id = tenant.id - return account - - @pytest.mark.parametrize( - ("role", "list_status", "get_status", "update_status", "create_status", "build_status", "delete_status"), - [ - # Admin/Owner can do everything - (TenantAccountRole.OWNER, 200, 200, 200, 200, 200, 200), - (TenantAccountRole.ADMIN, 200, 200, 200, 200, 200, 200), - # Editor can list, get, update (parameters), but not create, build, or delete - (TenantAccountRole.EDITOR, 200, 200, 200, 403, 403, 403), - # Normal user cannot do anything - (TenantAccountRole.NORMAL, 403, 403, 403, 403, 403, 403), - # Dataset operator cannot do anything - (TenantAccountRole.DATASET_OPERATOR, 403, 403, 403, 403, 403, 403), - ], - ) - def test_trigger_subscription_permissions( - self, - test_client: FlaskClient, - auth_header, - monkeypatch, - mock_account, - role: TenantAccountRole, - list_status: int, - get_status: int, - update_status: int, - create_status: int, - build_status: int, - delete_status: int, - ): - """Test that different roles have appropriate permissions for trigger subscription operations.""" - # Set user role - mock_account.role = role - - # Mock current user - monkeypatch.setattr(trigger_providers_api, "current_user", mock_account) - - # Mock AccountService.load_user to prevent authentication issues - from services.account_service import AccountService - - mock_load_user = mock.Mock(return_value=mock_account) - monkeypatch.setattr(AccountService, "load_user", mock_load_user) - - # Test data - provider = "some_provider/some_trigger" - subscription_builder_id = str(uuid.uuid4()) - subscription_id = str(uuid.uuid4()) - - # Mock service methods - mock_list_subscriptions = mock.Mock(return_value=[]) - monkeypatch.setattr( - "services.trigger.trigger_provider_service.TriggerProviderService.list_trigger_provider_subscriptions", - mock_list_subscriptions, - ) - - mock_get_subscription_builder = mock.Mock(return_value={"id": subscription_builder_id}) - monkeypatch.setattr( - 
"services.trigger.trigger_subscription_builder_service.TriggerSubscriptionBuilderService.get_subscription_builder_by_id", - mock_get_subscription_builder, - ) - - mock_update_subscription_builder = mock.Mock(return_value={"id": subscription_builder_id}) - monkeypatch.setattr( - "services.trigger.trigger_subscription_builder_service.TriggerSubscriptionBuilderService.update_trigger_subscription_builder", - mock_update_subscription_builder, - ) - - mock_create_subscription_builder = mock.Mock(return_value={"id": subscription_builder_id}) - monkeypatch.setattr( - "services.trigger.trigger_subscription_builder_service.TriggerSubscriptionBuilderService.create_trigger_subscription_builder", - mock_create_subscription_builder, - ) - - mock_update_and_build_builder = mock.Mock() - monkeypatch.setattr( - "services.trigger.trigger_subscription_builder_service.TriggerSubscriptionBuilderService.update_and_build_builder", - mock_update_and_build_builder, - ) - - mock_delete_provider = mock.Mock() - mock_delete_plugin_trigger = mock.Mock() - mock_db_session = mock.Mock() - mock_db_session.commit = mock.Mock() - - def mock_session_func(engine=None): - return mock_session_context - - mock_session_context = mock.Mock() - mock_session_context.__enter__.return_value = mock_db_session - mock_session_context.__exit__.return_value = None - - monkeypatch.setattr("services.trigger.trigger_provider_service.Session", mock_session_func) - monkeypatch.setattr("services.trigger.trigger_subscription_operator_service.Session", mock_session_func) - - monkeypatch.setattr( - "services.trigger.trigger_provider_service.TriggerProviderService.delete_trigger_provider", - mock_delete_provider, - ) - monkeypatch.setattr( - "services.trigger.trigger_subscription_operator_service.TriggerSubscriptionOperatorService.delete_plugin_trigger_by_subscription", - mock_delete_plugin_trigger, - ) - - # Test 1: List subscriptions (should work for Editor, Admin, Owner) - response = test_client.get( - f"/console/api/workspaces/current/trigger-provider/{provider}/subscriptions/list", - headers=auth_header, - ) - assert response.status_code == list_status - - # Test 2: Get subscription builder (should work for Editor, Admin, Owner) - response = test_client.get( - f"/console/api/workspaces/current/trigger-provider/{provider}/subscriptions/builder/{subscription_builder_id}", - headers=auth_header, - ) - assert response.status_code == get_status - - # Test 3: Update subscription builder parameters (should work for Editor, Admin, Owner) - response = test_client.post( - f"/console/api/workspaces/current/trigger-provider/{provider}/subscriptions/builder/update/{subscription_builder_id}", - headers=auth_header, - json={"parameters": {"webhook_url": "https://example.com/webhook"}}, - ) - assert response.status_code == update_status - - # Test 4: Create subscription builder (should only work for Admin, Owner) - response = test_client.post( - f"/console/api/workspaces/current/trigger-provider/{provider}/subscriptions/builder/create", - headers=auth_header, - json={"credential_type": "api_key"}, - ) - assert response.status_code == create_status - - # Test 5: Build/activate subscription (should only work for Admin, Owner) - response = test_client.post( - f"/console/api/workspaces/current/trigger-provider/{provider}/subscriptions/builder/build/{subscription_builder_id}", - headers=auth_header, - json={"name": "Test Subscription"}, - ) - assert response.status_code == build_status - - # Test 6: Delete subscription (should only work for Admin, Owner) - 
response = test_client.post( - f"/console/api/workspaces/current/trigger-provider/{subscription_id}/subscriptions/delete", - headers=auth_header, - ) - assert response.status_code == delete_status - - @pytest.mark.parametrize( - ("role", "status"), - [ - (TenantAccountRole.OWNER, 200), - (TenantAccountRole.ADMIN, 200), - # Editor should be able to access logs for debugging - (TenantAccountRole.EDITOR, 200), - (TenantAccountRole.NORMAL, 403), - (TenantAccountRole.DATASET_OPERATOR, 403), - ], - ) - def test_trigger_subscription_logs_permissions( - self, - test_client: FlaskClient, - auth_header, - monkeypatch, - mock_account, - role: TenantAccountRole, - status: int, - ): - """Test that different roles have appropriate permissions for accessing subscription logs.""" - # Set user role - mock_account.role = role - - # Mock current user - monkeypatch.setattr(trigger_providers_api, "current_user", mock_account) - - # Mock AccountService.load_user to prevent authentication issues - from services.account_service import AccountService - - mock_load_user = mock.Mock(return_value=mock_account) - monkeypatch.setattr(AccountService, "load_user", mock_load_user) - - # Test data - provider = "some_provider/some_trigger" - subscription_builder_id = str(uuid.uuid4()) - - # Mock service method - mock_list_logs = mock.Mock(return_value=[]) - monkeypatch.setattr( - "services.trigger.trigger_subscription_builder_service.TriggerSubscriptionBuilderService.list_logs", - mock_list_logs, - ) - - # Test access to logs - response = test_client.get( - f"/console/api/workspaces/current/trigger-provider/{provider}/subscriptions/builder/logs/{subscription_builder_id}", - headers=auth_header, - ) - assert response.status_code == status diff --git a/api/tests/integration_tests/core/datasource/test_datasource_manager_integration.py b/api/tests/integration_tests/core/datasource/test_datasource_manager_integration.py index 91245e879e..a876b0c4aa 100644 --- a/api/tests/integration_tests/core/datasource/test_datasource_manager_integration.py +++ b/api/tests/integration_tests/core/datasource/test_datasource_manager_integration.py @@ -1,9 +1,8 @@ from collections.abc import Generator -from graphon.node_events import StreamCompletedEvent - from core.datasource.datasource_manager import DatasourceManager from core.datasource.entities.datasource_entities import DatasourceMessage +from graphon.node_events import StreamCompletedEvent def _gen_var_stream() -> Generator[DatasourceMessage, None, None]: diff --git a/api/tests/integration_tests/core/workflow/nodes/datasource/test_datasource_node_integration.py b/api/tests/integration_tests/core/workflow/nodes/datasource/test_datasource_node_integration.py index 3fdea10976..b5318aaa2b 100644 --- a/api/tests/integration_tests/core/workflow/nodes/datasource/test_datasource_node_integration.py +++ b/api/tests/integration_tests/core/workflow/nodes/datasource/test_datasource_node_integration.py @@ -1,8 +1,7 @@ -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.node_events import NodeRunResult, StreamCompletedEvent - from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY from core.workflow.nodes.datasource.datasource_node import DatasourceNode +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.node_events import NodeRunResult, StreamCompletedEvent class _Seg: diff --git a/api/tests/integration_tests/factories/test_storage_key_loader.py b/api/tests/integration_tests/factories/test_storage_key_loader.py deleted file mode 100644 index 
c1bb8e1245..0000000000 --- a/api/tests/integration_tests/factories/test_storage_key_loader.py +++ /dev/null @@ -1,375 +0,0 @@ -import unittest -from datetime import UTC, datetime -from unittest.mock import patch -from uuid import uuid4 - -import pytest -from graphon.file import File, FileTransferMethod, FileType -from sqlalchemy.orm import Session - -from core.app.file_access import DatabaseFileAccessController -from extensions.ext_database import db -from extensions.storage.storage_type import StorageType -from factories.file_factory import StorageKeyLoader -from models import ToolFile, UploadFile -from models.enums import CreatorUserRole - - -@pytest.mark.usefixtures("flask_req_ctx") -class TestStorageKeyLoader(unittest.TestCase): - """ - Integration tests for StorageKeyLoader class. - - Tests the batched loading of storage keys from the database for files - with different transfer methods: LOCAL_FILE, REMOTE_URL, and TOOL_FILE. - """ - - def setUp(self): - """Set up test data before each test method.""" - self.session = db.session() - self.tenant_id = str(uuid4()) - self.user_id = str(uuid4()) - self.conversation_id = str(uuid4()) - - # Create test data that will be cleaned up after each test - self.test_upload_files = [] - self.test_tool_files = [] - - # Create StorageKeyLoader instance - self.loader = StorageKeyLoader( - self.session, - self.tenant_id, - access_controller=DatabaseFileAccessController(), - ) - - def tearDown(self): - """Clean up test data after each test method.""" - self.session.rollback() - - def _create_upload_file( - self, file_id: str | None = None, storage_key: str | None = None, tenant_id: str | None = None - ) -> UploadFile: - """Helper method to create an UploadFile record for testing.""" - if file_id is None: - file_id = str(uuid4()) - if storage_key is None: - storage_key = f"test_storage_key_{uuid4()}" - if tenant_id is None: - tenant_id = self.tenant_id - - upload_file = UploadFile( - tenant_id=tenant_id, - storage_type=StorageType.LOCAL, - key=storage_key, - name="test_file.txt", - size=1024, - extension=".txt", - mime_type="text/plain", - created_by_role=CreatorUserRole.ACCOUNT, - created_by=self.user_id, - created_at=datetime.now(UTC), - used=False, - ) - upload_file.id = file_id - - self.session.add(upload_file) - self.session.flush() - self.test_upload_files.append(upload_file) - - return upload_file - - def _create_tool_file( - self, file_id: str | None = None, file_key: str | None = None, tenant_id: str | None = None - ) -> ToolFile: - """Helper method to create a ToolFile record for testing.""" - if file_id is None: - file_id = str(uuid4()) - if file_key is None: - file_key = f"test_file_key_{uuid4()}" - if tenant_id is None: - tenant_id = self.tenant_id - - tool_file = ToolFile( - user_id=self.user_id, - tenant_id=tenant_id, - conversation_id=self.conversation_id, - file_key=file_key, - mimetype="text/plain", - original_url="http://example.com/file.txt", - name="test_tool_file.txt", - size=2048, - ) - tool_file.id = file_id - self.session.add(tool_file) - self.session.flush() - self.test_tool_files.append(tool_file) - - return tool_file - - def _create_file(self, related_id: str, transfer_method: FileTransferMethod, tenant_id: str | None = None) -> File: - """Helper method to create a File object for testing.""" - if tenant_id is None: - tenant_id = self.tenant_id - - # Set related_id for LOCAL_FILE and TOOL_FILE transfer methods - file_related_id = None - remote_url = None - - if transfer_method in (FileTransferMethod.LOCAL_FILE, 
FileTransferMethod.TOOL_FILE): - file_related_id = related_id - elif transfer_method == FileTransferMethod.REMOTE_URL: - remote_url = "https://example.com/test_file.txt" - file_related_id = related_id - - return File( - id=str(uuid4()), # Generate new UUID for File.id - tenant_id=tenant_id, - type=FileType.DOCUMENT, - transfer_method=transfer_method, - related_id=file_related_id, - remote_url=remote_url, - filename="test_file.txt", - extension=".txt", - mime_type="text/plain", - size=1024, - storage_key="initial_key", - ) - - def test_load_storage_keys_local_file(self): - """Test loading storage keys for LOCAL_FILE transfer method.""" - # Create test data - upload_file = self._create_upload_file() - file = self._create_file(related_id=upload_file.id, transfer_method=FileTransferMethod.LOCAL_FILE) - - # Load storage keys - self.loader.load_storage_keys([file]) - - # Verify storage key was loaded correctly - assert file._storage_key == upload_file.key - - def test_load_storage_keys_remote_url(self): - """Test loading storage keys for REMOTE_URL transfer method.""" - # Create test data - upload_file = self._create_upload_file() - file = self._create_file(related_id=upload_file.id, transfer_method=FileTransferMethod.REMOTE_URL) - - # Load storage keys - self.loader.load_storage_keys([file]) - - # Verify storage key was loaded correctly - assert file._storage_key == upload_file.key - - def test_load_storage_keys_tool_file(self): - """Test loading storage keys for TOOL_FILE transfer method.""" - # Create test data - tool_file = self._create_tool_file() - file = self._create_file(related_id=tool_file.id, transfer_method=FileTransferMethod.TOOL_FILE) - - # Load storage keys - self.loader.load_storage_keys([file]) - - # Verify storage key was loaded correctly - assert file._storage_key == tool_file.file_key - - def test_load_storage_keys_mixed_methods(self): - """Test batch loading with mixed transfer methods.""" - # Create test data for different transfer methods - upload_file1 = self._create_upload_file() - upload_file2 = self._create_upload_file() - tool_file = self._create_tool_file() - - file1 = self._create_file(related_id=upload_file1.id, transfer_method=FileTransferMethod.LOCAL_FILE) - file2 = self._create_file(related_id=upload_file2.id, transfer_method=FileTransferMethod.REMOTE_URL) - file3 = self._create_file(related_id=tool_file.id, transfer_method=FileTransferMethod.TOOL_FILE) - - files = [file1, file2, file3] - - # Load storage keys - self.loader.load_storage_keys(files) - - # Verify all storage keys were loaded correctly - assert file1._storage_key == upload_file1.key - assert file2._storage_key == upload_file2.key - assert file3._storage_key == tool_file.file_key - - def test_load_storage_keys_empty_list(self): - """Test with empty file list.""" - # Should not raise any exceptions - self.loader.load_storage_keys([]) - - def test_load_storage_keys_ignores_legacy_file_tenant_id(self): - """Legacy file tenant_id should not override the loader tenant scope.""" - upload_file = self._create_upload_file() - file = self._create_file( - related_id=upload_file.id, transfer_method=FileTransferMethod.LOCAL_FILE, tenant_id=str(uuid4()) - ) - - self.loader.load_storage_keys([file]) - - assert file._storage_key == upload_file.key - - def test_load_storage_keys_missing_file_id(self): - """Test with None file.related_id.""" - # Create a file with valid parameters first, then manually set related_id to None - file = self._create_file(related_id=str(uuid4()), 
transfer_method=FileTransferMethod.LOCAL_FILE) - file.related_id = None - - # Should raise ValueError for None file related_id - with pytest.raises(ValueError) as context: - self.loader.load_storage_keys([file]) - - assert str(context.value) == "file id should not be None." - - def test_load_storage_keys_nonexistent_upload_file_records(self): - """Test with missing UploadFile database records.""" - # Create file with non-existent upload file id - non_existent_id = str(uuid4()) - file = self._create_file(related_id=non_existent_id, transfer_method=FileTransferMethod.LOCAL_FILE) - - # Should raise ValueError for missing record - with pytest.raises(ValueError): - self.loader.load_storage_keys([file]) - - def test_load_storage_keys_nonexistent_tool_file_records(self): - """Test with missing ToolFile database records.""" - # Create file with non-existent tool file id - non_existent_id = str(uuid4()) - file = self._create_file(related_id=non_existent_id, transfer_method=FileTransferMethod.TOOL_FILE) - - # Should raise ValueError for missing record - with pytest.raises(ValueError): - self.loader.load_storage_keys([file]) - - def test_load_storage_keys_invalid_uuid(self): - """Test with invalid UUID format.""" - # Create a file with valid parameters first, then manually set invalid related_id - file = self._create_file(related_id=str(uuid4()), transfer_method=FileTransferMethod.LOCAL_FILE) - file.related_id = "invalid-uuid-format" - - # Should raise ValueError for invalid UUID - with pytest.raises(ValueError): - self.loader.load_storage_keys([file]) - - def test_load_storage_keys_batch_efficiency(self): - """Test batched operations use efficient queries.""" - # Create multiple files of different types - upload_files = [self._create_upload_file() for _ in range(3)] - tool_files = [self._create_tool_file() for _ in range(2)] - - files = [] - files.extend( - [self._create_file(related_id=uf.id, transfer_method=FileTransferMethod.LOCAL_FILE) for uf in upload_files] - ) - files.extend( - [self._create_file(related_id=tf.id, transfer_method=FileTransferMethod.TOOL_FILE) for tf in tool_files] - ) - - # Mock the session to count queries - with patch.object(self.session, "scalars", wraps=self.session.scalars) as mock_scalars: - self.loader.load_storage_keys(files) - - # Should make exactly 2 queries (one for upload_files, one for tool_files) - assert mock_scalars.call_count == 2 - - # Verify all storage keys were loaded correctly - for i, file in enumerate(files[:3]): - assert file._storage_key == upload_files[i].key - for i, file in enumerate(files[3:]): - assert file._storage_key == tool_files[i].file_key - - def test_load_storage_keys_tenant_isolation(self): - """Test that tenant isolation works correctly.""" - # Create files for different tenants - other_tenant_id = str(uuid4()) - - # Create upload file for current tenant - upload_file_current = self._create_upload_file() - file_current = self._create_file( - related_id=upload_file_current.id, transfer_method=FileTransferMethod.LOCAL_FILE - ) - - # Create upload file for other tenant (but don't add to cleanup list) - upload_file_other = UploadFile( - tenant_id=other_tenant_id, - storage_type=StorageType.LOCAL, - key="other_tenant_key", - name="other_file.txt", - size=1024, - extension=".txt", - mime_type="text/plain", - created_by_role=CreatorUserRole.ACCOUNT, - created_by=self.user_id, - created_at=datetime.now(UTC), - used=False, - ) - upload_file_other.id = str(uuid4()) - self.session.add(upload_file_other) - self.session.flush() - - # Create file 
for other tenant but try to load with current tenant's loader - file_other = self._create_file( - related_id=upload_file_other.id, transfer_method=FileTransferMethod.LOCAL_FILE, tenant_id=other_tenant_id - ) - - # Should raise ValueError due to tenant mismatch - with pytest.raises(ValueError) as context: - self.loader.load_storage_keys([file_other]) - - assert "Upload file not found for id:" in str(context.value) - - # Current tenant's file should still work - self.loader.load_storage_keys([file_current]) - assert file_current._storage_key == upload_file_current.key - - def test_load_storage_keys_mixed_tenant_batch(self): - """Test batch with mixed tenant files (should fail on first mismatch).""" - # Create files for current tenant - upload_file_current = self._create_upload_file() - file_current = self._create_file( - related_id=upload_file_current.id, transfer_method=FileTransferMethod.LOCAL_FILE - ) - - # Create file for different tenant - other_tenant_id = str(uuid4()) - file_other = self._create_file( - related_id=str(uuid4()), transfer_method=FileTransferMethod.LOCAL_FILE, tenant_id=other_tenant_id - ) - - # Should raise ValueError on tenant mismatch - with pytest.raises(ValueError) as context: - self.loader.load_storage_keys([file_current, file_other]) - - assert "Upload file not found for id:" in str(context.value) - - def test_load_storage_keys_duplicate_file_ids(self): - """Test handling of duplicate file IDs in the batch.""" - # Create upload file - upload_file = self._create_upload_file() - - # Create two File objects with same related_id - file1 = self._create_file(related_id=upload_file.id, transfer_method=FileTransferMethod.LOCAL_FILE) - file2 = self._create_file(related_id=upload_file.id, transfer_method=FileTransferMethod.LOCAL_FILE) - - # Should handle duplicates gracefully - self.loader.load_storage_keys([file1, file2]) - - # Both files should have the same storage key - assert file1._storage_key == upload_file.key - assert file2._storage_key == upload_file.key - - def test_load_storage_keys_session_isolation(self): - """Test that the loader uses the provided session correctly.""" - # Create test data - upload_file = self._create_upload_file() - file = self._create_file(related_id=upload_file.id, transfer_method=FileTransferMethod.LOCAL_FILE) - - # Create loader with different session (same underlying connection) - - with Session(bind=db.engine) as other_session: - other_loader = StorageKeyLoader( - other_session, - self.tenant_id, - access_controller=DatabaseFileAccessController(), - ) - with pytest.raises(ValueError): - other_loader.load_storage_keys([file]) diff --git a/api/tests/integration_tests/model_runtime/__mock/plugin_model.py b/api/tests/integration_tests/model_runtime/__mock/plugin_model.py index ce04a158a8..c4146d5ccd 100644 --- a/api/tests/integration_tests/model_runtime/__mock/plugin_model.py +++ b/api/tests/integration_tests/model_runtime/__mock/plugin_model.py @@ -4,6 +4,9 @@ from collections.abc import Generator, Sequence from decimal import Decimal from json import dumps +from core.plugin.entities.plugin_daemon import PluginModelProviderEntity +from core.plugin.impl.model import PluginModelClient + # import monkeypatch from graphon.model_runtime.entities.common_entities import I18nObject from graphon.model_runtime.entities.llm_entities import ( @@ -23,9 +26,6 @@ from graphon.model_runtime.entities.model_entities import ( ) from graphon.model_runtime.entities.provider_entities import ConfigurateMethod, ProviderEntity -from 
core.plugin.entities.plugin_daemon import PluginModelProviderEntity -from core.plugin.impl.model import PluginModelClient - class MockModelClass(PluginModelClient): def fetch_model_providers(self, tenant_id: str) -> Sequence[PluginModelProviderEntity]: diff --git a/api/tests/integration_tests/services/test_workflow_draft_variable_service.py b/api/tests/integration_tests/services/test_workflow_draft_variable_service.py index c7bb90f019..e130644338 100644 --- a/api/tests/integration_tests/services/test_workflow_draft_variable_service.py +++ b/api/tests/integration_tests/services/test_workflow_draft_variable_service.py @@ -3,10 +3,6 @@ import unittest import uuid import pytest -from graphon.nodes import BuiltinNodeTypes -from graphon.variables.segments import StringSegment -from graphon.variables.types import SegmentType -from graphon.variables.variables import StringVariable from sqlalchemy import delete, func, select from sqlalchemy.orm import Session @@ -15,6 +11,10 @@ from extensions.ext_database import db from extensions.ext_storage import storage from extensions.storage.storage_type import StorageType from factories.variable_factory import build_segment +from graphon.nodes import BuiltinNodeTypes +from graphon.variables.segments import StringSegment +from graphon.variables.types import SegmentType +from graphon.variables.variables import StringVariable from libs import datetime_utils from models.enums import CreatorUserRole from models.model import UploadFile diff --git a/api/tests/integration_tests/tasks/test_remove_app_and_related_data_task.py b/api/tests/integration_tests/tasks/test_remove_app_and_related_data_task.py index 3dfedd811d..4f444598b1 100644 --- a/api/tests/integration_tests/tasks/test_remove_app_and_related_data_task.py +++ b/api/tests/integration_tests/tasks/test_remove_app_and_related_data_task.py @@ -2,11 +2,11 @@ import uuid from unittest.mock import patch import pytest -from graphon.variables.segments import StringSegment from sqlalchemy import delete, func, select from core.db.session_factory import session_factory from extensions.storage.storage_type import StorageType +from graphon.variables.segments import StringSegment from models import Tenant from models.enums import CreatorUserRole from models.model import App, UploadFile @@ -209,7 +209,6 @@ class TestDeleteDraftVariablesWithOffloadIntegration: def setup_offload_test_data(self, app_and_tenant): tenant, app = app_and_tenant from graphon.variables.types import SegmentType - from libs.datetime_utils import naive_utc_now with session_factory.create_session() as session: @@ -453,7 +452,6 @@ class TestDeleteDraftVariablesSessionCommit: def setup_offload_test_data(self, app_and_tenant): """Create test data with offload files for session commit tests.""" from graphon.variables.types import SegmentType - from libs.datetime_utils import naive_utc_now tenant, app = app_and_tenant diff --git a/api/tests/integration_tests/workflow/nodes/__mock/model.py b/api/tests/integration_tests/workflow/nodes/__mock/model.py index c0143faa85..a9a2617bae 100644 --- a/api/tests/integration_tests/workflow/nodes/__mock/model.py +++ b/api/tests/integration_tests/workflow/nodes/__mock/model.py @@ -1,12 +1,11 @@ from unittest.mock import MagicMock -from graphon.model_runtime.entities.model_entities import ModelType - from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.entities.provider_configuration import ProviderConfiguration, ProviderModelBundle from core.entities.provider_entities import 
CustomConfiguration, CustomProviderConfiguration, SystemConfiguration from core.model_manager import ModelInstance from core.plugin.impl.model_runtime_factory import create_plugin_model_provider_factory +from graphon.model_runtime.entities.model_entities import ModelType from models.provider import ProviderType diff --git a/api/tests/integration_tests/workflow/nodes/code_executor/test_code_executor.py b/api/tests/integration_tests/workflow/nodes/code_executor/test_code_executor.py deleted file mode 100644 index 487178ff58..0000000000 --- a/api/tests/integration_tests/workflow/nodes/code_executor/test_code_executor.py +++ /dev/null @@ -1,11 +0,0 @@ -import pytest - -from core.helper.code_executor.code_executor import CodeExecutionError, CodeExecutor - -CODE_LANGUAGE = "unsupported_language" - - -def test_unsupported_with_code_template(): - with pytest.raises(CodeExecutionError) as e: - CodeExecutor.execute_workflow_code_template(language=CODE_LANGUAGE, code="", inputs={}) - assert str(e.value) == f"Unsupported language {CODE_LANGUAGE}" diff --git a/api/tests/integration_tests/workflow/nodes/code_executor/test_code_jinja2.py b/api/tests/integration_tests/workflow/nodes/code_executor/test_code_jinja2.py deleted file mode 100644 index c8eb9ec3e4..0000000000 --- a/api/tests/integration_tests/workflow/nodes/code_executor/test_code_jinja2.py +++ /dev/null @@ -1,95 +0,0 @@ -import base64 - -from core.helper.code_executor.code_executor import CodeExecutor, CodeLanguage -from core.helper.code_executor.jinja2.jinja2_transformer import Jinja2TemplateTransformer - -CODE_LANGUAGE = CodeLanguage.JINJA2 - - -def test_jinja2(): - """Test basic Jinja2 template rendering.""" - template = "Hello {{template}}" - # Template must be base64 encoded to match the new safe embedding approach - template_b64 = base64.b64encode(template.encode("utf-8")).decode("utf-8") - inputs = base64.b64encode(b'{"template": "World"}').decode("utf-8") - code = ( - Jinja2TemplateTransformer.get_runner_script() - .replace(Jinja2TemplateTransformer._template_b64_placeholder, template_b64) - .replace(Jinja2TemplateTransformer._inputs_placeholder, inputs) - ) - result = CodeExecutor.execute_code( - language=CODE_LANGUAGE, preload=Jinja2TemplateTransformer.get_preload_script(), code=code - ) - assert result == "<>Hello World<>\n" - - -def test_jinja2_with_code_template(): - """Test template rendering via the high-level workflow API.""" - result = CodeExecutor.execute_workflow_code_template( - language=CODE_LANGUAGE, code="Hello {{template}}", inputs={"template": "World"} - ) - assert result == {"result": "Hello World"} - - -def test_jinja2_get_runner_script(): - """Test that runner script contains required placeholders.""" - runner_script = Jinja2TemplateTransformer.get_runner_script() - assert runner_script.count(Jinja2TemplateTransformer._template_b64_placeholder) == 1 - assert runner_script.count(Jinja2TemplateTransformer._inputs_placeholder) == 1 - assert runner_script.count(Jinja2TemplateTransformer._result_tag) == 2 - - -def test_jinja2_template_with_special_characters(): - """ - Test that templates with special characters (quotes, newlines) render correctly. - This is a regression test for issue #26818 where textarea pre-fill values - containing special characters would break template rendering. - """ - # Template with triple quotes, single quotes, double quotes, and newlines - template = """ - - - -

Status: "{{ status }}"

-
'''code block'''
- -""" - inputs = {"task": {"Task ID": "TASK-123", "Issues": "Line 1\nLine 2\nLine 3"}, "status": "completed"} - - result = CodeExecutor.execute_workflow_code_template(language=CODE_LANGUAGE, code=template, inputs=inputs) - - # Verify the template rendered correctly with all special characters - output = result["result"] - assert 'value="TASK-123"' in output - assert "" in output - assert 'Status: "completed"' in output - assert "'''code block'''" in output - - -def test_jinja2_template_with_html_textarea_prefill(): - """ - Specific test for HTML textarea with Jinja2 variable pre-fill. - Verifies fix for issue #26818. - """ - template = "" - notes_content = "This is a multi-line note.\nWith special chars: 'single' and \"double\" quotes." - inputs = {"notes": notes_content} - - result = CodeExecutor.execute_workflow_code_template(language=CODE_LANGUAGE, code=template, inputs=inputs) - - expected_output = f"" - assert result["result"] == expected_output - - -def test_jinja2_assemble_runner_script_encodes_template(): - """Test that assemble_runner_script properly base64 encodes the template.""" - template = "Hello {{ name }}!" - inputs = {"name": "World"} - - script = Jinja2TemplateTransformer.assemble_runner_script(template, inputs) - - # The template should be base64 encoded in the script - template_b64 = base64.b64encode(template.encode("utf-8")).decode("utf-8") - assert template_b64 in script - # The raw template should NOT appear in the script (it's encoded) - assert "Hello {{ name }}!" not in script diff --git a/api/tests/integration_tests/workflow/nodes/code_executor/test_code_python3.py b/api/tests/integration_tests/workflow/nodes/code_executor/test_code_python3.py deleted file mode 100644 index 25af312afa..0000000000 --- a/api/tests/integration_tests/workflow/nodes/code_executor/test_code_python3.py +++ /dev/null @@ -1,36 +0,0 @@ -from textwrap import dedent - -from core.helper.code_executor.code_executor import CodeExecutor, CodeLanguage -from core.helper.code_executor.python3.python3_code_provider import Python3CodeProvider -from core.helper.code_executor.python3.python3_transformer import Python3TemplateTransformer - -CODE_LANGUAGE = CodeLanguage.PYTHON3 - - -def test_python3_plain(): - code = 'print("Hello World")' - result = CodeExecutor.execute_code(language=CODE_LANGUAGE, preload="", code=code) - assert result == "Hello World\n" - - -def test_python3_json(): - code = dedent(""" - import json - print(json.dumps({'Hello': 'World'})) - """) - result = CodeExecutor.execute_code(language=CODE_LANGUAGE, preload="", code=code) - assert result == '{"Hello": "World"}\n' - - -def test_python3_with_code_template(): - result = CodeExecutor.execute_workflow_code_template( - language=CODE_LANGUAGE, code=Python3CodeProvider.get_default_code(), inputs={"arg1": "Hello", "arg2": "World"} - ) - assert result == {"result": "HelloWorld"} - - -def test_python3_get_runner_script(): - runner_script = Python3TemplateTransformer.get_runner_script() - assert runner_script.count(Python3TemplateTransformer._code_placeholder) == 1 - assert runner_script.count(Python3TemplateTransformer._inputs_placeholder) == 1 - assert runner_script.count(Python3TemplateTransformer._result_tag) == 2 diff --git a/api/tests/integration_tests/workflow/nodes/test_code.py b/api/tests/integration_tests/workflow/nodes/test_code.py index 4f41396c22..e3476c292b 100644 --- a/api/tests/integration_tests/workflow/nodes/test_code.py +++ b/api/tests/integration_tests/workflow/nodes/test_code.py @@ -2,17 +2,17 @@ import time import uuid 
import pytest + +from configs import dify_config +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from core.workflow.node_factory import DifyNodeFactory +from core.workflow.system_variables import build_system_variables from graphon.enums import WorkflowNodeExecutionStatus from graphon.graph import Graph from graphon.node_events import NodeRunResult from graphon.nodes.code.code_node import CodeNode from graphon.nodes.code.limits import CodeNodeLimits from graphon.runtime import GraphRuntimeState, VariablePool - -from configs import dify_config -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom -from core.workflow.node_factory import DifyNodeFactory -from core.workflow.system_variables import build_system_variables from tests.workflow_test_utils import build_test_graph_init_params pytest_plugins = ("tests.integration_tests.workflow.nodes.__mock.code_executor",) diff --git a/api/tests/integration_tests/workflow/nodes/test_http.py b/api/tests/integration_tests/workflow/nodes/test_http.py index b1f937e738..aa6cf1e021 100644 --- a/api/tests/integration_tests/workflow/nodes/test_http.py +++ b/api/tests/integration_tests/workflow/nodes/test_http.py @@ -3,11 +3,6 @@ import uuid from urllib.parse import urlencode import pytest -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.file.file_manager import file_manager -from graphon.graph import Graph -from graphon.nodes.http_request import HttpRequestNode, HttpRequestNodeConfig -from graphon.runtime import GraphRuntimeState, VariablePool from configs import dify_config from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom @@ -16,6 +11,11 @@ from core.tools.tool_file_manager import ToolFileManager from core.workflow.node_factory import DifyNodeFactory from core.workflow.node_runtime import DifyFileReferenceFactory from core.workflow.system_variables import build_system_variables +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.file.file_manager import file_manager +from graphon.graph import Graph +from graphon.nodes.http_request import HttpRequestNode, HttpRequestNodeConfig +from graphon.runtime import GraphRuntimeState, VariablePool from tests.workflow_test_utils import build_test_graph_init_params pytest_plugins = ("tests.integration_tests.workflow.nodes.__mock.http",) @@ -192,6 +192,7 @@ def test_custom_authorization_header(setup_http_mock): @pytest.mark.parametrize("setup_http_mock", [["none"]], indirect=True) def test_custom_auth_with_empty_api_key_raises_error(setup_http_mock): """Test: In custom authentication mode, when the api_key is empty, AuthorizationConfigError should be raised.""" + from core.workflow.system_variables import build_system_variables from graphon.enums import BuiltinNodeTypes from graphon.nodes.http_request.entities import ( HttpRequestNodeAuthorization, @@ -202,8 +203,6 @@ def test_custom_auth_with_empty_api_key_raises_error(setup_http_mock): from graphon.nodes.http_request.executor import Executor from graphon.runtime import VariablePool - from core.workflow.system_variables import build_system_variables - # Create variable pool variable_pool = VariablePool( system_variables=build_system_variables(user_id="test", files=[]), diff --git a/api/tests/integration_tests/workflow/nodes/test_llm.py b/api/tests/integration_tests/workflow/nodes/test_llm.py index f0f3fcead1..fa5d63cfbf 100644 --- a/api/tests/integration_tests/workflow/nodes/test_llm.py +++ b/api/tests/integration_tests/workflow/nodes/test_llm.py @@ -4,6 +4,11 @@ import 
uuid from collections.abc import Generator from unittest.mock import MagicMock, patch +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from core.llm_generator.output_parser.structured_output import _parse_structured_output +from core.model_manager import ModelInstance +from core.workflow.system_variables import build_system_variables +from extensions.ext_database import db from graphon.enums import WorkflowNodeExecutionStatus from graphon.node_events import StreamCompletedEvent from graphon.nodes.llm.file_saver import LLMFileSaver @@ -12,12 +17,6 @@ from graphon.nodes.llm.protocols import CredentialsProvider, ModelFactory from graphon.nodes.llm.runtime_protocols import PromptMessageSerializerProtocol from graphon.nodes.protocols import HttpClientProtocol from graphon.runtime import GraphRuntimeState, VariablePool - -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom -from core.llm_generator.output_parser.structured_output import _parse_structured_output -from core.model_manager import ModelInstance -from core.workflow.system_variables import build_system_variables -from extensions.ext_database import db from tests.workflow_test_utils import build_test_graph_init_params """FOR MOCK FIXTURES, DO NOT REMOVE""" diff --git a/api/tests/integration_tests/workflow/nodes/test_parameter_extractor.py b/api/tests/integration_tests/workflow/nodes/test_parameter_extractor.py index fe512c2585..52886855b8 100644 --- a/api/tests/integration_tests/workflow/nodes/test_parameter_extractor.py +++ b/api/tests/integration_tests/workflow/nodes/test_parameter_extractor.py @@ -3,17 +3,16 @@ import time import uuid from unittest.mock import MagicMock -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.model_runtime.entities import AssistantPromptMessage, UserPromptMessage -from graphon.nodes.llm.protocols import CredentialsProvider, ModelFactory -from graphon.nodes.parameter_extractor.parameter_extractor_node import ParameterExtractorNode -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.model_manager import ModelInstance from core.workflow.node_runtime import DifyPromptMessageSerializer from core.workflow.system_variables import build_system_variables from extensions.ext_database import db +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.model_runtime.entities import AssistantPromptMessage, UserPromptMessage +from graphon.nodes.llm.protocols import CredentialsProvider, ModelFactory +from graphon.nodes.parameter_extractor.parameter_extractor_node import ParameterExtractorNode +from graphon.runtime import GraphRuntimeState, VariablePool from tests.integration_tests.workflow.nodes.__mock.model import get_mocked_fetch_model_instance from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/integration_tests/workflow/nodes/test_template_transform.py b/api/tests/integration_tests/workflow/nodes/test_template_transform.py index 2d728569be..9e3e1a47e3 100644 --- a/api/tests/integration_tests/workflow/nodes/test_template_transform.py +++ b/api/tests/integration_tests/workflow/nodes/test_template_transform.py @@ -1,15 +1,14 @@ import time import uuid +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from core.workflow.node_factory import DifyNodeFactory +from core.workflow.system_variables import build_system_variables from graphon.enums import WorkflowNodeExecutionStatus from graphon.graph import 
Graph from graphon.nodes.template_transform.template_transform_node import TemplateTransformNode from graphon.runtime import GraphRuntimeState, VariablePool from graphon.template_rendering import TemplateRenderError - -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom -from core.workflow.node_factory import DifyNodeFactory -from core.workflow.system_variables import build_system_variables from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/integration_tests/workflow/nodes/test_tool.py b/api/tests/integration_tests/workflow/nodes/test_tool.py index 750ced7075..f9ec51ee10 100644 --- a/api/tests/integration_tests/workflow/nodes/test_tool.py +++ b/api/tests/integration_tests/workflow/nodes/test_tool.py @@ -2,18 +2,17 @@ import time import uuid from unittest.mock import MagicMock, patch -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.graph import Graph -from graphon.node_events import StreamCompletedEvent -from graphon.nodes.protocols import ToolFileManagerProtocol -from graphon.nodes.tool.tool_node import ToolNode -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.tools.utils.configuration import ToolParameterConfigurationManager from core.workflow.node_factory import DifyNodeFactory from core.workflow.node_runtime import DifyToolNodeRuntime from core.workflow.system_variables import build_system_variables +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.graph import Graph +from graphon.node_events import StreamCompletedEvent +from graphon.nodes.protocols import ToolFileManagerProtocol +from graphon.nodes.tool.tool_node import ToolNode +from graphon.runtime import GraphRuntimeState, VariablePool from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/test_containers_integration_tests/conftest.py b/api/tests/test_containers_integration_tests/conftest.py index ef74893f07..66a25e5daf 100644 --- a/api/tests/test_containers_integration_tests/conftest.py +++ b/api/tests/test_containers_integration_tests/conftest.py @@ -369,7 +369,7 @@ def _create_app_with_containers() -> Flask: # Create and configure the Flask application logger.info("Initializing Flask application...") - app = create_app() + sio_app, app = create_app() logger.info("Flask application created successfully") # Initialize database schema diff --git a/api/tests/test_containers_integration_tests/controllers/console/app/test_app_apis.py b/api/tests/test_containers_integration_tests/controllers/console/app/test_app_apis.py index 54e0496dbd..15dec06311 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/app/test_app_apis.py +++ b/api/tests/test_containers_integration_tests/controllers/console/app/test_app_apis.py @@ -432,7 +432,7 @@ class TestWorkflowAppLogEndpoints: monkeypatch.setattr(workflow_app_log_module, "sessionmaker", DummySessionMaker) def fake_get_paginate(self, **_kwargs): - return {"items": [], "total": 0} + return {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []} monkeypatch.setattr( workflow_app_log_module.WorkflowAppService, @@ -443,7 +443,7 @@ class TestWorkflowAppLogEndpoints: with app.test_request_context("/?page=1&limit=20"): result = method(app_model=SimpleNamespace(id="app-1")) - assert result == {"items": [], "total": 0} + assert result == {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []} class TestWorkflowDraftVariableEndpoints: @@ 
-608,7 +608,8 @@ class TestWorkflowTriggerEndpoints: with app.test_request_context("/?node_id=node-1"): result = method(app_model=SimpleNamespace(id="app-1")) - assert result is trigger + assert isinstance(result, dict) + assert {"id", "webhook_id", "webhook_url", "webhook_debug_url", "node_id", "created_at"} <= set(result.keys()) class TestWrapsEndpoints: diff --git a/api/tests/test_containers_integration_tests/controllers/console/app/test_app_import_api.py b/api/tests/test_containers_integration_tests/controllers/console/app/test_app_import_api.py index d8c6821f8d..25d19cf35a 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/app/test_app_import_api.py +++ b/api/tests/test_containers_integration_tests/controllers/console/app/test_app_import_api.py @@ -96,6 +96,56 @@ class TestAppImportApi: assert status == 200 assert response["status"] == ImportStatus.COMPLETED + def test_import_post_commits_session_on_success(self, app, monkeypatch: pytest.MonkeyPatch) -> None: + api = app_import_module.AppImportApi() + method = _unwrap(api.post) + + _install_features(monkeypatch, enabled=False) + monkeypatch.setattr( + app_import_module.AppDslService, + "import_app", + lambda *_args, **_kwargs: _Result(ImportStatus.COMPLETED, app_id="app-123"), + ) + monkeypatch.setattr(app_import_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + + fake_session = MagicMock() + fake_session.__enter__.return_value = fake_session + fake_session.__exit__.return_value = None + monkeypatch.setattr(app_import_module, "Session", lambda *_args, **_kwargs: fake_session) + + with app.test_request_context("/console/api/apps/imports", method="POST", json={"mode": "yaml-content"}): + response, status = method() + + fake_session.commit.assert_called_once_with() + fake_session.rollback.assert_not_called() + assert status == 200 + assert response["status"] == ImportStatus.COMPLETED + + def test_import_post_rolls_back_session_on_failure(self, app, monkeypatch: pytest.MonkeyPatch) -> None: + api = app_import_module.AppImportApi() + method = _unwrap(api.post) + + _install_features(monkeypatch, enabled=False) + monkeypatch.setattr( + app_import_module.AppDslService, + "import_app", + lambda *_args, **_kwargs: _Result(ImportStatus.FAILED, app_id=None), + ) + monkeypatch.setattr(app_import_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + + fake_session = MagicMock() + fake_session.__enter__.return_value = fake_session + fake_session.__exit__.return_value = None + monkeypatch.setattr(app_import_module, "Session", lambda *_args, **_kwargs: fake_session) + + with app.test_request_context("/console/api/apps/imports", method="POST", json={"mode": "yaml-content"}): + response, status = method() + + fake_session.rollback.assert_called_once_with() + fake_session.commit.assert_not_called() + assert status == 400 + assert response["status"] == ImportStatus.FAILED + class TestAppImportConfirmApi: @pytest.fixture diff --git a/api/tests/test_containers_integration_tests/controllers/console/app/test_chat_conversation_status_count_api.py b/api/tests/test_containers_integration_tests/controllers/console/app/test_chat_conversation_status_count_api.py index 5cc458fe2e..5a22f81a69 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/app/test_chat_conversation_status_count_api.py +++ b/api/tests/test_containers_integration_tests/controllers/console/app/test_chat_conversation_status_count_api.py @@ -4,15 +4,15 @@ import json import uuid from 
flask.testing import FlaskClient -from graphon.enums import WorkflowExecutionStatus from sqlalchemy.orm import Session from configs import dify_config from constants import HEADER_NAME_CSRF_TOKEN +from graphon.enums import WorkflowExecutionStatus from libs.datetime_utils import naive_utc_now from libs.token import _real_cookie_name, generate_csrf_token from models import Account, DifySetup, Tenant, TenantAccountJoin -from models.account import AccountStatus, TenantAccountRole +from models.account import AccountStatus, TenantAccountRole, TenantStatus from models.enums import ConversationFromSource, CreatorUserRole from models.model import App, AppMode, Conversation, Message from models.workflow import WorkflowRun @@ -30,7 +30,7 @@ def _create_account_and_tenant(db_session: Session) -> tuple[Account, Tenant]: db_session.add(account) db_session.commit() - tenant = Tenant(name="Test Tenant", status="normal") + tenant = Tenant(name="Test Tenant", status=TenantStatus.NORMAL) db_session.add(tenant) db_session.commit() diff --git a/api/tests/test_containers_integration_tests/controllers/console/app/test_conversation_read_timestamp.py b/api/tests/test_containers_integration_tests/controllers/console/app/test_conversation_read_timestamp.py new file mode 100644 index 0000000000..fad0b8b10e --- /dev/null +++ b/api/tests/test_containers_integration_tests/controllers/console/app/test_conversation_read_timestamp.py @@ -0,0 +1,73 @@ +from datetime import datetime +from unittest.mock import patch + +import pytest +from sqlalchemy.orm import Session +from werkzeug.exceptions import NotFound + +from controllers.console.app.conversation import _get_conversation +from models.enums import ConversationFromSource +from models.model import AppMode, Conversation +from tests.test_containers_integration_tests.controllers.console.helpers import ( + create_console_account_and_tenant, + create_console_app, +) + + +def test_get_conversation_mark_read_keeps_updated_at_unchanged( + db_session_with_containers: Session, +): + account, tenant = create_console_account_and_tenant(db_session_with_containers) + app = create_console_app(db_session_with_containers, tenant.id, account.id, AppMode.CHAT) + + original_updated_at = datetime(2026, 2, 8, 0, 0, 0) + conversation = Conversation( + app_id=app.id, + name="read timestamp test", + inputs={}, + status="normal", + mode=AppMode.CHAT, + from_source=ConversationFromSource.CONSOLE, + from_account_id=account.id, + updated_at=original_updated_at, + ) + db_session_with_containers.add(conversation) + db_session_with_containers.commit() + + read_at = datetime(2026, 2, 9, 0, 0, 0) + + with ( + patch( + "controllers.console.app.conversation.current_account_with_tenant", + return_value=(account, tenant.id), + autospec=True, + ), + patch( + "controllers.console.app.conversation.naive_utc_now", + return_value=read_at, + autospec=True, + ), + ): + loaded = _get_conversation(app, conversation.id) + + db_session_with_containers.refresh(conversation) + + assert loaded.id == conversation.id + assert conversation.read_at == read_at + assert conversation.read_account_id == account.id + assert conversation.updated_at == original_updated_at + + +def test_get_conversation_raises_not_found_for_missing_conversation( + db_session_with_containers: Session, +): + account, tenant = create_console_account_and_tenant(db_session_with_containers) + app = create_console_app(db_session_with_containers, tenant.id, account.id, AppMode.CHAT) + + with patch( + 
"controllers.console.app.conversation.current_account_with_tenant", + return_value=(account, tenant.id), + autospec=True, + ): + with pytest.raises(NotFound): + _get_conversation(app, "00000000-0000-0000-0000-000000000000") diff --git a/api/tests/test_containers_integration_tests/controllers/console/app/test_workflow_draft_variable.py b/api/tests/test_containers_integration_tests/controllers/console/app/test_workflow_draft_variable.py index 8ddf867370..290be87697 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/app/test_workflow_draft_variable.py +++ b/api/tests/test_containers_integration_tests/controllers/console/app/test_workflow_draft_variable.py @@ -3,12 +3,12 @@ import uuid from flask.testing import FlaskClient -from graphon.variables.segments import StringSegment from sqlalchemy import select from sqlalchemy.orm import Session from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, ENVIRONMENT_VARIABLE_NODE_ID from factories.variable_factory import segment_to_variable +from graphon.variables.segments import StringSegment from models import Workflow from models.model import AppMode from models.workflow import WorkflowDraftVariable diff --git a/api/tests/test_containers_integration_tests/controllers/console/helpers.py b/api/tests/test_containers_integration_tests/controllers/console/helpers.py index 9e2084f393..a8ecf94da1 100644 --- a/api/tests/test_containers_integration_tests/controllers/console/helpers.py +++ b/api/tests/test_containers_integration_tests/controllers/console/helpers.py @@ -11,7 +11,7 @@ from constants import HEADER_NAME_CSRF_TOKEN from libs.datetime_utils import naive_utc_now from libs.token import _real_cookie_name, generate_csrf_token from models import Account, DifySetup, Tenant, TenantAccountJoin -from models.account import AccountStatus, TenantAccountRole +from models.account import AccountStatus, TenantAccountRole, TenantStatus from models.model import App, AppMode from services.account_service import AccountService @@ -37,7 +37,7 @@ def create_console_account_and_tenant(db_session: Session) -> tuple[Account, Ten db_session.add(account) db_session.commit() - tenant = Tenant(name="Test Tenant", status="normal") + tenant = Tenant(name="Test Tenant", status=TenantStatus.NORMAL) db_session.add(tenant) db_session.commit() diff --git a/api/tests/test_containers_integration_tests/controllers/service_api/test_site.py b/api/tests/test_containers_integration_tests/controllers/service_api/test_site.py new file mode 100644 index 0000000000..4e884626a7 --- /dev/null +++ b/api/tests/test_containers_integration_tests/controllers/service_api/test_site.py @@ -0,0 +1,110 @@ +""" +Testcontainers integration tests for Service API Site controller. 
+""" + +from __future__ import annotations + +import pytest +from flask import Flask +from sqlalchemy.orm import Session +from werkzeug.exceptions import Forbidden + +from controllers.service_api.app.site import AppSiteApi +from models.account import Tenant, TenantStatus +from models.model import App, AppMode, Site + + +@pytest.fixture +def app(flask_app_with_containers) -> Flask: + return flask_app_with_containers + + +def _unwrap(method): + fn = method + while hasattr(fn, "__wrapped__"): + fn = fn.__wrapped__ + return fn + + +def _create_tenant(db_session: Session, *, status: TenantStatus = TenantStatus.NORMAL) -> Tenant: + tenant = Tenant(name="service-api-site-tenant", status=status) + db_session.add(tenant) + db_session.commit() + return tenant + + +def _create_app(db_session: Session, tenant_id: str) -> App: + app_model = App( + tenant_id=tenant_id, + mode=AppMode.CHAT, + name="service-api-site-app", + enable_site=True, + enable_api=True, + status="normal", + ) + db_session.add(app_model) + db_session.commit() + return app_model + + +def _create_site(db_session: Session, app_id: str) -> Site: + site = Site( + app_id=app_id, + title="Service API Site", + icon_type="emoji", + icon="robot", + icon_background="#ffffff", + description="Service API test site", + default_language="en-US", + prompt_public=True, + show_workflow_steps=True, + customize_token_strategy="not_allow", + use_icon_as_answer_icon=False, + chat_color_theme="light", + chat_color_theme_inverted=False, + ) + db_session.add(site) + db_session.commit() + return site + + +class TestAppSiteApi: + def test_get_site_success(self, app: Flask, db_session_with_containers: Session) -> None: + tenant = _create_tenant(db_session_with_containers) + app_model = _create_app(db_session_with_containers, tenant.id) + _create_site(db_session_with_containers, app_model.id) + + with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test-token"}): + api = AppSiteApi() + response = _unwrap(api.get)(api, app_model=app_model) + + assert response["title"] == "Service API Site" + assert response["icon"] == "robot" + assert response["description"] == "Service API test site" + + def test_get_site_not_found(self, app: Flask, db_session_with_containers: Session) -> None: + tenant = _create_tenant(db_session_with_containers) + app_model = _create_app(db_session_with_containers, tenant.id) + + with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test-token"}): + api = AppSiteApi() + with pytest.raises(Forbidden): + _unwrap(api.get)(api, app_model=app_model) + + def test_get_site_tenant_archived(self, app: Flask, db_session_with_containers: Session) -> None: + tenant = _create_tenant(db_session_with_containers) + app_model = _create_app(db_session_with_containers, tenant.id) + _create_site(db_session_with_containers, app_model.id) + + archived_tenant = db_session_with_containers.get(Tenant, tenant.id) + assert archived_tenant is not None + archived_tenant.status = TenantStatus.ARCHIVE + db_session_with_containers.commit() + + app_model = db_session_with_containers.get(App, app_model.id) + assert app_model is not None + + with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test-token"}): + api = AppSiteApi() + with pytest.raises(Forbidden): + _unwrap(api.get)(api, app_model=app_model) diff --git a/api/tests/unit_tests/controllers/web/test_site.py b/api/tests/test_containers_integration_tests/controllers/web/test_site.py similarity index 51% rename from 
api/tests/unit_tests/controllers/web/test_site.py rename to api/tests/test_containers_integration_tests/controllers/web/test_site.py index 6e9d754c43..9adb26ff3d 100644 --- a/api/tests/unit_tests/controllers/web/test_site.py +++ b/api/tests/test_containers_integration_tests/controllers/web/test_site.py @@ -1,28 +1,48 @@ -"""Unit tests for controllers.web.site endpoints.""" +"""Testcontainers integration tests for controllers.web.site endpoints.""" from __future__ import annotations from types import SimpleNamespace -from unittest.mock import MagicMock, patch +from unittest.mock import patch import pytest from flask import Flask +from sqlalchemy.orm import Session from werkzeug.exceptions import Forbidden from controllers.web.site import AppSiteApi, AppSiteInfo +from models import Tenant, TenantStatus +from models.model import App, AppMode, CustomizeTokenStrategy, Site -def _tenant(*, status: str = "normal") -> SimpleNamespace: - return SimpleNamespace( - id="tenant-1", - status=status, - plan="basic", - custom_config_dict={"remove_webapp_brand": False, "replace_webapp_logo": False}, +@pytest.fixture +def app(flask_app_with_containers) -> Flask: + return flask_app_with_containers + + +def _create_tenant(db_session: Session, *, status: TenantStatus = TenantStatus.NORMAL) -> Tenant: + tenant = Tenant(name="test-tenant", status=status) + db_session.add(tenant) + db_session.commit() + return tenant + + +def _create_app(db_session: Session, tenant_id: str, *, enable_site: bool = True) -> App: + app_model = App( + tenant_id=tenant_id, + mode=AppMode.CHAT, + name="test-app", + enable_site=enable_site, + enable_api=True, ) + db_session.add(app_model) + db_session.commit() + return app_model -def _site() -> SimpleNamespace: - return SimpleNamespace( +def _create_site(db_session: Session, app_id: str) -> Site: + site = Site( + app_id=app_id, title="Site", icon_type="emoji", icon="robot", @@ -31,77 +51,64 @@ def _site() -> SimpleNamespace: default_language="en", chat_color_theme="light", chat_color_theme_inverted=False, - copyright=None, - privacy_policy=None, - custom_disclaimer=None, + customize_token_strategy=CustomizeTokenStrategy.NOT_ALLOW, + code=f"code-{app_id[-6:]}", prompt_public=False, show_workflow_steps=True, use_icon_as_answer_icon=False, ) + db_session.add(site) + db_session.commit() + return site -# --------------------------------------------------------------------------- -# AppSiteApi -# --------------------------------------------------------------------------- class TestAppSiteApi: @patch("controllers.web.site.FeatureService.get_features") - @patch("controllers.web.site.db") - def test_happy_path(self, mock_db: MagicMock, mock_features: MagicMock, app: Flask) -> None: + def test_happy_path(self, mock_features, app: Flask, db_session_with_containers: Session) -> None: app.config["RESTX_MASK_HEADER"] = "X-Fields" - mock_features.return_value = SimpleNamespace(can_replace_logo=False) - site_obj = _site() - mock_db.session.scalar.return_value = site_obj - tenant = _tenant() - app_model = SimpleNamespace(id="app-1", tenant_id="tenant-1", tenant=tenant, enable_site=True) + tenant = _create_tenant(db_session_with_containers) + app_model = _create_app(db_session_with_containers, tenant.id) + _create_site(db_session_with_containers, app_model.id) end_user = SimpleNamespace(id="eu-1") + mock_features.return_value = SimpleNamespace(can_replace_logo=False) with app.test_request_context("/site"): result = AppSiteApi().get(app_model, end_user) - # marshal_with serializes AppSiteInfo to a dict - 
assert result["app_id"] == "app-1" + assert result["app_id"] == app_model.id assert result["plan"] == "basic" assert result["enable_site"] is True - @patch("controllers.web.site.db") - def test_missing_site_raises_forbidden(self, mock_db: MagicMock, app: Flask) -> None: + def test_missing_site_raises_forbidden(self, app: Flask, db_session_with_containers: Session) -> None: app.config["RESTX_MASK_HEADER"] = "X-Fields" - mock_db.session.scalar.return_value = None - tenant = _tenant() - app_model = SimpleNamespace(id="app-1", tenant_id="tenant-1", tenant=tenant, enable_site=True) + tenant = _create_tenant(db_session_with_containers) + app_model = _create_app(db_session_with_containers, tenant.id) end_user = SimpleNamespace(id="eu-1") with app.test_request_context("/site"): with pytest.raises(Forbidden): AppSiteApi().get(app_model, end_user) - @patch("controllers.web.site.db") - def test_archived_tenant_raises_forbidden(self, mock_db: MagicMock, app: Flask) -> None: + @patch("controllers.web.site.FeatureService.get_features") + def test_archived_tenant_raises_forbidden( + self, mock_features, app: Flask, db_session_with_containers: Session + ) -> None: app.config["RESTX_MASK_HEADER"] = "X-Fields" - from models.account import TenantStatus - - mock_db.session.scalar.return_value = _site() - tenant = SimpleNamespace( - id="tenant-1", - status=TenantStatus.ARCHIVE, - plan="basic", - custom_config_dict={}, - ) - app_model = SimpleNamespace(id="app-1", tenant_id="tenant-1", tenant=tenant) + tenant = _create_tenant(db_session_with_containers, status=TenantStatus.ARCHIVE) + app_model = _create_app(db_session_with_containers, tenant.id) + _create_site(db_session_with_containers, app_model.id) end_user = SimpleNamespace(id="eu-1") + mock_features.return_value = SimpleNamespace(can_replace_logo=False) with app.test_request_context("/site"): with pytest.raises(Forbidden): AppSiteApi().get(app_model, end_user) -# --------------------------------------------------------------------------- -# AppSiteInfo -# --------------------------------------------------------------------------- class TestAppSiteInfo: def test_basic_fields(self) -> None: - tenant = _tenant() - site_obj = _site() + tenant = SimpleNamespace(id="tenant-1", plan="basic", custom_config_dict={}) + site_obj = SimpleNamespace() info = AppSiteInfo(tenant, SimpleNamespace(id="app-1", enable_site=True), site_obj, "eu-1", False) assert info.app_id == "app-1" @@ -118,7 +125,7 @@ class TestAppSiteInfo: plan="pro", custom_config_dict={"remove_webapp_brand": True, "replace_webapp_logo": True}, ) - site_obj = _site() + site_obj = SimpleNamespace() info = AppSiteInfo(tenant, SimpleNamespace(id="app-1", enable_site=True), site_obj, "eu-1", True) assert info.can_replace_logo is True diff --git a/api/tests/test_containers_integration_tests/core/app/layers/test_pause_state_persist_layer.py b/api/tests/test_containers_integration_tests/core/app/layers/test_pause_state_persist_layer.py index c9ee67863d..c342e8994b 100644 --- a/api/tests/test_containers_integration_tests/core/app/layers/test_pause_state_persist_layer.py +++ b/api/tests/test_containers_integration_tests/core/app/layers/test_pause_state_persist_layer.py @@ -22,13 +22,6 @@ import uuid from time import time import pytest -from graphon.entities.pause_reason import SchedulingPause -from graphon.enums import WorkflowExecutionStatus -from graphon.graph_engine.entities.commands import GraphEngineCommand -from graphon.graph_engine.layers.base import GraphEngineLayerNotInitializedError -from 
graphon.graph_events import GraphRunPausedEvent -from graphon.model_runtime.entities.llm_entities import LLMUsage -from graphon.runtime import GraphRuntimeState, ReadOnlyGraphRuntimeState, ReadOnlyGraphRuntimeStateWrapper, VariablePool from sqlalchemy import Engine, delete, select from sqlalchemy.orm import Session @@ -40,6 +33,13 @@ from core.app.layers.pause_state_persist_layer import ( ) from core.workflow.system_variables import build_system_variables from extensions.ext_storage import storage +from graphon.entities.pause_reason import SchedulingPause +from graphon.enums import WorkflowExecutionStatus +from graphon.graph_engine.entities.commands import GraphEngineCommand +from graphon.graph_engine.layers.base import GraphEngineLayerNotInitializedError +from graphon.graph_events import GraphRunPausedEvent +from graphon.model_runtime.entities.llm_entities import LLMUsage +from graphon.runtime import GraphRuntimeState, ReadOnlyGraphRuntimeState, ReadOnlyGraphRuntimeStateWrapper, VariablePool from libs.datetime_utils import naive_utc_now from models import Account from models import WorkflowPause as WorkflowPauseModel @@ -88,11 +88,11 @@ class TestPauseStatePersistenceLayerTestContainers: def setup_test_data(self, db_session_with_containers, file_service, workflow_run_service): """Set up test data for each test method using TestContainers.""" # Create test tenant and account - from models.account import Tenant, TenantAccountJoin, TenantAccountRole + from models.account import AccountStatus, Tenant, TenantAccountJoin, TenantAccountRole, TenantStatus tenant = Tenant( name="Test Tenant", - status="normal", + status=TenantStatus.NORMAL, ) db_session_with_containers.add(tenant) db_session_with_containers.commit() @@ -101,7 +101,7 @@ class TestPauseStatePersistenceLayerTestContainers: email="test@example.com", name="Test User", interface_language="en-US", - status="active", + status=AccountStatus.ACTIVE, ) db_session_with_containers.add(account) db_session_with_containers.commit() diff --git a/api/tests/test_containers_integration_tests/core/repositories/test_human_input_form_repository_impl.py b/api/tests/test_containers_integration_tests/core/repositories/test_human_input_form_repository_impl.py index 13caad799e..14d5740072 100644 --- a/api/tests/test_containers_integration_tests/core/repositories/test_human_input_form_repository_impl.py +++ b/api/tests/test_containers_integration_tests/core/repositories/test_human_input_form_repository_impl.py @@ -4,7 +4,6 @@ from __future__ import annotations from uuid import uuid4 -from graphon.nodes.human_input.entities import FormDefinition, HumanInputNodeData, UserAction from sqlalchemy import Engine, select from sqlalchemy.orm import Session @@ -18,7 +17,15 @@ from core.workflow.human_input_compat import ( MemberRecipient, WebAppDeliveryMethod, ) -from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole +from graphon.nodes.human_input.entities import FormDefinition, HumanInputNodeData, UserAction +from models.account import ( + Account, + AccountStatus, + Tenant, + TenantAccountJoin, + TenantAccountRole, + TenantStatus, +) from models.human_input import ( EmailExternalRecipientPayload, EmailMemberRecipientPayload, @@ -29,7 +36,7 @@ from models.human_input import ( def _create_tenant_with_members(session: Session, member_emails: list[str]) -> tuple[Tenant, list[Account]]: - tenant = Tenant(name="Test Tenant", status="normal") + tenant = Tenant(name="Test Tenant", status=TenantStatus.NORMAL) session.add(tenant) session.flush() @@ 
-39,7 +46,7 @@ def _create_tenant_with_members(session: Session, member_emails: list[str]) -> t email=email, name=f"Member {index}", interface_language="en-US", - status="active", + status=AccountStatus.ACTIVE, ) session.add(account) session.flush() diff --git a/api/tests/test_containers_integration_tests/core/workflow/test_human_input_resume_node_execution.py b/api/tests/test_containers_integration_tests/core/workflow/test_human_input_resume_node_execution.py index 0a9b476afc..da4f8847d6 100644 --- a/api/tests/test_containers_integration_tests/core/workflow/test_human_input_resume_node_execution.py +++ b/api/tests/test_containers_integration_tests/core/workflow/test_human_input_resume_node_execution.py @@ -4,6 +4,17 @@ from datetime import timedelta from unittest.mock import MagicMock import pytest +from sqlalchemy import delete, select +from sqlalchemy.orm import Session + +from core.app.app_config.entities import WorkflowUIBasedAppConfig +from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity +from core.app.workflow.layers import PersistenceWorkflowInfo, WorkflowPersistenceLayer +from core.repositories.human_input_repository import HumanInputFormEntity, HumanInputFormRepository +from core.repositories.sqlalchemy_workflow_execution_repository import SQLAlchemyWorkflowExecutionRepository +from core.repositories.sqlalchemy_workflow_node_execution_repository import SQLAlchemyWorkflowNodeExecutionRepository +from core.workflow.node_runtime import DifyHumanInputNodeRuntime +from core.workflow.system_variables import build_system_variables from graphon.enums import WorkflowType from graphon.graph import Graph from graphon.graph_engine import GraphEngine @@ -16,20 +27,9 @@ from graphon.nodes.human_input.human_input_node import HumanInputNode from graphon.nodes.start.entities import StartNodeData from graphon.nodes.start.start_node import StartNode from graphon.runtime import GraphRuntimeState, VariablePool -from sqlalchemy import delete, select -from sqlalchemy.orm import Session - -from core.app.app_config.entities import WorkflowUIBasedAppConfig -from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity -from core.app.workflow.layers import PersistenceWorkflowInfo, WorkflowPersistenceLayer -from core.repositories.human_input_repository import HumanInputFormEntity, HumanInputFormRepository -from core.repositories.sqlalchemy_workflow_execution_repository import SQLAlchemyWorkflowExecutionRepository -from core.repositories.sqlalchemy_workflow_node_execution_repository import SQLAlchemyWorkflowNodeExecutionRepository -from core.workflow.node_runtime import DifyHumanInputNodeRuntime -from core.workflow.system_variables import build_system_variables from libs.datetime_utils import naive_utc_now from models import Account -from models.account import Tenant, TenantAccountJoin, TenantAccountRole +from models.account import AccountStatus, Tenant, TenantAccountJoin, TenantAccountRole, TenantStatus from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom from models.model import App, AppMode, IconType from models.workflow import Workflow, WorkflowNodeExecutionModel, WorkflowNodeExecutionTriggeredFrom, WorkflowRun @@ -175,7 +175,7 @@ class TestHumanInputResumeNodeExecutionIntegration: def setup_test_data(self, db_session_with_containers: Session): tenant = Tenant( name="Test Tenant", - status="normal", + status=TenantStatus.NORMAL, ) db_session_with_containers.add(tenant) db_session_with_containers.commit() @@ -184,7 +184,7 @@ class 
TestHumanInputResumeNodeExecutionIntegration: email="test@example.com", name="Test User", interface_language="en-US", - status="active", + status=AccountStatus.ACTIVE, ) db_session_with_containers.add(account) db_session_with_containers.commit() diff --git a/api/tests/test_containers_integration_tests/factories/test_storage_key_loader.py b/api/tests/test_containers_integration_tests/factories/test_storage_key_loader.py index cc72dc1cf3..2e207ddc67 100644 --- a/api/tests/test_containers_integration_tests/factories/test_storage_key_loader.py +++ b/api/tests/test_containers_integration_tests/factories/test_storage_key_loader.py @@ -4,13 +4,13 @@ from unittest.mock import patch from uuid import uuid4 import pytest -from graphon.file import File, FileTransferMethod, FileType from sqlalchemy.orm import Session from core.app.file_access import DatabaseFileAccessController from extensions.ext_database import db from extensions.storage.storage_type import StorageType from factories.file_factory import StorageKeyLoader +from graphon.file import File, FileTransferMethod, FileType from models import ToolFile, UploadFile from models.enums import CreatorUserRole diff --git a/api/tests/test_containers_integration_tests/helpers/execution_extra_content.py b/api/tests/test_containers_integration_tests/helpers/execution_extra_content.py index b745aed141..2fd289dfbc 100644 --- a/api/tests/test_containers_integration_tests/helpers/execution_extra_content.py +++ b/api/tests/test_containers_integration_tests/helpers/execution_extra_content.py @@ -6,7 +6,6 @@ from decimal import Decimal from uuid import uuid4 from graphon.nodes.human_input.entities import FormDefinition, UserAction - from libs.datetime_utils import naive_utc_now from models.account import Account, Tenant, TenantAccountJoin from models.enums import ConversationFromSource, InvokeFrom diff --git a/api/tests/test_containers_integration_tests/models/test_conversation_message_inputs.py b/api/tests/test_containers_integration_tests/models/test_conversation_message_inputs.py index e922c19a5a..f10f519e25 100644 --- a/api/tests/test_containers_integration_tests/models/test_conversation_message_inputs.py +++ b/api/tests/test_containers_integration_tests/models/test_conversation_message_inputs.py @@ -10,10 +10,10 @@ from unittest.mock import patch from uuid import uuid4 import pytest -from graphon.file import FILE_MODEL_IDENTITY, FileTransferMethod from sqlalchemy.orm import Session from core.workflow.file_reference import build_file_reference +from graphon.file import FILE_MODEL_IDENTITY, FileTransferMethod from models.model import App, AppMode, Conversation, Message diff --git a/api/tests/test_containers_integration_tests/models/test_conversation_status_count.py b/api/tests/test_containers_integration_tests/models/test_conversation_status_count.py index 4ca87de52d..6352f815df 100644 --- a/api/tests/test_containers_integration_tests/models/test_conversation_status_count.py +++ b/api/tests/test_containers_integration_tests/models/test_conversation_status_count.py @@ -9,9 +9,9 @@ from collections.abc import Generator from uuid import uuid4 import pytest -from graphon.enums import WorkflowExecutionStatus from sqlalchemy.orm import Session +from graphon.enums import WorkflowExecutionStatus from models.enums import ConversationFromSource, InvokeFrom from models.model import App, AppMode, Conversation, Message, Site from models.workflow import Workflow, WorkflowRun, WorkflowRunTriggeredFrom, WorkflowType diff --git 
a/api/tests/test_containers_integration_tests/models/test_types_enum_text.py b/api/tests/test_containers_integration_tests/models/test_types_enum_text.py
index 957b7145d3..b325c97f7d 100644
--- a/api/tests/test_containers_integration_tests/models/test_types_enum_text.py
+++ b/api/tests/test_containers_integration_tests/models/test_types_enum_text.py
@@ -4,13 +4,13 @@ from typing import Any, NamedTuple
 
 import pytest
 import sqlalchemy as sa
-from graphon.model_runtime.entities.model_entities import ModelType
 from sqlalchemy import exc as sa_exc
 from sqlalchemy import insert, select
 from sqlalchemy.engine import Connection, Engine
 from sqlalchemy.orm import DeclarativeBase, Mapped, Session, mapped_column
 from sqlalchemy.sql.sqltypes import VARCHAR
 
+from graphon.model_runtime.entities.model_entities import ModelType
 from models.types import EnumText
 
 _USER_TABLE = "enum_text_users"
diff --git a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_node_execution_repository.py b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_node_execution_repository.py
index a68b3a08c7..641399c7f9 100644
--- a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_node_execution_repository.py
+++ b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_node_execution_repository.py
@@ -5,10 +5,10 @@ from __future__ import annotations
 from datetime import timedelta
 from uuid import uuid4
 
-from graphon.enums import WorkflowNodeExecutionStatus
 from sqlalchemy import Engine, delete
 from sqlalchemy.orm import Session, sessionmaker
 
+from graphon.enums import WorkflowNodeExecutionStatus
 from libs.datetime_utils import naive_utc_now
 from models.enums import CreatorUserRole
 from models.workflow import WorkflowNodeExecutionModel
diff --git a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_run_repository.py b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_run_repository.py
index 64c93ac07c..aebe87839c 100644
--- a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_run_repository.py
+++ b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_api_workflow_run_repository.py
@@ -8,15 +8,15 @@ from unittest.mock import Mock
 from uuid import uuid4
 
 import pytest
+from sqlalchemy import Engine, delete, select
+from sqlalchemy.orm import Session, sessionmaker
+
+from extensions.ext_storage import storage
 from graphon.entities import WorkflowExecution
 from graphon.entities.pause_reason import HumanInputRequired, PauseReasonType
 from graphon.enums import WorkflowExecutionStatus
 from graphon.nodes.human_input.entities import FormDefinition, FormInput, UserAction
 from graphon.nodes.human_input.enums import FormInputType, HumanInputFormStatus
-from sqlalchemy import Engine, delete, select
-from sqlalchemy.orm import Session, sessionmaker
-
-from extensions.ext_storage import storage
 from libs.datetime_utils import naive_utc_now
 from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom
 from models.human_input import (
diff --git a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_execution_extra_content_repository.py b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_execution_extra_content_repository.py
index 7f44eb6ca3..aaf9a85d60 100644
---
a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_execution_extra_content_repository.py +++ b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_execution_extra_content_repository.py @@ -12,11 +12,11 @@ from decimal import Decimal from uuid import uuid4 import pytest -from graphon.nodes.human_input.entities import FormDefinition, UserAction -from graphon.nodes.human_input.enums import HumanInputFormStatus from sqlalchemy import Engine, delete, select from sqlalchemy.orm import Session, sessionmaker +from graphon.nodes.human_input.entities import FormDefinition, UserAction +from graphon.nodes.human_input.enums import HumanInputFormStatus from libs.datetime_utils import naive_utc_now from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole from models.enums import ConversationFromSource, InvokeFrom diff --git a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_workflow_node_execution_repository.py b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_workflow_node_execution_repository.py index 22e0aa34ff..fa78f1c28b 100644 --- a/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_workflow_node_execution_repository.py +++ b/api/tests/test_containers_integration_tests/repositories/test_sqlalchemy_workflow_node_execution_repository.py @@ -7,6 +7,11 @@ from datetime import datetime from decimal import Decimal from uuid import uuid4 +from sqlalchemy import Engine +from sqlalchemy.orm import Session, sessionmaker + +from core.repositories import SQLAlchemyWorkflowNodeExecutionRepository +from core.repositories.factory import OrderConfig from graphon.entities import WorkflowNodeExecution from graphon.enums import ( BuiltinNodeTypes, @@ -14,11 +19,6 @@ from graphon.enums import ( WorkflowNodeExecutionStatus, ) from graphon.model_runtime.utils.encoders import jsonable_encoder -from sqlalchemy import Engine -from sqlalchemy.orm import Session, sessionmaker - -from core.repositories import SQLAlchemyWorkflowNodeExecutionRepository -from core.repositories.factory import OrderConfig from models.account import Account, Tenant from models.enums import CreatorUserRole from models.workflow import WorkflowNodeExecutionModel, WorkflowNodeExecutionTriggeredFrom diff --git a/api/tests/test_containers_integration_tests/repositories/test_workflow_run_repository.py b/api/tests/test_containers_integration_tests/repositories/test_workflow_run_repository.py index c5e9201ee3..d6f0657380 100644 --- a/api/tests/test_containers_integration_tests/repositories/test_workflow_run_repository.py +++ b/api/tests/test_containers_integration_tests/repositories/test_workflow_run_repository.py @@ -7,12 +7,12 @@ from datetime import timedelta from uuid import uuid4 import pytest -from graphon.entities import WorkflowExecution -from graphon.enums import WorkflowExecutionStatus from sqlalchemy import Engine, delete from sqlalchemy import exc as sa_exc from sqlalchemy.orm import Session, sessionmaker +from graphon.entities import WorkflowExecution +from graphon.enums import WorkflowExecutionStatus from libs.datetime_utils import naive_utc_now from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom from models.workflow import WorkflowRun, WorkflowType diff --git a/api/tests/test_containers_integration_tests/services/test_account_service.py b/api/tests/test_containers_integration_tests/services/test_account_service.py index cc9596d15f..9a53ff087c 100644 --- 
a/api/tests/test_containers_integration_tests/services/test_account_service.py +++ b/api/tests/test_containers_integration_tests/services/test_account_service.py @@ -9,7 +9,7 @@ from werkzeug.exceptions import Unauthorized from configs import dify_config from controllers.console.error import AccountNotFound, NotAllowedCreateWorkspace -from models import AccountStatus, TenantAccountJoin +from models import AccountStatus, TenantAccountJoin, TenantStatus from services.account_service import AccountService, RegisterService, TenantService, TokenPair from services.errors.account import ( AccountAlreadyInTenantError, @@ -2851,7 +2851,7 @@ class TestRegisterService: interface_language="en-US", password=existing_pending_member_password, ) - existing_account.status = "pending" + existing_account.status = AccountStatus.PENDING db_session_with_containers.commit() @@ -2941,7 +2941,7 @@ class TestRegisterService: interface_language="en-US", password=already_in_tenant_password, ) - existing_account.status = "active" + existing_account.status = AccountStatus.ACTIVE db_session_with_containers.commit() @@ -3331,7 +3331,7 @@ class TestRegisterService: TenantService.create_tenant_member(tenant, account, role="normal") # Change tenant status to non-normal - tenant.status = "archive" + tenant.status = TenantStatus.ARCHIVE db_session_with_containers.commit() diff --git a/api/tests/test_containers_integration_tests/services/test_agent_service.py b/api/tests/test_containers_integration_tests/services/test_agent_service.py index 4f3c0e4200..00a2f9a59f 100644 --- a/api/tests/test_containers_integration_tests/services/test_agent_service.py +++ b/api/tests/test_containers_integration_tests/services/test_agent_service.py @@ -842,7 +842,6 @@ class TestAgentService: conversation, message = self._create_test_conversation_and_message(db_session_with_containers, app, account) from graphon.file import FileTransferMethod, FileType - from models.enums import CreatorUserRole # Add files to message diff --git a/api/tests/test_containers_integration_tests/services/test_app_dsl_service.py b/api/tests/test_containers_integration_tests/services/test_app_dsl_service.py index 6c15587058..77ce28b999 100644 --- a/api/tests/test_containers_integration_tests/services/test_app_dsl_service.py +++ b/api/tests/test_containers_integration_tests/services/test_app_dsl_service.py @@ -9,7 +9,6 @@ from uuid import uuid4 import pytest import yaml from faker import Faker -from graphon.enums import BuiltinNodeTypes from core.trigger.constants import ( TRIGGER_PLUGIN_NODE_TYPE, @@ -17,6 +16,7 @@ from core.trigger.constants import ( TRIGGER_WEBHOOK_NODE_TYPE, ) from extensions.ext_redis import redis_client +from graphon.enums import BuiltinNodeTypes from models import Account, AppMode from models.model import AppModelConfig, IconType from services import app_dsl_service diff --git a/api/tests/test_containers_integration_tests/services/test_conversation_service_variables.py b/api/tests/test_containers_integration_tests/services/test_conversation_service_variables.py new file mode 100644 index 0000000000..0b7bd9ca64 --- /dev/null +++ b/api/tests/test_containers_integration_tests/services/test_conversation_service_variables.py @@ -0,0 +1,524 @@ +from __future__ import annotations + +from datetime import datetime, timedelta +from unittest.mock import patch +from uuid import uuid4 + +import pytest +from sqlalchemy.orm import sessionmaker + +from core.app.entities.app_invoke_entities import InvokeFrom +from extensions.ext_database import db +from 
graphon.variables import FloatVariable, IntegerVariable, StringVariable +from models.account import Account, Tenant, TenantAccountJoin +from models.enums import ConversationFromSource +from models.model import App, Conversation, EndUser +from models.workflow import ConversationVariable +from services.conversation_service import ConversationService +from services.errors.conversation import ( + ConversationVariableNotExistsError, + ConversationVariableTypeMismatchError, + LastConversationNotExistsError, +) + + +class ConversationServiceVariableIntegrationFactory: + @staticmethod + def create_app_and_account(db_session_with_containers): + tenant = Tenant(name=f"Tenant {uuid4()}") + db_session_with_containers.add(tenant) + db_session_with_containers.flush() + + account = Account( + name=f"Account {uuid4()}", + email=f"conversation-variable-{uuid4()}@example.com", + password="hashed-password", + password_salt="salt", + interface_language="en-US", + timezone="UTC", + ) + db_session_with_containers.add(account) + db_session_with_containers.flush() + + tenant_join = TenantAccountJoin( + tenant_id=tenant.id, + account_id=account.id, + role="owner", + current=True, + ) + db_session_with_containers.add(tenant_join) + db_session_with_containers.flush() + + app = App( + tenant_id=tenant.id, + name=f"App {uuid4()}", + description="", + mode="chat", + icon_type="emoji", + icon="bot", + icon_background="#FFFFFF", + enable_site=False, + enable_api=True, + api_rpm=100, + api_rph=100, + is_demo=False, + is_public=False, + is_universal=False, + created_by=account.id, + updated_by=account.id, + ) + db_session_with_containers.add(app) + db_session_with_containers.commit() + + return app, account + + @staticmethod + def create_end_user(db_session_with_containers, app: App): + end_user = EndUser( + tenant_id=app.tenant_id, + app_id=app.id, + type=InvokeFrom.SERVICE_API.value, + external_user_id=f"external-{uuid4()}", + name=f"End User {uuid4()}", + is_anonymous=False, + session_id=f"session-{uuid4()}", + ) + db_session_with_containers.add(end_user) + db_session_with_containers.commit() + return end_user + + @staticmethod + def create_conversation( + db_session_with_containers, + app: App, + user: Account | EndUser, + *, + name: str | None = None, + invoke_from: InvokeFrom = InvokeFrom.WEB_APP, + created_at: datetime | None = None, + updated_at: datetime | None = None, + ) -> Conversation: + conversation = Conversation( + app_id=app.id, + app_model_config_id=None, + model_provider=None, + model_id="", + override_model_configs=None, + mode=app.mode, + name=name or f"Conversation {uuid4()}", + summary="", + inputs={}, + introduction="", + system_instruction="", + system_instruction_tokens=0, + status="normal", + invoke_from=invoke_from.value, + from_source=ConversationFromSource.API if isinstance(user, EndUser) else ConversationFromSource.CONSOLE, + from_end_user_id=user.id if isinstance(user, EndUser) else None, + from_account_id=user.id if isinstance(user, Account) else None, + dialogue_count=0, + is_deleted=False, + ) + conversation.inputs = {} + if created_at is not None: + conversation.created_at = created_at + if updated_at is not None: + conversation.updated_at = updated_at + + db_session_with_containers.add(conversation) + db_session_with_containers.commit() + return conversation + + @staticmethod + def create_variable( + db_session_with_containers, + *, + app: App, + conversation: Conversation, + variable: StringVariable | FloatVariable | IntegerVariable, + created_at: datetime | None = None, + ) -> 
ConversationVariable: + row = ConversationVariable.from_variable(app_id=app.id, conversation_id=conversation.id, variable=variable) + if created_at is not None: + row.created_at = created_at + row.updated_at = created_at + + db_session_with_containers.add(row) + db_session_with_containers.commit() + return row + + +@pytest.fixture +def real_conversation_service_session_factory(flask_app_with_containers): + del flask_app_with_containers + real_session_maker = sessionmaker(bind=db.engine, expire_on_commit=False) + + with ( + patch("services.conversation_service.session_factory.create_session", side_effect=lambda: real_session_maker()), + patch("services.conversation_service.session_factory.get_session_maker", return_value=real_session_maker), + ): + yield + + +class TestConversationServiceVariables: + def test_get_conversational_variable_success( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + older_time = datetime(2024, 1, 1, 12, 0, 0) + newer_time = older_time + timedelta(minutes=5) + + first_variable = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name="topic", value="billing"), + created_at=older_time, + ) + second_variable = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name="priority", value="high"), + created_at=newer_time, + ) + + result = ConversationService.get_conversational_variable( + app_model=app, + conversation_id=conversation.id, + user=account, + limit=10, + last_id=None, + ) + + assert [item["id"] for item in result.data] == [first_variable.id, second_variable.id] + assert [item["name"] for item in result.data] == ["topic", "priority"] + assert result.limit == 10 + assert result.has_more is False + + def test_get_conversational_variable_with_last_id( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + base_time = datetime(2024, 1, 1, 9, 0, 0) + + first_variable = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name="topic", value="billing"), + created_at=base_time, + ) + second_variable = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name="priority", value="high"), + created_at=base_time + timedelta(minutes=1), + ) + third_variable = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name="owner", value="alice"), + created_at=base_time + timedelta(minutes=2), + ) + + result = ConversationService.get_conversational_variable( + app_model=app, + conversation_id=conversation.id, + user=account, + limit=10, + last_id=first_variable.id, + ) + + assert [item["id"] for item in result.data] == [second_variable.id, third_variable.id] + assert 
result.has_more is False + + def test_get_conversational_variable_last_id_not_found_raises_error( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + + with pytest.raises(ConversationVariableNotExistsError): + ConversationService.get_conversational_variable( + app_model=app, + conversation_id=conversation.id, + user=account, + limit=10, + last_id=str(uuid4()), + ) + + def test_get_conversational_variable_sets_has_more( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + + for index in range(3): + factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name=f"var_{index}", value=f"value_{index}"), + created_at=datetime(2024, 1, 1, 10, 0, index), + ) + + result = ConversationService.get_conversational_variable( + app_model=app, + conversation_id=conversation.id, + user=account, + limit=2, + last_id=None, + ) + + assert len(result.data) == 2 + assert result.has_more is True + + def test_update_conversation_variable_success( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + existing = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=StringVariable(id=str(uuid4()), name="topic", value="billing"), + ) + updated_at = datetime(2024, 1, 1, 15, 0, 0) + + with patch("services.conversation_service.naive_utc_now", return_value=updated_at): + result = ConversationService.update_conversation_variable( + app_model=app, + conversation_id=conversation.id, + variable_id=existing.id, + user=account, + new_value="support", + ) + + db_session_with_containers.expire_all() + persisted = db_session_with_containers.get(ConversationVariable, (existing.id, conversation.id)) + + assert persisted is not None + assert persisted.to_variable().value == "support" + assert result["id"] == existing.id + assert result["value"] == "support" + assert result["updated_at"] == updated_at + + def test_update_conversation_variable_not_found_raises_error( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + + with pytest.raises(ConversationVariableNotExistsError): + ConversationService.update_conversation_variable( + app_model=app, + conversation_id=conversation.id, + variable_id=str(uuid4()), + user=account, + new_value="support", + ) + + def test_update_conversation_variable_type_mismatch_raises_error( + self, 
db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + existing = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=FloatVariable(id=str(uuid4()), name="score", value=1.5), + ) + + with pytest.raises(ConversationVariableTypeMismatchError, match="expects float"): + ConversationService.update_conversation_variable( + app_model=app, + conversation_id=conversation.id, + variable_id=existing.id, + user=account, + new_value="wrong-type", + ) + + def test_update_conversation_variable_integer_number_compatibility( + self, db_session_with_containers, real_conversation_service_session_factory + ): + del real_conversation_service_session_factory + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + conversation = factory.create_conversation(db_session_with_containers, app, account) + existing = factory.create_variable( + db_session_with_containers, + app=app, + conversation=conversation, + variable=IntegerVariable(id=str(uuid4()), name="attempts", value=1), + ) + + result = ConversationService.update_conversation_variable( + app_model=app, + conversation_id=conversation.id, + variable_id=existing.id, + user=account, + new_value=42, + ) + + db_session_with_containers.expire_all() + persisted = db_session_with_containers.get(ConversationVariable, (existing.id, conversation.id)) + + assert persisted is not None + assert persisted.to_variable().value == 42 + assert result["value"] == 42 + + +class TestConversationServicePaginationWithContainers: + def test_pagination_by_last_id_raises_error_when_last_id_missing(self, db_session_with_containers): + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + + with pytest.raises(LastConversationNotExistsError): + ConversationService.pagination_by_last_id( + session=db_session_with_containers, + app_model=app, + user=account, + last_id=str(uuid4()), + limit=20, + invoke_from=InvokeFrom.WEB_APP, + ) + + def test_pagination_by_last_id_with_default_desc_updated_at(self, db_session_with_containers): + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + base_time = datetime(2024, 1, 1, 8, 0, 0) + newest = factory.create_conversation( + db_session_with_containers, + app, + account, + name="Newest", + updated_at=base_time + timedelta(minutes=2), + ) + middle = factory.create_conversation( + db_session_with_containers, + app, + account, + name="Middle", + updated_at=base_time + timedelta(minutes=1), + ) + oldest = factory.create_conversation( + db_session_with_containers, + app, + account, + name="Oldest", + updated_at=base_time, + ) + + result = ConversationService.pagination_by_last_id( + session=db_session_with_containers, + app_model=app, + user=account, + last_id=middle.id, + limit=10, + invoke_from=InvokeFrom.WEB_APP, + ) + + assert newest.id != middle.id + assert [conversation.id for conversation in result.data] == [oldest.id] + + def test_pagination_by_last_id_with_name_sort(self, db_session_with_containers): + factory = ConversationServiceVariableIntegrationFactory + app, account = 
factory.create_app_and_account(db_session_with_containers) + alpha = factory.create_conversation(db_session_with_containers, app, account, name="Alpha") + beta = factory.create_conversation(db_session_with_containers, app, account, name="Beta") + gamma = factory.create_conversation(db_session_with_containers, app, account, name="Gamma") + + result = ConversationService.pagination_by_last_id( + session=db_session_with_containers, + app_model=app, + user=account, + last_id=beta.id, + limit=10, + invoke_from=InvokeFrom.WEB_APP, + sort_by="name", + ) + + assert alpha.id != beta.id + assert [conversation.id for conversation in result.data] == [gamma.id] + + def test_pagination_filters_to_end_user_api_source(self, db_session_with_containers): + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + end_user = factory.create_end_user(db_session_with_containers, app) + account_conversation = factory.create_conversation( + db_session_with_containers, + app, + account, + name="Console Conversation", + invoke_from=InvokeFrom.WEB_APP, + ) + end_user_conversation = factory.create_conversation( + db_session_with_containers, + app, + end_user, + name="API Conversation", + invoke_from=InvokeFrom.SERVICE_API, + ) + + result = ConversationService.pagination_by_last_id( + session=db_session_with_containers, + app_model=app, + user=end_user, + last_id=None, + limit=20, + invoke_from=InvokeFrom.SERVICE_API, + ) + + assert account_conversation.id != end_user_conversation.id + assert [conversation.id for conversation in result.data] == [end_user_conversation.id] + + def test_pagination_filters_to_account_console_source(self, db_session_with_containers): + factory = ConversationServiceVariableIntegrationFactory + app, account = factory.create_app_and_account(db_session_with_containers) + end_user = factory.create_end_user(db_session_with_containers, app) + account_conversation = factory.create_conversation( + db_session_with_containers, + app, + account, + name="Console Conversation", + invoke_from=InvokeFrom.WEB_APP, + ) + factory.create_conversation( + db_session_with_containers, + app, + end_user, + name="API Conversation", + invoke_from=InvokeFrom.SERVICE_API, + ) + + result = ConversationService.pagination_by_last_id( + session=db_session_with_containers, + app_model=app, + user=account, + last_id=None, + limit=20, + invoke_from=InvokeFrom.WEB_APP, + ) + + assert [conversation.id for conversation in result.data] == [account_conversation.id] diff --git a/api/tests/test_containers_integration_tests/services/test_conversation_variable_updater.py b/api/tests/test_containers_integration_tests/services/test_conversation_variable_updater.py index fb0adbbcc2..02ab3f8314 100644 --- a/api/tests/test_containers_integration_tests/services/test_conversation_variable_updater.py +++ b/api/tests/test_containers_integration_tests/services/test_conversation_variable_updater.py @@ -3,10 +3,10 @@ from uuid import uuid4 import pytest -from graphon.variables import StringVariable from sqlalchemy.orm import sessionmaker from extensions.ext_database import db +from graphon.variables import StringVariable from models.workflow import ConversationVariable from services.conversation_variable_updater import ConversationVariableNotFoundError, ConversationVariableUpdater diff --git a/api/tests/test_containers_integration_tests/services/test_dataset_service.py b/api/tests/test_containers_integration_tests/services/test_dataset_service.py index 
f9bfa570cb..0de3c64c4f 100644 --- a/api/tests/test_containers_integration_tests/services/test_dataset_service.py +++ b/api/tests/test_containers_integration_tests/services/test_dataset_service.py @@ -9,11 +9,11 @@ from unittest.mock import Mock, patch from uuid import uuid4 import pytest -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType from core.rag.retrieval.retrieval_methods import RetrievalMethod +from graphon.model_runtime.entities.model_entities import ModelType from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole from models.dataset import Dataset, DatasetPermissionEnum, Document, ExternalKnowledgeBindings, Pipeline from models.enums import DatasetRuntimeMode, DataSourceType, DocumentCreatedFrom, IndexingStatus diff --git a/api/tests/test_containers_integration_tests/services/test_dataset_service_permissions.py b/api/tests/test_containers_integration_tests/services/test_dataset_service_permissions.py new file mode 100644 index 0000000000..1b4179c9c7 --- /dev/null +++ b/api/tests/test_containers_integration_tests/services/test_dataset_service_permissions.py @@ -0,0 +1,613 @@ +"""Testcontainers integration tests for DatasetService permission and lifecycle SQL paths.""" + +from datetime import datetime +from types import SimpleNamespace +from unittest.mock import patch +from uuid import uuid4 + +import pytest +from sqlalchemy.orm import Session +from werkzeug.exceptions import NotFound + +from core.rag.index_processor.constant.index_type import IndexTechniqueType +from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole +from models.dataset import ( + AppDatasetJoin, + Dataset, + DatasetAutoDisableLog, + DatasetCollectionBinding, + DatasetPermission, + DatasetPermissionEnum, +) +from models.enums import DataSourceType +from services.dataset_service import DatasetCollectionBindingService, DatasetPermissionService, DatasetService +from services.errors.account import NoPermissionError + + +class DatasetPermissionIntegrationFactory: + @staticmethod + def create_account_with_tenant( + db_session_with_containers: Session, + role: TenantAccountRole = TenantAccountRole.OWNER, + ) -> tuple[Account, Tenant]: + account = Account( + email=f"{uuid4()}@example.com", + name=f"user-{uuid4()}", + interface_language="en-US", + status="active", + ) + tenant = Tenant(name=f"tenant-{uuid4()}", status="normal") + db_session_with_containers.add_all([account, tenant]) + db_session_with_containers.flush() + + join = TenantAccountJoin( + tenant_id=tenant.id, + account_id=account.id, + role=role, + current=True, + ) + db_session_with_containers.add(join) + db_session_with_containers.commit() + + account.role = role + account._current_tenant = tenant + return account, tenant + + @staticmethod + def create_account_in_tenant( + db_session_with_containers: Session, + tenant: Tenant, + role: TenantAccountRole = TenantAccountRole.EDITOR, + ) -> Account: + account = Account( + email=f"{uuid4()}@example.com", + name=f"user-{uuid4()}", + interface_language="en-US", + status="active", + ) + db_session_with_containers.add(account) + db_session_with_containers.flush() + + join = TenantAccountJoin( + tenant_id=tenant.id, + account_id=account.id, + role=role, + current=True, + ) + db_session_with_containers.add(join) + db_session_with_containers.commit() + + account.role = role + account._current_tenant = tenant + return 
account + + @staticmethod + def create_dataset( + db_session_with_containers: Session, + *, + tenant_id: str, + created_by: str, + name: str | None = None, + permission: DatasetPermissionEnum = DatasetPermissionEnum.ONLY_ME, + indexing_technique: str | None = IndexTechniqueType.HIGH_QUALITY, + enable_api: bool = True, + ) -> Dataset: + dataset = Dataset( + tenant_id=tenant_id, + name=name or f"dataset-{uuid4()}", + description="desc", + data_source_type=DataSourceType.UPLOAD_FILE, + indexing_technique=indexing_technique, + created_by=created_by, + provider="vendor", + permission=permission, + retrieval_model={"top_k": 2}, + ) + dataset.enable_api = enable_api + db_session_with_containers.add(dataset) + db_session_with_containers.commit() + return dataset + + @staticmethod + def create_dataset_permission( + db_session_with_containers: Session, + *, + dataset_id: str, + tenant_id: str, + account_id: str, + ) -> DatasetPermission: + permission = DatasetPermission( + dataset_id=dataset_id, + tenant_id=tenant_id, + account_id=account_id, + has_permission=True, + ) + db_session_with_containers.add(permission) + db_session_with_containers.commit() + return permission + + @staticmethod + def create_app_dataset_join( + db_session_with_containers: Session, + *, + dataset_id: str, + ) -> AppDatasetJoin: + join = AppDatasetJoin( + app_id=str(uuid4()), + dataset_id=dataset_id, + ) + db_session_with_containers.add(join) + db_session_with_containers.commit() + return join + + @staticmethod + def create_collection_binding( + db_session_with_containers: Session, + *, + provider_name: str, + model_name: str, + collection_type: str = "dataset", + ) -> DatasetCollectionBinding: + binding = DatasetCollectionBinding( + provider_name=provider_name, + model_name=model_name, + collection_name=f"collection_{uuid4().hex}", + type=collection_type, + ) + db_session_with_containers.add(binding) + db_session_with_containers.commit() + return binding + + @staticmethod + def create_auto_disable_log( + db_session_with_containers: Session, + *, + tenant_id: str, + dataset_id: str, + document_id: str, + ) -> DatasetAutoDisableLog: + log = DatasetAutoDisableLog( + tenant_id=tenant_id, + dataset_id=dataset_id, + document_id=document_id, + ) + db_session_with_containers.add(log) + db_session_with_containers.commit() + return log + + +class TestDatasetServicePermissionsAndLifecycle: + def test_delete_dataset_returns_false_when_dataset_is_missing(self, db_session_with_containers: Session): + owner, _tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + + result = DatasetService.delete_dataset(str(uuid4()), user=owner) + + assert result is False + + def test_delete_dataset_checks_permission_and_deletes_dataset(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + ) + + with patch("services.dataset_service.dataset_was_deleted.send") as send_deleted_signal: + result = DatasetService.delete_dataset(dataset.id, user=owner) + + assert result is True + assert db_session_with_containers.get(Dataset, dataset.id) is None + send_deleted_signal.assert_called_once_with(dataset) + + def test_dataset_use_check_returns_true_when_join_exists(self, db_session_with_containers: Session): + owner, tenant = 
DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + ) + DatasetPermissionIntegrationFactory.create_app_dataset_join( + db_session_with_containers, + dataset_id=dataset.id, + ) + + assert DatasetService.dataset_use_check(dataset.id) is True + + def test_dataset_use_check_returns_false_when_join_missing(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + ) + + assert DatasetService.dataset_use_check(dataset.id) is False + + def test_check_dataset_permission_rejects_cross_tenant_access(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + outsider, _other_tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant( + db_session_with_containers + ) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + ) + + with pytest.raises(NoPermissionError, match="do not have permission"): + DatasetService.check_dataset_permission(dataset, outsider) + + def test_check_dataset_permission_rejects_only_me_dataset_for_non_creator( + self, db_session_with_containers: Session + ): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + member = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.ONLY_ME, + ) + + with pytest.raises(NoPermissionError, match="do not have permission"): + DatasetService.check_dataset_permission(dataset, member) + + def test_check_dataset_permission_rejects_partial_team_user_without_binding( + self, db_session_with_containers: Session + ): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + member = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + + with pytest.raises(NoPermissionError, match="do not have permission"): + DatasetService.check_dataset_permission(dataset, member) + + def test_check_dataset_permission_allows_partial_team_creator(self, db_session_with_containers: Session): + creator, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant( + db_session_with_containers, + role=TenantAccountRole.EDITOR, + ) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=creator.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + + DatasetService.check_dataset_permission(dataset, creator) + + def test_check_dataset_permission_allows_partial_team_member_with_binding( + self, db_session_with_containers: Session + ): + owner, tenant = 
DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + member = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + DatasetPermissionIntegrationFactory.create_dataset_permission( + db_session_with_containers, + dataset_id=dataset.id, + tenant_id=tenant.id, + account_id=member.id, + ) + + DatasetService.check_dataset_permission(dataset, member) + + def test_check_dataset_operator_permission_rejects_only_me_for_non_creator( + self, db_session_with_containers: Session + ): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + operator = DatasetPermissionIntegrationFactory.create_account_in_tenant( + db_session_with_containers, + tenant, + role=TenantAccountRole.EDITOR, + ) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.ONLY_ME, + ) + + with pytest.raises(NoPermissionError, match="do not have permission"): + DatasetService.check_dataset_operator_permission(user=operator, dataset=dataset) + + def test_check_dataset_operator_permission_rejects_partial_team_without_binding( + self, db_session_with_containers: Session + ): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + operator = DatasetPermissionIntegrationFactory.create_account_in_tenant( + db_session_with_containers, + tenant, + role=TenantAccountRole.EDITOR, + ) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + + with pytest.raises(NoPermissionError, match="do not have permission"): + DatasetService.check_dataset_operator_permission(user=operator, dataset=dataset) + + def test_check_dataset_operator_permission_allows_partial_team_with_binding( + self, db_session_with_containers: Session + ): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + operator = DatasetPermissionIntegrationFactory.create_account_in_tenant( + db_session_with_containers, + tenant, + role=TenantAccountRole.EDITOR, + ) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + DatasetPermissionIntegrationFactory.create_dataset_permission( + db_session_with_containers, + dataset_id=dataset.id, + tenant_id=tenant.id, + account_id=operator.id, + ) + + DatasetService.check_dataset_operator_permission(user=operator, dataset=dataset) + + def test_update_dataset_api_status_raises_not_found_for_missing_dataset(self, flask_app_with_containers): + with flask_app_with_containers.app_context(): + with pytest.raises(NotFound, match="Dataset not found"): + DatasetService.update_dataset_api_status(str(uuid4()), True) + + def test_update_dataset_api_status_requires_current_user_id(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + 
tenant_id=tenant.id, + created_by=owner.id, + enable_api=False, + ) + + with patch("services.dataset_service.current_user", SimpleNamespace(id=None)): + with pytest.raises(ValueError, match="Current user or current user id not found"): + DatasetService.update_dataset_api_status(dataset.id, True) + + def test_update_dataset_api_status_updates_fields_and_commits(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + enable_api=False, + ) + now = datetime(2026, 4, 14, 18, 0, 0) + + with ( + patch("services.dataset_service.current_user", owner), + patch("services.dataset_service.naive_utc_now", return_value=now), + ): + DatasetService.update_dataset_api_status(dataset.id, True) + + db_session_with_containers.refresh(dataset) + assert dataset.enable_api is True + assert dataset.updated_by == owner.id + assert dataset.updated_at == now + + def test_get_dataset_auto_disable_logs_returns_empty_when_billing_is_disabled( + self, db_session_with_containers: Session + ): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + features = SimpleNamespace( + billing=SimpleNamespace(enabled=False, subscription=SimpleNamespace(plan="professional")) + ) + + with ( + patch("services.dataset_service.current_user", owner), + patch("services.dataset_service.FeatureService.get_features", return_value=features), + ): + result = DatasetService.get_dataset_auto_disable_logs(str(uuid4())) + + assert result == {"document_ids": [], "count": 0} + + def test_get_dataset_auto_disable_logs_returns_recent_document_ids(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + ) + DatasetPermissionIntegrationFactory.create_auto_disable_log( + db_session_with_containers, + tenant_id=tenant.id, + dataset_id=dataset.id, + document_id=str(uuid4()), + ) + DatasetPermissionIntegrationFactory.create_auto_disable_log( + db_session_with_containers, + tenant_id=tenant.id, + dataset_id=dataset.id, + document_id=str(uuid4()), + ) + features = SimpleNamespace( + billing=SimpleNamespace(enabled=True, subscription=SimpleNamespace(plan="professional")) + ) + + with ( + patch("services.dataset_service.current_user", owner), + patch("services.dataset_service.FeatureService.get_features", return_value=features), + ): + result = DatasetService.get_dataset_auto_disable_logs(dataset.id) + + assert result["count"] == 2 + assert len(result["document_ids"]) == 2 + + +class TestDatasetCollectionBindingServiceIntegration: + def test_get_dataset_collection_binding_returns_existing_binding(self, db_session_with_containers: Session): + binding = DatasetPermissionIntegrationFactory.create_collection_binding( + db_session_with_containers, + provider_name="provider", + model_name="model", + ) + + result = DatasetCollectionBindingService.get_dataset_collection_binding("provider", "model") + + assert result.id == binding.id + + def test_get_dataset_collection_binding_creates_binding_when_missing(self, db_session_with_containers: Session): + result = DatasetCollectionBindingService.get_dataset_collection_binding("provider", 
"missing-model") + + persisted = db_session_with_containers.get(DatasetCollectionBinding, result.id) + assert persisted is not None + assert persisted.provider_name == "provider" + assert persisted.model_name == "missing-model" + assert persisted.type == "dataset" + assert persisted.collection_name + + def test_get_dataset_collection_binding_by_id_and_type_raises_when_missing(self, flask_app_with_containers): + with flask_app_with_containers.app_context(): + with pytest.raises(ValueError, match="Dataset collection binding not found"): + DatasetCollectionBindingService.get_dataset_collection_binding_by_id_and_type(str(uuid4())) + + def test_get_dataset_collection_binding_by_id_and_type_returns_binding(self, db_session_with_containers: Session): + binding = DatasetPermissionIntegrationFactory.create_collection_binding( + db_session_with_containers, + provider_name="provider", + model_name="model", + ) + + result = DatasetCollectionBindingService.get_dataset_collection_binding_by_id_and_type(binding.id) + + assert result.id == binding.id + + +class TestDatasetPermissionServiceIntegration: + def test_get_dataset_partial_member_list_returns_scalar_results(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + member_a = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + member_b = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + DatasetPermissionIntegrationFactory.create_dataset_permission( + db_session_with_containers, + dataset_id=dataset.id, + tenant_id=tenant.id, + account_id=member_a.id, + ) + DatasetPermissionIntegrationFactory.create_dataset_permission( + db_session_with_containers, + dataset_id=dataset.id, + tenant_id=tenant.id, + account_id=member_b.id, + ) + + result = DatasetPermissionService.get_dataset_partial_member_list(dataset.id) + + assert set(result) == {member_a.id, member_b.id} + + def test_update_partial_member_list_replaces_permissions_and_commits(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + member_a = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + member_b = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + stale_member = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + DatasetPermissionIntegrationFactory.create_dataset_permission( + db_session_with_containers, + dataset_id=dataset.id, + tenant_id=tenant.id, + account_id=stale_member.id, + ) + + DatasetPermissionService.update_partial_member_list( + tenant.id, + dataset.id, + [{"user_id": member_a.id}, {"user_id": member_b.id}], + ) + + permissions = db_session_with_containers.query(DatasetPermission).filter_by(dataset_id=dataset.id).all() + assert {permission.account_id for permission in permissions} == {member_a.id, member_b.id} + + def test_check_permission_requires_dataset_editor(self): + 
user = SimpleNamespace(is_dataset_editor=False, is_dataset_operator=False) + dataset = SimpleNamespace(id="dataset-1", permission=DatasetPermissionEnum.ALL_TEAM) + + with pytest.raises(NoPermissionError, match="does not have permission"): + DatasetPermissionService.check_permission(user, dataset, DatasetPermissionEnum.ALL_TEAM, []) + + def test_check_permission_prevents_dataset_operator_from_changing_permission_mode(self): + user = SimpleNamespace(is_dataset_editor=True, is_dataset_operator=True) + dataset = SimpleNamespace(id="dataset-1", permission=DatasetPermissionEnum.ALL_TEAM) + + with pytest.raises(NoPermissionError, match="cannot change the dataset permissions"): + DatasetPermissionService.check_permission(user, dataset, DatasetPermissionEnum.ONLY_ME, []) + + def test_check_permission_requires_partial_member_list_for_partial_members_mode(self): + user = SimpleNamespace(is_dataset_editor=True, is_dataset_operator=True) + dataset = SimpleNamespace(id="dataset-1", permission=DatasetPermissionEnum.PARTIAL_TEAM) + + with pytest.raises(ValueError, match="Partial member list is required"): + DatasetPermissionService.check_permission(user, dataset, DatasetPermissionEnum.PARTIAL_TEAM, []) + + def test_check_permission_rejects_dataset_operator_member_list_changes(self): + user = SimpleNamespace(is_dataset_editor=True, is_dataset_operator=True) + dataset = SimpleNamespace(id="dataset-1", permission=DatasetPermissionEnum.PARTIAL_TEAM) + + with patch.object(DatasetPermissionService, "get_dataset_partial_member_list", return_value=["user-1"]): + with pytest.raises(ValueError, match="cannot change the dataset permissions"): + DatasetPermissionService.check_permission( + user, + dataset, + DatasetPermissionEnum.PARTIAL_TEAM, + [{"user_id": "user-2"}], + ) + + def test_check_permission_allows_dataset_operator_when_member_list_is_unchanged(self): + user = SimpleNamespace(is_dataset_editor=True, is_dataset_operator=True) + dataset = SimpleNamespace(id="dataset-1", permission=DatasetPermissionEnum.PARTIAL_TEAM) + + with patch.object(DatasetPermissionService, "get_dataset_partial_member_list", return_value=["user-1"]): + DatasetPermissionService.check_permission( + user, + dataset, + DatasetPermissionEnum.PARTIAL_TEAM, + [{"user_id": "user-1"}], + ) + + def test_clear_partial_member_list_deletes_permissions_and_commits(self, db_session_with_containers: Session): + owner, tenant = DatasetPermissionIntegrationFactory.create_account_with_tenant(db_session_with_containers) + member = DatasetPermissionIntegrationFactory.create_account_in_tenant(db_session_with_containers, tenant) + dataset = DatasetPermissionIntegrationFactory.create_dataset( + db_session_with_containers, + tenant_id=tenant.id, + created_by=owner.id, + permission=DatasetPermissionEnum.PARTIAL_TEAM, + ) + DatasetPermissionIntegrationFactory.create_dataset_permission( + db_session_with_containers, + dataset_id=dataset.id, + tenant_id=tenant.id, + account_id=member.id, + ) + + DatasetPermissionService.clear_partial_member_list(dataset.id) + + remaining = db_session_with_containers.query(DatasetPermission).filter_by(dataset_id=dataset.id).all() + assert remaining == [] diff --git a/api/tests/test_containers_integration_tests/services/test_dataset_service_update_dataset.py b/api/tests/test_containers_integration_tests/services/test_dataset_service_update_dataset.py index 2a2d86a8a6..ac0483a45d 100644 --- a/api/tests/test_containers_integration_tests/services/test_dataset_service_update_dataset.py +++ 
b/api/tests/test_containers_integration_tests/services/test_dataset_service_update_dataset.py @@ -3,11 +3,18 @@ from unittest.mock import Mock, patch from uuid import uuid4 import pytest -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexTechniqueType -from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole +from graphon.model_runtime.entities.model_entities import ModelType +from models.account import ( + Account, + AccountStatus, + Tenant, + TenantAccountJoin, + TenantAccountRole, + TenantStatus, +) from models.dataset import Dataset, ExternalKnowledgeApis, ExternalKnowledgeBindings from models.enums import DataSourceType from services.dataset_service import DatasetService @@ -26,12 +33,12 @@ class DatasetUpdateTestDataFactory: email=f"{uuid4()}@example.com", name=f"user-{uuid4()}", interface_language="en-US", - status="active", + status=AccountStatus.ACTIVE, ) db_session_with_containers.add(account) db_session_with_containers.commit() - tenant = Tenant(name=f"tenant-{account.id}", status="normal") + tenant = Tenant(name=f"tenant-{account.id}", status=TenantStatus.NORMAL) db_session_with_containers.add(tenant) db_session_with_containers.commit() diff --git a/api/tests/test_containers_integration_tests/services/test_delete_archived_workflow_run.py b/api/tests/test_containers_integration_tests/services/test_delete_archived_workflow_run.py index c8f04e9215..fe426ae516 100644 --- a/api/tests/test_containers_integration_tests/services/test_delete_archived_workflow_run.py +++ b/api/tests/test_containers_integration_tests/services/test_delete_archived_workflow_run.py @@ -5,9 +5,9 @@ Testcontainers integration tests for archived workflow run deletion service. 
from datetime import UTC, datetime, timedelta from uuid import uuid4 -from graphon.enums import WorkflowExecutionStatus from sqlalchemy import select +from graphon.enums import WorkflowExecutionStatus from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom from models.workflow import WorkflowArchiveLog, WorkflowRun from services.retention.workflow_run.delete_archived_workflow_run import ArchivedWorkflowRunDeletion diff --git a/api/tests/test_containers_integration_tests/services/test_feature_service.py b/api/tests/test_containers_integration_tests/services/test_feature_service.py index b3e7dd2a59..315936d721 100644 --- a/api/tests/test_containers_integration_tests/services/test_feature_service.py +++ b/api/tests/test_containers_integration_tests/services/test_feature_service.py @@ -274,6 +274,7 @@ class TestFeatureService: mock_config.ENABLE_EMAIL_CODE_LOGIN = True mock_config.ENABLE_EMAIL_PASSWORD_LOGIN = True mock_config.ENABLE_SOCIAL_OAUTH_LOGIN = False + mock_config.ENABLE_COLLABORATION_MODE = True mock_config.ALLOW_REGISTER = False mock_config.ALLOW_CREATE_WORKSPACE = False mock_config.MAIL_TYPE = "smtp" @@ -298,6 +299,7 @@ class TestFeatureService: # Verify authentication settings assert result.enable_email_code_login is True assert result.enable_email_password_login is False + assert result.enable_collaboration_mode is True assert result.is_allow_register is False assert result.is_allow_create_workspace is False @@ -401,6 +403,7 @@ class TestFeatureService: mock_config.ENABLE_EMAIL_CODE_LOGIN = True mock_config.ENABLE_EMAIL_PASSWORD_LOGIN = True mock_config.ENABLE_SOCIAL_OAUTH_LOGIN = False + mock_config.ENABLE_COLLABORATION_MODE = False mock_config.ALLOW_REGISTER = True mock_config.ALLOW_CREATE_WORKSPACE = True mock_config.MAIL_TYPE = "smtp" @@ -422,6 +425,7 @@ class TestFeatureService: assert result.enable_email_code_login is True assert result.enable_email_password_login is True assert result.enable_social_oauth_login is False + assert result.enable_collaboration_mode is False assert result.is_allow_register is True assert result.is_allow_create_workspace is True assert result.is_email_setup is True diff --git a/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test.py b/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test.py index c46b8fba0b..18c5320d0a 100644 --- a/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test.py +++ b/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test.py @@ -3,8 +3,6 @@ import uuid from unittest.mock import MagicMock import pytest -from graphon.enums import BuiltinNodeTypes -from graphon.nodes.human_input.entities import HumanInputNodeData from core.workflow.human_input_compat import ( EmailDeliveryConfig, @@ -12,6 +10,8 @@ from core.workflow.human_input_compat import ( EmailRecipients, ExternalRecipient, ) +from graphon.enums import BuiltinNodeTypes +from graphon.nodes.human_input.entities import HumanInputNodeData from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole from models.model import App, AppMode from models.workflow import Workflow, WorkflowType diff --git a/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test_service.py b/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test_service.py index 0f252515f7..21a54e909e 100644 --- a/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test_service.py 
+++ b/api/tests/test_containers_integration_tests/services/test_human_input_delivery_test_service.py @@ -5,7 +5,6 @@ from unittest.mock import MagicMock, patch from uuid import uuid4 import pytest -from graphon.runtime import VariablePool from sqlalchemy.engine import Engine from configs import dify_config @@ -16,6 +15,7 @@ from core.workflow.human_input_compat import ( ExternalRecipient, MemberRecipient, ) +from graphon.runtime import VariablePool from models.account import Account, TenantAccountJoin from services import human_input_delivery_test_service as service_module from services.human_input_delivery_test_service import ( diff --git a/api/tests/test_containers_integration_tests/services/test_messages_clean_service.py b/api/tests/test_containers_integration_tests/services/test_messages_clean_service.py index 2340dd2a03..cd63d3ad6c 100644 --- a/api/tests/test_containers_integration_tests/services/test_messages_clean_service.py +++ b/api/tests/test_containers_integration_tests/services/test_messages_clean_service.py @@ -8,11 +8,11 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker -from graphon.file import FileType from sqlalchemy.orm import Session from enums.cloud_plan import CloudPlan from extensions.ext_redis import redis_client +from graphon.file import FileType from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole from models.enums import ( ConversationFromSource, diff --git a/api/tests/test_containers_integration_tests/services/test_model_provider_service.py b/api/tests/test_containers_integration_tests/services/test_model_provider_service.py index ba926bf675..8955a3b5f2 100644 --- a/api/tests/test_containers_integration_tests/services/test_model_provider_service.py +++ b/api/tests/test_containers_integration_tests/services/test_model_provider_service.py @@ -2,10 +2,10 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker -from graphon.model_runtime.entities.model_entities import FetchFrom, ModelType from sqlalchemy.orm import Session from core.entities.model_entities import ModelStatus +from graphon.model_runtime.entities.model_entities import FetchFrom, ModelType from models import Account, Tenant, TenantAccountJoin, TenantAccountRole from models.provider import Provider, ProviderModel, ProviderModelSetting, ProviderType from services.model_provider_service import ModelProviderService @@ -405,11 +405,10 @@ class TestModelProviderService: mock_provider_manager = mock_external_service_dependencies["provider_manager"].return_value # Create mock models + from core.entities.model_entities import ModelWithProviderEntity, SimpleModelProviderEntity from graphon.model_runtime.entities.common_entities import I18nObject from graphon.model_runtime.entities.provider_entities import ProviderEntity - from core.entities.model_entities import ModelWithProviderEntity, SimpleModelProviderEntity - # Create real model objects instead of mocks provider_entity_1 = SimpleModelProviderEntity( ProviderEntity( @@ -644,9 +643,8 @@ class TestModelProviderService: mock_provider_manager = mock_external_service_dependencies["provider_manager"].return_value # Create mock default model response - from graphon.model_runtime.entities.common_entities import I18nObject - from core.entities.model_entities import DefaultModelEntity, DefaultModelProviderEntity + from graphon.model_runtime.entities.common_entities import I18nObject mock_default_model = DefaultModelEntity( model="gpt-3.5-turbo", diff --git 
a/api/tests/test_containers_integration_tests/services/test_schedule_service.py b/api/tests/test_containers_integration_tests/services/test_schedule_service.py new file mode 100644 index 0000000000..87f3306258 --- /dev/null +++ b/api/tests/test_containers_integration_tests/services/test_schedule_service.py @@ -0,0 +1,387 @@ +"""Testcontainers integration tests for schedule service SQL-backed behavior.""" + +from datetime import datetime +from types import SimpleNamespace +from uuid import uuid4 + +import pytest +from sqlalchemy import delete, select +from sqlalchemy.orm import Session + +from core.workflow.nodes.trigger_schedule.entities import ScheduleConfig, SchedulePlanUpdate +from core.workflow.nodes.trigger_schedule.exc import ScheduleNotFoundError +from events.event_handlers.sync_workflow_schedule_when_app_published import sync_schedule_from_workflow +from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole +from models.trigger import WorkflowSchedulePlan +from services.errors.account import AccountNotFoundError +from services.trigger.schedule_service import ScheduleService + + +class ScheduleServiceIntegrationFactory: + @staticmethod + def create_account_with_tenant( + db_session_with_containers: Session, + role: TenantAccountRole = TenantAccountRole.OWNER, + ) -> tuple[Account, Tenant]: + account = Account( + email=f"{uuid4()}@example.com", + name=f"user-{uuid4()}", + interface_language="en-US", + status="active", + ) + tenant = Tenant(name=f"tenant-{uuid4()}", status="normal") + db_session_with_containers.add_all([account, tenant]) + db_session_with_containers.flush() + + join = TenantAccountJoin( + tenant_id=tenant.id, + account_id=account.id, + role=role, + current=True, + ) + db_session_with_containers.add(join) + db_session_with_containers.commit() + + account.current_tenant = tenant + return account, tenant + + @staticmethod + def create_schedule_plan( + db_session_with_containers: Session, + *, + tenant_id: str, + app_id: str | None = None, + node_id: str = "start", + cron_expression: str = "30 10 * * *", + timezone: str = "UTC", + next_run_at: datetime | None = None, + ) -> WorkflowSchedulePlan: + schedule = WorkflowSchedulePlan( + tenant_id=tenant_id, + app_id=app_id or str(uuid4()), + node_id=node_id, + cron_expression=cron_expression, + timezone=timezone, + next_run_at=next_run_at, + ) + db_session_with_containers.add(schedule) + db_session_with_containers.commit() + return schedule + + +def _cron_workflow( + *, + node_id: str = "start", + cron_expression: str = "30 10 * * *", + timezone: str = "UTC", +): + return SimpleNamespace( + graph_dict={ + "nodes": [ + { + "id": node_id, + "data": { + "type": "trigger-schedule", + "mode": "cron", + "cron_expression": cron_expression, + "timezone": timezone, + }, + } + ] + } + ) + + +def _no_schedule_workflow(): + return SimpleNamespace( + graph_dict={ + "nodes": [ + { + "id": "node-1", + "data": {"type": "llm"}, + } + ] + } + ) + + +class TestScheduleServiceIntegration: + def test_create_schedule_persists_schedule(self, db_session_with_containers: Session): + account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + expected_next_run = datetime(2026, 1, 1, 10, 30, 0) + config = ScheduleConfig( + node_id="start", + cron_expression="30 10 * * *", + timezone="UTC", + ) + + with pytest.MonkeyPatch.context() as monkeypatch: + monkeypatch.setattr( + "services.trigger.schedule_service.calculate_next_run_at", + lambda *_args, **_kwargs: expected_next_run, + ) + 
schedule = ScheduleService.create_schedule( + session=db_session_with_containers, + tenant_id=tenant.id, + app_id=str(uuid4()), + config=config, + ) + + persisted = db_session_with_containers.get(WorkflowSchedulePlan, schedule.id) + assert persisted is not None + assert persisted.tenant_id == tenant.id + assert persisted.node_id == "start" + assert persisted.cron_expression == "30 10 * * *" + assert persisted.timezone == "UTC" + assert persisted.next_run_at == expected_next_run + + def test_update_schedule_updates_fields_and_recomputes_next_run(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + schedule = ScheduleServiceIntegrationFactory.create_schedule_plan( + db_session_with_containers, + tenant_id=tenant.id, + cron_expression="30 10 * * *", + timezone="UTC", + ) + expected_next_run = datetime(2026, 1, 2, 12, 0, 0) + + with pytest.MonkeyPatch.context() as monkeypatch: + monkeypatch.setattr( + "services.trigger.schedule_service.calculate_next_run_at", + lambda *_args, **_kwargs: expected_next_run, + ) + updated = ScheduleService.update_schedule( + session=db_session_with_containers, + schedule_id=schedule.id, + updates=SchedulePlanUpdate( + cron_expression="0 12 * * *", + timezone="America/New_York", + ), + ) + + db_session_with_containers.refresh(updated) + assert updated.cron_expression == "0 12 * * *" + assert updated.timezone == "America/New_York" + assert updated.next_run_at == expected_next_run + + def test_update_schedule_updates_only_node_id_without_recomputing_time(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + initial_next_run = datetime(2026, 1, 1, 10, 0, 0) + schedule = ScheduleServiceIntegrationFactory.create_schedule_plan( + db_session_with_containers, + tenant_id=tenant.id, + next_run_at=initial_next_run, + ) + + with pytest.MonkeyPatch.context() as monkeypatch: + calls: list[tuple] = [] + + def _track(*args, **kwargs): + calls.append((args, kwargs)) + return datetime(2026, 1, 9, 10, 0, 0) + + monkeypatch.setattr("services.trigger.schedule_service.calculate_next_run_at", _track) + updated = ScheduleService.update_schedule( + session=db_session_with_containers, + schedule_id=schedule.id, + updates=SchedulePlanUpdate(node_id="node-new"), + ) + + db_session_with_containers.refresh(updated) + assert updated.node_id == "node-new" + assert updated.next_run_at == initial_next_run + assert calls == [] + + def test_update_schedule_not_found_raises(self, db_session_with_containers: Session): + with pytest.raises(ScheduleNotFoundError, match="Schedule not found"): + ScheduleService.update_schedule( + session=db_session_with_containers, + schedule_id=str(uuid4()), + updates=SchedulePlanUpdate(node_id="node-new"), + ) + + def test_delete_schedule_removes_row(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + schedule = ScheduleServiceIntegrationFactory.create_schedule_plan( + db_session_with_containers, + tenant_id=tenant.id, + ) + + ScheduleService.delete_schedule( + session=db_session_with_containers, + schedule_id=schedule.id, + ) + db_session_with_containers.commit() + + assert db_session_with_containers.get(WorkflowSchedulePlan, schedule.id) is None + + def test_delete_schedule_not_found_raises(self, db_session_with_containers: Session): + with 
pytest.raises(ScheduleNotFoundError, match="Schedule not found"): + ScheduleService.delete_schedule( + session=db_session_with_containers, + schedule_id=str(uuid4()), + ) + + def test_get_tenant_owner_returns_owner_account(self, db_session_with_containers: Session): + owner, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant( + db_session_with_containers, + role=TenantAccountRole.OWNER, + ) + + result = ScheduleService.get_tenant_owner( + session=db_session_with_containers, + tenant_id=tenant.id, + ) + + assert result.id == owner.id + + def test_get_tenant_owner_falls_back_to_admin(self, db_session_with_containers: Session): + admin, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant( + db_session_with_containers, + role=TenantAccountRole.ADMIN, + ) + + result = ScheduleService.get_tenant_owner( + session=db_session_with_containers, + tenant_id=tenant.id, + ) + + assert result.id == admin.id + + def test_get_tenant_owner_raises_when_account_record_missing(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + db_session_with_containers.execute(delete(TenantAccountJoin)) + missing_account_id = str(uuid4()) + join = TenantAccountJoin( + tenant_id=tenant.id, + account_id=missing_account_id, + role=TenantAccountRole.OWNER, + current=True, + ) + db_session_with_containers.add(join) + db_session_with_containers.commit() + + with pytest.raises(AccountNotFoundError, match=missing_account_id): + ScheduleService.get_tenant_owner(session=db_session_with_containers, tenant_id=tenant.id) + + def test_get_tenant_owner_raises_when_no_owner_or_admin_found(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.commit() + + with pytest.raises(AccountNotFoundError, match=tenant.id): + ScheduleService.get_tenant_owner(session=db_session_with_containers, tenant_id=tenant.id) + + def test_update_next_run_at_updates_persisted_value(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + schedule = ScheduleServiceIntegrationFactory.create_schedule_plan( + db_session_with_containers, + tenant_id=tenant.id, + ) + expected_next_run = datetime(2026, 1, 3, 10, 30, 0) + + with pytest.MonkeyPatch.context() as monkeypatch: + monkeypatch.setattr( + "services.trigger.schedule_service.calculate_next_run_at", + lambda *_args, **_kwargs: expected_next_run, + ) + result = ScheduleService.update_next_run_at( + session=db_session_with_containers, + schedule_id=schedule.id, + ) + + db_session_with_containers.refresh(schedule) + assert result == expected_next_run + assert schedule.next_run_at == expected_next_run + + def test_update_next_run_at_raises_when_schedule_not_found(self, db_session_with_containers: Session): + with pytest.raises(ScheduleNotFoundError, match="Schedule not found"): + ScheduleService.update_next_run_at( + session=db_session_with_containers, + schedule_id=str(uuid4()), + ) + + +class TestSyncScheduleFromWorkflowIntegration: + def test_sync_schedule_create_new(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + app_id = str(uuid4()) + expected_next_run = datetime(2026, 1, 4, 10, 30, 
0) + + with pytest.MonkeyPatch.context() as monkeypatch: + monkeypatch.setattr( + "services.trigger.schedule_service.calculate_next_run_at", + lambda *_args, **_kwargs: expected_next_run, + ) + result = sync_schedule_from_workflow( + tenant_id=tenant.id, + app_id=app_id, + workflow=_cron_workflow(), + ) + + assert result is not None + persisted = db_session_with_containers.execute( + select(WorkflowSchedulePlan).where(WorkflowSchedulePlan.app_id == app_id) + ).scalar_one() + assert persisted.node_id == "start" + assert persisted.cron_expression == "30 10 * * *" + assert persisted.timezone == "UTC" + assert persisted.next_run_at == expected_next_run + + def test_sync_schedule_update_existing(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + app_id = str(uuid4()) + existing = ScheduleServiceIntegrationFactory.create_schedule_plan( + db_session_with_containers, + tenant_id=tenant.id, + app_id=app_id, + node_id="old-start", + cron_expression="30 10 * * *", + timezone="UTC", + ) + existing_id = existing.id + expected_next_run = datetime(2026, 1, 5, 12, 0, 0) + + with pytest.MonkeyPatch.context() as monkeypatch: + monkeypatch.setattr( + "services.trigger.schedule_service.calculate_next_run_at", + lambda *_args, **_kwargs: expected_next_run, + ) + result = sync_schedule_from_workflow( + tenant_id=tenant.id, + app_id=app_id, + workflow=_cron_workflow( + node_id="start", + cron_expression="0 12 * * *", + timezone="America/New_York", + ), + ) + + assert result is not None + db_session_with_containers.expire_all() + persisted = db_session_with_containers.get(WorkflowSchedulePlan, existing_id) + assert persisted is not None + assert persisted.node_id == "start" + assert persisted.cron_expression == "0 12 * * *" + assert persisted.timezone == "America/New_York" + assert persisted.next_run_at == expected_next_run + + def test_sync_schedule_remove_when_no_config(self, db_session_with_containers: Session): + _account, tenant = ScheduleServiceIntegrationFactory.create_account_with_tenant(db_session_with_containers) + app_id = str(uuid4()) + existing = ScheduleServiceIntegrationFactory.create_schedule_plan( + db_session_with_containers, + tenant_id=tenant.id, + app_id=app_id, + ) + existing_id = existing.id + + result = sync_schedule_from_workflow( + tenant_id=tenant.id, + app_id=app_id, + workflow=_no_schedule_workflow(), + ) + + assert result is None + db_session_with_containers.expire_all() + assert db_session_with_containers.get(WorkflowSchedulePlan, existing_id) is None diff --git a/api/tests/test_containers_integration_tests/services/test_webhook_service_relationships.py b/api/tests/test_containers_integration_tests/services/test_webhook_service_relationships.py new file mode 100644 index 0000000000..ec10c51e04 --- /dev/null +++ b/api/tests/test_containers_integration_tests/services/test_webhook_service_relationships.py @@ -0,0 +1,507 @@ +from __future__ import annotations + +import json +from types import SimpleNamespace +from unittest.mock import MagicMock, patch +from uuid import uuid4 + +import pytest +from sqlalchemy import select +from sqlalchemy.orm import Session + +from core.trigger.constants import TRIGGER_WEBHOOK_NODE_TYPE +from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole +from models.enums import AppTriggerStatus, AppTriggerType +from models.model import App +from models.trigger import AppTrigger, WorkflowWebhookTrigger +from models.workflow import 
Workflow +from services.errors.app import QuotaExceededError +from services.trigger.webhook_service import WebhookService + + +class WebhookServiceRelationshipFactory: + @staticmethod + def create_account_and_tenant(db_session_with_containers: Session) -> tuple[Account, Tenant]: + account = Account( + name=f"Account {uuid4()}", + email=f"webhook-{uuid4()}@example.com", + password="hashed-password", + password_salt="salt", + interface_language="en-US", + timezone="UTC", + ) + db_session_with_containers.add(account) + db_session_with_containers.commit() + + tenant = Tenant(name=f"Tenant {uuid4()}", plan="basic", status="normal") + db_session_with_containers.add(tenant) + db_session_with_containers.commit() + + join = TenantAccountJoin( + tenant_id=tenant.id, + account_id=account.id, + role=TenantAccountRole.OWNER, + current=True, + ) + db_session_with_containers.add(join) + db_session_with_containers.commit() + + account.current_tenant = tenant + return account, tenant + + @staticmethod + def create_app(db_session_with_containers: Session, tenant: Tenant, account: Account) -> App: + app = App( + tenant_id=tenant.id, + name=f"Webhook App {uuid4()}", + description="", + mode="workflow", + icon_type="emoji", + icon="bot", + icon_background="#FFFFFF", + enable_site=False, + enable_api=True, + api_rpm=100, + api_rph=100, + is_demo=False, + is_public=False, + is_universal=False, + created_by=account.id, + updated_by=account.id, + ) + db_session_with_containers.add(app) + db_session_with_containers.commit() + return app + + @staticmethod + def create_workflow( + db_session_with_containers: Session, + *, + app: App, + account: Account, + node_ids: list[str], + version: str, + ) -> Workflow: + graph = { + "nodes": [ + { + "id": node_id, + "data": { + "type": TRIGGER_WEBHOOK_NODE_TYPE, + "title": f"Webhook {node_id}", + "method": "post", + "content_type": "application/json", + "headers": [], + "params": [], + "body": [], + "status_code": 200, + "response_body": '{"status": "ok"}', + "timeout": 30, + }, + } + for node_id in node_ids + ], + "edges": [], + } + + workflow = Workflow( + tenant_id=app.tenant_id, + app_id=app.id, + type="workflow", + graph=json.dumps(graph), + features=json.dumps({}), + created_by=account.id, + updated_by=account.id, + environment_variables=[], + conversation_variables=[], + version=version, + ) + db_session_with_containers.add(workflow) + db_session_with_containers.commit() + return workflow + + @staticmethod + def create_webhook_trigger( + db_session_with_containers: Session, + *, + app: App, + account: Account, + node_id: str, + webhook_id: str | None = None, + ) -> WorkflowWebhookTrigger: + webhook_trigger = WorkflowWebhookTrigger( + app_id=app.id, + node_id=node_id, + tenant_id=app.tenant_id, + webhook_id=webhook_id or uuid4().hex[:24], + created_by=account.id, + ) + db_session_with_containers.add(webhook_trigger) + db_session_with_containers.commit() + return webhook_trigger + + @staticmethod + def create_app_trigger( + db_session_with_containers: Session, + *, + app: App, + node_id: str, + status: AppTriggerStatus, + ) -> AppTrigger: + app_trigger = AppTrigger( + tenant_id=app.tenant_id, + app_id=app.id, + node_id=node_id, + trigger_type=AppTriggerType.TRIGGER_WEBHOOK, + provider_name="webhook", + title=f"Webhook {node_id}", + status=status, + ) + db_session_with_containers.add(app_trigger) + db_session_with_containers.commit() + return app_trigger + + +class TestWebhookServiceLookupWithContainers: + def 
test_get_webhook_trigger_and_workflow_raises_when_app_trigger_missing( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version="2026-04-14.001" + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + + with pytest.raises(ValueError, match="App trigger not found"): + WebhookService.get_webhook_trigger_and_workflow(webhook_trigger.webhook_id) + + def test_get_webhook_trigger_and_workflow_raises_when_app_trigger_rate_limited( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version="2026-04-14.001" + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + factory.create_app_trigger( + db_session_with_containers, app=app, node_id="node-1", status=AppTriggerStatus.RATE_LIMITED + ) + + with pytest.raises(ValueError, match="rate limited"): + WebhookService.get_webhook_trigger_and_workflow(webhook_trigger.webhook_id) + + def test_get_webhook_trigger_and_workflow_raises_when_app_trigger_disabled( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version="2026-04-14.001" + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + factory.create_app_trigger( + db_session_with_containers, app=app, node_id="node-1", status=AppTriggerStatus.DISABLED + ) + + with pytest.raises(ValueError, match="disabled"): + WebhookService.get_webhook_trigger_and_workflow(webhook_trigger.webhook_id) + + def test_get_webhook_trigger_and_workflow_raises_when_workflow_missing( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + factory.create_app_trigger( + db_session_with_containers, app=app, node_id="node-1", status=AppTriggerStatus.ENABLED + ) + + with pytest.raises(ValueError, match="Workflow not found"): + WebhookService.get_webhook_trigger_and_workflow(webhook_trigger.webhook_id) + + def test_get_webhook_trigger_and_workflow_returns_debug_draft_workflow( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del 
flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + factory.create_workflow( + db_session_with_containers, + app=app, + account=account, + node_ids=["published-node"], + version="2026-04-14.001", + ) + draft_workflow = factory.create_workflow( + db_session_with_containers, + app=app, + account=account, + node_ids=["debug-node"], + version=Workflow.VERSION_DRAFT, + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="debug-node" + ) + + got_trigger, got_workflow, got_node_config = WebhookService.get_webhook_trigger_and_workflow( + webhook_trigger.webhook_id, + is_debug=True, + ) + + assert got_trigger.id == webhook_trigger.id + assert got_workflow.id == draft_workflow.id + assert got_node_config["id"] == "debug-node" + + +class TestWebhookServiceTriggerExecutionWithContainers: + def test_trigger_workflow_execution_triggers_async_workflow_successfully( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + workflow = factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version="2026-04-14.001" + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + + end_user = SimpleNamespace(id=str(uuid4())) + webhook_data = {"body": {"value": 1}, "headers": {}, "query_params": {}, "files": {}, "method": "POST"} + + with ( + patch( + "services.trigger.webhook_service.EndUserService.get_or_create_end_user_by_type", + return_value=end_user, + ), + patch("services.trigger.webhook_service.QuotaType.TRIGGER.consume") as mock_consume, + patch("services.trigger.webhook_service.AsyncWorkflowService.trigger_workflow_async") as mock_trigger, + ): + WebhookService.trigger_workflow_execution(webhook_trigger, webhook_data, workflow) + + mock_consume.assert_called_once_with(webhook_trigger.tenant_id) + mock_trigger.assert_called_once() + trigger_args = mock_trigger.call_args.args + assert trigger_args[1] is end_user + assert trigger_args[2].workflow_id == workflow.id + assert trigger_args[2].root_node_id == webhook_trigger.node_id + + def test_trigger_workflow_execution_marks_tenant_rate_limited_when_quota_exceeded( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + workflow = factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version="2026-04-14.001" + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + + with ( + patch( + "services.trigger.webhook_service.EndUserService.get_or_create_end_user_by_type", + return_value=SimpleNamespace(id=str(uuid4())), + ), + patch( + "services.trigger.webhook_service.QuotaType.TRIGGER.consume", + side_effect=QuotaExceededError(feature="trigger", tenant_id=tenant.id, required=1), + ), + 
patch( + "services.trigger.webhook_service.AppTriggerService.mark_tenant_triggers_rate_limited" + ) as mock_mark_rate_limited, + ): + with pytest.raises(QuotaExceededError): + WebhookService.trigger_workflow_execution( + webhook_trigger, + {"body": {}, "headers": {}, "query_params": {}, "files": {}, "method": "POST"}, + workflow, + ) + + mock_mark_rate_limited.assert_called_once_with(tenant.id) + + def test_trigger_workflow_execution_logs_and_reraises_unexpected_errors( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + workflow = factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version="2026-04-14.001" + ) + webhook_trigger = factory.create_webhook_trigger( + db_session_with_containers, app=app, account=account, node_id="node-1" + ) + + with ( + patch( + "services.trigger.webhook_service.EndUserService.get_or_create_end_user_by_type", + side_effect=RuntimeError("boom"), + ), + patch("services.trigger.webhook_service.logger.exception") as mock_logger_exception, + ): + with pytest.raises(RuntimeError, match="boom"): + WebhookService.trigger_workflow_execution( + webhook_trigger, + {"body": {}, "headers": {}, "query_params": {}, "files": {}, "method": "POST"}, + workflow, + ) + + mock_logger_exception.assert_called_once() + + +class TestWebhookServiceRelationshipSyncWithContainers: + def test_sync_webhook_relationships_raises_when_workflow_exceeds_node_limit( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + node_ids = [f"node-{index}" for index in range(WebhookService.MAX_WEBHOOK_NODES_PER_WORKFLOW + 1)] + workflow = factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=node_ids, version=Workflow.VERSION_DRAFT + ) + + with pytest.raises(ValueError, match="maximum webhook node limit"): + WebhookService.sync_webhook_relationships(app, workflow) + + def test_sync_webhook_relationships_raises_when_lock_not_acquired( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + workflow = factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=["node-1"], version=Workflow.VERSION_DRAFT + ) + lock = MagicMock() + lock.acquire.return_value = False + + with patch("services.trigger.webhook_service.redis_client.lock", return_value=lock): + with pytest.raises(RuntimeError, match="Failed to acquire lock"): + WebhookService.sync_webhook_relationships(app, workflow) + + def test_sync_webhook_relationships_creates_missing_records_and_deletes_stale_records( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = 
factory.create_app(db_session_with_containers, tenant, account) + stale_trigger = factory.create_webhook_trigger( + db_session_with_containers, + app=app, + account=account, + node_id="node-stale", + webhook_id="stale-webhook-id-000001", + ) + stale_trigger_id = stale_trigger.id + workflow = factory.create_workflow( + db_session_with_containers, + app=app, + account=account, + node_ids=["node-new"], + version=Workflow.VERSION_DRAFT, + ) + + with patch( + "services.trigger.webhook_service.WebhookService.generate_webhook_id", return_value="new-webhook-id-000001" + ): + WebhookService.sync_webhook_relationships(app, workflow) + + db_session_with_containers.expire_all() + records = db_session_with_containers.scalars( + select(WorkflowWebhookTrigger).where(WorkflowWebhookTrigger.app_id == app.id) + ).all() + + assert [record.node_id for record in records] == ["node-new"] + assert records[0].webhook_id == "new-webhook-id-000001" + assert db_session_with_containers.get(WorkflowWebhookTrigger, stale_trigger_id) is None + + def test_sync_webhook_relationships_sets_redis_cache_for_new_record( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + workflow = factory.create_workflow( + db_session_with_containers, + app=app, + account=account, + node_ids=["node-cache"], + version=Workflow.VERSION_DRAFT, + ) + cache_key = f"{WebhookService.__WEBHOOK_NODE_CACHE_KEY__}:{app.id}:node-cache" + + with patch( + "services.trigger.webhook_service.WebhookService.generate_webhook_id", return_value="cache-webhook-id-00001" + ): + WebhookService.sync_webhook_relationships(app, workflow) + + cached_payload = WebhookServiceRelationshipFactory._read_cache(cache_key) + assert cached_payload is not None + assert cached_payload["node_id"] == "node-cache" + assert cached_payload["webhook_id"] == "cache-webhook-id-00001" + + def test_sync_webhook_relationships_logs_when_lock_release_fails( + self, db_session_with_containers: Session, flask_app_with_containers + ): + del flask_app_with_containers + factory = WebhookServiceRelationshipFactory + account, tenant = factory.create_account_and_tenant(db_session_with_containers) + app = factory.create_app(db_session_with_containers, tenant, account) + workflow = factory.create_workflow( + db_session_with_containers, app=app, account=account, node_ids=[], version=Workflow.VERSION_DRAFT + ) + lock = MagicMock() + lock.acquire.return_value = True + lock.release.side_effect = RuntimeError("release failed") + + with ( + patch("services.trigger.webhook_service.redis_client.lock", return_value=lock), + patch("services.trigger.webhook_service.logger.exception") as mock_logger_exception, + ): + WebhookService.sync_webhook_relationships(app, workflow) + + mock_logger_exception.assert_called_once() + + +def _read_cache(cache_key: str) -> dict[str, str] | None: + from extensions.ext_redis import redis_client + + cached = redis_client.get(cache_key) + if not cached: + return None + if isinstance(cached, bytes): + cached = cached.decode("utf-8") + return json.loads(cached) + + +WebhookServiceRelationshipFactory._read_cache = staticmethod(_read_cache) diff --git a/api/tests/test_containers_integration_tests/services/test_workflow_app_service.py b/api/tests/test_containers_integration_tests/services/test_workflow_app_service.py index 
749c6fff5b..1e57b5603d 100644 --- a/api/tests/test_containers_integration_tests/services/test_workflow_app_service.py +++ b/api/tests/test_containers_integration_tests/services/test_workflow_app_service.py @@ -8,9 +8,9 @@ from unittest.mock import patch import pytest from faker import Faker -from graphon.enums import WorkflowExecutionStatus from sqlalchemy.orm import Session +from graphon.enums import WorkflowExecutionStatus from models import EndUser, Workflow, WorkflowAppLog, WorkflowArchiveLog, WorkflowRun from models.enums import AppTriggerType, CreatorUserRole, WorkflowRunTriggeredFrom from models.workflow import WorkflowAppLogCreatedFrom diff --git a/api/tests/test_containers_integration_tests/services/test_workflow_draft_variable_service.py b/api/tests/test_containers_integration_tests/services/test_workflow_draft_variable_service.py index 0c281c8c33..86cf2327c7 100644 --- a/api/tests/test_containers_integration_tests/services/test_workflow_draft_variable_service.py +++ b/api/tests/test_containers_integration_tests/services/test_workflow_draft_variable_service.py @@ -1,9 +1,9 @@ import pytest from faker import Faker -from graphon.variables.segments import StringSegment from sqlalchemy.orm import Session from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, SYSTEM_VARIABLE_NODE_ID +from graphon.variables.segments import StringSegment from models import App, Workflow from models.enums import DraftVariableType from models.workflow import WorkflowDraftVariable diff --git a/api/tests/test_containers_integration_tests/services/tools/test_api_tools_manage_service.py b/api/tests/test_containers_integration_tests/services/tools/test_api_tools_manage_service.py index d3e765055a..af83adaae0 100644 --- a/api/tests/test_containers_integration_tests/services/tools/test_api_tools_manage_service.py +++ b/api/tests/test_containers_integration_tests/services/tools/test_api_tools_manage_service.py @@ -1,3 +1,5 @@ +import inspect +import json from unittest.mock import patch import pytest @@ -6,6 +8,8 @@ from pydantic import TypeAdapter, ValidationError from sqlalchemy.orm import Session from core.tools.entities.tool_entities import ApiProviderSchemaType +from core.tools.errors import ApiToolProviderNotFoundError +from core.tools.tool_label_manager import ToolLabelManager from models import Account, Tenant from models.tools import ApiToolProvider from services.tools.api_tools_manage_service import ApiToolManageService @@ -590,30 +594,204 @@ class TestApiToolManageService: with pytest.raises(ValueError, match="you have not added provider"): ApiToolManageService.delete_api_tool_provider(account.id, tenant.id, "nonexistent") - def test_update_api_tool_provider_not_found( + def test_update_api_tool_provider_success( self, flask_req_ctx_with_containers, db_session_with_containers: Session, mock_external_service_dependencies ): - """Test update raises ValueError when original provider not found.""" fake = Faker() + + # Wire up a mock cache so cache.delete() in the update flow can be asserted + mock_encrypter = mock_external_service_dependencies["encrypter"] + from unittest.mock import MagicMock + + mock_cache = MagicMock() + mock_cache.delete.return_value = None + mock_encrypter.return_value = (mock_encrypter, mock_cache) + + # Get fake account and tenant account, tenant = self._create_test_account_and_tenant( db_session_with_containers, mock_external_service_dependencies ) - with pytest.raises(ValueError, match="does not exists"): - ApiToolManageService.update_api_tool_provider( + # original provider name +
original_name = "original-provider" + + # Create original provider + _ = ApiToolManageService.create_api_tool_provider( + user_id=account.id, + tenant_id=tenant.id, + provider_name=original_name, + icon={"type": "emoji", "value": "🔧"}, + credentials={"auth_type": "none"}, + schema_type=ApiProviderSchemaType.OPENAPI, + schema=self._create_test_openapi_schema(), + privacy_policy="", + custom_disclaimer="", + labels=["old-label"], + ) + + # New provider name and new labels for the update + new_name = "updated-provider" + new_labels = ["new-label-1", "new-label-2"] + + # Reset mock history so assertions focus on update path only + mock_external_service_dependencies["encrypter"].reset_mock() + mock_external_service_dependencies["provider_controller"].from_db.reset_mock() + mock_external_service_dependencies["tool_label_manager"].update_tool_labels.reset_mock() + + # Act: Update the provider with new values + result = ApiToolManageService.update_api_tool_provider( + user_id=account.id, + tenant_id=tenant.id, + # new provider name - changed 1 + provider_name=new_name, + original_provider=original_name, + # new icon - changed 2 + icon={"type": "emoji", "value": "🚀"}, + credentials={"auth_type": "none"}, + _schema_type=ApiProviderSchemaType.OPENAPI, + schema=self._create_test_openapi_schema(), + # new privacy policy - changed 3 + privacy_policy="https://new-policy.com", + # new custom disclaimer - changed 4 + custom_disclaimer="New disclaimer", + # new labels - changed 5 (label persistence itself is not verified here; that is not this layer's responsibility) + labels=new_labels, + ) + + # Assert: Verify the result + assert result == {"result": "success"} + + # Get the updated provider from the database + updated_provider: ApiToolProvider | None = ( + db_session_with_containers.query(ApiToolProvider) + .filter(ApiToolProvider.tenant_id == tenant.id, ApiToolProvider.name == new_name) + .first() + ) + + # Verify the provider was updated successfully + assert updated_provider is not None + + # Refresh manually so the instance reflects the persisted state + db_session_with_containers.refresh(updated_provider) + # Verify all the updated fields + # - changed 1 + assert updated_provider.name == new_name + # - changed 2 + icon_data = json.loads(updated_provider.icon) + assert icon_data["type"] == "emoji" + assert icon_data["value"] == "🚀" + # - changed 3 + assert updated_provider.privacy_policy == "https://new-policy.com" + # - changed 4 + assert updated_provider.custom_disclaimer == "New disclaimer" + + # Verify old provider name no longer exists after rename + original_provider: ApiToolProvider | None = ( + db_session_with_containers.query(ApiToolProvider) + .filter(ApiToolProvider.tenant_id == tenant.id, ApiToolProvider.name == original_name) + .first() + ) + assert original_provider is None + + # Verify update flow calls critical collaborators + mock_external_service_dependencies["provider_controller"].from_db.assert_called_once() + mock_external_service_dependencies["encrypter"].assert_called_once() + mock_cache.delete.assert_called_once() + + # Verify session propagation in the label-update logic: + # the refactor passes the session down to the label manager to keep the update atomic, + # so assert that a Session actually reaches ToolLabelManager.update_tool_labels.
+ sig = inspect.signature(ToolLabelManager.update_tool_labels) + args, kwargs = mock_external_service_dependencies["tool_label_manager"].update_tool_labels.call_args + bound_args = sig.bind(*args, **kwargs) + passed_session = bound_args.arguments.get("session") + # Check presence first, then the type, so each failure message stays meaningful + assert passed_session is not None, ( + "Atomicity failure: no Session was passed to ToolLabelManager in update_api_tool_provider" + ) + assert isinstance(passed_session, Session), f"Expected Session object, got {type(passed_session)}" + + def test_update_api_tool_provider_not_found( + self, flask_req_ctx_with_containers, db_session_with_containers: Session, mock_external_service_dependencies + ): + """ + Test update raises ApiToolProviderNotFoundError when the original provider is not found. + + This test verifies: + - Proper error when trying to update a non-existing original provider + - No accidental upsert/new provider creation + - No external dependency invocation on early failure path + """ + # Arrange: Create test account and tenant + account, tenant = self._create_test_account_and_tenant( + db_session_with_containers, mock_external_service_dependencies + ) + + # Keep an existing provider in DB to ensure unrelated data remains unchanged + existing_provider_name = "existing-provider" + _ = ApiToolManageService.create_api_tool_provider( + user_id=account.id, + tenant_id=tenant.id, + provider_name=existing_provider_name, + icon={"type": "emoji", "value": "🔧"}, + credentials={"auth_type": "none"}, + schema_type=ApiProviderSchemaType.OPENAPI, + schema=self._create_test_openapi_schema(), + privacy_policy="https://existing-policy.com", + custom_disclaimer="Existing disclaimer", + labels=["existing-label"], + ) + + # Reset mock history so assertions focus on update failure path only + mock_external_service_dependencies["tool_label_manager"].update_tool_labels.reset_mock() + mock_external_service_dependencies["encrypter"].reset_mock() + mock_external_service_dependencies["provider_controller"].from_db.reset_mock() + + # Act & Assert: Verify the update fails with a clear error message + target_new_name = "new-provider-name" + missing_original_name = "missing-original-provider" + with pytest.raises(ApiToolProviderNotFoundError) as exc_info: + _ = ApiToolManageService.update_api_tool_provider( user_id=account.id, tenant_id=tenant.id, - provider_name="new-name", - original_provider="nonexistent", - icon={}, + provider_name=target_new_name, + original_provider=missing_original_name, + icon={"type": "emoji", "value": "🚀"}, credentials={"auth_type": "none"}, _schema_type=ApiProviderSchemaType.OPENAPI, schema=self._create_test_openapi_schema(), - privacy_policy=None, - custom_disclaimer="", - labels=[], + privacy_policy="https://new-policy.com", + custom_disclaimer="New disclaimer", + labels=["new-label"], ) + error = exc_info.value + assert error.provider_name == missing_original_name + assert error.tenant_id == tenant.id + assert error.error_code == "api_tool_provider_not_found" + + # Assert: Existing provider should remain unchanged + existing_provider: ApiToolProvider | None = ( + db_session_with_containers.query(ApiToolProvider) + .filter(ApiToolProvider.tenant_id == tenant.id, ApiToolProvider.name == existing_provider_name) + .first() + ) + assert existing_provider is not None + assert existing_provider.name == existing_provider_name + + # Assert: No new provider should be created + unexpected_new_provider: ApiToolProvider | None = ( + db_session_with_containers.query(ApiToolProvider) + .filter(ApiToolProvider.tenant_id == tenant.id, 
ApiToolProvider.name == target_new_name) + .first() + ) + assert unexpected_new_provider is None + + # Assert: Early failure should skip all downstream external interactions + mock_external_service_dependencies["tool_label_manager"].update_tool_labels.assert_not_called() + mock_external_service_dependencies["encrypter"].assert_not_called() + mock_external_service_dependencies["provider_controller"].from_db.assert_not_called() + def test_update_api_tool_provider_missing_auth_type( self, flask_req_ctx_with_containers, db_session_with_containers: Session, mock_external_service_dependencies ): diff --git a/api/tests/test_containers_integration_tests/services/workflow/test_workflow_converter.py b/api/tests/test_containers_integration_tests/services/workflow/test_workflow_converter.py index ce2fd2eeb1..ce5c2bd162 100644 --- a/api/tests/test_containers_integration_tests/services/workflow/test_workflow_converter.py +++ b/api/tests/test_containers_integration_tests/services/workflow/test_workflow_converter.py @@ -5,9 +5,6 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.entities.message_entities import PromptMessageRole -from graphon.variables.input_entities import VariableEntity, VariableEntityType from sqlalchemy.orm import Session from core.app.app_config.entities import ( @@ -21,6 +18,9 @@ from core.app.app_config.entities import ( PromptTemplateEntity, ) from core.prompt.utils.prompt_template_parser import PromptTemplateParser +from graphon.model_runtime.entities.llm_entities import LLMMode +from graphon.model_runtime.entities.message_entities import PromptMessageRole +from graphon.variables.input_entities import VariableEntity, VariableEntityType from models import Account, Tenant from models.api_based_extension import APIBasedExtension, APIBasedExtensionPoint from models.model import App, AppMode, AppModelConfig diff --git a/api/tests/test_containers_integration_tests/services/workflow/test_workflow_node_execution_service_repository.py b/api/tests/test_containers_integration_tests/services/workflow/test_workflow_node_execution_service_repository.py index 7c43bf676b..4dab895135 100644 --- a/api/tests/test_containers_integration_tests/services/workflow/test_workflow_node_execution_service_repository.py +++ b/api/tests/test_containers_integration_tests/services/workflow/test_workflow_node_execution_service_repository.py @@ -1,10 +1,10 @@ from datetime import datetime, timedelta from uuid import uuid4 -from graphon.enums import WorkflowNodeExecutionStatus from sqlalchemy import Engine, select from sqlalchemy.orm import Session, sessionmaker +from graphon.enums import WorkflowNodeExecutionStatus from libs.datetime_utils import naive_utc_now from models.enums import CreatorUserRole from models.workflow import WorkflowNodeExecutionModel diff --git a/api/tests/test_containers_integration_tests/tasks/test_add_document_to_index_task.py b/api/tests/test_containers_integration_tests/tasks/test_add_document_to_index_task.py index 4b04c1accb..fcc15aad42 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_add_document_to_index_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_add_document_to_index_task.py @@ -2,6 +2,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import select from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType, 
IndexTechniqueType @@ -530,22 +531,18 @@ class TestAddDocumentToIndexTask: redis_client.set(indexing_cache_key, "processing", ex=300) # Verify logs exist before processing - existing_logs = ( - db_session_with_containers.query(DatasetAutoDisableLog) - .where(DatasetAutoDisableLog.document_id == document.id) - .all() - ) + existing_logs = db_session_with_containers.scalars( + select(DatasetAutoDisableLog).where(DatasetAutoDisableLog.document_id == document.id) + ).all() assert len(existing_logs) == 2 # Act: Execute the task add_document_to_index_task(document.id) # Assert: Verify auto disable logs were deleted - remaining_logs = ( - db_session_with_containers.query(DatasetAutoDisableLog) - .where(DatasetAutoDisableLog.document_id == document.id) - .all() - ) + remaining_logs = db_session_with_containers.scalars( + select(DatasetAutoDisableLog).where(DatasetAutoDisableLog.document_id == document.id) + ).all() assert len(remaining_logs) == 0 # Verify index processing occurred normally diff --git a/api/tests/test_containers_integration_tests/tasks/test_batch_clean_document_task.py b/api/tests/test_containers_integration_tests/tasks/test_batch_clean_document_task.py index 6cbbe43137..e29ca7ebab 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_batch_clean_document_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_batch_clean_document_task.py @@ -11,6 +11,7 @@ from unittest.mock import Mock, patch import pytest from faker import Faker +from sqlalchemy import func, select from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType @@ -267,11 +268,13 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Ensure all changes are committed # Check that segment is deleted - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None # Check that upload file is deleted - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file = db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) assert deleted_file is None def test_batch_clean_document_task_with_image_files( @@ -319,7 +322,9 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Check that segment is deleted - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None # Verify that the task completed successfully by checking the log output @@ -360,14 +365,14 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Check that upload file is deleted - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file = db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) assert deleted_file is None # Verify database cleanup db_session_with_containers.commit() # Check that upload file is deleted - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file = db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) assert deleted_file is None def 
test_batch_clean_document_task_dataset_not_found( @@ -410,7 +415,9 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Document should still exist since cleanup failed - existing_document = db_session_with_containers.query(Document).filter_by(id=document_id).first() + existing_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document_id).limit(1) + ) assert existing_document is not None def test_batch_clean_document_task_storage_cleanup_failure( @@ -453,11 +460,13 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Check that segment is deleted from database - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None # Check that upload file is deleted from database - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file = db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) assert deleted_file is None def test_batch_clean_document_task_multiple_documents( @@ -510,12 +519,16 @@ class TestBatchCleanDocumentTask: # Check that all segments are deleted for segment_id in segment_ids: - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None # Check that all upload files are deleted for file_id in file_ids: - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file = db_session_with_containers.scalar( + select(UploadFile).where(UploadFile.id == file_id).limit(1) + ) assert deleted_file is None def test_batch_clean_document_task_different_doc_forms( @@ -564,7 +577,9 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Check that segment is deleted - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None except Exception as e: @@ -574,7 +589,9 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Check if the segment still exists (task may have failed before deletion) - existing_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + existing_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) if existing_segment is not None: # If segment still exists, the task failed before deletion # This is acceptable in test environments with external service issues @@ -645,12 +662,16 @@ class TestBatchCleanDocumentTask: # Check that all segments are deleted for segment_id in segment_ids: - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None # Check that all upload files are deleted for file_id in file_ids: - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file 
= db_session_with_containers.scalar( + select(UploadFile).where(UploadFile.id == file_id).limit(1) + ) assert deleted_file is None def test_batch_clean_document_task_integration_with_real_database( @@ -699,8 +720,16 @@ class TestBatchCleanDocumentTask: db_session_with_containers.commit() # Verify initial state - assert db_session_with_containers.query(DocumentSegment).filter_by(document_id=document.id).count() == 3 - assert db_session_with_containers.query(UploadFile).filter_by(id=upload_file.id).first() is not None + assert ( + db_session_with_containers.scalar( + select(func.count()).select_from(DocumentSegment).where(DocumentSegment.document_id == document.id) + ) + == 3 + ) + assert ( + db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == upload_file.id).limit(1)) + is not None + ) # Store original IDs for verification document_id = document.id @@ -720,13 +749,20 @@ class TestBatchCleanDocumentTask: # Check that all segments are deleted for segment_id in segment_ids: - deleted_segment = db_session_with_containers.query(DocumentSegment).filter_by(id=segment_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == segment_id).limit(1) + ) assert deleted_segment is None # Check that upload file is deleted - deleted_file = db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() + deleted_file = db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) assert deleted_file is None # Verify final database state - assert db_session_with_containers.query(DocumentSegment).filter_by(document_id=document_id).count() == 0 - assert db_session_with_containers.query(UploadFile).filter_by(id=file_id).first() is None + assert ( + db_session_with_containers.scalar( + select(func.count()).select_from(DocumentSegment).where(DocumentSegment.document_id == document_id) + ) + == 0 + ) + assert db_session_with_containers.scalar(select(UploadFile).where(UploadFile.id == file_id).limit(1)) is None diff --git a/api/tests/test_containers_integration_tests/tasks/test_batch_create_segment_to_index_task.py b/api/tests/test_containers_integration_tests/tasks/test_batch_create_segment_to_index_task.py index f9ae33b32f..05827112d4 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_batch_create_segment_to_index_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_batch_create_segment_to_index_task.py @@ -17,6 +17,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import delete, select from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType @@ -37,13 +38,13 @@ class TestBatchCreateSegmentToIndexTask: from extensions.ext_redis import redis_client # Clear all test data - db_session_with_containers.query(DocumentSegment).delete() - db_session_with_containers.query(Document).delete() - db_session_with_containers.query(Dataset).delete() - db_session_with_containers.query(UploadFile).delete() - db_session_with_containers.query(TenantAccountJoin).delete() - db_session_with_containers.query(Tenant).delete() - db_session_with_containers.query(Account).delete() + db_session_with_containers.execute(delete(DocumentSegment)) + db_session_with_containers.execute(delete(Document)) + db_session_with_containers.execute(delete(Dataset)) + db_session_with_containers.execute(delete(UploadFile)) + 
db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.execute(delete(Tenant)) + db_session_with_containers.execute(delete(Account)) db_session_with_containers.commit() # Clear Redis cache @@ -292,12 +293,9 @@ class TestBatchCreateSegmentToIndexTask: # Verify results # Check that segments were created - segments = ( - db_session_with_containers.query(DocumentSegment) - .filter_by(document_id=document.id) - .order_by(DocumentSegment.position) - .all() - ) + segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.document_id == document.id).order_by(DocumentSegment.position) + ).all() assert len(segments) == 3 # Verify segment content and metadata @@ -367,11 +365,11 @@ class TestBatchCreateSegmentToIndexTask: # Verify no segments were created (since dataset doesn't exist) - segments = db_session_with_containers.query(DocumentSegment).all() + segments = db_session_with_containers.scalars(select(DocumentSegment)).all() assert len(segments) == 0 # Verify no documents were modified - documents = db_session_with_containers.query(Document).all() + documents = db_session_with_containers.scalars(select(Document)).all() assert len(documents) == 0 def test_batch_create_segment_to_index_task_document_not_found( @@ -415,12 +413,14 @@ class TestBatchCreateSegmentToIndexTask: # Verify no segments were created - segments = db_session_with_containers.query(DocumentSegment).all() + segments = db_session_with_containers.scalars(select(DocumentSegment)).all() assert len(segments) == 0 # Verify dataset remains unchanged (no segments were added to the dataset) db_session_with_containers.refresh(dataset) - segments_for_dataset = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + segments_for_dataset = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(segments_for_dataset) == 0 def test_batch_create_segment_to_index_task_document_not_available( @@ -516,7 +516,9 @@ class TestBatchCreateSegmentToIndexTask: assert cache_value == b"error" # Verify no segments were created - segments = db_session_with_containers.query(DocumentSegment).filter_by(document_id=document.id).all() + segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.document_id == document.id) + ).all() assert len(segments) == 0 def test_batch_create_segment_to_index_task_upload_file_not_found( @@ -560,7 +562,7 @@ class TestBatchCreateSegmentToIndexTask: # Verify no segments were created - segments = db_session_with_containers.query(DocumentSegment).all() + segments = db_session_with_containers.scalars(select(DocumentSegment)).all() assert len(segments) == 0 # Verify document remains unchanged @@ -611,7 +613,7 @@ class TestBatchCreateSegmentToIndexTask: # Verify error handling # Since exception was raised, no segments should be created - segments = db_session_with_containers.query(DocumentSegment).all() + segments = db_session_with_containers.scalars(select(DocumentSegment)).all() assert len(segments) == 0 # Verify document remains unchanged @@ -682,12 +684,9 @@ class TestBatchCreateSegmentToIndexTask: # Verify results # Check that new segments were created with correct positions - all_segments = ( - db_session_with_containers.query(DocumentSegment) - .filter_by(document_id=document.id) - .order_by(DocumentSegment.position) - .all() - ) + all_segments = db_session_with_containers.scalars( + 
select(DocumentSegment).where(DocumentSegment.document_id == document.id).order_by(DocumentSegment.position) + ).all() assert len(all_segments) == 6 # 3 existing + 3 new # Verify position ordering diff --git a/api/tests/test_containers_integration_tests/tasks/test_clean_dataset_task.py b/api/tests/test_containers_integration_tests/tasks/test_clean_dataset_task.py index 1dd37fbc92..32bc2fc0bd 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_clean_dataset_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_clean_dataset_task.py @@ -16,6 +16,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import delete, select from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType @@ -52,18 +53,18 @@ class TestCleanDatasetTask: from extensions.ext_redis import redis_client # Clear all test data using the provided session fixture - db_session_with_containers.query(DatasetMetadataBinding).delete() - db_session_with_containers.query(DatasetMetadata).delete() - db_session_with_containers.query(AppDatasetJoin).delete() - db_session_with_containers.query(DatasetQuery).delete() - db_session_with_containers.query(DatasetProcessRule).delete() - db_session_with_containers.query(DocumentSegment).delete() - db_session_with_containers.query(Document).delete() - db_session_with_containers.query(Dataset).delete() - db_session_with_containers.query(UploadFile).delete() - db_session_with_containers.query(TenantAccountJoin).delete() - db_session_with_containers.query(Tenant).delete() - db_session_with_containers.query(Account).delete() + db_session_with_containers.execute(delete(DatasetMetadataBinding)) + db_session_with_containers.execute(delete(DatasetMetadata)) + db_session_with_containers.execute(delete(AppDatasetJoin)) + db_session_with_containers.execute(delete(DatasetQuery)) + db_session_with_containers.execute(delete(DatasetProcessRule)) + db_session_with_containers.execute(delete(DocumentSegment)) + db_session_with_containers.execute(delete(Document)) + db_session_with_containers.execute(delete(Dataset)) + db_session_with_containers.execute(delete(UploadFile)) + db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.execute(delete(Tenant)) + db_session_with_containers.execute(delete(Account)) db_session_with_containers.commit() # Clear Redis cache @@ -302,28 +303,40 @@ class TestCleanDatasetTask: # Verify results # Check that dataset-related data was cleaned up - documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + documents = db_session_with_containers.scalars(select(Document).where(Document.dataset_id == dataset.id)).all() assert len(documents) == 0 - segments = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(segments) == 0 # Check that metadata and bindings were cleaned up - metadata = db_session_with_containers.query(DatasetMetadata).filter_by(dataset_id=dataset.id).all() + metadata = db_session_with_containers.scalars( + select(DatasetMetadata).where(DatasetMetadata.dataset_id == dataset.id) + ).all() assert len(metadata) == 0 - bindings = db_session_with_containers.query(DatasetMetadataBinding).filter_by(dataset_id=dataset.id).all() + bindings = db_session_with_containers.scalars( + 
select(DatasetMetadataBinding).where(DatasetMetadataBinding.dataset_id == dataset.id) + ).all() assert len(bindings) == 0 # Check that process rules and queries were cleaned up - process_rules = db_session_with_containers.query(DatasetProcessRule).filter_by(dataset_id=dataset.id).all() + process_rules = db_session_with_containers.scalars( + select(DatasetProcessRule).where(DatasetProcessRule.dataset_id == dataset.id) + ).all() assert len(process_rules) == 0 - queries = db_session_with_containers.query(DatasetQuery).filter_by(dataset_id=dataset.id).all() + queries = db_session_with_containers.scalars( + select(DatasetQuery).where(DatasetQuery.dataset_id == dataset.id) + ).all() assert len(queries) == 0 # Check that app dataset joins were cleaned up - app_joins = db_session_with_containers.query(AppDatasetJoin).filter_by(dataset_id=dataset.id).all() + app_joins = db_session_with_containers.scalars( + select(AppDatasetJoin).where(AppDatasetJoin.dataset_id == dataset.id) + ).all() assert len(app_joins) == 0 # Verify index processor was called @@ -414,24 +427,32 @@ class TestCleanDatasetTask: # Verify results # Check that all documents were deleted - remaining_documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + remaining_documents = db_session_with_containers.scalars( + select(Document).where(Document.dataset_id == dataset.id) + ).all() assert len(remaining_documents) == 0 # Check that all segments were deleted - remaining_segments = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + remaining_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(remaining_segments) == 0 # Check that all upload files were deleted - remaining_files = db_session_with_containers.query(UploadFile).where(UploadFile.id.in_(upload_file_ids)).all() + remaining_files = db_session_with_containers.scalars( + select(UploadFile).where(UploadFile.id.in_(upload_file_ids)) + ).all() assert len(remaining_files) == 0 # Check that metadata and bindings were cleaned up - remaining_metadata = db_session_with_containers.query(DatasetMetadata).filter_by(dataset_id=dataset.id).all() + remaining_metadata = db_session_with_containers.scalars( + select(DatasetMetadata).where(DatasetMetadata.dataset_id == dataset.id) + ).all() assert len(remaining_metadata) == 0 - remaining_bindings = ( - db_session_with_containers.query(DatasetMetadataBinding).filter_by(dataset_id=dataset.id).all() - ) + remaining_bindings = db_session_with_containers.scalars( + select(DatasetMetadataBinding).where(DatasetMetadataBinding.dataset_id == dataset.id) + ).all() assert len(remaining_bindings) == 0 # Verify index processor was called @@ -485,12 +506,14 @@ class TestCleanDatasetTask: # Check that all data was cleaned up - remaining_documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + remaining_documents = db_session_with_containers.scalars( + select(Document).where(Document.dataset_id == dataset.id) + ).all() assert len(remaining_documents) == 0 - remaining_segments = ( - db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() - ) + remaining_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(remaining_segments) == 0 # Recreate data for next test case @@ -538,11 +561,15 @@ class TestCleanDatasetTask: # Verify results - even with 
vector cleanup failure, documents and segments should be deleted # Check that documents were still deleted despite vector cleanup failure - remaining_documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + remaining_documents = db_session_with_containers.scalars( + select(Document).where(Document.dataset_id == dataset.id) + ).all() assert len(remaining_documents) == 0 # Check that segments were still deleted despite vector cleanup failure - remaining_segments = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + remaining_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(remaining_segments) == 0 # Verify that index processor was called and failed @@ -622,18 +649,22 @@ class TestCleanDatasetTask: # Verify results # Check that all documents were deleted - remaining_documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + remaining_documents = db_session_with_containers.scalars( + select(Document).where(Document.dataset_id == dataset.id) + ).all() assert len(remaining_documents) == 0 # Check that all segments were deleted - remaining_segments = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + remaining_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(remaining_segments) == 0 # Check that all image files were deleted from database image_file_ids = [f.id for f in image_files] - remaining_image_files = ( - db_session_with_containers.query(UploadFile).where(UploadFile.id.in_(image_file_ids)).all() - ) + remaining_image_files = db_session_with_containers.scalars( + select(UploadFile).where(UploadFile.id.in_(image_file_ids)) + ).all() assert len(remaining_image_files) == 0 # Verify that storage.delete was called for each image file @@ -738,24 +769,32 @@ class TestCleanDatasetTask: # Verify results # Check that all documents were deleted - remaining_documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + remaining_documents = db_session_with_containers.scalars( + select(Document).where(Document.dataset_id == dataset.id) + ).all() assert len(remaining_documents) == 0 # Check that all segments were deleted - remaining_segments = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + remaining_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(remaining_segments) == 0 # Check that all upload files were deleted - remaining_files = db_session_with_containers.query(UploadFile).where(UploadFile.id.in_(upload_file_ids)).all() + remaining_files = db_session_with_containers.scalars( + select(UploadFile).where(UploadFile.id.in_(upload_file_ids)) + ).all() assert len(remaining_files) == 0 # Check that all metadata and bindings were deleted - remaining_metadata = db_session_with_containers.query(DatasetMetadata).filter_by(dataset_id=dataset.id).all() + remaining_metadata = db_session_with_containers.scalars( + select(DatasetMetadata).where(DatasetMetadata.dataset_id == dataset.id) + ).all() assert len(remaining_metadata) == 0 - remaining_bindings = ( - db_session_with_containers.query(DatasetMetadataBinding).filter_by(dataset_id=dataset.id).all() - ) + remaining_bindings = db_session_with_containers.scalars( + 
select(DatasetMetadataBinding).where(DatasetMetadataBinding.dataset_id == dataset.id) + ).all() assert len(remaining_bindings) == 0 # Verify performance expectations @@ -826,7 +865,9 @@ class TestCleanDatasetTask: # Check that upload file was still deleted from database despite storage failure # Note: When storage operations fail, the upload file may not be deleted # This demonstrates that the cleanup process continues even with storage errors - remaining_files = db_session_with_containers.query(UploadFile).filter_by(id=upload_file.id).all() + remaining_files = db_session_with_containers.scalars( + select(UploadFile).where(UploadFile.id == upload_file.id) + ).all() # The upload file should still be deleted from the database even if storage cleanup fails # However, this depends on the specific implementation of clean_dataset_task if len(remaining_files) > 0: @@ -976,19 +1017,27 @@ class TestCleanDatasetTask: # Verify results # Check that all documents were deleted - remaining_documents = db_session_with_containers.query(Document).filter_by(dataset_id=dataset.id).all() + remaining_documents = db_session_with_containers.scalars( + select(Document).where(Document.dataset_id == dataset.id) + ).all() assert len(remaining_documents) == 0 # Check that all segments were deleted - remaining_segments = db_session_with_containers.query(DocumentSegment).filter_by(dataset_id=dataset.id).all() + remaining_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.dataset_id == dataset.id) + ).all() assert len(remaining_segments) == 0 # Check that all upload files were deleted - remaining_files = db_session_with_containers.query(UploadFile).filter_by(id=upload_file_id).all() + remaining_files = db_session_with_containers.scalars( + select(UploadFile).where(UploadFile.id == upload_file_id) + ).all() assert len(remaining_files) == 0 # Check that all metadata was deleted - remaining_metadata = db_session_with_containers.query(DatasetMetadata).filter_by(dataset_id=dataset.id).all() + remaining_metadata = db_session_with_containers.scalars( + select(DatasetMetadata).where(DatasetMetadata.dataset_id == dataset.id) + ).all() assert len(remaining_metadata) == 0 # Verify that storage.delete was called diff --git a/api/tests/test_containers_integration_tests/tasks/test_clean_notion_document_task.py b/api/tests/test_containers_integration_tests/tasks/test_clean_notion_document_task.py index 926c839c8b..fa3ac12cf0 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_clean_notion_document_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_clean_notion_document_task.py @@ -11,6 +11,8 @@ from unittest.mock import Mock, patch import pytest from faker import Faker +from sqlalchemy import ColumnElement, func, select +from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType from models.dataset import Dataset, Document, DocumentSegment @@ -20,6 +22,14 @@ from tasks.clean_notion_document_task import clean_notion_document_task from tests.test_containers_integration_tests.helpers import generate_valid_password +def _count_documents(session: Session, condition: ColumnElement[bool]) -> int: + return session.scalar(select(func.count()).select_from(Document).where(condition)) or 0 + + +def _count_segments(session: Session, condition: ColumnElement[bool]) -> int: + return session.scalar(select(func.count()).select_from(DocumentSegment).where(condition)) or 0 + + class TestCleanNotionDocumentTask: """Integration 
tests for clean_notion_document_task using testcontainers.""" @@ -145,24 +155,14 @@ class TestCleanNotionDocumentTask: db_session_with_containers.commit() # Verify data exists before cleanup - assert db_session_with_containers.query(Document).filter(Document.id.in_(document_ids)).count() == 3 - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id.in_(document_ids)) - .count() - == 6 - ) + assert _count_documents(db_session_with_containers, Document.id.in_(document_ids)) == 3 + assert _count_segments(db_session_with_containers, DocumentSegment.document_id.in_(document_ids)) == 6 # Execute cleanup task clean_notion_document_task(document_ids, dataset.id) # Verify segments are deleted - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id.in_(document_ids)) - .count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id.in_(document_ids)) == 0 # Verify index processor was called mock_processor = mock_index_processor_factory.return_value.init_index_processor.return_value @@ -322,12 +322,7 @@ class TestCleanNotionDocumentTask: # The task properly handles various index types and document configurations. # Verify segments are deleted - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id == document.id) - .count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == document.id) == 0 # Reset mock for next iteration mock_index_processor_factory.reset_mock() @@ -410,10 +405,7 @@ class TestCleanNotionDocumentTask: clean_notion_document_task([document.id], dataset.id) # Verify segments are deleted - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.document_id == document.id).count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == document.id) == 0 # Note: This test successfully verifies that segments without index_node_ids # are properly deleted from the database. 
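A minimal sketch of the counting pattern behind the new `_count_documents` / `_count_segments` helpers, assuming the repository's `models.dataset` models and an open `Session`; the helper name below is hypothetical and not part of the patch, it only illustrates how `select(func.count())` replaces the legacy `Query.count()` calls removed in these hunks.

from sqlalchemy import func, select
from sqlalchemy.orm import Session

from models.dataset import DocumentSegment


def count_segments_for_documents(session: Session, document_ids: list[str]) -> int:
    # Hypothetical helper for illustration: SELECT COUNT(*) scoped by a condition,
    # mirroring _count_segments(session, DocumentSegment.document_id.in_(document_ids)).
    condition = DocumentSegment.document_id.in_(document_ids)
    return session.scalar(select(func.count()).select_from(DocumentSegment).where(condition)) or 0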
@@ -499,11 +491,8 @@ class TestCleanNotionDocumentTask: db_session_with_containers.commit() # Verify all data exists before cleanup - assert db_session_with_containers.query(Document).filter(Document.dataset_id == dataset.id).count() == 5 - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.dataset_id == dataset.id).count() - == 10 - ) + assert _count_documents(db_session_with_containers, Document.dataset_id == dataset.id) == 5 + assert _count_segments(db_session_with_containers, DocumentSegment.dataset_id == dataset.id) == 10 # Clean up only first 3 documents documents_to_clean = [doc.id for doc in documents[:3]] @@ -513,22 +502,12 @@ class TestCleanNotionDocumentTask: clean_notion_document_task(documents_to_clean, dataset.id) # Verify only specified documents' segments are deleted - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id.in_(documents_to_clean)) - .count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id.in_(documents_to_clean)) == 0 # Verify remaining documents and segments are intact remaining_docs = [doc.id for doc in documents[3:]] - assert db_session_with_containers.query(Document).filter(Document.id.in_(remaining_docs)).count() == 2 - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id.in_(remaining_docs)) - .count() - == 4 - ) + assert _count_documents(db_session_with_containers, Document.id.in_(remaining_docs)) == 2 + assert _count_segments(db_session_with_containers, DocumentSegment.document_id.in_(remaining_docs)) == 4 # Note: This test successfully verifies partial document cleanup operations. # The database operations work correctly, isolating only the specified documents. @@ -612,19 +591,13 @@ class TestCleanNotionDocumentTask: db_session_with_containers.commit() # Verify all segments exist before cleanup - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.document_id == document.id).count() - == 4 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == document.id) == 4 # Execute cleanup task clean_notion_document_task([document.id], dataset.id) # Verify all segments are deleted regardless of status - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.document_id == document.id).count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == document.id) == 0 # Note: This test successfully verifies database operations. # IndexProcessor verification would require more sophisticated mocking. 
@@ -794,12 +767,9 @@ class TestCleanNotionDocumentTask: db_session_with_containers.commit() # Verify all data exists before cleanup + assert _count_documents(db_session_with_containers, Document.dataset_id == dataset.id) == num_documents assert ( - db_session_with_containers.query(Document).filter(Document.dataset_id == dataset.id).count() - == num_documents - ) - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.dataset_id == dataset.id).count() + _count_segments(db_session_with_containers, DocumentSegment.dataset_id == dataset.id) == num_documents * num_segments_per_doc ) @@ -808,10 +778,7 @@ class TestCleanNotionDocumentTask: clean_notion_document_task(all_document_ids, dataset.id) # Verify all segments are deleted - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.dataset_id == dataset.id).count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.dataset_id == dataset.id) == 0 # Note: This test successfully verifies bulk document cleanup operations. # The database efficiently handles large-scale deletions. @@ -906,8 +873,8 @@ class TestCleanNotionDocumentTask: # Verify all data exists before cleanup # Note: There may be documents from previous tests, so we check for at least 3 - assert db_session_with_containers.query(Document).count() >= 3 - assert db_session_with_containers.query(DocumentSegment).count() >= 9 + assert db_session_with_containers.scalar(select(func.count()).select_from(Document)) >= 3 + assert db_session_with_containers.scalar(select(func.count()).select_from(DocumentSegment)) >= 9 # Clean up documents from only the first dataset target_dataset = datasets[0] @@ -918,22 +885,12 @@ class TestCleanNotionDocumentTask: clean_notion_document_task([target_document.id], target_dataset.id) # Verify only documents' segments from target dataset are deleted - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id == target_document.id) - .count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == target_document.id) == 0 # Verify documents from other datasets remain intact remaining_docs = [doc.id for doc in all_documents[1:]] - assert db_session_with_containers.query(Document).filter(Document.id.in_(remaining_docs)).count() == 2 - assert ( - db_session_with_containers.query(DocumentSegment) - .filter(DocumentSegment.document_id.in_(remaining_docs)) - .count() - == 6 - ) + assert _count_documents(db_session_with_containers, Document.id.in_(remaining_docs)) == 2 + assert _count_segments(db_session_with_containers, DocumentSegment.document_id.in_(remaining_docs)) == 6 # Note: This test successfully verifies multi-tenant isolation. # Only documents from the target dataset are affected, maintaining tenant separation. 
@@ -1028,11 +985,9 @@ class TestCleanNotionDocumentTask: db_session_with_containers.commit() # Verify all data exists before cleanup - assert db_session_with_containers.query(Document).filter(Document.dataset_id == dataset.id).count() == len( - document_statuses - ) + assert _count_documents(db_session_with_containers, Document.dataset_id == dataset.id) == len(document_statuses) assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.dataset_id == dataset.id).count() + _count_segments(db_session_with_containers, DocumentSegment.dataset_id == dataset.id) == len(document_statuses) * 2 ) @@ -1041,10 +996,7 @@ class TestCleanNotionDocumentTask: clean_notion_document_task(all_document_ids, dataset.id) # Verify all segments are deleted regardless of status - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.dataset_id == dataset.id).count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.dataset_id == dataset.id) == 0 # Note: This test successfully verifies cleanup of documents in various states. # All documents are deleted regardless of their indexing status. @@ -1142,20 +1094,14 @@ class TestCleanNotionDocumentTask: db_session_with_containers.commit() # Verify data exists before cleanup - assert db_session_with_containers.query(Document).filter(Document.id == document.id).count() == 1 - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.document_id == document.id).count() - == 3 - ) + assert _count_documents(db_session_with_containers, Document.id == document.id) == 1 + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == document.id) == 3 # Execute cleanup task clean_notion_document_task([document.id], dataset.id) # Verify segments are deleted - assert ( - db_session_with_containers.query(DocumentSegment).filter(DocumentSegment.document_id == document.id).count() - == 0 - ) + assert _count_segments(db_session_with_containers, DocumentSegment.document_id == document.id) == 0 # Note: This test successfully verifies cleanup of documents with rich metadata. # The task properly handles complex document structures and metadata fields. 
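A condensed sketch of the recurring rewrite applied across these test files, assuming the repository's `models.dataset.Document` model; the function names are hypothetical and only illustrate the legacy `Session.query(...)` chains being replaced by 2.0-style `select()` / `delete()` statements.

from sqlalchemy import delete, select
from sqlalchemy.orm import Session

from models.dataset import Document


def fetch_dataset_documents(session: Session, dataset_id: str) -> list[Document]:
    # Replaces: session.query(Document).filter_by(dataset_id=dataset_id).all()
    return list(session.scalars(select(Document).where(Document.dataset_id == dataset_id)))


def fetch_document(session: Session, document_id: str) -> Document | None:
    # Replaces: session.query(Document).filter_by(id=document_id).first()
    return session.scalar(select(Document).where(Document.id == document_id).limit(1))


def purge_documents(session: Session) -> None:
    # Replaces the fixture cleanup pattern: session.query(Document).delete()
    session.execute(delete(Document))
    session.commit()

The `.limit(1)` keeps the at-most-one-row semantics of `.first()`, which is the form used throughout the updated tests.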
diff --git a/api/tests/test_containers_integration_tests/tasks/test_create_segment_to_index_task.py b/api/tests/test_containers_integration_tests/tasks/test_create_segment_to_index_task.py index 9f8e37fc9e..9084667c31 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_create_segment_to_index_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_create_segment_to_index_task.py @@ -11,6 +11,7 @@ from uuid import uuid4 import pytest from faker import Faker +from sqlalchemy import delete from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType from extensions.ext_redis import redis_client @@ -28,12 +29,12 @@ class TestCreateSegmentToIndexTask: """Clean up database and Redis before each test to ensure isolation.""" # Clear all test data using fixture session - db_session_with_containers.query(DocumentSegment).delete() - db_session_with_containers.query(Document).delete() - db_session_with_containers.query(Dataset).delete() - db_session_with_containers.query(TenantAccountJoin).delete() - db_session_with_containers.query(Tenant).delete() - db_session_with_containers.query(Account).delete() + db_session_with_containers.execute(delete(DocumentSegment)) + db_session_with_containers.execute(delete(Document)) + db_session_with_containers.execute(delete(Dataset)) + db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.execute(delete(Tenant)) + db_session_with_containers.execute(delete(Account)) db_session_with_containers.commit() # Clear Redis cache diff --git a/api/tests/test_containers_integration_tests/tasks/test_dataset_indexing_task.py b/api/tests/test_containers_integration_tests/tasks/test_dataset_indexing_task.py index 13ea94348a..684097851b 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_dataset_indexing_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_dataset_indexing_task.py @@ -6,6 +6,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import select from core.indexing_runner import DocumentIsPausedError from core.rag.index_processor.constant.index_type import IndexTechniqueType @@ -175,7 +176,7 @@ class TestDatasetIndexingTaskIntegration: def _query_document(self, db_session_with_containers, document_id: str) -> Document | None: """Return the latest persisted document state.""" - return db_session_with_containers.query(Document).where(Document.id == document_id).first() + return db_session_with_containers.scalar(select(Document).where(Document.id == document_id).limit(1)) def _assert_documents_parsing(self, db_session_with_containers, document_ids: Sequence[str]) -> None: """Assert all target documents are persisted in parsing status.""" diff --git a/api/tests/test_containers_integration_tests/tasks/test_deal_dataset_vector_index_task.py b/api/tests/test_containers_integration_tests/tasks/test_deal_dataset_vector_index_task.py index d457b59d58..48fec441c5 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_deal_dataset_vector_index_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_deal_dataset_vector_index_task.py @@ -11,6 +11,7 @@ from unittest.mock import ANY, Mock, patch import pytest from faker import Faker +from sqlalchemy import select from core.rag.index_processor.constant.index_type import IndexStructureType from models.dataset import Dataset, Document, DocumentSegment @@ -221,7 +222,9 @@ class TestDealDatasetVectorIndexTask: 
deal_dataset_vector_index_task(dataset.id, "add") # Verify document status was updated to indexing then completed - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED # Verify index processor load method was called @@ -322,7 +325,9 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "update") # Verify document status was updated to indexing then completed - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED # Verify index processor clean and load methods were called @@ -431,7 +436,9 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify document status was updated to indexing then completed - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED # Verify that no index processor load was called since no segments exist @@ -564,7 +571,9 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify document status was updated to error - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.ERROR assert "Test exception during indexing" in updated_document.error @@ -635,7 +644,9 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify document status was updated to indexing then completed - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED # Verify index processor was initialized with custom index type @@ -711,7 +722,9 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify document status was updated to indexing then completed - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED # Verify index processor was initialized with the document's index type @@ -815,7 +828,9 @@ class TestDealDatasetVectorIndexTask: # Verify all documents were processed for document in documents: - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED # Verify index processor load was called multiple times @@ -917,7 +932,9 @@ 
class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify final document status - updated_document = db_session_with_containers.query(Document).filter_by(id=document.id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == document.id).limit(1) + ) assert updated_document.indexing_status == IndexingStatus.COMPLETED def test_deal_dataset_vector_index_task_with_disabled_documents( @@ -1027,12 +1044,14 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify only enabled document was processed - updated_enabled_document = db_session_with_containers.query(Document).filter_by(id=enabled_document.id).first() + updated_enabled_document = db_session_with_containers.scalar( + select(Document).where(Document.id == enabled_document.id).limit(1) + ) assert updated_enabled_document.indexing_status == IndexingStatus.COMPLETED # Verify disabled document status remains unchanged - updated_disabled_document = ( - db_session_with_containers.query(Document).filter_by(id=disabled_document.id).first() + updated_disabled_document = db_session_with_containers.scalar( + select(Document).where(Document.id == disabled_document.id).limit(1) ) assert updated_disabled_document.indexing_status == IndexingStatus.COMPLETED # Should not change @@ -1148,12 +1167,14 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify only active document was processed - updated_active_document = db_session_with_containers.query(Document).filter_by(id=active_document.id).first() + updated_active_document = db_session_with_containers.scalar( + select(Document).where(Document.id == active_document.id).limit(1) + ) assert updated_active_document.indexing_status == IndexingStatus.COMPLETED # Verify archived document status remains unchanged - updated_archived_document = ( - db_session_with_containers.query(Document).filter_by(id=archived_document.id).first() + updated_archived_document = db_session_with_containers.scalar( + select(Document).where(Document.id == archived_document.id).limit(1) ) assert updated_archived_document.indexing_status == IndexingStatus.COMPLETED # Should not change @@ -1269,14 +1290,14 @@ class TestDealDatasetVectorIndexTask: deal_dataset_vector_index_task(dataset.id, "add") # Verify only completed document was processed - updated_completed_document = ( - db_session_with_containers.query(Document).filter_by(id=completed_document.id).first() + updated_completed_document = db_session_with_containers.scalar( + select(Document).where(Document.id == completed_document.id).limit(1) ) assert updated_completed_document.indexing_status == IndexingStatus.COMPLETED # Verify incomplete document status remains unchanged - updated_incomplete_document = ( - db_session_with_containers.query(Document).filter_by(id=incomplete_document.id).first() + updated_incomplete_document = db_session_with_containers.scalar( + select(Document).where(Document.id == incomplete_document.id).limit(1) ) assert updated_incomplete_document.indexing_status == IndexingStatus.INDEXING # Should not change diff --git a/api/tests/test_containers_integration_tests/tasks/test_disable_segments_from_index_task.py b/api/tests/test_containers_integration_tests/tasks/test_disable_segments_from_index_task.py index 3e9a0c8f7f..6e03bd9351 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_disable_segments_from_index_task.py +++ 
b/api/tests/test_containers_integration_tests/tasks/test_disable_segments_from_index_task.py @@ -9,6 +9,7 @@ The task is responsible for removing document segments from the search index whe from unittest.mock import MagicMock, patch from faker import Faker +from sqlalchemy import select from sqlalchemy.orm import Session from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType @@ -471,9 +472,9 @@ class TestDisableSegmentsFromIndexTask: db_session_with_containers.refresh(segments[1]) # Check that segments are re-enabled after error - updated_segments = ( - db_session_with_containers.query(DocumentSegment).where(DocumentSegment.id.in_(segment_ids)).all() - ) + updated_segments = db_session_with_containers.scalars( + select(DocumentSegment).where(DocumentSegment.id.in_(segment_ids)) + ).all() for segment in updated_segments: assert segment.enabled is True diff --git a/api/tests/test_containers_integration_tests/tasks/test_document_indexing_sync_task.py b/api/tests/test_containers_integration_tests/tasks/test_document_indexing_sync_task.py index d4021143ef..b6e7e6e5c9 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_document_indexing_sync_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_document_indexing_sync_task.py @@ -12,10 +12,11 @@ from unittest.mock import Mock, patch from uuid import uuid4 import pytest +from sqlalchemy import delete, func, select, update from core.indexing_runner import DocumentIsPausedError, IndexingRunner from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType -from models import Account, Tenant, TenantAccountJoin, TenantAccountRole +from models import Account, AccountStatus, Tenant, TenantAccountJoin, TenantAccountRole, TenantStatus from models.dataset import Dataset, Document, DocumentSegment from models.enums import DataSourceType, DocumentCreatedFrom, IndexingStatus, SegmentStatus from tasks.document_indexing_sync_task import document_indexing_sync_task @@ -30,12 +31,12 @@ class DocumentIndexingSyncTaskTestDataFactory: email=f"{uuid4()}@example.com", name=f"user-{uuid4()}", interface_language="en-US", - status="active", + status=AccountStatus.ACTIVE, ) db_session_with_containers.add(account) db_session_with_containers.flush() - tenant = Tenant(name=f"tenant-{account.id}", status="normal") + tenant = Tenant(name=f"tenant-{account.id}", status=TenantStatus.NORMAL) db_session_with_containers.add(tenant) db_session_with_containers.flush() @@ -254,8 +255,8 @@ class TestDocumentIndexingSyncTask: """Test that task raises error when data_source_info is empty.""" # Arrange context = self._create_notion_sync_context(db_session_with_containers, data_source_info=None) - db_session_with_containers.query(Document).where(Document.id == context["document"].id).update( - {"data_source_info": None} + db_session_with_containers.execute( + update(Document).where(Document.id == context["document"].id).values(data_source_info=None) ) db_session_with_containers.commit() @@ -274,8 +275,8 @@ class TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) assert updated_document is not None assert updated_document.indexing_status == IndexingStatus.ERROR @@ -294,13 +295,13 @@ class 
TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) - remaining_segments = ( - db_session_with_containers.query(DocumentSegment) + remaining_segments = db_session_with_containers.scalar( + select(func.count()) + .select_from(DocumentSegment) .where(DocumentSegment.document_id == context["document"].id) - .count() ) assert updated_document is not None assert updated_document.indexing_status == IndexingStatus.COMPLETED @@ -319,13 +320,13 @@ class TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) - remaining_segments = ( - db_session_with_containers.query(DocumentSegment) + remaining_segments = db_session_with_containers.scalar( + select(func.count()) + .select_from(DocumentSegment) .where(DocumentSegment.document_id == context["document"].id) - .count() ) assert updated_document is not None @@ -354,7 +355,7 @@ class TestDocumentIndexingSyncTask: context = self._create_notion_sync_context(db_session_with_containers) def _delete_dataset_before_clean() -> str: - db_session_with_containers.query(Dataset).where(Dataset.id == context["dataset"].id).delete() + db_session_with_containers.execute(delete(Dataset).where(Dataset.id == context["dataset"].id)) db_session_with_containers.commit() return "2024-01-02T00:00:00Z" @@ -367,8 +368,8 @@ class TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) assert updated_document is not None assert updated_document.indexing_status == IndexingStatus.PARSING @@ -386,13 +387,13 @@ class TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) - remaining_segments = ( - db_session_with_containers.query(DocumentSegment) + remaining_segments = db_session_with_containers.scalar( + select(func.count()) + .select_from(DocumentSegment) .where(DocumentSegment.document_id == context["document"].id) - .count() ) assert updated_document is not None assert updated_document.indexing_status == IndexingStatus.PARSING @@ -410,8 +411,8 @@ class TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) assert updated_document is not None assert updated_document.indexing_status == IndexingStatus.PARSING @@ -428,8 +429,8 @@ class TestDocumentIndexingSyncTask: # Assert db_session_with_containers.expire_all() - 
updated_document = ( - db_session_with_containers.query(Document).where(Document.id == context["document"].id).first() + updated_document = db_session_with_containers.scalar( + select(Document).where(Document.id == context["document"].id).limit(1) ) assert updated_document is not None assert updated_document.indexing_status == IndexingStatus.ERROR diff --git a/api/tests/test_containers_integration_tests/tasks/test_document_indexing_update_task.py b/api/tests/test_containers_integration_tests/tasks/test_document_indexing_update_task.py index d94abf2b40..a9a8c0f30c 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_document_indexing_update_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_document_indexing_update_task.py @@ -2,6 +2,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import func, select from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType from models import Account, Tenant, TenantAccountJoin, TenantAccountRole @@ -123,13 +124,13 @@ class TestDocumentIndexingUpdateTask: db_session_with_containers.expire_all() # Assert document status updated before reindex - updated = db_session_with_containers.query(Document).where(Document.id == document.id).first() + updated = db_session_with_containers.scalar(select(Document).where(Document.id == document.id).limit(1)) assert updated.indexing_status == IndexingStatus.PARSING assert updated.processing_started_at is not None # Segments should be deleted - remaining = ( - db_session_with_containers.query(DocumentSegment).where(DocumentSegment.document_id == document.id).count() + remaining = db_session_with_containers.scalar( + select(func.count()).select_from(DocumentSegment).where(DocumentSegment.document_id == document.id) ) assert remaining == 0 @@ -167,8 +168,8 @@ class TestDocumentIndexingUpdateTask: mock_external_dependencies["runner_instance"].run.assert_called_once() # Segments should remain (since clean failed before DB delete) - remaining = ( - db_session_with_containers.query(DocumentSegment).where(DocumentSegment.document_id == document.id).count() + remaining = db_session_with_containers.scalar( + select(func.count()).select_from(DocumentSegment).where(DocumentSegment.document_id == document.id) ) assert remaining > 0 diff --git a/api/tests/test_containers_integration_tests/tasks/test_duplicate_document_indexing_task.py b/api/tests/test_containers_integration_tests/tasks/test_duplicate_document_indexing_task.py index 6a8e186958..39c58987fd 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_duplicate_document_indexing_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_duplicate_document_indexing_task.py @@ -2,6 +2,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import select from core.indexing_runner import DocumentIsPausedError from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType @@ -317,7 +318,7 @@ class TestDuplicateDocumentIndexingTasks: # Verify documents were updated to parsing status # Re-query documents from database since _duplicate_document_indexing_task uses a different session for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == 
IndexingStatus.PARSING assert updated_document.processing_started_at is not None @@ -362,14 +363,14 @@ class TestDuplicateDocumentIndexingTasks: # Verify segments were deleted from database # Re-query segments from database using captured IDs to avoid stale ORM instances for seg_id in segment_ids: - deleted_segment = ( - db_session_with_containers.query(DocumentSegment).where(DocumentSegment.id == seg_id).first() + deleted_segment = db_session_with_containers.scalar( + select(DocumentSegment).where(DocumentSegment.id == seg_id).limit(1) ) assert deleted_segment is None # Verify documents were updated to parsing status for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.PARSING assert updated_document.processing_started_at is not None @@ -438,7 +439,7 @@ class TestDuplicateDocumentIndexingTasks: # Verify only existing documents were updated # Re-query documents from database since _duplicate_document_indexing_task uses a different session for doc_id in existing_document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.PARSING assert updated_document.processing_started_at is not None @@ -485,7 +486,7 @@ class TestDuplicateDocumentIndexingTasks: # Verify documents were still updated to parsing status before the exception # Re-query documents from database since _duplicate_document_indexing_task close the session for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.PARSING assert updated_document.processing_started_at is not None @@ -543,7 +544,7 @@ class TestDuplicateDocumentIndexingTasks: # Assert: Verify error handling # Re-query documents from database since _duplicate_document_indexing_task uses a different session for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.ERROR assert updated_document.error is not None assert "batch upload" in updated_document.error.lower() @@ -585,7 +586,7 @@ class TestDuplicateDocumentIndexingTasks: # Assert: Verify error handling # Re-query documents from database since _duplicate_document_indexing_task uses a different session for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.ERROR assert updated_document.error is not None assert "limit" in updated_document.error.lower() @@ -649,7 +650,7 @@ class TestDuplicateDocumentIndexingTasks: # Verify documents were processed for doc_id in document_ids: - updated_document = 
db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.PARSING @patch("tasks.duplicate_document_indexing_task.TenantIsolatedTaskQueue", autospec=True) @@ -692,7 +693,7 @@ class TestDuplicateDocumentIndexingTasks: # Verify documents were processed for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.PARSING @patch("tasks.duplicate_document_indexing_task.TenantIsolatedTaskQueue", autospec=True) @@ -736,7 +737,7 @@ class TestDuplicateDocumentIndexingTasks: # Verify documents were processed for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.indexing_status == IndexingStatus.PARSING @patch("tasks.duplicate_document_indexing_task.TenantIsolatedTaskQueue", autospec=True) @@ -851,7 +852,7 @@ class TestDuplicateDocumentIndexingTasks: # Assert for doc_id in document_ids: - updated_document = db_session_with_containers.query(Document).where(Document.id == doc_id).first() + updated_document = db_session_with_containers.scalar(select(Document).where(Document.id == doc_id).limit(1)) assert updated_document.is_paused is True assert updated_document.indexing_status == IndexingStatus.PARSING assert updated_document.display_status == "paused" diff --git a/api/tests/test_containers_integration_tests/tasks/test_mail_email_code_login_task.py b/api/tests/test_containers_integration_tests/tasks/test_mail_email_code_login_task.py index c0ddc27286..8343711998 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_mail_email_code_login_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_mail_email_code_login_task.py @@ -14,6 +14,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import delete from libs.email_i18n import EmailType from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole @@ -41,9 +42,9 @@ class TestSendEmailCodeLoginMailTask: from extensions.ext_redis import redis_client # Clear all test data - db_session_with_containers.query(TenantAccountJoin).delete() - db_session_with_containers.query(Tenant).delete() - db_session_with_containers.query(Account).delete() + db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.execute(delete(Tenant)) + db_session_with_containers.execute(delete(Account)) db_session_with_containers.commit() # Clear Redis cache diff --git a/api/tests/test_containers_integration_tests/tasks/test_mail_human_input_delivery_task.py b/api/tests/test_containers_integration_tests/tasks/test_mail_human_input_delivery_task.py index a16f3ff773..328bdbf055 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_mail_human_input_delivery_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_mail_human_input_delivery_task.py @@ -3,9 +3,7 @@ from datetime import UTC, datetime from unittest.mock import patch import pytest -from graphon.enums import WorkflowExecutionStatus -from 
graphon.nodes.human_input.entities import HumanInputNodeData -from graphon.runtime import GraphRuntimeState, VariablePool +from sqlalchemy import delete from configs import dify_config from core.app.app_config.entities import WorkflowUIBasedAppConfig @@ -20,6 +18,9 @@ from core.workflow.human_input_compat import ( MemberRecipient, ) from extensions.ext_storage import storage +from graphon.enums import WorkflowExecutionStatus +from graphon.nodes.human_input.entities import HumanInputNodeData +from graphon.runtime import GraphRuntimeState, VariablePool from models.account import Account, AccountStatus, Tenant, TenantAccountJoin, TenantAccountRole from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom from models.human_input import HumanInputDelivery, HumanInputForm, HumanInputFormRecipient @@ -30,14 +31,14 @@ from tasks.mail_human_input_delivery_task import dispatch_human_input_email_task @pytest.fixture(autouse=True) def cleanup_database(db_session_with_containers): - db_session_with_containers.query(HumanInputFormRecipient).delete() - db_session_with_containers.query(HumanInputDelivery).delete() - db_session_with_containers.query(HumanInputForm).delete() - db_session_with_containers.query(WorkflowPause).delete() - db_session_with_containers.query(WorkflowRun).delete() - db_session_with_containers.query(TenantAccountJoin).delete() - db_session_with_containers.query(Tenant).delete() - db_session_with_containers.query(Account).delete() + db_session_with_containers.execute(delete(HumanInputFormRecipient)) + db_session_with_containers.execute(delete(HumanInputDelivery)) + db_session_with_containers.execute(delete(HumanInputForm)) + db_session_with_containers.execute(delete(WorkflowPause)) + db_session_with_containers.execute(delete(WorkflowRun)) + db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.execute(delete(Tenant)) + db_session_with_containers.execute(delete(Account)) db_session_with_containers.commit() diff --git a/api/tests/test_containers_integration_tests/tasks/test_mail_invite_member_task.py b/api/tests/test_containers_integration_tests/tasks/test_mail_invite_member_task.py index 212fbd26cd..d34828c4b1 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_mail_invite_member_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_mail_invite_member_task.py @@ -17,6 +17,7 @@ from unittest.mock import MagicMock, patch import pytest from faker import Faker +from sqlalchemy import delete, select from extensions.ext_redis import redis_client from libs.email_i18n import EmailType @@ -44,9 +45,9 @@ class TestMailInviteMemberTask: def cleanup_database(self, db_session_with_containers): """Clean up database before each test to ensure isolation.""" # Clear all test data - db_session_with_containers.query(TenantAccountJoin).delete() - db_session_with_containers.query(Tenant).delete() - db_session_with_containers.query(Account).delete() + db_session_with_containers.execute(delete(TenantAccountJoin)) + db_session_with_containers.execute(delete(Tenant)) + db_session_with_containers.execute(delete(Account)) db_session_with_containers.commit() # Clear Redis cache @@ -491,10 +492,10 @@ class TestMailInviteMemberTask: assert tenant.name is not None # Verify tenant relationship exists - tenant_join = ( - db_session_with_containers.query(TenantAccountJoin) - .filter_by(tenant_id=tenant.id, account_id=pending_account.id) - .first() + tenant_join = db_session_with_containers.scalar( + select(TenantAccountJoin) + 
.where(TenantAccountJoin.tenant_id == tenant.id, TenantAccountJoin.account_id == pending_account.id) + .limit(1) ) assert tenant_join is not None assert tenant_join.role == TenantAccountRole.NORMAL diff --git a/api/tests/test_containers_integration_tests/tasks/test_remove_app_and_related_data_task.py b/api/tests/test_containers_integration_tests/tasks/test_remove_app_and_related_data_task.py index 96cf9cebf5..b43b622870 100644 --- a/api/tests/test_containers_integration_tests/tasks/test_remove_app_and_related_data_task.py +++ b/api/tests/test_containers_integration_tests/tasks/test_remove_app_and_related_data_task.py @@ -2,11 +2,12 @@ import uuid from unittest.mock import ANY, call, patch import pytest -from graphon.variables.segments import StringSegment -from graphon.variables.types import SegmentType +from sqlalchemy import delete, func, select from core.db.session_factory import session_factory from extensions.storage.storage_type import StorageType +from graphon.variables.segments import StringSegment +from graphon.variables.types import SegmentType from libs.datetime_utils import naive_utc_now from models import Tenant from models.enums import CreatorUserRole @@ -20,11 +21,11 @@ from tasks.remove_app_and_related_data_task import ( @pytest.fixture(autouse=True) def cleanup_database(db_session_with_containers): - db_session_with_containers.query(WorkflowDraftVariable).delete() - db_session_with_containers.query(WorkflowDraftVariableFile).delete() - db_session_with_containers.query(UploadFile).delete() - db_session_with_containers.query(App).delete() - db_session_with_containers.query(Tenant).delete() + db_session_with_containers.execute(delete(WorkflowDraftVariable)) + db_session_with_containers.execute(delete(WorkflowDraftVariableFile)) + db_session_with_containers.execute(delete(UploadFile)) + db_session_with_containers.execute(delete(App)) + db_session_with_containers.execute(delete(Tenant)) db_session_with_containers.commit() @@ -127,21 +128,21 @@ class TestDeleteDraftVariablesBatch: result = delete_draft_variables_batch(app1.id, batch_size=100) assert result == 150 - app1_remaining = db_session_with_containers.query(WorkflowDraftVariable).where( - WorkflowDraftVariable.app_id == app1.id + app1_remaining_count = db_session_with_containers.scalar( + select(func.count()).select_from(WorkflowDraftVariable).where(WorkflowDraftVariable.app_id == app1.id) ) - app2_remaining = db_session_with_containers.query(WorkflowDraftVariable).where( - WorkflowDraftVariable.app_id == app2.id + app2_remaining_count = db_session_with_containers.scalar( + select(func.count()).select_from(WorkflowDraftVariable).where(WorkflowDraftVariable.app_id == app2.id) ) - assert app1_remaining.count() == 0 - assert app2_remaining.count() == 100 + assert app1_remaining_count == 0 + assert app2_remaining_count == 100 def test_delete_draft_variables_batch_empty_result(self, db_session_with_containers): """Test deletion when no draft variables exist for the app.""" result = delete_draft_variables_batch(str(uuid.uuid4()), 1000) assert result == 0 - assert db_session_with_containers.query(WorkflowDraftVariable).count() == 0 + assert db_session_with_containers.scalar(select(func.count()).select_from(WorkflowDraftVariable)) == 0 @patch("tasks.remove_app_and_related_data_task._delete_draft_variable_offload_data") @patch("tasks.remove_app_and_related_data_task.logger") @@ -190,12 +191,16 @@ class TestDeleteDraftVariableOffloadData: expected_storage_calls = [call(storage_key) for storage_key in upload_file_keys] 
mock_storage.delete.assert_has_calls(expected_storage_calls, any_order=True) - remaining_var_files = db_session_with_containers.query(WorkflowDraftVariableFile).where( - WorkflowDraftVariableFile.id.in_(file_ids) + remaining_var_files_count = db_session_with_containers.scalar( + select(func.count()) + .select_from(WorkflowDraftVariableFile) + .where(WorkflowDraftVariableFile.id.in_(file_ids)) ) - remaining_upload_files = db_session_with_containers.query(UploadFile).where(UploadFile.id.in_(upload_file_ids)) - assert remaining_var_files.count() == 0 - assert remaining_upload_files.count() == 0 + remaining_upload_files_count = db_session_with_containers.scalar( + select(func.count()).select_from(UploadFile).where(UploadFile.id.in_(upload_file_ids)) + ) + assert remaining_var_files_count == 0 + assert remaining_upload_files_count == 0 @patch("extensions.ext_storage.storage") @patch("tasks.remove_app_and_related_data_task.logging") @@ -217,9 +222,13 @@ class TestDeleteDraftVariableOffloadData: assert result == 1 mock_logging.exception.assert_called_once_with("Failed to delete storage object %s", storage_keys[0]) - remaining_var_files = db_session_with_containers.query(WorkflowDraftVariableFile).where( - WorkflowDraftVariableFile.id.in_(file_ids) + remaining_var_files_count = db_session_with_containers.scalar( + select(func.count()) + .select_from(WorkflowDraftVariableFile) + .where(WorkflowDraftVariableFile.id.in_(file_ids)) ) - remaining_upload_files = db_session_with_containers.query(UploadFile).where(UploadFile.id.in_(upload_file_ids)) - assert remaining_var_files.count() == 0 - assert remaining_upload_files.count() == 0 + remaining_upload_files_count = db_session_with_containers.scalar( + select(func.count()).select_from(UploadFile).where(UploadFile.id.in_(upload_file_ids)) + ) + assert remaining_var_files_count == 0 + assert remaining_upload_files_count == 0 diff --git a/api/tests/test_containers_integration_tests/test_workflow_pause_integration.py b/api/tests/test_containers_integration_tests/test_workflow_pause_integration.py index 4bc022c415..b00d827e37 100644 --- a/api/tests/test_containers_integration_tests/test_workflow_pause_integration.py +++ b/api/tests/test_containers_integration_tests/test_workflow_pause_integration.py @@ -24,16 +24,16 @@ from dataclasses import dataclass from datetime import timedelta import pytest -from graphon.entities import WorkflowExecution -from graphon.enums import WorkflowExecutionStatus from sqlalchemy import delete, func, select from sqlalchemy.orm import Session, selectinload, sessionmaker from extensions.ext_storage import storage +from graphon.entities import WorkflowExecution +from graphon.enums import WorkflowExecutionStatus from libs.datetime_utils import naive_utc_now from models import Account from models import WorkflowPause as WorkflowPauseModel -from models.account import Tenant, TenantAccountJoin, TenantAccountRole +from models.account import AccountStatus, Tenant, TenantAccountJoin, TenantAccountRole, TenantStatus from models.model import UploadFile from models.workflow import Workflow, WorkflowRun from repositories.sqlalchemy_api_workflow_run_repository import ( @@ -181,7 +181,7 @@ class TestWorkflowPauseIntegration: tenant = Tenant( name="Test Tenant", - status="normal", + status=TenantStatus.NORMAL, ) db_session_with_containers.add(tenant) db_session_with_containers.commit() @@ -190,7 +190,7 @@ class TestWorkflowPauseIntegration: email="test@example.com", name="Test User", interface_language="en-US", - status="active", + 
status=AccountStatus.ACTIVE, ) db_session_with_containers.add(account) db_session_with_containers.commit() @@ -696,7 +696,7 @@ class TestWorkflowPauseIntegration: tenant2 = Tenant( name="Test Tenant 2", - status="normal", + status=TenantStatus.NORMAL, ) self.session.add(tenant2) self.session.commit() @@ -705,7 +705,7 @@ class TestWorkflowPauseIntegration: email="test2@example.com", name="Test User 2", interface_language="en-US", - status="active", + status=AccountStatus.ACTIVE, ) self.session.add(account2) self.session.commit() diff --git a/api/tests/test_containers_integration_tests/trigger/conftest.py b/api/tests/test_containers_integration_tests/trigger/conftest.py index e3832fb2ef..272bee9630 100644 --- a/api/tests/test_containers_integration_tests/trigger/conftest.py +++ b/api/tests/test_containers_integration_tests/trigger/conftest.py @@ -11,6 +11,7 @@ from collections.abc import Generator from typing import Any import pytest +from sqlalchemy import delete from sqlalchemy.orm import Session from models.account import Account, Tenant, TenantAccountJoin, TenantAccountRole @@ -40,9 +41,9 @@ def tenant_and_account(db_session_with_containers: Session) -> Generator[tuple[T yield tenant, account # Cleanup - db_session_with_containers.query(TenantAccountJoin).filter_by(tenant_id=tenant.id).delete() - db_session_with_containers.query(Account).filter_by(id=account.id).delete() - db_session_with_containers.query(Tenant).filter_by(id=tenant.id).delete() + db_session_with_containers.execute(delete(TenantAccountJoin).where(TenantAccountJoin.tenant_id == tenant.id)) + db_session_with_containers.execute(delete(Account).where(Account.id == account.id)) + db_session_with_containers.execute(delete(Tenant).where(Tenant.id == tenant.id)) db_session_with_containers.commit() @@ -93,14 +94,14 @@ def app_model( ) from models.workflow import Workflow - db_session_with_containers.query(WorkflowTriggerLog).filter_by(app_id=app.id).delete() - db_session_with_containers.query(WorkflowSchedulePlan).filter_by(app_id=app.id).delete() - db_session_with_containers.query(WorkflowWebhookTrigger).filter_by(app_id=app.id).delete() - db_session_with_containers.query(WorkflowPluginTrigger).filter_by(app_id=app.id).delete() - db_session_with_containers.query(AppTrigger).filter_by(app_id=app.id).delete() - db_session_with_containers.query(TriggerSubscription).filter_by(tenant_id=tenant.id).delete() - db_session_with_containers.query(Workflow).filter_by(app_id=app.id).delete() - db_session_with_containers.query(App).filter_by(id=app.id).delete() + db_session_with_containers.execute(delete(WorkflowTriggerLog).where(WorkflowTriggerLog.app_id == app.id)) + db_session_with_containers.execute(delete(WorkflowSchedulePlan).where(WorkflowSchedulePlan.app_id == app.id)) + db_session_with_containers.execute(delete(WorkflowWebhookTrigger).where(WorkflowWebhookTrigger.app_id == app.id)) + db_session_with_containers.execute(delete(WorkflowPluginTrigger).where(WorkflowPluginTrigger.app_id == app.id)) + db_session_with_containers.execute(delete(AppTrigger).where(AppTrigger.app_id == app.id)) + db_session_with_containers.execute(delete(TriggerSubscription).where(TriggerSubscription.tenant_id == tenant.id)) + db_session_with_containers.execute(delete(Workflow).where(Workflow.app_id == app.id)) + db_session_with_containers.execute(delete(App).where(App.id == app.id)) db_session_with_containers.commit() diff --git a/api/tests/test_containers_integration_tests/trigger/test_trigger_e2e.py 
b/api/tests/test_containers_integration_tests/trigger/test_trigger_e2e.py index 3514447240..9c20118e27 100644 --- a/api/tests/test_containers_integration_tests/trigger/test_trigger_e2e.py +++ b/api/tests/test_containers_integration_tests/trigger/test_trigger_e2e.py @@ -10,7 +10,7 @@ from typing import Any import pytest from flask import Flask, Response from flask.testing import FlaskClient -from graphon.enums import BuiltinNodeTypes +from sqlalchemy import select from sqlalchemy.orm import Session from configs import dify_config @@ -24,6 +24,7 @@ from core.trigger.debug import event_selectors from core.trigger.debug.event_bus import TriggerDebugEventBus from core.trigger.debug.event_selectors import PluginTriggerDebugEventPoller, WebhookTriggerDebugEventPoller from core.trigger.debug.events import PluginTriggerDebugEvent, build_plugin_pool_key +from graphon.enums import BuiltinNodeTypes from libs.datetime_utils import naive_utc_now from models.account import Account, Tenant from models.enums import AppTriggerStatus, AppTriggerType, CreatorUserRole, WorkflowTriggerStatus @@ -227,7 +228,9 @@ def test_webhook_trigger_creates_trigger_log( assert response.status_code == 200 db_session_with_containers.expire_all() - logs = db_session_with_containers.query(WorkflowTriggerLog).filter_by(app_id=app_model.id).all() + logs = db_session_with_containers.scalars( + select(WorkflowTriggerLog).where(WorkflowTriggerLog.app_id == app_model.id) + ).all() assert logs, "Webhook trigger should create trigger log" @@ -611,7 +614,9 @@ def test_schedule_trigger_creates_trigger_log( # Verify WorkflowTriggerLog was created db_session_with_containers.expire_all() - logs = db_session_with_containers.query(WorkflowTriggerLog).filter_by(app_id=app_model.id).all() + logs = db_session_with_containers.scalars( + select(WorkflowTriggerLog).where(WorkflowTriggerLog.app_id == app_model.id) + ).all() assert logs, "Schedule trigger should create WorkflowTriggerLog" assert logs[0].trigger_type == AppTriggerType.TRIGGER_SCHEDULE assert logs[0].root_node_id == schedule_node_id @@ -786,11 +791,12 @@ def test_plugin_trigger_full_chain_with_db_verification( # Verify database records exist db_session_with_containers.expire_all() - plugin_triggers = ( - db_session_with_containers.query(WorkflowPluginTrigger) - .filter_by(app_id=app_model.id, node_id=plugin_node_id) - .all() - ) + plugin_triggers = db_session_with_containers.scalars( + select(WorkflowPluginTrigger).where( + WorkflowPluginTrigger.app_id == app_model.id, + WorkflowPluginTrigger.node_id == plugin_node_id, + ) + ).all() assert plugin_triggers, "WorkflowPluginTrigger record should exist" assert plugin_triggers[0].provider_id == provider_id assert plugin_triggers[0].event_name == "test_event" diff --git a/api/tests/unit_tests/configs/test_dify_config.py b/api/tests/unit_tests/configs/test_dify_config.py index d6933e2180..bad246a4bb 100644 --- a/api/tests/unit_tests/configs/test_dify_config.py +++ b/api/tests/unit_tests/configs/test_dify_config.py @@ -145,7 +145,7 @@ def test_inner_api_config_exist(monkeypatch: pytest.MonkeyPatch): def test_db_extras_options_merging(monkeypatch: pytest.MonkeyPatch): - """Test that DB_EXTRAS options are properly merged with default timezone setting""" + """Test that DB_EXTRAS options are merged with the default timezone startup option.""" # Set environment variables monkeypatch.setenv("DB_TYPE", "postgresql") monkeypatch.setenv("DB_USERNAME", "postgres") @@ -158,15 +158,28 @@ def test_db_extras_options_merging(monkeypatch: pytest.MonkeyPatch): 
# Create config config = DifyConfig() - # Get engine options - engine_options = config.SQLALCHEMY_ENGINE_OPTIONS - - # Verify options contains both search_path and timezone - options = engine_options["connect_args"]["options"] + options = config.SQLALCHEMY_ENGINE_OPTIONS["connect_args"]["options"] assert "search_path=myschema" in options assert "timezone=UTC" in options +def test_db_session_timezone_override_can_disable_app_level_timezone_injection(monkeypatch: pytest.MonkeyPatch): + monkeypatch.setenv("DB_TYPE", "postgresql") + monkeypatch.setenv("DB_USERNAME", "postgres") + monkeypatch.setenv("DB_PASSWORD", "postgres") + monkeypatch.setenv("DB_HOST", "localhost") + monkeypatch.setenv("DB_PORT", "5432") + monkeypatch.setenv("DB_DATABASE", "dify") + monkeypatch.setenv("DB_EXTRAS", "options=-c search_path=myschema") + monkeypatch.setenv("DB_SESSION_TIMEZONE_OVERRIDE", "") + + config = DifyConfig() + + assert config.SQLALCHEMY_ENGINE_OPTIONS["connect_args"] == { + "options": "-c search_path=myschema", + } + + def test_pubsub_redis_url_default(monkeypatch: pytest.MonkeyPatch): os.environ.clear() @@ -223,6 +236,41 @@ def test_pubsub_redis_url_required_when_default_unavailable(monkeypatch: pytest. _ = DifyConfig().normalized_pubsub_redis_url +def test_dify_config_exposes_redis_key_prefix_default(monkeypatch: pytest.MonkeyPatch): + os.environ.clear() + + monkeypatch.setenv("CONSOLE_API_URL", "https://example.com") + monkeypatch.setenv("CONSOLE_WEB_URL", "https://example.com") + monkeypatch.setenv("DB_TYPE", "postgresql") + monkeypatch.setenv("DB_USERNAME", "postgres") + monkeypatch.setenv("DB_PASSWORD", "postgres") + monkeypatch.setenv("DB_HOST", "localhost") + monkeypatch.setenv("DB_PORT", "5432") + monkeypatch.setenv("DB_DATABASE", "dify") + + config = DifyConfig(_env_file=None) + + assert config.REDIS_KEY_PREFIX == "" + + +def test_dify_config_reads_redis_key_prefix_from_env(monkeypatch: pytest.MonkeyPatch): + os.environ.clear() + + monkeypatch.setenv("CONSOLE_API_URL", "https://example.com") + monkeypatch.setenv("CONSOLE_WEB_URL", "https://example.com") + monkeypatch.setenv("DB_TYPE", "postgresql") + monkeypatch.setenv("DB_USERNAME", "postgres") + monkeypatch.setenv("DB_PASSWORD", "postgres") + monkeypatch.setenv("DB_HOST", "localhost") + monkeypatch.setenv("DB_PORT", "5432") + monkeypatch.setenv("DB_DATABASE", "dify") + monkeypatch.setenv("REDIS_KEY_PREFIX", "enterprise-a") + + config = DifyConfig(_env_file=None) + + assert config.REDIS_KEY_PREFIX == "enterprise-a" + + @pytest.mark.parametrize( ("broker_url", "expected_host", "expected_port", "expected_username", "expected_password", "expected_db"), [ diff --git a/api/tests/unit_tests/controllers/console/app/test_app_import_api.py b/api/tests/unit_tests/controllers/console/app/test_app_import_api.py new file mode 100644 index 0000000000..9c4678aed3 --- /dev/null +++ b/api/tests/unit_tests/controllers/console/app/test_app_import_api.py @@ -0,0 +1,139 @@ +"""Unit tests for console app import endpoints.""" + +from __future__ import annotations + +from types import SimpleNamespace +from unittest.mock import MagicMock + +import pytest + +from controllers.console.app import app_import as app_import_module +from services.app_dsl_service import ImportStatus + + +def _unwrap(func): + bound_self = getattr(func, "__self__", None) + while hasattr(func, "__wrapped__"): + func = func.__wrapped__ + if bound_self is not None: + return func.__get__(bound_self, bound_self.__class__) + return func + + +class _Result: + def __init__(self, status: ImportStatus, 
app_id: str | None = "app-1"): + self.status = status + self.app_id = app_id + + def model_dump(self, mode: str = "json"): + return {"status": self.status, "app_id": self.app_id} + + +def _install_features(monkeypatch: pytest.MonkeyPatch, enabled: bool) -> None: + features = SimpleNamespace(webapp_auth=SimpleNamespace(enabled=enabled)) + monkeypatch.setattr(app_import_module.FeatureService, "get_system_features", lambda: features) + + +def _mock_session(monkeypatch: pytest.MonkeyPatch) -> MagicMock: + fake_session = MagicMock() + fake_session.__enter__.return_value = fake_session + fake_session.__exit__.return_value = None + monkeypatch.setattr(app_import_module, "db", SimpleNamespace(engine=object())) + monkeypatch.setattr(app_import_module, "Session", lambda *_args, **_kwargs: fake_session) + return fake_session + + +class TestAppImportApi: + @pytest.fixture + def api(self): + return app_import_module.AppImportApi() + + def test_import_post_returns_failed_status_and_rolls_back(self, api, app, monkeypatch: pytest.MonkeyPatch) -> None: + method = _unwrap(api.post) + + _install_features(monkeypatch, enabled=False) + session = _mock_session(monkeypatch) + monkeypatch.setattr( + app_import_module.AppDslService, + "import_app", + lambda *_args, **_kwargs: _Result(ImportStatus.FAILED, app_id=None), + ) + monkeypatch.setattr(app_import_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + + with app.test_request_context("/console/api/apps/imports", method="POST", json={"mode": "yaml-content"}): + response, status = method() + + session.rollback.assert_called_once_with() + session.commit.assert_not_called() + assert status == 400 + assert response["status"] == ImportStatus.FAILED + + def test_import_post_returns_pending_status_and_commits(self, api, app, monkeypatch: pytest.MonkeyPatch) -> None: + method = _unwrap(api.post) + + _install_features(monkeypatch, enabled=False) + session = _mock_session(monkeypatch) + monkeypatch.setattr( + app_import_module.AppDslService, + "import_app", + lambda *_args, **_kwargs: _Result(ImportStatus.PENDING), + ) + monkeypatch.setattr(app_import_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + + with app.test_request_context("/console/api/apps/imports", method="POST", json={"mode": "yaml-content"}): + response, status = method() + + session.commit.assert_called_once_with() + session.rollback.assert_not_called() + assert status == 202 + assert response["status"] == ImportStatus.PENDING + + def test_import_post_updates_webapp_auth_when_enabled(self, api, app, monkeypatch: pytest.MonkeyPatch) -> None: + method = _unwrap(api.post) + + _install_features(monkeypatch, enabled=True) + session = _mock_session(monkeypatch) + monkeypatch.setattr( + app_import_module.AppDslService, + "import_app", + lambda *_args, **_kwargs: _Result(ImportStatus.COMPLETED, app_id="app-123"), + ) + update_access = MagicMock() + monkeypatch.setattr(app_import_module.EnterpriseService.WebAppAuth, "update_app_access_mode", update_access) + monkeypatch.setattr(app_import_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + + with app.test_request_context("/console/api/apps/imports", method="POST", json={"mode": "yaml-content"}): + response, status = method() + + session.commit.assert_called_once_with() + session.rollback.assert_not_called() + update_access.assert_called_once_with("app-123", "private") + assert status == 200 + assert response["status"] == ImportStatus.COMPLETED + + +class 
TestAppImportConfirmApi: + @pytest.fixture + def api(self): + return app_import_module.AppImportConfirmApi() + + def test_import_confirm_returns_failed_status_and_rolls_back( + self, api, app, monkeypatch: pytest.MonkeyPatch + ) -> None: + method = _unwrap(api.post) + + session = _mock_session(monkeypatch) + monkeypatch.setattr( + app_import_module.AppDslService, + "confirm_import", + lambda *_args, **_kwargs: _Result(ImportStatus.FAILED), + ) + monkeypatch.setattr(app_import_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + + with app.test_request_context("/console/api/apps/imports/import-1/confirm", method="POST"): + response, status = method(import_id="import-1") + + session.rollback.assert_called_once_with() + session.commit.assert_not_called() + assert status == 400 + assert response["status"] == ImportStatus.FAILED diff --git a/api/tests/unit_tests/controllers/console/app/test_app_response_models.py b/api/tests/unit_tests/controllers/console/app/test_app_response_models.py index 2ac3dc037d..35d07a987d 100644 --- a/api/tests/unit_tests/controllers/console/app/test_app_response_models.py +++ b/api/tests/unit_tests/controllers/console/app/test_app_response_models.py @@ -138,12 +138,15 @@ def app_models(app_module): def patch_signed_url(monkeypatch, app_module): """Ensure icon URL generation uses a deterministic helper for tests.""" - def _fake_signed_url(key: str | None) -> str | None: - if not key: + def _fake_build_icon_url(_icon_type, key: str | None) -> str | None: + if key is None: + return None + icon_type = str(_icon_type).lower() + if icon_type != "image": return None return f"signed:{key}" - monkeypatch.setattr(app_module.file_helpers, "get_signed_file_url", _fake_signed_url) + monkeypatch.setattr(app_module, "build_icon_url", _fake_build_icon_url) def _ts(hour: int = 12) -> datetime: diff --git a/api/tests/unit_tests/controllers/console/app/test_audio.py b/api/tests/unit_tests/controllers/console/app/test_audio.py index c52bc02420..2d218dac7e 100644 --- a/api/tests/unit_tests/controllers/console/app/test_audio.py +++ b/api/tests/unit_tests/controllers/console/app/test_audio.py @@ -4,7 +4,6 @@ import io from types import SimpleNamespace import pytest -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.datastructures import FileStorage from werkzeug.exceptions import InternalServerError @@ -21,6 +20,7 @@ from controllers.console.app.error import ( UnsupportedAudioTypeError, ) from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from services.audio_service import AudioService from services.errors.app_model_config import AppModelConfigBrokenError from services.errors.audio import ( diff --git a/api/tests/unit_tests/controllers/console/app/test_conversation_api.py b/api/tests/unit_tests/controllers/console/app/test_conversation_api.py index 11b3b3470d..24b7e39f73 100644 --- a/api/tests/unit_tests/controllers/console/app/test_conversation_api.py +++ b/api/tests/unit_tests/controllers/console/app/test_conversation_api.py @@ -33,12 +33,17 @@ def test_completion_conversation_list_returns_paginated_result(app, monkeypatch: monkeypatch.setattr(conversation_module, "parse_time_range", lambda *_args, **_kwargs: (None, None)) paginate_result = MagicMock() + paginate_result.page = 1 + paginate_result.per_page = 20 + paginate_result.total = 0 + paginate_result.has_next = False + paginate_result.items = [] 
monkeypatch.setattr(conversation_module.db, "paginate", lambda *_args, **_kwargs: paginate_result) with app.test_request_context("/console/api/apps/app-1/completion-conversations", method="GET"): response = method(app_model=SimpleNamespace(id="app-1")) - assert response is paginate_result + assert response == {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []} def test_completion_conversation_list_invalid_time_range(app, monkeypatch: pytest.MonkeyPatch) -> None: @@ -71,12 +76,17 @@ def test_chat_conversation_list_advanced_chat_calls_paginate(app, monkeypatch: p monkeypatch.setattr(conversation_module, "parse_time_range", lambda *_args, **_kwargs: (None, None)) paginate_result = MagicMock() + paginate_result.page = 1 + paginate_result.per_page = 20 + paginate_result.total = 0 + paginate_result.has_next = False + paginate_result.items = [] monkeypatch.setattr(conversation_module.db, "paginate", lambda *_args, **_kwargs: paginate_result) with app.test_request_context("/console/api/apps/app-1/chat-conversations", method="GET"): response = method(app_model=SimpleNamespace(id="app-1", mode=AppMode.ADVANCED_CHAT)) - assert response is paginate_result + assert response == {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []} def test_get_conversation_updates_read_at(monkeypatch: pytest.MonkeyPatch) -> None: diff --git a/api/tests/unit_tests/controllers/console/app/test_conversation_read_timestamp.py b/api/tests/unit_tests/controllers/console/app/test_conversation_read_timestamp.py deleted file mode 100644 index f588ab261d..0000000000 --- a/api/tests/unit_tests/controllers/console/app/test_conversation_read_timestamp.py +++ /dev/null @@ -1,42 +0,0 @@ -from datetime import datetime -from types import SimpleNamespace -from unittest.mock import MagicMock, patch - -from controllers.console.app.conversation import _get_conversation - - -def test_get_conversation_mark_read_keeps_updated_at_unchanged(): - app_model = SimpleNamespace(id="app-id") - account = SimpleNamespace(id="account-id") - conversation = MagicMock() - conversation.id = "conversation-id" - - with ( - patch( - "controllers.console.app.conversation.current_account_with_tenant", - return_value=(account, None), - autospec=True, - ), - patch( - "controllers.console.app.conversation.naive_utc_now", - return_value=datetime(2026, 2, 9, 0, 0, 0), - autospec=True, - ), - patch("controllers.console.app.conversation.db.session", autospec=True) as mock_session, - ): - mock_session.scalar.return_value = conversation - - _get_conversation(app_model, "conversation-id") - - statement = mock_session.execute.call_args[0][0] - compiled = statement.compile() - sql_text = str(compiled).lower() - compact_sql_text = sql_text.replace(" ", "") - params = compiled.params - - assert "updated_at=current_timestamp" not in compact_sql_text - assert "updated_at=conversations.updated_at" in compact_sql_text - assert "read_at=:read_at" in compact_sql_text - assert "read_account_id=:read_account_id" in compact_sql_text - assert params["read_at"] == datetime(2026, 2, 9, 0, 0, 0) - assert params["read_account_id"] == "account-id" diff --git a/api/tests/unit_tests/controllers/console/app/test_conversation_variables_api.py b/api/tests/unit_tests/controllers/console/app/test_conversation_variables_api.py new file mode 100644 index 0000000000..1a412aff29 --- /dev/null +++ b/api/tests/unit_tests/controllers/console/app/test_conversation_variables_api.py @@ -0,0 +1,108 @@ +from __future__ import annotations + +from contextlib import nullcontext +from 
datetime import UTC, datetime +from types import SimpleNamespace + +import pytest +from pydantic import ValidationError + +from controllers.console.app import conversation_variables as conversation_variables_module +from graphon.variables.types import SegmentType + + +def _unwrap(func): + bound_self = getattr(func, "__self__", None) + while hasattr(func, "__wrapped__"): + func = func.__wrapped__ + if bound_self is not None: + return func.__get__(bound_self, bound_self.__class__) + return func + + +def test_get_conversation_variables_returns_paginated_response(app, monkeypatch: pytest.MonkeyPatch) -> None: + api = conversation_variables_module.ConversationVariablesApi() + method = _unwrap(api.get) + + created_at = datetime(2026, 1, 1, tzinfo=UTC) + updated_at = datetime(2026, 1, 2, tzinfo=UTC) + row = SimpleNamespace( + created_at=created_at, + updated_at=updated_at, + to_variable=lambda: SimpleNamespace( + model_dump=lambda: { + "id": "var-1", + "name": "my_var", + "value_type": "string", + "value": "value", + "description": "desc", + } + ), + ) + session = SimpleNamespace(scalars=lambda _stmt: SimpleNamespace(all=lambda: [row])) + monkeypatch.setattr(conversation_variables_module, "db", SimpleNamespace(engine=object())) + monkeypatch.setattr( + conversation_variables_module, + "sessionmaker", + lambda *_args, **_kwargs: SimpleNamespace(begin=lambda: nullcontext(session)), + ) + + with app.test_request_context( + "/console/api/apps/app-1/conversation-variables", + method="GET", + query_string={"conversation_id": "conv-1"}, + ): + response = method(app_model=SimpleNamespace(id="app-1")) + + assert response["page"] == 1 + assert response["limit"] == 100 + assert response["total"] == 1 + assert response["has_more"] is False + assert response["data"][0]["id"] == "var-1" + assert response["data"][0]["created_at"] == int(created_at.timestamp()) + assert response["data"][0]["updated_at"] == int(updated_at.timestamp()) + + +def test_get_conversation_variables_normalizes_value_type_and_value(app, monkeypatch: pytest.MonkeyPatch) -> None: + api = conversation_variables_module.ConversationVariablesApi() + method = _unwrap(api.get) + + row = SimpleNamespace( + created_at=None, + updated_at=None, + to_variable=lambda: SimpleNamespace( + model_dump=lambda: { + "id": "var-2", + "name": "my_var_2", + "value_type": SegmentType.INTEGER, + "value": 42, + "description": None, + } + ), + ) + session = SimpleNamespace(scalars=lambda _stmt: SimpleNamespace(all=lambda: [row])) + monkeypatch.setattr(conversation_variables_module, "db", SimpleNamespace(engine=object())) + monkeypatch.setattr( + conversation_variables_module, + "sessionmaker", + lambda *_args, **_kwargs: SimpleNamespace(begin=lambda: nullcontext(session)), + ) + + with app.test_request_context( + "/console/api/apps/app-1/conversation-variables", + method="GET", + query_string={"conversation_id": "conv-1"}, + ): + response = method(app_model=SimpleNamespace(id="app-1")) + + assert response["data"][0]["value_type"] == "number" + assert response["data"][0]["value"] == "42" + + +def test_get_conversation_variables_requires_conversation_id(app) -> None: + api = conversation_variables_module.ConversationVariablesApi() + method = _unwrap(api.get) + + with app.test_request_context("/console/api/apps/app-1/conversation-variables", method="GET"): + with pytest.raises(ValidationError): + method(app_model=SimpleNamespace(id="app-1")) diff --git a/api/tests/unit_tests/controllers/console/app/test_mcp_server_response.py 
b/api/tests/unit_tests/controllers/console/app/test_mcp_server_response.py index baac4cd4e0..1af15d8dc6 100644 --- a/api/tests/unit_tests/controllers/console/app/test_mcp_server_response.py +++ b/api/tests/unit_tests/controllers/console/app/test_mcp_server_response.py @@ -1,6 +1,25 @@ import datetime +from types import SimpleNamespace +from unittest.mock import PropertyMock, patch -from controllers.console.app.mcp_server import AppMCPServerResponse +from flask import Flask + +from controllers.console import console_ns +from controllers.console.app.mcp_server import AppMCPServerController, AppMCPServerResponse + + +def unwrap(func): + while hasattr(func, "__wrapped__"): + func = func.__wrapped__ + return func + + +class _ValidatedResponse: + def __init__(self, payload): + self._payload = payload + + def model_dump(self, mode="json"): + return self._payload class TestAppMCPServerResponse: @@ -40,6 +59,18 @@ class TestAppMCPServerResponse: resp = AppMCPServerResponse.model_validate(data) assert resp.parameters == {"already": "parsed"} + def test_parameters_json_array_parsed(self): + data = { + "id": "s1", + "name": "test", + "server_code": "code", + "description": "desc", + "status": "active", + "parameters": '["a", "b"]', + } + resp = AppMCPServerResponse.model_validate(data) + assert resp.parameters == ["a", "b"] + def test_timestamps_normalized(self): dt = datetime.datetime(2024, 1, 1, 0, 0, 0, tzinfo=datetime.UTC) data = { @@ -68,3 +99,40 @@ class TestAppMCPServerResponse: resp = AppMCPServerResponse.model_validate(data) assert resp.created_at is None assert resp.updated_at is None + + +class TestAppMCPServerController: + def test_get_returns_empty_dict_when_server_missing(self): + api = AppMCPServerController() + method = unwrap(api.get) + + with patch("controllers.console.app.mcp_server.db.session.scalar", return_value=None): + response = method(api, app_model=SimpleNamespace(id="app-1")) + + assert response == {} + + def test_post_returns_201(self): + api = AppMCPServerController() + method = unwrap(api.post) + payload = {"parameters": {"timeout": 30}} + app = Flask(__name__) + app.config["TESTING"] = True + + with ( + app.test_request_context("/", json=payload), + patch.object(type(console_ns), "payload", new_callable=PropertyMock, return_value=payload), + patch("controllers.console.app.mcp_server.current_account_with_tenant", return_value=(None, "tenant-1")), + patch("controllers.console.app.mcp_server.db.session.add"), + patch("controllers.console.app.mcp_server.db.session.commit"), + patch("controllers.console.app.mcp_server.AppMCPServer.generate_server_code", return_value="server-code"), + patch( + "controllers.console.app.mcp_server.AppMCPServerResponse.model_validate", + return_value=_ValidatedResponse({"id": "server-1"}), + ), + ): + response, status_code = method( + api, app_model=SimpleNamespace(id="app-1", name="Demo App", description="App description") + ) + + assert response == {"id": "server-1"} + assert status_code == 201 diff --git a/api/tests/unit_tests/controllers/console/app/test_message_api.py b/api/tests/unit_tests/controllers/console/app/test_message_api.py index a76e958829..c984dbef5d 100644 --- a/api/tests/unit_tests/controllers/console/app/test_message_api.py +++ b/api/tests/unit_tests/controllers/console/app/test_message_api.py @@ -1,5 +1,7 @@ from __future__ import annotations +from datetime import UTC, datetime + import pytest from controllers.console.app import message as message_module @@ -120,3 +122,24 @@ def test_suggested_questions_response(app, 
monkeypatch: pytest.MonkeyPatch) -> N response = message_module.SuggestedQuestionsResponse(data=["What is AI?", "How does ML work?"]) assert len(response.data) == 2 assert response.data[0] == "What is AI?" + + +def test_message_detail_response_normalizes_aliases_and_timestamp(app, monkeypatch: pytest.MonkeyPatch) -> None: + """Test MessageDetailResponse normalizes alias fields and datetime timestamps.""" + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + response = message_module.MessageDetailResponse.model_validate( + { + "id": "550e8400-e29b-41d4-a716-446655440000", + "conversation_id": "550e8400-e29b-41d4-a716-446655440001", + "inputs": {"foo": "bar"}, + "query": "hello", + "re_sign_file_url_answer": "world", + "from_source": "user", + "status": "normal", + "created_at": created_at, + "message_metadata_dict": {"token_usage": 3}, + } + ) + assert response.answer == "world" + assert response.metadata == {"token_usage": 3} + assert response.created_at == int(created_at.timestamp()) diff --git a/api/tests/unit_tests/controllers/console/app/test_workflow.py b/api/tests/unit_tests/controllers/console/app/test_workflow.py index 3607636880..6ff3b19362 100644 --- a/api/tests/unit_tests/controllers/console/app/test_workflow.py +++ b/api/tests/unit_tests/controllers/console/app/test_workflow.py @@ -1,15 +1,16 @@ from __future__ import annotations +import json from datetime import datetime from types import SimpleNamespace from unittest.mock import Mock import pytest -from graphon.file import File, FileTransferMethod, FileType from werkzeug.exceptions import HTTPException, NotFound from controllers.console.app import workflow as workflow_module from controllers.console.app.error import DraftWorkflowNotExist, DraftWorkflowNotSync +from graphon.file import File, FileTransferMethod, FileType def _unwrap(func): @@ -258,6 +259,63 @@ def test_restore_published_workflow_to_draft_returns_400_for_invalid_structure( assert exc.value.description == "invalid workflow graph" +def test_get_published_workflows_marshals_items_before_session_closes(app, monkeypatch: pytest.MonkeyPatch) -> None: + api = workflow_module.PublishedAllWorkflowApi() + handler = _unwrap(api.get) + + session_state = {"open": False} + + class _SessionContext: + def __enter__(self): + session_state["open"] = True + return object() + + def __exit__(self, exc_type, exc, tb): + session_state["open"] = False + return False + + class _SessionMaker: + def begin(self): + return _SessionContext() + + class _Workflow: + @property + def id(self): + assert session_state["open"] is True + return "w1" + + monkeypatch.setattr(workflow_module, "db", SimpleNamespace(engine=object())) + monkeypatch.setattr(workflow_module, "sessionmaker", lambda *_args, **_kwargs: _SessionMaker()) + monkeypatch.setattr(workflow_module, "current_account_with_tenant", lambda: (SimpleNamespace(id="u1"), "t1")) + monkeypatch.setattr( + workflow_module, + "WorkflowService", + lambda: SimpleNamespace( + get_all_published_workflow=lambda **_kwargs: ([_Workflow()], False), + ), + ) + + def _fake_marshal(items, fields): + assert session_state["open"] is True + return [{"id": item.id} for item in items] + + monkeypatch.setattr(workflow_module, "marshal", _fake_marshal) + + with app.test_request_context( + "/apps/app/workflows", + method="GET", + query_string={"page": 1, "limit": 10, "user_id": "", "named_only": "false"}, + ): + response = handler(api, app_model=SimpleNamespace(id="app", workflow_id="wf-1")) + + assert response == { + "items": [{"id": "w1"}], + "page": 1, + 
"limit": 10, + "has_more": False, + } + + def test_draft_workflow_get_not_found(monkeypatch: pytest.MonkeyPatch) -> None: monkeypatch.setattr( workflow_module, "WorkflowService", lambda: SimpleNamespace(get_draft_workflow=lambda **_k: None) @@ -290,3 +348,87 @@ def test_advanced_chat_run_conversation_not_exists(app, monkeypatch: pytest.Monk ): with pytest.raises(NotFound): handler(api, app_model=SimpleNamespace(id="app")) + + +def test_workflow_online_users_filters_inaccessible_workflow(app, monkeypatch: pytest.MonkeyPatch) -> None: + app_id_1 = "11111111-1111-1111-1111-111111111111" + app_id_2 = "22222222-2222-2222-2222-222222222222" + signed_avatar_url = "https://files.example.com/signed/avatar-1" + sign_avatar = Mock(return_value=signed_avatar_url) + monkeypatch.setattr(workflow_module, "current_account_with_tenant", lambda: (SimpleNamespace(), "tenant-1")) + monkeypatch.setattr( + workflow_module, + "WorkflowService", + lambda: SimpleNamespace(get_accessible_app_ids=lambda app_ids, tenant_id: {app_id_1}), + ) + monkeypatch.setattr(workflow_module.file_helpers, "get_signed_file_url", sign_avatar) + + workflow_module.redis_client.hgetall.side_effect = lambda key: ( + { + b"sid-1": json.dumps( + { + "user_id": "u-1", + "username": "Alice", + "avatar": "avatar-file-id", + "sid": "sid-1", + } + ) + } + if key == f"{workflow_module.WORKFLOW_ONLINE_USERS_PREFIX}{app_id_1}" + else {} + ) + + api = workflow_module.WorkflowOnlineUsersApi() + handler = _unwrap(api.get) + + with app.test_request_context( + f"/apps/workflows/online-users?app_ids={app_id_1},{app_id_2}", + method="GET", + ): + response = handler(api) + + assert response == { + "data": [ + { + "app_id": app_id_1, + "users": [ + { + "user_id": "u-1", + "username": "Alice", + "avatar": signed_avatar_url, + "sid": "sid-1", + } + ], + } + ] + } + workflow_module.redis_client.hgetall.assert_called_once_with( + f"{workflow_module.WORKFLOW_ONLINE_USERS_PREFIX}{app_id_1}" + ) + sign_avatar.assert_called_once_with("avatar-file-id") + + +def test_workflow_online_users_rejects_excessive_workflow_ids(app, monkeypatch: pytest.MonkeyPatch) -> None: + monkeypatch.setattr(workflow_module, "current_account_with_tenant", lambda: (SimpleNamespace(), "tenant-1")) + accessible_app_ids = Mock(return_value=set()) + monkeypatch.setattr( + workflow_module, + "WorkflowService", + lambda: SimpleNamespace(get_accessible_app_ids=accessible_app_ids), + ) + + excessive_ids = ",".join(f"wf-{index}" for index in range(workflow_module.MAX_WORKFLOW_ONLINE_USERS_QUERY_IDS + 1)) + + api = workflow_module.WorkflowOnlineUsersApi() + handler = _unwrap(api.get) + + with app.test_request_context( + f"/apps/workflows/online-users?app_ids={excessive_ids}", + method="GET", + ): + with pytest.raises(HTTPException) as exc: + handler(api) + + assert exc.value.code == 400 + assert "Maximum" in exc.value.description + accessible_app_ids.assert_not_called() diff --git a/api/tests/unit_tests/controllers/console/app/test_workflow_app_log_api.py b/api/tests/unit_tests/controllers/console/app/test_workflow_app_log_api.py new file mode 100644 index 0000000000..a9853f25b0 --- /dev/null +++ b/api/tests/unit_tests/controllers/console/app/test_workflow_app_log_api.py @@ -0,0 +1,84 @@ +from __future__ import annotations + +from datetime import UTC, datetime + +from controllers.console.app import workflow_app_log as workflow_app_log_module +from graphon.enums import WorkflowExecutionStatus + + +def test_workflow_app_log_query_parses_bool_and_datetime(): + query = 
workflow_app_log_module.WorkflowAppLogQuery.model_validate( + { + "detail": "true", + "created_at__before": "2026-01-02T03:04:05Z", + "page": "2", + "limit": "10", + } + ) + + assert query.detail is True + assert query.created_at__before == datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + assert query.page == 2 + assert query.limit == 10 + + +def test_workflow_app_log_pagination_response_normalizes_nested_fields(): + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + response = workflow_app_log_module.WorkflowAppLogPaginationResponse.model_validate( + { + "page": 1, + "limit": 20, + "total": 1, + "has_more": False, + "data": [ + { + "id": "log-1", + "workflow_run": { + "id": "run-1", + "status": WorkflowExecutionStatus.SUCCEEDED, + "created_at": created_at, + "finished_at": created_at, + }, + "details": {"trigger_metadata": {}}, + "created_by_account": {"id": "acc-1", "name": "acc", "email": "acc@example.com"}, + "created_at": created_at, + } + ], + } + ).model_dump(mode="json") + + assert response["data"][0]["workflow_run"]["status"] == "succeeded" + assert response["data"][0]["workflow_run"]["created_at"] == int(created_at.timestamp()) + assert response["data"][0]["created_at"] == int(created_at.timestamp()) + + +def test_workflow_archived_log_pagination_response_normalizes_nested_fields(): + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + response = workflow_app_log_module.WorkflowArchivedLogPaginationResponse.model_validate( + { + "page": 1, + "limit": 20, + "total": 1, + "has_more": False, + "data": [ + { + "id": "archived-1", + "workflow_run": { + "id": "run-1", + "status": WorkflowExecutionStatus.FAILED, + }, + "trigger_metadata": {"type": "trigger-plugin"}, + "created_by_end_user": { + "id": "eu-1", + "type": "anonymous", + "is_anonymous": True, + "session_id": "session-1", + }, + "created_at": created_at, + } + ], + } + ).model_dump(mode="json") + + assert response["data"][0]["workflow_run"]["status"] == "failed" + assert response["data"][0]["created_at"] == int(created_at.timestamp()) diff --git a/api/tests/unit_tests/controllers/console/app/test_workflow_comment_api.py b/api/tests/unit_tests/controllers/console/app/test_workflow_comment_api.py new file mode 100644 index 0000000000..85afcf0e60 --- /dev/null +++ b/api/tests/unit_tests/controllers/console/app/test_workflow_comment_api.py @@ -0,0 +1,201 @@ +from __future__ import annotations + +from contextlib import nullcontext +from dataclasses import dataclass +from datetime import datetime +from types import SimpleNamespace +from unittest.mock import MagicMock, PropertyMock, patch + +import pytest +from flask import Flask +from werkzeug.exceptions import Forbidden + +from controllers.console import console_ns +from controllers.console import wraps as console_wraps +from controllers.console.app import workflow_comment as workflow_comment_module +from controllers.console.app import wraps as app_wraps +from libs import login as login_lib +from models.account import Account, AccountStatus, TenantAccountRole + + +def _make_account(role: TenantAccountRole) -> Account: + account = Account(name="tester", email="tester@example.com") + account.status = AccountStatus.ACTIVE + account.role = role + account.id = "account-123" # type: ignore[assignment] + account._current_tenant = SimpleNamespace(id="tenant-123") # type: ignore[attr-defined] + account._get_current_object = lambda: account # type: ignore[attr-defined] + return account + + +def _make_app() -> SimpleNamespace: + return SimpleNamespace(id="app-123", 
tenant_id="tenant-123", status="normal", mode="workflow") + + +def _patch_console_guards(monkeypatch: pytest.MonkeyPatch, account: Account, app_model: SimpleNamespace) -> None: + monkeypatch.setattr(login_lib.dify_config, "LOGIN_DISABLED", True) + monkeypatch.setattr(login_lib, "current_user", account) + monkeypatch.setattr(login_lib, "current_account_with_tenant", lambda: (account, account.current_tenant_id)) + monkeypatch.setattr(login_lib, "check_csrf_token", lambda *_, **__: None) + monkeypatch.setattr(console_wraps, "current_account_with_tenant", lambda: (account, account.current_tenant_id)) + monkeypatch.setattr(console_wraps.dify_config, "EDITION", "CLOUD") + monkeypatch.setattr(app_wraps, "current_account_with_tenant", lambda: (account, account.current_tenant_id)) + monkeypatch.setattr(app_wraps, "_load_app_model", lambda _app_id: app_model) + monkeypatch.setattr(workflow_comment_module, "current_user", account) + + +def _patch_write_services(monkeypatch: pytest.MonkeyPatch) -> None: + for method_name in ( + "create_comment", + "update_comment", + "delete_comment", + "resolve_comment", + "validate_comment_access", + "create_reply", + "update_reply", + "delete_reply", + ): + monkeypatch.setattr(workflow_comment_module.WorkflowCommentService, method_name, MagicMock()) + + +def _patch_payload(payload: dict[str, object] | None): + if payload is None: + return nullcontext() + return patch.object( + type(console_ns), + "payload", + new_callable=PropertyMock, + return_value=payload, + ) + + +@dataclass(frozen=True) +class WriteCase: + resource_cls: type + method_name: str + path: str + kwargs: dict[str, str] + payload: dict[str, object] | None = None + + +@pytest.mark.parametrize( + "case", + [ + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentListApi, + method_name="post", + path="/console/api/apps/app-123/workflow/comments", + kwargs={"app_id": "app-123"}, + payload={"content": "hello", "position_x": 1.0, "position_y": 2.0, "mentioned_user_ids": []}, + ), + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentDetailApi, + method_name="put", + path="/console/api/apps/app-123/workflow/comments/comment-1", + kwargs={"app_id": "app-123", "comment_id": "comment-1"}, + payload={"content": "hello", "position_x": 1.0, "position_y": 2.0, "mentioned_user_ids": []}, + ), + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentDetailApi, + method_name="delete", + path="/console/api/apps/app-123/workflow/comments/comment-1", + kwargs={"app_id": "app-123", "comment_id": "comment-1"}, + ), + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentResolveApi, + method_name="post", + path="/console/api/apps/app-123/workflow/comments/comment-1/resolve", + kwargs={"app_id": "app-123", "comment_id": "comment-1"}, + ), + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentReplyApi, + method_name="post", + path="/console/api/apps/app-123/workflow/comments/comment-1/replies", + kwargs={"app_id": "app-123", "comment_id": "comment-1"}, + payload={"content": "reply", "mentioned_user_ids": []}, + ), + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentReplyDetailApi, + method_name="put", + path="/console/api/apps/app-123/workflow/comments/comment-1/replies/reply-1", + kwargs={"app_id": "app-123", "comment_id": "comment-1", "reply_id": "reply-1"}, + payload={"content": "reply", "mentioned_user_ids": []}, + ), + WriteCase( + resource_cls=workflow_comment_module.WorkflowCommentReplyDetailApi, + method_name="delete", + 
path="/console/api/apps/app-123/workflow/comments/comment-1/replies/reply-1", + kwargs={"app_id": "app-123", "comment_id": "comment-1", "reply_id": "reply-1"}, + ), + ], +) +def test_write_endpoints_require_edit_permission(app: Flask, monkeypatch: pytest.MonkeyPatch, case: WriteCase) -> None: + app.config.setdefault("RESTX_MASK_HEADER", "X-Fields") + account = _make_account(TenantAccountRole.NORMAL) + app_model = _make_app() + _patch_console_guards(monkeypatch, account, app_model) + _patch_write_services(monkeypatch) + + with app.test_request_context(case.path, method=case.method_name.upper(), json=case.payload): + with _patch_payload(case.payload): + handler = getattr(case.resource_cls(), case.method_name) + with pytest.raises(Forbidden): + handler(**case.kwargs) + + +def test_create_comment_allows_editor(app: Flask, monkeypatch: pytest.MonkeyPatch) -> None: + app.config.setdefault("RESTX_MASK_HEADER", "X-Fields") + account = _make_account(TenantAccountRole.EDITOR) + app_model = _make_app() + _patch_console_guards(monkeypatch, account, app_model) + + create_comment_mock = MagicMock(return_value={"id": "comment-1"}) + monkeypatch.setattr(workflow_comment_module.WorkflowCommentService, "create_comment", create_comment_mock) + payload = {"content": "hello", "position_x": 1.0, "position_y": 2.0, "mentioned_user_ids": []} + + with app.test_request_context("/console/api/apps/app-123/workflow/comments", method="POST", json=payload): + with _patch_payload(payload): + result = workflow_comment_module.WorkflowCommentListApi().post(app_id="app-123") + + if isinstance(result, tuple): + response = result[0] + else: + response = result + assert response["id"] == "comment-1" + create_comment_mock.assert_called_once_with( + tenant_id="tenant-123", + app_id="app-123", + created_by="account-123", + content="hello", + position_x=1.0, + position_y=2.0, + mentioned_user_ids=[], + ) + + +def test_update_comment_omits_mentions_when_payload_does_not_include_them( + app: Flask, monkeypatch: pytest.MonkeyPatch +) -> None: + app.config.setdefault("RESTX_MASK_HEADER", "X-Fields") + account = _make_account(TenantAccountRole.EDITOR) + app_model = _make_app() + _patch_console_guards(monkeypatch, account, app_model) + + update_comment_mock = MagicMock(return_value={"id": "comment-1", "updated_at": datetime(2024, 1, 1, 12, 0, 0)}) + monkeypatch.setattr(workflow_comment_module.WorkflowCommentService, "update_comment", update_comment_mock) + payload = {"content": "hello", "position_x": 10.0, "position_y": 20.0} + + with app.test_request_context("/console/api/apps/app-123/workflow/comments/comment-1", method="PUT", json=payload): + with _patch_payload(payload): + workflow_comment_module.WorkflowCommentDetailApi().put(app_id="app-123", comment_id="comment-1") + + update_comment_mock.assert_called_once_with( + tenant_id="tenant-123", + app_id="app-123", + comment_id="comment-1", + user_id="account-123", + content="hello", + position_x=10.0, + position_y=20.0, + mentioned_user_ids=None, + ) diff --git a/api/tests/unit_tests/controllers/console/app/test_workflow_pause_details_api.py b/api/tests/unit_tests/controllers/console/app/test_workflow_pause_details_api.py index e11102acb1..c4a8148446 100644 --- a/api/tests/unit_tests/controllers/console/app/test_workflow_pause_details_api.py +++ b/api/tests/unit_tests/controllers/console/app/test_workflow_pause_details_api.py @@ -6,14 +6,14 @@ from unittest.mock import Mock import pytest from flask import Flask -from graphon.entities.pause_reason import HumanInputRequired -from 
graphon.enums import WorkflowExecutionStatus -from graphon.nodes.human_input.entities import FormInput, UserAction -from graphon.nodes.human_input.enums import FormInputType from controllers.console import wraps as console_wraps from controllers.console.app import workflow_run as workflow_run_module from controllers.web.error import NotFoundError +from graphon.entities.pause_reason import HumanInputRequired +from graphon.enums import WorkflowExecutionStatus +from graphon.nodes.human_input.entities import FormInput, UserAction +from graphon.nodes.human_input.enums import FormInputType from libs import login as login_lib from models.account import Account, AccountStatus, TenantAccountRole from models.workflow import WorkflowRun diff --git a/api/tests/unit_tests/controllers/console/app/test_workflow_trigger_api.py b/api/tests/unit_tests/controllers/console/app/test_workflow_trigger_api.py new file mode 100644 index 0000000000..5363aa154f --- /dev/null +++ b/api/tests/unit_tests/controllers/console/app/test_workflow_trigger_api.py @@ -0,0 +1,54 @@ +from __future__ import annotations + +from datetime import UTC, datetime +from types import SimpleNamespace + +from controllers.console.app import workflow_trigger as workflow_trigger_module + + +def test_parser_models_validate(): + parser = workflow_trigger_module.Parser(node_id="node-1") + enable_parser = workflow_trigger_module.ParserEnable( + trigger_id="550e8400-e29b-41d4-a716-446655440000", enable_trigger=True + ) + + assert parser.node_id == "node-1" + assert enable_parser.enable_trigger is True + + +def test_workflow_trigger_response_serializes_datetime(): + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + trigger = SimpleNamespace( + id="trigger-1", + trigger_type="trigger-plugin", + title="Trigger", + node_id="node-1", + provider_name="provider", + icon="https://example.com/icon", + status="enabled", + created_at=created_at, + updated_at=created_at, + ) + + payload = workflow_trigger_module.WorkflowTriggerResponse.model_validate(trigger, from_attributes=True).model_dump( + mode="json" + ) + assert payload["id"] == "trigger-1" + assert payload["created_at"] == "2026-01-02T03:04:05Z" + assert payload["updated_at"] == "2026-01-02T03:04:05Z" + + +def test_webhook_trigger_response_serializes_datetime(): + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + webhook = { + "id": "webhook-1", + "webhook_id": "whk-1", + "webhook_url": "https://example.com/hook", + "webhook_debug_url": "https://example.com/hook/debug", + "node_id": "node-1", + "created_at": created_at, + } + + payload = workflow_trigger_module.WebhookTriggerResponse.model_validate(webhook).model_dump(mode="json") + assert payload["webhook_id"] == "whk-1" + assert payload["created_at"] == "2026-01-02T03:04:05Z" diff --git a/api/tests/unit_tests/controllers/console/app/workflow_draft_variables_test.py b/api/tests/unit_tests/controllers/console/app/workflow_draft_variables_test.py index 740da1f1df..b19a1740eb 100644 --- a/api/tests/unit_tests/controllers/console/app/workflow_draft_variables_test.py +++ b/api/tests/unit_tests/controllers/console/app/workflow_draft_variables_test.py @@ -5,7 +5,6 @@ from unittest.mock import MagicMock, patch import pytest from flask_restx import marshal -from graphon.variables.types import SegmentType from controllers.console.app.workflow_draft_variable import ( _WORKFLOW_DRAFT_VARIABLE_FIELDS, @@ -16,6 +15,7 @@ from controllers.console.app.workflow_draft_variable import ( ) from core.workflow.variable_prefixes import 
CONVERSATION_VARIABLE_NODE_ID, SYSTEM_VARIABLE_NODE_ID from factories.variable_factory import build_segment +from graphon.variables.types import SegmentType from libs.datetime_utils import naive_utc_now from libs.uuid_utils import uuidv7 from models.workflow import WorkflowDraftVariable, WorkflowDraftVariableFile diff --git a/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_datasource_auth.py b/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_datasource_auth.py index 9c9f8da87c..5136922e88 100644 --- a/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_datasource_auth.py +++ b/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_datasource_auth.py @@ -1,7 +1,6 @@ from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from werkzeug.exceptions import Forbidden, NotFound from controllers.console import console_ns @@ -18,6 +17,7 @@ from controllers.console.datasets.rag_pipeline.datasource_auth import ( DatasourceUpdateProviderNameApi, ) from core.plugin.impl.oauth import OAuthHandler +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from services.datasource_provider_service import DatasourceProviderService from services.plugin.oauth_service import OAuthProxyService diff --git a/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_rag_pipeline_draft_variable.py b/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_rag_pipeline_draft_variable.py index 6ef8ccfdbd..63950736c5 100644 --- a/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_rag_pipeline_draft_variable.py +++ b/api/tests/unit_tests/controllers/console/datasets/rag_pipeline/test_rag_pipeline_draft_variable.py @@ -2,7 +2,6 @@ from unittest.mock import MagicMock, patch import pytest from flask import Response -from graphon.variables.types import SegmentType from controllers.console import console_ns from controllers.console.app.error import DraftWorkflowNotExist @@ -16,6 +15,7 @@ from controllers.console.datasets.rag_pipeline.rag_pipeline_draft_variable impor ) from controllers.web.error import InvalidArgumentError, NotFoundError from core.workflow.variable_prefixes import SYSTEM_VARIABLE_NODE_ID +from graphon.variables.types import SegmentType from models.account import Account diff --git a/api/tests/unit_tests/controllers/console/datasets/test_datasets_document.py b/api/tests/unit_tests/controllers/console/datasets/test_datasets_document.py index ce2278de4f..d9b02ac453 100644 --- a/api/tests/unit_tests/controllers/console/datasets/test_datasets_document.py +++ b/api/tests/unit_tests/controllers/console/datasets/test_datasets_document.py @@ -1,3 +1,4 @@ +from types import SimpleNamespace from unittest.mock import MagicMock, patch import pytest @@ -215,17 +216,23 @@ class TestDatasetDocumentListApi: method = unwrap(api.post) payload = {"indexing_technique": "economy"} + created_dataset = SimpleNamespace(id="ds-1", name="Dataset", indexing_technique="economy") + created_document = SimpleNamespace(id="doc-1", name="Document", doc_metadata_details=None) with ( app.test_request_context("/", json=payload), patch.object(type(console_ns), "payload", payload), + patch( + "controllers.console.datasets.datasets_document.DatasetService.get_dataset", + return_value=created_dataset, + ), patch( "controllers.console.datasets.datasets_document.DocumentService.document_create_args_validate", return_value=None, ), patch( 
"controllers.console.datasets.datasets_document.DocumentService.save_document_with_dataset_id", - return_value=([MagicMock()], "batch-1"), + return_value=([created_document], "batch-1"), ), ): response = method(api, "ds-1") diff --git a/api/tests/unit_tests/controllers/console/datasets/test_hit_testing.py b/api/tests/unit_tests/controllers/console/datasets/test_hit_testing.py index 726c0a5cf3..09ed2aaf69 100644 --- a/api/tests/unit_tests/controllers/console/datasets/test_hit_testing.py +++ b/api/tests/unit_tests/controllers/console/datasets/test_hit_testing.py @@ -99,6 +99,57 @@ class TestHitTestingApi: assert "records" in result assert result["records"] == [] + def test_hit_testing_success_with_optional_record_fields(self, app, dataset, dataset_id): + api = HitTestingApi() + method = unwrap(api.post) + + payload = { + "query": "what is vector search", + } + records = [ + { + "segment": None, + "child_chunks": [], + "score": None, + "tsne_position": None, + "files": [], + "summary": None, + } + ] + + with ( + app.test_request_context("/"), + patch.object( + type(console_ns), + "payload", + new_callable=PropertyMock, + return_value=payload, + ), + patch.object( + HitTestingPayload, + "model_validate", + return_value=MagicMock(model_dump=lambda **_: payload), + ), + patch.object( + HitTestingApi, + "get_and_validate_dataset", + return_value=dataset, + ), + patch.object( + HitTestingApi, + "hit_testing_args_check", + ), + patch.object( + HitTestingApi, + "perform_hit_testing", + return_value={"query": payload["query"], "records": records}, + ), + ): + result = method(api, dataset_id) + + assert result["query"] == payload["query"] + assert result["records"] == records + def test_hit_testing_dataset_not_found(self, app, dataset_id): api = HitTestingApi() method = unwrap(api.post) diff --git a/api/tests/unit_tests/controllers/console/datasets/test_hit_testing_base.py b/api/tests/unit_tests/controllers/console/datasets/test_hit_testing_base.py index 710c9be684..e4acd91b76 100644 --- a/api/tests/unit_tests/controllers/console/datasets/test_hit_testing_base.py +++ b/api/tests/unit_tests/controllers/console/datasets/test_hit_testing_base.py @@ -1,7 +1,6 @@ from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import Forbidden, InternalServerError, NotFound import services @@ -21,6 +20,7 @@ from core.errors.error import ( ProviderTokenNotInitError, QuotaExceededError, ) +from graphon.model_runtime.errors.invoke import InvokeError from models.account import Account from services.dataset_service import DatasetService from services.hit_testing_service import HitTestingService diff --git a/api/tests/unit_tests/controllers/console/explore/test_audio.py b/api/tests/unit_tests/controllers/console/explore/test_audio.py index 66c9ba48c5..b4b57022e2 100644 --- a/api/tests/unit_tests/controllers/console/explore/test_audio.py +++ b/api/tests/unit_tests/controllers/console/explore/test_audio.py @@ -2,7 +2,6 @@ from io import BytesIO from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import InternalServerError import controllers.console.explore.audio as audio_module @@ -20,6 +19,7 @@ from core.errors.error import ( ProviderTokenNotInitError, QuotaExceededError, ) +from graphon.model_runtime.errors.invoke import InvokeError from services.errors.audio import ( AudioTooLargeServiceError, NoAudioUploadedServiceError, diff --git 
a/api/tests/unit_tests/controllers/console/explore/test_message.py b/api/tests/unit_tests/controllers/console/explore/test_message.py index 2e4ca4f2a4..145cc9cdd7 100644 --- a/api/tests/unit_tests/controllers/console/explore/test_message.py +++ b/api/tests/unit_tests/controllers/console/explore/test_message.py @@ -1,7 +1,6 @@ from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import InternalServerError, NotFound import controllers.console.explore.message as module @@ -22,6 +21,7 @@ from core.errors.error import ( ProviderTokenNotInitError, QuotaExceededError, ) +from graphon.model_runtime.errors.invoke import InvokeError from services.errors.conversation import ConversationNotExistsError from services.errors.message import ( FirstMessageNotExistsError, diff --git a/api/tests/unit_tests/controllers/console/explore/test_recommended_app.py b/api/tests/unit_tests/controllers/console/explore/test_recommended_app.py index 02c7507ea7..76c863577a 100644 --- a/api/tests/unit_tests/controllers/console/explore/test_recommended_app.py +++ b/api/tests/unit_tests/controllers/console/explore/test_recommended_app.py @@ -1,6 +1,7 @@ from unittest.mock import MagicMock, patch import controllers.console.explore.recommended_app as module +from models.model import AppMode, IconType def unwrap(func): @@ -90,3 +91,48 @@ class TestRecommendedAppApi: service_mock.assert_called_once_with("11111111-1111-1111-1111-111111111111") assert result == result_data + + +class TestRecommendedAppResponseModels: + def test_recommended_app_info_response_computes_icon_url(self): + with patch.object(module, "build_icon_url", return_value="https://signed/icon.png"): + payload = module.RecommendedAppInfoResponse.model_validate( + { + "id": "app-1", + "name": "App", + "mode": AppMode.CHAT, + "icon": "icon.png", + "icon_type": IconType.IMAGE, + "icon_background": "#fff", + } + ).model_dump(mode="json") + + assert payload["icon_url"] == "https://signed/icon.png" + + def test_recommended_app_list_response_serialization(self): + response = module.RecommendedAppListResponse.model_validate( + { + "recommended_apps": [ + { + "app": { + "id": "app-1", + "name": "App", + "mode": "chat", + "icon": "icon.png", + "icon_type": "emoji", + "icon_background": "#fff", + }, + "app_id": "app-1", + "description": "desc", + "category": "cat", + "position": 1, + "is_listed": True, + "can_trial": False, + } + ], + "categories": ["cat"], + } + ).model_dump(mode="json") + + assert response["recommended_apps"][0]["app_id"] == "app-1" + assert response["categories"] == ["cat"] diff --git a/api/tests/unit_tests/controllers/console/explore/test_trial.py b/api/tests/unit_tests/controllers/console/explore/test_trial.py index 04beb31389..3625056af9 100644 --- a/api/tests/unit_tests/controllers/console/explore/test_trial.py +++ b/api/tests/unit_tests/controllers/console/explore/test_trial.py @@ -3,7 +3,6 @@ from unittest.mock import MagicMock, patch from uuid import uuid4 import pytest -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.exceptions import Forbidden, InternalServerError, NotFound import controllers.console.explore.trial as module @@ -26,6 +25,7 @@ from core.errors.error import ( ProviderTokenNotInitError, QuotaExceededError, ) +from graphon.model_runtime.errors.invoke import InvokeError from models import Account from models.account import TenantStatus from models.model import AppMode @@ -94,7 +94,7 @@ class TestTrialAppWorkflowRunApi: 
with app.test_request_context("/"): with pytest.raises(NotWorkflowAppError): - method(MagicMock(mode=AppMode.CHAT)) + method(api, MagicMock(mode=AppMode.CHAT)) def test_success(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -106,7 +106,7 @@ class TestTrialAppWorkflowRunApi: patch.object(module.AppGenerateService, "generate", return_value=MagicMock()), patch.object(module.RecommendedAppService, "add_trial_app_record"), ): - result = method(trial_app_workflow) + result = method(api, trial_app_workflow) assert result is not None @@ -124,7 +124,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(ProviderNotInitializeError): - method(trial_app_workflow) + method(api, trial_app_workflow) def test_workflow_quota_exceeded(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -140,7 +140,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(ProviderQuotaExceededError): - method(trial_app_workflow) + method(api, trial_app_workflow) def test_workflow_model_not_support(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -156,7 +156,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(ProviderModelCurrentlyNotSupportError): - method(trial_app_workflow) + method(api, trial_app_workflow) def test_workflow_invoke_error(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -172,7 +172,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(CompletionRequestError): - method(trial_app_workflow) + method(api, trial_app_workflow) def test_workflow_rate_limit_error(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -188,7 +188,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(InvokeRateLimitHttpError): - method(trial_app_workflow) + method(api, trial_app_workflow) def test_workflow_value_error(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -204,7 +204,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(ValueError): - method(trial_app_workflow) + method(api, trial_app_workflow) def test_workflow_generic_exception(self, app, trial_app_workflow, account): api = module.TrialAppWorkflowRunApi() @@ -220,7 +220,7 @@ class TestTrialAppWorkflowRunApi: ), ): with pytest.raises(InternalServerError): - method(trial_app_workflow) + method(api, trial_app_workflow) class TestTrialChatApi: @@ -566,7 +566,7 @@ class TestTrialMessageSuggestedQuestionApi: with app.test_request_context("/"): with pytest.raises(NotChatAppError): - method(api, MagicMock(mode="completion"), str(uuid4())) + method(MagicMock(mode="completion"), str(uuid4())) def test_success(self, app, trial_app_chat, account): api = module.TrialMessageSuggestedQuestionApi() @@ -581,7 +581,7 @@ class TestTrialMessageSuggestedQuestionApi: return_value=["q1", "q2"], ), ): - result = method(api, trial_app_chat, str(uuid4())) + result = method(trial_app_chat, str(uuid4())) assert result == {"data": ["q1", "q2"]} @@ -599,7 +599,7 @@ class TestTrialMessageSuggestedQuestionApi: ), ): with pytest.raises(NotFound): - method(api, trial_app_chat, str(uuid4())) + method(trial_app_chat, str(uuid4())) class TestTrialAppParameterApi: @@ -931,7 +931,7 @@ class TestTrialAppWorkflowTaskStopApi: with app.test_request_context("/"): with pytest.raises(NotWorkflowAppError): - method(trial_app_chat, str(uuid4())) + method(api, trial_app_chat, str(uuid4())) def test_success(self, app, trial_app_workflow, account): api = 
module.TrialAppWorkflowTaskStopApi() @@ -944,7 +944,7 @@ class TestTrialAppWorkflowTaskStopApi: patch.object(module.AppQueueManager, "set_stop_flag_no_user_check") as mock_set_flag, patch.object(module.GraphEngineManager, "send_stop_command") as mock_send_cmd, ): - result = method(trial_app_workflow, task_id) + result = method(api, trial_app_workflow, task_id) assert result == {"result": "success"} mock_set_flag.assert_called_once_with(task_id) diff --git a/api/tests/unit_tests/controllers/console/tag/test_tags.py b/api/tests/unit_tests/controllers/console/tag/test_tags.py index e89b89c8b1..2be5a21f28 100644 --- a/api/tests/unit_tests/controllers/console/tag/test_tags.py +++ b/api/tests/unit_tests/controllers/console/tag/test_tags.py @@ -1,9 +1,11 @@ +from types import SimpleNamespace from unittest.mock import MagicMock, PropertyMock, patch import pytest from flask import Flask from werkzeug.exceptions import Forbidden +import controllers.console.tag.tags as module from controllers.console import console_ns from controllers.console.tag.tags import ( TagBindingCreateApi, @@ -83,13 +85,20 @@ class TestTagListApi: ), patch( "controllers.console.tag.tags.TagService.get_tags", - return_value=[{"id": "1", "name": "tag"}], + return_value=[ + SimpleNamespace( + id="1", + name="tag", + type=TagType.KNOWLEDGE, + binding_count=1, + ) + ], ), ): result, status = method(api) assert status == 200 - assert isinstance(result, list) + assert result == [{"id": "1", "name": "tag", "type": "knowledge", "binding_count": "1"}] def test_post_success(self, app, admin_user, tag, payload_patch): api = TagListApi() @@ -113,6 +122,7 @@ class TestTagListApi: assert status == 200 assert result["name"] == "test-tag" + assert result["binding_count"] == "0" def test_post_forbidden(self, app, readonly_user, payload_patch): api = TagListApi() @@ -158,7 +168,7 @@ class TestTagUpdateDeleteApi: result, status = method(api, "tag-1") assert status == 200 - assert result["binding_count"] == 3 + assert result["binding_count"] == "3" def test_patch_forbidden(self, app, readonly_user, payload_patch): api = TagUpdateDeleteApi() @@ -277,3 +287,13 @@ class TestTagBindingDeleteApi: ): with pytest.raises(Forbidden): method(api) + + +class TestTagResponseModel: + def test_tag_response_normalizes_enum_type(self): + payload = module.TagResponse.model_validate( + {"id": "tag-1", "name": "tag", "type": TagType.KNOWLEDGE, "binding_count": 1} + ).model_dump(mode="json") + + assert payload["type"] == "knowledge" + assert payload["binding_count"] == "1" diff --git a/api/tests/unit_tests/controllers/console/test_workspace_account.py b/api/tests/unit_tests/controllers/console/test_workspace_account.py index dd643faac9..c513be950b 100644 --- a/api/tests/unit_tests/controllers/console/test_workspace_account.py +++ b/api/tests/unit_tests/controllers/console/test_workspace_account.py @@ -11,7 +11,7 @@ from controllers.console.workspace.account import ( ChangeEmailSendEmailApi, CheckEmailUnique, ) -from models import Account +from models import Account, AccountStatus from services.account_service import AccountService @@ -33,7 +33,7 @@ def _build_account(email: str, account_id: str = "acc", tenant: object | None = account = Account(name=account_id, email=email) account.email = email account.id = account_id - account.status = "active" + account.status = AccountStatus.ACTIVE account._current_tenant = tenant_obj return account diff --git a/api/tests/unit_tests/controllers/console/workspace/test_load_balancing_config.py 
b/api/tests/unit_tests/controllers/console/workspace/test_load_balancing_config.py index 9c42ee9529..b2f949c6e2 100644 --- a/api/tests/unit_tests/controllers/console/workspace/test_load_balancing_config.py +++ b/api/tests/unit_tests/controllers/console/workspace/test_load_balancing_config.py @@ -11,9 +11,10 @@ from unittest.mock import MagicMock import pytest from flask import Flask from flask.views import MethodView +from werkzeug.exceptions import Forbidden + from graphon.model_runtime.entities.model_entities import ModelType from graphon.model_runtime.errors.validate import CredentialsValidateFailedError -from werkzeug.exceptions import Forbidden if not hasattr(builtins, "MethodView"): builtins.MethodView = MethodView # type: ignore[attr-defined] diff --git a/api/tests/unit_tests/controllers/console/workspace/test_model_providers.py b/api/tests/unit_tests/controllers/console/workspace/test_model_providers.py index fb9eec98cb..168479af1e 100644 --- a/api/tests/unit_tests/controllers/console/workspace/test_model_providers.py +++ b/api/tests/unit_tests/controllers/console/workspace/test_model_providers.py @@ -1,7 +1,6 @@ from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from pydantic_core import ValidationError from werkzeug.exceptions import Forbidden @@ -14,6 +13,7 @@ from controllers.console.workspace.model_providers import ( ModelProviderValidateApi, PreferredProviderTypeUpdateApi, ) +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError VALID_UUID = "123e4567-e89b-12d3-a456-426614174000" INVALID_UUID = "123" diff --git a/api/tests/unit_tests/controllers/console/workspace/test_models.py b/api/tests/unit_tests/controllers/console/workspace/test_models.py index c829327bc7..f0d32f81fb 100644 --- a/api/tests/unit_tests/controllers/console/workspace/test_models.py +++ b/api/tests/unit_tests/controllers/console/workspace/test_models.py @@ -2,8 +2,6 @@ from unittest.mock import MagicMock, patch import pytest from flask import Flask -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from controllers.console.workspace.models import ( DefaultModelApi, @@ -16,6 +14,8 @@ from controllers.console.workspace.models import ( ModelProviderModelParameterRuleApi, ModelProviderModelValidateApi, ) +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError def unwrap(func): diff --git a/api/tests/unit_tests/controllers/console/workspace/test_workspace.py b/api/tests/unit_tests/controllers/console/workspace/test_workspace.py index b2d13dbbdf..e82a29f045 100644 --- a/api/tests/unit_tests/controllers/console/workspace/test_workspace.py +++ b/api/tests/unit_tests/controllers/console/workspace/test_workspace.py @@ -18,6 +18,7 @@ from controllers.console.workspace.workspace import ( CustomConfigWorkspaceApi, SwitchWorkspaceApi, TenantApi, + TenantInfoResponse, TenantListApi, WebappLogoWorkspaceApi, WorkspaceInfoApi, @@ -435,6 +436,23 @@ class TestTenantApi: assert status == 200 +class TestTenantInfoResponse: + def test_tenant_info_response_normalizes_enum_and_datetime(self): + created_at = naive_utc_now() + payload = TenantInfoResponse.model_validate( + { + "id": "t1", + "status": TenantStatus.NORMAL, + "plan": CloudPlan.TEAM, + "created_at": created_at, + } + ).model_dump(mode="json") + + assert 
payload["status"] == "normal" + assert payload["plan"] == "team" + assert payload["created_at"] == int(created_at.timestamp()) + + class TestSwitchWorkspaceApi: def test_switch_success(self, app): api = SwitchWorkspaceApi() diff --git a/api/tests/unit_tests/controllers/inner_api/app/test_dsl.py b/api/tests/unit_tests/controllers/inner_api/app/test_dsl.py index 974d8f7bc6..71381e6a2b 100644 --- a/api/tests/unit_tests/controllers/inner_api/app/test_dsl.py +++ b/api/tests/unit_tests/controllers/inner_api/app/test_dsl.py @@ -18,6 +18,7 @@ from controllers.inner_api.app.dsl import ( InnerAppDSLImportPayload, _get_active_account, ) +from models.account import AccountStatus from services.app_dsl_service import ImportStatus @@ -63,7 +64,7 @@ class TestGetActiveAccount: @patch("controllers.inner_api.app.dsl.db") def test_returns_active_account(self, mock_db): mock_account = MagicMock() - mock_account.status = "active" + mock_account.status = AccountStatus.ACTIVE mock_db.session.scalar.return_value = mock_account result = _get_active_account("user@example.com") @@ -74,7 +75,7 @@ class TestGetActiveAccount: @patch("controllers.inner_api.app.dsl.db") def test_returns_none_for_inactive_account(self, mock_db): mock_account = MagicMock() - mock_account.status = "banned" + mock_account.status = AccountStatus.BANNED mock_db.session.scalar.return_value = mock_account result = _get_active_account("banned@example.com") @@ -102,16 +103,16 @@ class TestEnterpriseAppDSLImport: @pytest.fixture def _mock_import_deps(self): - """Patch db, sessionmaker, and AppDslService for import handler tests.""" - mock_session_ctx = MagicMock() - mock_session_ctx.__enter__ = MagicMock(return_value=MagicMock()) - mock_session_ctx.__exit__ = MagicMock(return_value=False) - mock_sessionmaker = MagicMock(return_value=MagicMock(begin=MagicMock(return_value=mock_session_ctx))) + """Patch db, Session, and AppDslService for import handler tests.""" + mock_session = MagicMock() + mock_session.__enter__ = MagicMock(return_value=mock_session) + mock_session.__exit__ = MagicMock(return_value=False) with ( patch("controllers.inner_api.app.dsl.db"), - patch("controllers.inner_api.app.dsl.sessionmaker", mock_sessionmaker), + patch("controllers.inner_api.app.dsl.Session", return_value=mock_session), patch("controllers.inner_api.app.dsl.AppDslService") as mock_dsl_cls, ): + self._mock_session = mock_session self._mock_dsl = MagicMock() mock_dsl_cls.return_value = self._mock_dsl yield @@ -147,6 +148,8 @@ class TestEnterpriseAppDSLImport: assert status_code == 200 assert body["status"] == "completed" mock_account.set_tenant_id.assert_called_once_with("ws-123") + self._mock_session.commit.assert_called_once_with() + self._mock_session.rollback.assert_not_called() @pytest.mark.usefixtures("_mock_import_deps") @patch("controllers.inner_api.app.dsl._get_active_account") @@ -162,6 +165,8 @@ class TestEnterpriseAppDSLImport: assert status_code == 202 assert body["status"] == "pending" + self._mock_session.commit.assert_called_once_with() + self._mock_session.rollback.assert_not_called() @pytest.mark.usefixtures("_mock_import_deps") @patch("controllers.inner_api.app.dsl._get_active_account") @@ -177,6 +182,8 @@ class TestEnterpriseAppDSLImport: assert status_code == 400 assert body["status"] == "failed" + self._mock_session.rollback.assert_called_once_with() + self._mock_session.commit.assert_not_called() @patch("controllers.inner_api.app.dsl._get_active_account") def test_import_account_not_found_returns_404(self, mock_get_account, api_instance, app: 
Flask): diff --git a/api/tests/unit_tests/controllers/inner_api/plugin/test_plugin_wraps.py b/api/tests/unit_tests/controllers/inner_api/plugin/test_plugin_wraps.py index 957d7fbd9b..0895fac3a4 100644 --- a/api/tests/unit_tests/controllers/inner_api/plugin/test_plugin_wraps.py +++ b/api/tests/unit_tests/controllers/inner_api/plugin/test_plugin_wraps.py @@ -2,6 +2,7 @@ Unit tests for inner_api plugin decorators """ +from typing import Any from unittest.mock import MagicMock, patch import pytest @@ -232,11 +233,11 @@ class TestGetUserTenant: class PluginTestPayload: """Simple test payload class""" - def __init__(self, data: dict): + def __init__(self, data: dict[str, Any]): self.value = data.get("value") @classmethod - def model_validate(cls, data: dict): + def model_validate(cls, data: dict[str, Any]): return cls(data) @@ -277,7 +278,7 @@ class TestPluginData: # Arrange class InvalidPayload: @classmethod - def model_validate(cls, data: dict): + def model_validate(cls, data: dict[str, Any]): raise Exception("Validation failed") @plugin_data(payload_type=InvalidPayload) diff --git a/api/tests/unit_tests/controllers/inner_api/workspace/test_workspace.py b/api/tests/unit_tests/controllers/inner_api/workspace/test_workspace.py index 56a8f94963..7d2193adc6 100644 --- a/api/tests/unit_tests/controllers/inner_api/workspace/test_workspace.py +++ b/api/tests/unit_tests/controllers/inner_api/workspace/test_workspace.py @@ -20,6 +20,7 @@ from controllers.inner_api.workspace.workspace import ( WorkspaceCreatePayload, WorkspaceOwnerlessPayload, ) +from models.account import TenantStatus class TestWorkspaceCreatePayload: @@ -98,7 +99,7 @@ class TestEnterpriseWorkspace: mock_tenant.id = "tenant-id" mock_tenant.name = "My Workspace" mock_tenant.plan = "sandbox" - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_tenant.created_at = now mock_tenant.updated_at = now mock_tenant_svc.create_tenant.return_value = mock_tenant @@ -162,7 +163,7 @@ class TestEnterpriseWorkspaceNoOwnerEmail: mock_tenant.name = "My Workspace" mock_tenant.encrypt_public_key = "pub-key" mock_tenant.plan = "sandbox" - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_tenant.custom_config = None mock_tenant.created_at = now mock_tenant.updated_at = now diff --git a/api/tests/unit_tests/controllers/service_api/app/test_app.py b/api/tests/unit_tests/controllers/service_api/app/test_app.py index 1507bf7a5f..f48ace427d 100644 --- a/api/tests/unit_tests/controllers/service_api/app/test_app.py +++ b/api/tests/unit_tests/controllers/service_api/app/test_app.py @@ -10,6 +10,7 @@ from flask import Flask from controllers.service_api.app.app import AppInfoApi, AppMetaApi, AppParameterApi from controllers.service_api.app.error import AppUnavailableError +from models.account import TenantStatus from models.model import App, AppMode from tests.unit_tests.conftest import setup_mock_tenant_account_query @@ -62,7 +63,7 @@ class TestAppParameterApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL # Mock DB queries for app and tenant mock_db.session.get.side_effect = [ @@ -110,7 +111,7 @@ class TestAppParameterApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app_model, @@ -151,7 +152,7 @@ class TestAppParameterApi: mock_validate_token.return_value = mock_api_token 
mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app_model, @@ -190,7 +191,7 @@ class TestAppParameterApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app_model, @@ -253,7 +254,7 @@ class TestAppMetaApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app_model, @@ -321,7 +322,7 @@ class TestAppInfoApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app_model, @@ -378,7 +379,7 @@ class TestAppInfoApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app, @@ -424,7 +425,7 @@ class TestAppInfoApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app, @@ -476,7 +477,7 @@ class TestAppInfoApi: mock_validate_token.return_value = mock_api_token mock_tenant = Mock() - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_db.session.get.side_effect = [ mock_app, diff --git a/api/tests/unit_tests/controllers/service_api/app/test_audio.py b/api/tests/unit_tests/controllers/service_api/app/test_audio.py index 5a8cb4619f..c16ebad739 100644 --- a/api/tests/unit_tests/controllers/service_api/app/test_audio.py +++ b/api/tests/unit_tests/controllers/service_api/app/test_audio.py @@ -13,7 +13,6 @@ from types import SimpleNamespace from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.errors.invoke import InvokeError from werkzeug.datastructures import FileStorage from werkzeug.exceptions import InternalServerError @@ -30,6 +29,7 @@ from controllers.service_api.app.error import ( UnsupportedAudioTypeError, ) from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from services.audio_service import AudioService from services.errors.app_model_config import AppModelConfigBrokenError from services.errors.audio import ( @@ -95,30 +95,6 @@ class TestTextToAudioPayload: assert payload.streaming is True -# --------------------------------------------------------------------------- -# AudioService Interface Tests -# --------------------------------------------------------------------------- - - -class TestAudioServiceInterface: - """Test AudioService method interfaces exist.""" - - def test_transcript_asr_method_exists(self): - """Test that AudioService.transcript_asr exists.""" - assert hasattr(AudioService, "transcript_asr") - assert callable(AudioService.transcript_asr) - - def test_transcript_tts_method_exists(self): - """Test that AudioService.transcript_tts exists.""" - assert hasattr(AudioService, "transcript_tts") - assert callable(AudioService.transcript_tts) - - -# --------------------------------------------------------------------------- -# Audio Service Tests -# --------------------------------------------------------------------------- - - class TestAudioServiceInterface: 
"""Test suite for AudioService interface methods.""" diff --git a/api/tests/unit_tests/controllers/service_api/app/test_completion.py b/api/tests/unit_tests/controllers/service_api/app/test_completion.py index 57681d8f5b..3364c07e62 100644 --- a/api/tests/unit_tests/controllers/service_api/app/test_completion.py +++ b/api/tests/unit_tests/controllers/service_api/app/test_completion.py @@ -16,7 +16,6 @@ from types import SimpleNamespace from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.errors.invoke import InvokeError from pydantic import ValidationError from werkzeug.exceptions import BadRequest, NotFound @@ -35,6 +34,7 @@ from controllers.service_api.app.error import ( NotChatAppError, ) from core.errors.error import QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from models.model import App, AppMode, EndUser from services.app_generate_service import AppGenerateService from services.app_task_service import AppTaskService diff --git a/api/tests/unit_tests/controllers/service_api/app/test_conversation.py b/api/tests/unit_tests/controllers/service_api/app/test_conversation.py index dbd06677d8..14c35a9ed5 100644 --- a/api/tests/unit_tests/controllers/service_api/app/test_conversation.py +++ b/api/tests/unit_tests/controllers/service_api/app/test_conversation.py @@ -15,6 +15,7 @@ Focus on: import sys import uuid +from datetime import UTC, datetime from types import SimpleNamespace from unittest.mock import Mock, patch @@ -29,11 +30,14 @@ from controllers.service_api.app.conversation import ( ConversationRenameApi, ConversationRenamePayload, ConversationVariableDetailApi, + ConversationVariableInfiniteScrollPaginationResponse, + ConversationVariableResponse, ConversationVariablesApi, ConversationVariablesQuery, ConversationVariableUpdatePayload, ) from controllers.service_api.app.error import NotChatAppError +from graphon.variables.types import SegmentType from models.model import App, AppMode, EndUser from services.conversation_service import ConversationService from services.errors.conversation import ( @@ -261,6 +265,46 @@ class TestConversationVariableUpdatePayload: assert payload.value == nested +class TestConversationVariableResponseModels: + def test_variable_response_normalizes_value_type_and_timestamps(self): + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + response = ConversationVariableResponse.model_validate( + { + "id": "550e8400-e29b-41d4-a716-446655440000", + "name": "foo", + "value_type": SegmentType.INTEGER, + "value": 1, + "description": "desc", + "created_at": created_at, + "updated_at": created_at, + } + ) + assert response.value_type == "number" + assert response.value == "1" + assert response.created_at == int(created_at.timestamp()) + assert response.updated_at == int(created_at.timestamp()) + + def test_variable_pagination_response(self): + response = ConversationVariableInfiniteScrollPaginationResponse.model_validate( + { + "limit": 1, + "has_more": False, + "data": [ + { + "id": "550e8400-e29b-41d4-a716-446655440000", + "name": "foo", + "value_type": "string", + "value": "bar", + } + ], + } + ) + assert response.limit == 1 + assert response.has_more is False + assert len(response.data) == 1 + assert response.data[0].name == "foo" + + class TestConversationAppModeValidation: """Test app mode validation for conversation endpoints.""" @@ -549,6 +593,44 @@ class TestConversationVariablesApiController: with pytest.raises(NotFound): handler(api, app_model=app_model, end_user=end_user, 
c_id="00000000-0000-0000-0000-000000000001") + def test_success_serializes_response(self, app, monkeypatch: pytest.MonkeyPatch) -> None: + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + monkeypatch.setattr( + ConversationService, + "get_conversational_variable", + lambda *_args, **_kwargs: SimpleNamespace( + limit=1, + has_more=False, + data=[ + { + "id": "550e8400-e29b-41d4-a716-446655440000", + "name": "foo", + "value_type": SegmentType.INTEGER, + "value": 1, + "created_at": created_at, + "updated_at": created_at, + } + ], + ), + ) + + api = ConversationVariablesApi() + handler = _unwrap(api.get) + app_model = SimpleNamespace(mode=AppMode.CHAT.value) + end_user = SimpleNamespace() + + with app.test_request_context( + "/conversations/1/variables?limit=20", + method="GET", + ): + result = handler(api, app_model=app_model, end_user=end_user, c_id="00000000-0000-0000-0000-000000000001") + + assert result["limit"] == 1 + assert result["has_more"] is False + assert result["data"][0]["value_type"] == "number" + assert result["data"][0]["value"] == "1" + assert result["data"][0]["created_at"] == int(created_at.timestamp()) + class TestConversationVariableDetailApiController: def test_update_type_mismatch(self, app, monkeypatch: pytest.MonkeyPatch) -> None: @@ -602,3 +684,41 @@ class TestConversationVariableDetailApiController: c_id="00000000-0000-0000-0000-000000000001", variable_id="00000000-0000-0000-0000-000000000002", ) + + def test_update_success_serializes_response(self, app, monkeypatch: pytest.MonkeyPatch) -> None: + created_at = datetime(2026, 1, 2, 3, 4, 5, tzinfo=UTC) + monkeypatch.setattr( + ConversationService, + "update_conversation_variable", + lambda *_args, **_kwargs: { + "id": "550e8400-e29b-41d4-a716-446655440000", + "name": "foo", + "value_type": SegmentType.INTEGER, + "value": 1, + "created_at": created_at, + "updated_at": created_at, + }, + ) + + api = ConversationVariableDetailApi() + handler = _unwrap(api.put) + app_model = SimpleNamespace(mode=AppMode.CHAT.value) + end_user = SimpleNamespace() + + with app.test_request_context( + "/conversations/1/variables/2", + method="PUT", + json={"value": 1}, + ): + result = handler( + api, + app_model=app_model, + end_user=end_user, + c_id="00000000-0000-0000-0000-000000000001", + variable_id="00000000-0000-0000-0000-000000000002", + ) + + assert result["id"] == "550e8400-e29b-41d4-a716-446655440000" + assert result["value_type"] == "number" + assert result["value"] == "1" + assert result["created_at"] == int(created_at.timestamp()) diff --git a/api/tests/unit_tests/controllers/service_api/app/test_workflow.py b/api/tests/unit_tests/controllers/service_api/app/test_workflow.py index cfa21bf2dd..da09ec13ce 100644 --- a/api/tests/unit_tests/controllers/service_api/app/test_workflow.py +++ b/api/tests/unit_tests/controllers/service_api/app/test_workflow.py @@ -15,11 +15,11 @@ Focus on: import sys import uuid +from datetime import UTC, datetime from types import SimpleNamespace from unittest.mock import Mock, patch import pytest -from graphon.enums import WorkflowExecutionStatus from werkzeug.exceptions import BadRequest, NotFound from controllers.service_api.app.error import NotWorkflowAppError @@ -36,6 +36,7 @@ from controllers.service_api.app.workflow import ( WorkflowTaskStopApi, ) from controllers.web.error import InvokeRateLimitError as InvokeRateLimitHttpError +from graphon.enums import WorkflowExecutionStatus from models.model import App, AppMode from services.app_generate_service import AppGenerateService from 
services.errors.app import IsDraftWorkflowError, WorkflowNotFoundError @@ -43,6 +44,22 @@ from services.errors.llm import InvokeRateLimitError from services.workflow_app_service import WorkflowAppService +def _make_mock_workflow_run(run_id: str = "run-1"): + run = Mock() + run.id = run_id + run.workflow_id = "wf-1" + run.status = WorkflowExecutionStatus.SUCCEEDED + run.inputs = {"input": "value"} + run.outputs_dict = {"output": "value"} + run.error = None + run.total_steps = 1 + run.total_tokens = 10 + run.created_at = datetime(2026, 1, 1, tzinfo=UTC) + run.finished_at = datetime(2026, 1, 1, tzinfo=UTC) + run.elapsed_time = 0.1 + return run + + class TestWorkflowRunPayload: """Test suite for WorkflowRunPayload Pydantic model.""" @@ -359,7 +376,7 @@ class TestWorkflowRunDetailApi: handler(api, app_model=app_model, workflow_run_id="run") def test_success(self, monkeypatch: pytest.MonkeyPatch) -> None: - run = SimpleNamespace(id="run") + run = _make_mock_workflow_run(run_id="run") repo = SimpleNamespace(get_workflow_run_by_id=lambda **_kwargs: run) workflow_module = sys.modules["controllers.service_api.app.workflow"] monkeypatch.setattr(workflow_module, "db", SimpleNamespace(engine=object())) @@ -373,7 +390,10 @@ class TestWorkflowRunDetailApi: handler = _unwrap(api.get) app_model = SimpleNamespace(mode=AppMode.WORKFLOW.value, tenant_id="t1", id="a1") - assert handler(api, app_model=app_model, workflow_run_id="run") == run + result = handler(api, app_model=app_model, workflow_run_id="run") + assert result["id"] == "run" + assert result["workflow_id"] == "wf-1" + assert result["status"] == "succeeded" class TestWorkflowRunApi: @@ -490,7 +510,7 @@ class TestWorkflowAppLogApi: monkeypatch.setattr( WorkflowAppService, "get_paginate_workflow_app_logs", - lambda *_args, **_kwargs: {"items": [], "total": 0}, + lambda *_args, **_kwargs: {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []}, ) api = WorkflowAppLogApi() @@ -500,7 +520,7 @@ class TestWorkflowAppLogApi: with app.test_request_context("/workflows/logs", method="GET"): response = handler(api, app_model=app_model) - assert response == {"items": [], "total": 0} + assert response == {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []} # ============================================================================= @@ -527,9 +547,8 @@ def mock_workflow_app(): class TestWorkflowRunDetailApiGet: """Test suite for WorkflowRunDetailApi.get() endpoint. - ``get`` is wrapped by ``@validate_app_token`` (preserves ``__wrapped__``) - and ``@service_api_ns.marshal_with``. We call the unwrapped method - directly; ``marshal_with`` is a no-op when calling directly. + ``get`` is wrapped by ``@validate_app_token`` (preserves ``__wrapped__``), + and we call the unwrapped method directly in tests. 
""" @patch("controllers.service_api.app.workflow.DifyAPIRepositoryFactory") @@ -542,9 +561,7 @@ class TestWorkflowRunDetailApiGet: mock_workflow_app, ): """Test successful workflow run detail retrieval.""" - mock_run = Mock() - mock_run.id = "run-1" - mock_run.status = "succeeded" + mock_run = _make_mock_workflow_run(run_id="run-1") mock_repo = Mock() mock_repo.get_workflow_run_by_id.return_value = mock_run mock_repo_factory.create_api_workflow_run_repository.return_value = mock_repo @@ -558,7 +575,8 @@ class TestWorkflowRunDetailApiGet: api = WorkflowRunDetailApi() result = _unwrap(api.get)(api, app_model=mock_workflow_app, workflow_run_id=mock_run.id) - assert result == mock_run + assert result["id"] == mock_run.id + assert result["status"] == "succeeded" @patch("controllers.service_api.app.workflow.db") def test_get_workflow_run_wrong_app_mode(self, mock_db, app): @@ -622,8 +640,7 @@ class TestWorkflowTaskStopApiPost: class TestWorkflowAppLogApiGet: """Test suite for WorkflowAppLogApi.get() endpoint. - ``get`` is wrapped by ``@validate_app_token`` and - ``@service_api_ns.marshal_with``. + ``get`` is wrapped by ``@validate_app_token``. """ @patch("controllers.service_api.app.workflow.WorkflowAppService") @@ -637,6 +654,10 @@ class TestWorkflowAppLogApiGet: ): """Test successful workflow log retrieval.""" mock_pagination = Mock() + mock_pagination.page = 1 + mock_pagination.limit = 20 + mock_pagination.total = 0 + mock_pagination.has_more = False mock_pagination.data = [] mock_svc_instance = Mock() mock_svc_instance.get_paginate_workflow_app_logs.return_value = mock_pagination @@ -661,4 +682,4 @@ class TestWorkflowAppLogApiGet: api = WorkflowAppLogApi() result = _unwrap(api.get)(api, app_model=mock_workflow_app) - assert result == mock_pagination + assert result == {"page": 1, "limit": 20, "total": 0, "has_more": False, "data": []} diff --git a/api/tests/unit_tests/controllers/service_api/app/test_workflow_fields.py b/api/tests/unit_tests/controllers/service_api/app/test_workflow_fields.py index 4b8e3a738c..eda270258d 100644 --- a/api/tests/unit_tests/controllers/service_api/app/test_workflow_fields.py +++ b/api/tests/unit_tests/controllers/service_api/app/test_workflow_fields.py @@ -1,8 +1,7 @@ from types import SimpleNamespace -from graphon.enums import WorkflowExecutionStatus - from controllers.service_api.app.workflow import WorkflowRunOutputsField, WorkflowRunStatusField +from graphon.enums import WorkflowExecutionStatus def test_workflow_run_status_field_with_enum() -> None: diff --git a/api/tests/unit_tests/controllers/service_api/test_site.py b/api/tests/unit_tests/controllers/service_api/test_site.py deleted file mode 100644 index c0b40d070a..0000000000 --- a/api/tests/unit_tests/controllers/service_api/test_site.py +++ /dev/null @@ -1,270 +0,0 @@ -""" -Unit tests for Service API Site controller -""" - -import uuid -from unittest.mock import Mock, patch - -import pytest -from werkzeug.exceptions import Forbidden - -from controllers.service_api.app.site import AppSiteApi -from models.account import TenantStatus -from models.model import App, Site -from tests.unit_tests.conftest import setup_mock_tenant_account_query - - -class TestAppSiteApi: - """Test suite for AppSiteApi""" - - @pytest.fixture - def mock_app_model(self): - """Create a mock App model with tenant.""" - app = Mock(spec=App) - app.id = str(uuid.uuid4()) - app.tenant_id = str(uuid.uuid4()) - app.status = "normal" - app.enable_api = True - - mock_tenant = Mock() - mock_tenant.id = app.tenant_id - mock_tenant.status = 
TenantStatus.NORMAL - app.tenant = mock_tenant - - return app - - @pytest.fixture - def mock_site(self): - """Create a mock Site model.""" - site = Mock(spec=Site) - site.id = str(uuid.uuid4()) - site.app_id = str(uuid.uuid4()) - site.title = "Test Site" - site.icon = "icon-url" - site.icon_background = "#ffffff" - site.description = "Site description" - site.copyright = "Copyright 2024" - site.privacy_policy = "Privacy policy text" - site.custom_disclaimer = "Custom disclaimer" - site.default_language = "en-US" - site.prompt_public = True - site.show_workflow_steps = True - site.use_icon_as_answer_icon = False - site.chat_color_theme = "light" - site.chat_color_theme_inverted = False - site.icon_type = "image" - site.created_at = "2024-01-01T00:00:00" - site.updated_at = "2024-01-01T00:00:00" - return site - - @patch("controllers.service_api.wraps.user_logged_in") - @patch("controllers.service_api.app.site.db") - @patch("controllers.service_api.wraps.current_app") - @patch("controllers.service_api.wraps.validate_and_get_api_token") - @patch("controllers.service_api.wraps.db") - def test_get_site_success( - self, - mock_wraps_db, - mock_validate_token, - mock_current_app, - mock_db, - mock_user_logged_in, - app, - mock_app_model, - mock_site, - ): - """Test successful retrieval of site configuration.""" - # Arrange - mock_current_app.login_manager = Mock() - - # Mock authentication - mock_api_token = Mock() - mock_api_token.app_id = mock_app_model.id - mock_api_token.tenant_id = mock_app_model.tenant_id - mock_validate_token.return_value = mock_api_token - - mock_tenant = Mock() - mock_tenant.status = TenantStatus.NORMAL - mock_app_model.tenant = mock_tenant - - # Mock wraps.db for authentication - mock_wraps_db.session.get.side_effect = [ - mock_app_model, - mock_tenant, - ] - - mock_account = Mock() - mock_account.current_tenant = mock_tenant - setup_mock_tenant_account_query(mock_wraps_db, mock_tenant, mock_account) - - # Mock site.db for site query - mock_db.session.scalar.return_value = mock_site - - # Act - with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test_token"}): - api = AppSiteApi() - response = api.get() - - # Assert - assert response["title"] == "Test Site" - assert response["icon"] == "icon-url" - assert response["description"] == "Site description" - mock_db.session.scalar.assert_called_once() - - @patch("controllers.service_api.wraps.user_logged_in") - @patch("controllers.service_api.app.site.db") - @patch("controllers.service_api.wraps.current_app") - @patch("controllers.service_api.wraps.validate_and_get_api_token") - @patch("controllers.service_api.wraps.db") - def test_get_site_not_found( - self, - mock_wraps_db, - mock_validate_token, - mock_current_app, - mock_db, - mock_user_logged_in, - app, - mock_app_model, - ): - """Test that Forbidden is raised when site is not found.""" - # Arrange - mock_current_app.login_manager = Mock() - - # Mock authentication - mock_api_token = Mock() - mock_api_token.app_id = mock_app_model.id - mock_api_token.tenant_id = mock_app_model.tenant_id - mock_validate_token.return_value = mock_api_token - - mock_tenant = Mock() - mock_tenant.status = TenantStatus.NORMAL - mock_app_model.tenant = mock_tenant - - mock_wraps_db.session.get.side_effect = [ - mock_app_model, - mock_tenant, - ] - - mock_account = Mock() - mock_account.current_tenant = mock_tenant - setup_mock_tenant_account_query(mock_wraps_db, mock_tenant, mock_account) - - # Mock site query to return None - 
mock_db.session.scalar.return_value = None - - # Act & Assert - with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test_token"}): - api = AppSiteApi() - with pytest.raises(Forbidden): - api.get() - - @patch("controllers.service_api.wraps.user_logged_in") - @patch("controllers.service_api.app.site.db") - @patch("controllers.service_api.wraps.current_app") - @patch("controllers.service_api.wraps.validate_and_get_api_token") - @patch("controllers.service_api.wraps.db") - def test_get_site_tenant_archived( - self, - mock_wraps_db, - mock_validate_token, - mock_current_app, - mock_db, - mock_user_logged_in, - app, - mock_app_model, - mock_site, - ): - """Test that Forbidden is raised when tenant is archived.""" - # Arrange - mock_current_app.login_manager = Mock() - - # Mock authentication - mock_api_token = Mock() - mock_api_token.app_id = mock_app_model.id - mock_api_token.tenant_id = mock_app_model.tenant_id - mock_validate_token.return_value = mock_api_token - - mock_tenant = Mock() - mock_tenant.status = TenantStatus.NORMAL - - mock_wraps_db.session.get.side_effect = [ - mock_app_model, - mock_tenant, - ] - - mock_account = Mock() - mock_account.current_tenant = mock_tenant - setup_mock_tenant_account_query(mock_wraps_db, mock_tenant, mock_account) - - # Mock site query - mock_db.session.scalar.return_value = mock_site - - # Set tenant status to archived AFTER authentication - mock_app_model.tenant.status = TenantStatus.ARCHIVE - - # Act & Assert - with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test_token"}): - api = AppSiteApi() - with pytest.raises(Forbidden): - api.get() - - @patch("controllers.service_api.wraps.user_logged_in") - @patch("controllers.service_api.app.site.db") - @patch("controllers.service_api.wraps.current_app") - @patch("controllers.service_api.wraps.validate_and_get_api_token") - @patch("controllers.service_api.wraps.db") - def test_get_site_queries_by_app_id( - self, mock_wraps_db, mock_validate_token, mock_current_app, mock_db, mock_user_logged_in, app, mock_app_model - ): - """Test that site is queried using the app model's id.""" - # Arrange - mock_current_app.login_manager = Mock() - - # Mock authentication - mock_api_token = Mock() - mock_api_token.app_id = mock_app_model.id - mock_api_token.tenant_id = mock_app_model.tenant_id - mock_validate_token.return_value = mock_api_token - - mock_tenant = Mock() - mock_tenant.status = TenantStatus.NORMAL - mock_app_model.tenant = mock_tenant - - mock_wraps_db.session.get.side_effect = [ - mock_app_model, - mock_tenant, - ] - - mock_account = Mock() - mock_account.current_tenant = mock_tenant - setup_mock_tenant_account_query(mock_wraps_db, mock_tenant, mock_account) - - mock_site = Mock(spec=Site) - mock_site.id = str(uuid.uuid4()) - mock_site.app_id = mock_app_model.id - mock_site.title = "Test Site" - mock_site.icon = "icon-url" - mock_site.icon_background = "#ffffff" - mock_site.description = "Site description" - mock_site.copyright = "Copyright 2024" - mock_site.privacy_policy = "Privacy policy text" - mock_site.custom_disclaimer = "Custom disclaimer" - mock_site.default_language = "en-US" - mock_site.prompt_public = True - mock_site.show_workflow_steps = True - mock_site.use_icon_as_answer_icon = False - mock_site.chat_color_theme = "light" - mock_site.chat_color_theme_inverted = False - mock_site.icon_type = "image" - mock_site.created_at = "2024-01-01T00:00:00" - mock_site.updated_at = "2024-01-01T00:00:00" - 
mock_db.session.scalar.return_value = mock_site - - # Act - with app.test_request_context("/site", method="GET", headers={"Authorization": "Bearer test_token"}): - api = AppSiteApi() - api.get() - - # Assert - # The query was executed successfully (site returned), which validates the correct query was made - mock_db.session.scalar.assert_called_once() diff --git a/api/tests/unit_tests/controllers/web/test_audio.py b/api/tests/unit_tests/controllers/web/test_audio.py index cbfc8fa613..a6ca441801 100644 --- a/api/tests/unit_tests/controllers/web/test_audio.py +++ b/api/tests/unit_tests/controllers/web/test_audio.py @@ -8,7 +8,6 @@ from unittest.mock import MagicMock, patch import pytest from flask import Flask -from graphon.model_runtime.errors.invoke import InvokeError from controllers.web.audio import AudioApi, TextApi from controllers.web.error import ( @@ -22,6 +21,7 @@ from controllers.web.error import ( UnsupportedAudioTypeError, ) from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError from services.errors.audio import ( AudioTooLargeServiceError, NoAudioUploadedServiceError, diff --git a/api/tests/unit_tests/controllers/web/test_completion.py b/api/tests/unit_tests/controllers/web/test_completion.py index 49039d03fe..4f8d848637 100644 --- a/api/tests/unit_tests/controllers/web/test_completion.py +++ b/api/tests/unit_tests/controllers/web/test_completion.py @@ -7,7 +7,6 @@ from unittest.mock import MagicMock, patch import pytest from flask import Flask -from graphon.model_runtime.errors.invoke import InvokeError from controllers.web.completion import ChatApi, ChatStopApi, CompletionApi, CompletionStopApi from controllers.web.error import ( @@ -19,6 +18,7 @@ from controllers.web.error import ( ProviderQuotaExceededError, ) from core.errors.error import ModelCurrentlyNotSupportError, ProviderTokenNotInitError, QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeError def _completion_app() -> SimpleNamespace: diff --git a/api/tests/unit_tests/controllers/web/test_message_endpoints.py b/api/tests/unit_tests/controllers/web/test_message_endpoints.py index 89ab93d8d4..da88b109a8 100644 --- a/api/tests/unit_tests/controllers/web/test_message_endpoints.py +++ b/api/tests/unit_tests/controllers/web/test_message_endpoints.py @@ -129,12 +129,6 @@ class TestMessageSuggestedQuestionApi: with pytest.raises(NotChatAppError): MessageSuggestedQuestionApi().get(_completion_app(), _end_user(), msg_id) - def test_wrong_mode_raises(self, app: Flask) -> None: - msg_id = uuid4() - with app.test_request_context(f"/messages/{msg_id}/suggested-questions"): - with pytest.raises(NotChatAppError): - MessageSuggestedQuestionApi().get(_completion_app(), _end_user(), msg_id) - @patch("controllers.web.message.MessageService.get_suggested_questions_after_answer") def test_happy_path(self, mock_suggest: MagicMock, app: Flask) -> None: msg_id = uuid4() diff --git a/api/tests/unit_tests/core/agent/test_cot_agent_runner.py b/api/tests/unit_tests/core/agent/test_cot_agent_runner.py index bc7aea0ef9..cde8820e00 100644 --- a/api/tests/unit_tests/core/agent/test_cot_agent_runner.py +++ b/api/tests/unit_tests/core/agent/test_cot_agent_runner.py @@ -2,11 +2,11 @@ import json from unittest.mock import MagicMock import pytest -from graphon.model_runtime.entities.llm_entities import LLMUsage from core.agent.cot_agent_runner import CotAgentRunner from core.agent.entities import AgentScratchpadUnit 
from core.agent.errors import AgentMaxIterationError +from graphon.model_runtime.entities.llm_entities import LLMUsage class DummyRunner(CotAgentRunner): diff --git a/api/tests/unit_tests/core/agent/test_cot_chat_agent_runner.py b/api/tests/unit_tests/core/agent/test_cot_chat_agent_runner.py index 97206019b9..ea8cc8aa86 100644 --- a/api/tests/unit_tests/core/agent/test_cot_chat_agent_runner.py +++ b/api/tests/unit_tests/core/agent/test_cot_chat_agent_runner.py @@ -1,9 +1,9 @@ from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.entities.message_entities import TextPromptMessageContent from core.agent.cot_chat_agent_runner import CotChatAgentRunner +from graphon.model_runtime.entities.message_entities import TextPromptMessageContent from tests.unit_tests.core.agent.conftest import ( DummyAgentConfig, DummyAppConfig, diff --git a/api/tests/unit_tests/core/agent/test_cot_completion_agent_runner.py b/api/tests/unit_tests/core/agent/test_cot_completion_agent_runner.py index defc8b4b64..2f5873d865 100644 --- a/api/tests/unit_tests/core/agent/test_cot_completion_agent_runner.py +++ b/api/tests/unit_tests/core/agent/test_cot_completion_agent_runner.py @@ -1,6 +1,8 @@ import json import pytest + +from core.agent.cot_completion_agent_runner import CotCompletionAgentRunner from graphon.model_runtime.entities.message_entities import ( AssistantPromptMessage, ImagePromptMessageContent, @@ -8,8 +10,6 @@ from graphon.model_runtime.entities.message_entities import ( UserPromptMessage, ) -from core.agent.cot_completion_agent_runner import CotCompletionAgentRunner - # ----------------------------- # Fixtures # ----------------------------- diff --git a/api/tests/unit_tests/core/agent/test_fc_agent_runner.py b/api/tests/unit_tests/core/agent/test_fc_agent_runner.py index a44a0650eb..17ab5babcb 100644 --- a/api/tests/unit_tests/core/agent/test_fc_agent_runner.py +++ b/api/tests/unit_tests/core/agent/test_fc_agent_runner.py @@ -3,6 +3,11 @@ from typing import Any from unittest.mock import MagicMock import pytest + +from core.agent.errors import AgentMaxIterationError +from core.agent.fc_agent_runner import FunctionCallAgentRunner +from core.app.apps.base_app_queue_manager import PublishFrom +from core.app.entities.queue_entities import QueueMessageFileEvent from graphon.model_runtime.entities.llm_entities import LLMUsage from graphon.model_runtime.entities.message_entities import ( DocumentPromptMessageContent, @@ -11,11 +16,6 @@ from graphon.model_runtime.entities.message_entities import ( UserPromptMessage, ) -from core.agent.errors import AgentMaxIterationError -from core.agent.fc_agent_runner import FunctionCallAgentRunner -from core.app.apps.base_app_queue_manager import PublishFrom -from core.app.entities.queue_entities import QueueMessageFileEvent - # ============================== # Dummy Helper Classes # ============================== diff --git a/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_model_config_converter.py b/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_model_config_converter.py index 5ee66da94a..186b4a501d 100644 --- a/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_model_config_converter.py +++ b/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_model_config_converter.py @@ -2,8 +2,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.entities.model_entities import 
ModelPropertyKey from core.app.app_config.easy_ui_based_app.model_config.converter import ModelConfigConverter from core.entities.model_entities import ModelStatus @@ -12,6 +10,8 @@ from core.errors.error import ( ProviderTokenNotInitError, QuotaExceededError, ) +from graphon.model_runtime.entities.llm_entities import LLMMode +from graphon.model_runtime.entities.model_entities import ModelPropertyKey class TestModelConfigConverter: diff --git a/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_variables_manager.py b/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_variables_manager.py index e2f3c16335..d9fe7004ff 100644 --- a/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_variables_manager.py +++ b/api/tests/unit_tests/core/app/app_config/easy_ui_based_app/test_variables_manager.py @@ -1,9 +1,9 @@ import pytest -from graphon.variables.input_entities import VariableEntityType from core.app.app_config.easy_ui_based_app.variables.manager import ( BasicVariablesConfigManager, ) +from graphon.variables.input_entities import VariableEntityType class TestBasicVariablesConfigManagerConvert: diff --git a/api/tests/unit_tests/core/app/app_config/features/file_upload/test_manager.py b/api/tests/unit_tests/core/app/app_config/features/file_upload/test_manager.py index 8bde9c1f97..11b53dd0f9 100644 --- a/api/tests/unit_tests/core/app/app_config/features/file_upload/test_manager.py +++ b/api/tests/unit_tests/core/app/app_config/features/file_upload/test_manager.py @@ -1,8 +1,7 @@ +from core.app.app_config.features.file_upload.manager import FileUploadConfigManager from graphon.file import FileTransferMethod, FileUploadConfig, ImageConfig from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent -from core.app.app_config.features.file_upload.manager import FileUploadConfigManager - def test_convert_with_vision(): config = { diff --git a/api/tests/unit_tests/core/app/app_config/test_entities.py b/api/tests/unit_tests/core/app/app_config/test_entities.py index 000f83cd5a..f2bc3076da 100644 --- a/api/tests/unit_tests/core/app/app_config/test_entities.py +++ b/api/tests/unit_tests/core/app/app_config/test_entities.py @@ -1,10 +1,10 @@ import pytest -from graphon.variables.input_entities import VariableEntity, VariableEntityType from core.app.app_config.entities import ( DatasetRetrieveConfigEntity, PromptTemplateEntity, ) +from graphon.variables.input_entities import VariableEntity, VariableEntityType class TestAppConfigEntities: diff --git a/api/tests/unit_tests/core/app/apps/advanced_chat/test_app_runner_conversation_variables.py b/api/tests/unit_tests/core/app/apps/advanced_chat/test_app_runner_conversation_variables.py index 1fb0dc6cf1..45d4b0e321 100644 --- a/api/tests/unit_tests/core/app/apps/advanced_chat/test_app_runner_conversation_variables.py +++ b/api/tests/unit_tests/core/app/apps/advanced_chat/test_app_runner_conversation_variables.py @@ -3,12 +3,12 @@ from unittest.mock import MagicMock, patch from uuid import uuid4 -from graphon.variables import SegmentType from sqlalchemy.orm import Session from core.app.apps.advanced_chat.app_runner import AdvancedChatAppRunner from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, InvokeFrom from factories import variable_factory +from graphon.variables import SegmentType from models import ConversationVariable, Workflow MINIMAL_GRAPH = { diff --git a/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_response_converter.py 
b/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_response_converter.py index e9fdeefee4..f2df35d7d0 100644 --- a/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_response_converter.py +++ b/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_response_converter.py @@ -1,7 +1,5 @@ from collections.abc import Generator -from graphon.enums import WorkflowNodeExecutionStatus - from core.app.apps.advanced_chat.generate_response_converter import AdvancedChatAppGenerateResponseConverter from core.app.entities.task_entities import ( ChatbotAppBlockingResponse, @@ -12,6 +10,7 @@ from core.app.entities.task_entities import ( NodeStartStreamResponse, PingStreamResponse, ) +from graphon.enums import WorkflowNodeExecutionStatus class TestAdvancedChatGenerateResponseConverter: diff --git a/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline.py b/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline.py index a6d8598955..99a386cd45 100644 --- a/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline.py +++ b/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline.py @@ -6,8 +6,6 @@ from types import SimpleNamespace from unittest import mock import pytest -from graphon.entities.pause_reason import HumanInputRequired -from graphon.enums import WorkflowExecutionStatus from core.app.apps.advanced_chat import generate_task_pipeline as pipeline_module from core.app.entities.app_invoke_entities import InvokeFrom @@ -19,6 +17,8 @@ from core.app.entities.queue_entities import ( QueueWorkflowSucceededEvent, ) from core.app.entities.task_entities import StreamEvent +from graphon.entities.pause_reason import HumanInputRequired +from graphon.enums import WorkflowExecutionStatus from models.enums import MessageStatus from models.execution_extra_content import HumanInputContent from models.model import AppMode, EndUser diff --git a/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline_core.py b/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline_core.py index 82b2e51019..29fd63c063 100644 --- a/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline_core.py +++ b/api/tests/unit_tests/core/app/apps/advanced_chat/test_generate_task_pipeline_core.py @@ -4,8 +4,6 @@ from contextlib import contextmanager from types import SimpleNamespace import pytest -from graphon.enums import BuiltinNodeTypes -from graphon.runtime import GraphRuntimeState, VariablePool from core.app.app_config.entities import AppAdditionalFeatures, WorkflowUIBasedAppConfig from core.app.apps.advanced_chat.generate_task_pipeline import ( @@ -49,6 +47,8 @@ from core.app.entities.task_entities import ( ) from core.base.tts.app_generator_tts_publisher import AudioTrunk from core.workflow.system_variables import build_system_variables +from graphon.enums import BuiltinNodeTypes +from graphon.runtime import GraphRuntimeState, VariablePool from libs.datetime_utils import naive_utc_now from models.enums import MessageStatus from models.model import AppMode, EndUser diff --git a/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_generator.py b/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_generator.py index 7dc4358150..80f7f94b1a 100644 --- a/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_generator.py +++ b/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_generator.py @@ -1,12 +1,12 @@ import 
contextlib import pytest -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from pydantic import ValidationError from core.app.apps.agent_chat.app_generator import AgentChatAppGenerator from core.app.apps.exc import GenerateTaskStoppedError from core.app.entities.app_invoke_entities import InvokeFrom +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError class DummyAccount: diff --git a/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_runner.py b/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_runner.py index 08250bc3b6..4567b35480 100644 --- a/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_runner.py +++ b/api/tests/unit_tests/core/app/apps/agent_chat/test_agent_chat_app_runner.py @@ -1,10 +1,10 @@ import pytest -from graphon.model_runtime.entities.llm_entities import LLMMode -from graphon.model_runtime.entities.model_entities import ModelFeature, ModelPropertyKey from core.agent.entities import AgentEntity from core.app.apps.agent_chat.app_runner import AgentChatAppRunner from core.moderation.base import ModerationError +from graphon.model_runtime.entities.llm_entities import LLMMode +from graphon.model_runtime.entities.model_entities import ModelFeature, ModelPropertyKey @pytest.fixture diff --git a/api/tests/unit_tests/core/app/apps/chat/test_app_generator_and_runner.py b/api/tests/unit_tests/core/app/apps/chat/test_app_generator_and_runner.py index 68bcffb0e8..8f3c41701b 100644 --- a/api/tests/unit_tests/core/app/apps/chat/test_app_generator_and_runner.py +++ b/api/tests/unit_tests/core/app/apps/chat/test_app_generator_and_runner.py @@ -2,7 +2,6 @@ from types import SimpleNamespace from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from core.app.apps.chat.app_generator import ChatAppGenerator from core.app.apps.chat.app_runner import ChatAppRunner @@ -10,6 +9,7 @@ from core.app.apps.exc import GenerateTaskStoppedError from core.app.entities.app_invoke_entities import InvokeFrom from core.app.entities.queue_entities import QueueAnnotationReplyEvent from core.moderation.base import ModerationError +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from models.model import AppMode diff --git a/api/tests/unit_tests/core/app/apps/chat/test_base_app_runner_multimodal.py b/api/tests/unit_tests/core/app/apps/chat/test_base_app_runner_multimodal.py index f255d2c7df..b3ea1a464f 100644 --- a/api/tests/unit_tests/core/app/apps/chat/test_base_app_runner_multimodal.py +++ b/api/tests/unit_tests/core/app/apps/chat/test_base_app_runner_multimodal.py @@ -4,13 +4,13 @@ from unittest.mock import MagicMock, patch from uuid import uuid4 import pytest -from graphon.file import FileTransferMethod, FileType -from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent from core.app.apps.base_app_queue_manager import PublishFrom from core.app.apps.base_app_runner import AppRunner from core.app.entities.app_invoke_entities import InvokeFrom from core.app.entities.queue_entities import QueueMessageFileEvent +from graphon.file import FileTransferMethod, FileType +from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent from models.enums import CreatorUserRole diff --git a/api/tests/unit_tests/core/app/apps/common/test_graph_runtime_state_support.py b/api/tests/unit_tests/core/app/apps/common/test_graph_runtime_state_support.py index 4a94a2b4f1..201923e0e4 100644 --- 
a/api/tests/unit_tests/core/app/apps/common/test_graph_runtime_state_support.py +++ b/api/tests/unit_tests/core/app/apps/common/test_graph_runtime_state_support.py @@ -1,11 +1,11 @@ from types import SimpleNamespace import pytest -from graphon.runtime import GraphRuntimeState, VariablePool from core.app.apps.common.graph_runtime_state_support import GraphRuntimeStateSupport from core.workflow.system_variables import build_system_variables from core.workflow.variable_pool_initializer import add_variables_to_pool +from graphon.runtime import GraphRuntimeState, VariablePool def _make_state(workflow_run_id: str | None) -> GraphRuntimeState: diff --git a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter.py b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter.py index 328cd12f12..3ab63aed25 100644 --- a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter.py +++ b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter.py @@ -1,10 +1,9 @@ from collections.abc import Mapping, Sequence +from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter from graphon.file import FILE_MODEL_IDENTITY, File, FileTransferMethod, FileType from graphon.variables.segments import ArrayFileSegment, FileSegment -from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter - class TestWorkflowResponseConverterFetchFilesFromVariableValue: """Test class for WorkflowResponseConverter._fetch_files_from_variable_value method""" diff --git a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_human_input.py b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_human_input.py index bc11bf4174..1bef6f69cd 100644 --- a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_human_input.py +++ b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_human_input.py @@ -1,13 +1,12 @@ from datetime import UTC, datetime from types import SimpleNamespace -from graphon.entities import WorkflowStartReason -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter from core.app.entities.app_invoke_entities import InvokeFrom from core.app.entities.queue_entities import QueueHumanInputFormFilledEvent, QueueHumanInputFormTimeoutEvent from core.workflow.system_variables import build_system_variables +from graphon.entities import WorkflowStartReason +from graphon.runtime import GraphRuntimeState, VariablePool def _build_converter(): diff --git a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_resumption.py b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_resumption.py index c9e146ff12..936ac37e55 100644 --- a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_resumption.py +++ b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_resumption.py @@ -1,11 +1,10 @@ from types import SimpleNamespace -from graphon.entities import WorkflowStartReason -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter from core.app.entities.app_invoke_entities import InvokeFrom from core.workflow.system_variables import build_system_variables +from graphon.entities import WorkflowStartReason +from graphon.runtime import GraphRuntimeState, VariablePool 
def _build_converter() -> WorkflowResponseConverter: diff --git a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_truncation.py b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_truncation.py index 0fde7565d2..b3c0eb74fa 100644 --- a/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_truncation.py +++ b/api/tests/unit_tests/core/app/apps/common/test_workflow_response_converter_truncation.py @@ -10,8 +10,6 @@ from typing import Any from unittest.mock import Mock import pytest -from graphon.entities import WorkflowStartReason -from graphon.enums import BuiltinNodeTypes from core.app.app_config.entities import WorkflowUIBasedAppConfig from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter @@ -27,6 +25,8 @@ from core.app.entities.queue_entities import ( QueueNodeSucceededEvent, ) from core.workflow.system_variables import build_system_variables +from graphon.entities import WorkflowStartReason +from graphon.enums import BuiltinNodeTypes from libs.datetime_utils import naive_utc_now from models import Account from models.model import AppMode diff --git a/api/tests/unit_tests/core/app/apps/completion/test_app_runner.py b/api/tests/unit_tests/core/app/apps/completion/test_app_runner.py index 619d66085a..aa2085177e 100644 --- a/api/tests/unit_tests/core/app/apps/completion/test_app_runner.py +++ b/api/tests/unit_tests/core/app/apps/completion/test_app_runner.py @@ -2,11 +2,11 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent import core.app.apps.completion.app_runner as module from core.app.apps.completion.app_runner import CompletionAppRunner from core.moderation.base import ModerationError +from graphon.model_runtime.entities.message_entities import ImagePromptMessageContent @pytest.fixture diff --git a/api/tests/unit_tests/core/app/apps/completion/test_completion_completion_app_generator.py b/api/tests/unit_tests/core/app/apps/completion/test_completion_completion_app_generator.py index 96af9fbdee..f2e35f9900 100644 --- a/api/tests/unit_tests/core/app/apps/completion/test_completion_completion_app_generator.py +++ b/api/tests/unit_tests/core/app/apps/completion/test_completion_completion_app_generator.py @@ -3,13 +3,13 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from pydantic import ValidationError import core.app.apps.completion.app_generator as module from core.app.apps.completion.app_generator import CompletionAppGenerator from core.app.apps.exc import GenerateTaskStoppedError from core.app.entities.app_invoke_entities import InvokeFrom +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError from services.errors.app import MoreLikeThisDisabledError from services.errors.message import MessageNotExistsError diff --git a/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_generate_response_converter.py b/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_generate_response_converter.py index 6cdcab29ab..cfe797aa76 100644 --- a/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_generate_response_converter.py +++ b/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_generate_response_converter.py @@ -1,7 +1,5 @@ from collections.abc import Generator -from graphon.enums import WorkflowExecutionStatus, 
WorkflowNodeExecutionStatus - from core.app.apps.pipeline.generate_response_converter import WorkflowAppGenerateResponseConverter from core.app.entities.task_entities import ( AppStreamResponse, @@ -12,6 +10,7 @@ from core.app.entities.task_entities import ( WorkflowAppBlockingResponse, WorkflowAppStreamResponse, ) +from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus def test_convert_blocking_full_and_simple_response(): diff --git a/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_queue_manager.py b/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_queue_manager.py index 4fe82efcb3..9db83f5531 100644 --- a/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_queue_manager.py +++ b/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_queue_manager.py @@ -1,5 +1,4 @@ import pytest -from graphon.model_runtime.entities.llm_entities import LLMResult import core.app.apps.pipeline.pipeline_queue_manager as module from core.app.apps.base_app_queue_manager import PublishFrom @@ -14,6 +13,7 @@ from core.app.entities.queue_entities import ( QueueWorkflowPartialSuccessEvent, QueueWorkflowSucceededEvent, ) +from graphon.model_runtime.entities.llm_entities import LLMResult def test_publish_sets_stop_listen_and_raises_on_stopped(mocker): diff --git a/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_runner.py b/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_runner.py index c8ae288e6f..618c8fd76f 100644 --- a/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_runner.py +++ b/api/tests/unit_tests/core/app/apps/pipeline/test_pipeline_runner.py @@ -22,11 +22,11 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.graph_events import GraphRunFailedEvent import core.app.apps.pipeline.pipeline_runner as module from core.app.apps.pipeline.pipeline_runner import PipelineRunner from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from graphon.graph_events import GraphRunFailedEvent def _build_app_generate_entity() -> SimpleNamespace: diff --git a/api/tests/unit_tests/core/app/apps/test_base_app_generator.py b/api/tests/unit_tests/core/app/apps/test_base_app_generator.py index 6167be3bbd..b0f8b423e1 100644 --- a/api/tests/unit_tests/core/app/apps/test_base_app_generator.py +++ b/api/tests/unit_tests/core/app/apps/test_base_app_generator.py @@ -1,7 +1,7 @@ import pytest -from graphon.variables.input_entities import VariableEntity, VariableEntityType from core.app.apps.base_app_generator import BaseAppGenerator +from graphon.variables.input_entities import VariableEntity, VariableEntityType def test_validate_inputs_with_zero(): @@ -476,9 +476,8 @@ class TestBaseAppGeneratorExtras: assert converted[1] == "event: ping\n\n" def test_get_draft_var_saver_factory_debugger(self): - from graphon.enums import BuiltinNodeTypes - from core.app.entities.app_invoke_entities import InvokeFrom + from graphon.enums import BuiltinNodeTypes from models import Account base_app_generator = BaseAppGenerator() diff --git a/api/tests/unit_tests/core/app/apps/test_base_app_runner.py b/api/tests/unit_tests/core/app/apps/test_base_app_runner.py index 1dee7fdab6..17de39ca99 100644 --- a/api/tests/unit_tests/core/app/apps/test_base_app_runner.py +++ b/api/tests/unit_tests/core/app/apps/test_base_app_runner.py @@ -4,15 +4,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, 
LLMResultChunkDelta, LLMUsage -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - ImagePromptMessageContent, - PromptMessageRole, - TextPromptMessageContent, -) -from graphon.model_runtime.entities.model_entities import ModelPropertyKey -from graphon.model_runtime.errors.invoke import InvokeBadRequestError from core.app.app_config.entities import ( AdvancedChatMessageEntity, @@ -23,6 +14,15 @@ from core.app.app_config.entities import ( from core.app.apps.base_app_runner import AppRunner from core.app.entities.app_invoke_entities import InvokeFrom from core.app.entities.queue_entities import QueueAgentMessageEvent, QueueLLMChunkEvent, QueueMessageEndEvent +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + ImagePromptMessageContent, + PromptMessageRole, + TextPromptMessageContent, +) +from graphon.model_runtime.entities.model_entities import ModelPropertyKey +from graphon.model_runtime.errors.invoke import InvokeBadRequestError from models.model import AppMode diff --git a/api/tests/unit_tests/core/app/apps/test_pause_resume.py b/api/tests/unit_tests/core/app/apps/test_pause_resume.py index a126bc85f7..a04a7b7576 100644 --- a/api/tests/unit_tests/core/app/apps/test_pause_resume.py +++ b/api/tests/unit_tests/core/app/apps/test_pause_resume.py @@ -4,6 +4,11 @@ from types import ModuleType, SimpleNamespace from typing import Any import graphon.nodes.human_input.entities # noqa: F401 +from core.app.apps.advanced_chat import app_generator as adv_app_gen_module +from core.app.apps.workflow import app_generator as wf_app_gen_module +from core.app.entities.app_invoke_entities import InvokeFrom +from core.workflow.node_factory import DifyNodeFactory +from core.workflow.system_variables import build_system_variables from graphon.entities import WorkflowStartReason from graphon.entities.base_node_data import BaseNodeData, RetryConfig from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter @@ -25,12 +30,6 @@ from graphon.nodes.base.node import Node from graphon.nodes.end.entities import EndNodeData from graphon.nodes.start.entities import StartNodeData from graphon.runtime import GraphRuntimeState, VariablePool - -from core.app.apps.advanced_chat import app_generator as adv_app_gen_module -from core.app.apps.workflow import app_generator as wf_app_gen_module -from core.app.entities.app_invoke_entities import InvokeFrom -from core.workflow.node_factory import DifyNodeFactory -from core.workflow.system_variables import build_system_variables from tests.workflow_test_utils import build_test_graph_init_params if "core.ops.ops_trace_manager" not in sys.modules: diff --git a/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_core.py b/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_core.py index de5bca161c..58c7bfa4bc 100644 --- a/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_core.py +++ b/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_core.py @@ -4,6 +4,23 @@ from datetime import UTC, datetime from types import SimpleNamespace import pytest + +from core.app.apps.workflow_app_runner import WorkflowBasedAppRunner +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from core.app.entities.queue_entities import ( + QueueAgentLogEvent, + QueueIterationCompletedEvent, + QueueLoopCompletedEvent, + QueueNodeExceptionEvent, + 
QueueNodeFailedEvent, + QueueNodeRetryEvent, + QueueNodeSucceededEvent, + QueueTextChunkEvent, + QueueWorkflowPausedEvent, + QueueWorkflowStartedEvent, + QueueWorkflowSucceededEvent, +) +from core.workflow.system_variables import default_system_variables from graphon.entities.pause_reason import HumanInputRequired from graphon.enums import BuiltinNodeTypes from graphon.graph_events import ( @@ -24,23 +41,6 @@ from graphon.node_events import NodeRunResult from graphon.runtime import GraphRuntimeState, VariablePool from graphon.variables.variables import StringVariable -from core.app.apps.workflow_app_runner import WorkflowBasedAppRunner -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom -from core.app.entities.queue_entities import ( - QueueAgentLogEvent, - QueueIterationCompletedEvent, - QueueLoopCompletedEvent, - QueueNodeExceptionEvent, - QueueNodeFailedEvent, - QueueNodeRetryEvent, - QueueNodeSucceededEvent, - QueueTextChunkEvent, - QueueWorkflowPausedEvent, - QueueWorkflowStartedEvent, - QueueWorkflowSucceededEvent, -) -from core.workflow.system_variables import default_system_variables - class TestWorkflowBasedAppRunner: def test_resolve_user_from(self): diff --git a/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_notifications.py b/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_notifications.py index aa789d9ff3..10fb2271f4 100644 --- a/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_notifications.py +++ b/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_notifications.py @@ -1,11 +1,11 @@ from unittest.mock import MagicMock import pytest -from graphon.entities.pause_reason import HumanInputRequired -from graphon.graph_events import GraphRunPausedEvent from core.app.apps.workflow_app_runner import WorkflowBasedAppRunner from core.app.entities.queue_entities import QueueWorkflowPausedEvent +from graphon.entities.pause_reason import HumanInputRequired +from graphon.graph_events import GraphRunPausedEvent class _DummyQueueManager: diff --git a/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_single_node.py b/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_single_node.py index 9e30faecf2..620a153204 100644 --- a/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_single_node.py +++ b/api/tests/unit_tests/core/app/apps/test_workflow_app_runner_single_node.py @@ -4,14 +4,14 @@ from typing import Any from unittest.mock import MagicMock, patch import pytest -from graphon.entities.graph_config import NodeConfigDictAdapter -from graphon.runtime import GraphRuntimeState, VariablePool from core.app.apps.base_app_queue_manager import AppQueueManager from core.app.apps.workflow.app_runner import WorkflowAppRunner from core.app.apps.workflow_app_runner import WorkflowBasedAppRunner from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity from core.workflow.system_variables import default_system_variables +from graphon.entities.graph_config import NodeConfigDictAdapter +from graphon.runtime import GraphRuntimeState, VariablePool from models.workflow import Workflow diff --git a/api/tests/unit_tests/core/app/apps/test_workflow_pause_events.py b/api/tests/unit_tests/core/app/apps/test_workflow_pause_events.py index 8a717e1dcc..a3ab379b66 100644 --- a/api/tests/unit_tests/core/app/apps/test_workflow_pause_events.py +++ b/api/tests/unit_tests/core/app/apps/test_workflow_pause_events.py @@ -3,11 +3,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock 
import pytest -from graphon.entities import WorkflowStartReason -from graphon.entities.pause_reason import HumanInputRequired -from graphon.graph_events import GraphRunPausedEvent -from graphon.nodes.human_input.entities import FormInput, UserAction -from graphon.nodes.human_input.enums import FormInputType from core.app.apps.common import workflow_response_converter from core.app.apps.common.workflow_response_converter import WorkflowResponseConverter @@ -16,6 +11,11 @@ from core.app.entities.app_invoke_entities import InvokeFrom from core.app.entities.queue_entities import QueueWorkflowPausedEvent from core.app.entities.task_entities import HumanInputRequiredResponse, WorkflowPauseStreamResponse from core.workflow.system_variables import build_system_variables +from graphon.entities import WorkflowStartReason +from graphon.entities.pause_reason import HumanInputRequired +from graphon.graph_events import GraphRunPausedEvent +from graphon.nodes.human_input.entities import FormInput, UserAction +from graphon.nodes.human_input.enums import FormInputType from models.account import Account from models.human_input import RecipientType diff --git a/api/tests/unit_tests/core/app/apps/workflow/test_generate_response_converter.py b/api/tests/unit_tests/core/app/apps/workflow/test_generate_response_converter.py index b768e813bd..7dd7ffd727 100644 --- a/api/tests/unit_tests/core/app/apps/workflow/test_generate_response_converter.py +++ b/api/tests/unit_tests/core/app/apps/workflow/test_generate_response_converter.py @@ -1,7 +1,5 @@ from collections.abc import Generator -from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus - from core.app.apps.workflow.generate_response_converter import WorkflowAppGenerateResponseConverter from core.app.entities.task_entities import ( ErrorStreamResponse, @@ -11,6 +9,7 @@ from core.app.entities.task_entities import ( WorkflowAppBlockingResponse, WorkflowAppStreamResponse, ) +from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus class TestWorkflowGenerateResponseConverter: diff --git a/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline.py b/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline.py index 29df903aa8..1f6e7e12ef 100644 --- a/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline.py +++ b/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline.py @@ -2,15 +2,14 @@ import time from contextlib import contextmanager from unittest.mock import MagicMock -from graphon.entities import WorkflowStartReason -from graphon.runtime import GraphRuntimeState - from core.app.app_config.entities import WorkflowUIBasedAppConfig from core.app.apps.base_app_queue_manager import AppQueueManager from core.app.apps.workflow.generate_task_pipeline import WorkflowAppGenerateTaskPipeline from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity from core.app.entities.queue_entities import QueueWorkflowStartedEvent from core.workflow.system_variables import build_system_variables +from graphon.entities import WorkflowStartReason +from graphon.runtime import GraphRuntimeState from models.account import Account from models.model import AppMode from tests.workflow_test_utils import build_test_variable_pool diff --git a/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline_core.py b/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline_core.py index d91bb85aee..99433478d3 100644 --- 
a/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline_core.py +++ b/api/tests/unit_tests/core/app/apps/workflow/test_generate_task_pipeline_core.py @@ -5,8 +5,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.enums import BuiltinNodeTypes, WorkflowExecutionStatus -from graphon.runtime import GraphRuntimeState, VariablePool from core.app.app_config.entities import AppAdditionalFeatures, WorkflowUIBasedAppConfig from core.app.apps.workflow.generate_task_pipeline import WorkflowAppGenerateTaskPipeline @@ -47,6 +45,8 @@ from core.app.entities.task_entities import ( ) from core.base.tts.app_generator_tts_publisher import AudioTrunk from core.workflow.system_variables import build_system_variables, system_variables_to_mapping +from graphon.enums import BuiltinNodeTypes, WorkflowExecutionStatus +from graphon.runtime import GraphRuntimeState, VariablePool from libs.datetime_utils import naive_utc_now from models.enums import CreatorUserRole from models.model import AppMode, EndUser diff --git a/api/tests/unit_tests/core/app/entities/test_task_entities.py b/api/tests/unit_tests/core/app/entities/test_task_entities.py index 014a0cba72..7c79780641 100644 --- a/api/tests/unit_tests/core/app/entities/test_task_entities.py +++ b/api/tests/unit_tests/core/app/entities/test_task_entities.py @@ -1,11 +1,10 @@ -from graphon.enums import WorkflowNodeExecutionStatus - from core.app.entities.task_entities import ( NodeFinishStreamResponse, NodeRetryStreamResponse, NodeStartStreamResponse, StreamEvent, ) +from graphon.enums import WorkflowNodeExecutionStatus class TestTaskEntities: diff --git a/api/tests/unit_tests/core/app/layers/test_conversation_variable_persist_layer.py b/api/tests/unit_tests/core/app/layers/test_conversation_variable_persist_layer.py index a78c1b428f..ba55e8f695 100644 --- a/api/tests/unit_tests/core/app/layers/test_conversation_variable_persist_layer.py +++ b/api/tests/unit_tests/core/app/layers/test_conversation_variable_persist_layer.py @@ -1,6 +1,9 @@ from collections.abc import Sequence from unittest.mock import Mock +from core.app.layers.conversation_variable_persist_layer import ConversationVariablePersistenceLayer +from core.workflow.system_variables import SystemVariableKey +from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus from graphon.graph_engine.command_channels import CommandChannel from graphon.graph_events import NodeRunSucceededEvent, NodeRunVariableUpdatedEvent @@ -8,10 +11,6 @@ from graphon.node_events import NodeRunResult from graphon.runtime import ReadOnlyGraphRuntimeState from graphon.variables import StringVariable from graphon.variables.segments import Segment, StringSegment - -from core.app.layers.conversation_variable_persist_layer import ConversationVariablePersistenceLayer -from core.workflow.system_variables import SystemVariableKey -from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID from libs.datetime_utils import naive_utc_now diff --git a/api/tests/unit_tests/core/app/layers/test_pause_state_persist_layer.py b/api/tests/unit_tests/core/app/layers/test_pause_state_persist_layer.py index 035e64325b..539944d683 100644 --- a/api/tests/unit_tests/core/app/layers/test_pause_state_persist_layer.py +++ b/api/tests/unit_tests/core/app/layers/test_pause_state_persist_layer.py @@ -4,6 +4,16 @@ from time import time from unittest.mock import Mock import pytest + +from 
core.app.app_config.entities import WorkflowUIBasedAppConfig +from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, InvokeFrom, WorkflowAppGenerateEntity +from core.app.layers.pause_state_persist_layer import ( + PauseStatePersistenceLayer, + WorkflowResumptionContext, + _AdvancedChatAppGenerateEntityWrapper, + _WorkflowGenerateEntityWrapper, +) +from core.workflow.system_variables import SystemVariableKey from graphon.entities.pause_reason import SchedulingPause from graphon.graph_engine.entities.commands import GraphEngineCommand from graphon.graph_engine.layers.base import GraphEngineLayerNotInitializedError @@ -15,16 +25,6 @@ from graphon.graph_events import ( ) from graphon.runtime import ReadOnlyVariablePool from graphon.variables.segments import Segment - -from core.app.app_config.entities import WorkflowUIBasedAppConfig -from core.app.entities.app_invoke_entities import AdvancedChatAppGenerateEntity, InvokeFrom, WorkflowAppGenerateEntity -from core.app.layers.pause_state_persist_layer import ( - PauseStatePersistenceLayer, - WorkflowResumptionContext, - _AdvancedChatAppGenerateEntityWrapper, - _WorkflowGenerateEntityWrapper, -) -from core.workflow.system_variables import SystemVariableKey from models.model import AppMode from repositories.factory import DifyAPIRepositoryFactory diff --git a/api/tests/unit_tests/core/app/layers/test_suspend_layer.py b/api/tests/unit_tests/core/app/layers/test_suspend_layer.py index 95931f4f8b..12d49be0f1 100644 --- a/api/tests/unit_tests/core/app/layers/test_suspend_layer.py +++ b/api/tests/unit_tests/core/app/layers/test_suspend_layer.py @@ -1,6 +1,5 @@ -from graphon.graph_events import GraphRunPausedEvent - from core.app.layers.suspend_layer import SuspendLayer +from graphon.graph_events import GraphRunPausedEvent class TestSuspendLayer: diff --git a/api/tests/unit_tests/core/app/layers/test_timeslice_layer.py b/api/tests/unit_tests/core/app/layers/test_timeslice_layer.py index 7cf6eb4f31..1ac9a4d8c0 100644 --- a/api/tests/unit_tests/core/app/layers/test_timeslice_layer.py +++ b/api/tests/unit_tests/core/app/layers/test_timeslice_layer.py @@ -1,8 +1,7 @@ from unittest.mock import Mock, patch -from graphon.graph_engine.entities.commands import CommandType, GraphEngineCommand - from core.app.layers.timeslice_layer import TimeSliceLayer +from graphon.graph_engine.entities.commands import CommandType, GraphEngineCommand from services.workflow.entities import WorkflowScheduleCFSPlanEntity from services.workflow.scheduler import SchedulerCommand diff --git a/api/tests/unit_tests/core/app/layers/test_trigger_post_layer.py b/api/tests/unit_tests/core/app/layers/test_trigger_post_layer.py index aa9285789b..d3bd15b6f3 100644 --- a/api/tests/unit_tests/core/app/layers/test_trigger_post_layer.py +++ b/api/tests/unit_tests/core/app/layers/test_trigger_post_layer.py @@ -2,11 +2,10 @@ from datetime import UTC, datetime, timedelta from types import SimpleNamespace from unittest.mock import Mock, patch -from graphon.graph_events import GraphRunFailedEvent, GraphRunSucceededEvent -from graphon.runtime import VariablePool - from core.app.layers.trigger_post_layer import TriggerPostLayer from core.workflow.system_variables import build_system_variables +from graphon.graph_events import GraphRunFailedEvent, GraphRunSucceededEvent +from graphon.runtime import VariablePool from models.enums import WorkflowTriggerStatus diff --git a/api/tests/unit_tests/core/app/task_pipeline/test_based_generate_task_pipeline.py 
b/api/tests/unit_tests/core/app/task_pipeline/test_based_generate_task_pipeline.py index 58aa7d7478..c246f7b783 100644 --- a/api/tests/unit_tests/core/app/task_pipeline/test_based_generate_task_pipeline.py +++ b/api/tests/unit_tests/core/app/task_pipeline/test_based_generate_task_pipeline.py @@ -2,11 +2,11 @@ from types import SimpleNamespace from unittest.mock import Mock import pytest -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from core.app.entities.queue_entities import QueueErrorEvent from core.app.task_pipeline.based_generate_task_pipeline import BasedGenerateTaskPipeline from core.errors.error import QuotaExceededError +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from models.enums import MessageStatus diff --git a/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline.py b/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline.py index 4aaa10a81a..1c1bf391d3 100644 --- a/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline.py +++ b/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline.py @@ -2,8 +2,6 @@ from types import SimpleNamespace from unittest.mock import ANY, Mock, patch import pytest -from graphon.model_runtime.entities.llm_entities import LLMResult as RuntimeLLMResult -from graphon.model_runtime.entities.message_entities import TextPromptMessageContent from core.app.apps.base_app_queue_manager import AppQueueManager from core.app.entities.app_invoke_entities import ChatAppGenerateEntity @@ -28,6 +26,8 @@ from core.app.entities.task_entities import ( from core.app.task_pipeline.easy_ui_based_generate_task_pipeline import EasyUIBasedGenerateTaskPipeline from core.base.tts import AppGeneratorTTSPublisher from core.ops.ops_trace_manager import TraceQueueManager +from graphon.model_runtime.entities.llm_entities import LLMResult as RuntimeLLMResult +from graphon.model_runtime.entities.message_entities import TextPromptMessageContent from models.model import AppMode diff --git a/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline_core.py b/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline_core.py index f22602a400..a20d89d807 100644 --- a/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline_core.py +++ b/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_based_generate_task_pipeline_core.py @@ -5,9 +5,6 @@ from types import SimpleNamespace from unittest.mock import Mock import pytest -from graphon.file import FileTransferMethod -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage -from graphon.model_runtime.entities.message_entities import AssistantPromptMessage, TextPromptMessageContent from core.app.app_config.entities import ( AppAdditionalFeatures, @@ -41,6 +38,9 @@ from core.app.entities.task_entities import ( ) from core.app.task_pipeline.easy_ui_based_generate_task_pipeline import EasyUIBasedGenerateTaskPipeline from core.base.tts import AudioTrunk +from graphon.file import FileTransferMethod +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMResultChunk, LLMResultChunkDelta, LLMUsage +from graphon.model_runtime.entities.message_entities import AssistantPromptMessage, TextPromptMessageContent from models.model import AppMode diff --git 
a/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_message_end_files.py b/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_message_end_files.py index 31b7313066..595d716666 100644 --- a/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_message_end_files.py +++ b/api/tests/unit_tests/core/app/task_pipeline/test_easy_ui_message_end_files.py @@ -17,11 +17,11 @@ import uuid from unittest.mock import MagicMock, Mock, patch import pytest -from graphon.file import FileTransferMethod, FileType from sqlalchemy.orm import Session from core.app.entities.task_entities import MessageEndStreamResponse from core.app.task_pipeline.easy_ui_based_generate_task_pipeline import EasyUIBasedGenerateTaskPipeline +from graphon.file import FileTransferMethod, FileType from models.model import MessageFile, UploadFile diff --git a/api/tests/unit_tests/core/app/test_easy_ui_model_config_manager.py b/api/tests/unit_tests/core/app/test_easy_ui_model_config_manager.py index 29df7eea86..21c761c579 100644 --- a/api/tests/unit_tests/core/app/test_easy_ui_model_config_manager.py +++ b/api/tests/unit_tests/core/app/test_easy_ui_model_config_manager.py @@ -1,10 +1,9 @@ from types import SimpleNamespace from unittest.mock import patch -from graphon.model_runtime.entities.model_entities import ModelPropertyKey - from core.app.app_config.easy_ui_based_app.model_config.manager import ModelConfigManager from core.app.app_config.entities import ModelConfigEntity +from graphon.model_runtime.entities.model_entities import ModelPropertyKey from models.provider_ids import ModelProviderID diff --git a/api/tests/unit_tests/core/app/workflow/layers/test_persistence.py b/api/tests/unit_tests/core/app/workflow/layers/test_persistence.py index dc2d82ccd6..5c50cb78da 100644 --- a/api/tests/unit_tests/core/app/workflow/layers/test_persistence.py +++ b/api/tests/unit_tests/core/app/workflow/layers/test_persistence.py @@ -2,14 +2,14 @@ from datetime import UTC, datetime from unittest.mock import Mock import pytest -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus, WorkflowType -from graphon.node_events import NodeRunResult from core.app.workflow.layers.persistence import ( PersistenceWorkflowInfo, WorkflowPersistenceLayer, _NodeRuntimeSnapshot, ) +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus, WorkflowType +from graphon.node_events import NodeRunResult def _build_layer() -> WorkflowPersistenceLayer: diff --git a/api/tests/unit_tests/core/app/workflow/test_file_runtime.py b/api/tests/unit_tests/core/app/workflow/test_file_runtime.py index 7be9d6ac1e..cddd03f4b0 100644 --- a/api/tests/unit_tests/core/app/workflow/test_file_runtime.py +++ b/api/tests/unit_tests/core/app/workflow/test_file_runtime.py @@ -8,13 +8,13 @@ from unittest.mock import MagicMock, patch from urllib.parse import parse_qs, urlparse import pytest -from graphon.file import File, FileTransferMethod, FileType from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.app.file_access import DatabaseFileAccessController, FileAccessScope from core.app.workflow import file_runtime from core.app.workflow.file_runtime import DifyWorkflowFileRuntime, bind_dify_workflow_file_runtime from core.workflow.file_reference import build_file_reference +from graphon.file import File, FileTransferMethod, FileType from models import ToolFile, UploadFile diff --git a/api/tests/unit_tests/core/app/workflow/test_node_factory.py b/api/tests/unit_tests/core/app/workflow/test_node_factory.py index 
8497261d45..c4bfb23272 100644 --- a/api/tests/unit_tests/core/app/workflow/test_node_factory.py +++ b/api/tests/unit_tests/core/app/workflow/test_node_factory.py @@ -1,10 +1,10 @@ from types import SimpleNamespace import pytest -from graphon.enums import BuiltinNodeTypes from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom, build_dify_run_context from core.workflow.node_factory import DifyNodeFactory +from graphon.enums import BuiltinNodeTypes class DummyNode: diff --git a/api/tests/unit_tests/core/app/workflow/test_observability_layer_extra.py b/api/tests/unit_tests/core/app/workflow/test_observability_layer_extra.py index a47d3db6f5..82552470a9 100644 --- a/api/tests/unit_tests/core/app/workflow/test_observability_layer_extra.py +++ b/api/tests/unit_tests/core/app/workflow/test_observability_layer_extra.py @@ -2,9 +2,8 @@ from __future__ import annotations from types import SimpleNamespace -from graphon.enums import BuiltinNodeTypes - from core.app.workflow.layers.observability import ObservabilityLayer +from graphon.enums import BuiltinNodeTypes class TestObservabilityLayerExtras: diff --git a/api/tests/unit_tests/core/app/workflow/test_persistence_layer.py b/api/tests/unit_tests/core/app/workflow/test_persistence_layer.py index d8a68f6d00..cacb4dd4fa 100644 --- a/api/tests/unit_tests/core/app/workflow/test_persistence_layer.py +++ b/api/tests/unit_tests/core/app/workflow/test_persistence_layer.py @@ -4,6 +4,10 @@ from datetime import UTC, datetime from types import SimpleNamespace import pytest + +from core.app.entities.app_invoke_entities import WorkflowAppGenerateEntity +from core.app.workflow.layers.persistence import PersistenceWorkflowInfo, WorkflowPersistenceLayer +from core.workflow.system_variables import SystemVariableKey, build_system_variables from graphon.entities import WorkflowNodeExecution from graphon.entities.pause_reason import SchedulingPause from graphon.enums import ( @@ -29,10 +33,6 @@ from graphon.graph_events import ( from graphon.node_events import NodeRunResult from graphon.runtime import GraphRuntimeState, ReadOnlyGraphRuntimeStateWrapper, VariablePool -from core.app.entities.app_invoke_entities import WorkflowAppGenerateEntity -from core.app.workflow.layers.persistence import PersistenceWorkflowInfo, WorkflowPersistenceLayer -from core.workflow.system_variables import SystemVariableKey, build_system_variables - class _RepoRecorder: def __init__(self) -> None: diff --git a/api/tests/unit_tests/core/base/test_app_generator_tts_publisher.py b/api/tests/unit_tests/core/base/test_app_generator_tts_publisher.py index 5ff9774b52..7b433ab57b 100644 --- a/api/tests/unit_tests/core/base/test_app_generator_tts_publisher.py +++ b/api/tests/unit_tests/core/base/test_app_generator_tts_publisher.py @@ -301,6 +301,7 @@ class TestAppGeneratorTTSPublisher: publisher = AppGeneratorTTSPublisher("tenant", "voice1") publisher.executor = MagicMock() + from core.app.entities.queue_entities import QueueAgentMessageEvent from graphon.model_runtime.entities.llm_entities import LLMResultChunk, LLMResultChunkDelta from graphon.model_runtime.entities.message_entities import ( AssistantPromptMessage, @@ -308,8 +309,6 @@ class TestAppGeneratorTTSPublisher: TextPromptMessageContent, ) - from core.app.entities.queue_entities import QueueAgentMessageEvent - chunk = LLMResultChunk( model="model", delta=LLMResultChunkDelta( @@ -337,11 +336,10 @@ class TestAppGeneratorTTSPublisher: publisher = AppGeneratorTTSPublisher("tenant", "voice1") publisher.executor = MagicMock() + from 
core.app.entities.queue_entities import QueueAgentMessageEvent from graphon.model_runtime.entities.llm_entities import LLMResultChunk, LLMResultChunkDelta from graphon.model_runtime.entities.message_entities import AssistantPromptMessage - from core.app.entities.queue_entities import QueueAgentMessageEvent - chunk = LLMResultChunk( model="model", delta=LLMResultChunkDelta( diff --git a/api/tests/unit_tests/core/datasource/test_datasource_manager.py b/api/tests/unit_tests/core/datasource/test_datasource_manager.py index d338cadb77..81315d2508 100644 --- a/api/tests/unit_tests/core/datasource/test_datasource_manager.py +++ b/api/tests/unit_tests/core/datasource/test_datasource_manager.py @@ -2,15 +2,15 @@ import types from collections.abc import Generator import pytest -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.file import File, FileTransferMethod, FileType -from graphon.node_events import StreamChunkEvent, StreamCompletedEvent from contexts.wrapper import RecyclableContextVar from core.datasource.datasource_manager import DatasourceManager from core.datasource.entities.datasource_entities import DatasourceMessage, DatasourceProviderType from core.datasource.errors import DatasourceProviderNotFoundError from core.workflow.file_reference import parse_file_reference +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.file import File, FileTransferMethod, FileType +from graphon.node_events import StreamChunkEvent, StreamCompletedEvent def _gen_messages_text_only(text: str) -> Generator[DatasourceMessage, None, None]: diff --git a/api/tests/unit_tests/core/datasource/utils/test_message_transformer.py b/api/tests/unit_tests/core/datasource/utils/test_message_transformer.py index fbaf6d497d..0fca43cd0b 100644 --- a/api/tests/unit_tests/core/datasource/utils/test_message_transformer.py +++ b/api/tests/unit_tests/core/datasource/utils/test_message_transformer.py @@ -1,10 +1,10 @@ from unittest.mock import MagicMock, patch import pytest -from graphon.file import File, FileTransferMethod, FileType from core.datasource.entities.datasource_entities import DatasourceMessage from core.datasource.utils.message_transformer import DatasourceFileMessageTransformer +from graphon.file import File, FileTransferMethod, FileType from models.tools import ToolFile diff --git a/api/tests/unit_tests/core/entities/test_entities_execution_extra_content.py b/api/tests/unit_tests/core/entities/test_entities_execution_extra_content.py index ff9fd0d8f3..ef8f360dbf 100644 --- a/api/tests/unit_tests/core/entities/test_entities_execution_extra_content.py +++ b/api/tests/unit_tests/core/entities/test_entities_execution_extra_content.py @@ -1,12 +1,11 @@ -from graphon.nodes.human_input.entities import FormInput, UserAction -from graphon.nodes.human_input.enums import FormInputType - from core.entities.execution_extra_content import ( ExecutionExtraContentDomainModel, HumanInputContent, HumanInputFormDefinition, HumanInputFormSubmissionData, ) +from graphon.nodes.human_input.entities import FormInput, UserAction +from graphon.nodes.human_input.enums import FormInputType from models.execution_extra_content import ExecutionContentType diff --git a/api/tests/unit_tests/core/entities/test_entities_model_entities.py b/api/tests/unit_tests/core/entities/test_entities_model_entities.py index 2acd278a31..a0b2820157 100644 --- a/api/tests/unit_tests/core/entities/test_entities_model_entities.py +++ b/api/tests/unit_tests/core/entities/test_entities_model_entities.py @@ -8,9 +8,6 @@ drive provider 
mapping behavior. """ import pytest -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import FetchFrom, ModelType -from graphon.model_runtime.entities.provider_entities import ConfigurateMethod, ProviderEntity from core.entities.model_entities import ( DefaultModelEntity, @@ -19,6 +16,9 @@ from core.entities.model_entities import ( ProviderModelWithStatusEntity, SimpleModelProviderEntity, ) +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import FetchFrom, ModelType +from graphon.model_runtime.entities.provider_entities import ConfigurateMethod, ProviderEntity def _build_model_with_status(status: ModelStatus) -> ProviderModelWithStatusEntity: diff --git a/api/tests/unit_tests/core/entities/test_entities_provider_configuration.py b/api/tests/unit_tests/core/entities/test_entities_provider_configuration.py index 8cf0409c4c..fe2c226843 100644 --- a/api/tests/unit_tests/core/entities/test_entities_provider_configuration.py +++ b/api/tests/unit_tests/core/entities/test_entities_provider_configuration.py @@ -6,17 +6,6 @@ from typing import Any from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType -from graphon.model_runtime.entities.provider_entities import ( - ConfigurateMethod, - CredentialFormSchema, - FieldModelSchema, - FormType, - ModelCredentialSchema, - ProviderCredentialSchema, - ProviderEntity, -) from constants import HIDDEN_VALUE from core.entities.model_entities import ModelStatus @@ -35,6 +24,17 @@ from core.entities.provider_entities import ( SystemConfiguration, SystemConfigurationStatus, ) +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType +from graphon.model_runtime.entities.provider_entities import ( + ConfigurateMethod, + CredentialFormSchema, + FieldModelSchema, + FormType, + ModelCredentialSchema, + ProviderCredentialSchema, + ProviderEntity, +) from models.enums import CredentialSourceType from models.provider import ProviderType from models.provider_ids import ModelProviderID diff --git a/api/tests/unit_tests/core/entities/test_entities_provider_entities.py b/api/tests/unit_tests/core/entities/test_entities_provider_entities.py index 8685d16283..a159d3ad4d 100644 --- a/api/tests/unit_tests/core/entities/test_entities_provider_entities.py +++ b/api/tests/unit_tests/core/entities/test_entities_provider_entities.py @@ -1,5 +1,4 @@ import pytest -from graphon.model_runtime.entities.model_entities import ModelType from core.entities.parameter_entities import AppSelectorScope from core.entities.provider_entities import ( @@ -9,6 +8,7 @@ from core.entities.provider_entities import ( ProviderQuotaType, ) from core.tools.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import ModelType def test_provider_quota_type_value_of_returns_enum_member() -> None: diff --git a/api/tests/unit_tests/core/external_data_tool/test_base.py b/api/tests/unit_tests/core/external_data_tool/test_base.py index 216cda83c5..63e887f904 100644 --- a/api/tests/unit_tests/core/external_data_tool/test_base.py +++ b/api/tests/unit_tests/core/external_data_tool/test_base.py @@ -1,3 +1,5 @@ +from typing import Any + import pytest from 
core.extension.extensible import ExtensionModule @@ -12,10 +14,10 @@ class TestExternalDataTool: # Create a concrete subclass to test init class ConcreteTool(ExternalDataTool): @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): return super().validate_config(tenant_id, config) - def query(self, inputs: dict, query: str | None = None) -> str: + def query(self, inputs: dict[str, Any], query: str | None = None) -> str: return super().query(inputs, query) tool = ConcreteTool(tenant_id="tenant_1", app_id="app_1", variable="var_1", config={"key": "value"}) @@ -28,10 +30,10 @@ class TestExternalDataTool: # Create a concrete subclass to test init class ConcreteTool(ExternalDataTool): @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): pass - def query(self, inputs: dict, query: str | None = None) -> str: + def query(self, inputs: dict[str, Any], query: str | None = None) -> str: return "" tool = ConcreteTool(tenant_id="tenant_1", app_id="app_1", variable="var_1") @@ -43,10 +45,10 @@ class TestExternalDataTool: def test_validate_config_raises_not_implemented(self): class ConcreteTool(ExternalDataTool): @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): return super().validate_config(tenant_id, config) - def query(self, inputs: dict, query: str | None = None) -> str: + def query(self, inputs: dict[str, Any], query: str | None = None) -> str: return "" with pytest.raises(NotImplementedError): @@ -55,10 +57,10 @@ class TestExternalDataTool: def test_query_raises_not_implemented(self): class ConcreteTool(ExternalDataTool): @classmethod - def validate_config(cls, tenant_id: str, config: dict): + def validate_config(cls, tenant_id: str, config: dict[str, Any]): pass - def query(self, inputs: dict, query: str | None = None) -> str: + def query(self, inputs: dict[str, Any], query: str | None = None) -> str: return super().query(inputs, query) tool = ConcreteTool(tenant_id="tenant_1", app_id="app_1", variable="var_1") diff --git a/api/tests/unit_tests/core/helper/test_moderation.py b/api/tests/unit_tests/core/helper/test_moderation.py index 4a84099b74..a0dfa86d20 100644 --- a/api/tests/unit_tests/core/helper/test_moderation.py +++ b/api/tests/unit_tests/core/helper/test_moderation.py @@ -2,11 +2,11 @@ from types import SimpleNamespace from typing import cast import pytest -from graphon.model_runtime.errors.invoke import InvokeBadRequestError from pytest_mock import MockerFixture from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.helper.moderation import check_moderation +from graphon.model_runtime.errors.invoke import InvokeBadRequestError from models.provider import ProviderType diff --git a/api/tests/unit_tests/core/llm_generator/output_parser/test_structured_output.py b/api/tests/unit_tests/core/llm_generator/output_parser/test_structured_output.py index b45f6fd9a7..6ed9ddb476 100644 --- a/api/tests/unit_tests/core/llm_generator/output_parser/test_structured_output.py +++ b/api/tests/unit_tests/core/llm_generator/output_parser/test_structured_output.py @@ -2,20 +2,6 @@ import json from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.entities.llm_entities import ( - LLMResult, - LLMResultChunk, - LLMResultChunkDelta, - LLMResultWithStructuredOutput, - LLMUsage, -) -from 
graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - SystemPromptMessage, - TextPromptMessageContent, - UserPromptMessage, -) -from graphon.model_runtime.entities.model_entities import AIModelEntity, ParameterRule, ParameterType from core.llm_generator.output_parser.errors import OutputParserError from core.llm_generator.output_parser.structured_output import ( @@ -30,6 +16,20 @@ from core.llm_generator.output_parser.structured_output import ( remove_additional_properties, ) from core.model_manager import ModelInstance +from graphon.model_runtime.entities.llm_entities import ( + LLMResult, + LLMResultChunk, + LLMResultChunkDelta, + LLMResultWithStructuredOutput, + LLMUsage, +) +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + SystemPromptMessage, + TextPromptMessageContent, + UserPromptMessage, +) +from graphon.model_runtime.entities.model_entities import AIModelEntity, ParameterRule, ParameterType class TestStructuredOutput: diff --git a/api/tests/unit_tests/core/llm_generator/test_llm_generator.py b/api/tests/unit_tests/core/llm_generator/test_llm_generator.py index 7cdfb31189..2716f4712c 100644 --- a/api/tests/unit_tests/core/llm_generator/test_llm_generator.py +++ b/api/tests/unit_tests/core/llm_generator/test_llm_generator.py @@ -2,12 +2,12 @@ import json from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.entities.llm_entities import LLMMode, LLMResult -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError from core.app.app_config.entities import ModelConfig from core.llm_generator.entities import RuleCodeGeneratePayload, RuleGeneratePayload, RuleStructuredOutputPayload from core.llm_generator.llm_generator import LLMGenerator +from graphon.model_runtime.entities.llm_entities import LLMMode, LLMResult +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError, InvokeError class TestLLMGenerator: diff --git a/api/tests/unit_tests/core/mcp/server/test_streamable_http.py b/api/tests/unit_tests/core/mcp/server/test_streamable_http.py index 9a815fb94d..57456085c3 100644 --- a/api/tests/unit_tests/core/mcp/server/test_streamable_http.py +++ b/api/tests/unit_tests/core/mcp/server/test_streamable_http.py @@ -3,7 +3,6 @@ from unittest.mock import Mock, patch import jsonschema import pytest -from graphon.variables.input_entities import VariableEntity, VariableEntityType from core.app.features.rate_limiting.rate_limit import RateLimitGenerator from core.mcp import types @@ -19,6 +18,7 @@ from core.mcp.server.streamable_http import ( prepare_tool_arguments, process_mapping_response, ) +from graphon.variables.input_entities import VariableEntity, VariableEntityType from models.model import App, AppMCPServer, AppMode, EndUser diff --git a/api/tests/unit_tests/core/memory/test_token_buffer_memory.py b/api/tests/unit_tests/core/memory/test_token_buffer_memory.py index 9a5fb319d7..f459250b8e 100644 --- a/api/tests/unit_tests/core/memory/test_token_buffer_memory.py +++ b/api/tests/unit_tests/core/memory/test_token_buffer_memory.py @@ -4,6 +4,8 @@ from unittest.mock import MagicMock, patch from uuid import uuid4 import pytest + +from core.memory.token_buffer_memory import TokenBufferMemory from graphon.model_runtime.entities import ( AssistantPromptMessage, ImagePromptMessageContent, @@ -11,8 +13,6 @@ from graphon.model_runtime.entities import ( TextPromptMessageContent, UserPromptMessage, ) - -from core.memory.token_buffer_memory import 
TokenBufferMemory from models.model import AppMode # --------------------------------------------------------------------------- diff --git a/api/tests/unit_tests/core/model_runtime/test_model_provider_factory.py b/api/tests/unit_tests/core/model_runtime/test_model_provider_factory.py index 6a672fdfd5..249ecb5006 100644 --- a/api/tests/unit_tests/core/model_runtime/test_model_provider_factory.py +++ b/api/tests/unit_tests/core/model_runtime/test_model_provider_factory.py @@ -1,6 +1,7 @@ from unittest.mock import Mock import pytest + from graphon.model_runtime.entities.common_entities import I18nObject from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType from graphon.model_runtime.entities.provider_entities import ( diff --git a/api/tests/unit_tests/core/moderation/test_content_moderation.py b/api/tests/unit_tests/core/moderation/test_content_moderation.py index 3a97ad5c5d..4c668ee96b 100644 --- a/api/tests/unit_tests/core/moderation/test_content_moderation.py +++ b/api/tests/unit_tests/core/moderation/test_content_moderation.py @@ -10,6 +10,7 @@ This module tests all aspects of the content moderation system including: - Configuration validation """ +from typing import Any from unittest.mock import MagicMock, Mock, patch import pytest @@ -28,7 +29,7 @@ class TestKeywordsModeration: """Test suite for custom keyword-based content moderation.""" @pytest.fixture - def keywords_config(self) -> dict: + def keywords_config(self) -> dict[str, Any]: """ Fixture providing a standard keywords moderation configuration. @@ -48,7 +49,7 @@ class TestKeywordsModeration: } @pytest.fixture - def keywords_moderation(self, keywords_config: dict) -> KeywordsModeration: + def keywords_moderation(self, keywords_config: dict[str, Any]) -> KeywordsModeration: """ Fixture providing a KeywordsModeration instance. @@ -64,7 +65,7 @@ class TestKeywordsModeration: config=keywords_config, ) - def test_validate_config_success(self, keywords_config: dict): + def test_validate_config_success(self, keywords_config: dict[str, Any]): """Test successful validation of keywords moderation configuration.""" # Should not raise any exception KeywordsModeration.validate_config("test-tenant", keywords_config) @@ -274,7 +275,7 @@ class TestOpenAIModeration: """Test suite for OpenAI-based content moderation.""" @pytest.fixture - def openai_config(self) -> dict: + def openai_config(self) -> dict[str, Any]: """ Fixture providing OpenAI moderation configuration. @@ -293,7 +294,7 @@ class TestOpenAIModeration: } @pytest.fixture - def openai_moderation(self, openai_config: dict) -> OpenAIModeration: + def openai_moderation(self, openai_config: dict[str, Any]) -> OpenAIModeration: """ Fixture providing an OpenAIModeration instance. 
@@ -309,7 +310,7 @@ class TestOpenAIModeration: config=openai_config, ) - def test_validate_config_success(self, openai_config: dict): + def test_validate_config_success(self, openai_config: dict[str, Any]): """Test successful validation of OpenAI moderation configuration.""" # Should not raise any exception OpenAIModeration.validate_config("test-tenant", openai_config) diff --git a/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace.py b/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace.py index 62d631a754..c2324fdec4 100644 --- a/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace.py +++ b/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace.py @@ -5,8 +5,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.entities import WorkflowNodeExecution -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from opentelemetry.trace import Link, SpanContext, SpanKind, Status, StatusCode, TraceFlags import core.ops.aliyun_trace.aliyun_trace as aliyun_trace_module @@ -36,6 +34,8 @@ from core.ops.entities.trace_entity import ( ToolTraceInfo, WorkflowTraceInfo, ) +from graphon.entities import WorkflowNodeExecution +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey class RecordingTraceClient: diff --git a/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace_utils.py b/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace_utils.py index 2d2be12f05..e4d8f2d5ea 100644 --- a/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace_utils.py +++ b/api/tests/unit_tests/core/ops/aliyun_trace/test_aliyun_trace_utils.py @@ -1,8 +1,6 @@ import json from unittest.mock import MagicMock -from graphon.entities import WorkflowNodeExecution -from graphon.enums import WorkflowNodeExecutionStatus from opentelemetry.trace import Link, StatusCode from core.ops.aliyun_trace.entities.semconv import ( @@ -26,6 +24,8 @@ from core.ops.aliyun_trace.utils import ( serialize_json_data, ) from core.rag.models.document import Document +from graphon.entities import WorkflowNodeExecution +from graphon.enums import WorkflowNodeExecutionStatus from models import EndUser diff --git a/api/tests/unit_tests/core/ops/langfuse_trace/test_langfuse_trace.py b/api/tests/unit_tests/core/ops/langfuse_trace/test_langfuse_trace.py index 374371fb42..a0bcc92795 100644 --- a/api/tests/unit_tests/core/ops/langfuse_trace/test_langfuse_trace.py +++ b/api/tests/unit_tests/core/ops/langfuse_trace/test_langfuse_trace.py @@ -5,7 +5,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.enums import BuiltinNodeTypes from core.ops.entities.config_entity import LangfuseConfig from core.ops.entities.trace_entity import ( @@ -26,6 +25,7 @@ from core.ops.langfuse_trace.entities.langfuse_trace_entity import ( UnitEnum, ) from core.ops.langfuse_trace.langfuse_trace import LangFuseDataTrace +from graphon.enums import BuiltinNodeTypes from models import EndUser from models.enums import MessageStatus diff --git a/api/tests/unit_tests/core/ops/langsmith_trace/test_langsmith_trace.py b/api/tests/unit_tests/core/ops/langsmith_trace/test_langsmith_trace.py index bfe916f018..34c64c54a1 100644 --- a/api/tests/unit_tests/core/ops/langsmith_trace/test_langsmith_trace.py +++ b/api/tests/unit_tests/core/ops/langsmith_trace/test_langsmith_trace.py @@ -3,7 +3,6 @@ from datetime import datetime, timedelta from unittest.mock import MagicMock import pytest -from graphon.enums import 
BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from core.ops.entities.config_entity import LangSmithConfig from core.ops.entities.trace_entity import ( @@ -22,6 +21,7 @@ from core.ops.langsmith_trace.entities.langsmith_trace_entity import ( LangSmithRunUpdateModel, ) from core.ops.langsmith_trace.langsmith_trace import LangSmithDataTrace +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from models import EndUser diff --git a/api/tests/unit_tests/core/ops/mlflow_trace/test_mlflow_trace.py b/api/tests/unit_tests/core/ops/mlflow_trace/test_mlflow_trace.py index f4c485a9fc..afc5726ede 100644 --- a/api/tests/unit_tests/core/ops/mlflow_trace/test_mlflow_trace.py +++ b/api/tests/unit_tests/core/ops/mlflow_trace/test_mlflow_trace.py @@ -9,7 +9,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch import pytest -from graphon.enums import BuiltinNodeTypes from core.ops.entities.config_entity import DatabricksConfig, MLflowConfig from core.ops.entities.trace_entity import ( @@ -22,6 +21,7 @@ from core.ops.entities.trace_entity import ( WorkflowTraceInfo, ) from core.ops.mlflow_trace.mlflow_trace import MLflowDataTrace, datetime_to_nanoseconds +from graphon.enums import BuiltinNodeTypes # ── Helpers ────────────────────────────────────────────────────────────────── diff --git a/api/tests/unit_tests/core/ops/opik_trace/test_opik_trace.py b/api/tests/unit_tests/core/ops/opik_trace/test_opik_trace.py index 1cb32f2ee0..c02ac413f2 100644 --- a/api/tests/unit_tests/core/ops/opik_trace/test_opik_trace.py +++ b/api/tests/unit_tests/core/ops/opik_trace/test_opik_trace.py @@ -5,7 +5,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock import pytest -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from core.ops.entities.config_entity import OpikConfig from core.ops.entities.trace_entity import ( @@ -19,6 +18,7 @@ from core.ops.entities.trace_entity import ( WorkflowTraceInfo, ) from core.ops.opik_trace.opik_trace import OpikDataTrace, prepare_opik_uuid, wrap_dict, wrap_metadata +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from models import EndUser from models.enums import MessageStatus diff --git a/api/tests/unit_tests/core/ops/tencent_trace/test_span_builder.py b/api/tests/unit_tests/core/ops/tencent_trace/test_span_builder.py index 696f859b6f..6113e5c6c8 100644 --- a/api/tests/unit_tests/core/ops/tencent_trace/test_span_builder.py +++ b/api/tests/unit_tests/core/ops/tencent_trace/test_span_builder.py @@ -1,8 +1,6 @@ from datetime import datetime from unittest.mock import MagicMock, patch -from graphon.entities import WorkflowNodeExecution -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from opentelemetry.trace import StatusCode from core.ops.entities.trace_entity import ( @@ -27,6 +25,8 @@ from core.ops.tencent_trace.entities.semconv import ( ) from core.ops.tencent_trace.span_builder import TencentSpanBuilder from core.rag.models.document import Document +from graphon.entities import WorkflowNodeExecution +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus class TestTencentSpanBuilder: diff --git a/api/tests/unit_tests/core/ops/tencent_trace/test_tencent_trace.py b/api/tests/unit_tests/core/ops/tencent_trace/test_tencent_trace.py index f67abba807..7afd0b824a 100644 --- a/api/tests/unit_tests/core/ops/tencent_trace/test_tencent_trace.py +++ 
b/api/tests/unit_tests/core/ops/tencent_trace/test_tencent_trace.py @@ -2,8 +2,6 @@ import logging from unittest.mock import MagicMock, patch import pytest -from graphon.entities import WorkflowNodeExecution -from graphon.enums import BuiltinNodeTypes from core.ops.entities.config_entity import TencentConfig from core.ops.entities.trace_entity import ( @@ -16,6 +14,8 @@ from core.ops.entities.trace_entity import ( WorkflowTraceInfo, ) from core.ops.tencent_trace.tencent_trace import TencentDataTrace +from graphon.entities import WorkflowNodeExecution +from graphon.enums import BuiltinNodeTypes from models import Account, App, TenantAccountJoin logger = logging.getLogger(__name__) diff --git a/api/tests/unit_tests/core/ops/test_arize_phoenix_trace.py b/api/tests/unit_tests/core/ops/test_arize_phoenix_trace.py index 6b5cb5b09a..4b925390d9 100644 --- a/api/tests/unit_tests/core/ops/test_arize_phoenix_trace.py +++ b/api/tests/unit_tests/core/ops/test_arize_phoenix_trace.py @@ -1,7 +1,7 @@ -from graphon.enums import BUILT_IN_NODE_TYPES, BuiltinNodeTypes from openinference.semconv.trace import OpenInferenceSpanKindValues from core.ops.arize_phoenix_trace.arize_phoenix_trace import _NODE_TYPE_TO_SPAN_KIND, _get_node_span_kind +from graphon.enums import BUILT_IN_NODE_TYPES, BuiltinNodeTypes class TestGetNodeSpanKind: diff --git a/api/tests/unit_tests/core/ops/test_langfuse_trace.py b/api/tests/unit_tests/core/ops/test_langfuse_trace.py index f8951d2b4a..017ac8c891 100644 --- a/api/tests/unit_tests/core/ops/test_langfuse_trace.py +++ b/api/tests/unit_tests/core/ops/test_langfuse_trace.py @@ -4,11 +4,10 @@ from datetime import datetime, timedelta from types import SimpleNamespace from unittest.mock import MagicMock, patch -from graphon.enums import BuiltinNodeTypes - from core.ops.entities.config_entity import LangfuseConfig from core.ops.entities.trace_entity import MessageTraceInfo, WorkflowTraceInfo from core.ops.langfuse_trace.langfuse_trace import LangFuseDataTrace +from graphon.enums import BuiltinNodeTypes def _create_trace_instance() -> LangFuseDataTrace: diff --git a/api/tests/unit_tests/core/ops/weave_trace/test_weave_trace.py b/api/tests/unit_tests/core/ops/weave_trace/test_weave_trace.py index 5014f40afc..531c7de05f 100644 --- a/api/tests/unit_tests/core/ops/weave_trace/test_weave_trace.py +++ b/api/tests/unit_tests/core/ops/weave_trace/test_weave_trace.py @@ -7,7 +7,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch import pytest -from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey from weave.trace_server.trace_server_interface import TraceStatus from core.ops.entities.config_entity import WeaveConfig @@ -23,6 +22,7 @@ from core.ops.entities.trace_entity import ( ) from core.ops.weave_trace.entities.weave_trace_entity import WeaveTraceModel from core.ops.weave_trace.weave_trace import WeaveDataTrace +from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionMetadataKey # ── Helpers ────────────────────────────────────────────────────────────────── diff --git a/api/tests/unit_tests/core/plugin/test_backwards_invocation_model.py b/api/tests/unit_tests/core/plugin/test_backwards_invocation_model.py index 543b278715..c24d3ac012 100644 --- a/api/tests/unit_tests/core/plugin/test_backwards_invocation_model.py +++ b/api/tests/unit_tests/core/plugin/test_backwards_invocation_model.py @@ -1,10 +1,9 @@ from types import SimpleNamespace from unittest.mock import patch -from graphon.model_runtime.entities.message_entities import 
UserPromptMessage - from core.plugin.backwards_invocation.model import PluginModelBackwardsInvocation from core.plugin.entities.request import RequestInvokeSummary +from graphon.model_runtime.entities.message_entities import UserPromptMessage def test_system_model_helpers_forward_user_id(): diff --git a/api/tests/unit_tests/core/plugin/test_model_runtime_adapter.py b/api/tests/unit_tests/core/plugin/test_model_runtime_adapter.py index f8d0e127b1..68aa130518 100644 --- a/api/tests/unit_tests/core/plugin/test_model_runtime_adapter.py +++ b/api/tests/unit_tests/core/plugin/test_model_runtime_adapter.py @@ -6,15 +6,15 @@ from types import SimpleNamespace from unittest.mock import Mock, sentinel import pytest -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType -from graphon.model_runtime.entities.provider_entities import ConfigurateMethod, ProviderEntity from core.plugin.entities.plugin_daemon import PluginModelProviderEntity from core.plugin.impl import model_runtime as model_runtime_module from core.plugin.impl.model import PluginModelClient from core.plugin.impl.model_runtime import TENANT_SCOPE_SCHEMA_CACHE_USER_ID, PluginModelRuntime from core.plugin.impl.model_runtime_factory import create_plugin_model_runtime +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType +from graphon.model_runtime.entities.provider_entities import ConfigurateMethod, ProviderEntity def _build_model_schema() -> AIModelEntity: diff --git a/api/tests/unit_tests/core/plugin/test_plugin_entities.py b/api/tests/unit_tests/core/plugin/test_plugin_entities.py index a812b01c5b..f1c4c7e700 100644 --- a/api/tests/unit_tests/core/plugin/test_plugin_entities.py +++ b/api/tests/unit_tests/core/plugin/test_plugin_entities.py @@ -4,12 +4,6 @@ from enum import StrEnum import pytest from flask import Response -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - SystemPromptMessage, - ToolPromptMessage, - UserPromptMessage, -) from pydantic import ValidationError from core.plugin.entities.endpoint import EndpointEntityWithInstance @@ -31,6 +25,12 @@ from core.plugin.entities.request import ( ) from core.plugin.utils.http_parser import serialize_response from core.tools.entities.common_entities import I18nObject +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + SystemPromptMessage, + ToolPromptMessage, + UserPromptMessage, +) class TestEndpointEntity: diff --git a/api/tests/unit_tests/core/plugin/test_plugin_runtime.py b/api/tests/unit_tests/core/plugin/test_plugin_runtime.py index a3b1e5f6b0..704b82adc0 100644 --- a/api/tests/unit_tests/core/plugin/test_plugin_runtime.py +++ b/api/tests/unit_tests/core/plugin/test_plugin_runtime.py @@ -17,14 +17,6 @@ from unittest.mock import MagicMock, patch import httpx import pytest -from graphon.model_runtime.errors.invoke import ( - InvokeAuthorizationError, - InvokeBadRequestError, - InvokeConnectionError, - InvokeRateLimitError, - InvokeServerUnavailableError, -) -from graphon.model_runtime.errors.validate import CredentialsValidateFailedError from pydantic import BaseModel from core.plugin.entities.plugin_daemon import ( @@ -45,6 +37,14 @@ from core.plugin.impl.exc import ( ) from core.plugin.impl.plugin import PluginInstaller from core.plugin.impl.tool import PluginToolManager +from 
graphon.model_runtime.errors.invoke import ( + InvokeAuthorizationError, + InvokeBadRequestError, + InvokeConnectionError, + InvokeRateLimitError, + InvokeServerUnavailableError, +) +from graphon.model_runtime.errors.validate import CredentialsValidateFailedError @pytest.fixture(autouse=True) diff --git a/api/tests/unit_tests/core/plugin/utils/test_chunk_merger.py b/api/tests/unit_tests/core/plugin/utils/test_chunk_merger.py index 90730dff5a..d49b6e4b71 100644 --- a/api/tests/unit_tests/core/plugin/utils/test_chunk_merger.py +++ b/api/tests/unit_tests/core/plugin/utils/test_chunk_merger.py @@ -1,12 +1,12 @@ from collections.abc import Generator import pytest -from graphon.file import File, FileTransferMethod, FileType from core.agent.entities import AgentInvokeMessage from core.plugin.utils.chunk_merger import FileChunk, merge_blob_chunks from core.plugin.utils.converter import convert_parameters_to_plugin_format from core.tools.entities.tool_entities import ToolInvokeMessage, ToolParameter, ToolSelector +from graphon.file import File, FileTransferMethod, FileType class TestChunkMerger: diff --git a/api/tests/unit_tests/core/prompt/test_advanced_prompt_transform.py b/api/tests/unit_tests/core/prompt/test_advanced_prompt_transform.py index 2b280dd674..395d392127 100644 --- a/api/tests/unit_tests/core/prompt/test_advanced_prompt_transform.py +++ b/api/tests/unit_tests/core/prompt/test_advanced_prompt_transform.py @@ -2,6 +2,13 @@ from typing import cast from unittest.mock import MagicMock, patch import pytest + +from configs import dify_config +from core.app.app_config.entities import ModelConfigEntity +from core.memory.token_buffer_memory import TokenBufferMemory +from core.prompt.advanced_prompt_transform import AdvancedPromptTransform +from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, CompletionModelPromptTemplate, MemoryConfig +from core.prompt.utils.prompt_template_parser import PromptTemplateParser from graphon.file import File, FileTransferMethod, FileType from graphon.model_runtime.entities.message_entities import ( AssistantPromptMessage, @@ -11,13 +18,6 @@ from graphon.model_runtime.entities.message_entities import ( TextPromptMessageContent, UserPromptMessage, ) - -from configs import dify_config -from core.app.app_config.entities import ModelConfigEntity -from core.memory.token_buffer_memory import TokenBufferMemory -from core.prompt.advanced_prompt_transform import AdvancedPromptTransform -from core.prompt.entities.advanced_prompt_entities import ChatModelMessage, CompletionModelPromptTemplate, MemoryConfig -from core.prompt.utils.prompt_template_parser import PromptTemplateParser from models.model import Conversation diff --git a/api/tests/unit_tests/core/prompt/test_agent_history_prompt_transform.py b/api/tests/unit_tests/core/prompt/test_agent_history_prompt_transform.py index 4a54649b28..803afa54d7 100644 --- a/api/tests/unit_tests/core/prompt/test_agent_history_prompt_transform.py +++ b/api/tests/unit_tests/core/prompt/test_agent_history_prompt_transform.py @@ -1,19 +1,18 @@ from unittest.mock import MagicMock -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - SystemPromptMessage, - ToolPromptMessage, - UserPromptMessage, -) -from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel - from core.app.entities.app_invoke_entities import ( ModelConfigWithCredentialsEntity, ) from core.entities.provider_configuration import ProviderModelBundle from 
core.memory.token_buffer_memory import TokenBufferMemory from core.prompt.agent_history_prompt_transform import AgentHistoryPromptTransform +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + SystemPromptMessage, + ToolPromptMessage, + UserPromptMessage, +) +from graphon.model_runtime.model_providers.__base.large_language_model import LargeLanguageModel from models.model import Conversation diff --git a/api/tests/unit_tests/core/prompt/test_prompt_message.py b/api/tests/unit_tests/core/prompt/test_prompt_message.py index a4b3960b0a..5d865d934c 100644 --- a/api/tests/unit_tests/core/prompt/test_prompt_message.py +++ b/api/tests/unit_tests/core/prompt/test_prompt_message.py @@ -1,3 +1,5 @@ +from core.prompt.simple_prompt_transform import ModelMode +from core.prompt.utils.prompt_message_util import PromptMessageUtil from graphon.model_runtime.entities.message_entities import ( AssistantPromptMessage, AudioPromptMessageContent, @@ -7,9 +9,6 @@ from graphon.model_runtime.entities.message_entities import ( UserPromptMessage, ) -from core.prompt.simple_prompt_transform import ModelMode -from core.prompt.utils.prompt_message_util import PromptMessageUtil - def test_build_prompt_message_with_prompt_message_contents(): prompt = UserPromptMessage(content=[TextPromptMessageContent(data="Hello, World!")]) diff --git a/api/tests/unit_tests/core/prompt/test_prompt_transform.py b/api/tests/unit_tests/core/prompt/test_prompt_transform.py index e35ce2c48a..9f9ea33695 100644 --- a/api/tests/unit_tests/core/prompt/test_prompt_transform.py +++ b/api/tests/unit_tests/core/prompt/test_prompt_transform.py @@ -2,9 +2,9 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.entities.model_entities import ModelPropertyKey from core.prompt.prompt_transform import PromptTransform +from graphon.model_runtime.entities.model_entities import ModelPropertyKey # from core.app.app_config.entities import ModelConfigEntity # from core.entities.provider_configuration import ProviderConfiguration, ProviderModelBundle diff --git a/api/tests/unit_tests/core/prompt/test_simple_prompt_transform.py b/api/tests/unit_tests/core/prompt/test_simple_prompt_transform.py index 3f188cfbb4..0dc74b33df 100644 --- a/api/tests/unit_tests/core/prompt/test_simple_prompt_transform.py +++ b/api/tests/unit_tests/core/prompt/test_simple_prompt_transform.py @@ -2,12 +2,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.entities.message_entities import ( - AssistantPromptMessage, - ImagePromptMessageContent, - TextPromptMessageContent, - UserPromptMessage, -) from core.app.entities.app_invoke_entities import ModelConfigWithCredentialsEntity from core.memory.token_buffer_memory import TokenBufferMemory @@ -24,6 +18,12 @@ from core.prompt.prompt_templates.advanced_prompt_templates import ( CONTEXT, ) from core.prompt.simple_prompt_transform import SimplePromptTransform +from graphon.model_runtime.entities.message_entities import ( + AssistantPromptMessage, + ImagePromptMessageContent, + TextPromptMessageContent, + UserPromptMessage, +) from models.model import AppMode, Conversation diff --git a/api/tests/unit_tests/core/rag/data_post_processor/test_data_post_processor.py b/api/tests/unit_tests/core/rag/data_post_processor/test_data_post_processor.py index 006b4e7345..1f3247590c 100644 --- a/api/tests/unit_tests/core/rag/data_post_processor/test_data_post_processor.py +++ 
b/api/tests/unit_tests/core/rag/data_post_processor/test_data_post_processor.py @@ -1,13 +1,12 @@ from unittest.mock import MagicMock, patch -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.errors.invoke import InvokeAuthorizationError - from core.rag.data_post_processor.data_post_processor import DataPostProcessor from core.rag.data_post_processor.reorder import ReorderRunner from core.rag.index_processor.constant.query_type import QueryType from core.rag.models.document import Document from core.rag.rerank.rerank_type import RerankMode +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.errors.invoke import InvokeAuthorizationError def _doc(content: str) -> Document: diff --git a/api/tests/unit_tests/core/rag/datasource/keyword/jieba/test_jieba.py b/api/tests/unit_tests/core/rag/datasource/keyword/jieba/test_jieba.py index bbdd476914..136ac0c72a 100644 --- a/api/tests/unit_tests/core/rag/datasource/keyword/jieba/test_jieba.py +++ b/api/tests/unit_tests/core/rag/datasource/keyword/jieba/test_jieba.py @@ -1,5 +1,6 @@ import json from types import SimpleNamespace +from typing import Any from unittest.mock import MagicMock import pytest @@ -57,7 +58,7 @@ class _FakeSelect: return self -def _dataset_keyword_table(data_source_type: str = "database", keyword_table_dict: dict | None = None): +def _dataset_keyword_table(data_source_type: str = "database", keyword_table_dict: dict[str, Any] | None = None): return SimpleNamespace( data_source_type=data_source_type, keyword_table_dict=keyword_table_dict, diff --git a/api/tests/unit_tests/core/rag/datasource/test_datasource_retrieval.py b/api/tests/unit_tests/core/rag/datasource/test_datasource_retrieval.py index 8b104597a8..0baf85c314 100644 --- a/api/tests/unit_tests/core/rag/datasource/test_datasource_retrieval.py +++ b/api/tests/unit_tests/core/rag/datasource/test_datasource_retrieval.py @@ -1,4 +1,5 @@ from types import SimpleNamespace +from typing import Any from unittest.mock import MagicMock, Mock, call, patch from uuid import uuid4 @@ -20,7 +21,7 @@ def create_mock_document( doc_id: str, score: float = 0.8, provider: str = "dify", - additional_metadata: dict | None = None, + additional_metadata: dict[str, Any] | None = None, ) -> Document: """ Create a mock Document object for testing. 
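Several helpers touched here (for example `_dataset_keyword_table` and `create_mock_document`) switch optional mapping parameters from `dict | None` to `dict[str, Any] | None`. A simplified sketch of that signature style — the helper name and fields are invented, and a `None` default plus guard is used instead of a mutable `{}` default — would be:

```python
from typing import Any


def build_stub_record(
    name: str,
    extra_metadata: dict[str, Any] | None = None,
) -> dict[str, Any]:
    # None is the default rather than {} so callers never share one mutable
    # default object; the guard below builds a fresh dict per call.
    metadata: dict[str, Any] = {"name": name}
    if extra_metadata:
        metadata.update(extra_metadata)
    return metadata


record = build_stub_record("doc-1", {"score": 0.8})
assert record == {"name": "doc-1", "score": 0.8}
```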
diff --git a/api/tests/unit_tests/core/rag/embedding/test_cached_embedding.py b/api/tests/unit_tests/core/rag/embedding/test_cached_embedding.py index 3563186186..051a1455ae 100644 --- a/api/tests/unit_tests/core/rag/embedding/test_cached_embedding.py +++ b/api/tests/unit_tests/core/rag/embedding/test_cached_embedding.py @@ -12,11 +12,11 @@ from unittest.mock import Mock, patch import numpy as np import pytest -from graphon.model_runtime.entities.model_entities import ModelPropertyKey -from graphon.model_runtime.entities.text_embedding_entities import EmbeddingResult, EmbeddingUsage from sqlalchemy.exc import IntegrityError from core.rag.embedding.cached_embedding import CacheEmbedding +from graphon.model_runtime.entities.model_entities import ModelPropertyKey +from graphon.model_runtime.entities.text_embedding_entities import EmbeddingResult, EmbeddingUsage from models.dataset import Embedding diff --git a/api/tests/unit_tests/core/rag/embedding/test_embedding_service.py b/api/tests/unit_tests/core/rag/embedding/test_embedding_service.py index 408cf14a51..4b8175b0b4 100644 --- a/api/tests/unit_tests/core/rag/embedding/test_embedding_service.py +++ b/api/tests/unit_tests/core/rag/embedding/test_embedding_service.py @@ -49,6 +49,10 @@ from unittest.mock import Mock, patch import numpy as np import pytest +from sqlalchemy.exc import IntegrityError + +from core.entities.embedding_type import EmbeddingInputType +from core.rag.embedding.cached_embedding import CacheEmbedding from graphon.model_runtime.entities.model_entities import ModelPropertyKey from graphon.model_runtime.entities.text_embedding_entities import EmbeddingResult, EmbeddingUsage from graphon.model_runtime.errors.invoke import ( @@ -56,10 +60,6 @@ from graphon.model_runtime.errors.invoke import ( InvokeConnectionError, InvokeRateLimitError, ) -from sqlalchemy.exc import IntegrityError - -from core.entities.embedding_type import EmbeddingInputType -from core.rag.embedding.cached_embedding import CacheEmbedding from models.dataset import Embedding diff --git a/api/tests/unit_tests/core/rag/indexing/processor/test_paragraph_index_processor.py b/api/tests/unit_tests/core/rag/indexing/processor/test_paragraph_index_processor.py index d4b987c832..4ba4d54fa0 100644 --- a/api/tests/unit_tests/core/rag/indexing/processor/test_paragraph_index_processor.py +++ b/api/tests/unit_tests/core/rag/indexing/processor/test_paragraph_index_processor.py @@ -1,15 +1,16 @@ from types import SimpleNamespace +from typing import Any from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage -from graphon.model_runtime.entities.message_entities import AssistantPromptMessage, ImagePromptMessageContent -from graphon.model_runtime.entities.model_entities import ModelFeature from core.entities.knowledge_entities import PreviewDetail from core.rag.index_processor.constant.index_type import IndexTechniqueType from core.rag.index_processor.processor.paragraph_index_processor import ParagraphIndexProcessor from core.rag.models.document import AttachmentDocument, Document +from graphon.model_runtime.entities.llm_entities import LLMResult, LLMUsage +from graphon.model_runtime.entities.message_entities import AssistantPromptMessage, ImagePromptMessageContent +from graphon.model_runtime.entities.model_entities import ModelFeature class TestParagraphIndexProcessor: @@ -71,7 +72,9 @@ class TestParagraphIndexProcessor: with pytest.raises(ValueError, match="No rules found in process rule"): 
processor.transform([Document(page_content="text", metadata={})], process_rule={"mode": "custom"}) - def test_transform_validates_segmentation(self, processor: ParagraphIndexProcessor, process_rule: dict) -> None: + def test_transform_validates_segmentation( + self, processor: ParagraphIndexProcessor, process_rule: dict[str, Any] + ) -> None: rules_without_segmentation = SimpleNamespace(segmentation=None) with patch( @@ -84,7 +87,9 @@ class TestParagraphIndexProcessor: process_rule={"mode": "custom", "rules": {"enabled": True}}, ) - def test_transform_builds_split_documents(self, processor: ParagraphIndexProcessor, process_rule: dict) -> None: + def test_transform_builds_split_documents( + self, processor: ParagraphIndexProcessor, process_rule: dict[str, Any] + ) -> None: source_document = Document(page_content="source", metadata={"dataset_id": "dataset-1", "document_id": "doc-1"}) splitter = Mock() splitter.split_documents.return_value = [ diff --git a/api/tests/unit_tests/core/rag/indexing/processor/test_qa_index_processor.py b/api/tests/unit_tests/core/rag/indexing/processor/test_qa_index_processor.py index b1b1835a52..bfae9001b7 100644 --- a/api/tests/unit_tests/core/rag/indexing/processor/test_qa_index_processor.py +++ b/api/tests/unit_tests/core/rag/indexing/processor/test_qa_index_processor.py @@ -1,4 +1,5 @@ from types import SimpleNamespace +from typing import Any from unittest.mock import MagicMock, Mock, patch import pandas as pd @@ -77,7 +78,7 @@ class TestQAIndexProcessor: processor.transform([Document(page_content="text", metadata={})], process_rule={"mode": "custom"}) def test_transform_preview_calls_formatter_once( - self, processor: QAIndexProcessor, process_rule: dict, fake_flask_app + self, processor: QAIndexProcessor, process_rule: dict[str, Any], fake_flask_app ) -> None: document = Document(page_content="raw text", metadata={"dataset_id": "dataset-1", "document_id": "doc-1"}) split_node = Document(page_content=".question", metadata={}) @@ -119,7 +120,7 @@ class TestQAIndexProcessor: mock_format.assert_called_once() def test_transform_non_preview_uses_thread_batches( - self, processor: QAIndexProcessor, process_rule: dict, fake_flask_app + self, processor: QAIndexProcessor, process_rule: dict[str, Any], fake_flask_app ) -> None: documents = [ Document(page_content="doc-1", metadata={"document_id": "doc-1", "dataset_id": "dataset-1"}), diff --git a/api/tests/unit_tests/core/rag/indexing/test_indexing_runner.py b/api/tests/unit_tests/core/rag/indexing/test_indexing_runner.py index 641c5d9ba0..7c4defc180 100644 --- a/api/tests/unit_tests/core/rag/indexing/test_indexing_runner.py +++ b/api/tests/unit_tests/core/rag/indexing/test_indexing_runner.py @@ -53,7 +53,6 @@ from typing import Any from unittest.mock import MagicMock, Mock, patch import pytest -from graphon.model_runtime.entities.model_entities import ModelType from sqlalchemy.orm.exc import ObjectDeletedError from core.errors.error import ProviderTokenNotInitError @@ -64,6 +63,7 @@ from core.indexing_runner import ( ) from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType from core.rag.models.document import ChildDocument, Document +from graphon.model_runtime.entities.model_entities import ModelType from libs.datetime_utils import naive_utc_now from models.dataset import Dataset, DatasetProcessRule from models.dataset import Document as DatasetDocument diff --git a/api/tests/unit_tests/core/rag/rerank/test_reranker.py b/api/tests/unit_tests/core/rag/rerank/test_reranker.py 
index c279b00d3b..8bc7dbf70d 100644 --- a/api/tests/unit_tests/core/rag/rerank/test_reranker.py +++ b/api/tests/unit_tests/core/rag/rerank/test_reranker.py @@ -17,7 +17,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, Mock, patch import pytest -from graphon.model_runtime.entities.rerank_entities import RerankDocument, RerankResult from core.model_manager import ModelInstance from core.rag.index_processor.constant.doc_type import DocType @@ -29,6 +28,7 @@ from core.rag.rerank.rerank_factory import RerankRunnerFactory from core.rag.rerank.rerank_model import RerankModelRunner from core.rag.rerank.rerank_type import RerankMode from core.rag.rerank.weight_rerank import WeightRerankRunner +from graphon.model_runtime.entities.rerank_entities import RerankDocument, RerankResult def create_mock_model_instance() -> ModelInstance: diff --git a/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval.py b/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval.py index 1b17cbc368..89830f7517 100644 --- a/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval.py +++ b/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval.py @@ -1,13 +1,12 @@ import threading from contextlib import contextmanager, nullcontext from types import SimpleNamespace +from typing import Any from unittest.mock import MagicMock, Mock, patch from uuid import uuid4 import pytest from flask import Flask, current_app -from graphon.model_runtime.entities.llm_entities import LLMUsage -from graphon.model_runtime.entities.model_entities import ModelFeature from core.app.app_config.entities import ( DatasetEntity, @@ -34,6 +33,8 @@ from core.rag.retrieval.dataset_retrieval import DatasetRetrieval from core.rag.retrieval.retrieval_methods import RetrievalMethod from core.workflow.nodes.knowledge_retrieval import exc from core.workflow.nodes.knowledge_retrieval.retrieval import KnowledgeRetrievalRequest +from graphon.model_runtime.entities.llm_entities import LLMUsage +from graphon.model_runtime.entities.model_entities import ModelFeature from models.dataset import Dataset from models.enums import CreatorUserRole @@ -45,7 +46,7 @@ def create_mock_document( doc_id: str, score: float = 0.8, provider: str = "dify", - additional_metadata: dict | None = None, + additional_metadata: dict[str, Any] | None = None, ) -> Document: """ Create a mock Document object for testing. @@ -2021,7 +2022,7 @@ def create_mock_document_methods( doc_id: str, score: float = 0.8, provider: str = "dify", - additional_metadata: dict | None = None, + additional_metadata: dict[str, Any] | None = None, ) -> Document: """ Create a mock Document object for testing. 
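Most of the remaining hunks do not change behavior at all: they only move `from graphon. ...` imports out of the third-party group (next to `pytest`, SQLAlchemy, and similar) and into the first-party group, where they sort alphabetically between `core.*` and `libs`/`models`. A rough sketch of the target grouping — the concrete modules are illustrative, and treating `graphon` as first-party in the repo's isort/ruff configuration is an assumption, not something stated in this patch — looks like this:

```python
# Standard library imports come first.
from types import SimpleNamespace
from unittest.mock import MagicMock

# Third-party imports form the second group.
import pytest

# First-party imports form the last group, sorted alphabetically; in this patch
# `graphon` is ordered here (after `core.*`, before `libs`/`models`) instead of
# remaining in the third-party group above. Shown as comments because these
# packages only resolve inside the repository:
# from core.some_module import SomeClass
# from graphon.enums import SomeEnum
# from models import SomeModel
```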
@@ -4091,7 +4092,7 @@ def _doc( dataset_id: str = "dataset-1", document_id: str = "document-1", doc_id: str = "node-1", - extra: dict | None = None, + extra: dict[str, Any] | None = None, ) -> Document: metadata = { "score": score, diff --git a/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval_methods.py b/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval_methods.py index 48782515d0..90feb4cf01 100644 --- a/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval_methods.py +++ b/api/tests/unit_tests/core/rag/retrieval/test_dataset_retrieval_methods.py @@ -1,3 +1,4 @@ +from typing import Any from unittest.mock import MagicMock, Mock, patch from uuid import uuid4 @@ -55,7 +56,7 @@ def create_mock_document( doc_id: str, score: float = 0.8, provider: str = "dify", - additional_metadata: dict | None = None, + additional_metadata: dict[str, Any] | None = None, ) -> Document: """ Create a mock Document object for testing. diff --git a/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_function_call_router.py b/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_function_call_router.py index 5a2ecb8220..43c521dcfd 100644 --- a/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_function_call_router.py +++ b/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_function_call_router.py @@ -1,8 +1,7 @@ from unittest.mock import Mock -from graphon.model_runtime.entities.llm_entities import LLMUsage - from core.rag.retrieval.router.multi_dataset_function_call_router import FunctionCallMultiDatasetRouter +from graphon.model_runtime.entities.llm_entities import LLMUsage class TestFunctionCallMultiDatasetRouter: diff --git a/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_react_route.py b/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_react_route.py index 539ac0f849..c56528cf55 100644 --- a/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_react_route.py +++ b/api/tests/unit_tests/core/rag/retrieval/test_multi_dataset_react_route.py @@ -1,13 +1,12 @@ from types import SimpleNamespace from unittest.mock import Mock, patch +from core.rag.retrieval.output_parser.react_output import ReactAction, ReactFinish +from core.rag.retrieval.router.multi_dataset_react_route import ReactMultiDatasetRouter from graphon.model_runtime.entities.llm_entities import LLMUsage from graphon.model_runtime.entities.message_entities import PromptMessageRole from graphon.model_runtime.entities.model_entities import ModelType -from core.rag.retrieval.output_parser.react_output import ReactAction, ReactFinish -from core.rag.retrieval.router.multi_dataset_react_route import ReactMultiDatasetRouter - class TestReactMultiDatasetRouter: def test_invoke_returns_none_when_no_tools(self) -> None: diff --git a/api/tests/unit_tests/core/repositories/test_celery_workflow_execution_repository.py b/api/tests/unit_tests/core/repositories/test_celery_workflow_execution_repository.py index e229d5fc1a..3d3322094e 100644 --- a/api/tests/unit_tests/core/repositories/test_celery_workflow_execution_repository.py +++ b/api/tests/unit_tests/core/repositories/test_celery_workflow_execution_repository.py @@ -9,10 +9,10 @@ from unittest.mock import Mock, patch from uuid import uuid4 import pytest -from graphon.entities import WorkflowExecution -from graphon.enums import WorkflowType from core.repositories.celery_workflow_execution_repository import CeleryWorkflowExecutionRepository +from graphon.entities import WorkflowExecution +from graphon.enums import WorkflowType 
from libs.datetime_utils import naive_utc_now from models import Account, EndUser from models.enums import WorkflowRunTriggeredFrom diff --git a/api/tests/unit_tests/core/repositories/test_celery_workflow_node_execution_repository.py b/api/tests/unit_tests/core/repositories/test_celery_workflow_node_execution_repository.py index 7dbf78d0f0..05b4f3a053 100644 --- a/api/tests/unit_tests/core/repositories/test_celery_workflow_node_execution_repository.py +++ b/api/tests/unit_tests/core/repositories/test_celery_workflow_node_execution_repository.py @@ -9,14 +9,14 @@ from unittest.mock import Mock, patch from uuid import uuid4 import pytest + +from core.repositories.celery_workflow_node_execution_repository import CeleryWorkflowNodeExecutionRepository +from core.repositories.factory import OrderConfig from graphon.entities.workflow_node_execution import ( WorkflowNodeExecution, WorkflowNodeExecutionStatus, ) from graphon.enums import BuiltinNodeTypes - -from core.repositories.celery_workflow_node_execution_repository import CeleryWorkflowNodeExecutionRepository -from core.repositories.factory import OrderConfig from libs.datetime_utils import naive_utc_now from models import Account, EndUser from models.workflow import WorkflowNodeExecutionTriggeredFrom diff --git a/api/tests/unit_tests/core/repositories/test_human_input_form_repository_impl.py b/api/tests/unit_tests/core/repositories/test_human_input_form_repository_impl.py index 0fc82dda53..8be1ac318c 100644 --- a/api/tests/unit_tests/core/repositories/test_human_input_form_repository_impl.py +++ b/api/tests/unit_tests/core/repositories/test_human_input_form_repository_impl.py @@ -7,11 +7,6 @@ from datetime import datetime from types import SimpleNamespace import pytest -from graphon.nodes.human_input.entities import ( - FormDefinition, - UserAction, -) -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from core.repositories.human_input_repository import ( HumanInputFormRecord, @@ -26,6 +21,11 @@ from core.workflow.human_input_compat import ( ExternalRecipient, MemberRecipient, ) +from graphon.nodes.human_input.entities import ( + FormDefinition, + UserAction, +) +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from libs.datetime_utils import naive_utc_now from models.human_input import ( EmailExternalRecipientPayload, diff --git a/api/tests/unit_tests/core/repositories/test_human_input_repository.py b/api/tests/unit_tests/core/repositories/test_human_input_repository.py index 8ff0e40587..1297a95df1 100644 --- a/api/tests/unit_tests/core/repositories/test_human_input_repository.py +++ b/api/tests/unit_tests/core/repositories/test_human_input_repository.py @@ -9,8 +9,6 @@ from typing import Any from unittest.mock import MagicMock import pytest -from graphon.nodes.human_input.entities import HumanInputNodeData, UserAction -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from core.repositories.human_input_repository import ( FormCreateParams, @@ -31,6 +29,8 @@ from core.workflow.human_input_compat import ( MemberRecipient, WebAppDeliveryMethod, ) +from graphon.nodes.human_input.entities import HumanInputNodeData, UserAction +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from libs.datetime_utils import naive_utc_now from models.human_input import HumanInputFormRecipient, RecipientType diff --git a/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_execution_repository.py 
b/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_execution_repository.py index e5c3e85487..a08c5729cb 100644 --- a/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_execution_repository.py +++ b/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_execution_repository.py @@ -3,12 +3,12 @@ from unittest.mock import MagicMock from uuid import uuid4 import pytest -from graphon.entities import WorkflowExecution -from graphon.enums import WorkflowExecutionStatus, WorkflowType from sqlalchemy.engine import Engine from sqlalchemy.orm import sessionmaker from core.repositories.sqlalchemy_workflow_execution_repository import SQLAlchemyWorkflowExecutionRepository +from graphon.entities import WorkflowExecution +from graphon.enums import WorkflowExecutionStatus, WorkflowType from models import Account, CreatorUserRole, EndUser, WorkflowRun from models.enums import WorkflowRunTriggeredFrom diff --git a/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_node_execution_repository.py b/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_node_execution_repository.py index 5b4d26b780..6af7b02d4c 100644 --- a/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_node_execution_repository.py +++ b/api/tests/unit_tests/core/repositories/test_sqlalchemy_workflow_node_execution_repository.py @@ -10,12 +10,6 @@ from unittest.mock import MagicMock, Mock import psycopg2.errors import pytest -from graphon.entities import WorkflowNodeExecution -from graphon.enums import ( - BuiltinNodeTypes, - WorkflowNodeExecutionMetadataKey, - WorkflowNodeExecutionStatus, -) from sqlalchemy import Engine, create_engine from sqlalchemy.exc import IntegrityError from sqlalchemy.orm import sessionmaker @@ -29,6 +23,12 @@ from core.repositories.sqlalchemy_workflow_node_execution_repository import ( _find_first, _replace_or_append_offload, ) +from graphon.entities import WorkflowNodeExecution +from graphon.enums import ( + BuiltinNodeTypes, + WorkflowNodeExecutionMetadataKey, + WorkflowNodeExecutionStatus, +) from models import Account, EndUser from models.enums import ExecutionOffLoadType from models.workflow import WorkflowNodeExecutionModel, WorkflowNodeExecutionOffload, WorkflowNodeExecutionTriggeredFrom diff --git a/api/tests/unit_tests/core/repositories/test_workflow_node_execution_conflict_handling.py b/api/tests/unit_tests/core/repositories/test_workflow_node_execution_conflict_handling.py index 84fe522388..abdbc72085 100644 --- a/api/tests/unit_tests/core/repositories/test_workflow_node_execution_conflict_handling.py +++ b/api/tests/unit_tests/core/repositories/test_workflow_node_execution_conflict_handling.py @@ -4,17 +4,17 @@ from unittest.mock import MagicMock, Mock import psycopg2.errors import pytest -from graphon.entities.workflow_node_execution import ( - WorkflowNodeExecution, - WorkflowNodeExecutionStatus, -) -from graphon.enums import BuiltinNodeTypes from sqlalchemy.exc import IntegrityError from sqlalchemy.orm import sessionmaker from core.repositories.sqlalchemy_workflow_node_execution_repository import ( SQLAlchemyWorkflowNodeExecutionRepository, ) +from graphon.entities.workflow_node_execution import ( + WorkflowNodeExecution, + WorkflowNodeExecutionStatus, +) +from graphon.enums import BuiltinNodeTypes from libs.datetime_utils import naive_utc_now from models import Account, WorkflowNodeExecutionTriggeredFrom diff --git a/api/tests/unit_tests/core/repositories/test_workflow_node_execution_truncation.py 
b/api/tests/unit_tests/core/repositories/test_workflow_node_execution_truncation.py index 27729e7f06..5af1376a0a 100644 --- a/api/tests/unit_tests/core/repositories/test_workflow_node_execution_truncation.py +++ b/api/tests/unit_tests/core/repositories/test_workflow_node_execution_truncation.py @@ -11,17 +11,17 @@ from datetime import UTC, datetime from typing import Any from unittest.mock import MagicMock -from graphon.entities.workflow_node_execution import ( - WorkflowNodeExecution, - WorkflowNodeExecutionStatus, -) -from graphon.enums import BuiltinNodeTypes from sqlalchemy import Engine from configs import dify_config from core.repositories.sqlalchemy_workflow_node_execution_repository import ( SQLAlchemyWorkflowNodeExecutionRepository, ) +from graphon.entities.workflow_node_execution import ( + WorkflowNodeExecution, + WorkflowNodeExecutionStatus, +) +from graphon.enums import BuiltinNodeTypes from models import Account, WorkflowNodeExecutionTriggeredFrom from models.enums import ExecutionOffLoadType from models.workflow import WorkflowNodeExecutionModel, WorkflowNodeExecutionOffload diff --git a/api/tests/unit_tests/core/test_file.py b/api/tests/unit_tests/core/test_file.py index ac65d0c02b..f17927f16b 100644 --- a/api/tests/unit_tests/core/test_file.py +++ b/api/tests/unit_tests/core/test_file.py @@ -1,7 +1,6 @@ import json from graphon.file import File, FileTransferMethod, FileType, FileUploadConfig - from models.workflow import Workflow diff --git a/api/tests/unit_tests/core/test_model_manager.py b/api/tests/unit_tests/core/test_model_manager.py index f5efb78b61..afea9144c0 100644 --- a/api/tests/unit_tests/core/test_model_manager.py +++ b/api/tests/unit_tests/core/test_model_manager.py @@ -2,12 +2,12 @@ from unittest.mock import MagicMock, patch import pytest import redis -from graphon.model_runtime.entities.model_entities import ModelType from pytest_mock import MockerFixture from core.entities.provider_entities import ModelLoadBalancingConfiguration from core.model_manager import LBModelManager from extensions.ext_redis import redis_client +from graphon.model_runtime.entities.model_entities import ModelType @pytest.fixture diff --git a/api/tests/unit_tests/core/test_provider_configuration.py b/api/tests/unit_tests/core/test_provider_configuration.py index 331166fe63..b19a21d7f4 100644 --- a/api/tests/unit_tests/core/test_provider_configuration.py +++ b/api/tests/unit_tests/core/test_provider_configuration.py @@ -1,15 +1,6 @@ from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import ModelType -from graphon.model_runtime.entities.provider_entities import ( - ConfigurateMethod, - CredentialFormSchema, - FormOption, - FormType, - ProviderEntity, -) from core.entities.provider_configuration import ProviderConfiguration, SystemConfigurationStatus from core.entities.provider_entities import ( @@ -21,6 +12,15 @@ from core.entities.provider_entities import ( RestrictModel, SystemConfiguration, ) +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import ModelType +from graphon.model_runtime.entities.provider_entities import ( + ConfigurateMethod, + CredentialFormSchema, + FormOption, + FormType, + ProviderEntity, +) from models.provider import Provider, ProviderType diff --git a/api/tests/unit_tests/core/test_provider_manager.py b/api/tests/unit_tests/core/test_provider_manager.py index 
ee26172459..f45b43082c 100644 --- a/api/tests/unit_tests/core/test_provider_manager.py +++ b/api/tests/unit_tests/core/test_provider_manager.py @@ -2,12 +2,12 @@ from types import SimpleNamespace from unittest.mock import MagicMock, Mock, PropertyMock, patch import pytest -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import ModelType from pytest_mock import MockerFixture from core.entities.provider_entities import ModelSettings from core.provider_manager import ProviderManager +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import ModelType from models.provider import LoadBalancingModelConfig, ProviderModelSetting, TenantDefaultModel from models.provider_ids import ModelProviderID diff --git a/api/tests/unit_tests/core/tools/test_builtin_tool_base.py b/api/tests/unit_tests/core/tools/test_builtin_tool_base.py index 5d744f88c9..1ff81f6120 100644 --- a/api/tests/unit_tests/core/tools/test_builtin_tool_base.py +++ b/api/tests/unit_tests/core/tools/test_builtin_tool_base.py @@ -6,13 +6,13 @@ from typing import Any from unittest.mock import patch import pytest -from graphon.model_runtime.entities.message_entities import UserPromptMessage from core.app.entities.app_invoke_entities import InvokeFrom from core.tools.__base.tool_runtime import ToolRuntime from core.tools.builtin_tool.tool import BuiltinTool from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolEntity, ToolIdentity, ToolInvokeMessage, ToolProviderType +from graphon.model_runtime.entities.message_entities import UserPromptMessage class _BuiltinDummyTool(BuiltinTool): diff --git a/api/tests/unit_tests/core/tools/test_builtin_tools_extra.py b/api/tests/unit_tests/core/tools/test_builtin_tools_extra.py index ee0ce51eec..c7829fc0d7 100644 --- a/api/tests/unit_tests/core/tools/test_builtin_tools_extra.py +++ b/api/tests/unit_tests/core/tools/test_builtin_tools_extra.py @@ -6,8 +6,6 @@ from datetime import date from types import SimpleNamespace import pytest -from graphon.file import FileType -from graphon.model_runtime.entities.model_entities import ModelPropertyKey from core.app.entities.app_invoke_entities import InvokeFrom from core.tools.__base.tool_runtime import ToolRuntime @@ -29,6 +27,8 @@ from core.tools.builtin_tool.tool import BuiltinTool from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolEntity, ToolIdentity, ToolInvokeMessage from core.tools.errors import ToolInvokeError +from graphon.file import FileType +from graphon.model_runtime.entities.model_entities import ModelPropertyKey def _build_builtin_tool(tool_cls: type[BuiltinTool]) -> BuiltinTool: diff --git a/api/tests/unit_tests/core/tools/test_custom_tool.py b/api/tests/unit_tests/core/tools/test_custom_tool.py index 79b8eaaa87..f35546b025 100644 --- a/api/tests/unit_tests/core/tools/test_custom_tool.py +++ b/api/tests/unit_tests/core/tools/test_custom_tool.py @@ -1,6 +1,7 @@ from __future__ import annotations from types import SimpleNamespace +from typing import Any import httpx import pytest @@ -14,7 +15,7 @@ from core.tools.entities.tool_entities import ToolEntity, ToolIdentity, ToolInvo from core.tools.errors import ToolInvokeError, ToolParameterValidationError, ToolProviderCredentialValidationError -def _build_tool(*, openapi: dict | None = None) -> ApiTool: +def _build_tool(*, openapi: dict[str, Any] | None 
= None) -> ApiTool: entity = ToolEntity( identity=ToolIdentity( author="author", diff --git a/api/tests/unit_tests/core/tools/test_tool_file_manager.py b/api/tests/unit_tests/core/tools/test_tool_file_manager.py index 2889cb9db1..ccffdf16d1 100644 --- a/api/tests/unit_tests/core/tools/test_tool_file_manager.py +++ b/api/tests/unit_tests/core/tools/test_tool_file_manager.py @@ -12,9 +12,9 @@ from unittest.mock import MagicMock, Mock, patch import httpx import pytest -from graphon.file import FileTransferMethod from core.tools.tool_file_manager import ToolFileManager +from graphon.file import FileTransferMethod def _setup_tool_file_signing(monkeypatch: pytest.MonkeyPatch) -> dict[str, str]: diff --git a/api/tests/unit_tests/core/tools/test_tool_label_manager.py b/api/tests/unit_tests/core/tools/test_tool_label_manager.py index 8c0e7e9419..e13f430f9b 100644 --- a/api/tests/unit_tests/core/tools/test_tool_label_manager.py +++ b/api/tests/unit_tests/core/tools/test_tool_label_manager.py @@ -2,7 +2,7 @@ from __future__ import annotations from types import SimpleNamespace from typing import Any -from unittest.mock import PropertyMock, patch +from unittest.mock import MagicMock, PropertyMock, patch import pytest @@ -12,11 +12,13 @@ from core.tools.tool_label_manager import ToolLabelManager from core.tools.workflow_as_tool.provider import WorkflowToolProviderController +# Create a mock class for testing abstract/base classes class _ConcreteBuiltinToolProviderController(BuiltinToolProviderController): def _validate_credentials(self, user_id: str, credentials: dict[str, Any]): return None +# Factory function to create a "lightweight" controller for testing def _api_controller(provider_id: str = "api-1") -> ApiToolProviderController: controller = object.__new__(ApiToolProviderController) controller.provider_id = provider_id @@ -29,6 +31,7 @@ def _workflow_controller(provider_id: str = "wf-1") -> WorkflowToolProviderContr return controller +# Test pure logic: filtering and deduplication def test_tool_label_manager_filter_tool_labels(): filtered = ToolLabelManager.filter_tool_labels(["search", "search", "invalid", "news"]) assert set(filtered) == {"search", "news"} @@ -36,22 +39,68 @@ def test_tool_label_manager_filter_tool_labels(): def test_tool_label_manager_update_tool_labels_db(): + """ + Test the database update logic for tool labels. + Focus: Verify that labels are filtered, de-duplicated, and safely handled within a database session. + """ + # 1. Setup expected data from the controller controller = _api_controller("api-1") - with patch("core.tools.tool_label_manager.db") as mock_db: + expected_id = controller.provider_id + expected_type = controller.provider_type + + # 2. Patching External Dependencies + # - We patch 'db' to prevent Flask from trying to access a real database. + # - We patch 'sessionmaker' to intercept and control the creation of SQLAlchemy sessions. + with ( + patch("core.tools.tool_label_manager.db"), + patch("core.tools.tool_label_manager.sessionmaker") as mock_sessionmaker, + ): + # 3. Constructing the "Mocking Chain" + # In the business logic, we use: with sessionmaker(db.engine).begin() as _session: + # We need to link our 'mock_session' to the end of this complex context manager chain: + # Step A: sessionmaker(db.engine) -> returns an object (mock_sessionmaker.return_value) + # Step B: .begin() -> returns a context manager (begin.return_value) + # Step C: with ... 
as _session: -> calls __enter__(), and _session gets the __enter__.return_value + mock_session = MagicMock() + mock_sessionmaker.return_value.begin.return_value.__enter__.return_value = mock_session + + # 4. Trigger the logic under test + # Input: ["search", "search", "invalid"] + # Logic: + # - "invalid" should be filtered out (not in default_tool_label_name_list). + # - The duplicate "search" should be merged (unique labels). ToolLabelManager.update_tool_labels(controller, ["search", "search", "invalid"]) - mock_db.session.execute.assert_called_once() - # only one valid unique label should be inserted. - assert mock_db.session.add.call_count == 1 - mock_db.session.commit.assert_called_once() + # 5. Behavior Assertion: DELETE operation + # Verify that the manager first attempts to clear existing labels for this specific tool. + # This ensures the update is idempotent. + mock_session.execute.assert_called_once() + + # 6. Behavior Assertion: INSERT operation + # Verify that only ONE valid label ("search") was added after filtering and deduplication. + # If call_count == 1, it proves filter_tool_labels() worked as expected. + assert mock_session.add.call_count == 1 + + # 7. State Assertion: Data Integrity & Isolation + # Inspect the actual object passed to session.add() to ensure it has correct properties. + # This confirms that the data isolation (tool_id + tool_type) we refactored is active. + call_args = mock_session.add.call_args + added_label = call_args[0][0] # Retrieve the ToolLabelBinding instance + + assert added_label.label_name == "search", "The label name should be 'search' after filtering." + assert added_label.tool_id == expected_id, "The tool_id must match the provider_id for correct binding." + assert added_label.tool_type == expected_type, "Isolation failed: tool_type must be verified during update." +# Test error handling def test_tool_label_manager_update_tool_labels_unsupported(): with pytest.raises(ValueError, match="Unsupported tool type"): ToolLabelManager.update_tool_labels(object(), ["search"]) # type: ignore[arg-type] +# Test retrieval logic def test_tool_label_manager_get_tool_labels_for_builtin_and_db(): + # Mocking a property (@property) using PropertyMock with patch.object( _ConcreteBuiltinToolProviderController, "tool_labels", @@ -62,29 +111,67 @@ def test_tool_label_manager_get_tool_labels_for_builtin_and_db(): assert ToolLabelManager.get_tool_labels(builtin) == ["search", "news"] api = _api_controller("api-1") - with patch("core.tools.tool_label_manager.db") as mock_db: - mock_db.session.scalars.return_value.all.return_value = ["search", "news"] - labels = ToolLabelManager.get_tool_labels(api) - assert labels == ["search", "news"] + with ( + patch("core.tools.tool_label_manager.db"), + patch("core.tools.tool_label_manager.sessionmaker") as mock_sessionmaker, + ): + mock_session = MagicMock() + mock_sessionmaker.return_value.begin.return_value.__enter__.return_value = mock_session + # Inject mock data into the query result: session.scalars(stmt).all() + mock_session.scalars.return_value.all.return_value = ["search", "news"] + + labels = ToolLabelManager.get_tool_labels(api) + assert labels == ["search", "news"] + + +def test_tool_label_manager_get_tool_labels_unsupported(): + """ + Negative Test: Ensure get_tool_labels raises ValueError for unsupported controller types. + This protects the internal API contract against accidental regressions during refactoring. + """ + # Passing a generic object() which doesn't match Api, Workflow, or Builtin controllers. 
with pytest.raises(ValueError, match="Unsupported tool type"): ToolLabelManager.get_tool_labels(object()) # type: ignore[arg-type] +# Test batch processing and mapping def test_tool_label_manager_get_tools_labels_batch(): assert ToolLabelManager.get_tools_labels([]) == {} api = _api_controller("api-1") wf = _workflow_controller("wf-1") + + # SimpleNamespace is a quick way to simulate SQLAlchemy row objects records = [ SimpleNamespace(tool_id="api-1", label_name="search"), SimpleNamespace(tool_id="api-1", label_name="news"), SimpleNamespace(tool_id="wf-1", label_name="utilities"), ] - with patch("core.tools.tool_label_manager.db") as mock_db: - mock_db.session.scalars.return_value.all.return_value = records + + with ( + patch("core.tools.tool_label_manager.db"), + patch("core.tools.tool_label_manager.sessionmaker") as mock_sessionmaker, + ): + mock_session = MagicMock() + mock_sessionmaker.return_value.begin.return_value.__enter__.return_value = mock_session + + # Simulating the batch query result + mock_session.scalars.return_value.all.return_value = records + labels = ToolLabelManager.get_tools_labels([api, wf]) + + # Verify the final dictionary mapping assert labels == {"api-1": ["search", "news"], "wf-1": ["utilities"]} + +def test_tool_label_manager_get_tools_labels_unsupported(): + """ + Negative Test: Ensure get_tools_labels raises ValueError if the list contains + unsupported controller types, even alongside valid ones. + """ + api = _api_controller("api-1") + + # Passing a list with one valid controller and one invalid object() with pytest.raises(ValueError, match="Unsupported tool type"): ToolLabelManager.get_tools_labels([api, object()]) # type: ignore[list-item] diff --git a/api/tests/unit_tests/core/tools/utils/test_message_transformer.py b/api/tests/unit_tests/core/tools/utils/test_message_transformer.py index 6454a5bcd1..5f34135af4 100644 --- a/api/tests/unit_tests/core/tools/utils/test_message_transformer.py +++ b/api/tests/unit_tests/core/tools/utils/test_message_transformer.py @@ -1,3 +1,5 @@ +from typing import Any + import pytest import core.tools.utils.message_transformer as mt @@ -13,7 +15,7 @@ class _FakeToolFile: class _FakeToolFileManager: """Fake ToolFileManager to capture the mimetype passed in.""" - last_call: dict | None = None + last_call: dict[str, Any] | None = None def __init__(self, *args, **kwargs): pass diff --git a/api/tests/unit_tests/core/tools/utils/test_model_invocation_utils.py b/api/tests/unit_tests/core/tools/utils/test_model_invocation_utils.py index 52f262e1cf..44785f939c 100644 --- a/api/tests/unit_tests/core/tools/utils/test_model_invocation_utils.py +++ b/api/tests/unit_tests/core/tools/utils/test_model_invocation_utils.py @@ -10,9 +10,12 @@ from __future__ import annotations from decimal import Decimal from types import SimpleNamespace +from typing import Any from unittest.mock import Mock, patch import pytest + +from core.tools.utils.model_invocation_utils import InvokeModelError, ModelInvocationUtils from graphon.model_runtime.entities.model_entities import ModelPropertyKey from graphon.model_runtime.errors.invoke import ( InvokeAuthorizationError, @@ -22,10 +25,8 @@ from graphon.model_runtime.errors.invoke import ( InvokeServerUnavailableError, ) -from core.tools.utils.model_invocation_utils import InvokeModelError, ModelInvocationUtils - -def _mock_model_instance(*, schema: dict | None = None) -> SimpleNamespace: +def _mock_model_instance(*, schema: dict[str, Any] | None = None) -> SimpleNamespace: model_type_instance = Mock() 
model_type_instance.get_model_schema.return_value = ( SimpleNamespace(model_properties=schema or {}) if schema is not None else None diff --git a/api/tests/unit_tests/core/tools/utils/test_parser.py b/api/tests/unit_tests/core/tools/utils/test_parser.py index 40f91b12a0..032b1377a4 100644 --- a/api/tests/unit_tests/core/tools/utils/test_parser.py +++ b/api/tests/unit_tests/core/tools/utils/test_parser.py @@ -1,4 +1,5 @@ from json.decoder import JSONDecodeError +from typing import Any from unittest.mock import Mock, patch import pytest @@ -259,8 +260,8 @@ def test_parse_openapi_to_tool_bundle_server_env_and_refs(app): }, } - extra_info: dict = {} - warning: dict = {} + extra_info: dict[str, Any] = {} + warning: dict[str, Any] = {} with app.test_request_context(headers={"X-Request-Env": "prod"}): bundles = ApiBasedToolSchemaParser.parse_openapi_to_tool_bundle(openapi, extra_info=extra_info, warning=warning) @@ -298,7 +299,7 @@ def test_parse_swagger_to_openapi_branches(): } ) - warning: dict = {"seed": True} + warning: dict[str, Any] = {"seed": True} converted = ApiBasedToolSchemaParser.parse_swagger_to_openapi( { "servers": [{"url": "https://x"}], diff --git a/api/tests/unit_tests/core/tools/utils/test_workflow_configuration_sync.py b/api/tests/unit_tests/core/tools/utils/test_workflow_configuration_sync.py index 0e3a7e623a..43f3fbd5c9 100644 --- a/api/tests/unit_tests/core/tools/utils/test_workflow_configuration_sync.py +++ b/api/tests/unit_tests/core/tools/utils/test_workflow_configuration_sync.py @@ -1,9 +1,9 @@ import pytest -from graphon.variables.input_entities import VariableEntity, VariableEntityType from core.tools.entities.tool_entities import ToolParameter, WorkflowToolParameterConfiguration from core.tools.errors import WorkflowToolHumanInputNotSupportedError from core.tools.utils.workflow_configuration_sync import WorkflowToolConfigurationUtils +from graphon.variables.input_entities import VariableEntity, VariableEntityType def test_ensure_no_human_input_nodes_passes_for_non_human_input(): diff --git a/api/tests/unit_tests/core/tools/workflow_as_tool/test_provider.py b/api/tests/unit_tests/core/tools/workflow_as_tool/test_provider.py index 4767480a5a..5a585c609a 100644 --- a/api/tests/unit_tests/core/tools/workflow_as_tool/test_provider.py +++ b/api/tests/unit_tests/core/tools/workflow_as_tool/test_provider.py @@ -4,7 +4,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, Mock, patch import pytest -from graphon.variables.input_entities import VariableEntity, VariableEntityType from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ( @@ -14,6 +13,7 @@ from core.tools.entities.tool_entities import ( ToolProviderType, ) from core.tools.workflow_as_tool.provider import WorkflowToolProviderController +from graphon.variables.input_entities import VariableEntity, VariableEntityType def _controller() -> WorkflowToolProviderController: diff --git a/api/tests/unit_tests/core/tools/workflow_as_tool/test_tool.py b/api/tests/unit_tests/core/tools/workflow_as_tool/test_tool.py index c20edd7400..72a73dd936 100644 --- a/api/tests/unit_tests/core/tools/workflow_as_tool/test_tool.py +++ b/api/tests/unit_tests/core/tools/workflow_as_tool/test_tool.py @@ -11,7 +11,6 @@ from typing import Any from unittest.mock import MagicMock, Mock, patch import pytest -from graphon.file import FILE_MODEL_IDENTITY, FileTransferMethod, FileType from core.app.entities.app_invoke_entities import InvokeFrom from core.tools.__base.tool_runtime 
import ToolRuntime @@ -25,6 +24,7 @@ from core.tools.entities.tool_entities import ( ) from core.tools.errors import ToolInvokeError from core.tools.workflow_as_tool.tool import WorkflowTool +from graphon.file import FILE_MODEL_IDENTITY, FileTransferMethod, FileType class StubScalars: diff --git a/api/tests/unit_tests/core/trigger/debug/test_debug_event_selectors.py b/api/tests/unit_tests/core/trigger/debug/test_debug_event_selectors.py index 78622b78b6..fb7dc52838 100644 --- a/api/tests/unit_tests/core/trigger/debug/test_debug_event_selectors.py +++ b/api/tests/unit_tests/core/trigger/debug/test_debug_event_selectors.py @@ -8,10 +8,10 @@ and select_trigger_debug_events orchestrator. from __future__ import annotations from datetime import datetime +from typing import Any from unittest.mock import MagicMock, patch import pytest -from graphon.enums import BuiltinNodeTypes, NodeType from core.plugin.entities.request import TriggerInvokeEventResponse from core.trigger.constants import ( @@ -27,10 +27,11 @@ from core.trigger.debug.event_selectors import ( select_trigger_debug_events, ) from core.trigger.debug.events import PluginTriggerDebugEvent, WebhookDebugEvent +from graphon.enums import BuiltinNodeTypes, NodeType from tests.unit_tests.core.trigger.conftest import VALID_PROVIDER_ID -def _make_poller_args(node_config: dict | None = None) -> dict: +def _make_poller_args(node_config: dict[str, Any] | None = None) -> dict[str, Any]: return { "tenant_id": "t1", "user_id": "u1", diff --git a/api/tests/unit_tests/core/variables/test_segment.py b/api/tests/unit_tests/core/variables/test_segment.py index 7406b88270..72052c8c05 100644 --- a/api/tests/unit_tests/core/variables/test_segment.py +++ b/api/tests/unit_tests/core/variables/test_segment.py @@ -2,6 +2,11 @@ import dataclasses import orjson import pytest +from pydantic import BaseModel + +from core.helper import encrypter +from core.workflow.system_variables import build_bootstrap_variables, build_system_variables +from core.workflow.variable_pool_initializer import add_variables_to_pool from graphon.file import File, FileTransferMethod, FileType from graphon.runtime import VariablePool from graphon.variables.segment_group import SegmentGroup @@ -42,11 +47,6 @@ from graphon.variables.variables import ( StringVariable, Variable, ) -from pydantic import BaseModel - -from core.helper import encrypter -from core.workflow.system_variables import build_bootstrap_variables, build_system_variables -from core.workflow.variable_pool_initializer import add_variables_to_pool def _build_variable_pool( diff --git a/api/tests/unit_tests/core/variables/test_segment_type.py b/api/tests/unit_tests/core/variables/test_segment_type.py index 37ecd2890b..d4e862220a 100644 --- a/api/tests/unit_tests/core/variables/test_segment_type.py +++ b/api/tests/unit_tests/core/variables/test_segment_type.py @@ -1,4 +1,5 @@ import pytest + from graphon.variables.segment_group import SegmentGroup from graphon.variables.segments import StringSegment from graphon.variables.types import ArrayValidation, SegmentType diff --git a/api/tests/unit_tests/core/variables/test_segment_type_validation.py b/api/tests/unit_tests/core/variables/test_segment_type_validation.py index 09254e17a3..94e788edb2 100644 --- a/api/tests/unit_tests/core/variables/test_segment_type_validation.py +++ b/api/tests/unit_tests/core/variables/test_segment_type_validation.py @@ -9,6 +9,7 @@ from dataclasses import dataclass from typing import Any import pytest + from graphon.file import File, FileTransferMethod, 
FileType from graphon.variables.segment_group import SegmentGroup from graphon.variables.segments import ( diff --git a/api/tests/unit_tests/core/variables/test_variables.py b/api/tests/unit_tests/core/variables/test_variables.py index 75b01bf42e..dae5e1ce98 100644 --- a/api/tests/unit_tests/core/variables/test_variables.py +++ b/api/tests/unit_tests/core/variables/test_variables.py @@ -1,4 +1,6 @@ import pytest +from pydantic import ValidationError + from graphon.variables import ( ArrayFileVariable, ArrayVariable, @@ -10,7 +12,6 @@ from graphon.variables import ( StringVariable, ) from graphon.variables.variables import VariableBase -from pydantic import ValidationError def test_frozen_variables(): diff --git a/api/tests/unit_tests/core/workflow/graph_engine/layers/conftest.py b/api/tests/unit_tests/core/workflow/graph_engine/layers/conftest.py index 41627f5e0b..025d79b25d 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/layers/conftest.py +++ b/api/tests/unit_tests/core/workflow/graph_engine/layers/conftest.py @@ -5,12 +5,13 @@ Shared fixtures for ObservabilityLayer tests. from unittest.mock import MagicMock, patch import pytest -from graphon.enums import BuiltinNodeTypes from opentelemetry.sdk.trace import TracerProvider from opentelemetry.sdk.trace.export import SimpleSpanProcessor from opentelemetry.sdk.trace.export.in_memory_span_exporter import InMemorySpanExporter from opentelemetry.trace import set_tracer_provider +from graphon.enums import BuiltinNodeTypes + @pytest.fixture def memory_span_exporter(): @@ -61,9 +62,8 @@ def mock_llm_node(): @pytest.fixture def mock_tool_node(): """Create a mock Tool Node with tool-specific attributes.""" - from graphon.nodes.tool.entities import ToolNodeData - from core.tools.entities.tool_entities import ToolProviderType + from graphon.nodes.tool.entities import ToolNodeData node = MagicMock() node.id = "test-tool-node-id" diff --git a/api/tests/unit_tests/core/workflow/graph_engine/layers/test_llm_quota.py b/api/tests/unit_tests/core/workflow/graph_engine/layers/test_llm_quota.py index 99d131737e..5d6667257f 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/layers/test_llm_quota.py +++ b/api/tests/unit_tests/core/workflow/graph_engine/layers/test_llm_quota.py @@ -3,17 +3,16 @@ from datetime import datetime from types import SimpleNamespace from unittest.mock import MagicMock, patch +from core.app.entities.app_invoke_entities import DifyRunContext, InvokeFrom, UserFrom +from core.app.workflow.layers.llm_quota import LLMQuotaLayer +from core.errors.error import QuotaExceededError +from core.model_manager import ModelInstance from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus from graphon.graph_engine.entities.commands import CommandType from graphon.graph_events import NodeRunSucceededEvent from graphon.model_runtime.entities.llm_entities import LLMUsage from graphon.node_events import NodeRunResult -from core.app.entities.app_invoke_entities import DifyRunContext, InvokeFrom, UserFrom -from core.app.workflow.layers.llm_quota import LLMQuotaLayer -from core.errors.error import QuotaExceededError -from core.model_manager import ModelInstance - def _build_dify_context() -> DifyRunContext: return DifyRunContext( diff --git a/api/tests/unit_tests/core/workflow/graph_engine/layers/test_observability.py b/api/tests/unit_tests/core/workflow/graph_engine/layers/test_observability.py index 9cf72763ee..919f15efd0 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/layers/test_observability.py +++ 
b/api/tests/unit_tests/core/workflow/graph_engine/layers/test_observability.py @@ -13,10 +13,10 @@ Test coverage: from unittest.mock import patch import pytest -from graphon.enums import BuiltinNodeTypes from opentelemetry.trace import StatusCode from core.app.workflow.layers.observability import ObservabilityLayer +from graphon.enums import BuiltinNodeTypes class TestObservabilityLayerInitialization: diff --git a/api/tests/unit_tests/core/workflow/graph_engine/test_mock_factory.py b/api/tests/unit_tests/core/workflow/graph_engine/test_mock_factory.py index 88989db856..76b2984a4b 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/test_mock_factory.py +++ b/api/tests/unit_tests/core/workflow/graph_engine/test_mock_factory.py @@ -7,12 +7,11 @@ requiring external services (LLM, Agent, Tool, Knowledge Retrieval, HTTP Request from typing import TYPE_CHECKING, Any +from core.workflow.node_factory import DifyNodeFactory from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter from graphon.enums import BuiltinNodeTypes, NodeType from graphon.nodes.base.node import Node -from core.workflow.node_factory import DifyNodeFactory - from .test_mock_nodes import ( MockAgentNode, MockCodeNode, diff --git a/api/tests/unit_tests/core/workflow/graph_engine/test_mock_nodes.py b/api/tests/unit_tests/core/workflow/graph_engine/test_mock_nodes.py index 8b7fbd1b30..971b9b2bbf 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/test_mock_nodes.py +++ b/api/tests/unit_tests/core/workflow/graph_engine/test_mock_nodes.py @@ -10,6 +10,10 @@ from collections.abc import Generator, Mapping from typing import TYPE_CHECKING, Any, Optional from unittest.mock import MagicMock +from core.model_manager import ModelInstance +from core.workflow.node_runtime import DifyToolNodeRuntime +from core.workflow.nodes.agent import AgentNode +from core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node import KnowledgeRetrievalNode from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus from graphon.model_runtime.entities.llm_entities import LLMUsage from graphon.node_events import NodeRunResult, StreamChunkEvent, StreamCompletedEvent @@ -27,11 +31,6 @@ from graphon.nodes.template_transform import TemplateTransformNode from graphon.nodes.tool import ToolNode from graphon.template_rendering import Jinja2TemplateRenderer, TemplateRenderError -from core.model_manager import ModelInstance -from core.workflow.node_runtime import DifyToolNodeRuntime -from core.workflow.nodes.agent import AgentNode -from core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node import KnowledgeRetrievalNode - if TYPE_CHECKING: from graphon.entities import GraphInitParams from graphon.runtime import GraphRuntimeState diff --git a/api/tests/unit_tests/core/workflow/graph_engine/test_parallel_human_input_join_resume.py b/api/tests/unit_tests/core/workflow/graph_engine/test_parallel_human_input_join_resume.py index 8311a1e847..55a329eba9 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/test_parallel_human_input_join_resume.py +++ b/api/tests/unit_tests/core/workflow/graph_engine/test_parallel_human_input_join_resume.py @@ -4,6 +4,13 @@ from dataclasses import dataclass from datetime import datetime, timedelta from typing import Any, Protocol +from core.repositories.human_input_repository import ( + FormCreateParams, + HumanInputFormEntity, + HumanInputFormRepository, +) +from core.workflow.node_runtime import DifyHumanInputNodeRuntime +from 
core.workflow.system_variables import build_system_variables from graphon.entities import WorkflowStartReason from graphon.graph import Graph from graphon.graph_engine import GraphEngine, GraphEngineConfig @@ -23,14 +30,6 @@ from graphon.nodes.human_input.human_input_node import HumanInputNode from graphon.nodes.start.entities import StartNodeData from graphon.nodes.start.start_node import StartNode from graphon.runtime import GraphRuntimeState, VariablePool - -from core.repositories.human_input_repository import ( - FormCreateParams, - HumanInputFormEntity, - HumanInputFormRepository, -) -from core.workflow.node_runtime import DifyHumanInputNodeRuntime -from core.workflow.system_variables import build_system_variables from libs.datetime_utils import naive_utc_now from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/graph_engine/test_table_runner.py b/api/tests/unit_tests/core/workflow/graph_engine/test_table_runner.py index b11f957677..7d23b63049 100644 --- a/api/tests/unit_tests/core/workflow/graph_engine/test_table_runner.py +++ b/api/tests/unit_tests/core/workflow/graph_engine/test_table_runner.py @@ -19,6 +19,11 @@ from functools import lru_cache from pathlib import Path from typing import Any +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom +from core.tools.utils.yaml_utils import _load_yaml_file +from core.workflow.node_factory import DifyNodeFactory, get_default_root_node_id +from core.workflow.system_variables import build_bootstrap_variables, build_system_variables +from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool from graphon.entities import GraphInitParams from graphon.graph import Graph from graphon.graph_engine import GraphEngine, GraphEngineConfig @@ -39,12 +44,6 @@ from graphon.variables import ( StringVariable, ) -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom -from core.tools.utils.yaml_utils import _load_yaml_file -from core.workflow.node_factory import DifyNodeFactory, get_default_root_node_id -from core.workflow.system_variables import build_bootstrap_variables, build_system_variables -from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool - from .test_mock_config import MockConfig from .test_mock_factory import MockNodeFactory diff --git a/api/tests/unit_tests/core/workflow/nodes/agent/test_message_transformer.py b/api/tests/unit_tests/core/workflow/nodes/agent/test_message_transformer.py index cbc920705c..1f4509af9a 100644 --- a/api/tests/unit_tests/core/workflow/nodes/agent/test_message_transformer.py +++ b/api/tests/unit_tests/core/workflow/nodes/agent/test_message_transformer.py @@ -1,9 +1,8 @@ from unittest.mock import patch -from graphon.enums import BuiltinNodeTypes - from core.tools.utils.message_transformer import ToolFileMessageTransformer from core.workflow.nodes.agent.message_transformer import AgentMessageTransformer +from graphon.enums import BuiltinNodeTypes def test_transform_passes_conversation_id_to_tool_file_message_transformer() -> None: diff --git a/api/tests/unit_tests/core/workflow/nodes/agent/test_runtime_support.py b/api/tests/unit_tests/core/workflow/nodes/agent/test_runtime_support.py index 59dd763b59..c86de7f6e6 100644 --- a/api/tests/unit_tests/core/workflow/nodes/agent/test_runtime_support.py +++ b/api/tests/unit_tests/core/workflow/nodes/agent/test_runtime_support.py @@ -1,9 +1,8 @@ from types 
import SimpleNamespace from unittest.mock import Mock, patch -from graphon.model_runtime.entities.model_entities import ModelType - from core.workflow.nodes.agent.runtime_support import AgentRuntimeSupport +from graphon.model_runtime.entities.model_entities import ModelType def test_fetch_model_reuses_single_model_assembly(): diff --git a/api/tests/unit_tests/core/workflow/nodes/answer/test_answer.py b/api/tests/unit_tests/core/workflow/nodes/answer/test_answer.py index 7195471eb6..9c0ad25b58 100644 --- a/api/tests/unit_tests/core/workflow/nodes/answer/test_answer.py +++ b/api/tests/unit_tests/core/workflow/nodes/answer/test_answer.py @@ -2,15 +2,14 @@ import time import uuid from unittest.mock import MagicMock -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.graph import Graph -from graphon.nodes.answer.answer_node import AnswerNode -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.workflow.node_factory import DifyNodeFactory from core.workflow.system_variables import build_system_variables from extensions.ext_database import db +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.graph import Graph +from graphon.nodes.answer.answer_node import AnswerNode +from graphon.runtime import GraphRuntimeState, VariablePool from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/base/test_base_node.py b/api/tests/unit_tests/core/workflow/nodes/base/test_base_node.py index 343bcd3919..ec4cef1955 100644 --- a/api/tests/unit_tests/core/workflow/nodes/base/test_base_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/base/test_base_node.py @@ -1,10 +1,10 @@ import pytest + +from core.workflow.node_factory import get_node_type_classes_mapping from graphon.entities.base_node_data import BaseNodeData from graphon.enums import BuiltinNodeTypes, NodeType from graphon.nodes.base.node import Node -from core.workflow.node_factory import get_node_type_classes_mapping - # Ensures that all production node classes are imported and registered. 
_ = get_node_type_classes_mapping() diff --git a/api/tests/unit_tests/core/workflow/nodes/base/test_get_node_type_classes_mapping.py b/api/tests/unit_tests/core/workflow/nodes/base/test_get_node_type_classes_mapping.py index b9371a34f4..ef0df55995 100644 --- a/api/tests/unit_tests/core/workflow/nodes/base/test_get_node_type_classes_mapping.py +++ b/api/tests/unit_tests/core/workflow/nodes/base/test_get_node_type_classes_mapping.py @@ -1,6 +1,7 @@ import types from collections.abc import Mapping +from core.workflow.node_factory import get_node_type_classes_mapping from graphon.entities.base_node_data import BaseNodeData from graphon.enums import BuiltinNodeTypes, NodeType from graphon.nodes.base.node import Node @@ -13,8 +14,6 @@ from graphon.nodes.variable_assigner.v2.node import ( VariableAssignerNode as VariableAssignerV2, ) -from core.workflow.node_factory import get_node_type_classes_mapping - def test_variable_assigner_latest_prefers_highest_numeric_version(): # Act diff --git a/api/tests/unit_tests/core/workflow/nodes/code/code_node_spec.py b/api/tests/unit_tests/core/workflow/nodes/code/code_node_spec.py index d155124c50..ce0c9b79c6 100644 --- a/api/tests/unit_tests/core/workflow/nodes/code/code_node_spec.py +++ b/api/tests/unit_tests/core/workflow/nodes/code/code_node_spec.py @@ -1,3 +1,4 @@ +from configs import dify_config from graphon.nodes.code.code_node import CodeNode from graphon.nodes.code.entities import CodeLanguage, CodeNodeData from graphon.nodes.code.exc import ( @@ -8,8 +9,6 @@ from graphon.nodes.code.exc import ( from graphon.nodes.code.limits import CodeNodeLimits from graphon.variables.types import SegmentType -from configs import dify_config - CodeNode._limits = CodeNodeLimits( max_string_length=dify_config.CODE_MAX_STRING_LENGTH, max_number=dify_config.CODE_MAX_NUMBER, diff --git a/api/tests/unit_tests/core/workflow/nodes/datasource/test_datasource_node.py b/api/tests/unit_tests/core/workflow/nodes/datasource/test_datasource_node.py index fb03ae9998..9cceadde49 100644 --- a/api/tests/unit_tests/core/workflow/nodes/datasource/test_datasource_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/datasource/test_datasource_node.py @@ -1,8 +1,7 @@ -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.node_events import NodeRunResult, StreamChunkEvent, StreamCompletedEvent - from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY from core.workflow.nodes.datasource.datasource_node import DatasourceNode +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.node_events import NodeRunResult, StreamChunkEvent, StreamCompletedEvent class _VarSeg: diff --git a/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_executor.py b/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_executor.py index a5026b40cf..be7cc073db 100644 --- a/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_executor.py +++ b/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_executor.py @@ -1,4 +1,8 @@ import pytest + +from configs import dify_config +from core.helper.ssrf_proxy import ssrf_proxy +from core.workflow.system_variables import default_system_variables from graphon.file.file_manager import file_manager from graphon.nodes.http_request import ( BodyData, @@ -12,10 +16,6 @@ from graphon.nodes.http_request.exc import AuthorizationConfigError from graphon.nodes.http_request.executor import Executor from graphon.runtime import VariablePool -from configs import dify_config 
-from core.helper.ssrf_proxy import ssrf_proxy -from core.workflow.system_variables import default_system_variables - HTTP_REQUEST_CONFIG = HttpRequestNodeConfig( max_connect_timeout=dify_config.HTTP_REQUEST_MAX_CONNECT_TIMEOUT, max_read_timeout=dify_config.HTTP_REQUEST_MAX_READ_TIMEOUT, diff --git a/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_node.py b/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_node.py index 4705b3f76e..a3cadc0681 100644 --- a/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/http_request/test_http_request_node.py @@ -3,17 +3,17 @@ from typing import Any import httpx import pytest -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.file.file_manager import file_manager -from graphon.nodes.http_request import HTTP_REQUEST_CONFIG_FILTER_KEY, HttpRequestNode, HttpRequestNodeConfig -from graphon.nodes.http_request.entities import HttpRequestNodeTimeout, Response -from graphon.runtime import GraphRuntimeState, VariablePool from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.helper.ssrf_proxy import ssrf_proxy from core.tools.tool_file_manager import ToolFileManager from core.workflow.node_runtime import DifyFileReferenceFactory from core.workflow.system_variables import build_system_variables +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.file.file_manager import file_manager +from graphon.nodes.http_request import HTTP_REQUEST_CONFIG_FILTER_KEY, HttpRequestNode, HttpRequestNodeConfig +from graphon.nodes.http_request.entities import HttpRequestNodeTimeout, Response +from graphon.runtime import GraphRuntimeState, VariablePool from tests.workflow_test_utils import build_test_graph_init_params HTTP_REQUEST_CONFIG = HttpRequestNodeConfig( diff --git a/api/tests/unit_tests/core/workflow/nodes/human_input/test_email_delivery_config.py b/api/tests/unit_tests/core/workflow/nodes/human_input/test_email_delivery_config.py index d16e1233ac..1d6a4da7c4 100644 --- a/api/tests/unit_tests/core/workflow/nodes/human_input/test_email_delivery_config.py +++ b/api/tests/unit_tests/core/workflow/nodes/human_input/test_email_delivery_config.py @@ -1,6 +1,5 @@ -from graphon.runtime import VariablePool - from core.workflow.human_input_compat import EmailDeliveryConfig, EmailRecipients +from graphon.runtime import VariablePool def test_render_body_template_replaces_variable_values(): diff --git a/api/tests/unit_tests/core/workflow/nodes/human_input/test_entities.py b/api/tests/unit_tests/core/workflow/nodes/human_input/test_entities.py index a2cdbbf132..c0e21d0bf7 100644 --- a/api/tests/unit_tests/core/workflow/nodes/human_input/test_entities.py +++ b/api/tests/unit_tests/core/workflow/nodes/human_input/test_entities.py @@ -10,24 +10,6 @@ from typing import Any from unittest.mock import MagicMock import pytest -from graphon.entities import GraphInitParams -from graphon.node_events import PauseRequestedEvent -from graphon.node_events.node import StreamCompletedEvent -from graphon.nodes.human_input.entities import ( - FormInput, - FormInputDefault, - HumanInputNodeData, - UserAction, -) -from graphon.nodes.human_input.enums import ( - ButtonStyle, - FormInputType, - HumanInputFormStatus, - PlaceholderType, - TimeoutUnit, -) -from graphon.nodes.human_input.human_input_node import HumanInputNode -from graphon.runtime import GraphRuntimeState, VariablePool from pydantic import ValidationError from 
core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY @@ -50,6 +32,24 @@ from core.workflow.human_input_compat import ( ) from core.workflow.node_runtime import DifyHumanInputNodeRuntime from core.workflow.system_variables import build_system_variables +from graphon.entities import GraphInitParams +from graphon.node_events import PauseRequestedEvent +from graphon.node_events.node import StreamCompletedEvent +from graphon.nodes.human_input.entities import ( + FormInput, + FormInputDefault, + HumanInputNodeData, + UserAction, +) +from graphon.nodes.human_input.enums import ( + ButtonStyle, + FormInputType, + HumanInputFormStatus, + PlaceholderType, + TimeoutUnit, +) +from graphon.nodes.human_input.human_input_node import HumanInputNode +from graphon.runtime import GraphRuntimeState, VariablePool from libs.datetime_utils import naive_utc_now diff --git a/api/tests/unit_tests/core/workflow/nodes/human_input/test_human_input_form_filled_event.py b/api/tests/unit_tests/core/workflow/nodes/human_input/test_human_input_form_filled_event.py index 52802c7ce1..bc98028d5b 100644 --- a/api/tests/unit_tests/core/workflow/nodes/human_input/test_human_input_form_filled_event.py +++ b/api/tests/unit_tests/core/workflow/nodes/human_input/test_human_input_form_filled_event.py @@ -1,6 +1,9 @@ import datetime from types import SimpleNamespace +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom +from core.workflow.node_runtime import DifyHumanInputNodeRuntime +from core.workflow.system_variables import default_system_variables from graphon.entities import GraphInitParams from graphon.enums import BuiltinNodeTypes from graphon.graph_events import ( @@ -11,10 +14,6 @@ from graphon.graph_events import ( from graphon.nodes.human_input.enums import HumanInputFormStatus from graphon.nodes.human_input.human_input_node import HumanInputNode from graphon.runtime import GraphRuntimeState, VariablePool - -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom -from core.workflow.node_runtime import DifyHumanInputNodeRuntime -from core.workflow.system_variables import default_system_variables from libs.datetime_utils import naive_utc_now diff --git a/api/tests/unit_tests/core/workflow/nodes/iteration/test_iteration_child_engine_errors.py b/api/tests/unit_tests/core/workflow/nodes/iteration/test_iteration_child_engine_errors.py index bbfe350f7e..82cc734274 100644 --- a/api/tests/unit_tests/core/workflow/nodes/iteration/test_iteration_child_engine_errors.py +++ b/api/tests/unit_tests/core/workflow/nodes/iteration/test_iteration_child_engine_errors.py @@ -2,6 +2,8 @@ from collections.abc import Mapping from typing import Any import pytest + +from core.workflow.system_variables import default_system_variables from graphon.entities import GraphInitParams from graphon.nodes.iteration.exc import IterationGraphNotFoundError from graphon.nodes.iteration.iteration_node import IterationNode @@ -11,8 +13,6 @@ from graphon.runtime import ( GraphRuntimeState, VariablePool, ) - -from core.workflow.system_variables import default_system_variables from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/knowledge_index/test_knowledge_index_node.py b/api/tests/unit_tests/core/workflow/nodes/knowledge_index/test_knowledge_index_node.py index f8802138b5..a6fca1bfb4 100644 --- a/api/tests/unit_tests/core/workflow/nodes/knowledge_index/test_knowledge_index_node.py +++ 
b/api/tests/unit_tests/core/workflow/nodes/knowledge_index/test_knowledge_index_node.py @@ -3,9 +3,6 @@ import uuid from unittest.mock import Mock import pytest -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.runtime import GraphRuntimeState, VariablePool -from graphon.variables.segments import StringSegment from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.rag.index_processor.constant.index_type import IndexTechniqueType @@ -19,6 +16,9 @@ from core.workflow.nodes.knowledge_index.protocols import ( SummaryIndexServiceProtocol, ) from core.workflow.system_variables import SystemVariableKey, build_system_variables +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variables.segments import StringSegment from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/knowledge_retrieval/test_knowledge_retrieval_node.py b/api/tests/unit_tests/core/workflow/nodes/knowledge_retrieval/test_knowledge_retrieval_node.py index ab64be59ad..45e8ae7d20 100644 --- a/api/tests/unit_tests/core/workflow/nodes/knowledge_retrieval/test_knowledge_retrieval_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/knowledge_retrieval/test_knowledge_retrieval_node.py @@ -3,10 +3,6 @@ import uuid from unittest.mock import Mock import pytest -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.model_runtime.entities.llm_entities import LLMUsage -from graphon.runtime import GraphRuntimeState, VariablePool -from graphon.variables import StringSegment from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.workflow.nodes.knowledge_retrieval.entities import ( @@ -21,6 +17,10 @@ from core.workflow.nodes.knowledge_retrieval.exc import RateLimitExceededError from core.workflow.nodes.knowledge_retrieval.knowledge_retrieval_node import KnowledgeRetrievalNode from core.workflow.nodes.knowledge_retrieval.retrieval import RAGRetrievalProtocol, Source from core.workflow.system_variables import build_system_variables +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.model_runtime.entities.llm_entities import LLMUsage +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variables import StringSegment from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/list_operator/node_spec.py b/api/tests/unit_tests/core/workflow/nodes/list_operator/node_spec.py index fdf1706765..eca34f05be 100644 --- a/api/tests/unit_tests/core/workflow/nodes/list_operator/node_spec.py +++ b/api/tests/unit_tests/core/workflow/nodes/list_operator/node_spec.py @@ -1,14 +1,14 @@ from unittest.mock import MagicMock import pytest + +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY from graphon.entities import GraphInitParams from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus from graphon.nodes.list_operator.node import ListOperatorNode from graphon.runtime import GraphRuntimeState from graphon.variables import ArrayNumberSegment, ArrayStringSegment -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY - class TestListOperatorNode: """Comprehensive tests for ListOperatorNode.""" diff --git a/api/tests/unit_tests/core/workflow/nodes/llm/test_llm_utils.py b/api/tests/unit_tests/core/workflow/nodes/llm/test_llm_utils.py index c784f805c0..4186bbdc93 100644 --- 
a/api/tests/unit_tests/core/workflow/nodes/llm/test_llm_utils.py +++ b/api/tests/unit_tests/core/workflow/nodes/llm/test_llm_utils.py @@ -1,6 +1,8 @@ from unittest import mock import pytest + +from core.model_manager import ModelInstance from graphon.file import File, FileTransferMethod, FileType from graphon.model_runtime.entities import ( ImagePromptMessageContent, @@ -33,8 +35,6 @@ from graphon.nodes.llm.exc import ( from graphon.runtime import VariablePool from graphon.variables import ArrayAnySegment, ArrayFileSegment, NoneSegment -from core.model_manager import ModelInstance - def _build_model_schema( *, diff --git a/api/tests/unit_tests/core/workflow/nodes/llm/test_node.py b/api/tests/unit_tests/core/workflow/nodes/llm/test_node.py index 7841bf05ad..b1f81b6c48 100644 --- a/api/tests/unit_tests/core/workflow/nodes/llm/test_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/llm/test_node.py @@ -5,6 +5,19 @@ from collections.abc import Sequence from unittest import mock import pytest + +from core.app.entities.app_invoke_entities import DifyRunContext, InvokeFrom, ModelConfigWithCredentialsEntity, UserFrom +from core.app.llm.model_access import ( + DifyCredentialsProvider, + DifyModelFactory, + build_dify_model_access, + fetch_model_config, +) +from core.entities.provider_configuration import ProviderConfiguration, ProviderModelBundle +from core.entities.provider_entities import CustomConfiguration, SystemConfiguration +from core.plugin.impl.model_runtime_factory import create_plugin_model_runtime +from core.prompt.entities.advanced_prompt_entities import MemoryConfig +from core.workflow.system_variables import default_system_variables from graphon.entities import GraphInitParams from graphon.file import File, FileTransferMethod, FileType from graphon.model_runtime.entities.common_entities import I18nObject @@ -67,19 +80,6 @@ from graphon.nodes.llm.runtime_protocols import PromptMessageSerializerProtocol from graphon.runtime import GraphRuntimeState, VariablePool from graphon.template_rendering import TemplateRenderError from graphon.variables import ArrayAnySegment, ArrayFileSegment, NoneSegment - -from core.app.entities.app_invoke_entities import DifyRunContext, InvokeFrom, ModelConfigWithCredentialsEntity, UserFrom -from core.app.llm.model_access import ( - DifyCredentialsProvider, - DifyModelFactory, - build_dify_model_access, - fetch_model_config, -) -from core.entities.provider_configuration import ProviderConfiguration, ProviderModelBundle -from core.entities.provider_entities import CustomConfiguration, SystemConfiguration -from core.plugin.impl.model_runtime_factory import create_plugin_model_runtime -from core.prompt.entities.advanced_prompt_entities import MemoryConfig -from core.workflow.system_variables import default_system_variables from models.provider import ProviderType from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/parameter_extractor/test_parameter_extractor_node.py b/api/tests/unit_tests/core/workflow/nodes/parameter_extractor/test_parameter_extractor_node.py index 1c362a0a03..8f8ec49f14 100644 --- a/api/tests/unit_tests/core/workflow/nodes/parameter_extractor/test_parameter_extractor_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/parameter_extractor/test_parameter_extractor_node.py @@ -6,6 +6,8 @@ from dataclasses import dataclass from typing import Any import pytest + +from factories.variable_factory import build_segment_with_type from graphon.model_runtime.entities import 
LLMMode from graphon.nodes.llm import ModelConfig, VisionConfig from graphon.nodes.parameter_extractor.entities import ParameterConfig, ParameterExtractorNodeData @@ -18,8 +20,6 @@ from graphon.nodes.parameter_extractor.exc import ( from graphon.nodes.parameter_extractor.parameter_extractor_node import ParameterExtractorNode from graphon.variables.types import SegmentType -from factories.variable_factory import build_segment_with_type - @dataclass class ValidTestCase: diff --git a/api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py b/api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py index d86e0efe02..bc44ececd8 100644 --- a/api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py +++ b/api/tests/unit_tests/core/workflow/nodes/template_transform/template_transform_node_spec.py @@ -1,6 +1,8 @@ from unittest.mock import MagicMock import pytest + +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from graphon.enums import BuiltinNodeTypes, ErrorStrategy, WorkflowNodeExecutionStatus from graphon.graph import Graph from graphon.nodes.base.entities import VariableSelector @@ -8,8 +10,6 @@ from graphon.nodes.template_transform.entities import TemplateTransformNodeData from graphon.nodes.template_transform.template_transform_node import TemplateTransformNode from graphon.runtime import GraphRuntimeState from graphon.template_rendering import TemplateRenderError - -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/template_transform/test_template_transform_node.py b/api/tests/unit_tests/core/workflow/nodes/template_transform/test_template_transform_node.py index bd22a8e318..636237e56e 100644 --- a/api/tests/unit_tests/core/workflow/nodes/template_transform/test_template_transform_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/template_transform/test_template_transform_node.py @@ -1,14 +1,14 @@ from unittest.mock import MagicMock import pytest + +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from graphon.nodes.base.entities import VariableSelector from graphon.nodes.template_transform.template_transform_node import ( DEFAULT_TEMPLATE_TRANSFORM_MAX_OUTPUT_LENGTH, TemplateTransformNode, ) from graphon.runtime import GraphRuntimeState - -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from tests.workflow_test_utils import build_test_graph_init_params from .template_transform_node_spec import TestTemplateTransformNode # noqa: F401 diff --git a/api/tests/unit_tests/core/workflow/nodes/test_base_node.py b/api/tests/unit_tests/core/workflow/nodes/test_base_node.py index e11ebf6eb8..0522dd9d14 100644 --- a/api/tests/unit_tests/core/workflow/nodes/test_base_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/test_base_node.py @@ -1,16 +1,16 @@ from collections.abc import Mapping import pytest + +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from core.workflow.node_runtime import resolve_dify_run_context +from core.workflow.system_variables import build_system_variables from graphon.entities import GraphInitParams from graphon.entities.base_node_data import BaseNodeData from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter from graphon.enums import BuiltinNodeTypes from graphon.nodes.base.node import Node from graphon.runtime 
import GraphRuntimeState, VariablePool - -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom -from core.workflow.node_runtime import resolve_dify_run_context -from core.workflow.system_variables import build_system_variables from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/test_document_extractor_node.py b/api/tests/unit_tests/core/workflow/nodes/test_document_extractor_node.py index 555ff0c945..87ec2d5bce 100644 --- a/api/tests/unit_tests/core/workflow/nodes/test_document_extractor_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/test_document_extractor_node.py @@ -4,6 +4,8 @@ from unittest.mock import Mock, patch import pandas as pd import pytest from docx.oxml.text.paragraph import CT_P + +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from graphon.entities import GraphInitParams from graphon.enums import BuiltinNodeTypes, WorkflowNodeExecutionStatus from graphon.file import File, FileTransferMethod @@ -19,8 +21,6 @@ from graphon.nodes.document_extractor.node import ( from graphon.variables import ArrayFileSegment from graphon.variables.segments import ArrayStringSegment from graphon.variables.variables import StringVariable - -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/test_if_else.py b/api/tests/unit_tests/core/workflow/nodes/test_if_else.py index 1b14f0ab13..782750e02e 100644 --- a/api/tests/unit_tests/core/workflow/nodes/test_if_else.py +++ b/api/tests/unit_tests/core/workflow/nodes/test_if_else.py @@ -3,6 +3,11 @@ import uuid from unittest.mock import MagicMock, Mock import pytest + +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom +from core.workflow.node_factory import DifyNodeFactory +from core.workflow.system_variables import build_system_variables +from extensions.ext_database import db from graphon.enums import WorkflowNodeExecutionStatus from graphon.file import File, FileTransferMethod, FileType from graphon.graph import Graph @@ -11,11 +16,6 @@ from graphon.nodes.if_else.if_else_node import IfElseNode from graphon.runtime import GraphRuntimeState, VariablePool from graphon.utils.condition.entities import Condition, SubCondition, SubVariableCondition from graphon.variables import ArrayFileSegment - -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom -from core.workflow.node_factory import DifyNodeFactory -from core.workflow.system_variables import build_system_variables -from extensions.ext_database import db from tests.workflow_test_utils import build_test_graph_init_params diff --git a/api/tests/unit_tests/core/workflow/nodes/test_list_operator.py b/api/tests/unit_tests/core/workflow/nodes/test_list_operator.py index d28c3e01e5..b217e4e8e7 100644 --- a/api/tests/unit_tests/core/workflow/nodes/test_list_operator.py +++ b/api/tests/unit_tests/core/workflow/nodes/test_list_operator.py @@ -1,6 +1,8 @@ from unittest.mock import MagicMock import pytest + +from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom from graphon.enums import WorkflowNodeExecutionStatus from graphon.file import File, FileTransferMethod, FileType from graphon.nodes.list_operator.entities import ( @@ -16,8 +18,6 @@ from graphon.nodes.list_operator.exc import InvalidKeyError from graphon.nodes.list_operator.node 
import ListOperatorNode, _get_file_extract_string_func from graphon.variables import ArrayFileSegment -from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom - @pytest.fixture def list_operator_node(): diff --git a/api/tests/unit_tests/core/workflow/nodes/test_start_node_json_object.py b/api/tests/unit_tests/core/workflow/nodes/test_start_node_json_object.py index 833c303052..543f9878de 100644 --- a/api/tests/unit_tests/core/workflow/nodes/test_start_node_json_object.py +++ b/api/tests/unit_tests/core/workflow/nodes/test_start_node_json_object.py @@ -2,16 +2,16 @@ import json import time import pytest +from pydantic import ValidationError as PydanticValidationError + +from core.workflow.system_variables import build_system_variables +from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, ENVIRONMENT_VARIABLE_NODE_ID from graphon.nodes.start.entities import StartNodeData from graphon.nodes.start.start_node import StartNode from graphon.runtime import GraphRuntimeState from graphon.variables import build_segment, segment_to_variable from graphon.variables.input_entities import VariableEntity, VariableEntityType from graphon.variables.variables import Variable -from pydantic import ValidationError as PydanticValidationError - -from core.workflow.system_variables import build_system_variables -from core.workflow.variable_prefixes import CONVERSATION_VARIABLE_NODE_ID, ENVIRONMENT_VARIABLE_NODE_ID from tests.workflow_test_utils import build_test_graph_init_params, build_test_variable_pool diff --git a/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node.py b/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node.py index 1587014802..c806181340 100644 --- a/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node.py @@ -8,14 +8,14 @@ from typing import TYPE_CHECKING, Any from unittest.mock import MagicMock import pytest + +from core.workflow.system_variables import build_system_variables from graphon.file import File, FileTransferMethod, FileType from graphon.model_runtime.entities.llm_entities import LLMUsage from graphon.node_events import StreamChunkEvent, StreamCompletedEvent from graphon.nodes.tool_runtime_entities import ToolRuntimeHandle, ToolRuntimeMessage from graphon.runtime import GraphRuntimeState, VariablePool from graphon.variables.segments import ArrayFileSegment - -from core.workflow.system_variables import build_system_variables from tests.workflow_test_utils import build_test_graph_init_params if TYPE_CHECKING: # pragma: no cover - imported for type checking only diff --git a/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node_runtime.py b/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node_runtime.py index c4dfc5a179..438af211f3 100644 --- a/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node_runtime.py +++ b/api/tests/unit_tests/core/workflow/nodes/tool/test_tool_node_runtime.py @@ -6,11 +6,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch import pytest -from graphon.model_runtime.entities.llm_entities import LLMUsage -from graphon.nodes.tool.entities import ToolNodeData, ToolProviderType -from graphon.nodes.tool.exc import ToolRuntimeInvocationError -from graphon.nodes.tool_runtime_entities import ToolRuntimeHandle, ToolRuntimeMessage -from graphon.runtime import VariablePool from core.callback_handler.workflow_tool_callback_handler import DifyWorkflowCallbackHandler from 
core.plugin.impl.exc import PluginDaemonClientSideError, PluginInvokeError @@ -22,6 +17,11 @@ from core.tools.tool_manager import ToolManager from core.tools.utils.message_transformer import ToolFileMessageTransformer from core.workflow.node_runtime import DifyToolNodeRuntime from core.workflow.system_variables import build_system_variables +from graphon.model_runtime.entities.llm_entities import LLMUsage +from graphon.nodes.tool.entities import ToolNodeData, ToolProviderType +from graphon.nodes.tool.exc import ToolRuntimeInvocationError +from graphon.nodes.tool_runtime_entities import ToolRuntimeHandle, ToolRuntimeMessage +from graphon.runtime import VariablePool from tests.workflow_test_utils import build_test_graph_init_params, build_test_variable_pool diff --git a/api/tests/unit_tests/core/workflow/nodes/trigger_plugin/test_trigger_event_node.py b/api/tests/unit_tests/core/workflow/nodes/trigger_plugin/test_trigger_event_node.py index 952e798430..c8ddc53284 100644 --- a/api/tests/unit_tests/core/workflow/nodes/trigger_plugin/test_trigger_event_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/trigger_plugin/test_trigger_event_node.py @@ -1,13 +1,12 @@ from collections.abc import Mapping -from graphon.entities import GraphInitParams -from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter -from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus -from graphon.runtime import GraphRuntimeState - from core.trigger.constants import TRIGGER_PLUGIN_NODE_TYPE from core.workflow.nodes.trigger_plugin.trigger_event_node import TriggerEventNode from core.workflow.system_variables import build_system_variables +from graphon.entities import GraphInitParams +from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter +from graphon.enums import WorkflowNodeExecutionMetadataKey, WorkflowNodeExecutionStatus +from graphon.runtime import GraphRuntimeState from tests.workflow_test_utils import build_test_graph_init_params, build_test_variable_pool diff --git a/api/tests/unit_tests/core/workflow/nodes/webhook/test_exceptions.py b/api/tests/unit_tests/core/workflow/nodes/webhook/test_exceptions.py index f1132af02b..617554ee17 100644 --- a/api/tests/unit_tests/core/workflow/nodes/webhook/test_exceptions.py +++ b/api/tests/unit_tests/core/workflow/nodes/webhook/test_exceptions.py @@ -1,5 +1,4 @@ import pytest -from graphon.entities.exc import BaseNodeError from core.workflow.nodes.trigger_webhook.exc import ( WebhookConfigError, @@ -7,6 +6,7 @@ from core.workflow.nodes.trigger_webhook.exc import ( WebhookNotFoundError, WebhookTimeoutError, ) +from graphon.entities.exc import BaseNodeError def test_webhook_node_error_inheritance(): diff --git a/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_file_conversion.py b/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_file_conversion.py index cccd3fb676..1bbc12b23f 100644 --- a/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_file_conversion.py +++ b/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_file_conversion.py @@ -6,12 +6,9 @@ to FileVariable objects, fixing the "Invalid variable type: ObjectVariable" erro when passing files to downstream LLM nodes. 
""" +from typing import Any from unittest.mock import Mock, patch -from graphon.entities import GraphInitParams -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom from core.workflow.nodes.trigger_webhook.entities import ( ContentType, @@ -21,6 +18,9 @@ from core.workflow.nodes.trigger_webhook.entities import ( ) from core.workflow.nodes.trigger_webhook.node import TriggerWebhookNode from core.workflow.system_variables import default_system_variables +from graphon.entities import GraphInitParams +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.runtime import GraphRuntimeState, VariablePool from tests.workflow_test_utils import build_test_variable_pool @@ -97,7 +97,7 @@ def create_test_file_dict( } -def build_webhook_variable_pool(inputs: dict) -> VariablePool: +def build_webhook_variable_pool(inputs: dict[str, Any]) -> VariablePool: return build_test_variable_pool( variables=default_system_variables(), node_id="webhook-node-1", @@ -105,7 +105,7 @@ def build_webhook_variable_pool(inputs: dict) -> VariablePool: ) -def expected_factory_mapping(file_dict: dict) -> dict: +def expected_factory_mapping(file_dict: dict[str, Any]) -> dict[str, Any]: return {**file_dict, "upload_file_id": file_dict["related_id"]} diff --git a/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_node.py b/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_node.py index 34c66a4f9f..427afa96ec 100644 --- a/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_node.py +++ b/api/tests/unit_tests/core/workflow/nodes/webhook/test_webhook_node.py @@ -1,11 +1,7 @@ +from typing import Any from unittest.mock import patch import pytest -from graphon.entities import GraphInitParams -from graphon.enums import WorkflowNodeExecutionStatus -from graphon.file import File, FileTransferMethod, FileType -from graphon.runtime import GraphRuntimeState, VariablePool -from graphon.variables import FileVariable, StringVariable from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, InvokeFrom, UserFrom from core.trigger.constants import TRIGGER_WEBHOOK_NODE_TYPE @@ -18,6 +14,11 @@ from core.workflow.nodes.trigger_webhook.entities import ( ) from core.workflow.nodes.trigger_webhook.node import TriggerWebhookNode from core.workflow.system_variables import default_system_variables +from graphon.entities import GraphInitParams +from graphon.enums import WorkflowNodeExecutionStatus +from graphon.file import File, FileTransferMethod, FileType +from graphon.runtime import GraphRuntimeState, VariablePool +from graphon.variables import FileVariable, StringVariable from tests.workflow_test_utils import build_test_variable_pool @@ -62,7 +63,7 @@ def create_webhook_node(webhook_data: WebhookData, variable_pool: VariablePool) return node -def build_webhook_variable_pool(inputs: dict) -> VariablePool: +def build_webhook_variable_pool(inputs: dict[str, Any]) -> VariablePool: return build_test_variable_pool( variables=default_system_variables(), node_id="1", diff --git a/api/tests/unit_tests/core/workflow/test_human_input_compat.py b/api/tests/unit_tests/core/workflow/test_human_input_compat.py index cd41c43e4a..0623800b30 100644 --- a/api/tests/unit_tests/core/workflow/test_human_input_compat.py +++ b/api/tests/unit_tests/core/workflow/test_human_input_compat.py @@ -1,6 +1,5 @@ from types import SimpleNamespace -from graphon.enums import 
BuiltinNodeTypes from pydantic import BaseModel from core.workflow.human_input_compat import ( @@ -16,6 +15,7 @@ from core.workflow.human_input_compat import ( normalize_node_data_for_graph, parse_human_input_delivery_methods, ) +from graphon.enums import BuiltinNodeTypes def test_email_delivery_config_helpers_render_and_sanitize_text() -> None: diff --git a/api/tests/unit_tests/core/workflow/test_node_factory.py b/api/tests/unit_tests/core/workflow/test_node_factory.py index dfe1a47e37..424c50eb26 100644 --- a/api/tests/unit_tests/core/workflow/test_node_factory.py +++ b/api/tests/unit_tests/core/workflow/test_node_factory.py @@ -2,15 +2,15 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch, sentinel import pytest -from graphon.entities.base_node_data import BaseNodeData -from graphon.enums import BuiltinNodeTypes, NodeType -from graphon.nodes.code.entities import CodeLanguage -from graphon.variables.segments import StringSegment from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext, InvokeFrom, UserFrom from core.workflow import node_factory from core.workflow import template_rendering as workflow_template_rendering from core.workflow.nodes.knowledge_index import KNOWLEDGE_INDEX_NODE_TYPE +from graphon.entities.base_node_data import BaseNodeData +from graphon.enums import BuiltinNodeTypes, NodeType +from graphon.nodes.code.entities import CodeLanguage +from graphon.variables.segments import StringSegment def _assert_typed_node_config(config, *, node_id: str, node_type: NodeType, version: str = "1") -> None: diff --git a/api/tests/unit_tests/core/workflow/test_node_runtime.py b/api/tests/unit_tests/core/workflow/test_node_runtime.py index 4f9c1dad59..71a2afb28a 100644 --- a/api/tests/unit_tests/core/workflow/test_node_runtime.py +++ b/api/tests/unit_tests/core/workflow/test_node_runtime.py @@ -2,10 +2,6 @@ from types import SimpleNamespace from unittest.mock import MagicMock, Mock, sentinel import pytest -from graphon.file import FileTransferMethod, FileType -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType -from graphon.nodes.human_input.entities import HumanInputNodeData from core.app.entities.app_invoke_entities import DIFY_RUN_CONTEXT_KEY, DifyRunContext, InvokeFrom, UserFrom from core.llm_generator.output_parser.errors import OutputParserError @@ -30,6 +26,10 @@ from core.workflow.node_runtime import ( build_dify_llm_file_saver, resolve_dify_run_context, ) +from graphon.file import FileTransferMethod, FileType +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import AIModelEntity, FetchFrom, ModelType +from graphon.nodes.human_input.entities import HumanInputNodeData from tests.workflow_test_utils import build_test_run_context diff --git a/api/tests/unit_tests/core/workflow/test_system_variable.py b/api/tests/unit_tests/core/workflow/test_system_variable.py index 05ea3dc311..bdeab1eda8 100644 --- a/api/tests/unit_tests/core/workflow/test_system_variable.py +++ b/api/tests/unit_tests/core/workflow/test_system_variable.py @@ -1,14 +1,13 @@ from types import SimpleNamespace -from graphon.file import File, FileTransferMethod, FileType -from graphon.nodes import BuiltinNodeTypes - from core.workflow.system_variables import ( build_system_variables, default_system_variables, get_node_creation_preload_selectors, system_variables_to_mapping, ) 
+from graphon.file import File, FileTransferMethod, FileType +from graphon.nodes import BuiltinNodeTypes def test_build_system_variables_normalizes_workflow_execution_id(): diff --git a/api/tests/unit_tests/core/workflow/test_variable_pool.py b/api/tests/unit_tests/core/workflow/test_variable_pool.py index e7b2b2914a..dddd6eb00c 100644 --- a/api/tests/unit_tests/core/workflow/test_variable_pool.py +++ b/api/tests/unit_tests/core/workflow/test_variable_pool.py @@ -2,6 +2,15 @@ import uuid from collections import defaultdict import pytest + +from core.workflow.system_variables import build_system_variables, system_variables_to_mapping +from core.workflow.variable_pool_initializer import add_variables_to_pool +from core.workflow.variable_prefixes import ( + CONVERSATION_VARIABLE_NODE_ID, + ENVIRONMENT_VARIABLE_NODE_ID, + SYSTEM_VARIABLE_NODE_ID, +) +from factories.variable_factory import build_segment, segment_to_variable from graphon.file import File, FileTransferMethod, FileType from graphon.runtime import VariablePool from graphon.variables import FileSegment, StringSegment @@ -27,15 +36,6 @@ from graphon.variables.variables import ( Variable, ) -from core.workflow.system_variables import build_system_variables, system_variables_to_mapping -from core.workflow.variable_pool_initializer import add_variables_to_pool -from core.workflow.variable_prefixes import ( - CONVERSATION_VARIABLE_NODE_ID, - ENVIRONMENT_VARIABLE_NODE_ID, - SYSTEM_VARIABLE_NODE_ID, -) -from factories.variable_factory import build_segment, segment_to_variable - @pytest.fixture def pool(): diff --git a/api/tests/unit_tests/core/workflow/test_workflow_entry.py b/api/tests/unit_tests/core/workflow/test_workflow_entry.py index d8361d06c4..041c5cc612 100644 --- a/api/tests/unit_tests/core/workflow/test_workflow_entry.py +++ b/api/tests/unit_tests/core/workflow/test_workflow_entry.py @@ -1,12 +1,6 @@ from types import SimpleNamespace import pytest -from graphon.entities.graph_config import NodeConfigDictAdapter -from graphon.file import File, FileTransferMethod, FileType -from graphon.nodes.code.code_node import CodeNode -from graphon.nodes.code.limits import CodeNodeLimits -from graphon.runtime import VariablePool -from graphon.variables.variables import StringVariable from configs import dify_config from core.helper.code_executor.code_executor import CodeLanguage @@ -16,6 +10,12 @@ from core.workflow.variable_prefixes import ( ENVIRONMENT_VARIABLE_NODE_ID, ) from core.workflow.workflow_entry import WorkflowEntry +from graphon.entities.graph_config import NodeConfigDictAdapter +from graphon.file import File, FileTransferMethod, FileType +from graphon.nodes.code.code_node import CodeNode +from graphon.nodes.code.limits import CodeNodeLimits +from graphon.runtime import VariablePool +from graphon.variables.variables import StringVariable @pytest.fixture(autouse=True) diff --git a/api/tests/unit_tests/core/workflow/test_workflow_entry_helpers.py b/api/tests/unit_tests/core/workflow/test_workflow_entry_helpers.py index 6dcaed1143..55800ffc03 100644 --- a/api/tests/unit_tests/core/workflow/test_workflow_entry_helpers.py +++ b/api/tests/unit_tests/core/workflow/test_workflow_entry_helpers.py @@ -4,6 +4,12 @@ from types import SimpleNamespace from unittest.mock import MagicMock, patch, sentinel import pytest + +from core.app.apps.exc import GenerateTaskStoppedError +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom +from core.model_manager import ModelInstance +from core.workflow import workflow_entry +from 
core.workflow.system_variables import default_system_variables from graphon.entities.base_node_data import BaseNodeData from graphon.entities.graph_config import NodeConfigDictAdapter from graphon.enums import NodeType, WorkflowNodeExecutionStatus @@ -17,12 +23,6 @@ from graphon.nodes import BuiltinNodeTypes from graphon.nodes.base.node import Node from graphon.runtime import ChildGraphNotFoundError, VariablePool from graphon.variables.variables import StringVariable - -from core.app.apps.exc import GenerateTaskStoppedError -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom -from core.model_manager import ModelInstance -from core.workflow import workflow_entry -from core.workflow.system_variables import default_system_variables from tests.workflow_test_utils import build_test_graph_init_params, build_test_variable_pool diff --git a/api/tests/unit_tests/core/workflow/test_workflow_entry_redis_channel.py b/api/tests/unit_tests/core/workflow/test_workflow_entry_redis_channel.py index 4b2f98aeff..80dc8927fa 100644 --- a/api/tests/unit_tests/core/workflow/test_workflow_entry_redis_channel.py +++ b/api/tests/unit_tests/core/workflow/test_workflow_entry_redis_channel.py @@ -2,11 +2,10 @@ from unittest.mock import MagicMock, patch -from graphon.graph_engine.command_channels import RedisChannel -from graphon.runtime import GraphRuntimeState, VariablePool - from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.workflow.workflow_entry import WorkflowEntry +from graphon.graph_engine.command_channels import RedisChannel +from graphon.runtime import GraphRuntimeState, VariablePool class TestWorkflowEntryRedisChannel: diff --git a/api/tests/unit_tests/enterprise/telemetry/test_enterprise_trace.py b/api/tests/unit_tests/enterprise/telemetry/test_enterprise_trace.py index bb1f78b80c..1ce9581aa1 100644 --- a/api/tests/unit_tests/enterprise/telemetry/test_enterprise_trace.py +++ b/api/tests/unit_tests/enterprise/telemetry/test_enterprise_trace.py @@ -4,6 +4,7 @@ from __future__ import annotations import json from datetime import UTC, datetime +from typing import Any from unittest.mock import MagicMock, patch import pytest @@ -57,7 +58,7 @@ _T1 = datetime(2024, 1, 10, 12, 0, 5, tzinfo=UTC) def make_workflow_info(**overrides) -> WorkflowTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "workflow_id": "wf-001", "tenant_id": "tenant-abc", "workflow_run_id": "run-001", @@ -86,7 +87,7 @@ def make_workflow_info(**overrides) -> WorkflowTraceInfo: def make_node_info(**overrides) -> WorkflowNodeTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "workflow_id": "wf-001", "workflow_run_id": "run-001", "tenant_id": "tenant-abc", @@ -115,7 +116,7 @@ def make_node_info(**overrides) -> WorkflowNodeTraceInfo: def make_draft_node_info(**overrides) -> DraftNodeExecutionTrace: - defaults: dict = { + defaults: dict[str, Any] = { "workflow_id": "wf-001", "workflow_run_id": "run-draft-001", "tenant_id": "tenant-abc", @@ -136,7 +137,7 @@ def make_draft_node_info(**overrides) -> DraftNodeExecutionTrace: def make_message_info(**overrides) -> MessageTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "message_id": "msg-001", "conversation_model": "gpt-4", "message_tokens": 40, @@ -161,7 +162,7 @@ def make_message_info(**overrides) -> MessageTraceInfo: def make_tool_info(**overrides) -> ToolTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "message_id": "msg-001", "tool_name": "web_search", "tool_inputs": {"query": "test"}, @@ -176,7 +177,7 
@@ def make_tool_info(**overrides) -> ToolTraceInfo: def make_moderation_info(**overrides) -> ModerationTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "message_id": "msg-001", "flagged": False, "action": "pass", @@ -189,7 +190,7 @@ def make_moderation_info(**overrides) -> ModerationTraceInfo: def make_suggested_question_info(**overrides) -> SuggestedQuestionTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "message_id": "msg-001", "total_tokens": 30, "suggested_question": ["Question A?", "Question B?"], @@ -206,7 +207,7 @@ def make_suggested_question_info(**overrides) -> SuggestedQuestionTraceInfo: def make_dataset_retrieval_info(**overrides) -> DatasetRetrievalTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "message_id": "msg-001", "documents": [ { @@ -236,7 +237,7 @@ def make_dataset_retrieval_info(**overrides) -> DatasetRetrievalTraceInfo: def make_generate_name_info(**overrides) -> GenerateNameTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "message_id": "msg-001", "tenant_id": "tenant-abc", "conversation_id": "conv-001", @@ -251,7 +252,7 @@ def make_generate_name_info(**overrides) -> GenerateNameTraceInfo: def make_prompt_generation_info(**overrides) -> PromptGenerationTraceInfo: - defaults: dict = { + defaults: dict[str, Any] = { "tenant_id": "tenant-abc", "user_id": "user-001", "app_id": "app-001", diff --git a/api/tests/unit_tests/extensions/test_celery_ssl.py b/api/tests/unit_tests/extensions/test_celery_ssl.py index 81687ce5f8..366e45d86d 100644 --- a/api/tests/unit_tests/extensions/test_celery_ssl.py +++ b/api/tests/unit_tests/extensions/test_celery_ssl.py @@ -7,6 +7,47 @@ from unittest.mock import MagicMock, patch class TestCelerySSLConfiguration: """Test suite for Celery SSL configuration.""" + def test_get_celery_broker_transport_options_includes_global_keyprefix_for_redis(self): + mock_config = MagicMock() + mock_config.CELERY_USE_SENTINEL = False + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + with patch("extensions.ext_celery.dify_config", mock_config): + from extensions.ext_celery import get_celery_broker_transport_options + + result = get_celery_broker_transport_options() + + assert result["global_keyprefix"] == "enterprise-a:" + + def test_get_celery_broker_transport_options_omits_global_keyprefix_when_prefix_empty(self): + mock_config = MagicMock() + mock_config.CELERY_USE_SENTINEL = False + mock_config.REDIS_KEY_PREFIX = " " + + with patch("extensions.ext_celery.dify_config", mock_config): + from extensions.ext_celery import get_celery_broker_transport_options + + result = get_celery_broker_transport_options() + + assert "global_keyprefix" not in result + + def test_get_celery_broker_transport_options_keeps_sentinel_and_adds_global_keyprefix(self): + mock_config = MagicMock() + mock_config.CELERY_USE_SENTINEL = True + mock_config.CELERY_SENTINEL_MASTER_NAME = "mymaster" + mock_config.CELERY_SENTINEL_SOCKET_TIMEOUT = 0.1 + mock_config.CELERY_SENTINEL_PASSWORD = "secret" + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + with patch("extensions.ext_celery.dify_config", mock_config): + from extensions.ext_celery import get_celery_broker_transport_options + + result = get_celery_broker_transport_options() + + assert result["master_name"] == "mymaster" + assert result["sentinel_kwargs"]["password"] == "secret" + assert result["global_keyprefix"] == "enterprise-a:" + def test_get_celery_ssl_options_when_ssl_disabled(self): """Test SSL options when BROKER_USE_SSL is False.""" from configs import DifyConfig @@ 
-151,3 +192,49 @@ class TestCelerySSLConfiguration: # Check that SSL is also applied to Redis backend assert "redis_backend_use_ssl" in celery_app.conf assert celery_app.conf["redis_backend_use_ssl"] is not None + + def test_celery_init_applies_global_keyprefix_to_broker_and_backend_transport(self): + mock_config = MagicMock() + mock_config.BROKER_USE_SSL = False + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + mock_config.HUMAN_INPUT_TIMEOUT_TASK_INTERVAL = 1 + mock_config.CELERY_BROKER_URL = "redis://localhost:6379/0" + mock_config.CELERY_BACKEND = "redis" + mock_config.CELERY_RESULT_BACKEND = "redis://localhost:6379/0" + mock_config.CELERY_USE_SENTINEL = False + mock_config.LOG_FORMAT = "%(message)s" + mock_config.LOG_TZ = "UTC" + mock_config.LOG_FILE = None + mock_config.CELERY_TASK_ANNOTATIONS = {} + + mock_config.CELERY_BEAT_SCHEDULER_TIME = 1 + mock_config.ENABLE_CLEAN_EMBEDDING_CACHE_TASK = False + mock_config.ENABLE_CLEAN_UNUSED_DATASETS_TASK = False + mock_config.ENABLE_CREATE_TIDB_SERVERLESS_TASK = False + mock_config.ENABLE_UPDATE_TIDB_SERVERLESS_STATUS_TASK = False + mock_config.ENABLE_CLEAN_MESSAGES = False + mock_config.ENABLE_MAIL_CLEAN_DOCUMENT_NOTIFY_TASK = False + mock_config.ENABLE_DATASETS_QUEUE_MONITOR = False + mock_config.ENABLE_HUMAN_INPUT_TIMEOUT_TASK = False + mock_config.ENABLE_CHECK_UPGRADABLE_PLUGIN_TASK = False + mock_config.MARKETPLACE_ENABLED = False + mock_config.WORKFLOW_LOG_CLEANUP_ENABLED = False + mock_config.ENABLE_WORKFLOW_RUN_CLEANUP_TASK = False + mock_config.ENABLE_WORKFLOW_SCHEDULE_POLLER_TASK = False + mock_config.WORKFLOW_SCHEDULE_POLLER_INTERVAL = 1 + mock_config.ENABLE_TRIGGER_PROVIDER_REFRESH_TASK = False + mock_config.TRIGGER_PROVIDER_REFRESH_INTERVAL = 15 + mock_config.ENABLE_API_TOKEN_LAST_USED_UPDATE_TASK = False + mock_config.API_TOKEN_LAST_USED_UPDATE_INTERVAL = 30 + mock_config.ENTERPRISE_ENABLED = False + mock_config.ENTERPRISE_TELEMETRY_ENABLED = False + + with patch("extensions.ext_celery.dify_config", mock_config): + from dify_app import DifyApp + from extensions.ext_celery import init_app + + app = DifyApp(__name__) + celery_app = init_app(app) + + assert celery_app.conf["broker_transport_options"]["global_keyprefix"] == "enterprise-a:" + assert celery_app.conf["result_backend_transport_options"]["global_keyprefix"] == "enterprise-a:" diff --git a/api/tests/unit_tests/extensions/test_pubsub_channel.py b/api/tests/unit_tests/extensions/test_pubsub_channel.py index a5b41a7266..926c406ad4 100644 --- a/api/tests/unit_tests/extensions/test_pubsub_channel.py +++ b/api/tests/unit_tests/extensions/test_pubsub_channel.py @@ -6,6 +6,7 @@ from libs.broadcast_channel.redis.sharded_channel import ShardedRedisBroadcastCh def test_get_pubsub_broadcast_channel_defaults_to_pubsub(monkeypatch): monkeypatch.setattr(dify_config, "PUBSUB_REDIS_CHANNEL_TYPE", "pubsub") + monkeypatch.setattr(ext_redis, "_pubsub_redis_client", object()) channel = ext_redis.get_pubsub_broadcast_channel() @@ -14,6 +15,7 @@ def test_get_pubsub_broadcast_channel_defaults_to_pubsub(monkeypatch): def test_get_pubsub_broadcast_channel_sharded(monkeypatch): monkeypatch.setattr(dify_config, "PUBSUB_REDIS_CHANNEL_TYPE", "sharded") + monkeypatch.setattr(ext_redis, "_pubsub_redis_client", object()) channel = ext_redis.get_pubsub_broadcast_channel() diff --git a/api/tests/unit_tests/extensions/test_redis.py b/api/tests/unit_tests/extensions/test_redis.py index 5e9be4ab9b..21248439bf 100644 --- a/api/tests/unit_tests/extensions/test_redis.py +++ 
b/api/tests/unit_tests/extensions/test_redis.py @@ -1,12 +1,15 @@ -from unittest.mock import patch +from unittest.mock import MagicMock, patch from redis import RedisError from redis.retry import Retry from extensions.ext_redis import ( + RedisClientWrapper, _get_base_redis_params, _get_cluster_connection_health_params, _get_connection_health_params, + _normalize_redis_key_prefix, + _serialize_redis_name, redis_fallback, ) @@ -123,3 +126,99 @@ class TestRedisFallback: assert test_func.__name__ == "test_func" assert test_func.__doc__ == "Test function docstring" + + +class TestRedisKeyPrefixHelpers: + def test_normalize_redis_key_prefix_trims_whitespace(self): + assert _normalize_redis_key_prefix(" enterprise-a ") == "enterprise-a" + + def test_normalize_redis_key_prefix_treats_whitespace_only_as_empty(self): + assert _normalize_redis_key_prefix(" ") == "" + + def test_serialize_redis_name_returns_original_when_prefix_empty(self): + assert _serialize_redis_name("model_lb_index:test", "") == "model_lb_index:test" + + def test_serialize_redis_name_adds_single_colon_separator(self): + assert _serialize_redis_name("model_lb_index:test", "enterprise-a") == "enterprise-a:model_lb_index:test" + + +class TestRedisClientWrapperKeyPrefix: + def test_wrapper_get_prefixes_string_keys(self): + mock_client = MagicMock() + wrapper = RedisClientWrapper() + wrapper.initialize(mock_client) + + with patch("extensions.ext_redis.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + wrapper.get("oauth_state:abc") + + mock_client.get.assert_called_once_with("enterprise-a:oauth_state:abc") + + def test_wrapper_delete_prefixes_multiple_keys(self): + mock_client = MagicMock() + wrapper = RedisClientWrapper() + wrapper.initialize(mock_client) + + with patch("extensions.ext_redis.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + wrapper.delete("key:a", "key:b") + + mock_client.delete.assert_called_once_with("enterprise-a:key:a", "enterprise-a:key:b") + + def test_wrapper_lock_prefixes_lock_name(self): + mock_client = MagicMock() + wrapper = RedisClientWrapper() + wrapper.initialize(mock_client) + + with patch("extensions.ext_redis.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + wrapper.lock("resource-lock", timeout=10) + + mock_client.lock.assert_called_once() + args, kwargs = mock_client.lock.call_args + assert args == ("enterprise-a:resource-lock",) + assert kwargs["timeout"] == 10 + + def test_wrapper_hash_operations_prefix_key_name(self): + mock_client = MagicMock() + wrapper = RedisClientWrapper() + wrapper.initialize(mock_client) + + with patch("extensions.ext_redis.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + wrapper.hset("hash:key", "field", "value") + wrapper.hgetall("hash:key") + + mock_client.hset.assert_called_once_with("enterprise-a:hash:key", "field", "value") + mock_client.hgetall.assert_called_once_with("enterprise-a:hash:key") + + def test_wrapper_zadd_prefixes_sorted_set_name(self): + mock_client = MagicMock() + wrapper = RedisClientWrapper() + wrapper.initialize(mock_client) + + with patch("extensions.ext_redis.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + wrapper.zadd("zset:key", {"member": 1}) + + mock_client.zadd.assert_called_once() + args, kwargs = mock_client.zadd.call_args + assert args == ("enterprise-a:zset:key", {"member": 1}) + assert kwargs["nx"] is False + + def test_wrapper_preserves_keys_when_prefix_is_empty(self): + 
mock_client = MagicMock() + wrapper = RedisClientWrapper() + wrapper.initialize(mock_client) + + with patch("extensions.ext_redis.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = " " + + wrapper.get("plain:key") + + mock_client.get.assert_called_once_with("plain:key") diff --git a/api/tests/unit_tests/factories/test_build_from_mapping.py b/api/tests/unit_tests/factories/test_build_from_mapping.py index 4fe3f2cb28..511192001e 100644 --- a/api/tests/unit_tests/factories/test_build_from_mapping.py +++ b/api/tests/unit_tests/factories/test_build_from_mapping.py @@ -2,13 +2,13 @@ import uuid from unittest.mock import MagicMock, patch import pytest -from graphon.file import File, FileTransferMethod, FileType, FileUploadConfig from httpx import Response from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom from core.app.file_access import DatabaseFileAccessController, FileAccessScope, bind_file_access_scope from core.workflow.file_reference import build_file_reference, parse_file_reference, resolve_file_record_id from factories.file_factory.builders import build_from_mapping as _build_from_mapping +from graphon.file import File, FileTransferMethod, FileType, FileUploadConfig from models import ToolFile, UploadFile # Test Data diff --git a/api/tests/unit_tests/factories/test_variable_factory.py b/api/tests/unit_tests/factories/test_variable_factory.py index a06c42507d..c35e80a826 100644 --- a/api/tests/unit_tests/factories/test_variable_factory.py +++ b/api/tests/unit_tests/factories/test_variable_factory.py @@ -4,6 +4,11 @@ from typing import Any from uuid import uuid4 import pytest +from hypothesis import HealthCheck, given, settings +from hypothesis import strategies as st + +from factories import variable_factory +from factories.variable_factory import TypeMismatchError, build_segment, build_segment_with_type from graphon.file import File, FileTransferMethod, FileType from graphon.variables import ( ArrayNumberVariable, @@ -31,11 +36,6 @@ from graphon.variables.segments import ( StringSegment, ) from graphon.variables.types import SegmentType -from hypothesis import HealthCheck, given, settings -from hypothesis import strategies as st - -from factories import variable_factory -from factories.variable_factory import TypeMismatchError, build_segment, build_segment_with_type def test_string_variable(): diff --git a/api/tests/unit_tests/fields/test_file_fields.py b/api/tests/unit_tests/fields/test_file_fields.py index 0e848d6ef5..9d9f626b9e 100644 --- a/api/tests/unit_tests/fields/test_file_fields.py +++ b/api/tests/unit_tests/fields/test_file_fields.py @@ -4,11 +4,11 @@ from datetime import datetime from types import SimpleNamespace import pytest -from graphon.file import File, FileTransferMethod, FileType from core.workflow.file_reference import build_file_reference from fields import conversation_fields, message_fields from fields.file_fields import FileResponse, FileWithSignedUrl, RemoteFileInfo, UploadConfig +from graphon.file import File, FileTransferMethod, FileType def test_file_response_serializes_datetime() -> None: diff --git a/api/tests/unit_tests/libs/_human_input/support.py b/api/tests/unit_tests/libs/_human_input/support.py index 13577b7ca5..e6cc23161e 100644 --- a/api/tests/unit_tests/libs/_human_input/support.py +++ b/api/tests/unit_tests/libs/_human_input/support.py @@ -6,7 +6,6 @@ from typing import Any from graphon.nodes.human_input.entities import FormInput from graphon.nodes.human_input.enums import TimeoutUnit - from libs.datetime_utils import 
naive_utc_now diff --git a/api/tests/unit_tests/libs/_human_input/test_form_service.py b/api/tests/unit_tests/libs/_human_input/test_form_service.py index f1ce1a2c1c..fa2c02020b 100644 --- a/api/tests/unit_tests/libs/_human_input/test_form_service.py +++ b/api/tests/unit_tests/libs/_human_input/test_form_service.py @@ -5,6 +5,7 @@ Unit tests for FormService. from datetime import timedelta import pytest + from graphon.nodes.human_input.entities import ( FormInput, UserAction, @@ -13,7 +14,6 @@ from graphon.nodes.human_input.enums import ( FormInputType, TimeoutUnit, ) - from libs.datetime_utils import naive_utc_now from .support import ( diff --git a/api/tests/unit_tests/libs/_human_input/test_models.py b/api/tests/unit_tests/libs/_human_input/test_models.py index 0babfbb315..866ee61b3e 100644 --- a/api/tests/unit_tests/libs/_human_input/test_models.py +++ b/api/tests/unit_tests/libs/_human_input/test_models.py @@ -5,6 +5,7 @@ Unit tests for human input form models. from datetime import datetime, timedelta import pytest + from graphon.nodes.human_input.entities import ( FormInput, UserAction, @@ -13,7 +14,6 @@ from graphon.nodes.human_input.enums import ( FormInputType, TimeoutUnit, ) - from libs.datetime_utils import naive_utc_now from .support import FormSubmissionData, FormSubmissionRequest, HumanInputForm diff --git a/api/tests/unit_tests/libs/broadcast_channel/redis/test_channel_unit_tests.py b/api/tests/unit_tests/libs/broadcast_channel/redis/test_channel_unit_tests.py index 460374b6f6..8bef01c1ed 100644 --- a/api/tests/unit_tests/libs/broadcast_channel/redis/test_channel_unit_tests.py +++ b/api/tests/unit_tests/libs/broadcast_channel/redis/test_channel_unit_tests.py @@ -139,6 +139,28 @@ class TestTopic: mock_redis_client.publish.assert_called_once_with("test-topic", payload) + def test_publish_prefixes_regular_topic(self, mock_redis_client: MagicMock): + with patch("extensions.redis_names.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + topic = Topic(mock_redis_client, "test-topic") + + topic.publish(b"test message") + + mock_redis_client.publish.assert_called_once_with("enterprise-a:test-topic", b"test message") + + def test_subscribe_prefixes_regular_topic(self, mock_redis_client: MagicMock): + with patch("extensions.redis_names.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + topic = Topic(mock_redis_client, "test-topic") + + subscription = topic.subscribe() + try: + subscription._start_if_needed() + finally: + subscription.close() + + mock_redis_client.pubsub.return_value.subscribe.assert_called_once_with("enterprise-a:test-topic") + class TestShardedTopic: """Test cases for the ShardedTopic class.""" @@ -176,6 +198,15 @@ class TestShardedTopic: mock_redis_client.spublish.assert_called_once_with("test-sharded-topic", payload) + def test_publish_prefixes_sharded_topic(self, mock_redis_client: MagicMock): + with patch("extensions.redis_names.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + sharded_topic = ShardedTopic(mock_redis_client, "test-sharded-topic") + + sharded_topic.publish(b"test sharded message") + + mock_redis_client.spublish.assert_called_once_with("enterprise-a:test-sharded-topic", b"test sharded message") + def test_subscribe_returns_sharded_subscription(self, sharded_topic: ShardedTopic, mock_redis_client: MagicMock): """Test that subscribe() returns a _RedisShardedSubscription instance.""" subscription = sharded_topic.subscribe() @@ -185,6 +216,19 @@ class 
TestShardedTopic: assert subscription._pubsub is mock_redis_client.pubsub.return_value assert subscription._topic == "test-sharded-topic" + def test_subscribe_prefixes_sharded_topic(self, mock_redis_client: MagicMock): + with patch("extensions.redis_names.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + sharded_topic = ShardedTopic(mock_redis_client, "test-sharded-topic") + + subscription = sharded_topic.subscribe() + try: + subscription._start_if_needed() + finally: + subscription.close() + + mock_redis_client.pubsub.return_value.ssubscribe.assert_called_once_with("enterprise-a:test-sharded-topic") + @dataclasses.dataclass(frozen=True) class SubscriptionTestCase: diff --git a/api/tests/unit_tests/libs/broadcast_channel/redis/test_streams_channel_unit_tests.py b/api/tests/unit_tests/libs/broadcast_channel/redis/test_streams_channel_unit_tests.py index 0886b70ee5..c6f57c7e59 100644 --- a/api/tests/unit_tests/libs/broadcast_channel/redis/test_streams_channel_unit_tests.py +++ b/api/tests/unit_tests/libs/broadcast_channel/redis/test_streams_channel_unit_tests.py @@ -1,7 +1,8 @@ import threading import time from dataclasses import dataclass -from typing import cast +from typing import Any, cast +from unittest.mock import patch import pytest @@ -29,7 +30,7 @@ class FakeStreamsRedis: self._dollar_snapshots: dict[str, int] = {} # Publisher API - def xadd(self, key: str, fields: dict, *, maxlen: int | None = None) -> str: + def xadd(self, key: str, fields: dict[str, Any], *, maxlen: int | None = None) -> str: """Append entry to stream; accept optional maxlen for API compatibility. The test double ignores maxlen trimming semantics; only records the entry. @@ -44,7 +45,7 @@ class FakeStreamsRedis: self._expire_calls[key] = self._expire_calls.get(key, 0) + 1 # Consumer API - def xread(self, streams: dict, block: int | None = None, count: int | None = None): + def xread(self, streams: dict[str, Any], block: int | None = None, count: int | None = None): # Expect a single key assert len(streams) == 1 key, last_id = next(iter(streams.items())) @@ -79,7 +80,7 @@ class BlockingRedis: def __init__(self) -> None: self._release = threading.Event() - def xread(self, streams: dict, block: int | None = None, count: int | None = None): + def xread(self, streams: dict[str, Any], block: int | None = None, count: int | None = None): self._release.wait(timeout=block / 1000.0 if block else None) return [] @@ -150,6 +151,25 @@ class TestStreamsBroadcastChannel: # Expire called after publish assert fake_redis._expire_calls.get("stream:beta", 0) >= 1 + def test_topic_uses_prefixed_stream_key(self, fake_redis: FakeStreamsRedis): + with patch("extensions.redis_names.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + + topic = StreamsBroadcastChannel(fake_redis, retention_seconds=60).topic("alpha") + + assert topic._topic == "alpha" + assert topic._key == "enterprise-a:stream:alpha" + + def test_publish_uses_prefixed_stream_key(self, fake_redis: FakeStreamsRedis): + with patch("extensions.redis_names.dify_config") as mock_config: + mock_config.REDIS_KEY_PREFIX = "enterprise-a" + topic = StreamsBroadcastChannel(fake_redis, retention_seconds=60).topic("beta") + + topic.publish(b"hello") + + assert fake_redis._store["enterprise-a:stream:beta"][0][1] == {b"data": b"hello"} + assert fake_redis._expire_calls.get("enterprise-a:stream:beta", 0) >= 1 + def test_topic_exposes_self_as_producer_and_subscriber(self, streams_channel: StreamsBroadcastChannel): topic = 
streams_channel.topic("producer-subscriber") @@ -225,7 +245,7 @@ class TestStreamsSubscription: self._fields = fields self._calls = 0 - def xread(self, streams: dict, block: int | None = None, count: int | None = None): + def xread(self, streams: dict[str, Any], block: int | None = None, count: int | None = None): self._calls += 1 if self._calls == 1: key = next(iter(streams)) diff --git a/api/tests/unit_tests/libs/test_email_i18n.py b/api/tests/unit_tests/libs/test_email_i18n.py index 962a36fe03..b4c0eaf7ee 100644 --- a/api/tests/unit_tests/libs/test_email_i18n.py +++ b/api/tests/unit_tests/libs/test_email_i18n.py @@ -503,6 +503,7 @@ class TestEmailI18nIntegration: EmailType.ACCOUNT_DELETION_VERIFICATION, EmailType.QUEUE_MONITOR_ALERT, EmailType.DOCUMENT_CLEAN_NOTIFY, + EmailType.WORKFLOW_COMMENT_MENTION, ] for email_type in expected_types: diff --git a/api/tests/unit_tests/libs/test_pyrefly_type_coverage.py b/api/tests/unit_tests/libs/test_pyrefly_type_coverage.py index 7087490845..cad9d47bba 100644 --- a/api/tests/unit_tests/libs/test_pyrefly_type_coverage.py +++ b/api/tests/unit_tests/libs/test_pyrefly_type_coverage.py @@ -1,4 +1,5 @@ import json +from typing import Any from libs.pyrefly_type_coverage import ( CoverageSummary, @@ -8,11 +9,11 @@ from libs.pyrefly_type_coverage import ( ) -def _make_report(summary: dict) -> str: +def _make_report(summary: dict[str, Any]) -> str: return json.dumps({"module_reports": [], "summary": summary}) -_SAMPLE_SUMMARY: dict = { +_SAMPLE_SUMMARY: dict[str, Any] = { "n_modules": 100, "n_typable": 1000, "n_typed": 400, diff --git a/api/tests/unit_tests/libs/test_sendgrid_client.py b/api/tests/unit_tests/libs/test_sendgrid_client.py index 85744003c7..a65a9b1882 100644 --- a/api/tests/unit_tests/libs/test_sendgrid_client.py +++ b/api/tests/unit_tests/libs/test_sendgrid_client.py @@ -1,3 +1,4 @@ +from typing import Any from unittest.mock import MagicMock, patch import pytest @@ -6,7 +7,7 @@ from python_http_client.exceptions import UnauthorizedError from libs.sendgrid import SendGridClient -def _mail(to: str = "user@example.com") -> dict: +def _mail(to: str = "user@example.com") -> dict[str, Any]: return {"to": to, "subject": "Hi", "html": "Hi"} diff --git a/api/tests/unit_tests/libs/test_smtp_client.py b/api/tests/unit_tests/libs/test_smtp_client.py index 1edf4899ac..96d62de2d6 100644 --- a/api/tests/unit_tests/libs/test_smtp_client.py +++ b/api/tests/unit_tests/libs/test_smtp_client.py @@ -1,3 +1,4 @@ +from typing import Any from unittest.mock import ANY, MagicMock, patch import pytest @@ -5,7 +6,7 @@ import pytest from libs.smtp import SMTPClient -def _mail() -> dict: +def _mail() -> dict[str, Any]: return {"to": "user@example.com", "subject": "Hi", "html": "Hi"} diff --git a/api/tests/unit_tests/models/test_comment_models.py b/api/tests/unit_tests/models/test_comment_models.py new file mode 100644 index 0000000000..277335cbef --- /dev/null +++ b/api/tests/unit_tests/models/test_comment_models.py @@ -0,0 +1,100 @@ +from unittest.mock import Mock, patch + +from models.comment import WorkflowComment, WorkflowCommentMention, WorkflowCommentReply + + +def test_workflow_comment_account_properties_and_cache() -> None: + comment = WorkflowComment(created_by="user-1", resolved_by="user-2", content="hello", position_x=1, position_y=2) + created_account = Mock(id="user-1") + resolved_account = Mock(id="user-2") + + with patch("models.comment.db.session.get", side_effect=[created_account, resolved_account]) as get_mock: + assert comment.created_by_account is 
created_account + assert comment.resolved_by_account is resolved_account + assert get_mock.call_count == 2 + + comment.cache_created_by_account(created_account) + comment.cache_resolved_by_account(resolved_account) + with patch("models.comment.db.session.get") as get_mock: + assert comment.created_by_account is created_account + assert comment.resolved_by_account is resolved_account + get_mock.assert_not_called() + + comment_without_resolver = WorkflowComment( + created_by="user-1", + resolved_by=None, + content="hello", + position_x=1, + position_y=2, + ) + with patch("models.comment.db.session.get") as get_mock: + assert comment_without_resolver.resolved_by_account is None + get_mock.assert_not_called() + + +def test_workflow_comment_counts_and_participants() -> None: + reply_1 = WorkflowCommentReply(comment_id="comment-1", content="reply-1", created_by="user-2") + reply_2 = WorkflowCommentReply(comment_id="comment-1", content="reply-2", created_by="user-2") + mention_1 = WorkflowCommentMention(comment_id="comment-1", mentioned_user_id="user-3") + mention_2 = WorkflowCommentMention(comment_id="comment-1", mentioned_user_id="user-4") + comment = WorkflowComment(created_by="user-1", resolved_by=None, content="hello", position_x=1, position_y=2) + comment.replies = [reply_1, reply_2] + comment.mentions = [mention_1, mention_2] + + account_1 = Mock(id="user-1") + account_2 = Mock(id="user-2") + account_3 = Mock(id="user-3") + account_map = { + "user-1": account_1, + "user-2": account_2, + "user-3": account_3, + "user-4": None, + } + + with patch("models.comment.db.session.get", side_effect=lambda _model, user_id: account_map[user_id]) as get_mock: + participants = comment.participants + + assert comment.reply_count == 2 + assert comment.mention_count == 2 + assert set(participants) == {account_1, account_2, account_3} + assert get_mock.call_count == 4 + + +def test_workflow_comment_participants_use_cached_accounts() -> None: + reply = WorkflowCommentReply(comment_id="comment-1", content="reply-1", created_by="user-2") + mention = WorkflowCommentMention(comment_id="comment-1", mentioned_user_id="user-3") + comment = WorkflowComment(created_by="user-1", resolved_by=None, content="hello", position_x=1, position_y=2) + comment.replies = [reply] + comment.mentions = [mention] + + account_1 = Mock(id="user-1") + account_2 = Mock(id="user-2") + account_3 = Mock(id="user-3") + comment.cache_created_by_account(account_1) + reply.cache_created_by_account(account_2) + mention.cache_mentioned_user_account(account_3) + + with patch("models.comment.db.session.get") as get_mock: + participants = comment.participants + + assert set(participants) == {account_1, account_2, account_3} + get_mock.assert_not_called() + + +def test_reply_and_mention_account_properties_and_cache() -> None: + reply = WorkflowCommentReply(comment_id="comment-1", content="reply", created_by="user-1") + mention = WorkflowCommentMention(comment_id="comment-1", mentioned_user_id="user-2") + reply_account = Mock(id="user-1") + mention_account = Mock(id="user-2") + + with patch("models.comment.db.session.get", side_effect=[reply_account, mention_account]) as get_mock: + assert reply.created_by_account is reply_account + assert mention.mentioned_user_account is mention_account + assert get_mock.call_count == 2 + + reply.cache_created_by_account(reply_account) + mention.cache_mentioned_user_account(mention_account) + with patch("models.comment.db.session.get") as get_mock: + assert reply.created_by_account is reply_account + assert 
mention.mentioned_user_account is mention_account + get_mock.assert_not_called() diff --git a/api/tests/unit_tests/models/test_conversation_variable.py b/api/tests/unit_tests/models/test_conversation_variable.py index 86163f1554..bb3a6db1a1 100644 --- a/api/tests/unit_tests/models/test_conversation_variable.py +++ b/api/tests/unit_tests/models/test_conversation_variable.py @@ -1,8 +1,7 @@ from uuid import uuid4 -from graphon.variables import SegmentType - from factories import variable_factory +from graphon.variables import SegmentType from models import ConversationVariable diff --git a/api/tests/unit_tests/models/test_model.py b/api/tests/unit_tests/models/test_model.py index 3f6d6bfbe3..a87dd7f15a 100644 --- a/api/tests/unit_tests/models/test_model.py +++ b/api/tests/unit_tests/models/test_model.py @@ -2,9 +2,9 @@ import importlib import types import pytest -from graphon.file import FILE_MODEL_IDENTITY, FileTransferMethod from core.workflow.file_reference import build_file_reference +from graphon.file import FILE_MODEL_IDENTITY, FileTransferMethod from models.model import Conversation, Message diff --git a/api/tests/unit_tests/models/test_workflow.py b/api/tests/unit_tests/models/test_workflow.py index e7c0479757..f7bdc97eb5 100644 --- a/api/tests/unit_tests/models/test_workflow.py +++ b/api/tests/unit_tests/models/test_workflow.py @@ -3,14 +3,13 @@ import json from unittest import mock from uuid import uuid4 -from graphon.file import File, FileTransferMethod, FileType -from graphon.variables import FloatVariable, IntegerVariable, SecretVariable, StringVariable -from graphon.variables.segments import IntegerSegment, Segment - from constants import HIDDEN_VALUE from core.helper import encrypter from core.workflow.file_reference import build_file_reference from factories.variable_factory import build_segment +from graphon.file import File, FileTransferMethod, FileType +from graphon.variables import FloatVariable, IntegerVariable, SecretVariable, StringVariable +from graphon.variables.segments import IntegerSegment, Segment from models.workflow import ( Workflow, WorkflowDraftVariable, diff --git a/api/tests/unit_tests/models/test_workflow_models.py b/api/tests/unit_tests/models/test_workflow_models.py index 507e1c8c3a..eb9fef7587 100644 --- a/api/tests/unit_tests/models/test_workflow_models.py +++ b/api/tests/unit_tests/models/test_workflow_models.py @@ -13,12 +13,12 @@ from datetime import UTC, datetime from uuid import uuid4 import pytest + from graphon.enums import ( BuiltinNodeTypes, WorkflowExecutionStatus, WorkflowNodeExecutionStatus, ) - from models.enums import CreatorUserRole, WorkflowRunTriggeredFrom from models.workflow import ( Workflow, diff --git a/api/tests/unit_tests/repositories/test_workflow_collaboration_repository.py b/api/tests/unit_tests/repositories/test_workflow_collaboration_repository.py new file mode 100644 index 0000000000..1f47e8b692 --- /dev/null +++ b/api/tests/unit_tests/repositories/test_workflow_collaboration_repository.py @@ -0,0 +1,121 @@ +import json +from unittest.mock import Mock + +import pytest + +from repositories import workflow_collaboration_repository as repo_module +from repositories.workflow_collaboration_repository import WorkflowCollaborationRepository + + +class TestWorkflowCollaborationRepository: + @pytest.fixture + def mock_redis(self, monkeypatch: pytest.MonkeyPatch) -> Mock: + mock_redis = Mock() + monkeypatch.setattr(repo_module, "redis_client", mock_redis) + return mock_redis + + def test_get_sid_mapping_returns_mapping(self, 
mock_redis: Mock) -> None: + # Arrange + mock_redis.get.return_value = b'{"workflow_id":"wf-1","user_id":"u-1"}' + repository = WorkflowCollaborationRepository() + + # Act + result = repository.get_sid_mapping("sid-1") + + # Assert + assert result == {"workflow_id": "wf-1", "user_id": "u-1"} + + def test_list_sessions_filters_invalid_entries(self, mock_redis: Mock) -> None: + # Arrange + mock_redis.hgetall.return_value = { + b"sid-1": b'{"user_id":"u-1","username":"Jane","sid":"sid-1","connected_at":2}', + b"sid-2": b'{"username":"Missing","sid":"sid-2"}', + b"sid-3": b"not-json", + } + repository = WorkflowCollaborationRepository() + + # Act + result = repository.list_sessions("wf-1") + + # Assert + assert result == [ + { + "user_id": "u-1", + "username": "Jane", + "avatar": None, + "sid": "sid-1", + "connected_at": 2, + } + ] + + def test_set_session_info_persists_payload(self, mock_redis: Mock) -> None: + # Arrange + mock_redis.exists.return_value = True + repository = WorkflowCollaborationRepository() + payload = { + "user_id": "u-1", + "username": "Jane", + "avatar": None, + "sid": "sid-1", + "connected_at": 1, + } + + # Act + repository.set_session_info("wf-1", payload) + + # Assert + assert mock_redis.hset.called + workflow_key, sid, session_json = mock_redis.hset.call_args.args + assert workflow_key == "workflow_online_users:wf-1" + assert sid == "sid-1" + assert json.loads(session_json)["user_id"] == "u-1" + assert mock_redis.set.called + + def test_refresh_session_state_expires_keys(self, mock_redis: Mock) -> None: + # Arrange + mock_redis.exists.return_value = True + repository = WorkflowCollaborationRepository() + + # Act + repository.refresh_session_state("wf-1", "sid-1") + + # Assert + assert mock_redis.expire.call_count == 2 + + def test_get_current_leader_decodes_bytes(self, mock_redis: Mock) -> None: + # Arrange + mock_redis.get.return_value = b"sid-1" + repository = WorkflowCollaborationRepository() + + # Act + result = repository.get_current_leader("wf-1") + + # Assert + assert result == "sid-1" + + def test_set_leader_if_absent_uses_nx(self, mock_redis: Mock) -> None: + # Arrange + mock_redis.set.return_value = True + repository = WorkflowCollaborationRepository() + + # Act + result = repository.set_leader_if_absent("wf-1", "sid-1") + + # Assert + assert result is True + _key, _value = mock_redis.set.call_args.args + assert _key == "workflow_leader:wf-1" + assert _value == "sid-1" + assert mock_redis.set.call_args.kwargs["nx"] is True + assert "ex" in mock_redis.set.call_args.kwargs + + def test_get_session_sids_decodes(self, mock_redis: Mock) -> None: + # Arrange + mock_redis.hkeys.return_value = [b"sid-1", "sid-2"] + repository = WorkflowCollaborationRepository() + + # Act + result = repository.get_session_sids("wf-1") + + # Assert + assert result == ["sid-1", "sid-2"] diff --git a/api/tests/unit_tests/services/auth/test_jina_auth_standalone_module.py b/api/tests/unit_tests/services/auth/test_jina_auth_standalone_module.py index 4b5a97bf3f..b31af996ae 100644 --- a/api/tests/unit_tests/services/auth/test_jina_auth_standalone_module.py +++ b/api/tests/unit_tests/services/auth/test_jina_auth_standalone_module.py @@ -4,6 +4,7 @@ import importlib.util import sys from pathlib import Path from types import ModuleType +from typing import Any from unittest.mock import MagicMock import httpx @@ -30,8 +31,8 @@ def jina_module() -> ModuleType: return module -def _credentials(api_key: str | None = "test_api_key_123", auth_type: str = "bearer") -> dict: - config: dict = {} if 
api_key is None else {"api_key": api_key} +def _credentials(api_key: str | None = "test_api_key_123", auth_type: str = "bearer") -> dict[str, Any]: + config: dict[str, Any] = {} if api_key is None else {"api_key": api_key} return {"auth_type": auth_type, "config": config} @@ -47,7 +48,7 @@ def test_init_rejects_invalid_auth_type(jina_module: ModuleType) -> None: @pytest.mark.parametrize("credentials", [{"auth_type": "bearer", "config": {}}, {"auth_type": "bearer"}]) -def test_init_requires_api_key(jina_module: ModuleType, credentials: dict) -> None: +def test_init_requires_api_key(jina_module: ModuleType, credentials: dict[str, Any]) -> None: with pytest.raises(ValueError, match="No API key provided"): jina_module.JinaAuth(credentials) diff --git a/api/tests/unit_tests/services/dataset_service_test_helpers.py b/api/tests/unit_tests/services/dataset_service_test_helpers.py index da557de8a4..3349c1fd8c 100644 --- a/api/tests/unit_tests/services/dataset_service_test_helpers.py +++ b/api/tests/unit_tests/services/dataset_service_test_helpers.py @@ -7,10 +7,10 @@ document, and segment service test modules that exercise import json from types import SimpleNamespace +from typing import Any from unittest.mock import MagicMock, Mock, create_autospec, patch import pytest -from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType from werkzeug.exceptions import Forbidden, NotFound from core.errors.error import LLMBadRequestError, ProviderTokenNotInitError @@ -19,6 +19,7 @@ from core.rag.index_processor.constant.built_in_field import BuiltInField from core.rag.index_processor.constant.index_type import IndexStructureType from core.rag.retrieval.retrieval_methods import RetrievalMethod from enums.cloud_plan import CloudPlan +from graphon.model_runtime.entities.model_entities import ModelFeature, ModelType from models import Account, TenantAccountRole from models.dataset import ( ChildChunk, @@ -166,7 +167,7 @@ class DatasetServiceUnitDataFactory: built_in_field_enabled: bool = False, doc_form: str | None = "text_model", enable_api: bool = False, - summary_index_setting: dict | None = None, + summary_index_setting: dict[str, Any] | None = None, **kwargs, ) -> Mock: dataset = Mock(spec=Dataset) @@ -214,12 +215,12 @@ class DatasetServiceUnitDataFactory: archived: bool = False, enabled: bool = True, data_source_type: str = "upload_file", - data_source_info_dict: dict | None = None, + data_source_info_dict: dict[str, Any] | None = None, data_source_info: str | None = None, doc_form: str = "text_model", need_summary: bool = True, position: int = 0, - doc_metadata: dict | None = None, + doc_metadata: dict[str, Any] | None = None, name: str = "Document", **kwargs, ) -> Mock: diff --git a/api/tests/unit_tests/services/document_service_status.py b/api/tests/unit_tests/services/document_service_status.py deleted file mode 100644 index 1b682d5762..0000000000 --- a/api/tests/unit_tests/services/document_service_status.py +++ /dev/null @@ -1,70 +0,0 @@ -"""Unit tests for non-SQL validation in DocumentService status management methods.""" - -from unittest.mock import Mock, create_autospec - -import pytest - -from models import Account -from models.dataset import Dataset -from services.dataset_service import DocumentService - - -class DocumentStatusTestDataFactory: - """Factory class for creating test data and mock objects for document status tests.""" - - @staticmethod - def create_dataset_mock( - dataset_id: str = "dataset-123", - tenant_id: str = "tenant-123", - name: str = "Test Dataset", - 
built_in_field_enabled: bool = False, - **kwargs, - ) -> Mock: - """Create a mock Dataset with specified attributes.""" - dataset = Mock(spec=Dataset) - dataset.id = dataset_id - dataset.tenant_id = tenant_id - dataset.name = name - dataset.built_in_field_enabled = built_in_field_enabled - for key, value in kwargs.items(): - setattr(dataset, key, value) - return dataset - - @staticmethod - def create_user_mock( - user_id: str = "user-123", - tenant_id: str = "tenant-123", - **kwargs, - ) -> Mock: - """Create a mock user (Account) with specified attributes.""" - user = create_autospec(Account, instance=True) - user.id = user_id - user.current_tenant_id = tenant_id - for key, value in kwargs.items(): - setattr(user, key, value) - return user - - -class TestDocumentServiceBatchUpdateDocumentStatus: - """Unit tests for non-SQL path in DocumentService.batch_update_document_status.""" - - def test_batch_update_document_status_invalid_action_error(self): - """ - Test error handling for invalid action. - - Verifies that when an invalid action is provided, a ValueError - is raised. - - This test ensures: - - Invalid actions are rejected - - Error message is clear - - Error type is correct - """ - # Arrange - dataset = DocumentStatusTestDataFactory.create_dataset_mock() - user = DocumentStatusTestDataFactory.create_user_mock() - document_ids = ["document-123"] - - # Act & Assert - with pytest.raises(ValueError, match="Invalid action"): - DocumentService.batch_update_document_status(dataset, document_ids, "invalid_action", user) diff --git a/api/tests/unit_tests/services/document_service_validation.py b/api/tests/unit_tests/services/document_service_validation.py index 6903c47a24..71df8c4e20 100644 --- a/api/tests/unit_tests/services/document_service_validation.py +++ b/api/tests/unit_tests/services/document_service_validation.py @@ -109,11 +109,11 @@ This test suite follows a comprehensive testing strategy that covers: from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.entities.model_entities import ModelType from core.errors.error import LLMBadRequestError, ProviderTokenNotInitError from core.rag.entities import PreProcessingRule, Rule, Segmentation from core.rag.index_processor.constant.index_type import IndexStructureType, IndexTechniqueType +from graphon.model_runtime.entities.model_entities import ModelType from models.dataset import Dataset, DatasetProcessRule, Document from services.dataset_service import DatasetService, DocumentService from services.entities.knowledge_entities.knowledge_entities import ( diff --git a/api/tests/unit_tests/services/external_dataset_service.py b/api/tests/unit_tests/services/external_dataset_service.py index dd41c0c97e..83bae370eb 100644 --- a/api/tests/unit_tests/services/external_dataset_service.py +++ b/api/tests/unit_tests/services/external_dataset_service.py @@ -51,7 +51,7 @@ class ExternalDatasetTestDataFactory: tenant_id: str = "tenant-1", name: str = "Test API", description: str = "Description", - settings: dict | None = None, + settings: dict[str, Any] | None = None, ) -> ExternalKnowledgeApis: """ Create a concrete ``ExternalKnowledgeApis`` instance with minimal fields. 
@@ -220,7 +220,7 @@ class TestExternalDatasetServiceValidateApiList: ({"endpoint": "https://example.com"}, "api_key is required"), ], ) - def test_validate_api_list_failures(self, config: dict, expected_message: str): + def test_validate_api_list_failures(self, config: dict[str, Any], expected_message: str): """ Invalid configs should raise ``ValueError`` with a clear message. """ diff --git a/api/tests/unit_tests/services/hit_service.py b/api/tests/unit_tests/services/hit_service.py index 22ab8503df..ddbc7dc041 100644 --- a/api/tests/unit_tests/services/hit_service.py +++ b/api/tests/unit_tests/services/hit_service.py @@ -6,6 +6,7 @@ which handles retrieval testing operations for datasets, including internal dataset retrieval and external knowledge base retrieval. """ +from typing import Any from unittest.mock import MagicMock, Mock, patch import pytest @@ -30,7 +31,7 @@ class HitTestingTestDataFactory: dataset_id: str = "dataset-123", tenant_id: str = "tenant-123", provider: str = "vendor", - retrieval_model: dict | None = None, + retrieval_model: dict[str, Any] | None = None, **kwargs, ) -> Mock: """ @@ -83,7 +84,7 @@ class HitTestingTestDataFactory: @staticmethod def create_document_mock( content: str = "Test document content", - metadata: dict | None = None, + metadata: dict[str, Any] | None = None, **kwargs, ) -> Mock: """ diff --git a/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_dsl_service.py b/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_dsl_service.py index 6813a1bf2a..337659b15f 100644 --- a/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_dsl_service.py +++ b/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_dsl_service.py @@ -1,13 +1,13 @@ from types import SimpleNamespace -from typing import cast +from typing import Any, cast from unittest.mock import MagicMock, Mock import pytest import yaml -from graphon.enums import BuiltinNodeTypes from sqlalchemy.orm import Session from core.workflow.nodes.knowledge_index import KNOWLEDGE_INDEX_NODE_TYPE +from graphon.enums import BuiltinNodeTypes from services.entities.knowledge_entities.rag_pipeline_entities import IconInfo, RagPipelineDatasetCreateEntity from services.rag_pipeline.rag_pipeline_dsl_service import ( ImportStatus, @@ -558,7 +558,7 @@ def test_append_workflow_export_data_filters_credentials(mocker) -> None: "services.rag_pipeline.rag_pipeline_dsl_service.DependenciesAnalysisService.generate_dependencies", return_value=[], ) - export_data: dict = {} + export_data: dict[str, Any] = {} pipeline = Mock(id="p1", tenant_id="t1") service._append_workflow_export_data(export_data=export_data, pipeline=pipeline, include_secret=False) @@ -641,7 +641,7 @@ def test_append_workflow_export_data_encrypts_knowledge_retrieval_dataset_ids(mo "services.rag_pipeline.rag_pipeline_dsl_service.DependenciesAnalysisService.generate_dependencies", return_value=[], ) - export_data: dict = {} + export_data: dict[str, Any] = {} pipeline = Mock(id="p1", tenant_id="t1") service._append_workflow_export_data(export_data=export_data, pipeline=pipeline, include_secret=False) diff --git a/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_service.py b/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_service.py index 941a665308..327281d07f 100644 --- a/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_service.py +++ b/api/tests/unit_tests/services/rag_pipeline/test_rag_pipeline_service.py @@ -787,7 +787,6 @@ def test_retry_error_document_success(mocker, 
rag_pipeline_service) -> None: def test_set_datasource_variables_success(mocker, rag_pipeline_service) -> None: from graphon.entities.workflow_node_execution import WorkflowNodeExecution - from models.dataset import Pipeline # 1. Setup mocks @@ -1483,12 +1482,11 @@ def test_handle_node_run_result_raises_when_no_terminal_event(mocker, rag_pipeli def test_handle_node_run_result_marks_document_error_for_published_invoke(mocker, rag_pipeline_service) -> None: + from core.app.entities.app_invoke_entities import InvokeFrom from graphon.enums import WorkflowNodeExecutionStatus from graphon.graph_events import NodeRunFailedEvent from graphon.node_events.base import NodeRunResult - from core.app.entities.app_invoke_entities import InvokeFrom - class FakeVariablePool: def __init__(self): self._values = { diff --git a/api/tests/unit_tests/services/test_account_service.py b/api/tests/unit_tests/services/test_account_service.py index eeb5d178ec..c4f5f57153 100644 --- a/api/tests/unit_tests/services/test_account_service.py +++ b/api/tests/unit_tests/services/test_account_service.py @@ -5,7 +5,7 @@ from unittest.mock import MagicMock, patch import pytest from configs import dify_config -from models.account import Account, AccountStatus +from models.account import Account, AccountStatus, TenantStatus from services.account_service import AccountService, RegisterService, TenantService from services.errors.account import ( AccountAlreadyInTenantError, @@ -1697,7 +1697,7 @@ class TestRegisterService: # Setup test data mock_tenant = MagicMock() mock_tenant.id = "tenant-456" - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_account = TestAccountAssociatedDataFactory.create_account_mock( account_id="user-123", email="test@example.com" ) @@ -1759,7 +1759,7 @@ class TestRegisterService: # Setup test data mock_tenant = MagicMock() mock_tenant.id = "tenant-456" - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL # Mock Redis data invitation_data = { @@ -1784,7 +1784,7 @@ class TestRegisterService: # Setup test data mock_tenant = MagicMock() mock_tenant.id = "tenant-456" - mock_tenant.status = "normal" + mock_tenant.status = TenantStatus.NORMAL mock_account = TestAccountAssociatedDataFactory.create_account_mock( account_id="different-user-456", email="test@example.com" ) diff --git a/api/tests/unit_tests/services/test_app_generate_service_streaming_integration.py b/api/tests/unit_tests/services/test_app_generate_service_streaming_integration.py index e66d52f66b..30aa359b45 100644 --- a/api/tests/unit_tests/services/test_app_generate_service_streaming_integration.py +++ b/api/tests/unit_tests/services/test_app_generate_service_streaming_integration.py @@ -1,6 +1,7 @@ import json import uuid from collections import defaultdict, deque +from typing import Any import pytest @@ -60,7 +61,7 @@ class _FakeStreams: self._data: dict[str, list[tuple[str, dict]]] = defaultdict(list) self._seq: dict[str, int] = defaultdict(int) - def xadd(self, key: str, fields: dict, *, maxlen: int | None = None) -> str: + def xadd(self, key: str, fields: dict[str, Any], *, maxlen: int | None = None) -> str: # maxlen is accepted for API compatibility with redis-py; ignored in this test double self._seq[key] += 1 eid = f"{self._seq[key]}-0" @@ -71,7 +72,7 @@ class _FakeStreams: # no-op for tests return None - def xread(self, streams: dict, block: int | None = None, count: int | None = None): + def xread(self, streams: dict[str, Any], block: int | None = None, count: int | None = None): assert 
len(streams) == 1 key, last_id = next(iter(streams.items())) entries = self._data.get(key, []) diff --git a/api/tests/unit_tests/services/test_audio_service.py b/api/tests/unit_tests/services/test_audio_service.py index af8fc1e84f..83258fd1b7 100644 --- a/api/tests/unit_tests/services/test_audio_service.py +++ b/api/tests/unit_tests/services/test_audio_service.py @@ -53,6 +53,7 @@ Tests available voice retrieval: - text_to_speech: Enables TTS functionality """ +from typing import Any from unittest.mock import MagicMock, Mock, create_autospec, patch import pytest @@ -109,7 +110,7 @@ class AudioServiceTestDataFactory: return app @staticmethod - def create_workflow_mock(features_dict: dict | None = None, **kwargs) -> Mock: + def create_workflow_mock(features_dict: dict[str, Any] | None = None, **kwargs) -> Mock: """ Create a mock Workflow object. @@ -128,8 +129,8 @@ class AudioServiceTestDataFactory: @staticmethod def create_app_model_config_mock( - speech_to_text_dict: dict | None = None, - text_to_speech_dict: dict | None = None, + speech_to_text_dict: dict[str, Any] | None = None, + text_to_speech_dict: dict[str, Any] | None = None, **kwargs, ) -> Mock: """ diff --git a/api/tests/unit_tests/services/test_conversation_service.py b/api/tests/unit_tests/services/test_conversation_service.py index 68f4c51afe..2c7f13b79f 100644 --- a/api/tests/unit_tests/services/test_conversation_service.py +++ b/api/tests/unit_tests/services/test_conversation_service.py @@ -6,26 +6,15 @@ Tests are organized by functionality and include edge cases, error handling, and both positive and negative test scenarios. """ -from datetime import timedelta from unittest.mock import MagicMock, Mock, create_autospec, patch -import pytest from sqlalchemy import asc, desc from core.app.entities.app_invoke_entities import InvokeFrom from libs.datetime_utils import naive_utc_now -from libs.infinite_scroll_pagination import InfiniteScrollPagination from models import Account, ConversationVariable -from models.enums import ConversationFromSource from models.model import App, Conversation, EndUser, Message from services.conversation_service import ConversationService -from services.errors.conversation import ( - ConversationNotExistsError, - ConversationVariableNotExistsError, - ConversationVariableTypeMismatchError, - LastConversationNotExistsError, -) -from services.errors.message import MessageNotExistsError class ConversationServiceTestDataFactory: @@ -338,330 +327,9 @@ class TestConversationServiceHelpers: assert condition is not None -class TestConversationServiceGetConversation: - """Test conversation retrieval operations.""" - - @patch("services.conversation_service.db.session") - def test_get_conversation_success_with_account(self, mock_db_session): - """ - Test successful conversation retrieval with account user. - - Should return conversation when found with proper filters. 
- """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock( - from_account_id=user.id, from_source=ConversationFromSource.CONSOLE - ) - - mock_db_session.scalar.return_value = conversation - - # Act - result = ConversationService.get_conversation(app_model, "conv-123", user) - - # Assert - assert result == conversation - - @patch("services.conversation_service.db.session") - def test_get_conversation_success_with_end_user(self, mock_db_session): - """ - Test successful conversation retrieval with end user. - - Should return conversation when found with proper filters for API user. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_end_user_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock( - from_end_user_id=user.id, from_source=ConversationFromSource.API - ) - - mock_db_session.scalar.return_value = conversation - - # Act - result = ConversationService.get_conversation(app_model, "conv-123", user) - - # Assert - assert result == conversation - - @patch("services.conversation_service.db.session") - def test_get_conversation_not_found_raises_error(self, mock_db_session): - """ - Test that get_conversation raises error when conversation not found. - - Should raise ConversationNotExistsError when no matching conversation found. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - mock_db_session.scalar.return_value = None - - # Act & Assert - with pytest.raises(ConversationNotExistsError): - ConversationService.get_conversation(app_model, "conv-123", user) - - -class TestConversationServiceRename: - """Test conversation rename operations.""" - - @patch("services.conversation_service.db.session") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_rename_with_manual_name(self, mock_get_conversation, mock_db_session): - """ - Test renaming conversation with manual name. - - Should update conversation name and timestamp when auto_generate is False. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Act - result = ConversationService.rename( - app_model=app_model, - conversation_id="conv-123", - user=user, - name="New Name", - auto_generate=False, - ) - - # Assert - assert result == conversation - assert conversation.name == "New Name" - mock_db_session.commit.assert_called_once() - - -class TestConversationServiceAutoGenerateName: - """Test conversation auto-name generation operations.""" - - @patch("services.conversation_service.db.session") - @patch("services.conversation_service.LLMGenerator") - def test_auto_generate_name_success(self, mock_llm_generator, mock_db_session): - """ - Test successful auto-generation of conversation name. - - Should generate name using LLMGenerator and update conversation. 
- """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - message = ConversationServiceTestDataFactory.create_message_mock( - conversation_id=conversation.id, app_id=app_model.id - ) - - # Mock database query to return message - mock_db_session.scalar.return_value = message - - # Mock LLM generator - mock_llm_generator.generate_conversation_name.return_value = "Generated Name" - - # Act - result = ConversationService.auto_generate_name(app_model, conversation) - - # Assert - assert result == conversation - assert conversation.name == "Generated Name" - mock_llm_generator.generate_conversation_name.assert_called_once_with( - app_model.tenant_id, message.query, conversation.id, app_model.id - ) - mock_db_session.commit.assert_called_once() - - @patch("services.conversation_service.db.session") - def test_auto_generate_name_no_message_raises_error(self, mock_db_session): - """ - Test auto-generation fails when no message found. - - Should raise MessageNotExistsError when conversation has no messages. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - # Mock database query to return None - mock_db_session.scalar.return_value = None - - # Act & Assert - with pytest.raises(MessageNotExistsError): - ConversationService.auto_generate_name(app_model, conversation) - - @patch("services.conversation_service.db.session") - @patch("services.conversation_service.LLMGenerator") - def test_auto_generate_name_handles_llm_exception(self, mock_llm_generator, mock_db_session): - """ - Test auto-generation handles LLM generator exceptions gracefully. - - Should continue without name when LLMGenerator fails. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - message = ConversationServiceTestDataFactory.create_message_mock( - conversation_id=conversation.id, app_id=app_model.id - ) - - # Mock database query to return message - mock_db_session.scalar.return_value = message - - # Mock LLM generator to raise exception - mock_llm_generator.generate_conversation_name.side_effect = Exception("LLM Error") - - # Act - result = ConversationService.auto_generate_name(app_model, conversation) - - # Assert - assert result == conversation - # Name should remain unchanged due to exception - mock_db_session.commit.assert_called_once() - - -class TestConversationServiceDelete: - """Test conversation deletion operations.""" - - @patch("services.conversation_service.delete_conversation_related_data") - @patch("services.conversation_service.db.session") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_delete_success(self, mock_get_conversation, mock_db_session, mock_delete_task): - """ - Test successful conversation deletion. - - Should delete conversation and schedule cleanup task. 
- """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock(name="Test App") - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Act - ConversationService.delete(app_model, "conv-123", user) - - # Assert - mock_db_session.delete.assert_called_once_with(conversation) - mock_db_session.commit.assert_called_once() - mock_delete_task.delay.assert_called_once_with(conversation.id) - - class TestConversationServiceConversationalVariable: """Test conversational variable operations.""" - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_get_conversational_variable_success(self, mock_get_conversation, mock_session_factory): - """ - Test successful retrieval of conversational variables. - - Should return paginated list of variables for conversation. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session and variables - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - variable1 = ConversationServiceTestDataFactory.create_conversation_variable_mock() - variable2 = ConversationServiceTestDataFactory.create_conversation_variable_mock(variable_id="var-456") - - mock_session.scalars.return_value.all.return_value = [variable1, variable2] - - # Act - result = ConversationService.get_conversational_variable( - app_model=app_model, - conversation_id="conv-123", - user=user, - limit=10, - last_id=None, - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - assert len(result.data) == 2 - assert result.limit == 10 - assert result.has_more is False - - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_get_conversational_variable_with_last_id(self, mock_get_conversation, mock_session_factory): - """ - Test retrieval of variables with last_id pagination. - - Should filter variables created after last_id. 
- """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session and variables - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - last_variable = ConversationServiceTestDataFactory.create_conversation_variable_mock( - created_at=naive_utc_now() - timedelta(hours=1) - ) - variable = ConversationServiceTestDataFactory.create_conversation_variable_mock(created_at=naive_utc_now()) - - mock_session.scalar.return_value = last_variable - mock_session.scalars.return_value.all.return_value = [variable] - - # Act - result = ConversationService.get_conversational_variable( - app_model=app_model, - conversation_id="conv-123", - user=user, - limit=10, - last_id="var-123", - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - assert len(result.data) == 1 - assert result.limit == 10 - - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_get_conversational_variable_last_id_not_found_raises_error( - self, mock_get_conversation, mock_session_factory - ): - """ - Test that invalid last_id raises ConversationVariableNotExistsError. - - Should raise error when last_id doesn't exist. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - mock_session.scalar.return_value = None - - # Act & Assert - with pytest.raises(ConversationVariableNotExistsError): - ConversationService.get_conversational_variable( - app_model=app_model, - conversation_id="conv-123", - user=user, - limit=10, - last_id="invalid-id", - ) - @patch("services.conversation_service.session_factory") @patch("services.conversation_service.ConversationService.get_conversation") @patch("services.conversation_service.dify_config") @@ -698,466 +366,3 @@ class TestConversationServiceConversationalVariable: # Assert - JSON filter should be applied assert mock_session.scalars.called - - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - @patch("services.conversation_service.dify_config") - def test_get_conversational_variable_with_name_filter_postgresql( - self, mock_config, mock_get_conversation, mock_session_factory - ): - """ - Test variable filtering by name for PostgreSQL databases. - - Should apply JSON extraction filter for variable names. 
- """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - mock_config.DB_TYPE = "postgresql" - - # Mock session - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - mock_session.scalars.return_value.all.return_value = [] - - # Act - ConversationService.get_conversational_variable( - app_model=app_model, - conversation_id="conv-123", - user=user, - limit=10, - last_id=None, - variable_name="test_var", - ) - - # Assert - JSON filter should be applied - assert mock_session.scalars.called - - -class TestConversationServiceUpdateVariable: - """Test conversation variable update operations.""" - - @patch("services.conversation_service.variable_factory") - @patch("services.conversation_service.ConversationVariableUpdater") - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_update_conversation_variable_success( - self, mock_get_conversation, mock_session_factory, mock_updater_class, mock_variable_factory - ): - """ - Test successful update of conversation variable. - - Should update variable value and return updated data. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session and existing variable - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - existing_variable = ConversationServiceTestDataFactory.create_conversation_variable_mock(value_type="string") - mock_session.scalar.return_value = existing_variable - - # Mock variable factory and updater - updated_variable = Mock() - updated_variable.model_dump.return_value = {"id": "var-123", "name": "test_var", "value": "new_value"} - mock_variable_factory.build_conversation_variable_from_mapping.return_value = updated_variable - - mock_updater = MagicMock() - mock_updater_class.return_value = mock_updater - - # Act - result = ConversationService.update_conversation_variable( - app_model=app_model, - conversation_id="conv-123", - variable_id="var-123", - user=user, - new_value="new_value", - ) - - # Assert - assert result["id"] == "var-123" - assert result["value"] == "new_value" - mock_updater.update.assert_called_once_with("conv-123", updated_variable) - mock_updater.flush.assert_called_once() - - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_update_conversation_variable_not_found_raises_error(self, mock_get_conversation, mock_session_factory): - """ - Test update fails when variable doesn't exist. - - Should raise ConversationVariableNotExistsError. 
- """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - mock_session.scalar.return_value = None - - # Act & Assert - with pytest.raises(ConversationVariableNotExistsError): - ConversationService.update_conversation_variable( - app_model=app_model, - conversation_id="conv-123", - variable_id="invalid-id", - user=user, - new_value="new_value", - ) - - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_update_conversation_variable_type_mismatch_raises_error(self, mock_get_conversation, mock_session_factory): - """ - Test update fails when value type doesn't match expected type. - - Should raise ConversationVariableTypeMismatchError. - """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session and existing variable - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - existing_variable = ConversationServiceTestDataFactory.create_conversation_variable_mock(value_type="number") - mock_session.scalar.return_value = existing_variable - - # Act & Assert - Try to set string value for number variable - with pytest.raises(ConversationVariableTypeMismatchError): - ConversationService.update_conversation_variable( - app_model=app_model, - conversation_id="conv-123", - variable_id="var-123", - user=user, - new_value="string_value", # Wrong type - ) - - @patch("services.conversation_service.session_factory") - @patch("services.conversation_service.ConversationService.get_conversation") - def test_update_conversation_variable_integer_number_compatibility( - self, mock_get_conversation, mock_session_factory - ): - """ - Test that integer type accepts number values. - - Should allow number values for integer type variables. 
- """ - # Arrange - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - - mock_get_conversation.return_value = conversation - - # Mock session and existing variable - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - existing_variable = ConversationServiceTestDataFactory.create_conversation_variable_mock(value_type="integer") - mock_session.scalar.return_value = existing_variable - - # Mock variable factory and updater - updated_variable = Mock() - updated_variable.model_dump.return_value = {"id": "var-123", "name": "test_var", "value": 42} - - with ( - patch("services.conversation_service.variable_factory") as mock_variable_factory, - patch("services.conversation_service.ConversationVariableUpdater") as mock_updater_class, - ): - mock_variable_factory.build_conversation_variable_from_mapping.return_value = updated_variable - mock_updater = MagicMock() - mock_updater_class.return_value = mock_updater - - # Act - result = ConversationService.update_conversation_variable( - app_model=app_model, - conversation_id="conv-123", - variable_id="var-123", - user=user, - new_value=42, # Number value for integer type - ) - - # Assert - assert result["value"] == 42 - mock_updater.update.assert_called_once() - - -class TestConversationServicePaginationAdvanced: - """Advanced pagination tests for ConversationService.""" - - @patch("services.conversation_service.session_factory") - def test_pagination_by_last_id_with_last_id_not_found(self, mock_session_factory): - """ - Test pagination with invalid last_id raises error. - - Should raise LastConversationNotExistsError when last_id doesn't exist. - """ - # Arrange - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - mock_session.scalar.return_value = None - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Act & Assert - with pytest.raises(LastConversationNotExistsError): - ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id="invalid-id", - limit=20, - invoke_from=InvokeFrom.WEB_APP, - ) - - @patch("services.conversation_service.session_factory") - def test_pagination_by_last_id_with_exclude_ids(self, mock_session_factory): - """ - Test pagination with exclude_ids filter. - - Should exclude specified conversation IDs from results. 
- """ - # Arrange - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - mock_session.scalars.return_value.all.return_value = [conversation] - mock_session.scalar.return_value = conversation - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - exclude_ids=["excluded-123"], - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - assert len(result.data) == 1 - - @patch("services.conversation_service.session_factory") - def test_pagination_by_last_id_has_more_detection(self, mock_session_factory): - """ - Test pagination has_more detection logic. - - Should set has_more=True when there are more results beyond limit. - """ - # Arrange - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - # Return exactly limit items to trigger has_more check - conversations = [ - ConversationServiceTestDataFactory.create_conversation_mock(conversation_id=f"conv-{i}") for i in range(20) - ] - mock_session.scalars.return_value.all.return_value = conversations - mock_session.scalar.return_value = conversations[-1] - - # Mock count query to return > 0 - mock_session.scalar.return_value = 5 # Additional items exist - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - assert result.has_more is True - - @patch("services.conversation_service.session_factory") - def test_pagination_by_last_id_with_different_sort_by(self, mock_session_factory): - """ - Test pagination with different sort fields. - - Should handle various sort_by parameters correctly. - """ - # Arrange - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - conversation = ConversationServiceTestDataFactory.create_conversation_mock() - mock_session.scalars.return_value.all.return_value = [conversation] - mock_session.scalar.return_value = conversation - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Test different sort fields - sort_fields = ["created_at", "-updated_at", "name", "-status"] - - for sort_by in sort_fields: - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - sort_by=sort_by, - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - - -class TestConversationServiceEdgeCases: - """Test edge cases and error scenarios.""" - - @patch("services.conversation_service.session_factory") - def test_pagination_with_end_user_api_source(self, mock_session_factory): - """ - Test pagination correctly handles EndUser with API source. - - Should use 'api' as from_source for EndUser instances. 
- """ - # Arrange - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - conversation = ConversationServiceTestDataFactory.create_conversation_mock( - from_source=ConversationFromSource.API, from_end_user_id="user-123" - ) - mock_session.scalars.return_value.all.return_value = [conversation] - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_end_user_mock() - - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - - @patch("services.conversation_service.session_factory") - def test_pagination_with_account_console_source(self, mock_session_factory): - """ - Test pagination correctly handles Account with console source. - - Should use 'console' as from_source for Account instances. - """ - # Arrange - mock_session = MagicMock() - mock_session_factory.create_session.return_value.__enter__.return_value = mock_session - - conversation = ConversationServiceTestDataFactory.create_conversation_mock( - from_source=ConversationFromSource.CONSOLE, from_account_id="account-123" - ) - mock_session.scalars.return_value.all.return_value = [conversation] - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - - def test_pagination_with_include_ids_filter(self): - """ - Test pagination with include_ids filter. - - Should only return conversations with IDs in include_ids list. - """ - # Arrange - mock_session = MagicMock() - mock_session.scalars.return_value.all.return_value = [] - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - include_ids=["conv-123", "conv-456"], - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - # Verify that include_ids filter was applied - assert mock_session.scalars.called - - def test_pagination_with_empty_exclude_ids(self): - """ - Test pagination with empty exclude_ids list. - - Should handle empty exclude_ids gracefully. 
- """ - # Arrange - mock_session = MagicMock() - mock_session.scalars.return_value.all.return_value = [] - - app_model = ConversationServiceTestDataFactory.create_app_mock() - user = ConversationServiceTestDataFactory.create_account_mock() - - # Act - result = ConversationService.pagination_by_last_id( - session=mock_session, - app_model=app_model, - user=user, - last_id=None, - limit=20, - invoke_from=InvokeFrom.WEB_APP, - exclude_ids=[], - ) - - # Assert - assert isinstance(result, InfiniteScrollPagination) - assert result.has_more is False diff --git a/api/tests/unit_tests/services/test_dataset_service_dataset.py b/api/tests/unit_tests/services/test_dataset_service_dataset.py index 2913ae20fe..3d08b6fd09 100644 --- a/api/tests/unit_tests/services/test_dataset_service_dataset.py +++ b/api/tests/unit_tests/services/test_dataset_service_dataset.py @@ -1,29 +1,20 @@ """Unit tests for DatasetService and dataset-related collaborators.""" from .dataset_service_test_helpers import ( - CloudPlan, - Dataset, - DatasetCollectionBindingService, DatasetNameDuplicateError, DatasetPermissionEnum, DatasetPermissionService, - DatasetProcessRule, DatasetService, DatasetServiceUnitDataFactory, - DocumentIndexingError, - DocumentService, LLMBadRequestError, MagicMock, - Mock, ModelFeature, ModelType, NoPermissionError, - NotFound, PipelineIconInfo, ProviderTokenNotInitError, RagPipelineDatasetCreateEntity, SimpleNamespace, - TenantAccountRole, _make_knowledge_configuration, _make_retrieval_model, _make_session_context, @@ -33,127 +24,6 @@ from .dataset_service_test_helpers import ( ) -class TestDatasetServiceQueries: - """Unit tests for DatasetService query composition and fallback branches.""" - - @pytest.fixture - def mock_dataset_query_dependencies(self): - with ( - patch("services.dataset_service.db") as mock_db, - patch("services.dataset_service.helper.escape_like_pattern", return_value="escaped-search") as escape_like, - patch("services.dataset_service.TagService.get_target_ids_by_tag_ids") as get_target_ids, - ): - mock_db.paginate.return_value = SimpleNamespace(items=["dataset"], total=1) - yield { - "db": mock_db, - "escape_like_pattern": escape_like, - "get_target_ids": get_target_ids, - } - - def test_get_datasets_returns_paginated_results_for_public_view(self, mock_dataset_query_dependencies): - items, total = DatasetService.get_datasets(page=1, per_page=20, tenant_id="tenant-1") - - assert items == ["dataset"] - assert total == 1 - mock_dataset_query_dependencies["db"].paginate.assert_called_once() - mock_dataset_query_dependencies["escape_like_pattern"].assert_not_called() - - def test_get_datasets_short_circuits_for_dataset_operator_without_permissions( - self, mock_dataset_query_dependencies - ): - user = DatasetServiceUnitDataFactory.create_user_mock(role=TenantAccountRole.DATASET_OPERATOR) - mock_dataset_query_dependencies["db"].session.scalars.return_value.all.return_value = [] - - items, total = DatasetService.get_datasets(page=1, per_page=20, tenant_id="tenant-1", user=user) - - assert items == [] - assert total == 0 - mock_dataset_query_dependencies["db"].paginate.assert_not_called() - - def test_get_datasets_short_circuits_when_tag_lookup_returns_no_target_ids(self, mock_dataset_query_dependencies): - mock_dataset_query_dependencies["get_target_ids"].return_value = [] - - items, total = DatasetService.get_datasets( - page=1, - per_page=20, - tenant_id="tenant-1", - tag_ids=["tag-1"], - ) - - assert items == [] - assert total == 0 - 
mock_dataset_query_dependencies["get_target_ids"].assert_called_once_with("knowledge", "tenant-1", ["tag-1"]) - mock_dataset_query_dependencies["db"].paginate.assert_not_called() - - def test_get_datasets_search_and_tag_filters_call_collaborators(self, mock_dataset_query_dependencies): - mock_dataset_query_dependencies["get_target_ids"].return_value = ["dataset-1"] - - items, total = DatasetService.get_datasets( - page=2, - per_page=10, - tenant_id="tenant-1", - search="report", - tag_ids=["tag-1"], - ) - - assert items == ["dataset"] - assert total == 1 - mock_dataset_query_dependencies["escape_like_pattern"].assert_called_once_with("report") - mock_dataset_query_dependencies["get_target_ids"].assert_called_once_with("knowledge", "tenant-1", ["tag-1"]) - mock_dataset_query_dependencies["db"].paginate.assert_called_once() - - def test_get_process_rules_returns_latest_rule_when_present(self): - dataset_process_rule = Mock(spec=DatasetProcessRule) - dataset_process_rule.mode = "automatic" - dataset_process_rule.rules_dict = {"delimiter": "\n"} - - with patch("services.dataset_service.db") as mock_db: - (mock_db.session.execute.return_value.scalar_one_or_none.return_value) = dataset_process_rule - - result = DatasetService.get_process_rules("dataset-1") - - assert result == {"mode": "automatic", "rules": {"delimiter": "\n"}} - - def test_get_process_rules_falls_back_to_default_rules_when_missing(self): - with patch("services.dataset_service.db") as mock_db: - (mock_db.session.execute.return_value.scalar_one_or_none.return_value) = None - - result = DatasetService.get_process_rules("dataset-1") - - assert result == { - "mode": DocumentService.DEFAULT_RULES["mode"], - "rules": DocumentService.DEFAULT_RULES["rules"], - } - - def test_get_datasets_by_ids_returns_empty_for_missing_ids(self): - with patch("services.dataset_service.db") as mock_db: - items, total = DatasetService.get_datasets_by_ids([], "tenant-1") - - assert items == [] - assert total == 0 - mock_db.paginate.assert_not_called() - - def test_get_datasets_by_ids_uses_paginate_for_non_empty_input(self): - with patch("services.dataset_service.db") as mock_db: - mock_db.paginate.return_value = SimpleNamespace(items=["dataset-1"], total=1) - - items, total = DatasetService.get_datasets_by_ids(["dataset-1"], "tenant-1") - - assert items == ["dataset-1"] - assert total == 1 - mock_db.paginate.assert_called_once() - - def test_get_dataset_returns_first_match(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock() - - with patch("services.dataset_service.db") as mock_db: - mock_db.session.get.return_value = dataset - - result = DatasetService.get_dataset(dataset.id) - - assert result is dataset - - class TestDatasetServiceValidation: """Unit tests for DatasetService validation helpers.""" @@ -1337,103 +1207,6 @@ class TestDatasetServiceRagPipelineSettings: class TestDatasetServicePermissionsAndLifecycle: """Unit tests for dataset permissions, deletion, and metadata helpers.""" - def test_delete_dataset_returns_false_when_dataset_is_missing(self): - with patch.object(DatasetService, "get_dataset", return_value=None): - result = DatasetService.delete_dataset("dataset-1", user=SimpleNamespace(id="user-1")) - - assert result is False - - def test_delete_dataset_checks_permission_and_deletes_dataset(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock() - - with ( - patch.object(DatasetService, "get_dataset", return_value=dataset), - patch.object(DatasetService, "check_dataset_permission") as check_permission, - 
patch("services.dataset_service.dataset_was_deleted.send") as send_deleted_signal, - patch("services.dataset_service.db") as mock_db, - ): - result = DatasetService.delete_dataset(dataset.id, user=SimpleNamespace(id="user-1")) - - assert result is True - check_permission.assert_called_once_with(dataset, SimpleNamespace(id="user-1")) - send_deleted_signal.assert_called_once_with(dataset) - mock_db.session.delete.assert_called_once_with(dataset) - mock_db.session.commit.assert_called_once() - - def test_dataset_use_check_returns_scalar_result(self): - with patch("services.dataset_service.db") as mock_db: - mock_db.session.execute.return_value.scalar_one.return_value = True - - result = DatasetService.dataset_use_check("dataset-1") - - assert result is True - - def test_check_dataset_permission_rejects_cross_tenant_access(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock(tenant_id="tenant-a") - user = DatasetServiceUnitDataFactory.create_user_mock(tenant_id="tenant-b") - - with pytest.raises(NoPermissionError, match="do not have permission"): - DatasetService.check_dataset_permission(dataset, user) - - def test_check_dataset_permission_rejects_only_me_dataset_for_non_creator(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock( - permission=DatasetPermissionEnum.ONLY_ME, - created_by="owner-1", - ) - user = DatasetServiceUnitDataFactory.create_user_mock( - user_id="member-1", - role=TenantAccountRole.EDITOR, - ) - - with pytest.raises(NoPermissionError, match="do not have permission"): - DatasetService.check_dataset_permission(dataset, user) - - def test_check_dataset_permission_rejects_partial_team_user_without_binding(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock( - permission=DatasetPermissionEnum.PARTIAL_TEAM, - created_by="owner-1", - ) - user = DatasetServiceUnitDataFactory.create_user_mock( - user_id="member-1", - role=TenantAccountRole.EDITOR, - ) - - with patch("services.dataset_service.db") as mock_db: - mock_db.session.scalar.return_value = None - - with pytest.raises(NoPermissionError, match="do not have permission"): - DatasetService.check_dataset_permission(dataset, user) - - def test_check_dataset_permission_allows_partial_team_creator_without_lookup(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock( - permission=DatasetPermissionEnum.PARTIAL_TEAM, - created_by="creator-1", - ) - user = DatasetServiceUnitDataFactory.create_user_mock( - user_id="creator-1", - role=TenantAccountRole.EDITOR, - ) - - with patch("services.dataset_service.db") as mock_db: - DatasetService.check_dataset_permission(dataset, user) - - mock_db.session.scalar.assert_not_called() - - def test_check_dataset_permission_allows_partial_team_member_with_binding(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock( - permission=DatasetPermissionEnum.PARTIAL_TEAM, - created_by="owner-1", - ) - user = DatasetServiceUnitDataFactory.create_user_mock( - user_id="member-1", - role=TenantAccountRole.EDITOR, - ) - - with patch("services.dataset_service.db") as mock_db: - mock_db.session.scalar.return_value = object() - - DatasetService.check_dataset_permission(dataset, user) - def test_check_dataset_operator_permission_validates_required_arguments(self): with pytest.raises(ValueError, match="Dataset not found"): DatasetService.check_dataset_operator_permission(user=SimpleNamespace(id="user-1"), dataset=None) @@ -1441,279 +1214,14 @@ class TestDatasetServicePermissionsAndLifecycle: with pytest.raises(ValueError, match="User not 
found"): DatasetService.check_dataset_operator_permission(user=None, dataset=SimpleNamespace(id="dataset-1")) - def test_check_dataset_operator_permission_rejects_only_me_for_non_creator(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock( - permission=DatasetPermissionEnum.ONLY_ME, - created_by="owner-1", - ) - user = DatasetServiceUnitDataFactory.create_user_mock( - user_id="member-1", - role=TenantAccountRole.EDITOR, - ) - - with pytest.raises(NoPermissionError, match="do not have permission"): - DatasetService.check_dataset_operator_permission(user=user, dataset=dataset) - - def test_check_dataset_operator_permission_rejects_partial_team_without_binding(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock(permission=DatasetPermissionEnum.PARTIAL_TEAM) - user = DatasetServiceUnitDataFactory.create_user_mock( - user_id="member-1", - role=TenantAccountRole.EDITOR, - ) - - with patch("services.dataset_service.db") as mock_db: - mock_db.session.scalars.return_value.all.return_value = [] - - with pytest.raises(NoPermissionError, match="do not have permission"): - DatasetService.check_dataset_operator_permission(user=user, dataset=dataset) - - def test_get_dataset_queries_delegates_to_paginate(self): - with patch("services.dataset_service.db") as mock_db: - mock_db.desc.side_effect = lambda column: column - mock_db.paginate.return_value = SimpleNamespace(items=["query"], total=1) - - items, total = DatasetService.get_dataset_queries("dataset-1", page=1, per_page=20) - - assert items == ["query"] - assert total == 1 - mock_db.paginate.assert_called_once() - - def test_get_related_apps_returns_ordered_query_results(self): - with patch("services.dataset_service.db") as mock_db: - mock_db.desc.side_effect = lambda column: column - mock_db.session.scalars.return_value.all.return_value = ["relation-1"] - - result = DatasetService.get_related_apps("dataset-1") - - assert result == ["relation-1"] - - def test_update_dataset_api_status_raises_not_found_for_missing_dataset(self): - with patch.object(DatasetService, "get_dataset", return_value=None): - with pytest.raises(NotFound, match="Dataset not found"): - DatasetService.update_dataset_api_status("dataset-1", True) - - def test_update_dataset_api_status_requires_current_user_id(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock(enable_api=False) - - with ( - patch.object(DatasetService, "get_dataset", return_value=dataset), - patch("services.dataset_service.current_user", SimpleNamespace(id=None)), - ): - with pytest.raises(ValueError, match="Current user or current user id not found"): - DatasetService.update_dataset_api_status(dataset.id, True) - - def test_update_dataset_api_status_updates_fields_and_commits(self): - dataset = DatasetServiceUnitDataFactory.create_dataset_mock(enable_api=False) - now = object() - - with ( - patch.object(DatasetService, "get_dataset", return_value=dataset), - patch("services.dataset_service.current_user", SimpleNamespace(id="user-1")), - patch("services.dataset_service.naive_utc_now", return_value=now), - patch("services.dataset_service.db") as mock_db, - ): - DatasetService.update_dataset_api_status(dataset.id, True) - - assert dataset.enable_api is True - assert dataset.updated_by == "user-1" - assert dataset.updated_at is now - mock_db.session.commit.assert_called_once() - - def test_get_dataset_auto_disable_logs_returns_empty_when_billing_is_disabled(self): - class FakeAccount: - pass - - current_user = FakeAccount() - current_user.current_tenant_id = 
"tenant-1" - - features = SimpleNamespace( - billing=SimpleNamespace(enabled=False, subscription=SimpleNamespace(plan=CloudPlan.PROFESSIONAL)) - ) - - with ( - patch("services.dataset_service.Account", FakeAccount), - patch("services.dataset_service.current_user", current_user), - patch("services.dataset_service.FeatureService.get_features", return_value=features), - patch("services.dataset_service.db") as mock_db, - ): - result = DatasetService.get_dataset_auto_disable_logs("dataset-1") - - assert result == {"document_ids": [], "count": 0} - mock_db.session.scalars.assert_not_called() - - def test_get_dataset_auto_disable_logs_returns_recent_document_ids(self): - class FakeAccount: - pass - - current_user = FakeAccount() - current_user.current_tenant_id = "tenant-1" - logs = [SimpleNamespace(document_id="doc-1"), SimpleNamespace(document_id="doc-2")] - features = SimpleNamespace( - billing=SimpleNamespace(enabled=True, subscription=SimpleNamespace(plan=CloudPlan.PROFESSIONAL)) - ) - - with ( - patch("services.dataset_service.Account", FakeAccount), - patch("services.dataset_service.current_user", current_user), - patch("services.dataset_service.FeatureService.get_features", return_value=features), - patch("services.dataset_service.db") as mock_db, - ): - mock_db.session.scalars.return_value.all.return_value = logs - - result = DatasetService.get_dataset_auto_disable_logs("dataset-1") - - assert result == {"document_ids": ["doc-1", "doc-2"], "count": 2} - - -class TestDatasetServiceDocumentIndexing: - """Unit tests for pause/recover/retry orchestration without SQL assertions.""" - - @pytest.fixture - def mock_document_service_dependencies(self): - with ( - patch("services.dataset_service.redis_client") as mock_redis, - patch("services.dataset_service.db.session") as mock_db_session, - patch("services.dataset_service.current_user") as mock_current_user, - ): - mock_current_user.id = "user-123" - yield { - "redis_client": mock_redis, - "db_session": mock_db_session, - "current_user": mock_current_user, - } - - def test_pause_document_success(self, mock_document_service_dependencies): - document = DatasetServiceUnitDataFactory.create_document_mock(indexing_status="indexing") - - DocumentService.pause_document(document) - - assert document.is_paused is True - assert document.paused_by == "user-123" - mock_document_service_dependencies["db_session"].add.assert_called_once_with(document) - mock_document_service_dependencies["db_session"].commit.assert_called_once() - mock_document_service_dependencies["redis_client"].setnx.assert_called_once_with( - f"document_{document.id}_is_paused", - "True", - ) - - def test_pause_document_invalid_status_error(self, mock_document_service_dependencies): - document = DatasetServiceUnitDataFactory.create_document_mock(indexing_status="completed") - - with pytest.raises(DocumentIndexingError): - DocumentService.pause_document(document) - - def test_recover_document_success(self, mock_document_service_dependencies): - document = DatasetServiceUnitDataFactory.create_document_mock(indexing_status="indexing", is_paused=True) - - with patch("services.dataset_service.recover_document_indexing_task") as recover_task: - DocumentService.recover_document(document) - - assert document.is_paused is False - assert document.paused_by is None - assert document.paused_at is None - mock_document_service_dependencies["db_session"].add.assert_called_once_with(document) - mock_document_service_dependencies["db_session"].commit.assert_called_once() - 
mock_document_service_dependencies["redis_client"].delete.assert_called_once_with( - f"document_{document.id}_is_paused" - ) - recover_task.delay.assert_called_once_with(document.dataset_id, document.id) - - def test_retry_document_indexing_success(self, mock_document_service_dependencies): - dataset_id = "dataset-123" - documents = [ - DatasetServiceUnitDataFactory.create_document_mock(document_id="doc-1", indexing_status="error"), - DatasetServiceUnitDataFactory.create_document_mock(document_id="doc-2", indexing_status="error"), - ] - mock_document_service_dependencies["redis_client"].get.return_value = None - - with patch("services.dataset_service.retry_document_indexing_task") as retry_task: - DocumentService.retry_document(dataset_id, documents) - - assert all(document.indexing_status == "waiting" for document in documents) - assert mock_document_service_dependencies["db_session"].add.call_count == 2 - assert mock_document_service_dependencies["db_session"].commit.call_count == 2 - assert mock_document_service_dependencies["redis_client"].setex.call_count == 2 - retry_task.delay.assert_called_once_with(dataset_id, ["doc-1", "doc-2"], "user-123") - class TestDatasetCollectionBindingService: """Unit tests for dataset collection binding lookups and creation.""" - def test_get_dataset_collection_binding_returns_existing_binding(self): - binding = SimpleNamespace(id="binding-1") - - with patch("services.dataset_service.db") as mock_db: - mock_db.session.scalar.return_value = binding - - result = DatasetCollectionBindingService.get_dataset_collection_binding("provider", "model") - - assert result is binding - mock_db.session.add.assert_not_called() - - def test_get_dataset_collection_binding_creates_binding_when_missing(self): - created_binding = SimpleNamespace(id="binding-2") - - with ( - patch("services.dataset_service.db") as mock_db, - patch("services.dataset_service.select"), - patch("services.dataset_service.DatasetCollectionBinding", return_value=created_binding) as binding_cls, - patch.object(Dataset, "gen_collection_name_by_id", return_value="generated-collection"), - ): - mock_db.session.scalar.return_value = None - - result = DatasetCollectionBindingService.get_dataset_collection_binding("provider", "model", "dataset") - - assert result is created_binding - binding_cls.assert_called_once_with( - provider_name="provider", - model_name="model", - collection_name="generated-collection", - type="dataset", - ) - mock_db.session.add.assert_called_once_with(created_binding) - mock_db.session.commit.assert_called_once() - - def test_get_dataset_collection_binding_by_id_and_type_raises_when_missing(self): - with patch("services.dataset_service.db") as mock_db: - mock_db.session.scalar.return_value = None - - with pytest.raises(ValueError, match="Dataset collection binding not found"): - DatasetCollectionBindingService.get_dataset_collection_binding_by_id_and_type("binding-1") - - def test_get_dataset_collection_binding_by_id_and_type_returns_binding(self): - binding = SimpleNamespace(id="binding-1") - - with patch("services.dataset_service.db") as mock_db: - mock_db.session.scalar.return_value = binding - - result = DatasetCollectionBindingService.get_dataset_collection_binding_by_id_and_type("binding-1") - - assert result is binding - class TestDatasetPermissionService: """Unit tests for dataset partial-member management helpers.""" - def test_get_dataset_partial_member_list_returns_scalar_results(self): - with patch("services.dataset_service.db") as mock_db: - 
mock_db.session.scalars.return_value.all.return_value = ["user-1", "user-2"] - - result = DatasetPermissionService.get_dataset_partial_member_list("dataset-1") - - assert result == ["user-1", "user-2"] - - def test_update_partial_member_list_replaces_permissions_and_commits(self): - with patch("services.dataset_service.db") as mock_db: - DatasetPermissionService.update_partial_member_list( - "tenant-1", - "dataset-1", - [{"user_id": "user-1"}, {"user_id": "user-2"}], - ) - - mock_db.session.execute.assert_called() - mock_db.session.add_all.assert_called_once() - mock_db.session.commit.assert_called_once() - def test_update_partial_member_list_rolls_back_on_exception(self): with patch("services.dataset_service.db") as mock_db: mock_db.session.add_all.side_effect = RuntimeError("boom") @@ -1777,13 +1285,6 @@ class TestDatasetPermissionService: [{"user_id": "user-1"}], ) - def test_clear_partial_member_list_deletes_permissions_and_commits(self): - with patch("services.dataset_service.db") as mock_db: - DatasetPermissionService.clear_partial_member_list("dataset-1") - - mock_db.session.execute.assert_called() - mock_db.session.commit.assert_called_once() - def test_clear_partial_member_list_rolls_back_on_exception(self): with patch("services.dataset_service.db") as mock_db: mock_db.session.execute.side_effect = RuntimeError("boom") diff --git a/api/tests/unit_tests/services/test_datasource_provider_service.py b/api/tests/unit_tests/services/test_datasource_provider_service.py index c00a4938bb..d304e0ec44 100644 --- a/api/tests/unit_tests/services/test_datasource_provider_service.py +++ b/api/tests/unit_tests/services/test_datasource_provider_service.py @@ -2,10 +2,10 @@ from unittest.mock import MagicMock, patch import httpx import pytest -from graphon.model_runtime.entities.provider_entities import FormType from sqlalchemy.orm import Session from core.plugin.entities.plugin_daemon import CredentialType +from graphon.model_runtime.entities.provider_entities import FormType from models.account import Account from models.model import EndUser from models.oauth import DatasourceProvider diff --git a/api/tests/unit_tests/services/test_external_dataset_service.py b/api/tests/unit_tests/services/test_external_dataset_service.py index 9c1a92b4d9..fdea0ba869 100644 --- a/api/tests/unit_tests/services/test_external_dataset_service.py +++ b/api/tests/unit_tests/services/test_external_dataset_service.py @@ -8,6 +8,7 @@ Target: 1500+ lines of comprehensive test coverage. 
 import json
 import re
 from datetime import datetime
+from typing import Any
 from unittest.mock import MagicMock, Mock, patch

 import pytest
@@ -31,7 +32,7 @@ class ExternalDatasetServiceTestDataFactory:
         api_id: str = "api-123",
         tenant_id: str = "tenant-123",
         name: str = "Test API",
-        settings: dict | None = None,
+        settings: dict[str, Any] | None = None,
         **kwargs,
     ) -> Mock:
         """Create a mock ExternalKnowledgeApis object."""
@@ -120,8 +121,8 @@ class ExternalDatasetServiceTestDataFactory:
     def create_api_setting_mock(
         url: str = "https://api.example.com/retrieval",
         request_method: str = "post",
-        headers: dict | None = None,
-        params: dict | None = None,
+        headers: dict[str, Any] | None = None,
+        params: dict[str, Any] | None = None,
     ) -> ExternalKnowledgeApiSetting:
         """Create an ExternalKnowledgeApiSetting object."""
         if headers is None:
diff --git a/api/tests/unit_tests/services/test_human_input_service.py b/api/tests/unit_tests/services/test_human_input_service.py
index 9be475d043..55af564821 100644
--- a/api/tests/unit_tests/services/test_human_input_service.py
+++ b/api/tests/unit_tests/services/test_human_input_service.py
@@ -3,18 +3,18 @@ from datetime import datetime, timedelta
 from unittest.mock import MagicMock

 import pytest
-from graphon.nodes.human_input.entities import (
-    FormDefinition,
-    FormInput,
-    UserAction,
-)
-from graphon.nodes.human_input.enums import FormInputType, HumanInputFormKind, HumanInputFormStatus

 import services.human_input_service as human_input_service_module
 from core.repositories.human_input_repository import (
     HumanInputFormRecord,
     HumanInputFormSubmissionRepository,
 )
+from graphon.nodes.human_input.entities import (
+    FormDefinition,
+    FormInput,
+    UserAction,
+)
+from graphon.nodes.human_input.enums import FormInputType, HumanInputFormKind, HumanInputFormStatus
 from libs.datetime_utils import naive_utc_now
 from models.human_input import RecipientType
 from services.human_input_service import (
diff --git a/api/tests/unit_tests/services/test_messages_clean_service.py b/api/tests/unit_tests/services/test_messages_clean_service.py
index f3efc4463e..5fcad615c8 100644
--- a/api/tests/unit_tests/services/test_messages_clean_service.py
+++ b/api/tests/unit_tests/services/test_messages_clean_service.py
@@ -1,4 +1,5 @@
 import datetime
+from typing import Any
 from unittest.mock import MagicMock, patch

 import pytest
@@ -18,7 +19,7 @@ def make_simple_message(msg_id: str, app_id: str) -> SimpleMessage:
     return SimpleMessage(id=msg_id, app_id=app_id, created_at=datetime.datetime(2024, 1, 1))


-def make_plan_provider(tenant_plans: dict) -> MagicMock:
+def make_plan_provider(tenant_plans: dict[str, Any]) -> MagicMock:
     """Helper to create a mock plan_provider that returns the given tenant_plans."""
     provider = MagicMock()
     provider.return_value = tenant_plans
diff --git a/api/tests/unit_tests/services/test_model_load_balancing_service.py b/api/tests/unit_tests/services/test_model_load_balancing_service.py
index bea288fb9b..3119af40a2 100644
--- a/api/tests/unit_tests/services/test_model_load_balancing_service.py
+++ b/api/tests/unit_tests/services/test_model_load_balancing_service.py
@@ -6,6 +6,9 @@ from typing import Any, cast
 from unittest.mock import MagicMock

 import pytest
+from pytest_mock import MockerFixture
+
+from constants import HIDDEN_VALUE
 from graphon.model_runtime.entities.common_entities import I18nObject
 from graphon.model_runtime.entities.model_entities import ModelType
 from graphon.model_runtime.entities.provider_entities import (
@@ -15,9 +18,6 @@ from
graphon.model_runtime.entities.provider_entities import ( ModelCredentialSchema, ProviderCredentialSchema, ) -from pytest_mock import MockerFixture - -from constants import HIDDEN_VALUE from models.provider import LoadBalancingModelConfig from services.model_load_balancing_service import ModelLoadBalancingService diff --git a/api/tests/unit_tests/services/test_model_provider_service.py b/api/tests/unit_tests/services/test_model_provider_service.py new file mode 100644 index 0000000000..28d459eac9 --- /dev/null +++ b/api/tests/unit_tests/services/test_model_provider_service.py @@ -0,0 +1,602 @@ +from types import SimpleNamespace +from typing import Any +from unittest.mock import MagicMock + +import pytest + +from core.entities.model_entities import ModelStatus +from graphon.model_runtime.entities.common_entities import I18nObject +from graphon.model_runtime.entities.model_entities import FetchFrom, ModelType, ParameterRule, ParameterType +from models.provider import ProviderType +from services import model_provider_service as service_module +from services.errors.app_model_config import ProviderNotFoundError +from services.model_provider_service import ModelProviderService + + +def _create_service_with_mocked_manager() -> tuple[ModelProviderService, MagicMock]: + manager = MagicMock() + service = ModelProviderService() + service._get_provider_manager = MagicMock(return_value=manager) + return service, manager + + +def _build_provider_configuration( + *, + provider_name: str = "openai", + supported_model_types: list[ModelType] | None = None, + custom_models: list[Any] | None = None, + custom_config_available: bool = True, +) -> SimpleNamespace: + if supported_model_types is None: + supported_model_types = [ModelType.LLM] + + return SimpleNamespace( + provider=SimpleNamespace( + provider=provider_name, + label=I18nObject(en_US=provider_name), + description=None, + icon_small=None, + icon_small_dark=None, + background=None, + help=None, + supported_model_types=supported_model_types, + configurate_methods=[], + provider_credential_schema=None, + model_credential_schema=None, + ), + preferred_provider_type=ProviderType.CUSTOM, + custom_configuration=SimpleNamespace( + provider=SimpleNamespace( + current_credential_id="cred-1", + current_credential_name="Credential 1", + available_credentials=[], + ), + models=custom_models, + can_added_models=[], + ), + system_configuration=SimpleNamespace(enabled=False, current_quota_type=None, quota_configurations=[]), + is_custom_configuration_available=lambda: custom_config_available, + ) + + +class TestModelProviderServiceConfiguration: + def test__get_provider_configuration_should_return_configuration_when_provider_exists(self) -> None: + service, manager = _create_service_with_mocked_manager() + provider_configuration = SimpleNamespace(name="provider-config") + manager.get_configurations.return_value = {"openai": provider_configuration} + + result = service._get_provider_configuration(tenant_id="tenant-1", provider="openai") + + assert result is provider_configuration + + def test__get_provider_configuration_should_raise_error_when_provider_is_missing(self) -> None: + service, manager = _create_service_with_mocked_manager() + manager.get_configurations.return_value = {} + + with pytest.raises(ProviderNotFoundError, match="does not exist"): + service._get_provider_configuration(tenant_id="tenant-1", provider="missing") + + def test_get_provider_list_should_filter_by_model_type_and_build_no_configure_status(self) -> None: + service, manager = 
_create_service_with_mocked_manager() + allowed = _build_provider_configuration( + provider_name="openai", + supported_model_types=[ModelType.LLM], + custom_config_available=False, + ) + filtered = _build_provider_configuration( + provider_name="embedding", + supported_model_types=[ModelType.TEXT_EMBEDDING], + custom_config_available=True, + ) + manager.get_configurations.return_value = {"openai": allowed, "embedding": filtered} + + result = service.get_provider_list(tenant_id="tenant-1", model_type=ModelType.LLM.value) + + assert len(result) == 1 + assert result[0].provider == "openai" + assert result[0].custom_configuration.status.value == "no-configure" + + def test_get_models_by_provider_should_wrap_model_entities_with_tenant_context(self) -> None: + service, manager = _create_service_with_mocked_manager() + + class _Model: + def __init__(self, model_name: str) -> None: + self.model_name = model_name + + def model_dump(self) -> dict[str, Any]: + return { + "model": self.model_name, + "label": {"en_US": self.model_name}, + "model_type": ModelType.LLM, + "features": [], + "fetch_from": FetchFrom.PREDEFINED_MODEL, + "model_properties": {}, + "deprecated": False, + "status": ModelStatus.ACTIVE, + "load_balancing_enabled": False, + "has_invalid_load_balancing_configs": False, + "provider": { + "provider": "openai", + "label": {"en_US": "OpenAI"}, + "icon_small": None, + "icon_small_dark": None, + "supported_model_types": [ModelType.LLM], + }, + } + + provider_configurations = SimpleNamespace( + get_models=MagicMock(return_value=[_Model("gpt-4o"), _Model("gpt-4o-mini")]) + ) + manager.get_configurations.return_value = provider_configurations + + result = service.get_models_by_provider(tenant_id="tenant-1", provider="openai") + + assert len(result) == 2 + assert result[0].model == "gpt-4o" + assert result[1].provider.provider == "openai" + provider_configurations.get_models.assert_called_once_with(provider="openai") + + +class TestModelProviderServiceDelegation: + @pytest.mark.parametrize( + ("method_name", "method_kwargs", "provider_method_name", "provider_call_kwargs", "provider_return"), + [ + ( + "get_provider_credential", + {"tenant_id": "tenant-1", "provider": "openai", "credential_id": "cred-1"}, + "get_provider_credential", + {"credential_id": "cred-1"}, + {"token": "abc"}, + ), + ( + "validate_provider_credentials", + {"tenant_id": "tenant-1", "provider": "openai", "credentials": {"token": "abc"}}, + "validate_provider_credentials", + ({"token": "abc"},), + None, + ), + ( + "create_provider_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "credentials": {"token": "abc"}, + "credential_name": "A", + }, + "create_provider_credential", + ({"token": "abc"}, "A"), + None, + ), + ( + "update_provider_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "credentials": {"token": "abc"}, + "credential_id": "cred-1", + "credential_name": "B", + }, + "update_provider_credential", + {"credential_id": "cred-1", "credentials": {"token": "abc"}, "credential_name": "B"}, + None, + ), + ( + "remove_provider_credential", + {"tenant_id": "tenant-1", "provider": "openai", "credential_id": "cred-1"}, + "delete_provider_credential", + {"credential_id": "cred-1"}, + None, + ), + ( + "switch_active_provider_credential", + {"tenant_id": "tenant-1", "provider": "openai", "credential_id": "cred-1"}, + "switch_active_provider_credential", + {"credential_id": "cred-1"}, + None, + ), + ], + ) + def test_provider_credential_methods_should_delegate_to_provider_configuration( + 
self, + method_name: str, + method_kwargs: dict[str, Any], + provider_method_name: str, + provider_call_kwargs: Any, + provider_return: Any, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = ModelProviderService() + provider_configuration = MagicMock() + getattr(provider_configuration, provider_method_name).return_value = provider_return + get_provider_config_mock = MagicMock(return_value=provider_configuration) + monkeypatch.setattr(service, "_get_provider_configuration", get_provider_config_mock) + + result = getattr(service, method_name)(**method_kwargs) + + get_provider_config_mock.assert_called_once_with("tenant-1", "openai") + provider_method = getattr(provider_configuration, provider_method_name) + if isinstance(provider_call_kwargs, tuple): + provider_method.assert_called_once_with(*provider_call_kwargs) + elif isinstance(provider_call_kwargs, dict): + provider_method.assert_called_once_with(**provider_call_kwargs) + else: + provider_method.assert_called_once_with(provider_call_kwargs) + if method_name == "get_provider_credential": + assert result == {"token": "abc"} + + @pytest.mark.parametrize( + ("method_name", "method_kwargs", "provider_method_name", "expected_kwargs", "provider_return"), + [ + ( + "get_model_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credential_id": "cred-1", + }, + "get_custom_model_credential", + {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, + {"api_key": "x"}, + ), + ( + "validate_model_credentials", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credentials": {"api_key": "x"}, + }, + "validate_custom_model_credentials", + {"model_type": ModelType.LLM, "model": "gpt-4o", "credentials": {"api_key": "x"}}, + None, + ), + ( + "create_model_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credentials": {"api_key": "x"}, + "credential_name": "cred-a", + }, + "create_custom_model_credential", + { + "model_type": ModelType.LLM, + "model": "gpt-4o", + "credentials": {"api_key": "x"}, + "credential_name": "cred-a", + }, + None, + ), + ( + "update_model_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credentials": {"api_key": "x"}, + "credential_id": "cred-1", + "credential_name": "cred-b", + }, + "update_custom_model_credential", + { + "model_type": ModelType.LLM, + "model": "gpt-4o", + "credentials": {"api_key": "x"}, + "credential_id": "cred-1", + "credential_name": "cred-b", + }, + None, + ), + ( + "remove_model_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credential_id": "cred-1", + }, + "delete_custom_model_credential", + {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, + None, + ), + ( + "switch_active_custom_model_credential", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credential_id": "cred-1", + }, + "switch_custom_model_credential", + {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, + None, + ), + ( + "add_model_credential_to_model_list", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + "credential_id": "cred-1", + }, + 
"add_model_credential_to_model", + {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, + None, + ), + ( + "remove_model", + { + "tenant_id": "tenant-1", + "provider": "openai", + "model_type": ModelType.LLM.value, + "model": "gpt-4o", + }, + "delete_custom_model", + {"model_type": ModelType.LLM, "model": "gpt-4o"}, + None, + ), + ], + ) + def test_custom_model_methods_should_convert_model_type_and_delegate( + self, + method_name: str, + method_kwargs: dict[str, Any], + provider_method_name: str, + expected_kwargs: dict[str, Any], + provider_return: Any, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = ModelProviderService() + provider_configuration = MagicMock() + getattr(provider_configuration, provider_method_name).return_value = provider_return + get_provider_config_mock = MagicMock(return_value=provider_configuration) + monkeypatch.setattr(service, "_get_provider_configuration", get_provider_config_mock) + + result = getattr(service, method_name)(**method_kwargs) + + get_provider_config_mock.assert_called_once_with("tenant-1", "openai") + getattr(provider_configuration, provider_method_name).assert_called_once_with(**expected_kwargs) + if method_name == "get_model_credential": + assert result == {"api_key": "x"} + + +class TestModelProviderServiceListingsAndDefaults: + def test_get_models_by_model_type_should_group_active_non_deprecated_models(self) -> None: + service, manager = _create_service_with_mocked_manager() + openai_provider = SimpleNamespace( + provider="openai", + label=I18nObject(en_US="OpenAI"), + icon_small=None, + icon_small_dark=None, + ) + anthropic_provider = SimpleNamespace( + provider="anthropic", + label=I18nObject(en_US="Anthropic"), + icon_small=None, + icon_small_dark=None, + ) + models = [ + SimpleNamespace( + provider=openai_provider, + model="gpt-4o", + label=I18nObject(en_US="GPT-4o"), + model_type=ModelType.LLM, + features=[], + fetch_from=FetchFrom.PREDEFINED_MODEL, + model_properties={}, + status=ModelStatus.ACTIVE, + load_balancing_enabled=False, + deprecated=False, + ), + SimpleNamespace( + provider=openai_provider, + model="old-openai", + label=I18nObject(en_US="Old OpenAI"), + model_type=ModelType.LLM, + features=[], + fetch_from=FetchFrom.PREDEFINED_MODEL, + model_properties={}, + status=ModelStatus.ACTIVE, + load_balancing_enabled=False, + deprecated=True, + ), + SimpleNamespace( + provider=anthropic_provider, + model="old-anthropic", + label=I18nObject(en_US="Old Anthropic"), + model_type=ModelType.LLM, + features=[], + fetch_from=FetchFrom.PREDEFINED_MODEL, + model_properties={}, + status=ModelStatus.ACTIVE, + load_balancing_enabled=False, + deprecated=True, + ), + ] + provider_configurations = SimpleNamespace(get_models=MagicMock(return_value=models)) + manager.get_configurations.return_value = provider_configurations + + result = service.get_models_by_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) + + provider_configurations.get_models.assert_called_once_with(model_type=ModelType.LLM, only_active=True) + assert len(result) == 1 + assert result[0].provider == "openai" + assert len(result[0].models) == 1 + assert result[0].models[0].model == "gpt-4o" + + @pytest.mark.parametrize( + ("credentials", "schema", "expected_count"), + [ + (None, None, 0), + ({"api_key": "x"}, None, 0), + ( + {"api_key": "x"}, + SimpleNamespace( + parameter_rules=[ + ParameterRule( + name="temperature", + label=I18nObject(en_US="Temperature"), + type=ParameterType.FLOAT, + ) + ] + ), + 1, + ), + ], + ) + def 
test_get_model_parameter_rules_should_handle_missing_credentials_and_schema( + self, + credentials: dict[str, Any] | None, + schema: Any, + expected_count: int, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = ModelProviderService() + provider_configuration = MagicMock() + provider_configuration.get_current_credentials.return_value = credentials + provider_configuration.get_model_schema.return_value = schema + monkeypatch.setattr(service, "_get_provider_configuration", MagicMock(return_value=provider_configuration)) + + result = service.get_model_parameter_rules(tenant_id="tenant-1", provider="openai", model="gpt-4o") + + assert len(result) == expected_count + provider_configuration.get_current_credentials.assert_called_once_with( + model_type=ModelType.LLM, + model="gpt-4o", + ) + if credentials: + provider_configuration.get_model_schema.assert_called_once_with( + model_type=ModelType.LLM, + model="gpt-4o", + credentials=credentials, + ) + else: + provider_configuration.get_model_schema.assert_not_called() + + def test_get_default_model_of_model_type_should_return_response_when_manager_returns_model(self) -> None: + service, manager = _create_service_with_mocked_manager() + manager.get_default_model.return_value = SimpleNamespace( + model="gpt-4o", + model_type=ModelType.LLM, + provider=SimpleNamespace( + provider="openai", + label=I18nObject(en_US="OpenAI"), + icon_small=None, + supported_model_types=[ModelType.LLM], + ), + ) + + result = service.get_default_model_of_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) + + assert result is not None + assert result.model == "gpt-4o" + assert result.provider.provider == "openai" + manager.get_default_model.assert_called_once_with(tenant_id="tenant-1", model_type=ModelType.LLM) + + def test_get_default_model_of_model_type_should_return_none_when_manager_returns_none(self) -> None: + service, manager = _create_service_with_mocked_manager() + manager.get_default_model.return_value = None + + result = service.get_default_model_of_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) + + assert result is None + + def test_get_default_model_of_model_type_should_return_none_when_manager_raises_exception(self) -> None: + service, manager = _create_service_with_mocked_manager() + manager.get_default_model.side_effect = RuntimeError("boom") + + result = service.get_default_model_of_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) + + assert result is None + + def test_update_default_model_of_model_type_should_delegate_to_provider_manager(self) -> None: + service, manager = _create_service_with_mocked_manager() + + service.update_default_model_of_model_type( + tenant_id="tenant-1", + model_type=ModelType.LLM.value, + provider="openai", + model="gpt-4o", + ) + + manager.update_default_model_record.assert_called_once_with( + tenant_id="tenant-1", + model_type=ModelType.LLM, + provider="openai", + model="gpt-4o", + ) + + def test_get_model_provider_icon_should_fetch_icon_bytes_from_factory( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = ModelProviderService() + factory_instance = MagicMock() + factory_instance.get_provider_icon.return_value = (b"icon-bytes", "image/png") + factory_constructor = MagicMock(return_value=factory_instance) + monkeypatch.setattr(service_module, "create_plugin_model_provider_factory", factory_constructor) + + result = service.get_model_provider_icon( + tenant_id="tenant-1", + provider="openai", + icon_type="icon_small", + lang="en_US", + ) + + 
factory_constructor.assert_called_once_with(tenant_id="tenant-1") + factory_instance.get_provider_icon.assert_called_once_with("openai", "icon_small", "en_US") + assert result == (b"icon-bytes", "image/png") + + def test_switch_preferred_provider_should_convert_enum_and_delegate( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = ModelProviderService() + provider_configuration = MagicMock() + monkeypatch.setattr(service, "_get_provider_configuration", MagicMock(return_value=provider_configuration)) + + service.switch_preferred_provider( + tenant_id="tenant-1", + provider="openai", + preferred_provider_type=ProviderType.SYSTEM.value, + ) + + provider_configuration.switch_preferred_provider_type.assert_called_once_with(ProviderType.SYSTEM) + + @pytest.mark.parametrize( + ("method_name", "provider_method_name"), + [ + ("enable_model", "enable_model"), + ("disable_model", "disable_model"), + ], + ) + def test_model_enablement_methods_should_convert_model_type_and_delegate( + self, + method_name: str, + provider_method_name: str, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = ModelProviderService() + provider_configuration = MagicMock() + monkeypatch.setattr(service, "_get_provider_configuration", MagicMock(return_value=provider_configuration)) + + getattr(service, method_name)( + tenant_id="tenant-1", + provider="openai", + model="gpt-4o", + model_type=ModelType.LLM.value, + ) + + getattr(provider_configuration, provider_method_name).assert_called_once_with( + model="gpt-4o", + model_type=ModelType.LLM, + ) diff --git a/api/tests/unit_tests/services/test_model_provider_service_sanitization.py b/api/tests/unit_tests/services/test_model_provider_service_sanitization.py index acf5dff634..97f3bd6f01 100644 --- a/api/tests/unit_tests/services/test_model_provider_service_sanitization.py +++ b/api/tests/unit_tests/services/test_model_provider_service_sanitization.py @@ -1,11 +1,11 @@ import types import pytest + +from core.entities.provider_entities import CredentialConfiguration, CustomModelConfiguration from graphon.model_runtime.entities.common_entities import I18nObject from graphon.model_runtime.entities.model_entities import ModelType from graphon.model_runtime.entities.provider_entities import ConfigurateMethod - -from core.entities.provider_entities import CredentialConfiguration, CustomModelConfiguration from models.provider import ProviderType from services.model_provider_service import ModelProviderService @@ -85,644 +85,3 @@ def test_get_provider_list_strips_credentials(service_with_fake_configurations: assert len(custom_models) == 1 # The sanitizer should drop credentials in list response assert custom_models[0].credentials is None - - -# === Merged from test_model_provider_service.py === - - -from types import SimpleNamespace -from typing import Any -from unittest.mock import MagicMock - -import pytest -from graphon.model_runtime.entities.common_entities import I18nObject -from graphon.model_runtime.entities.model_entities import FetchFrom, ModelType, ParameterRule, ParameterType - -from core.entities.model_entities import ModelStatus -from models.provider import ProviderType -from services import model_provider_service as service_module -from services.errors.app_model_config import ProviderNotFoundError -from services.model_provider_service import ModelProviderService - - -def _create_service_with_mocked_manager() -> tuple[ModelProviderService, MagicMock]: - manager = MagicMock() - service = ModelProviderService() - service._get_provider_manager = 
MagicMock(return_value=manager) - return service, manager - - -def _build_provider_configuration( - *, - provider_name: str = "openai", - supported_model_types: list[ModelType] | None = None, - custom_models: list[Any] | None = None, - custom_config_available: bool = True, -) -> SimpleNamespace: - if supported_model_types is None: - supported_model_types = [ModelType.LLM] - return SimpleNamespace( - provider=SimpleNamespace( - provider=provider_name, - label=I18nObject(en_US=provider_name), - description=None, - icon_small=None, - icon_small_dark=None, - background=None, - help=None, - supported_model_types=supported_model_types, - configurate_methods=[], - provider_credential_schema=None, - model_credential_schema=None, - ), - preferred_provider_type=ProviderType.CUSTOM, - custom_configuration=SimpleNamespace( - provider=SimpleNamespace( - current_credential_id="cred-1", - current_credential_name="Credential 1", - available_credentials=[], - ), - models=custom_models, - can_added_models=[], - ), - system_configuration=SimpleNamespace(enabled=False, current_quota_type=None, quota_configurations=[]), - is_custom_configuration_available=lambda: custom_config_available, - ) - - -def test__get_provider_configuration_should_return_configuration_when_provider_exists() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - provider_configuration = SimpleNamespace(name="provider-config") - manager.get_configurations.return_value = {"openai": provider_configuration} - - # Act - result = service._get_provider_configuration(tenant_id="tenant-1", provider="openai") - - # Assert - assert result is provider_configuration - - -def test__get_provider_configuration_should_raise_error_when_provider_is_missing() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - manager.get_configurations.return_value = {} - - # Act / Assert - with pytest.raises(ProviderNotFoundError, match="does not exist"): - service._get_provider_configuration(tenant_id="tenant-1", provider="missing") - - -def test_get_provider_list_should_filter_by_model_type_and_build_no_configure_status() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - allowed = _build_provider_configuration( - provider_name="openai", - supported_model_types=[ModelType.LLM], - custom_config_available=False, - ) - filtered = _build_provider_configuration( - provider_name="embedding", - supported_model_types=[ModelType.TEXT_EMBEDDING], - custom_config_available=True, - ) - manager.get_configurations.return_value = {"openai": allowed, "embedding": filtered} - - # Act - result = service.get_provider_list(tenant_id="tenant-1", model_type=ModelType.LLM.value) - - # Assert - assert len(result) == 1 - assert result[0].provider == "openai" - assert result[0].custom_configuration.status.value == "no-configure" - - -def test_get_models_by_provider_should_wrap_model_entities_with_tenant_context() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - - class _Model: - def __init__(self, model_name: str) -> None: - self.model_name = model_name - - def model_dump(self) -> dict[str, Any]: - return { - "model": self.model_name, - "label": {"en_US": self.model_name}, - "model_type": ModelType.LLM, - "features": [], - "fetch_from": FetchFrom.PREDEFINED_MODEL, - "model_properties": {}, - "deprecated": False, - "status": ModelStatus.ACTIVE, - "load_balancing_enabled": False, - "has_invalid_load_balancing_configs": False, - "provider": { - "provider": "openai", - "label": 
{"en_US": "OpenAI"}, - "icon_small": None, - "icon_small_dark": None, - "supported_model_types": [ModelType.LLM], - }, - } - - provider_configurations = SimpleNamespace( - get_models=MagicMock(return_value=[_Model("gpt-4o"), _Model("gpt-4o-mini")]) - ) - manager.get_configurations.return_value = provider_configurations - - # Act - result = service.get_models_by_provider(tenant_id="tenant-1", provider="openai") - - # Assert - assert len(result) == 2 - assert result[0].model == "gpt-4o" - assert result[1].provider.provider == "openai" - provider_configurations.get_models.assert_called_once_with(provider="openai") - - -@pytest.mark.parametrize( - ("method_name", "method_kwargs", "provider_method_name", "provider_call_kwargs", "provider_return"), - [ - ( - "get_provider_credential", - {"tenant_id": "tenant-1", "provider": "openai", "credential_id": "cred-1"}, - "get_provider_credential", - {"credential_id": "cred-1"}, - {"token": "abc"}, - ), - ( - "validate_provider_credentials", - {"tenant_id": "tenant-1", "provider": "openai", "credentials": {"token": "abc"}}, - "validate_provider_credentials", - ({"token": "abc"},), - None, - ), - ( - "create_provider_credential", - {"tenant_id": "tenant-1", "provider": "openai", "credentials": {"token": "abc"}, "credential_name": "A"}, - "create_provider_credential", - ({"token": "abc"}, "A"), - None, - ), - ( - "update_provider_credential", - { - "tenant_id": "tenant-1", - "provider": "openai", - "credentials": {"token": "abc"}, - "credential_id": "cred-1", - "credential_name": "B", - }, - "update_provider_credential", - {"credential_id": "cred-1", "credentials": {"token": "abc"}, "credential_name": "B"}, - None, - ), - ( - "remove_provider_credential", - {"tenant_id": "tenant-1", "provider": "openai", "credential_id": "cred-1"}, - "delete_provider_credential", - {"credential_id": "cred-1"}, - None, - ), - ( - "switch_active_provider_credential", - {"tenant_id": "tenant-1", "provider": "openai", "credential_id": "cred-1"}, - "switch_active_provider_credential", - {"credential_id": "cred-1"}, - None, - ), - ], -) -def test_provider_credential_methods_should_delegate_to_provider_configuration( - method_name: str, - method_kwargs: dict[str, Any], - provider_method_name: str, - provider_call_kwargs: Any, - provider_return: Any, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = ModelProviderService() - provider_configuration = MagicMock() - getattr(provider_configuration, provider_method_name).return_value = provider_return - get_provider_config_mock = MagicMock(return_value=provider_configuration) - monkeypatch.setattr(service, "_get_provider_configuration", get_provider_config_mock) - - # Act - result = getattr(service, method_name)(**method_kwargs) - - # Assert - get_provider_config_mock.assert_called_once_with("tenant-1", "openai") - provider_method = getattr(provider_configuration, provider_method_name) - if isinstance(provider_call_kwargs, tuple): - provider_method.assert_called_once_with(*provider_call_kwargs) - elif isinstance(provider_call_kwargs, dict): - provider_method.assert_called_once_with(**provider_call_kwargs) - else: - provider_method.assert_called_once_with(provider_call_kwargs) - if method_name == "get_provider_credential": - assert result == {"token": "abc"} - - -@pytest.mark.parametrize( - ("method_name", "method_kwargs", "provider_method_name", "expected_kwargs", "provider_return"), - [ - ( - "get_model_credential", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - 
"model": "gpt-4o", - "credential_id": "cred-1", - }, - "get_custom_model_credential", - {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, - {"api_key": "x"}, - ), - ( - "validate_model_credentials", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - "credentials": {"api_key": "x"}, - }, - "validate_custom_model_credentials", - {"model_type": ModelType.LLM, "model": "gpt-4o", "credentials": {"api_key": "x"}}, - None, - ), - ( - "create_model_credential", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - "credentials": {"api_key": "x"}, - "credential_name": "cred-a", - }, - "create_custom_model_credential", - { - "model_type": ModelType.LLM, - "model": "gpt-4o", - "credentials": {"api_key": "x"}, - "credential_name": "cred-a", - }, - None, - ), - ( - "update_model_credential", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - "credentials": {"api_key": "x"}, - "credential_id": "cred-1", - "credential_name": "cred-b", - }, - "update_custom_model_credential", - { - "model_type": ModelType.LLM, - "model": "gpt-4o", - "credentials": {"api_key": "x"}, - "credential_id": "cred-1", - "credential_name": "cred-b", - }, - None, - ), - ( - "remove_model_credential", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - "credential_id": "cred-1", - }, - "delete_custom_model_credential", - {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, - None, - ), - ( - "switch_active_custom_model_credential", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - "credential_id": "cred-1", - }, - "switch_custom_model_credential", - {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, - None, - ), - ( - "add_model_credential_to_model_list", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - "credential_id": "cred-1", - }, - "add_model_credential_to_model", - {"model_type": ModelType.LLM, "model": "gpt-4o", "credential_id": "cred-1"}, - None, - ), - ( - "remove_model", - { - "tenant_id": "tenant-1", - "provider": "openai", - "model_type": ModelType.LLM.value, - "model": "gpt-4o", - }, - "delete_custom_model", - {"model_type": ModelType.LLM, "model": "gpt-4o"}, - None, - ), - ], -) -def test_custom_model_methods_should_convert_model_type_and_delegate( - method_name: str, - method_kwargs: dict[str, Any], - provider_method_name: str, - expected_kwargs: dict[str, Any], - provider_return: Any, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = ModelProviderService() - provider_configuration = MagicMock() - getattr(provider_configuration, provider_method_name).return_value = provider_return - get_provider_config_mock = MagicMock(return_value=provider_configuration) - monkeypatch.setattr(service, "_get_provider_configuration", get_provider_config_mock) - - # Act - result = getattr(service, method_name)(**method_kwargs) - - # Assert - get_provider_config_mock.assert_called_once_with("tenant-1", "openai") - getattr(provider_configuration, provider_method_name).assert_called_once_with(**expected_kwargs) - if method_name == "get_model_credential": - assert result == {"api_key": "x"} - - -def 
test_get_models_by_model_type_should_group_active_non_deprecated_models() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - openai_provider = SimpleNamespace( - provider="openai", - label=I18nObject(en_US="OpenAI"), - icon_small=None, - icon_small_dark=None, - ) - anthropic_provider = SimpleNamespace( - provider="anthropic", - label=I18nObject(en_US="Anthropic"), - icon_small=None, - icon_small_dark=None, - ) - models = [ - SimpleNamespace( - provider=openai_provider, - model="gpt-4o", - label=I18nObject(en_US="GPT-4o"), - model_type=ModelType.LLM, - features=[], - fetch_from=FetchFrom.PREDEFINED_MODEL, - model_properties={}, - status=ModelStatus.ACTIVE, - load_balancing_enabled=False, - deprecated=False, - ), - SimpleNamespace( - provider=openai_provider, - model="old-openai", - label=I18nObject(en_US="Old OpenAI"), - model_type=ModelType.LLM, - features=[], - fetch_from=FetchFrom.PREDEFINED_MODEL, - model_properties={}, - status=ModelStatus.ACTIVE, - load_balancing_enabled=False, - deprecated=True, - ), - SimpleNamespace( - provider=anthropic_provider, - model="old-anthropic", - label=I18nObject(en_US="Old Anthropic"), - model_type=ModelType.LLM, - features=[], - fetch_from=FetchFrom.PREDEFINED_MODEL, - model_properties={}, - status=ModelStatus.ACTIVE, - load_balancing_enabled=False, - deprecated=True, - ), - ] - provider_configurations = SimpleNamespace(get_models=MagicMock(return_value=models)) - manager.get_configurations.return_value = provider_configurations - - # Act - result = service.get_models_by_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) - - # Assert - provider_configurations.get_models.assert_called_once_with(model_type=ModelType.LLM, only_active=True) - assert len(result) == 1 - assert result[0].provider == "openai" - assert len(result[0].models) == 1 - assert result[0].models[0].model == "gpt-4o" - - -@pytest.mark.parametrize( - ("credentials", "schema", "expected_count"), - [ - (None, None, 0), - ({"api_key": "x"}, None, 0), - ( - {"api_key": "x"}, - SimpleNamespace( - parameter_rules=[ - ParameterRule( - name="temperature", - label=I18nObject(en_US="Temperature"), - type=ParameterType.FLOAT, - ) - ] - ), - 1, - ), - ], -) -def test_get_model_parameter_rules_should_handle_missing_credentials_and_schema( - credentials: dict[str, Any] | None, - schema: Any, - expected_count: int, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = ModelProviderService() - provider_configuration = MagicMock() - provider_configuration.get_current_credentials.return_value = credentials - provider_configuration.get_model_schema.return_value = schema - monkeypatch.setattr(service, "_get_provider_configuration", MagicMock(return_value=provider_configuration)) - - # Act - result = service.get_model_parameter_rules(tenant_id="tenant-1", provider="openai", model="gpt-4o") - - # Assert - assert len(result) == expected_count - provider_configuration.get_current_credentials.assert_called_once_with(model_type=ModelType.LLM, model="gpt-4o") - if credentials: - provider_configuration.get_model_schema.assert_called_once_with( - model_type=ModelType.LLM, - model="gpt-4o", - credentials=credentials, - ) - else: - provider_configuration.get_model_schema.assert_not_called() - - -def test_get_default_model_of_model_type_should_return_response_when_manager_returns_model() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - manager.get_default_model.return_value = SimpleNamespace( - model="gpt-4o", - 
model_type=ModelType.LLM, - provider=SimpleNamespace( - provider="openai", - label=I18nObject(en_US="OpenAI"), - icon_small=None, - supported_model_types=[ModelType.LLM], - ), - ) - - # Act - result = service.get_default_model_of_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) - - # Assert - assert result is not None - assert result.model == "gpt-4o" - assert result.provider.provider == "openai" - manager.get_default_model.assert_called_once_with(tenant_id="tenant-1", model_type=ModelType.LLM) - - -def test_get_default_model_of_model_type_should_return_none_when_manager_returns_none() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - manager.get_default_model.return_value = None - - # Act - result = service.get_default_model_of_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) - - # Assert - assert result is None - - -def test_get_default_model_of_model_type_should_return_none_when_manager_raises_exception() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - manager.get_default_model.side_effect = RuntimeError("boom") - - # Act - result = service.get_default_model_of_model_type(tenant_id="tenant-1", model_type=ModelType.LLM.value) - - # Assert - assert result is None - - -def test_update_default_model_of_model_type_should_delegate_to_provider_manager() -> None: - # Arrange - service, manager = _create_service_with_mocked_manager() - - # Act - service.update_default_model_of_model_type( - tenant_id="tenant-1", - model_type=ModelType.LLM.value, - provider="openai", - model="gpt-4o", - ) - - # Assert - manager.update_default_model_record.assert_called_once_with( - tenant_id="tenant-1", - model_type=ModelType.LLM, - provider="openai", - model="gpt-4o", - ) - - -def test_get_model_provider_icon_should_fetch_icon_bytes_from_factory(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - service = ModelProviderService() - factory_instance = MagicMock() - factory_instance.get_provider_icon.return_value = (b"icon-bytes", "image/png") - factory_constructor = MagicMock(return_value=factory_instance) - monkeypatch.setattr(service_module, "create_plugin_model_provider_factory", factory_constructor) - - # Act - result = service.get_model_provider_icon( - tenant_id="tenant-1", - provider="openai", - icon_type="icon_small", - lang="en_US", - ) - - # Assert - factory_constructor.assert_called_once_with(tenant_id="tenant-1") - factory_instance.get_provider_icon.assert_called_once_with("openai", "icon_small", "en_US") - assert result == (b"icon-bytes", "image/png") - - -def test_switch_preferred_provider_should_convert_enum_and_delegate(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - service = ModelProviderService() - provider_configuration = MagicMock() - monkeypatch.setattr(service, "_get_provider_configuration", MagicMock(return_value=provider_configuration)) - - # Act - service.switch_preferred_provider( - tenant_id="tenant-1", - provider="openai", - preferred_provider_type=ProviderType.SYSTEM.value, - ) - - # Assert - provider_configuration.switch_preferred_provider_type.assert_called_once_with(ProviderType.SYSTEM) - - -@pytest.mark.parametrize( - ("method_name", "provider_method_name"), - [ - ("enable_model", "enable_model"), - ("disable_model", "disable_model"), - ], -) -def test_model_enablement_methods_should_convert_model_type_and_delegate( - method_name: str, - provider_method_name: str, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = ModelProviderService() - 
provider_configuration = MagicMock() - monkeypatch.setattr(service, "_get_provider_configuration", MagicMock(return_value=provider_configuration)) - - # Act - getattr(service, method_name)( - tenant_id="tenant-1", - provider="openai", - model="gpt-4o", - model_type=ModelType.LLM.value, - ) - - # Assert - getattr(provider_configuration, provider_method_name).assert_called_once_with( - model="gpt-4o", - model_type=ModelType.LLM, - ) diff --git a/api/tests/unit_tests/services/test_operation_service.py b/api/tests/unit_tests/services/test_operation_service.py index a4c69b23ac..e43a7fa649 100644 --- a/api/tests/unit_tests/services/test_operation_service.py +++ b/api/tests/unit_tests/services/test_operation_service.py @@ -1,3 +1,4 @@ +from typing import Any from unittest.mock import MagicMock, patch import httpx @@ -105,7 +106,7 @@ class TestOperationService: ) @patch.object(OperationService, "_send_request") def test_should_map_parameters_correctly_when_record_utm_called( - self, mock_send: MagicMock, utm_info: dict, expected_params: dict + self, mock_send: MagicMock, utm_info: dict[str, Any], expected_params: dict[str, Any] ): """Test that record_utm correctly maps utm_info to parameters and calls _send_request""" # Arrange diff --git a/api/tests/unit_tests/services/test_schedule_service.py b/api/tests/unit_tests/services/test_schedule_service.py index 334062242b..0f8f7ffab5 100644 --- a/api/tests/unit_tests/services/test_schedule_service.py +++ b/api/tests/unit_tests/services/test_schedule_service.py @@ -2,23 +2,16 @@ import unittest from datetime import UTC, datetime from types import SimpleNamespace from typing import Any, cast -from unittest.mock import MagicMock, Mock, patch +from unittest.mock import MagicMock, Mock import pytest from sqlalchemy.orm import Session from core.trigger.constants import TRIGGER_SCHEDULE_NODE_TYPE -from core.workflow.nodes.trigger_schedule.entities import ScheduleConfig, SchedulePlanUpdate, VisualConfig -from core.workflow.nodes.trigger_schedule.exc import ScheduleConfigError, ScheduleNotFoundError -from events.event_handlers.sync_workflow_schedule_when_app_published import ( - sync_schedule_from_workflow, -) +from core.workflow.nodes.trigger_schedule.entities import VisualConfig +from core.workflow.nodes.trigger_schedule.exc import ScheduleConfigError from libs.schedule_utils import calculate_next_run_at, convert_12h_to_24h -from models.account import Account, TenantAccountJoin -from models.trigger import WorkflowSchedulePlan from models.workflow import Workflow -from services.errors.account import AccountNotFoundError -from services.trigger import schedule_service as service_module from services.trigger.schedule_service import ScheduleService @@ -83,180 +76,6 @@ class TestScheduleService(unittest.TestCase): with pytest.raises(UnknownTimeZoneError): calculate_next_run_at(cron_expr, timezone) - @patch("libs.schedule_utils.calculate_next_run_at") - def test_create_schedule(self, mock_calculate_next_run): - """Test creating a new schedule.""" - mock_session = MagicMock(spec=Session) - mock_calculate_next_run.return_value = datetime(2025, 8, 30, 10, 30, 0, tzinfo=UTC) - - config = ScheduleConfig( - node_id="start", - cron_expression="30 10 * * *", - timezone="UTC", - ) - - schedule = ScheduleService.create_schedule( - session=mock_session, - tenant_id="test-tenant", - app_id="test-app", - config=config, - ) - - assert schedule is not None - assert schedule.tenant_id == "test-tenant" - assert schedule.app_id == "test-app" - assert schedule.node_id == "start" - 
assert schedule.cron_expression == "30 10 * * *" - assert schedule.timezone == "UTC" - assert schedule.next_run_at is not None - mock_session.add.assert_called_once() - mock_session.flush.assert_called_once() - - @patch("services.trigger.schedule_service.calculate_next_run_at") - def test_update_schedule(self, mock_calculate_next_run): - """Test updating an existing schedule.""" - mock_session = MagicMock(spec=Session) - mock_schedule = Mock(spec=WorkflowSchedulePlan) - mock_schedule.cron_expression = "0 12 * * *" - mock_schedule.timezone = "America/New_York" - mock_session.get.return_value = mock_schedule - mock_calculate_next_run.return_value = datetime(2025, 8, 30, 12, 0, 0, tzinfo=UTC) - - updates = SchedulePlanUpdate( - cron_expression="0 12 * * *", - timezone="America/New_York", - ) - - result = ScheduleService.update_schedule( - session=mock_session, - schedule_id="test-schedule-id", - updates=updates, - ) - - assert result is not None - assert result.cron_expression == "0 12 * * *" - assert result.timezone == "America/New_York" - mock_calculate_next_run.assert_called_once() - mock_session.flush.assert_called_once() - - def test_update_schedule_not_found(self): - """Test updating a non-existent schedule raises exception.""" - from core.workflow.nodes.trigger_schedule.exc import ScheduleNotFoundError - - mock_session = MagicMock(spec=Session) - mock_session.get.return_value = None - - updates = SchedulePlanUpdate( - cron_expression="0 12 * * *", - ) - - with pytest.raises(ScheduleNotFoundError) as context: - ScheduleService.update_schedule( - session=mock_session, - schedule_id="non-existent-id", - updates=updates, - ) - - assert "Schedule not found: non-existent-id" in str(context.value) - mock_session.flush.assert_not_called() - - def test_delete_schedule(self): - """Test deleting a schedule.""" - mock_session = MagicMock(spec=Session) - mock_schedule = Mock(spec=WorkflowSchedulePlan) - mock_session.get.return_value = mock_schedule - - # Should not raise exception and complete successfully - ScheduleService.delete_schedule( - session=mock_session, - schedule_id="test-schedule-id", - ) - - mock_session.delete.assert_called_once_with(mock_schedule) - mock_session.flush.assert_called_once() - - def test_delete_schedule_not_found(self): - """Test deleting a non-existent schedule raises exception.""" - from core.workflow.nodes.trigger_schedule.exc import ScheduleNotFoundError - - mock_session = MagicMock(spec=Session) - mock_session.get.return_value = None - - # Should raise ScheduleNotFoundError - with pytest.raises(ScheduleNotFoundError) as context: - ScheduleService.delete_schedule( - session=mock_session, - schedule_id="non-existent-id", - ) - - assert "Schedule not found: non-existent-id" in str(context.value) - mock_session.delete.assert_not_called() - - @patch("services.trigger.schedule_service.select") - def test_get_tenant_owner(self, mock_select): - """Test getting tenant owner account.""" - mock_session = MagicMock(spec=Session) - mock_account = Mock(spec=Account) - mock_account.id = "owner-account-id" - - # Mock owner query - mock_owner_result = Mock(spec=TenantAccountJoin) - mock_owner_result.account_id = "owner-account-id" - - mock_session.execute.return_value.scalar_one_or_none.return_value = mock_owner_result - mock_session.get.return_value = mock_account - - result = ScheduleService.get_tenant_owner( - session=mock_session, - tenant_id="test-tenant", - ) - - assert result is not None - assert result.id == "owner-account-id" - - 
@patch("services.trigger.schedule_service.select") - def test_get_tenant_owner_fallback_to_admin(self, mock_select): - """Test getting tenant owner falls back to admin if no owner.""" - mock_session = MagicMock(spec=Session) - mock_account = Mock(spec=Account) - mock_account.id = "admin-account-id" - - # Mock admin query (owner returns None) - mock_admin_result = Mock(spec=TenantAccountJoin) - mock_admin_result.account_id = "admin-account-id" - - mock_session.execute.return_value.scalar_one_or_none.side_effect = [None, mock_admin_result] - mock_session.get.return_value = mock_account - - result = ScheduleService.get_tenant_owner( - session=mock_session, - tenant_id="test-tenant", - ) - - assert result is not None - assert result.id == "admin-account-id" - - @patch("services.trigger.schedule_service.calculate_next_run_at") - def test_update_next_run_at(self, mock_calculate_next_run): - """Test updating next run time after schedule triggered.""" - mock_session = MagicMock(spec=Session) - mock_schedule = Mock(spec=WorkflowSchedulePlan) - mock_schedule.cron_expression = "30 10 * * *" - mock_schedule.timezone = "UTC" - mock_session.get.return_value = mock_schedule - - next_time = datetime(2025, 8, 31, 10, 30, 0, tzinfo=UTC) - mock_calculate_next_run.return_value = next_time - - result = ScheduleService.update_next_run_at( - session=mock_session, - schedule_id="test-schedule-id", - ) - - assert result == next_time - assert mock_schedule.next_run_at == next_time - mock_session.flush.assert_called_once() - class TestVisualToCron(unittest.TestCase): """Test cases for visual configuration to cron conversion.""" @@ -678,108 +497,6 @@ class TestScheduleWithTimezone(unittest.TestCase): assert summer_next.hour == 14 -class TestSyncScheduleFromWorkflow(unittest.TestCase): - """Test cases for syncing schedule from workflow.""" - - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.db") - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.ScheduleService") - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.select") - def test_sync_schedule_create_new(self, mock_select, mock_service, mock_db): - """Test creating new schedule when none exists.""" - mock_session = MagicMock() - mock_db.engine = MagicMock() - mock_session.__enter__ = MagicMock(return_value=mock_session) - mock_session.__exit__ = MagicMock(return_value=None) - sessionmaker = MagicMock(return_value=MagicMock(begin=MagicMock(return_value=mock_session))) - with patch("events.event_handlers.sync_workflow_schedule_when_app_published.sessionmaker", sessionmaker): - mock_session.scalar.return_value = None # No existing plan - - # Mock extract_schedule_config to return a ScheduleConfig object - mock_config = Mock(spec=ScheduleConfig) - mock_config.node_id = "start" - mock_config.cron_expression = "30 10 * * *" - mock_config.timezone = "UTC" - mock_service.extract_schedule_config.return_value = mock_config - - mock_new_plan = Mock(spec=WorkflowSchedulePlan) - mock_service.create_schedule.return_value = mock_new_plan - - workflow = Mock(spec=Workflow) - result = sync_schedule_from_workflow("tenant-id", "app-id", workflow) - - assert result == mock_new_plan - mock_service.create_schedule.assert_called_once() - mock_session.commit.assert_not_called() - - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.db") - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.ScheduleService") - 
@patch("events.event_handlers.sync_workflow_schedule_when_app_published.select") - def test_sync_schedule_update_existing(self, mock_select, mock_service, mock_db): - """Test updating existing schedule.""" - mock_session = MagicMock() - mock_db.engine = MagicMock() - mock_session.__enter__ = MagicMock(return_value=mock_session) - mock_session.__exit__ = MagicMock(return_value=None) - sessionmaker = MagicMock(return_value=MagicMock(begin=MagicMock(return_value=mock_session))) - - with patch("events.event_handlers.sync_workflow_schedule_when_app_published.sessionmaker", sessionmaker): - mock_existing_plan = Mock(spec=WorkflowSchedulePlan) - mock_existing_plan.id = "existing-plan-id" - mock_session.scalar.return_value = mock_existing_plan - - # Mock extract_schedule_config to return a ScheduleConfig object - mock_config = Mock(spec=ScheduleConfig) - mock_config.node_id = "start" - mock_config.cron_expression = "0 12 * * *" - mock_config.timezone = "America/New_York" - mock_service.extract_schedule_config.return_value = mock_config - - mock_updated_plan = Mock(spec=WorkflowSchedulePlan) - mock_service.update_schedule.return_value = mock_updated_plan - - workflow = Mock(spec=Workflow) - result = sync_schedule_from_workflow("tenant-id", "app-id", workflow) - - assert result == mock_updated_plan - mock_service.update_schedule.assert_called_once() - # Verify the arguments passed to update_schedule - call_args = mock_service.update_schedule.call_args - assert call_args.kwargs["session"] == mock_session - assert call_args.kwargs["schedule_id"] == "existing-plan-id" - updates_obj = call_args.kwargs["updates"] - assert isinstance(updates_obj, SchedulePlanUpdate) - assert updates_obj.node_id == "start" - assert updates_obj.cron_expression == "0 12 * * *" - assert updates_obj.timezone == "America/New_York" - mock_session.commit.assert_not_called() - - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.db") - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.ScheduleService") - @patch("events.event_handlers.sync_workflow_schedule_when_app_published.select") - def test_sync_schedule_remove_when_no_config(self, mock_select, mock_service, mock_db): - """Test removing schedule when no schedule config in workflow.""" - mock_session = MagicMock() - mock_db.engine = MagicMock() - mock_session.__enter__ = MagicMock(return_value=mock_session) - mock_session.__exit__ = MagicMock(return_value=None) - sessionmaker = MagicMock(return_value=MagicMock(begin=MagicMock(return_value=mock_session))) - - with patch("events.event_handlers.sync_workflow_schedule_when_app_published.sessionmaker", sessionmaker): - mock_existing_plan = Mock(spec=WorkflowSchedulePlan) - mock_existing_plan.id = "existing-plan-id" - mock_session.scalar.return_value = mock_existing_plan - - mock_service.extract_schedule_config.return_value = None # No schedule config - - workflow = Mock(spec=Workflow) - result = sync_schedule_from_workflow("tenant-id", "app-id", workflow) - - assert result is None - # Now using ScheduleService.delete_schedule instead of session.delete - mock_service.delete_schedule.assert_called_once_with(session=mock_session, schedule_id="existing-plan-id") - mock_session.commit.assert_not_called() - - @pytest.fixture def session_mock() -> MagicMock: return MagicMock(spec=Session) @@ -789,62 +506,6 @@ def _workflow(**kwargs: Any) -> Workflow: return cast(Workflow, SimpleNamespace(**kwargs)) -def test_update_schedule_should_update_only_node_id_without_recomputing_time( - session_mock: 
MagicMock, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - schedule = MagicMock(spec=WorkflowSchedulePlan) - schedule.cron_expression = "0 10 * * *" - schedule.timezone = "UTC" - session_mock.get.return_value = schedule - - next_run_mock = MagicMock(return_value=datetime(2026, 1, 1, 10, 0, tzinfo=UTC)) - monkeypatch.setattr(service_module, "calculate_next_run_at", next_run_mock) - - # Act - result = ScheduleService.update_schedule( - session=session_mock, - schedule_id="schedule-1", - updates=SchedulePlanUpdate(node_id="node-new"), - ) - - # Assert - assert result is schedule - assert schedule.node_id == "node-new" - next_run_mock.assert_not_called() - session_mock.flush.assert_called_once() - - -def test_get_tenant_owner_should_raise_when_account_record_missing(session_mock: MagicMock) -> None: - # Arrange - join = SimpleNamespace(account_id="account-404") - session_mock.execute.return_value.scalar_one_or_none.return_value = join - session_mock.get.return_value = None - - # Act / Assert - with pytest.raises(AccountNotFoundError, match="Account not found: account-404"): - ScheduleService.get_tenant_owner(session=session_mock, tenant_id="tenant-1") - - -def test_get_tenant_owner_should_raise_when_no_owner_or_admin_found(session_mock: MagicMock) -> None: - # Arrange - session_mock.execute.return_value.scalar_one_or_none.side_effect = [None, None] - - # Act / Assert - with pytest.raises(AccountNotFoundError, match="Account not found for tenant: tenant-1"): - ScheduleService.get_tenant_owner(session=session_mock, tenant_id="tenant-1") - - -def test_update_next_run_at_should_raise_when_schedule_not_found(session_mock: MagicMock) -> None: - # Arrange - session_mock.get.return_value = None - - # Act / Assert - with pytest.raises(ScheduleNotFoundError, match="Schedule not found: schedule-1"): - ScheduleService.update_next_run_at(session=session_mock, schedule_id="schedule-1") - - def test_to_schedule_config_should_build_from_cron_mode() -> None: # Arrange node_config: dict[str, Any] = { diff --git a/api/tests/unit_tests/services/test_trigger_provider_service.py b/api/tests/unit_tests/services/test_trigger_provider_service.py index bd2e936b62..ebf1b36610 100644 --- a/api/tests/unit_tests/services/test_trigger_provider_service.py +++ b/api/tests/unit_tests/services/test_trigger_provider_service.py @@ -3,6 +3,7 @@ from __future__ import annotations import contextlib import json from types import SimpleNamespace +from typing import Any from unittest.mock import MagicMock import pytest @@ -28,9 +29,9 @@ def _mock_get_trigger_provider(mocker: MockerFixture, provider: object | None) - def _encrypter_mock( *, - decrypted: dict | None = None, - encrypted: dict | None = None, - masked: dict | None = None, + decrypted: dict[str, Any] | None = None, + encrypted: dict[str, Any] | None = None, + masked: dict[str, Any] | None = None, ) -> MagicMock: enc = MagicMock() enc.decrypt.return_value = decrypted or {} diff --git a/api/tests/unit_tests/services/test_variable_truncator.py b/api/tests/unit_tests/services/test_variable_truncator.py index 27602bb1cc..4b864dd221 100644 --- a/api/tests/unit_tests/services/test_variable_truncator.py +++ b/api/tests/unit_tests/services/test_variable_truncator.py @@ -12,11 +12,11 @@ This test suite covers all functionality of the current VariableTruncator includ import functools import json import uuid -from collections.abc import Mapping from typing import Any from uuid import uuid4 import pytest + from graphon.file import File, FileTransferMethod, FileType from 
graphon.variables.segments import ( ArrayFileSegment, @@ -29,7 +29,6 @@ from graphon.variables.segments import ( ObjectSegment, StringSegment, ) - from services.variable_truncator import ( DummyVariableTruncator, MaxDepthExceededError, @@ -674,229 +673,3 @@ def test_dummy_variable_truncator_methods(): assert isinstance(result, TruncationResult) assert result.result == segment assert result.truncated is False - - -# === Merged from test_variable_truncator_additional.py === - - -from typing import Any - -import pytest -from graphon.nodes.variable_assigner.common.helpers import UpdatedVariable -from graphon.variables.segments import IntegerSegment, ObjectSegment, StringSegment -from graphon.variables.types import SegmentType - -from services import variable_truncator as truncator_module -from services.variable_truncator import BaseTruncator, TruncationResult, VariableTruncator - - -class _AbstractPassthrough(BaseTruncator): - def truncate(self, segment: Any) -> TruncationResult: - # Arrange / Act - return super().truncate(segment) # type: ignore[misc] - - def truncate_variable_mapping(self, v: Mapping[str, Any]) -> tuple[Mapping[str, Any], bool]: - # Arrange / Act - return super().truncate_variable_mapping(v) # type: ignore[misc] - - -def test_base_truncator_methods_should_execute_abstract_placeholders() -> None: - # Arrange - passthrough = _AbstractPassthrough() - - # Act - truncate_result = passthrough.truncate(StringSegment(value="x")) - mapping_result = passthrough.truncate_variable_mapping({"a": 1}) - - # Assert - assert truncate_result is None - assert mapping_result is None - - -def test_default_should_use_dify_config_limits(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - monkeypatch.setattr(truncator_module.dify_config, "WORKFLOW_VARIABLE_TRUNCATION_MAX_SIZE", 111) - monkeypatch.setattr(truncator_module.dify_config, "WORKFLOW_VARIABLE_TRUNCATION_ARRAY_LENGTH", 7) - monkeypatch.setattr(truncator_module.dify_config, "WORKFLOW_VARIABLE_TRUNCATION_STRING_LENGTH", 33) - - # Act - truncator = VariableTruncator.default() - - # Assert - assert truncator._max_size_bytes == 111 - assert truncator._array_element_limit == 7 - assert truncator._string_length_limit == 33 - - -def test_truncate_variable_mapping_should_mark_over_budget_keys_with_ellipsis() -> None: - # Arrange - truncator = VariableTruncator(max_size_bytes=5) - mapping = {"very_long_key": "value"} - - # Act - result, truncated = truncator.truncate_variable_mapping(mapping) - - # Assert - assert result == {"very_long_key": "..."} - assert truncated is True - - -def test_truncate_variable_mapping_should_handle_segment_values() -> None: - # Arrange - truncator = VariableTruncator(max_size_bytes=100) - mapping = {"seg": StringSegment(value="hello")} - - # Act - result, truncated = truncator.truncate_variable_mapping(mapping) - - # Assert - assert isinstance(result["seg"], StringSegment) - assert result["seg"].value == "hello" - assert truncated is False - - -@pytest.mark.parametrize( - ("value", "expected"), - [ - (None, False), - (True, False), - (1, False), - (1.5, False), - ("x", True), - ({"k": "v"}, True), - ], -) -def test_json_value_needs_truncation_should_match_expected_rules(value: Any, expected: bool) -> None: - # Arrange - - # Act - result = VariableTruncator._json_value_needs_truncation(value) - - # Assert - assert result is expected - - -def test_truncate_should_use_string_fallback_when_truncated_value_size_exceeds_limit( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - truncator = 
VariableTruncator(max_size_bytes=10) - forced_result = truncator_module._PartResult( - value=StringSegment(value="this is too long"), - value_size=100, - truncated=True, - ) - monkeypatch.setattr(truncator, "_truncate_segment", lambda *_args, **_kwargs: forced_result) - - # Act - result = truncator.truncate(StringSegment(value="input")) - - # Assert - assert result.truncated is True - assert isinstance(result.result, StringSegment) - assert not result.result.value.startswith('"') - - -def test_truncate_segment_should_raise_assertion_for_unexpected_truncatable_segment( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - truncator = VariableTruncator() - monkeypatch.setattr(VariableTruncator, "_segment_need_truncation", lambda _segment: True) - - # Act / Assert - with pytest.raises(AssertionError): - truncator._truncate_segment(IntegerSegment(value=1), 10) - - -def test_calculate_json_size_should_unwrap_segment_values() -> None: - # Arrange - segment = StringSegment(value="abc") - - # Act - size = VariableTruncator.calculate_json_size(segment) - - # Assert - assert size == VariableTruncator.calculate_json_size("abc") - - -def test_calculate_json_size_should_handle_updated_variable_instances() -> None: - # Arrange - updated = UpdatedVariable(name="n", selector=["node", "var"], value_type=SegmentType.STRING, new_value="v") - - # Act - size = VariableTruncator.calculate_json_size(updated) - - # Assert - assert size > 0 - - -def test_maybe_qa_structure_should_validate_shape() -> None: - # Arrange - - # Act / Assert - assert VariableTruncator._maybe_qa_structure({"qa_chunks": []}) is True - assert VariableTruncator._maybe_qa_structure({"qa_chunks": "not-list"}) is False - assert VariableTruncator._maybe_qa_structure({}) is False - - -def test_maybe_parent_child_structure_should_validate_shape() -> None: - # Arrange - - # Act / Assert - assert VariableTruncator._maybe_parent_child_structure({"parent_mode": "full", "parent_child_chunks": []}) is True - assert VariableTruncator._maybe_parent_child_structure({"parent_mode": 1, "parent_child_chunks": []}) is False - assert ( - VariableTruncator._maybe_parent_child_structure({"parent_mode": "full", "parent_child_chunks": "bad"}) is False - ) - - -def test_truncate_object_should_truncate_segment_values_inside_object() -> None: - # Arrange - truncator = VariableTruncator(string_length_limit=8, max_size_bytes=30) - mapping = {"s": StringSegment(value="long-content")} - - # Act - result = truncator._truncate_object(mapping, 20) - - # Assert - assert result.truncated is True - assert isinstance(result.value["s"], StringSegment) - - -def test_truncate_json_primitives_should_handle_updated_variable_input() -> None: - # Arrange - truncator = VariableTruncator(max_size_bytes=100) - updated = UpdatedVariable(name="n", selector=["node", "var"], value_type=SegmentType.STRING, new_value="v") - - # Act - result = truncator._truncate_json_primitives(updated, 100) - - # Assert - assert isinstance(result.value, dict) - - -def test_truncate_json_primitives_should_raise_assertion_for_unsupported_value_type() -> None: - # Arrange - truncator = VariableTruncator() - - # Act / Assert - with pytest.raises(AssertionError): - truncator._truncate_json_primitives(object(), 100) # type: ignore[arg-type] - - -def test_truncate_should_apply_json_string_fallback_for_large_non_string_segment( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - truncator = VariableTruncator(max_size_bytes=10) - forced_segment = ObjectSegment(value={"k": "v"}) - forced_result = 
truncator_module._PartResult(value=forced_segment, value_size=100, truncated=True) - monkeypatch.setattr(truncator, "_truncate_segment", lambda *_args, **_kwargs: forced_result) - - # Act - result = truncator.truncate(ObjectSegment(value={"a": "b"})) - - # Assert - assert result.truncated is True - assert isinstance(result.result, StringSegment) diff --git a/api/tests/unit_tests/services/test_variable_truncator_additional.py b/api/tests/unit_tests/services/test_variable_truncator_additional.py new file mode 100644 index 0000000000..e9427c4ab3 --- /dev/null +++ b/api/tests/unit_tests/services/test_variable_truncator_additional.py @@ -0,0 +1,174 @@ +from collections.abc import Mapping +from typing import Any + +import pytest + +from graphon.nodes.variable_assigner.common.helpers import UpdatedVariable +from graphon.variables.segments import IntegerSegment, ObjectSegment, StringSegment +from graphon.variables.types import SegmentType +from services import variable_truncator as truncator_module +from services.variable_truncator import BaseTruncator, TruncationResult, VariableTruncator + + +class _AbstractPassthrough(BaseTruncator): + def truncate(self, segment: Any) -> TruncationResult: + return super().truncate(segment) # type: ignore[misc] + + def truncate_variable_mapping(self, v: Mapping[str, Any]) -> tuple[Mapping[str, Any], bool]: + return super().truncate_variable_mapping(v) # type: ignore[misc] + + +class TestBaseTruncatorContract: + def test_base_truncator_methods_should_execute_abstract_placeholders(self) -> None: + passthrough = _AbstractPassthrough() + + truncate_result = passthrough.truncate(StringSegment(value="x")) + mapping_result = passthrough.truncate_variable_mapping({"a": 1}) + + assert truncate_result is None + assert mapping_result is None + + +class TestVariableTruncatorAdditionalBehavior: + def test_default_should_use_dify_config_limits(self, monkeypatch: pytest.MonkeyPatch) -> None: + monkeypatch.setattr(truncator_module.dify_config, "WORKFLOW_VARIABLE_TRUNCATION_MAX_SIZE", 111) + monkeypatch.setattr(truncator_module.dify_config, "WORKFLOW_VARIABLE_TRUNCATION_ARRAY_LENGTH", 7) + monkeypatch.setattr(truncator_module.dify_config, "WORKFLOW_VARIABLE_TRUNCATION_STRING_LENGTH", 33) + + truncator = VariableTruncator.default() + + assert truncator._max_size_bytes == 111 + assert truncator._array_element_limit == 7 + assert truncator._string_length_limit == 33 + + def test_truncate_variable_mapping_should_mark_over_budget_keys_with_ellipsis(self) -> None: + truncator = VariableTruncator(max_size_bytes=5) + mapping = {"very_long_key": "value"} + + result, truncated = truncator.truncate_variable_mapping(mapping) + + assert result == {"very_long_key": "..."} + assert truncated is True + + def test_truncate_variable_mapping_should_handle_segment_values(self) -> None: + truncator = VariableTruncator(max_size_bytes=100) + mapping = {"seg": StringSegment(value="hello")} + + result, truncated = truncator.truncate_variable_mapping(mapping) + + assert isinstance(result["seg"], StringSegment) + assert result["seg"].value == "hello" + assert truncated is False + + @pytest.mark.parametrize( + ("value", "expected"), + [ + (None, False), + (True, False), + (1, False), + (1.5, False), + ("x", True), + ({"k": "v"}, True), + ], + ) + def test_json_value_needs_truncation_should_match_expected_rules( + self, + value: Any, + expected: bool, + ) -> None: + result = VariableTruncator._json_value_needs_truncation(value) + assert result is expected + + def 
test_truncate_should_use_string_fallback_when_truncated_value_size_exceeds_limit( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + truncator = VariableTruncator(max_size_bytes=10) + forced_result = truncator_module._PartResult( + value=StringSegment(value="this is too long"), + value_size=100, + truncated=True, + ) + monkeypatch.setattr(truncator, "_truncate_segment", lambda *_args, **_kwargs: forced_result) + + result = truncator.truncate(StringSegment(value="input")) + + assert result.truncated is True + assert isinstance(result.result, StringSegment) + assert not result.result.value.startswith('"') + + def test_truncate_segment_should_raise_assertion_for_unexpected_truncatable_segment( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + truncator = VariableTruncator() + monkeypatch.setattr(VariableTruncator, "_segment_need_truncation", lambda _segment: True) + + with pytest.raises(AssertionError): + truncator._truncate_segment(IntegerSegment(value=1), 10) + + def test_calculate_json_size_should_unwrap_segment_values(self) -> None: + segment = StringSegment(value="abc") + + size = VariableTruncator.calculate_json_size(segment) + + assert size == VariableTruncator.calculate_json_size("abc") + + def test_calculate_json_size_should_handle_updated_variable_instances(self) -> None: + updated = UpdatedVariable(name="n", selector=["node", "var"], value_type=SegmentType.STRING, new_value="v") + + size = VariableTruncator.calculate_json_size(updated) + + assert size > 0 + + def test_maybe_qa_structure_should_validate_shape(self) -> None: + assert VariableTruncator._maybe_qa_structure({"qa_chunks": []}) is True + assert VariableTruncator._maybe_qa_structure({"qa_chunks": "not-list"}) is False + assert VariableTruncator._maybe_qa_structure({}) is False + + def test_maybe_parent_child_structure_should_validate_shape(self) -> None: + assert ( + VariableTruncator._maybe_parent_child_structure({"parent_mode": "full", "parent_child_chunks": []}) is True + ) + assert VariableTruncator._maybe_parent_child_structure({"parent_mode": 1, "parent_child_chunks": []}) is False + assert ( + VariableTruncator._maybe_parent_child_structure({"parent_mode": "full", "parent_child_chunks": "bad"}) + is False + ) + + def test_truncate_object_should_truncate_segment_values_inside_object(self) -> None: + truncator = VariableTruncator(string_length_limit=8, max_size_bytes=30) + mapping = {"s": StringSegment(value="long-content")} + + result = truncator._truncate_object(mapping, 20) + + assert result.truncated is True + assert isinstance(result.value["s"], StringSegment) + + def test_truncate_json_primitives_should_handle_updated_variable_input(self) -> None: + truncator = VariableTruncator(max_size_bytes=100) + updated = UpdatedVariable(name="n", selector=["node", "var"], value_type=SegmentType.STRING, new_value="v") + + result = truncator._truncate_json_primitives(updated, 100) + + assert isinstance(result.value, dict) + + def test_truncate_json_primitives_should_raise_assertion_for_unsupported_value_type(self) -> None: + truncator = VariableTruncator() + + with pytest.raises(AssertionError): + truncator._truncate_json_primitives(object(), 100) # type: ignore[arg-type] + + def test_truncate_should_apply_json_string_fallback_for_large_non_string_segment( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + truncator = VariableTruncator(max_size_bytes=10) + forced_segment = ObjectSegment(value={"k": "v"}) + forced_result = truncator_module._PartResult(value=forced_segment, value_size=100, truncated=True) 
+ monkeypatch.setattr(truncator, "_truncate_segment", lambda *_args, **_kwargs: forced_result) + + result = truncator.truncate(ObjectSegment(value={"a": "b"})) + + assert result.truncated is True + assert isinstance(result.result, StringSegment) diff --git a/api/tests/unit_tests/services/test_webhook_service.py b/api/tests/unit_tests/services/test_webhook_service.py index 0eefdf7209..ffdcc046f9 100644 --- a/api/tests/unit_tests/services/test_webhook_service.py +++ b/api/tests/unit_tests/services/test_webhook_service.py @@ -559,770 +559,3 @@ class TestWebhookServiceUnit: result = _prepare_webhook_execution("test_webhook", is_debug=True) assert result == (mock_trigger, mock_workflow, mock_config, mock_data, None) - - -# === Merged from test_webhook_service_additional.py === - - -from types import SimpleNamespace -from typing import Any, cast -from unittest.mock import MagicMock - -import pytest -from flask import Flask -from graphon.variables.types import SegmentType -from werkzeug.datastructures import FileStorage -from werkzeug.exceptions import RequestEntityTooLarge - -from core.workflow.nodes.trigger_webhook.entities import ( - ContentType, - WebhookBodyParameter, - WebhookData, - WebhookParameter, -) -from models.enums import AppTriggerStatus -from models.model import App -from models.trigger import WorkflowWebhookTrigger -from models.workflow import Workflow -from services.errors.app import QuotaExceededError -from services.trigger import webhook_service as service_module -from services.trigger.webhook_service import WebhookService - - -class _FakeQuery: - def __init__(self, result: Any) -> None: - self._result = result - - def where(self, *args: Any, **kwargs: Any) -> "_FakeQuery": - return self - - def filter(self, *args: Any, **kwargs: Any) -> "_FakeQuery": - return self - - def order_by(self, *args: Any, **kwargs: Any) -> "_FakeQuery": - return self - - def first(self) -> Any: - return self._result - - -class _SessionContext: - def __init__(self, session: Any) -> None: - self._session = session - - def __enter__(self) -> Any: - return self._session - - def __exit__(self, exc_type: Any, exc: Any, tb: Any) -> bool: - return False - - -class _SessionmakerContext: - def __init__(self, session: Any) -> None: - self._session = session - - def begin(self) -> "_SessionmakerContext": - return self - - def __enter__(self) -> Any: - return self._session - - def __exit__(self, exc_type: Any, exc: Any, tb: Any) -> bool: - return False - - -@pytest.fixture -def flask_app() -> Flask: - return Flask(__name__) - - -def _patch_session(monkeypatch: pytest.MonkeyPatch, session: Any) -> None: - monkeypatch.setattr(service_module, "db", SimpleNamespace(engine=MagicMock(), session=MagicMock())) - monkeypatch.setattr(service_module, "Session", lambda *args, **kwargs: _SessionContext(session)) - monkeypatch.setattr(service_module, "sessionmaker", lambda *args, **kwargs: _SessionmakerContext(session)) - - -def _workflow_trigger(**kwargs: Any) -> WorkflowWebhookTrigger: - return cast(WorkflowWebhookTrigger, SimpleNamespace(**kwargs)) - - -def _workflow(**kwargs: Any) -> Workflow: - return cast(Workflow, SimpleNamespace(**kwargs)) - - -def _app(**kwargs: Any) -> App: - return cast(App, SimpleNamespace(**kwargs)) - - -def test_get_webhook_trigger_and_workflow_should_raise_when_webhook_not_found(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - fake_session = MagicMock() - fake_session.scalar.return_value = None - _patch_session(monkeypatch, fake_session) - - # Act / Assert - with 
pytest.raises(ValueError, match="Webhook not found"): - WebhookService.get_webhook_trigger_and_workflow("webhook-1") - - -def test_get_webhook_trigger_and_workflow_should_raise_when_app_trigger_not_found( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = SimpleNamespace(app_id="app-1", node_id="node-1") - fake_session = MagicMock() - fake_session.scalar.side_effect = [webhook_trigger, None] - _patch_session(monkeypatch, fake_session) - - # Act / Assert - with pytest.raises(ValueError, match="App trigger not found"): - WebhookService.get_webhook_trigger_and_workflow("webhook-1") - - -def test_get_webhook_trigger_and_workflow_should_raise_when_app_trigger_rate_limited( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = SimpleNamespace(app_id="app-1", node_id="node-1") - app_trigger = SimpleNamespace(status=AppTriggerStatus.RATE_LIMITED) - fake_session = MagicMock() - fake_session.scalar.side_effect = [webhook_trigger, app_trigger] - _patch_session(monkeypatch, fake_session) - - # Act / Assert - with pytest.raises(ValueError, match="rate limited"): - WebhookService.get_webhook_trigger_and_workflow("webhook-1") - - -def test_get_webhook_trigger_and_workflow_should_raise_when_app_trigger_disabled( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = SimpleNamespace(app_id="app-1", node_id="node-1") - app_trigger = SimpleNamespace(status=AppTriggerStatus.DISABLED) - fake_session = MagicMock() - fake_session.scalar.side_effect = [webhook_trigger, app_trigger] - _patch_session(monkeypatch, fake_session) - - # Act / Assert - with pytest.raises(ValueError, match="disabled"): - WebhookService.get_webhook_trigger_and_workflow("webhook-1") - - -def test_get_webhook_trigger_and_workflow_should_raise_when_workflow_not_found(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - webhook_trigger = SimpleNamespace(app_id="app-1", node_id="node-1") - app_trigger = SimpleNamespace(status=AppTriggerStatus.ENABLED) - fake_session = MagicMock() - fake_session.scalar.side_effect = [webhook_trigger, app_trigger, None] - _patch_session(monkeypatch, fake_session) - - # Act / Assert - with pytest.raises(ValueError, match="Workflow not found"): - WebhookService.get_webhook_trigger_and_workflow("webhook-1") - - -def test_get_webhook_trigger_and_workflow_should_return_values_for_non_debug_mode( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = SimpleNamespace(app_id="app-1", node_id="node-1") - app_trigger = SimpleNamespace(status=AppTriggerStatus.ENABLED) - workflow = MagicMock() - workflow.get_node_config_by_id.return_value = {"data": {"key": "value"}} - - fake_session = MagicMock() - fake_session.scalar.side_effect = [webhook_trigger, app_trigger, workflow] - _patch_session(monkeypatch, fake_session) - - # Act - got_trigger, got_workflow, got_node_config = WebhookService.get_webhook_trigger_and_workflow("webhook-1") - - # Assert - assert got_trigger is webhook_trigger - assert got_workflow is workflow - assert got_node_config == {"data": {"key": "value"}} - - -def test_get_webhook_trigger_and_workflow_should_return_values_for_debug_mode(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - webhook_trigger = SimpleNamespace(app_id="app-1", node_id="node-1") - workflow = MagicMock() - workflow.get_node_config_by_id.return_value = {"data": {"mode": "debug"}} - - fake_session = MagicMock() - fake_session.scalar.side_effect = [webhook_trigger, workflow] - _patch_session(monkeypatch, fake_session) - - # Act - 
got_trigger, got_workflow, got_node_config = WebhookService.get_webhook_trigger_and_workflow( - "webhook-1", is_debug=True - ) - - # Assert - assert got_trigger is webhook_trigger - assert got_workflow is workflow - assert got_node_config == {"data": {"mode": "debug"}} - - -def test_extract_webhook_data_should_use_text_fallback_for_unknown_content_type( - flask_app: Flask, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - warning_mock = MagicMock() - monkeypatch.setattr(service_module.logger, "warning", warning_mock) - webhook_trigger = MagicMock() - - # Act - with flask_app.test_request_context( - "/webhook", - method="POST", - headers={"Content-Type": "application/vnd.custom"}, - data="plain content", - ): - result = WebhookService.extract_webhook_data(webhook_trigger) - - # Assert - assert result["body"] == {"raw": "plain content"} - warning_mock.assert_called_once() - - -def test_extract_webhook_data_should_raise_for_request_too_large( - flask_app: Flask, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - monkeypatch.setattr(service_module.dify_config, "WEBHOOK_REQUEST_BODY_MAX_SIZE", 1) - - # Act / Assert - with flask_app.test_request_context("/webhook", method="POST", data="ab"): - with pytest.raises(RequestEntityTooLarge): - WebhookService.extract_webhook_data(MagicMock()) - - -def test_extract_octet_stream_body_should_return_none_when_empty_payload(flask_app: Flask) -> None: - # Arrange - webhook_trigger = MagicMock() - - # Act - with flask_app.test_request_context("/webhook", method="POST", data=b""): - body, files = WebhookService._extract_octet_stream_body(webhook_trigger) - - # Assert - assert body == {"raw": None} - assert files == {} - - -def test_extract_octet_stream_body_should_return_none_when_processing_raises( - flask_app: Flask, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = MagicMock() - monkeypatch.setattr(WebhookService, "_detect_binary_mimetype", MagicMock(return_value="application/octet-stream")) - monkeypatch.setattr(WebhookService, "_create_file_from_binary", MagicMock(side_effect=RuntimeError("boom"))) - - # Act - with flask_app.test_request_context("/webhook", method="POST", data=b"abc"): - body, files = WebhookService._extract_octet_stream_body(webhook_trigger) - - # Assert - assert body == {"raw": None} - assert files == {} - - -def test_extract_text_body_should_return_empty_string_when_request_read_fails( - flask_app: Flask, - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - monkeypatch.setattr("flask.wrappers.Request.get_data", MagicMock(side_effect=RuntimeError("read error"))) - - # Act - with flask_app.test_request_context("/webhook", method="POST", data="abc"): - body, files = WebhookService._extract_text_body() - - # Assert - assert body == {"raw": ""} - assert files == {} - - -def test_detect_binary_mimetype_should_fallback_when_magic_raises(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - fake_magic = MagicMock() - fake_magic.from_buffer.side_effect = RuntimeError("magic failed") - monkeypatch.setattr(service_module, "magic", fake_magic) - - # Act - result = WebhookService._detect_binary_mimetype(b"binary") - - # Assert - assert result == "application/octet-stream" - - -def test_process_file_uploads_should_use_octet_stream_fallback_when_mimetype_unknown( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = _workflow_trigger(created_by="user-1", tenant_id="tenant-1") - file_obj = MagicMock() - file_obj.to_dict.return_value = {"id": "f-1"} - 
monkeypatch.setattr(WebhookService, "_create_file_from_binary", MagicMock(return_value=file_obj)) - monkeypatch.setattr(service_module.mimetypes, "guess_type", MagicMock(return_value=(None, None))) - - uploaded = MagicMock() - uploaded.filename = "file.unknown" - uploaded.content_type = None - uploaded.read.return_value = b"content" - - # Act - result = WebhookService._process_file_uploads({"f": uploaded}, webhook_trigger) - - # Assert - assert result == {"f": {"id": "f-1"}} - - -def test_create_file_from_binary_should_call_tool_file_manager_and_file_factory( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = _workflow_trigger(created_by="user-1", tenant_id="tenant-1") - manager = MagicMock() - manager.create_file_by_raw.return_value = SimpleNamespace(id="tool-file-1") - monkeypatch.setattr(service_module, "ToolFileManager", MagicMock(return_value=manager)) - expected_file = MagicMock() - monkeypatch.setattr(service_module.file_factory, "build_from_mapping", MagicMock(return_value=expected_file)) - - # Act - result = WebhookService._create_file_from_binary(b"abc", "text/plain", webhook_trigger) - - # Assert - assert result is expected_file - manager.create_file_by_raw.assert_called_once() - - -@pytest.mark.parametrize( - ("raw_value", "param_type", "expected"), - [ - ("42", SegmentType.NUMBER, 42), - ("3.14", SegmentType.NUMBER, 3.14), - ("yes", SegmentType.BOOLEAN, True), - ("no", SegmentType.BOOLEAN, False), - ], -) -def test_convert_form_value_should_convert_supported_types( - raw_value: str, - param_type: str, - expected: Any, -) -> None: - # Arrange - - # Act - result = WebhookService._convert_form_value("param", raw_value, param_type) - - # Assert - assert result == expected - - -def test_convert_form_value_should_raise_for_unsupported_type() -> None: - # Arrange - - # Act / Assert - with pytest.raises(ValueError, match="Unsupported type"): - WebhookService._convert_form_value("p", "x", SegmentType.FILE) - - -def test_validate_json_value_should_return_original_for_unmapped_supported_segment_type( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - warning_mock = MagicMock() - monkeypatch.setattr(service_module.logger, "warning", warning_mock) - - # Act - result = WebhookService._validate_json_value("param", {"x": 1}, "unsupported-type") - - # Assert - assert result == {"x": 1} - warning_mock.assert_called_once() - - -def test_validate_and_convert_value_should_wrap_conversion_errors() -> None: - # Arrange - - # Act / Assert - with pytest.raises(ValueError, match="validation failed"): - WebhookService._validate_and_convert_value("param", "bad", SegmentType.NUMBER, is_form_data=True) - - -def test_process_parameters_should_raise_when_required_parameter_missing() -> None: - # Arrange - raw_params = {"optional": "x"} - config = [WebhookParameter(name="required_param", type=SegmentType.STRING, required=True)] - - # Act / Assert - with pytest.raises(ValueError, match="Required parameter missing"): - WebhookService._process_parameters(raw_params, config, is_form_data=True) - - -def test_process_parameters_should_include_unconfigured_parameters() -> None: - # Arrange - raw_params = {"known": "1", "unknown": "x"} - config = [WebhookParameter(name="known", type=SegmentType.NUMBER, required=False)] - - # Act - result = WebhookService._process_parameters(raw_params, config, is_form_data=True) - - # Assert - assert result == {"known": 1, "unknown": "x"} - - -def test_process_body_parameters_should_raise_when_required_text_raw_is_missing() -> None: - # Arrange - 
- # Act / Assert - with pytest.raises(ValueError, match="Required body content missing"): - WebhookService._process_body_parameters( - raw_body={"raw": ""}, - body_configs=[WebhookBodyParameter(name="raw", required=True)], - content_type=ContentType.TEXT, - ) - - -def test_process_body_parameters_should_skip_file_config_for_multipart_form_data() -> None: - # Arrange - raw_body = {"message": "hello", "extra": "x"} - body_configs = [ - WebhookBodyParameter(name="upload", type=SegmentType.FILE, required=True), - WebhookBodyParameter(name="message", type=SegmentType.STRING, required=True), - ] - - # Act - result = WebhookService._process_body_parameters(raw_body, body_configs, ContentType.FORM_DATA) - - # Assert - assert result == {"message": "hello", "extra": "x"} - - -def test_validate_required_headers_should_accept_sanitized_header_names() -> None: - # Arrange - headers = {"x_api_key": "123"} - configs = [WebhookParameter(name="x-api-key", required=True)] - - # Act - WebhookService._validate_required_headers(headers, configs) - - # Assert - assert True - - -def test_validate_required_headers_should_raise_when_required_header_missing() -> None: - # Arrange - headers = {"x-other": "123"} - configs = [WebhookParameter(name="x-api-key", required=True)] - - # Act / Assert - with pytest.raises(ValueError, match="Required header missing"): - WebhookService._validate_required_headers(headers, configs) - - -def test_validate_http_metadata_should_return_content_type_mismatch_error() -> None: - # Arrange - webhook_data = {"method": "POST", "headers": {"Content-Type": "application/json"}} - node_data = WebhookData(method="post", content_type=ContentType.TEXT) - - # Act - result = WebhookService._validate_http_metadata(webhook_data, node_data) - - # Assert - assert result["valid"] is False - assert "Content-type mismatch" in result["error"] - - -def test_extract_content_type_should_fallback_to_lowercase_header_key() -> None: - # Arrange - headers = {"content-type": "application/json; charset=utf-8"} - - # Act - result = WebhookService._extract_content_type(headers) - - # Assert - assert result == "application/json" - - -def test_build_workflow_inputs_should_include_expected_keys() -> None: - # Arrange - webhook_data = {"headers": {"h": "v"}, "query_params": {"q": 1}, "body": {"b": 2}} - - # Act - result = WebhookService.build_workflow_inputs(webhook_data) - - # Assert - assert result["webhook_data"] == webhook_data - assert result["webhook_headers"] == {"h": "v"} - assert result["webhook_query_params"] == {"q": 1} - assert result["webhook_body"] == {"b": 2} - - -def test_trigger_workflow_execution_should_trigger_async_workflow_successfully(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - webhook_trigger = _workflow_trigger( - app_id="app-1", - node_id="node-1", - tenant_id="tenant-1", - webhook_id="webhook-1", - ) - workflow = _workflow(id="wf-1") - webhook_data = {"body": {"x": 1}} - - session = MagicMock() - _patch_session(monkeypatch, session) - - end_user = SimpleNamespace(id="end-user-1") - monkeypatch.setattr( - service_module.EndUserService, "get_or_create_end_user_by_type", MagicMock(return_value=end_user) - ) - quota_type = SimpleNamespace(TRIGGER=SimpleNamespace(consume=MagicMock())) - monkeypatch.setattr(service_module, "QuotaType", quota_type) - trigger_async_mock = MagicMock() - monkeypatch.setattr(service_module.AsyncWorkflowService, "trigger_workflow_async", trigger_async_mock) - - # Act - WebhookService.trigger_workflow_execution(webhook_trigger, webhook_data, workflow) - - # 
Assert - trigger_async_mock.assert_called_once() - - -def test_trigger_workflow_execution_should_mark_tenant_rate_limited_when_quota_exceeded( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - webhook_trigger = _workflow_trigger( - app_id="app-1", - node_id="node-1", - tenant_id="tenant-1", - webhook_id="webhook-1", - ) - workflow = _workflow(id="wf-1") - - session = MagicMock() - _patch_session(monkeypatch, session) - - monkeypatch.setattr( - service_module.EndUserService, - "get_or_create_end_user_by_type", - MagicMock(return_value=SimpleNamespace(id="end-user-1")), - ) - monkeypatch.setattr( - service_module.QuotaService, - "reserve", - MagicMock(side_effect=QuotaExceededError(feature="trigger", tenant_id="tenant-1", required=1)), - ) - mark_rate_limited_mock = MagicMock() - monkeypatch.setattr(service_module.AppTriggerService, "mark_tenant_triggers_rate_limited", mark_rate_limited_mock) - - # Act / Assert - with pytest.raises(QuotaExceededError): - WebhookService.trigger_workflow_execution(webhook_trigger, {"body": {}}, workflow) - mark_rate_limited_mock.assert_called_once_with("tenant-1") - - -def test_trigger_workflow_execution_should_log_and_reraise_unexpected_errors(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - webhook_trigger = _workflow_trigger( - app_id="app-1", - node_id="node-1", - tenant_id="tenant-1", - webhook_id="webhook-1", - ) - workflow = _workflow(id="wf-1") - - session = MagicMock() - _patch_session(monkeypatch, session) - - monkeypatch.setattr( - service_module.EndUserService, "get_or_create_end_user_by_type", MagicMock(side_effect=RuntimeError("boom")) - ) - logger_exception_mock = MagicMock() - monkeypatch.setattr(service_module.logger, "exception", logger_exception_mock) - - # Act / Assert - with pytest.raises(RuntimeError, match="boom"): - WebhookService.trigger_workflow_execution(webhook_trigger, {"body": {}}, workflow) - logger_exception_mock.assert_called_once() - - -def test_sync_webhook_relationships_should_raise_when_workflow_exceeds_node_limit() -> None: - # Arrange - app = _app(id="app-1", tenant_id="tenant-1", created_by="user-1") - workflow = _workflow( - walk_nodes=lambda _node_type: [ - (f"node-{i}", {}) for i in range(WebhookService.MAX_WEBHOOK_NODES_PER_WORKFLOW + 1) - ] - ) - - # Act / Assert - with pytest.raises(ValueError, match="maximum webhook node limit"): - WebhookService.sync_webhook_relationships(app, workflow) - - -def test_sync_webhook_relationships_should_raise_when_lock_not_acquired(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - app = _app(id="app-1", tenant_id="tenant-1", created_by="user-1") - workflow = _workflow(walk_nodes=lambda _node_type: [("node-1", {})]) - - lock = MagicMock() - lock.acquire.return_value = False - monkeypatch.setattr(service_module.redis_client, "get", MagicMock(return_value=None)) - monkeypatch.setattr(service_module.redis_client, "lock", MagicMock(return_value=lock)) - - # Act / Assert - with pytest.raises(RuntimeError, match="Failed to acquire lock"): - WebhookService.sync_webhook_relationships(app, workflow) - - -def test_sync_webhook_relationships_should_create_missing_records_and_delete_stale_records( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - app = _app(id="app-1", tenant_id="tenant-1", created_by="user-1") - workflow = _workflow(walk_nodes=lambda _node_type: [("node-new", {})]) - - class _WorkflowWebhookTrigger: - app_id = "app_id" - tenant_id = "tenant_id" - webhook_id = "webhook_id" - node_id = "node_id" - - def __init__(self, app_id: str, tenant_id: 
str, node_id: str, webhook_id: str, created_by: str) -> None: - self.id = None - self.app_id = app_id - self.tenant_id = tenant_id - self.node_id = node_id - self.webhook_id = webhook_id - self.created_by = created_by - - class _Select: - def where(self, *args: Any, **kwargs: Any) -> "_Select": - return self - - class _Session: - def __init__(self) -> None: - self.added: list[Any] = [] - self.deleted: list[Any] = [] - self.commit_count = 0 - self.existing_records = [SimpleNamespace(node_id="node-stale")] - - def scalars(self, _stmt: Any) -> Any: - return SimpleNamespace(all=lambda: self.existing_records) - - def add(self, obj: Any) -> None: - self.added.append(obj) - - def flush(self) -> None: - for idx, obj in enumerate(self.added, start=1): - if obj.id is None: - obj.id = f"rec-{idx}" - - def commit(self) -> None: - self.commit_count += 1 - - def delete(self, obj: Any) -> None: - self.deleted.append(obj) - - lock = MagicMock() - lock.acquire.return_value = True - lock.release.return_value = None - - fake_session = _Session() - - monkeypatch.setattr(service_module, "WorkflowWebhookTrigger", _WorkflowWebhookTrigger) - monkeypatch.setattr(service_module, "select", MagicMock(return_value=_Select())) - monkeypatch.setattr(service_module.redis_client, "get", MagicMock(return_value=None)) - monkeypatch.setattr(service_module.redis_client, "lock", MagicMock(return_value=lock)) - redis_set_mock = MagicMock() - redis_delete_mock = MagicMock() - monkeypatch.setattr(service_module.redis_client, "set", redis_set_mock) - monkeypatch.setattr(service_module.redis_client, "delete", redis_delete_mock) - monkeypatch.setattr(WebhookService, "generate_webhook_id", MagicMock(return_value="generated-webhook-id")) - _patch_session(monkeypatch, fake_session) - - # Act - WebhookService.sync_webhook_relationships(app, workflow) - - # Assert - assert len(fake_session.added) == 1 - assert len(fake_session.deleted) == 1 - redis_set_mock.assert_called_once() - redis_delete_mock.assert_called_once() - lock.release.assert_called_once() - - -def test_sync_webhook_relationships_should_log_when_lock_release_fails(monkeypatch: pytest.MonkeyPatch) -> None: - # Arrange - app = _app(id="app-1", tenant_id="tenant-1", created_by="user-1") - workflow = _workflow(walk_nodes=lambda _node_type: []) - - class _Select: - def where(self, *args: Any, **kwargs: Any) -> "_Select": - return self - - class _Session: - def scalars(self, _stmt: Any) -> Any: - return SimpleNamespace(all=lambda: []) - - def commit(self) -> None: - return None - - lock = MagicMock() - lock.acquire.return_value = True - lock.release.side_effect = RuntimeError("release failed") - - logger_exception_mock = MagicMock() - - monkeypatch.setattr(service_module, "select", MagicMock(return_value=_Select())) - monkeypatch.setattr(service_module.redis_client, "get", MagicMock(return_value=None)) - monkeypatch.setattr(service_module.redis_client, "lock", MagicMock(return_value=lock)) - monkeypatch.setattr(service_module.logger, "exception", logger_exception_mock) - _patch_session(monkeypatch, _Session()) - - # Act - WebhookService.sync_webhook_relationships(app, workflow) - - # Assert - assert logger_exception_mock.call_count == 1 - - -def test_generate_webhook_response_should_fallback_when_response_body_is_not_json() -> None: - # Arrange - node_config = {"data": {"status_code": 200, "response_body": "{bad-json"}} - - # Act - body, status = WebhookService.generate_webhook_response(node_config) - - # Assert - assert status == 200 - assert "message" in body - - -def 
test_generate_webhook_id_should_return_24_character_identifier() -> None: - # Arrange - - # Act - webhook_id = WebhookService.generate_webhook_id() - - # Assert - assert isinstance(webhook_id, str) - assert len(webhook_id) == 24 - - -def test_sanitize_key_should_return_original_value_for_non_string_input() -> None: - # Arrange - - # Act - result = WebhookService._sanitize_key(123) # type: ignore[arg-type] - - # Assert - assert result == 123 diff --git a/api/tests/unit_tests/services/test_webhook_service_additional.py b/api/tests/unit_tests/services/test_webhook_service_additional.py new file mode 100644 index 0000000000..776cb5dc3f --- /dev/null +++ b/api/tests/unit_tests/services/test_webhook_service_additional.py @@ -0,0 +1,292 @@ +from types import SimpleNamespace +from typing import Any +from unittest.mock import MagicMock + +import pytest +from flask import Flask +from werkzeug.exceptions import RequestEntityTooLarge + +from core.workflow.nodes.trigger_webhook.entities import ( + ContentType, + WebhookBodyParameter, + WebhookData, + WebhookParameter, +) +from graphon.variables.types import SegmentType +from services.trigger import webhook_service as service_module +from services.trigger.webhook_service import WebhookService + + +class _FakeQuery: + def __init__(self, result: Any) -> None: + self._result = result + + def where(self, *args: Any, **kwargs: Any) -> "_FakeQuery": + return self + + def filter(self, *args: Any, **kwargs: Any) -> "_FakeQuery": + return self + + def order_by(self, *args: Any, **kwargs: Any) -> "_FakeQuery": + return self + + def first(self) -> Any: + return self._result + + +@pytest.fixture +def flask_app() -> Flask: + return Flask(__name__) + + +def _workflow_trigger(**kwargs: Any) -> Any: + return SimpleNamespace(**kwargs) + + +class TestWebhookServiceExtractionFallbacks: + def test_extract_webhook_data_should_use_text_fallback_for_unknown_content_type( + self, + flask_app: Flask, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + warning_mock = MagicMock() + monkeypatch.setattr(service_module.logger, "warning", warning_mock) + webhook_trigger = MagicMock() + + with flask_app.test_request_context( + "/webhook", + method="POST", + headers={"Content-Type": "application/vnd.custom"}, + data="plain content", + ): + result = WebhookService.extract_webhook_data(webhook_trigger) + + assert result["body"] == {"raw": "plain content"} + warning_mock.assert_called_once() + + def test_extract_webhook_data_should_raise_for_request_too_large( + self, + flask_app: Flask, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + monkeypatch.setattr(service_module.dify_config, "WEBHOOK_REQUEST_BODY_MAX_SIZE", 1) + + with flask_app.test_request_context("/webhook", method="POST", data="ab"): + with pytest.raises(RequestEntityTooLarge): + WebhookService.extract_webhook_data(MagicMock()) + + def test_extract_octet_stream_body_should_return_none_when_empty_payload(self, flask_app: Flask) -> None: + webhook_trigger = MagicMock() + + with flask_app.test_request_context("/webhook", method="POST", data=b""): + body, files = WebhookService._extract_octet_stream_body(webhook_trigger) + + assert body == {"raw": None} + assert files == {} + + def test_extract_octet_stream_body_should_return_none_when_processing_raises( + self, + flask_app: Flask, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + webhook_trigger = MagicMock() + monkeypatch.setattr( + WebhookService, "_detect_binary_mimetype", MagicMock(return_value="application/octet-stream") + ) + monkeypatch.setattr(WebhookService, 
"_create_file_from_binary", MagicMock(side_effect=RuntimeError("boom"))) + + with flask_app.test_request_context("/webhook", method="POST", data=b"abc"): + body, files = WebhookService._extract_octet_stream_body(webhook_trigger) + + assert body == {"raw": None} + assert files == {} + + def test_extract_text_body_should_return_empty_string_when_request_read_fails( + self, + flask_app: Flask, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + monkeypatch.setattr("flask.wrappers.Request.get_data", MagicMock(side_effect=RuntimeError("read error"))) + + with flask_app.test_request_context("/webhook", method="POST", data="abc"): + body, files = WebhookService._extract_text_body() + + assert body == {"raw": ""} + assert files == {} + + def test_detect_binary_mimetype_should_fallback_when_magic_raises( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + fake_magic = MagicMock() + fake_magic.from_buffer.side_effect = RuntimeError("magic failed") + monkeypatch.setattr(service_module, "magic", fake_magic) + + result = WebhookService._detect_binary_mimetype(b"binary") + + assert result == "application/octet-stream" + + def test_process_file_uploads_should_use_octet_stream_fallback_when_mimetype_unknown( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + webhook_trigger = _workflow_trigger(created_by="user-1", tenant_id="tenant-1") + file_obj = MagicMock() + file_obj.to_dict.return_value = {"id": "f-1"} + monkeypatch.setattr(WebhookService, "_create_file_from_binary", MagicMock(return_value=file_obj)) + monkeypatch.setattr(service_module.mimetypes, "guess_type", MagicMock(return_value=(None, None))) + + uploaded = MagicMock() + uploaded.filename = "file.unknown" + uploaded.content_type = None + uploaded.read.return_value = b"content" + + result = WebhookService._process_file_uploads({"f": uploaded}, webhook_trigger) + + assert result == {"f": {"id": "f-1"}} + + def test_create_file_from_binary_should_call_tool_file_manager_and_file_factory( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + webhook_trigger = _workflow_trigger(created_by="user-1", tenant_id="tenant-1") + manager = MagicMock() + manager.create_file_by_raw.return_value = SimpleNamespace(id="tool-file-1") + monkeypatch.setattr(service_module, "ToolFileManager", MagicMock(return_value=manager)) + expected_file = MagicMock() + monkeypatch.setattr(service_module.file_factory, "build_from_mapping", MagicMock(return_value=expected_file)) + + result = WebhookService._create_file_from_binary(b"abc", "text/plain", webhook_trigger) + + assert result is expected_file + manager.create_file_by_raw.assert_called_once() + + +class TestWebhookServiceValidationAndConversion: + @pytest.mark.parametrize( + ("raw_value", "param_type", "expected"), + [ + ("42", SegmentType.NUMBER, 42), + ("3.14", SegmentType.NUMBER, 3.14), + ("yes", SegmentType.BOOLEAN, True), + ("no", SegmentType.BOOLEAN, False), + ], + ) + def test_convert_form_value_should_convert_supported_types( + self, + raw_value: str, + param_type: str, + expected: Any, + ) -> None: + result = WebhookService._convert_form_value("param", raw_value, param_type) + assert result == expected + + def test_convert_form_value_should_raise_for_unsupported_type(self) -> None: + with pytest.raises(ValueError, match="Unsupported type"): + WebhookService._convert_form_value("p", "x", SegmentType.FILE) + + def test_validate_json_value_should_return_original_for_unmapped_supported_segment_type( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + warning_mock = MagicMock() + 
monkeypatch.setattr(service_module.logger, "warning", warning_mock) + + result = WebhookService._validate_json_value("param", {"x": 1}, "unsupported-type") + + assert result == {"x": 1} + warning_mock.assert_called_once() + + def test_validate_and_convert_value_should_wrap_conversion_errors(self) -> None: + with pytest.raises(ValueError, match="validation failed"): + WebhookService._validate_and_convert_value("param", "bad", SegmentType.NUMBER, is_form_data=True) + + def test_process_parameters_should_raise_when_required_parameter_missing(self) -> None: + raw_params = {"optional": "x"} + config = [WebhookParameter(name="required_param", type=SegmentType.STRING, required=True)] + + with pytest.raises(ValueError, match="Required parameter missing"): + WebhookService._process_parameters(raw_params, config, is_form_data=True) + + def test_process_parameters_should_include_unconfigured_parameters(self) -> None: + raw_params = {"known": "1", "unknown": "x"} + config = [WebhookParameter(name="known", type=SegmentType.NUMBER, required=False)] + + result = WebhookService._process_parameters(raw_params, config, is_form_data=True) + + assert result == {"known": 1, "unknown": "x"} + + def test_process_body_parameters_should_raise_when_required_text_raw_is_missing(self) -> None: + with pytest.raises(ValueError, match="Required body content missing"): + WebhookService._process_body_parameters( + raw_body={"raw": ""}, + body_configs=[WebhookBodyParameter(name="raw", required=True)], + content_type=ContentType.TEXT, + ) + + def test_process_body_parameters_should_skip_file_config_for_multipart_form_data(self) -> None: + raw_body = {"message": "hello", "extra": "x"} + body_configs = [ + WebhookBodyParameter(name="upload", type=SegmentType.FILE, required=True), + WebhookBodyParameter(name="message", type=SegmentType.STRING, required=True), + ] + + result = WebhookService._process_body_parameters(raw_body, body_configs, ContentType.FORM_DATA) + + assert result == {"message": "hello", "extra": "x"} + + def test_validate_required_headers_should_accept_sanitized_header_names(self) -> None: + headers = {"x_api_key": "123"} + configs = [WebhookParameter(name="x-api-key", required=True)] + + WebhookService._validate_required_headers(headers, configs) + + def test_validate_required_headers_should_raise_when_required_header_missing(self) -> None: + headers = {"x-other": "123"} + configs = [WebhookParameter(name="x-api-key", required=True)] + + with pytest.raises(ValueError, match="Required header missing"): + WebhookService._validate_required_headers(headers, configs) + + def test_validate_http_metadata_should_return_content_type_mismatch_error(self) -> None: + webhook_data = {"method": "POST", "headers": {"Content-Type": "application/json"}} + node_data = WebhookData(method="post", content_type=ContentType.TEXT) + + result = WebhookService._validate_http_metadata(webhook_data, node_data) + + assert result["valid"] is False + assert "Content-type mismatch" in result["error"] + + def test_extract_content_type_should_fallback_to_lowercase_header_key(self) -> None: + headers = {"content-type": "application/json; charset=utf-8"} + assert WebhookService._extract_content_type(headers) == "application/json" + + def test_build_workflow_inputs_should_include_expected_keys(self) -> None: + webhook_data = {"headers": {"h": "v"}, "query_params": {"q": 1}, "body": {"b": 2}} + + result = WebhookService.build_workflow_inputs(webhook_data) + + assert result["webhook_data"] == webhook_data + assert result["webhook_headers"] == {"h": 
"v"} + assert result["webhook_query_params"] == {"q": 1} + assert result["webhook_body"] == {"b": 2} + + +class TestWebhookServiceUtilities: + def test_generate_webhook_response_should_fallback_when_response_body_is_not_json(self) -> None: + node_config = {"data": {"status_code": 200, "response_body": "{bad-json"}} + + body, status = WebhookService.generate_webhook_response(node_config) + + assert status == 200 + assert "message" in body + + def test_generate_webhook_id_should_return_24_character_identifier(self) -> None: + webhook_id = WebhookService.generate_webhook_id() + + assert isinstance(webhook_id, str) + assert len(webhook_id) == 24 + + def test_sanitize_key_should_return_original_value_for_non_string_input(self) -> None: + result = WebhookService._sanitize_key(123) # type: ignore[arg-type] + assert result == 123 diff --git a/api/tests/unit_tests/services/test_website_service.py b/api/tests/unit_tests/services/test_website_service.py index b0ddc7388a..2024aec13a 100644 --- a/api/tests/unit_tests/services/test_website_service.py +++ b/api/tests/unit_tests/services/test_website_service.py @@ -89,7 +89,7 @@ def test_website_crawl_api_request_from_args_valid_and_to_crawl_request() -> Non ({"provider": "firecrawl", "url": "https://example.com"}, "Options are required"), ], ) -def test_website_crawl_api_request_from_args_requires_fields(args: dict, missing_msg: str) -> None: +def test_website_crawl_api_request_from_args_requires_fields(args: dict[str, Any], missing_msg: str) -> None: with pytest.raises(ValueError, match=missing_msg): WebsiteCrawlApiRequest.from_args(args) diff --git a/api/tests/unit_tests/services/test_workflow_collaboration_service.py b/api/tests/unit_tests/services/test_workflow_collaboration_service.py new file mode 100644 index 0000000000..8a6addfece --- /dev/null +++ b/api/tests/unit_tests/services/test_workflow_collaboration_service.py @@ -0,0 +1,608 @@ +from unittest.mock import Mock, patch + +import pytest + +from repositories.workflow_collaboration_repository import WorkflowCollaborationRepository +from services.workflow_collaboration_service import WorkflowCollaborationService + + +class TestWorkflowCollaborationService: + @pytest.fixture + def service(self) -> tuple[WorkflowCollaborationService, Mock, Mock]: + repository = Mock(spec=WorkflowCollaborationRepository) + socketio = Mock() + return WorkflowCollaborationService(repository, socketio), repository, socketio + + def test_authorize_and_join_workflow_room_returns_leader_status( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, socketio = service + socketio.get_session.return_value = { + "user_id": "u-1", + "username": "Jane", + "avatar": None, + "tenant_id": "t-1", + } + + with ( + patch.object(collaboration_service, "_can_access_workflow", return_value=True), + patch.object(collaboration_service, "get_or_set_leader", return_value="sid-1"), + patch.object(collaboration_service, "broadcast_online_users"), + ): + # Act + result = collaboration_service.authorize_and_join_workflow_room("wf-1", "sid-1") + + # Assert + assert result == ("u-1", True) + repository.set_session_info.assert_called_once() + socketio.enter_room.assert_called_once_with("sid-1", "wf-1") + socketio.emit.assert_called_once_with("status", {"isLeader": True}, room="sid-1") + + def test_authorize_and_join_workflow_room_returns_none_when_missing_user( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, 
_repository, socketio = service + socketio.get_session.return_value = {} + + # Act + result = collaboration_service.authorize_and_join_workflow_room("wf-1", "sid-1") + + # Assert + assert result is None + + def test_authorize_and_join_workflow_room_returns_none_when_missing_tenant( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + socketio.get_session.return_value = {"user_id": "u-1", "username": "Jane", "avatar": None} + + result = collaboration_service.authorize_and_join_workflow_room("wf-1", "sid-1") + + assert result is None + repository.set_session_info.assert_not_called() + socketio.enter_room.assert_not_called() + socketio.emit.assert_not_called() + + def test_authorize_and_join_workflow_room_returns_none_when_workflow_is_not_accessible( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + socketio.get_session.return_value = { + "user_id": "u-1", + "username": "Jane", + "avatar": None, + "tenant_id": "t-1", + } + + with patch.object(collaboration_service, "_can_access_workflow", return_value=False): + result = collaboration_service.authorize_and_join_workflow_room("wf-1", "sid-1") + + assert result is None + repository.set_session_info.assert_not_called() + socketio.enter_room.assert_not_called() + socketio.emit.assert_not_called() + + def test_repr_and_save_socket_identity(self, service: tuple[WorkflowCollaborationService, Mock, Mock]) -> None: + collaboration_service, _repository, socketio = service + user = Mock() + user.id = "u-1" + user.name = "Jane" + user.avatar = "avatar.png" + user.current_tenant_id = "t-1" + + assert "WorkflowCollaborationService" in repr(collaboration_service) + + collaboration_service.save_socket_identity("sid-1", user) + + socketio.save_session.assert_called_once_with( + "sid-1", + {"user_id": "u-1", "username": "Jane", "avatar": "avatar.png", "tenant_id": "t-1"}, + ) + + def test_can_access_workflow_uses_session_factory( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, _repository, _socketio = service + session = Mock() + session.scalar.return_value = "wf-1" + session_context = Mock() + session_context.__enter__ = Mock(return_value=session) + session_context.__exit__ = Mock(return_value=False) + + with patch( + "services.workflow_collaboration_service.session_factory.create_session", + return_value=session_context, + ): + result = collaboration_service._can_access_workflow("wf-1", "tenant-1") + + assert result is True + session.scalar.assert_called_once() + + def test_relay_collaboration_event_unauthorized( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_sid_mapping.return_value = None + + # Act + result = collaboration_service.relay_collaboration_event("sid-1", {}) + + # Assert + assert result == ({"msg": "unauthorized"}, 401) + + def test_relay_collaboration_event_emits_update( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + payload = {"type": "mouse_move", "data": {"x": 1}, "timestamp": 123} + + # Act + result = collaboration_service.relay_collaboration_event("sid-1", payload) + + # Assert + assert result == ({"msg": 
"event_broadcasted"}, 200) + socketio.emit.assert_called_once_with( + "collaboration_update", + {"type": "mouse_move", "userId": "u-1", "data": {"x": 1}, "timestamp": 123}, + room="wf-1", + skip_sid="sid-1", + ) + + def test_relay_collaboration_event_requires_event_type( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, _socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + + result = collaboration_service.relay_collaboration_event("sid-1", {"data": {"x": 1}}) + + assert result == ({"msg": "invalid event type"}, 400) + + def test_relay_collaboration_event_sync_request_forwards_to_active_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + repository.get_current_leader.return_value = "sid-leader" + payload = {"type": "sync_request", "data": {"reason": "join"}, "timestamp": 123} + + with ( + patch.object(collaboration_service, "refresh_session_state"), + patch.object(collaboration_service, "is_session_active", return_value=True), + ): + result = collaboration_service.relay_collaboration_event("sid-1", payload) + + assert result == ({"msg": "sync_request_forwarded"}, 200) + socketio.emit.assert_called_once_with( + "collaboration_update", + {"type": "sync_request", "userId": "u-1", "data": {"reason": "join"}, "timestamp": 123}, + room="sid-leader", + ) + repository.set_leader.assert_not_called() + + def test_relay_collaboration_event_sync_request_reelects_active_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + repository.get_current_leader.return_value = "sid-old" + repository.list_sessions.return_value = [ + { + "user_id": "u-2", + "username": "B", + "avatar": None, + "sid": "sid-2", + "connected_at": 1, + "graph_active": True, + }, + { + "user_id": "u-3", + "username": "C", + "avatar": None, + "sid": "sid-3", + "connected_at": 2, + "graph_active": True, + }, + ] + payload = {"type": "sync_request", "data": {"reason": "join"}, "timestamp": 123} + + def _is_session_active(_workflow_id: str, session_sid: str) -> bool: + return session_sid != "sid-old" + + with ( + patch.object(collaboration_service, "refresh_session_state"), + patch.object(collaboration_service, "broadcast_leader_change") as broadcast_leader_change, + patch.object(collaboration_service, "is_session_active", side_effect=_is_session_active), + ): + result = collaboration_service.relay_collaboration_event("sid-2", payload) + + assert result == ({"msg": "sync_request_forwarded"}, 200) + repository.delete_leader.assert_called_once_with("wf-1") + repository.set_leader.assert_called_once_with("wf-1", "sid-2") + broadcast_leader_change.assert_called_once_with("wf-1", "sid-2") + socketio.emit.assert_called_once_with( + "collaboration_update", + {"type": "sync_request", "userId": "u-1", "data": {"reason": "join"}, "timestamp": 123}, + room="sid-2", + ) + + def test_relay_collaboration_event_sync_request_returns_when_no_active_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + 
repository.get_current_leader.return_value = "sid-old" + repository.list_sessions.return_value = [] + payload = {"type": "sync_request", "data": {"reason": "join"}, "timestamp": 123} + + with ( + patch.object(collaboration_service, "refresh_session_state"), + patch.object(collaboration_service, "is_session_active", return_value=False), + ): + result = collaboration_service.relay_collaboration_event("sid-2", payload) + + assert result == ({"msg": "no_active_leader"}, 200) + repository.delete_leader.assert_called_once_with("wf-1") + socketio.emit.assert_not_called() + + def test_relay_graph_event_unauthorized(self, service: tuple[WorkflowCollaborationService, Mock, Mock]) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_sid_mapping.return_value = None + + # Act + result = collaboration_service.relay_graph_event("sid-1", {"nodes": []}) + + # Assert + assert result == ({"msg": "unauthorized"}, 401) + + def test_disconnect_session_no_mapping(self, service: tuple[WorkflowCollaborationService, Mock, Mock]) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_sid_mapping.return_value = None + + # Act + collaboration_service.disconnect_session("sid-1") + + # Assert + repository.delete_session.assert_not_called() + + def test_disconnect_session_cleans_up(self, service: tuple[WorkflowCollaborationService, Mock, Mock]) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + + with ( + patch.object(collaboration_service, "handle_leader_disconnect") as handle_leader_disconnect, + patch.object(collaboration_service, "broadcast_online_users") as broadcast_online_users, + ): + # Act + collaboration_service.disconnect_session("sid-1") + + # Assert + repository.delete_session.assert_called_once_with("wf-1", "sid-1") + handle_leader_disconnect.assert_called_once_with("wf-1", "sid-1") + broadcast_online_users.assert_called_once_with("wf-1") + + def test_get_or_set_leader_returns_active_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.return_value = "sid-1" + + with patch.object(collaboration_service, "is_session_active", return_value=True): + # Act + result = collaboration_service.get_or_set_leader("wf-1", "sid-2") + + # Assert + assert result == "sid-1" + repository.set_leader_if_absent.assert_not_called() + + def test_get_or_set_leader_replaces_dead_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.return_value = "sid-1" + repository.set_leader_if_absent.return_value = True + repository.list_sessions.return_value = [ + { + "user_id": "u-2", + "username": "B", + "avatar": None, + "sid": "sid-2", + "connected_at": 1, + "graph_active": True, + } + ] + + with ( + patch.object(collaboration_service, "is_session_active", side_effect=lambda _wf, sid: sid != "sid-1"), + patch.object(collaboration_service, "broadcast_leader_change") as broadcast_leader_change, + ): + # Act + result = collaboration_service.get_or_set_leader("wf-1", "sid-2") + + # Assert + assert result == "sid-2" + repository.delete_session.assert_called_once_with("wf-1", "sid-1") + repository.delete_leader.assert_called_once_with("wf-1") + 
broadcast_leader_change.assert_called_once_with("wf-1", "sid-2") + + def test_get_or_set_leader_falls_back_to_existing( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.side_effect = [None, "sid-3"] + repository.set_leader_if_absent.return_value = False + repository.list_sessions.return_value = [ + { + "user_id": "u-2", + "username": "B", + "avatar": None, + "sid": "sid-2", + "connected_at": 1, + "graph_active": True, + } + ] + + # Act + result = collaboration_service.get_or_set_leader("wf-1", "sid-2") + + # Assert + assert result == "sid-3" + + def test_get_or_set_leader_returns_sid_when_leader_still_missing( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, _socketio = service + repository.get_current_leader.side_effect = [None, None] + repository.set_leader_if_absent.return_value = False + + result = collaboration_service.get_or_set_leader("wf-1", "sid-2") + + assert result == "sid-2" + + def test_handle_leader_disconnect_elects_new( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.return_value = "sid-1" + repository.list_sessions.return_value = [ + { + "user_id": "u-2", + "username": "B", + "avatar": None, + "sid": "sid-2", + "connected_at": 1, + "graph_active": True, + } + ] + + with ( + patch.object(collaboration_service, "is_session_active", return_value=True), + patch.object(collaboration_service, "broadcast_leader_change") as broadcast_leader_change, + ): + # Act + collaboration_service.handle_leader_disconnect("wf-1", "sid-1") + + # Assert + repository.set_leader.assert_called_once_with("wf-1", "sid-2") + broadcast_leader_change.assert_called_once_with("wf-1", "sid-2") + + def test_handle_leader_disconnect_clears_when_empty( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.return_value = "sid-1" + repository.list_sessions.return_value = [] + + # Act + collaboration_service.handle_leader_disconnect("wf-1", "sid-1") + + # Assert + repository.delete_leader.assert_called_once_with("wf-1") + + def test_handle_leader_disconnect_ignores_non_leader_or_missing_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, _socketio = service + + repository.get_current_leader.return_value = None + collaboration_service.handle_leader_disconnect("wf-1", "sid-1") + + repository.get_current_leader.return_value = "sid-leader" + collaboration_service.handle_leader_disconnect("wf-1", "sid-other") + + repository.set_leader.assert_not_called() + repository.delete_leader.assert_not_called() + + def test_broadcast_leader_change_logs_emit_errors( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + repository.get_session_sids.return_value = ["sid-1", "sid-2"] + socketio.emit.side_effect = [RuntimeError("boom"), None] + + with patch("services.workflow_collaboration_service.logging.exception") as exception_mock: + collaboration_service.broadcast_leader_change("wf-1", "sid-2") + + assert exception_mock.call_count == 1 + + def test_broadcast_online_users_sorts_and_emits( + self, service: 
tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, socketio = service + repository.list_sessions.return_value = [ + {"user_id": "u-1", "username": "A", "avatar": None, "sid": "sid-1", "connected_at": 3}, + {"user_id": "u-2", "username": "B", "avatar": None, "sid": "sid-2", "connected_at": 1}, + ] + repository.get_current_leader.return_value = "sid-1" + + with patch.object(collaboration_service, "is_session_active", return_value=True): + # Act + collaboration_service.broadcast_online_users("wf-1") + + # Assert + socketio.emit.assert_called_once_with( + "online_users", + { + "workflow_id": "wf-1", + "users": [ + {"user_id": "u-2", "username": "B", "avatar": None, "sid": "sid-2", "connected_at": 1}, + {"user_id": "u-1", "username": "A", "avatar": None, "sid": "sid-1", "connected_at": 3}, + ], + "leader": "sid-1", + }, + room="wf-1", + ) + + def test_broadcast_online_users_reassigns_missing_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, socketio = service + users = [{"user_id": "u-2", "username": "B", "avatar": None, "sid": "sid-2", "connected_at": 1}] + repository.get_current_leader.return_value = "sid-old" + + with ( + patch.object(collaboration_service, "_prune_inactive_sessions", return_value=users), + patch.object(collaboration_service, "_select_graph_leader", return_value="sid-2"), + patch.object(collaboration_service, "broadcast_leader_change") as broadcast_leader_change, + ): + collaboration_service.broadcast_online_users("wf-1") + + repository.delete_leader.assert_called_once_with("wf-1") + repository.set_leader.assert_called_once_with("wf-1", "sid-2") + broadcast_leader_change.assert_called_once_with("wf-1", "sid-2") + socketio.emit.assert_called_once_with( + "online_users", + {"workflow_id": "wf-1", "users": users, "leader": "sid-2"}, + room="wf-1", + ) + + def test_refresh_session_state_expires_active_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.return_value = "sid-1" + + with patch.object(collaboration_service, "is_session_active", return_value=True): + # Act + collaboration_service.refresh_session_state("wf-1", "sid-1") + + # Assert + repository.refresh_session_state.assert_called_once_with("wf-1", "sid-1") + repository.expire_leader.assert_called_once_with("wf-1") + repository.set_leader.assert_not_called() + + def test_refresh_session_state_sets_leader_when_missing( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + # Arrange + collaboration_service, repository, _socketio = service + repository.get_current_leader.return_value = None + repository.list_sessions.return_value = [ + { + "user_id": "u-2", + "username": "B", + "avatar": None, + "sid": "sid-2", + "connected_at": 1, + "graph_active": True, + } + ] + + with ( + patch.object(collaboration_service, "is_session_active", return_value=True), + patch.object(collaboration_service, "broadcast_leader_change") as broadcast_leader_change, + ): + # Act + collaboration_service.refresh_session_state("wf-1", "sid-2") + + # Assert + repository.set_leader.assert_called_once_with("wf-1", "sid-2") + broadcast_leader_change.assert_called_once_with("wf-1", "sid-2") + + def test_refresh_session_state_replaces_inactive_existing_leader( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, 
repository, _socketio = service + repository.get_current_leader.return_value = "sid-old" + + with ( + patch.object(collaboration_service, "is_session_active", return_value=False), + patch.object(collaboration_service, "broadcast_leader_change") as broadcast_leader_change, + ): + collaboration_service.refresh_session_state("wf-1", "sid-new") + + repository.delete_leader.assert_called_once_with("wf-1") + repository.set_leader.assert_called_once_with("wf-1", "sid-new") + broadcast_leader_change.assert_called_once_with("wf-1", "sid-new") + + def test_relay_graph_event_emits_update(self, service: tuple[WorkflowCollaborationService, Mock, Mock]) -> None: + # Arrange + collaboration_service, repository, socketio = service + repository.get_sid_mapping.return_value = {"workflow_id": "wf-1", "user_id": "u-1"} + + # Act + result = collaboration_service.relay_graph_event("sid-1", {"nodes": []}) + + # Assert + assert result == ({"msg": "graph_update_broadcasted"}, 200) + repository.refresh_session_state.assert_called_once_with("wf-1", "sid-1") + socketio.emit.assert_called_once_with("graph_update", {"nodes": []}, room="wf-1", skip_sid="sid-1") + + def test_prune_inactive_sessions_handles_empty_and_removes_stale( + self, service: tuple[WorkflowCollaborationService, Mock, Mock] + ) -> None: + collaboration_service, repository, _socketio = service + repository.list_sessions.return_value = [] + assert collaboration_service._prune_inactive_sessions("wf-1") == [] + + active = {"sid": "sid-1", "user_id": "u-1", "connected_at": 1} + stale = {"sid": "sid-2", "user_id": "u-2", "connected_at": 2} + repository.list_sessions.return_value = [active, stale] + + with patch.object( + collaboration_service, + "is_session_active", + side_effect=lambda _workflow_id, sid: sid == "sid-1", + ): + users = collaboration_service._prune_inactive_sessions("wf-1") + + assert users == [active] + repository.delete_session.assert_called_with("wf-1", "sid-2") + + def test_is_session_active_guard_branches(self, service: tuple[WorkflowCollaborationService, Mock, Mock]) -> None: + collaboration_service, repository, socketio = service + socketio.manager.is_connected.return_value = True + repository.session_exists.return_value = True + repository.sid_mapping_exists.return_value = True + + assert collaboration_service.is_session_active("wf-1", "") is False + + socketio.manager.is_connected.return_value = False + assert collaboration_service.is_session_active("wf-1", "sid-1") is False + + socketio.manager.is_connected.side_effect = AttributeError("missing manager") + assert collaboration_service.is_session_active("wf-1", "sid-1") is False + socketio.manager.is_connected.side_effect = None + + socketio.manager.is_connected.return_value = True + repository.session_exists.return_value = False + assert collaboration_service.is_session_active("wf-1", "sid-1") is False + + repository.session_exists.return_value = True + repository.sid_mapping_exists.return_value = False + assert collaboration_service.is_session_active("wf-1", "sid-1") is False diff --git a/api/tests/unit_tests/services/test_workflow_comment_service.py b/api/tests/unit_tests/services/test_workflow_comment_service.py new file mode 100644 index 0000000000..e6db068e07 --- /dev/null +++ b/api/tests/unit_tests/services/test_workflow_comment_service.py @@ -0,0 +1,578 @@ +from unittest.mock import MagicMock, Mock, patch + +import pytest +from werkzeug.exceptions import Forbidden, NotFound + +from services import workflow_comment_service as service_module +from 
services.workflow_comment_service import WorkflowCommentService + + +@pytest.fixture +def mock_session(monkeypatch: pytest.MonkeyPatch) -> Mock: + session = Mock() + context_manager = MagicMock() + context_manager.__enter__.return_value = session + context_manager.__exit__.return_value = False + mock_db = MagicMock() + mock_db.engine = Mock() + empty_scalars = Mock() + empty_scalars.all.return_value = [] + session.scalars.return_value = empty_scalars + monkeypatch.setattr(service_module, "Session", Mock(return_value=context_manager)) + monkeypatch.setattr(service_module, "db", mock_db) + monkeypatch.setattr(service_module.send_workflow_comment_mention_email_task, "delay", Mock()) + return session + + +def _mock_scalars(result_list: list[object]) -> Mock: + scalars = Mock() + scalars.all.return_value = result_list + return scalars + + +class TestWorkflowCommentService: + def test_validate_content_rejects_empty(self) -> None: + with pytest.raises(ValueError): + WorkflowCommentService._validate_content(" ") + + def test_validate_content_rejects_too_long(self) -> None: + with pytest.raises(ValueError): + WorkflowCommentService._validate_content("a" * 1001) + + def test_filter_valid_mentioned_user_ids_filters_by_tenant_and_preserves_order(self, mock_session: Mock) -> None: + tenant_member_1 = "123e4567-e89b-12d3-a456-426614174000" + tenant_member_2 = "123e4567-e89b-12d3-a456-426614174002" + non_tenant_member = "123e4567-e89b-12d3-a456-426614174001" + mock_session.scalars.return_value = _mock_scalars([tenant_member_1, tenant_member_2]) + + result = WorkflowCommentService._filter_valid_mentioned_user_ids( + [ + tenant_member_1, + "", + 123, # type: ignore[list-item] + tenant_member_1, + non_tenant_member, + tenant_member_2, + ], + session=mock_session, + tenant_id="tenant-1", + ) + + assert result == [ + tenant_member_1, + tenant_member_2, + ] + + def test_format_comment_excerpt_handles_short_and_long_limits(self) -> None: + assert WorkflowCommentService._format_comment_excerpt(" hello ", max_length=10) == "hello" + assert WorkflowCommentService._format_comment_excerpt("abcdefghijk", max_length=3) == "abc" + assert WorkflowCommentService._format_comment_excerpt(" abcdefghijk ", max_length=8) == "abcde..." 
+ + def test_build_mention_email_payloads_returns_empty_for_no_candidates(self, mock_session: Mock) -> None: + assert ( + WorkflowCommentService._build_mention_email_payloads( + session=mock_session, + tenant_id="tenant-1", + app_id="app-1", + mentioner_id="user-1", + mentioned_user_ids=[], + content="hello", + ) + == [] + ) + assert ( + WorkflowCommentService._build_mention_email_payloads( + session=mock_session, + tenant_id="tenant-1", + app_id="app-1", + mentioner_id="user-1", + mentioned_user_ids=["user-1"], + content="hello", + ) + == [] + ) + + def test_dispatch_mention_emails_enqueues_each_payload(self) -> None: + delay_mock = Mock() + with patch.object(service_module.send_workflow_comment_mention_email_task, "delay", delay_mock): + WorkflowCommentService._dispatch_mention_emails( + [ + {"to": "a@example.com"}, + {"to": "b@example.com"}, + ] + ) + + assert delay_mock.call_count == 2 + + def test_build_mention_email_payloads_skips_accounts_without_email(self, mock_session: Mock) -> None: + account_without_email = Mock() + account_without_email.email = None + account_without_email.name = "No Email" + account_without_email.interface_language = "en-US" + + account_with_email = Mock() + account_with_email.email = "user@example.com" + account_with_email.name = "" + account_with_email.interface_language = None + + mock_session.scalar.side_effect = ["My App", "Commenter"] + mock_session.scalars.return_value = _mock_scalars([account_without_email, account_with_email]) + + payloads = WorkflowCommentService._build_mention_email_payloads( + session=mock_session, + tenant_id="tenant-1", + app_id="app-1", + mentioner_id="user-1", + mentioned_user_ids=["user-2"], + content="hello", + ) + expected_app_url = f"{service_module.dify_config.CONSOLE_WEB_URL.rstrip('/')}/app/app-1/workflow" + + assert payloads == [ + { + "language": "en-US", + "to": "user@example.com", + "mentioned_name": "user@example.com", + "commenter_name": "Commenter", + "app_name": "My App", + "comment_content": "hello", + "app_url": expected_app_url, + } + ] + + def test_create_comment_creates_mentions(self, mock_session: Mock) -> None: + comment = Mock() + comment.id = "comment-1" + comment.created_at = "ts" + + with ( + patch.object(service_module, "WorkflowComment", return_value=comment), + patch.object(service_module, "WorkflowCommentMention", return_value=Mock()), + patch.object(WorkflowCommentService, "_filter_valid_mentioned_user_ids", return_value=["user-2"]), + ): + result = WorkflowCommentService.create_comment( + tenant_id="tenant-1", + app_id="app-1", + created_by="user-1", + content="hello", + position_x=1.0, + position_y=2.0, + mentioned_user_ids=["user-2", "bad-id"], + ) + + assert result == {"id": "comment-1", "created_at": "ts"} + assert mock_session.add.call_args_list[0].args[0] is comment + assert mock_session.add.call_count == 2 + mock_session.commit.assert_called_once() + + def test_update_comment_raises_not_found(self, mock_session: Mock) -> None: + mock_session.scalar.return_value = None + + with pytest.raises(NotFound): + WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="user-1", + content="hello", + ) + + def test_update_comment_raises_forbidden(self, mock_session: Mock) -> None: + comment = Mock() + comment.created_by = "owner" + mock_session.scalar.return_value = comment + + with pytest.raises(Forbidden): + WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="intruder", + 
content="hello", + ) + + def test_update_comment_replaces_mentions(self, mock_session: Mock) -> None: + comment = Mock() + comment.id = "comment-1" + comment.created_by = "owner" + mock_session.scalar.return_value = comment + + existing_mentions = [Mock(), Mock()] + mock_session.scalars.return_value = _mock_scalars(existing_mentions) + + with patch.object(WorkflowCommentService, "_filter_valid_mentioned_user_ids", return_value=["user-2"]): + result = WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="owner", + content="updated", + mentioned_user_ids=["user-2", "bad-id"], + ) + + assert result == {"id": "comment-1", "updated_at": comment.updated_at} + assert mock_session.delete.call_count == 2 + assert mock_session.add.call_count == 1 + mock_session.commit.assert_called_once() + + def test_update_comment_preserves_mentions_when_mentioned_user_ids_omitted(self, mock_session: Mock) -> None: + comment = Mock() + comment.id = "comment-1" + comment.created_by = "owner" + mock_session.scalar.return_value = comment + + with ( + patch.object(WorkflowCommentService, "_filter_valid_mentioned_user_ids") as filter_mentions_mock, + patch.object(WorkflowCommentService, "_build_mention_email_payloads") as build_payloads_mock, + patch.object(WorkflowCommentService, "_dispatch_mention_emails") as dispatch_mock, + ): + result = WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="owner", + content="updated", + ) + + assert result == {"id": "comment-1", "updated_at": comment.updated_at} + mock_session.delete.assert_not_called() + mock_session.add.assert_not_called() + filter_mentions_mock.assert_not_called() + build_payloads_mock.assert_not_called() + dispatch_mock.assert_called_once_with([]) + mock_session.commit.assert_called_once() + + def test_update_comment_clears_mentions_when_empty_list_provided(self, mock_session: Mock) -> None: + comment = Mock() + comment.id = "comment-1" + comment.created_by = "owner" + mock_session.scalar.return_value = comment + + existing_mentions = [Mock(), Mock()] + mock_session.scalars.return_value = _mock_scalars(existing_mentions) + + with patch.object(WorkflowCommentService, "_filter_valid_mentioned_user_ids", return_value=[]): + result = WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="owner", + content="updated", + mentioned_user_ids=[], + ) + + assert result == {"id": "comment-1", "updated_at": comment.updated_at} + assert mock_session.delete.call_count == 2 + mock_session.add.assert_not_called() + mock_session.commit.assert_called_once() + + def test_update_comment_notifies_only_new_mentions(self, mock_session: Mock) -> None: + comment = Mock() + comment.id = "comment-1" + comment.created_by = "owner" + mock_session.scalar.return_value = comment + + existing_mention = Mock() + existing_mention.mentioned_user_id = "user-2" + mock_session.scalars.return_value = _mock_scalars([existing_mention]) + + with ( + patch.object( + WorkflowCommentService, + "_filter_valid_mentioned_user_ids", + return_value=["user-2", "user-3"], + ), + patch.object( + WorkflowCommentService, + "_build_mention_email_payloads", + return_value=[], + ) as build_payloads_mock, + patch.object(WorkflowCommentService, "_dispatch_mention_emails") as dispatch_mock, + ): + WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="owner", + 
content="updated", + mentioned_user_ids=["user-2", "user-3"], + ) + + assert build_payloads_mock.call_args.kwargs["mentioned_user_ids"] == ["user-3"] + dispatch_mock.assert_called_once_with([]) + + def test_get_comments_preloads_related_accounts(self, mock_session: Mock) -> None: + comment = Mock() + comment.created_by = "user-1" + comment.resolved_by = "user-2" + reply = Mock() + reply.created_by = "user-3" + mention = Mock() + mention.mentioned_user_id = "user-4" + comment.replies = [reply] + comment.mentions = [mention] + comment.cache_created_by_account = Mock() + comment.cache_resolved_by_account = Mock() + reply.cache_created_by_account = Mock() + mention.cache_mentioned_user_account = Mock() + + account_1 = Mock() + account_1.id = "user-1" + account_2 = Mock() + account_2.id = "user-2" + account_3 = Mock() + account_3.id = "user-3" + account_4 = Mock() + account_4.id = "user-4" + + mock_session.scalars.side_effect = [ + _mock_scalars([comment]), + _mock_scalars([account_1, account_2, account_3, account_4]), + ] + + result = WorkflowCommentService.get_comments("tenant-1", "app-1") + + assert result == [comment] + comment.cache_created_by_account.assert_called_once_with(account_1) + comment.cache_resolved_by_account.assert_called_once_with(account_2) + reply.cache_created_by_account.assert_called_once_with(account_3) + mention.cache_mentioned_user_account.assert_called_once_with(account_4) + + def test_preload_accounts_returns_early_for_empty_comments(self, mock_session: Mock) -> None: + WorkflowCommentService._preload_accounts(mock_session, []) + + mock_session.scalars.assert_not_called() + + def test_get_comment_raises_not_found_with_provided_session(self) -> None: + session = Mock() + session.scalar.return_value = None + + with pytest.raises(NotFound): + WorkflowCommentService.get_comment("tenant-1", "app-1", "comment-1", session=session) + + def test_get_comment_uses_context_manager_when_session_not_provided(self, mock_session: Mock) -> None: + comment = Mock() + comment.created_by = "user-1" + comment.resolved_by = None + comment.replies = [] + comment.mentions = [] + comment.cache_created_by_account = Mock() + comment.cache_resolved_by_account = Mock() + mock_session.scalar.return_value = comment + mock_session.scalars.return_value = _mock_scalars([]) + + result = WorkflowCommentService.get_comment("tenant-1", "app-1", "comment-1") + + assert result is comment + comment.cache_created_by_account.assert_called_once() + comment.cache_resolved_by_account.assert_called_once_with(None) + + def test_delete_comment_raises_forbidden(self, mock_session: Mock) -> None: + comment = Mock() + comment.created_by = "owner" + + with patch.object(WorkflowCommentService, "get_comment", return_value=comment): + with pytest.raises(Forbidden): + WorkflowCommentService.delete_comment("tenant-1", "app-1", "comment-1", "intruder") + + def test_delete_comment_removes_related_entities(self, mock_session: Mock) -> None: + comment = Mock() + comment.created_by = "owner" + + mentions = [Mock(), Mock()] + replies = [Mock()] + mock_session.scalars.side_effect = [_mock_scalars(mentions), _mock_scalars(replies)] + + with patch.object(WorkflowCommentService, "get_comment", return_value=comment): + WorkflowCommentService.delete_comment("tenant-1", "app-1", "comment-1", "owner") + + assert mock_session.delete.call_count == 4 + mock_session.commit.assert_called_once() + + def test_resolve_comment_sets_fields(self, mock_session: Mock) -> None: + comment = Mock() + comment.resolved = False + comment.resolved_at = None 
+ comment.resolved_by = None + + with ( + patch.object(WorkflowCommentService, "get_comment", return_value=comment), + patch.object(service_module, "naive_utc_now", return_value="now"), + ): + result = WorkflowCommentService.resolve_comment("tenant-1", "app-1", "comment-1", "user-1") + + assert result is comment + assert comment.resolved is True + assert comment.resolved_at == "now" + assert comment.resolved_by == "user-1" + mock_session.commit.assert_called_once() + + def test_resolve_comment_noop_when_already_resolved(self, mock_session: Mock) -> None: + comment = Mock() + comment.resolved = True + + with patch.object(WorkflowCommentService, "get_comment", return_value=comment): + result = WorkflowCommentService.resolve_comment("tenant-1", "app-1", "comment-1", "user-1") + + assert result is comment + mock_session.commit.assert_not_called() + + def test_create_reply_requires_comment(self, mock_session: Mock) -> None: + mock_session.get.return_value = None + + with pytest.raises(NotFound): + WorkflowCommentService.create_reply("comment-1", "hello", "user-1") + + def test_create_reply_creates_mentions(self, mock_session: Mock) -> None: + mock_session.get.return_value = Mock() + reply = Mock() + reply.id = "reply-1" + reply.created_at = "ts" + + with ( + patch.object(service_module, "WorkflowCommentReply", return_value=reply), + patch.object(service_module, "WorkflowCommentMention", return_value=Mock()), + patch.object(WorkflowCommentService, "_filter_valid_mentioned_user_ids", return_value=["user-2"]), + ): + result = WorkflowCommentService.create_reply( + comment_id="comment-1", + content="hello", + created_by="user-1", + mentioned_user_ids=["user-2", "bad-id"], + ) + + assert result == {"id": "reply-1", "created_at": "ts"} + assert mock_session.add.call_count == 2 + mock_session.commit.assert_called_once() + + def test_update_reply_raises_not_found(self, mock_session: Mock) -> None: + mock_session.scalar.return_value = None + + with pytest.raises(NotFound): + WorkflowCommentService.update_reply( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + reply_id="reply-1", + user_id="user-1", + content="hello", + ) + + def test_update_reply_raises_forbidden(self, mock_session: Mock) -> None: + reply = Mock() + reply.created_by = "owner" + mock_session.scalar.return_value = reply + + with pytest.raises(Forbidden): + WorkflowCommentService.update_reply( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + reply_id="reply-1", + user_id="intruder", + content="hello", + ) + + def test_update_reply_replaces_mentions(self, mock_session: Mock) -> None: + reply = Mock() + reply.id = "reply-1" + reply.comment_id = "comment-1" + reply.created_by = "owner" + reply.updated_at = "updated" + mock_session.scalar.return_value = reply + mock_session.scalars.return_value = _mock_scalars([Mock()]) + comment = Mock() + comment.tenant_id = "tenant-1" + comment.app_id = "app-1" + mock_session.get.return_value = comment + + with patch.object(WorkflowCommentService, "_filter_valid_mentioned_user_ids", return_value=["user-2"]): + result = WorkflowCommentService.update_reply( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + reply_id="reply-1", + user_id="owner", + content="new", + mentioned_user_ids=["user-2", "bad-id"], + ) + + assert result == {"id": "reply-1", "updated_at": "updated"} + assert mock_session.delete.call_count == 1 + assert mock_session.add.call_count == 1 + mock_session.commit.assert_called_once() + 
mock_session.refresh.assert_called_once_with(reply) + + def test_update_comment_updates_position_coordinates_when_provided(self, mock_session: Mock) -> None: + comment = Mock() + comment.id = "comment-1" + comment.created_by = "owner" + comment.position_x = 1.0 + comment.position_y = 2.0 + mock_session.scalar.return_value = comment + mock_session.scalars.return_value = _mock_scalars([]) + + WorkflowCommentService.update_comment( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + user_id="owner", + content="updated", + position_x=10.5, + position_y=20.5, + mentioned_user_ids=[], + ) + + assert comment.position_x == 10.5 + assert comment.position_y == 20.5 + + def test_delete_reply_raises_forbidden(self, mock_session: Mock) -> None: + reply = Mock() + reply.created_by = "owner" + mock_session.scalar.return_value = reply + + with pytest.raises(Forbidden): + WorkflowCommentService.delete_reply( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + reply_id="reply-1", + user_id="intruder", + ) + + def test_delete_reply_raises_not_found(self, mock_session: Mock) -> None: + mock_session.scalar.return_value = None + + with pytest.raises(NotFound): + WorkflowCommentService.delete_reply( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + reply_id="reply-1", + user_id="owner", + ) + + def test_delete_reply_removes_mentions(self, mock_session: Mock) -> None: + reply = Mock() + reply.created_by = "owner" + mock_session.scalar.return_value = reply + mock_session.scalars.return_value = _mock_scalars([Mock(), Mock()]) + + WorkflowCommentService.delete_reply( + tenant_id="tenant-1", + app_id="app-1", + comment_id="comment-1", + reply_id="reply-1", + user_id="owner", + ) + + assert mock_session.delete.call_count == 3 + mock_session.commit.assert_called_once() + + def test_validate_comment_access_delegates_to_get_comment(self) -> None: + comment = Mock() + with patch.object(WorkflowCommentService, "get_comment", return_value=comment) as get_comment_mock: + result = WorkflowCommentService.validate_comment_access("comment-1", "tenant-1", "app-1") + + assert result is comment + get_comment_mock.assert_called_once_with("tenant-1", "app-1", "comment-1") diff --git a/api/tests/unit_tests/services/test_workflow_run_service.py b/api/tests/unit_tests/services/test_workflow_run_service.py new file mode 100644 index 0000000000..03471389a6 --- /dev/null +++ b/api/tests/unit_tests/services/test_workflow_run_service.py @@ -0,0 +1,262 @@ +from types import SimpleNamespace +from typing import Any, cast +from unittest.mock import MagicMock + +import pytest +from sqlalchemy import Engine + +from models import Account, App, EndUser, WorkflowRunTriggeredFrom +from services import workflow_run_service as service_module +from services.workflow_run_service import WorkflowRunService + + +@pytest.fixture +def repository_factory_mocks(monkeypatch: pytest.MonkeyPatch) -> tuple[MagicMock, MagicMock, Any]: + node_repo = MagicMock() + workflow_run_repo = MagicMock() + factory = SimpleNamespace( + create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), + create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), + ) + monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) + return node_repo, workflow_run_repo, factory + + +def _app_model(**kwargs: Any) -> App: + return cast(App, SimpleNamespace(**kwargs)) + + +def _account(**kwargs: Any) -> Account: + return cast(Account, SimpleNamespace(**kwargs)) + + +def 
_end_user(**kwargs: Any) -> EndUser: + return cast(EndUser, SimpleNamespace(**kwargs)) + + +class TestWorkflowRunServiceInitialization: + def test___init___should_create_sessionmaker_from_db_engine_when_session_factory_missing( + self, + monkeypatch: pytest.MonkeyPatch, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + ) -> None: + session_factory = MagicMock(name="session_factory") + sessionmaker_mock = MagicMock(return_value=session_factory) + monkeypatch.setattr(service_module, "sessionmaker", sessionmaker_mock) + monkeypatch.setattr(service_module, "db", SimpleNamespace(engine="db-engine")) + + service = WorkflowRunService() + + sessionmaker_mock.assert_called_once_with(bind="db-engine", expire_on_commit=False) + assert service._session_factory is session_factory + + def test___init___should_create_sessionmaker_when_engine_is_provided( + self, + monkeypatch: pytest.MonkeyPatch, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + ) -> None: + class FakeEngine: + pass + + session_factory = MagicMock(name="session_factory") + sessionmaker_mock = MagicMock(return_value=session_factory) + monkeypatch.setattr(service_module, "Engine", FakeEngine) + monkeypatch.setattr(service_module, "sessionmaker", sessionmaker_mock) + engine = cast(Engine, FakeEngine()) + + service = WorkflowRunService(session_factory=engine) + + sessionmaker_mock.assert_called_once_with(bind=engine, expire_on_commit=False) + assert service._session_factory is session_factory + + def test___init___should_keep_provided_sessionmaker_and_create_repositories( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + ) -> None: + node_repo, workflow_run_repo, factory = repository_factory_mocks + session_factory = MagicMock(name="session_factory") + + service = WorkflowRunService(session_factory=session_factory) + + assert service._session_factory is session_factory + assert service._node_execution_service_repo is node_repo + assert service._workflow_run_repo is workflow_run_repo + factory.create_api_workflow_node_execution_repository.assert_called_once_with(session_factory) + factory.create_api_workflow_run_repository.assert_called_once_with(session_factory) + + +class TestWorkflowRunServiceQueries: + def test_get_paginate_workflow_runs_should_forward_filters_and_parse_limit( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + ) -> None: + _, workflow_run_repo, _ = repository_factory_mocks + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + app_model = _app_model(tenant_id="tenant-1", id="app-1") + expected = MagicMock(name="pagination") + workflow_run_repo.get_paginated_workflow_runs.return_value = expected + args = {"limit": "7", "last_id": "last-1", "status": "succeeded"} + + result = service.get_paginate_workflow_runs( + app_model=app_model, + args=args, + triggered_from=WorkflowRunTriggeredFrom.APP_RUN, + ) + + assert result is expected + workflow_run_repo.get_paginated_workflow_runs.assert_called_once_with( + tenant_id="tenant-1", + app_id="app-1", + triggered_from=WorkflowRunTriggeredFrom.APP_RUN, + limit=7, + last_id="last-1", + status="succeeded", + ) + + def test_get_paginate_advanced_chat_workflow_runs_should_attach_message_fields_when_message_exists( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + app_model = _app_model(tenant_id="tenant-1", id="app-1") + run_with_message = 
SimpleNamespace( + id="run-1", + status="running", + message=SimpleNamespace(id="msg-1", conversation_id="conv-1"), + ) + run_without_message = SimpleNamespace(id="run-2", status="succeeded", message=None) + pagination = SimpleNamespace(data=[run_with_message, run_without_message]) + monkeypatch.setattr(service, "get_paginate_workflow_runs", MagicMock(return_value=pagination)) + + result = service.get_paginate_advanced_chat_workflow_runs(app_model=app_model, args={"limit": "2"}) + + assert result is pagination + assert len(result.data) == 2 + assert result.data[0].message_id == "msg-1" + assert result.data[0].conversation_id == "conv-1" + assert result.data[0].status == "running" + assert not hasattr(result.data[1], "message_id") + assert result.data[1].id == "run-2" + + def test_get_workflow_run_should_delegate_to_repository_by_tenant_and_app( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + ) -> None: + _, workflow_run_repo, _ = repository_factory_mocks + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + app_model = _app_model(tenant_id="tenant-1", id="app-1") + expected = MagicMock(name="workflow_run") + workflow_run_repo.get_workflow_run_by_id.return_value = expected + + result = service.get_workflow_run(app_model=app_model, run_id="run-1") + + assert result is expected + workflow_run_repo.get_workflow_run_by_id.assert_called_once_with( + tenant_id="tenant-1", + app_id="app-1", + run_id="run-1", + ) + + def test_get_workflow_runs_count_should_forward_optional_filters( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + ) -> None: + _, workflow_run_repo, _ = repository_factory_mocks + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + app_model = _app_model(tenant_id="tenant-1", id="app-1") + expected = {"total": 3, "succeeded": 2} + workflow_run_repo.get_workflow_runs_count.return_value = expected + + result = service.get_workflow_runs_count( + app_model=app_model, + status="succeeded", + time_range="7d", + triggered_from=WorkflowRunTriggeredFrom.APP_RUN, + ) + + assert result == expected + workflow_run_repo.get_workflow_runs_count.assert_called_once_with( + tenant_id="tenant-1", + app_id="app-1", + triggered_from=WorkflowRunTriggeredFrom.APP_RUN, + status="succeeded", + time_range="7d", + ) + + def test_get_workflow_run_node_executions_should_return_empty_list_when_run_not_found( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=None)) + app_model = _app_model(id="app-1") + user = _account(current_tenant_id="tenant-1") + + result = service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) + + assert result == [] + + def test_get_workflow_run_node_executions_should_use_end_user_tenant_id( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + monkeypatch: pytest.MonkeyPatch, + ) -> None: + node_repo, _, _ = repository_factory_mocks + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=SimpleNamespace(id="run-1"))) + + class FakeEndUser: + def __init__(self, tenant_id: str) -> None: + self.tenant_id = tenant_id + + monkeypatch.setattr(service_module, "EndUser", FakeEndUser) + user = cast(EndUser, 
FakeEndUser(tenant_id="tenant-end-user")) + app_model = _app_model(id="app-1") + expected = [SimpleNamespace(id="exec-1")] + node_repo.get_executions_by_workflow_run.return_value = expected + + result = service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) + + assert result == expected + node_repo.get_executions_by_workflow_run.assert_called_once_with( + tenant_id="tenant-end-user", + app_id="app-1", + workflow_run_id="run-1", + ) + + def test_get_workflow_run_node_executions_should_use_account_current_tenant_id( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + monkeypatch: pytest.MonkeyPatch, + ) -> None: + node_repo, _, _ = repository_factory_mocks + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=SimpleNamespace(id="run-1"))) + app_model = _app_model(id="app-1") + user = _account(current_tenant_id="tenant-account") + expected = [SimpleNamespace(id="exec-1"), SimpleNamespace(id="exec-2")] + node_repo.get_executions_by_workflow_run.return_value = expected + + result = service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) + + assert result == expected + node_repo.get_executions_by_workflow_run.assert_called_once_with( + tenant_id="tenant-account", + app_id="app-1", + workflow_run_id="run-1", + ) + + def test_get_workflow_run_node_executions_should_raise_when_resolved_tenant_id_is_none( + self, + repository_factory_mocks: tuple[MagicMock, MagicMock, Any], + monkeypatch: pytest.MonkeyPatch, + ) -> None: + service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) + monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=SimpleNamespace(id="run-1"))) + app_model = _app_model(id="app-1") + user = _account(current_tenant_id=None) + + with pytest.raises(ValueError, match="tenant_id cannot be None"): + service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) diff --git a/api/tests/unit_tests/services/test_workflow_run_service_pause.py b/api/tests/unit_tests/services/test_workflow_run_service_pause.py index 64b21317ab..239cc83518 100644 --- a/api/tests/unit_tests/services/test_workflow_run_service_pause.py +++ b/api/tests/unit_tests/services/test_workflow_run_service_pause.py @@ -13,10 +13,10 @@ from datetime import datetime from unittest.mock import MagicMock, create_autospec, patch import pytest -from graphon.enums import WorkflowExecutionStatus from sqlalchemy import Engine from sqlalchemy.orm import Session, sessionmaker +from graphon.enums import WorkflowExecutionStatus from models.workflow import WorkflowPause from repositories.api_workflow_run_repository import APIWorkflowRunRepository from repositories.sqlalchemy_api_workflow_run_repository import _PrivateWorkflowPauseEntity @@ -176,300 +176,3 @@ class TestWorkflowRunService: service = WorkflowRunService(session_factory) assert service._session_factory == session_factory - - -# === Merged from test_workflow_run_service.py === - - -from types import SimpleNamespace -from typing import Any, cast -from unittest.mock import MagicMock - -import pytest - -from models import Account, App, EndUser, WorkflowRunTriggeredFrom -from services import workflow_run_service as service_module -from services.workflow_run_service import WorkflowRunService - - -@pytest.fixture -def repository_factory_mocks(monkeypatch: pytest.MonkeyPatch) -> tuple[MagicMock, MagicMock, Any]: - # Arrange - 
node_repo = MagicMock() - workflow_run_repo = MagicMock() - factory = SimpleNamespace( - create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), - create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), - ) - monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) - - # Assert - return node_repo, workflow_run_repo, factory - - -def _app_model(**kwargs: Any) -> App: - return cast(App, SimpleNamespace(**kwargs)) - - -def _account(**kwargs: Any) -> Account: - return cast(Account, SimpleNamespace(**kwargs)) - - -def _end_user(**kwargs: Any) -> EndUser: - return cast(EndUser, SimpleNamespace(**kwargs)) - - -def test___init___should_create_sessionmaker_from_db_engine_when_session_factory_missing( - monkeypatch: pytest.MonkeyPatch, - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], -) -> None: - # Arrange - session_factory = MagicMock(name="session_factory") - sessionmaker_mock = MagicMock(return_value=session_factory) - monkeypatch.setattr(service_module, "sessionmaker", sessionmaker_mock) - monkeypatch.setattr(service_module, "db", SimpleNamespace(engine="db-engine")) - - # Act - service = WorkflowRunService() - - # Assert - sessionmaker_mock.assert_called_once_with(bind="db-engine", expire_on_commit=False) - assert service._session_factory is session_factory - - -def test___init___should_create_sessionmaker_when_engine_is_provided( - monkeypatch: pytest.MonkeyPatch, - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], -) -> None: - # Arrange - class FakeEngine: - pass - - session_factory = MagicMock(name="session_factory") - sessionmaker_mock = MagicMock(return_value=session_factory) - monkeypatch.setattr(service_module, "Engine", FakeEngine) - monkeypatch.setattr(service_module, "sessionmaker", sessionmaker_mock) - engine = cast(Engine, FakeEngine()) - - # Act - service = WorkflowRunService(session_factory=engine) - - # Assert - sessionmaker_mock.assert_called_once_with(bind=engine, expire_on_commit=False) - assert service._session_factory is session_factory - - -def test___init___should_keep_provided_sessionmaker_and_create_repositories( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], -) -> None: - # Arrange - node_repo, workflow_run_repo, factory = repository_factory_mocks - session_factory = MagicMock(name="session_factory") - - # Act - service = WorkflowRunService(session_factory=session_factory) - - # Assert - assert service._session_factory is session_factory - assert service._node_execution_service_repo is node_repo - assert service._workflow_run_repo is workflow_run_repo - factory.create_api_workflow_node_execution_repository.assert_called_once_with(session_factory) - factory.create_api_workflow_run_repository.assert_called_once_with(session_factory) - - -def test_get_paginate_workflow_runs_should_forward_filters_and_parse_limit( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], -) -> None: - # Arrange - _, workflow_run_repo, _ = repository_factory_mocks - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - app_model = _app_model(tenant_id="tenant-1", id="app-1") - expected = MagicMock(name="pagination") - workflow_run_repo.get_paginated_workflow_runs.return_value = expected - args = {"limit": "7", "last_id": "last-1", "status": "succeeded"} - - # Act - result = service.get_paginate_workflow_runs( - app_model=app_model, - args=args, - triggered_from=WorkflowRunTriggeredFrom.APP_RUN, - ) - - # Assert - assert result is expected - 
workflow_run_repo.get_paginated_workflow_runs.assert_called_once_with( - tenant_id="tenant-1", - app_id="app-1", - triggered_from=WorkflowRunTriggeredFrom.APP_RUN, - limit=7, - last_id="last-1", - status="succeeded", - ) - - -def test_get_paginate_advanced_chat_workflow_runs_should_attach_message_fields_when_message_exists( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - app_model = _app_model(tenant_id="tenant-1", id="app-1") - run_with_message = SimpleNamespace( - id="run-1", - status="running", - message=SimpleNamespace(id="msg-1", conversation_id="conv-1"), - ) - run_without_message = SimpleNamespace(id="run-2", status="succeeded", message=None) - pagination = SimpleNamespace(data=[run_with_message, run_without_message]) - monkeypatch.setattr(service, "get_paginate_workflow_runs", MagicMock(return_value=pagination)) - - # Act - result = service.get_paginate_advanced_chat_workflow_runs(app_model=app_model, args={"limit": "2"}) - - # Assert - assert result is pagination - assert len(result.data) == 2 - assert result.data[0].message_id == "msg-1" - assert result.data[0].conversation_id == "conv-1" - assert result.data[0].status == "running" - assert not hasattr(result.data[1], "message_id") - assert result.data[1].id == "run-2" - - -def test_get_workflow_run_should_delegate_to_repository_by_tenant_and_app( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], -) -> None: - # Arrange - _, workflow_run_repo, _ = repository_factory_mocks - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - app_model = _app_model(tenant_id="tenant-1", id="app-1") - expected = MagicMock(name="workflow_run") - workflow_run_repo.get_workflow_run_by_id.return_value = expected - - # Act - result = service.get_workflow_run(app_model=app_model, run_id="run-1") - - # Assert - assert result is expected - workflow_run_repo.get_workflow_run_by_id.assert_called_once_with( - tenant_id="tenant-1", - app_id="app-1", - run_id="run-1", - ) - - -def test_get_workflow_runs_count_should_forward_optional_filters( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], -) -> None: - # Arrange - _, workflow_run_repo, _ = repository_factory_mocks - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - app_model = _app_model(tenant_id="tenant-1", id="app-1") - expected = {"total": 3, "succeeded": 2} - workflow_run_repo.get_workflow_runs_count.return_value = expected - - # Act - result = service.get_workflow_runs_count( - app_model=app_model, - status="succeeded", - time_range="7d", - triggered_from=WorkflowRunTriggeredFrom.APP_RUN, - ) - - # Assert - assert result == expected - workflow_run_repo.get_workflow_runs_count.assert_called_once_with( - tenant_id="tenant-1", - app_id="app-1", - triggered_from=WorkflowRunTriggeredFrom.APP_RUN, - status="succeeded", - time_range="7d", - ) - - -def test_get_workflow_run_node_executions_should_return_empty_list_when_run_not_found( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=None)) - app_model = _app_model(id="app-1") - user = _account(current_tenant_id="tenant-1") - - # Act - result = 
service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) - - # Assert - assert result == [] - - -def test_get_workflow_run_node_executions_should_use_end_user_tenant_id( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - node_repo, _, _ = repository_factory_mocks - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=SimpleNamespace(id="run-1"))) - - class FakeEndUser: - def __init__(self, tenant_id: str) -> None: - self.tenant_id = tenant_id - - monkeypatch.setattr(service_module, "EndUser", FakeEndUser) - user = cast(EndUser, FakeEndUser(tenant_id="tenant-end-user")) - app_model = _app_model(id="app-1") - expected = [SimpleNamespace(id="exec-1")] - node_repo.get_executions_by_workflow_run.return_value = expected - - # Act - result = service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) - - # Assert - assert result == expected - node_repo.get_executions_by_workflow_run.assert_called_once_with( - tenant_id="tenant-end-user", - app_id="app-1", - workflow_run_id="run-1", - ) - - -def test_get_workflow_run_node_executions_should_use_account_current_tenant_id( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - node_repo, _, _ = repository_factory_mocks - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=SimpleNamespace(id="run-1"))) - app_model = _app_model(id="app-1") - user = _account(current_tenant_id="tenant-account") - expected = [SimpleNamespace(id="exec-1"), SimpleNamespace(id="exec-2")] - node_repo.get_executions_by_workflow_run.return_value = expected - - # Act - result = service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) - - # Assert - assert result == expected - node_repo.get_executions_by_workflow_run.assert_called_once_with( - tenant_id="tenant-account", - app_id="app-1", - workflow_run_id="run-1", - ) - - -def test_get_workflow_run_node_executions_should_raise_when_resolved_tenant_id_is_none( - repository_factory_mocks: tuple[MagicMock, MagicMock, Any], - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - service = WorkflowRunService(session_factory=MagicMock(name="session_factory")) - monkeypatch.setattr(service, "get_workflow_run", MagicMock(return_value=SimpleNamespace(id="run-1"))) - app_model = _app_model(id="app-1") - user = _account(current_tenant_id=None) - - # Act / Assert - with pytest.raises(ValueError, match="tenant_id cannot be None"): - service.get_workflow_run_node_executions(app_model=app_model, run_id="run-1", user=user) diff --git a/api/tests/unit_tests/services/test_workflow_service.py b/api/tests/unit_tests/services/test_workflow_service.py index 406b4fb9d0..287f5f2e5e 100644 --- a/api/tests/unit_tests/services/test_workflow_service.py +++ b/api/tests/unit_tests/services/test_workflow_service.py @@ -12,9 +12,10 @@ This test suite covers: import json import uuid from typing import Any, cast -from unittest.mock import ANY, MagicMock, patch +from unittest.mock import ANY, MagicMock, Mock, patch import pytest + from graphon.entities import WorkflowNodeExecution from graphon.enums import ( BuiltinNodeTypes, @@ -28,7 +29,6 @@ from graphon.model_runtime.entities.model_entities import ModelType from 
graphon.node_events import NodeRunResult from graphon.nodes.http_request import HTTP_REQUEST_CONFIG_FILTER_KEY, HttpRequestNode, HttpRequestNodeConfig from graphon.variables.input_entities import VariableEntityType - from libs.datetime_utils import naive_utc_now from models.human_input import RecipientType from models.model import App, AppMode @@ -94,8 +94,8 @@ class TestWorkflowAssociatedDataFactory: app_id: str = "app-123", version: str = Workflow.VERSION_DRAFT, workflow_type: str = WorkflowType.WORKFLOW.value, - graph: dict | None = None, - features: dict | None = None, + graph: dict[str, Any] | None = None, + features: dict[str, Any] | None = None, unique_hash: str | None = None, **kwargs, ) -> MagicMock: @@ -713,6 +713,79 @@ class TestWorkflowService: with pytest.raises(ValueError, match="Invalid app mode"): workflow_service.validate_features_structure(app, features) + # ==================== Draft Workflow Variable Update Tests ==================== + # These tests verify updating draft workflow environment/conversation variables + + def test_update_draft_workflow_environment_variables_updates_workflow(self, workflow_service, mock_db_session): + """Test update_draft_workflow_environment_variables updates draft fields.""" + app = TestWorkflowAssociatedDataFactory.create_app_mock() + account = TestWorkflowAssociatedDataFactory.create_account_mock() + workflow = TestWorkflowAssociatedDataFactory.create_workflow_mock() + variables = [Mock()] + + with ( + patch.object(workflow_service, "get_draft_workflow", return_value=workflow), + patch("services.workflow_service.naive_utc_now", return_value="now"), + ): + workflow_service.update_draft_workflow_environment_variables( + app_model=app, + environment_variables=variables, + account=account, + ) + + assert workflow.environment_variables == variables + assert workflow.updated_by == account.id + assert workflow.updated_at == "now" + mock_db_session.session.commit.assert_called_once() + + def test_update_draft_workflow_environment_variables_raises_when_missing(self, workflow_service): + """Test update_draft_workflow_environment_variables raises when draft missing.""" + app = TestWorkflowAssociatedDataFactory.create_app_mock() + account = TestWorkflowAssociatedDataFactory.create_account_mock() + + with patch.object(workflow_service, "get_draft_workflow", return_value=None): + with pytest.raises(ValueError, match="No draft workflow found."): + workflow_service.update_draft_workflow_environment_variables( + app_model=app, + environment_variables=[], + account=account, + ) + + def test_update_draft_workflow_conversation_variables_updates_workflow(self, workflow_service, mock_db_session): + """Test update_draft_workflow_conversation_variables updates draft fields.""" + app = TestWorkflowAssociatedDataFactory.create_app_mock() + account = TestWorkflowAssociatedDataFactory.create_account_mock() + workflow = TestWorkflowAssociatedDataFactory.create_workflow_mock() + variables = [Mock()] + + with ( + patch.object(workflow_service, "get_draft_workflow", return_value=workflow), + patch("services.workflow_service.naive_utc_now", return_value="now"), + ): + workflow_service.update_draft_workflow_conversation_variables( + app_model=app, + conversation_variables=variables, + account=account, + ) + + assert workflow.conversation_variables == variables + assert workflow.updated_by == account.id + assert workflow.updated_at == "now" + mock_db_session.session.commit.assert_called_once() + + def 
test_update_draft_workflow_conversation_variables_raises_when_missing(self, workflow_service): + """Test update_draft_workflow_conversation_variables raises when draft missing.""" + app = TestWorkflowAssociatedDataFactory.create_app_mock() + account = TestWorkflowAssociatedDataFactory.create_account_mock() + + with patch.object(workflow_service, "get_draft_workflow", return_value=None): + with pytest.raises(ValueError, match="No draft workflow found."): + workflow_service.update_draft_workflow_conversation_variables( + app_model=app, + conversation_variables=[], + account=account, + ) + # ==================== Publish Workflow Tests ==================== # These tests verify creating published versions from draft workflows @@ -1686,7 +1759,7 @@ class TestWorkflowServiceCredentialValidation: """Missing provider or model in node_data should be a no-op.""" # Arrange workflow = self._make_workflow([]) - node_data: dict = {} # no model key + node_data: dict[str, Any] = {} # no model key # Act + Assert (no error expected) service._validate_load_balancing_credentials(workflow, node_data, "node-1") @@ -2269,7 +2342,7 @@ class TestRebuildFileForUserInputsInStartNode: # Arrange file_var = self._make_variable("attachment", VariableEntityType.FILE) start_data = self._make_start_node_data([file_var]) - user_inputs: dict = {} # attachment not provided + user_inputs: dict[str, Any] = {} # attachment not provided # Act result = _rebuild_file_for_user_inputs_in_start_node( diff --git a/api/tests/unit_tests/services/vector_service.py b/api/tests/unit_tests/services/vector_service.py index ee9ba1c6d6..ad80beb4e3 100644 --- a/api/tests/unit_tests/services/vector_service.py +++ b/api/tests/unit_tests/services/vector_service.py @@ -114,6 +114,7 @@ This test suite follows a comprehensive testing strategy that covers: ================================================================================ """ +from typing import Any from unittest.mock import Mock, patch import pytest @@ -156,7 +157,7 @@ class VectorServiceTestDataFactory: indexing_technique: str = IndexTechniqueType.HIGH_QUALITY, embedding_model_provider: str = "openai", embedding_model: str = "text-embedding-ada-002", - index_struct_dict: dict | None = None, + index_struct_dict: dict[str, Any] | None = None, **kwargs, ) -> Mock: """ diff --git a/api/tests/unit_tests/services/workflow/test_draft_var_loader_simple.py b/api/tests/unit_tests/services/workflow/test_draft_var_loader_simple.py index 8525672da8..60beec7964 100644 --- a/api/tests/unit_tests/services/workflow/test_draft_var_loader_simple.py +++ b/api/tests/unit_tests/services/workflow/test_draft_var_loader_simple.py @@ -4,12 +4,12 @@ import json from unittest.mock import Mock, patch import pytest -from graphon.file import File, FileTransferMethod, FileType -from graphon.variables.segments import ObjectSegment, StringSegment -from graphon.variables.types import SegmentType from sqlalchemy import Engine from core.workflow.file_reference import build_file_reference +from graphon.file import File, FileTransferMethod, FileType +from graphon.variables.segments import ObjectSegment, StringSegment +from graphon.variables.types import SegmentType from models.model import UploadFile from models.workflow import WorkflowDraftVariable, WorkflowDraftVariableFile from services.workflow_draft_variable_service import DraftVarLoader diff --git a/api/tests/unit_tests/services/workflow/test_workflow_draft_variable_service.py b/api/tests/unit_tests/services/workflow/test_workflow_draft_variable_service.py index 
e7e72793a3..f6bdb6a60e 100644 --- a/api/tests/unit_tests/services/workflow/test_workflow_draft_variable_service.py +++ b/api/tests/unit_tests/services/workflow/test_workflow_draft_variable_service.py @@ -4,10 +4,6 @@ import uuid from unittest.mock import MagicMock, Mock, patch import pytest -from graphon.enums import BuiltinNodeTypes -from graphon.file import File, FileTransferMethod, FileType -from graphon.variables.segments import StringSegment -from graphon.variables.types import SegmentType from sqlalchemy import Engine from sqlalchemy.orm import Session @@ -17,6 +13,10 @@ from core.workflow.variable_prefixes import ( ENVIRONMENT_VARIABLE_NODE_ID, SYSTEM_VARIABLE_NODE_ID, ) +from graphon.enums import BuiltinNodeTypes +from graphon.file import File, FileTransferMethod, FileType +from graphon.variables.segments import StringSegment +from graphon.variables.types import SegmentType from libs.uuid_utils import uuidv7 from models.account import Account from models.enums import DraftVariableType diff --git a/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service.py b/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service.py index b8b073f75c..d570dce107 100644 --- a/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service.py +++ b/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service.py @@ -3,17 +3,16 @@ import queue from collections.abc import Sequence from dataclasses import dataclass from datetime import UTC, datetime -from itertools import cycle from threading import Event import pytest -from graphon.entities.pause_reason import HumanInputRequired -from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus -from graphon.runtime import GraphRuntimeState, VariablePool from core.app.app_config.entities import WorkflowUIBasedAppConfig from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity from core.app.layers.pause_state_persist_layer import WorkflowResumptionContext, _WorkflowGenerateEntityWrapper +from graphon.entities.pause_reason import HumanInputRequired +from graphon.enums import WorkflowExecutionStatus, WorkflowNodeExecutionStatus +from graphon.runtime import GraphRuntimeState, VariablePool from models.enums import CreatorUserRole from models.model import AppMode from models.workflow import WorkflowRun @@ -223,577 +222,3 @@ def test_resolve_task_id_priority(context_task_id, buffered_task_id, expected) - buffer_state.task_id_ready.set() task_id = _resolve_task_id(resumption_context, buffer_state, "run-1", wait_timeout=0.0) assert task_id == expected - - -# === Merged from test_workflow_event_snapshot_service_additional.py === - - -import json -import queue -from collections.abc import Mapping -from dataclasses import dataclass -from datetime import UTC, datetime -from threading import Event -from types import SimpleNamespace -from typing import Any, cast -from unittest.mock import MagicMock - -import pytest -from graphon.enums import WorkflowExecutionStatus -from graphon.runtime import GraphRuntimeState, VariablePool -from sqlalchemy.orm import Session, sessionmaker - -from core.app.app_config.entities import WorkflowUIBasedAppConfig -from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity -from core.app.entities.task_entities import StreamEvent -from core.app.layers.pause_state_persist_layer import WorkflowResumptionContext, _WorkflowGenerateEntityWrapper -from models.enums import CreatorUserRole -from models.model import 
AppMode -from models.workflow import WorkflowRun -from repositories.entities.workflow_pause import WorkflowPauseEntity -from services import workflow_event_snapshot_service as service_module -from services.workflow_event_snapshot_service import BufferState, MessageContext, build_workflow_event_stream - - -def _build_workflow_run_additional(status: WorkflowExecutionStatus = WorkflowExecutionStatus.RUNNING) -> WorkflowRun: - return WorkflowRun( - id="run-1", - tenant_id="tenant-1", - app_id="app-1", - workflow_id="workflow-1", - type="workflow", - triggered_from="app-run", - version="v1", - graph=None, - inputs=json.dumps({"query": "hello"}), - status=status, - outputs=json.dumps({}), - error=None, - elapsed_time=1.2, - total_tokens=5, - total_steps=2, - created_by_role=CreatorUserRole.END_USER, - created_by="user-1", - created_at=datetime(2024, 1, 1, tzinfo=UTC), - ) - - -def _build_resumption_context_additional(task_id: str) -> WorkflowResumptionContext: - app_config = WorkflowUIBasedAppConfig( - tenant_id="tenant-1", - app_id="app-1", - app_mode=AppMode.WORKFLOW, - workflow_id="workflow-1", - ) - generate_entity = WorkflowAppGenerateEntity( - task_id=task_id, - app_config=app_config, - inputs={}, - files=[], - user_id="user-1", - stream=True, - invoke_from=InvokeFrom.EXPLORE, - call_depth=0, - workflow_execution_id="run-1", - ) - runtime_state = GraphRuntimeState(variable_pool=VariablePool(), start_at=0.0) - runtime_state.outputs = {"answer": "ok"} - wrapper = _WorkflowGenerateEntityWrapper(entity=generate_entity) - return WorkflowResumptionContext( - generate_entity=wrapper, - serialized_graph_runtime_state=runtime_state.dumps(), - ) - - -class _SessionContext: - def __init__(self, session: Any) -> None: - self._session = session - - def __enter__(self) -> Any: - return self._session - - def __exit__(self, exc_type: Any, exc: Any, tb: Any) -> bool: - return False - - -class _SessionMaker: - def __init__(self, session: Any) -> None: - self._session = session - - def __call__(self) -> _SessionContext: - return _SessionContext(self._session) - - -class _SubscriptionContext: - def __init__(self, subscription: Any) -> None: - self._subscription = subscription - - def __enter__(self) -> Any: - return self._subscription - - def __exit__(self, exc_type: Any, exc: Any, tb: Any) -> bool: - return False - - -class _Topic: - def __init__(self, subscription: Any) -> None: - self._subscription = subscription - - def subscribe(self) -> _SubscriptionContext: - return _SubscriptionContext(self._subscription) - - -class _StaticSubscription: - def receive(self, timeout: int = 1) -> None: - return None - - -@dataclass(frozen=True) -class _PauseEntity(WorkflowPauseEntity): - state: bytes - - @property - def id(self) -> str: - return "pause-1" - - @property - def workflow_execution_id(self) -> str: - return "run-1" - - @property - def resumed_at(self) -> datetime | None: - return None - - @property - def paused_at(self) -> datetime: - return datetime(2024, 1, 1, tzinfo=UTC) - - def get_state(self) -> bytes: - return self.state - - def get_pause_reasons(self) -> list[Any]: - return [] - - -def test_get_message_context_should_return_none_when_no_message() -> None: - # Arrange - session = SimpleNamespace(scalar=MagicMock(return_value=None)) - session_maker = _SessionMaker(session) - - # Act - result = service_module._get_message_context(cast(sessionmaker[Session], session_maker), "run-1") - - # Assert - assert result is None - - -def 
test_get_message_context_should_default_created_at_to_zero_when_message_has_no_timestamp() -> None: - # Arrange - message = SimpleNamespace( - id="msg-1", - conversation_id="conv-1", - created_at=None, - answer="answer", - ) - session = SimpleNamespace(scalar=MagicMock(return_value=message)) - session_maker = _SessionMaker(session) - - # Act - result = service_module._get_message_context(cast(sessionmaker[Session], session_maker), "run-1") - - # Assert - assert result is not None - assert result.created_at == 0 - assert result.message_id == "msg-1" - assert result.conversation_id == "conv-1" - assert result.answer == "answer" - - -def test_load_resumption_context_should_return_none_when_pause_entity_missing() -> None: - # Arrange - - # Act - result = service_module._load_resumption_context(None) - - # Assert - assert result is None - - -def test_load_resumption_context_should_return_none_when_pause_entity_state_is_invalid() -> None: - # Arrange - pause_entity = _PauseEntity(state=b"not-a-valid-state") - - # Act - result = service_module._load_resumption_context(pause_entity) - - # Assert - assert result is None - - -def test_load_resumption_context_should_parse_valid_state_into_context() -> None: - # Arrange - context = _build_resumption_context_additional(task_id="task-ctx") - pause_entity = _PauseEntity(state=context.dumps().encode()) - - # Act - result = service_module._load_resumption_context(pause_entity) - - # Assert - assert result is not None - assert result.get_generate_entity().task_id == "task-ctx" - - -def test_resolve_task_id_should_return_workflow_run_id_when_buffer_state_is_missing() -> None: - # Arrange - - # Act - result = service_module._resolve_task_id( - resumption_context=None, - buffer_state=None, - workflow_run_id="run-1", - ) - - # Assert - assert result == "run-1" - - -@pytest.mark.parametrize( - ("payload", "expected"), - [ - (b'{"event":"node_started"}', {"event": "node_started"}), - (b"invalid-json", None), - (b"[]", None), - ], -) -def test_parse_event_message_should_parse_only_json_object( - payload: bytes, - expected: dict[str, Any] | None, -) -> None: - # Arrange - - # Act - result = service_module._parse_event_message(payload) - - # Assert - assert result == expected - - -def test_is_terminal_event_should_recognize_finished_and_optional_paused_events() -> None: - # Arrange - finished_event = {"event": StreamEvent.WORKFLOW_FINISHED.value} - paused_event = {"event": StreamEvent.WORKFLOW_PAUSED.value} - - # Act - is_finished = service_module._is_terminal_event(finished_event, include_paused=False) - paused_without_flag = service_module._is_terminal_event(paused_event, include_paused=False) - paused_with_flag = service_module._is_terminal_event(paused_event, include_paused=True) - - # Assert - assert is_finished is True - assert paused_without_flag is False - assert paused_with_flag is True - assert service_module._is_terminal_event(StreamEvent.PING.value, include_paused=True) is False - - -def test_apply_message_context_should_update_payload_when_context_exists() -> None: - # Arrange - payload: dict[str, Any] = {"event": "workflow_started"} - context = MessageContext(conversation_id="conv-1", message_id="msg-1", created_at=1700000000) - - # Act - service_module._apply_message_context(payload, context) - - # Assert - assert payload["conversation_id"] == "conv-1" - assert payload["message_id"] == "msg-1" - assert payload["created_at"] == 1700000000 - - -def test_start_buffering_should_capture_task_id_and_enqueue_event() -> None: - # Arrange - class Subscription: 
- def __init__(self) -> None: - self._calls = 0 - - def receive(self, timeout: int = 1) -> bytes | None: - self._calls += 1 - if self._calls == 1: - return b'{"event":"node_started","task_id":"task-1"}' - return None - - subscription = Subscription() - - # Act - buffer_state = service_module._start_buffering(subscription) - ready = buffer_state.task_id_ready.wait(timeout=1) - event = buffer_state.queue.get(timeout=1) - buffer_state.stop_event.set() - finished = buffer_state.done_event.wait(timeout=1) - - # Assert - assert ready is True - assert finished is True - assert buffer_state.task_id_hint == "task-1" - assert event["event"] == "node_started" - - -def test_start_buffering_should_drop_old_event_when_queue_is_full( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - class QueueWithSingleFull: - def __init__(self) -> None: - self._first_put = True - self.items: list[dict[str, Any]] = [{"event": "old"}] - - def put_nowait(self, item: dict[str, Any]) -> None: - if self._first_put: - self._first_put = False - raise queue.Full - self.items.append(item) - - def get_nowait(self) -> dict[str, Any]: - if not self.items: - raise queue.Empty - return self.items.pop(0) - - def empty(self) -> bool: - return len(self.items) == 0 - - fake_queue = QueueWithSingleFull() - monkeypatch.setattr(service_module.queue, "Queue", lambda maxsize=2048: fake_queue) - - class Subscription: - def __init__(self) -> None: - self._calls = 0 - - def receive(self, timeout: int = 1) -> bytes | None: - self._calls += 1 - if self._calls == 1: - return b'{"event":"node_started","task_id":"task-2"}' - return None - - subscription = Subscription() - - # Act - buffer_state = service_module._start_buffering(subscription) - ready = buffer_state.task_id_ready.wait(timeout=1) - buffer_state.stop_event.set() - finished = buffer_state.done_event.wait(timeout=1) - - # Assert - assert ready is True - assert finished is True - assert fake_queue.items[-1]["task_id"] == "task-2" - - -def test_start_buffering_should_set_done_event_when_subscription_raises() -> None: - # Arrange - class Subscription: - def receive(self, timeout: int = 1) -> bytes | None: - raise RuntimeError("subscription failure") - - subscription = Subscription() - - # Act - buffer_state = service_module._start_buffering(subscription) - finished = buffer_state.done_event.wait(timeout=1) - - # Assert - assert finished is True - - -def test_build_workflow_event_stream_should_emit_ping_and_terminal_snapshot_event( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - workflow_run = _build_workflow_run_additional(status=WorkflowExecutionStatus.RUNNING) - topic = _Topic(_StaticSubscription()) - workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock()) - node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) - factory = SimpleNamespace( - create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), - create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), - ) - monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) - monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) - monkeypatch.setattr( - service_module, - "_get_message_context", - MagicMock(return_value=MessageContext("conv-1", "msg-1", 1700000000)), - ) - monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) - buffer_state = BufferState( - queue=queue.Queue(), - stop_event=Event(), - done_event=Event(), 
- task_id_ready=Event(), - task_id_hint="task-1", - ) - monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) - monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) - monkeypatch.setattr( - service_module, - "_build_snapshot_events", - MagicMock(return_value=[{"event": StreamEvent.WORKFLOW_FINISHED.value, "task_id": "task-1"}]), - ) - - # Act - events = list( - build_workflow_event_stream( - app_mode=AppMode.ADVANCED_CHAT, - workflow_run=workflow_run, - tenant_id="tenant-1", - app_id="app-1", - session_maker=MagicMock(), - ) - ) - - # Assert - assert events[0] == StreamEvent.PING.value - finished_event = cast(Mapping[str, Any], events[1]) - assert finished_event["event"] == StreamEvent.WORKFLOW_FINISHED.value - assert buffer_state.stop_event.is_set() is True - node_repo.get_execution_snapshots_by_workflow_run.assert_called_once() - called_kwargs = node_repo.get_execution_snapshots_by_workflow_run.call_args.kwargs - assert called_kwargs["workflow_run_id"] == "run-1" - - -def test_build_workflow_event_stream_should_emit_periodic_ping_and_stop_after_idle_timeout( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - workflow_run = _build_workflow_run_additional(status=WorkflowExecutionStatus.RUNNING) - topic = _Topic(_StaticSubscription()) - workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock()) - node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) - factory = SimpleNamespace( - create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), - create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), - ) - monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) - monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) - monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) - monkeypatch.setattr(service_module, "_build_snapshot_events", MagicMock(return_value=[])) - monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) - - class AlwaysEmptyQueue: - def empty(self) -> bool: - return False - - def get(self, timeout: int = 1) -> None: - raise queue.Empty - - buffer_state = BufferState( - queue=AlwaysEmptyQueue(), # type: ignore[arg-type] - stop_event=Event(), - done_event=Event(), - task_id_ready=Event(), - task_id_hint="task-1", - ) - monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) - time_values = cycle([0.0, 6.0, 21.0, 26.0]) - monkeypatch.setattr(service_module.time, "time", lambda: next(time_values)) - - # Act - events = list( - build_workflow_event_stream( - app_mode=AppMode.WORKFLOW, - workflow_run=workflow_run, - tenant_id="tenant-1", - app_id="app-1", - session_maker=MagicMock(), - idle_timeout=20.0, - ping_interval=5.0, - ) - ) - - # Assert - assert events == [StreamEvent.PING.value, StreamEvent.PING.value] - assert buffer_state.stop_event.is_set() is True - - -def test_build_workflow_event_stream_should_exit_when_buffer_done_and_empty( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - workflow_run = _build_workflow_run_additional(status=WorkflowExecutionStatus.RUNNING) - topic = _Topic(_StaticSubscription()) - workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock()) - node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) - factory = SimpleNamespace( - 
create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), - create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), - ) - monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) - monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) - monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) - monkeypatch.setattr(service_module, "_build_snapshot_events", MagicMock(return_value=[])) - monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) - buffer_state = BufferState( - queue=queue.Queue(), - stop_event=Event(), - done_event=Event(), - task_id_ready=Event(), - task_id_hint="task-1", - ) - buffer_state.done_event.set() - monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) - - # Act - events = list( - build_workflow_event_stream( - app_mode=AppMode.WORKFLOW, - workflow_run=workflow_run, - tenant_id="tenant-1", - app_id="app-1", - session_maker=MagicMock(), - ) - ) - - # Assert - assert events == [StreamEvent.PING.value] - assert buffer_state.stop_event.is_set() is True - - -def test_build_workflow_event_stream_should_continue_when_pause_loading_fails( - monkeypatch: pytest.MonkeyPatch, -) -> None: - # Arrange - workflow_run = _build_workflow_run_additional(status=WorkflowExecutionStatus.PAUSED) - topic = _Topic(_StaticSubscription()) - workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock(side_effect=RuntimeError("boom"))) - node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) - factory = SimpleNamespace( - create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), - create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), - ) - monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) - monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) - monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) - monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) - snapshot_builder = MagicMock(return_value=[{"event": StreamEvent.WORKFLOW_FINISHED.value}]) - monkeypatch.setattr(service_module, "_build_snapshot_events", snapshot_builder) - buffer_state = BufferState( - queue=queue.Queue(), - stop_event=Event(), - done_event=Event(), - task_id_ready=Event(), - task_id_hint="task-1", - ) - monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) - - # Act - events = list( - build_workflow_event_stream( - app_mode=AppMode.WORKFLOW, - workflow_run=workflow_run, - tenant_id="tenant-1", - app_id="app-1", - session_maker=MagicMock(), - ) - ) - - # Assert - assert events[0] == StreamEvent.PING.value - assert snapshot_builder.call_args.kwargs["pause_entity"] is None diff --git a/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service_additional.py b/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service_additional.py new file mode 100644 index 0000000000..d2634d7d7b --- /dev/null +++ b/api/tests/unit_tests/services/workflow/test_workflow_event_snapshot_service_additional.py @@ -0,0 +1,505 @@ +import json +import queue +from collections.abc import Mapping +from dataclasses import dataclass +from datetime import UTC, datetime +from itertools import cycle +from threading import Event 
+from types import SimpleNamespace +from typing import Any, cast +from unittest.mock import MagicMock + +import pytest +from sqlalchemy.orm import Session, sessionmaker + +from core.app.app_config.entities import WorkflowUIBasedAppConfig +from core.app.entities.app_invoke_entities import InvokeFrom, WorkflowAppGenerateEntity +from core.app.entities.task_entities import StreamEvent +from core.app.layers.pause_state_persist_layer import WorkflowResumptionContext, _WorkflowGenerateEntityWrapper +from graphon.enums import WorkflowExecutionStatus +from graphon.runtime import GraphRuntimeState, VariablePool +from models.enums import CreatorUserRole +from models.model import AppMode +from models.workflow import WorkflowRun +from repositories.entities.workflow_pause import WorkflowPauseEntity +from services import workflow_event_snapshot_service as service_module +from services.workflow_event_snapshot_service import BufferState, MessageContext, build_workflow_event_stream + + +def _build_workflow_run(status: WorkflowExecutionStatus = WorkflowExecutionStatus.RUNNING) -> WorkflowRun: + return WorkflowRun( + id="run-1", + tenant_id="tenant-1", + app_id="app-1", + workflow_id="workflow-1", + type="workflow", + triggered_from="app-run", + version="v1", + graph=None, + inputs=json.dumps({"query": "hello"}), + status=status, + outputs=json.dumps({}), + error=None, + elapsed_time=1.2, + total_tokens=5, + total_steps=2, + created_by_role=CreatorUserRole.END_USER, + created_by="user-1", + created_at=datetime(2024, 1, 1, tzinfo=UTC), + ) + + +def _build_resumption_context(task_id: str) -> WorkflowResumptionContext: + app_config = WorkflowUIBasedAppConfig( + tenant_id="tenant-1", + app_id="app-1", + app_mode=AppMode.WORKFLOW, + workflow_id="workflow-1", + ) + generate_entity = WorkflowAppGenerateEntity( + task_id=task_id, + app_config=app_config, + inputs={}, + files=[], + user_id="user-1", + stream=True, + invoke_from=InvokeFrom.EXPLORE, + call_depth=0, + workflow_execution_id="run-1", + ) + runtime_state = GraphRuntimeState(variable_pool=VariablePool(), start_at=0.0) + runtime_state.outputs = {"answer": "ok"} + wrapper = _WorkflowGenerateEntityWrapper(entity=generate_entity) + return WorkflowResumptionContext( + generate_entity=wrapper, + serialized_graph_runtime_state=runtime_state.dumps(), + ) + + +class _SessionContext: + def __init__(self, session: Any) -> None: + self._session = session + + def __enter__(self) -> Any: + return self._session + + def __exit__(self, exc_type: Any, exc: Any, tb: Any) -> bool: + return False + + +class _SessionMaker: + def __init__(self, session: Any) -> None: + self._session = session + + def __call__(self) -> _SessionContext: + return _SessionContext(self._session) + + +class _SubscriptionContext: + def __init__(self, subscription: Any) -> None: + self._subscription = subscription + + def __enter__(self) -> Any: + return self._subscription + + def __exit__(self, exc_type: Any, exc: Any, tb: Any) -> bool: + return False + + +class _Topic: + def __init__(self, subscription: Any) -> None: + self._subscription = subscription + + def subscribe(self) -> _SubscriptionContext: + return _SubscriptionContext(self._subscription) + + +class _StaticSubscription: + def receive(self, timeout: int = 1) -> None: + return None + + +@dataclass(frozen=True) +class _PauseEntity(WorkflowPauseEntity): + state: bytes + + @property + def id(self) -> str: + return "pause-1" + + @property + def workflow_execution_id(self) -> str: + return "run-1" + + @property + def resumed_at(self) -> datetime | 
None: + return None + + @property + def paused_at(self) -> datetime: + return datetime(2024, 1, 1, tzinfo=UTC) + + def get_state(self) -> bytes: + return self.state + + def get_pause_reasons(self) -> list[Any]: + return [] + + +class TestWorkflowEventSnapshotHelpers: + def test_get_message_context_should_return_none_when_no_message(self) -> None: + session = SimpleNamespace(scalar=MagicMock(return_value=None)) + session_maker = _SessionMaker(session) + + result = service_module._get_message_context(cast(sessionmaker[Session], session_maker), "run-1") + + assert result is None + + def test_get_message_context_should_default_created_at_to_zero_when_message_has_no_timestamp(self) -> None: + message = SimpleNamespace( + id="msg-1", + conversation_id="conv-1", + created_at=None, + answer="answer", + ) + session = SimpleNamespace(scalar=MagicMock(return_value=message)) + session_maker = _SessionMaker(session) + + result = service_module._get_message_context(cast(sessionmaker[Session], session_maker), "run-1") + + assert result is not None + assert result.created_at == 0 + assert result.message_id == "msg-1" + assert result.conversation_id == "conv-1" + assert result.answer == "answer" + + def test_load_resumption_context_should_return_none_when_pause_entity_missing(self) -> None: + assert service_module._load_resumption_context(None) is None + + def test_load_resumption_context_should_return_none_when_pause_entity_state_is_invalid(self) -> None: + pause_entity = _PauseEntity(state=b"not-a-valid-state") + assert service_module._load_resumption_context(pause_entity) is None + + def test_load_resumption_context_should_parse_valid_state_into_context(self) -> None: + context = _build_resumption_context(task_id="task-ctx") + pause_entity = _PauseEntity(state=context.dumps().encode()) + + result = service_module._load_resumption_context(pause_entity) + + assert result is not None + assert result.get_generate_entity().task_id == "task-ctx" + + def test_resolve_task_id_should_return_workflow_run_id_when_buffer_state_is_missing(self) -> None: + result = service_module._resolve_task_id( + resumption_context=None, + buffer_state=None, + workflow_run_id="run-1", + ) + + assert result == "run-1" + + @pytest.mark.parametrize( + ("payload", "expected"), + [ + (b'{"event":"node_started"}', {"event": "node_started"}), + (b"invalid-json", None), + (b"[]", None), + ], + ) + def test_parse_event_message_should_parse_only_json_object( + self, + payload: bytes, + expected: dict[str, Any] | None, + ) -> None: + result = service_module._parse_event_message(payload) + assert result == expected + + def test_is_terminal_event_should_recognize_finished_and_optional_paused_events(self) -> None: + finished_event = {"event": StreamEvent.WORKFLOW_FINISHED.value} + paused_event = {"event": StreamEvent.WORKFLOW_PAUSED.value} + + is_finished = service_module._is_terminal_event(finished_event, include_paused=False) + paused_without_flag = service_module._is_terminal_event(paused_event, include_paused=False) + paused_with_flag = service_module._is_terminal_event(paused_event, include_paused=True) + + assert is_finished is True + assert paused_without_flag is False + assert paused_with_flag is True + assert service_module._is_terminal_event(StreamEvent.PING.value, include_paused=True) is False + + def test_apply_message_context_should_update_payload_when_context_exists(self) -> None: + payload: dict[str, Any] = {"event": "workflow_started"} + context = MessageContext(conversation_id="conv-1", message_id="msg-1", 
created_at=1700000000) + + service_module._apply_message_context(payload, context) + + assert payload["conversation_id"] == "conv-1" + assert payload["message_id"] == "msg-1" + assert payload["created_at"] == 1700000000 + + def test_start_buffering_should_capture_task_id_and_enqueue_event(self) -> None: + class Subscription: + def __init__(self) -> None: + self._calls = 0 + + def receive(self, timeout: int = 1) -> bytes | None: + self._calls += 1 + if self._calls == 1: + return b'{"event":"node_started","task_id":"task-1"}' + return None + + subscription = Subscription() + + buffer_state = service_module._start_buffering(subscription) + ready = buffer_state.task_id_ready.wait(timeout=1) + event = buffer_state.queue.get(timeout=1) + buffer_state.stop_event.set() + finished = buffer_state.done_event.wait(timeout=1) + + assert ready is True + assert finished is True + assert buffer_state.task_id_hint == "task-1" + assert event["event"] == "node_started" + + def test_start_buffering_should_drop_old_event_when_queue_is_full( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + class QueueWithSingleFull: + def __init__(self) -> None: + self._first_put = True + self.items: list[dict[str, Any]] = [{"event": "old"}] + + def put_nowait(self, item: dict[str, Any]) -> None: + if self._first_put: + self._first_put = False + raise queue.Full + self.items.append(item) + + def get_nowait(self) -> dict[str, Any]: + if not self.items: + raise queue.Empty + return self.items.pop(0) + + def empty(self) -> bool: + return len(self.items) == 0 + + fake_queue = QueueWithSingleFull() + monkeypatch.setattr(service_module.queue, "Queue", lambda maxsize=2048: fake_queue) + + class Subscription: + def __init__(self) -> None: + self._calls = 0 + + def receive(self, timeout: int = 1) -> bytes | None: + self._calls += 1 + if self._calls == 1: + return b'{"event":"node_started","task_id":"task-2"}' + return None + + subscription = Subscription() + + buffer_state = service_module._start_buffering(subscription) + ready = buffer_state.task_id_ready.wait(timeout=1) + buffer_state.stop_event.set() + finished = buffer_state.done_event.wait(timeout=1) + + assert ready is True + assert finished is True + assert fake_queue.items[-1]["task_id"] == "task-2" + + def test_start_buffering_should_set_done_event_when_subscription_raises(self) -> None: + class Subscription: + def receive(self, timeout: int = 1) -> bytes | None: + raise RuntimeError("subscription failure") + + subscription = Subscription() + buffer_state = service_module._start_buffering(subscription) + + assert buffer_state.done_event.wait(timeout=1) is True + + +class TestBuildWorkflowEventStream: + def test_build_workflow_event_stream_should_emit_ping_and_terminal_snapshot_event( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + workflow_run = _build_workflow_run(status=WorkflowExecutionStatus.RUNNING) + topic = _Topic(_StaticSubscription()) + workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock()) + node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) + factory = SimpleNamespace( + create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), + create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), + ) + monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) + monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) + monkeypatch.setattr( + service_module, + "_get_message_context", + 
MagicMock(return_value=MessageContext("conv-1", "msg-1", 1700000000)), + ) + monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) + buffer_state = BufferState( + queue=queue.Queue(), + stop_event=Event(), + done_event=Event(), + task_id_ready=Event(), + task_id_hint="task-1", + ) + monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) + monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) + monkeypatch.setattr( + service_module, + "_build_snapshot_events", + MagicMock(return_value=[{"event": StreamEvent.WORKFLOW_FINISHED.value, "task_id": "task-1"}]), + ) + + events = list( + build_workflow_event_stream( + app_mode=AppMode.ADVANCED_CHAT, + workflow_run=workflow_run, + tenant_id="tenant-1", + app_id="app-1", + session_maker=MagicMock(), + ) + ) + + assert events[0] == StreamEvent.PING.value + finished_event = cast(Mapping[str, Any], events[1]) + assert finished_event["event"] == StreamEvent.WORKFLOW_FINISHED.value + assert buffer_state.stop_event.is_set() is True + node_repo.get_execution_snapshots_by_workflow_run.assert_called_once() + called_kwargs = node_repo.get_execution_snapshots_by_workflow_run.call_args.kwargs + assert called_kwargs["workflow_run_id"] == "run-1" + + def test_build_workflow_event_stream_should_emit_periodic_ping_and_stop_after_idle_timeout( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + workflow_run = _build_workflow_run(status=WorkflowExecutionStatus.RUNNING) + topic = _Topic(_StaticSubscription()) + workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock()) + node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) + factory = SimpleNamespace( + create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), + create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), + ) + monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) + monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) + monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) + monkeypatch.setattr(service_module, "_build_snapshot_events", MagicMock(return_value=[])) + monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) + + class AlwaysEmptyQueue: + def empty(self) -> bool: + return False + + def get(self, timeout: int = 1) -> None: + raise queue.Empty + + buffer_state = BufferState( + queue=AlwaysEmptyQueue(), # type: ignore[arg-type] + stop_event=Event(), + done_event=Event(), + task_id_ready=Event(), + task_id_hint="task-1", + ) + monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) + time_values = cycle([0.0, 6.0, 21.0, 26.0]) + monkeypatch.setattr(service_module.time, "time", lambda: next(time_values)) + + events = list( + build_workflow_event_stream( + app_mode=AppMode.WORKFLOW, + workflow_run=workflow_run, + tenant_id="tenant-1", + app_id="app-1", + session_maker=MagicMock(), + idle_timeout=20.0, + ping_interval=5.0, + ) + ) + + assert events == [StreamEvent.PING.value, StreamEvent.PING.value] + assert buffer_state.stop_event.is_set() is True + + def test_build_workflow_event_stream_should_exit_when_buffer_done_and_empty( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + workflow_run = _build_workflow_run(status=WorkflowExecutionStatus.RUNNING) + topic = _Topic(_StaticSubscription()) + 
workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock()) + node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) + factory = SimpleNamespace( + create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), + create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), + ) + monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) + monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) + monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) + monkeypatch.setattr(service_module, "_build_snapshot_events", MagicMock(return_value=[])) + monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) + buffer_state = BufferState( + queue=queue.Queue(), + stop_event=Event(), + done_event=Event(), + task_id_ready=Event(), + task_id_hint="task-1", + ) + buffer_state.done_event.set() + monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) + + events = list( + build_workflow_event_stream( + app_mode=AppMode.WORKFLOW, + workflow_run=workflow_run, + tenant_id="tenant-1", + app_id="app-1", + session_maker=MagicMock(), + ) + ) + + assert events == [StreamEvent.PING.value] + assert buffer_state.stop_event.is_set() is True + + def test_build_workflow_event_stream_should_continue_when_pause_loading_fails( + self, + monkeypatch: pytest.MonkeyPatch, + ) -> None: + workflow_run = _build_workflow_run(status=WorkflowExecutionStatus.PAUSED) + topic = _Topic(_StaticSubscription()) + workflow_run_repo = SimpleNamespace(get_workflow_pause=MagicMock(side_effect=RuntimeError("boom"))) + node_repo = SimpleNamespace(get_execution_snapshots_by_workflow_run=MagicMock(return_value=[])) + factory = SimpleNamespace( + create_api_workflow_run_repository=MagicMock(return_value=workflow_run_repo), + create_api_workflow_node_execution_repository=MagicMock(return_value=node_repo), + ) + monkeypatch.setattr(service_module, "DifyAPIRepositoryFactory", factory) + monkeypatch.setattr(service_module.MessageGenerator, "get_response_topic", MagicMock(return_value=topic)) + monkeypatch.setattr(service_module, "_load_resumption_context", MagicMock(return_value=None)) + monkeypatch.setattr(service_module, "_resolve_task_id", MagicMock(return_value="task-1")) + snapshot_builder = MagicMock(return_value=[{"event": StreamEvent.WORKFLOW_FINISHED.value}]) + monkeypatch.setattr(service_module, "_build_snapshot_events", snapshot_builder) + buffer_state = BufferState( + queue=queue.Queue(), + stop_event=Event(), + done_event=Event(), + task_id_ready=Event(), + task_id_hint="task-1", + ) + monkeypatch.setattr(service_module, "_start_buffering", MagicMock(return_value=buffer_state)) + + events = list( + build_workflow_event_stream( + app_mode=AppMode.WORKFLOW, + workflow_run=workflow_run, + tenant_id="tenant-1", + app_id="app-1", + session_maker=MagicMock(), + ) + ) + + assert events[0] == StreamEvent.PING.value + assert snapshot_builder.call_args.kwargs["pause_entity"] is None diff --git a/api/tests/unit_tests/services/workflow/test_workflow_human_input_delivery.py b/api/tests/unit_tests/services/workflow/test_workflow_human_input_delivery.py index 98d057e41f..d7192994b2 100644 --- a/api/tests/unit_tests/services/workflow/test_workflow_human_input_delivery.py +++ b/api/tests/unit_tests/services/workflow/test_workflow_human_input_delivery.py @@ -3,9 +3,6 @@ from types import SimpleNamespace 
from unittest.mock import MagicMock import pytest -from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter -from graphon.enums import BuiltinNodeTypes -from graphon.nodes.human_input.entities import HumanInputNodeData from sqlalchemy.orm import sessionmaker from core.workflow.human_input_compat import ( @@ -15,6 +12,9 @@ from core.workflow.human_input_compat import ( ExternalRecipient, MemberRecipient, ) +from graphon.entities.graph_config import NodeConfigDict, NodeConfigDictAdapter +from graphon.enums import BuiltinNodeTypes +from graphon.nodes.human_input.entities import HumanInputNodeData from services import workflow_service as workflow_service_module from services.workflow_service import WorkflowService diff --git a/api/tests/unit_tests/tasks/test_human_input_timeout_tasks.py b/api/tests/unit_tests/tasks/test_human_input_timeout_tasks.py index 7119217e94..591da56f49 100644 --- a/api/tests/unit_tests/tasks/test_human_input_timeout_tasks.py +++ b/api/tests/unit_tests/tasks/test_human_input_timeout_tasks.py @@ -5,8 +5,8 @@ from types import SimpleNamespace from typing import Any import pytest -from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus +from graphon.nodes.human_input.enums import HumanInputFormKind, HumanInputFormStatus from tasks import human_input_timeout_tasks as task_module diff --git a/api/tests/unit_tests/tools/test_mcp_tool.py b/api/tests/unit_tests/tools/test_mcp_tool.py index 68359ba078..689b973097 100644 --- a/api/tests/unit_tests/tools/test_mcp_tool.py +++ b/api/tests/unit_tests/tools/test_mcp_tool.py @@ -1,9 +1,9 @@ import base64 from decimal import Decimal +from typing import Any from unittest.mock import Mock, patch import pytest -from graphon.model_runtime.entities.llm_entities import LLMUsage from core.mcp.types import ( AudioContent, @@ -18,9 +18,10 @@ from core.tools.__base.tool_runtime import ToolRuntime from core.tools.entities.common_entities import I18nObject from core.tools.entities.tool_entities import ToolEntity, ToolIdentity, ToolInvokeMessage from core.tools.mcp_tool.tool import MCPTool +from graphon.model_runtime.entities.llm_entities import LLMUsage -def _make_mcp_tool(output_schema: dict | None = None) -> MCPTool: +def _make_mcp_tool(output_schema: dict[str, Any] | None = None) -> MCPTool: identity = ToolIdentity( author="test", name="test_mcp_tool", diff --git a/api/tests/unit_tests/utils/structured_output_parser/test_structured_output_parser.py b/api/tests/unit_tests/utils/structured_output_parser/test_structured_output_parser.py index ffa6833524..c166a946d9 100644 --- a/api/tests/unit_tests/utils/structured_output_parser/test_structured_output_parser.py +++ b/api/tests/unit_tests/utils/structured_output_parser/test_structured_output_parser.py @@ -2,6 +2,9 @@ from decimal import Decimal from unittest.mock import MagicMock, patch import pytest + +from core.llm_generator.output_parser.errors import OutputParserError +from core.llm_generator.output_parser.structured_output import invoke_llm_with_structured_output from graphon.model_runtime.entities.llm_entities import ( LLMResult, LLMResultChunk, @@ -18,9 +21,6 @@ from graphon.model_runtime.entities.message_entities import ( ) from graphon.model_runtime.entities.model_entities import AIModelEntity, ModelType -from core.llm_generator.output_parser.errors import OutputParserError -from core.llm_generator.output_parser.structured_output import invoke_llm_with_structured_output - def create_mock_usage(prompt_tokens: int = 10, completion_tokens: 
int = 5) -> LLMUsage: """Create a mock LLMUsage with all required fields""" diff --git a/api/tests/workflow_test_utils.py b/api/tests/workflow_test_utils.py index d33ac2c710..1415bb1d52 100644 --- a/api/tests/workflow_test_utils.py +++ b/api/tests/workflow_test_utils.py @@ -1,13 +1,12 @@ from collections.abc import Mapping from typing import Any +from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom, build_dify_run_context +from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool from graphon.entities import GraphInitParams from graphon.runtime import VariablePool from graphon.variables.variables import Variable -from core.app.entities.app_invoke_entities import InvokeFrom, UserFrom, build_dify_run_context -from core.workflow.variable_pool_initializer import add_node_inputs_to_pool, add_variables_to_pool - def build_test_run_context( *, diff --git a/api/uv.lock b/api/uv.lock index 38a2ea21e2..77ba905a67 100644 --- a/api/uv.lock +++ b/api/uv.lock @@ -42,6 +42,7 @@ members = [ "dify-vdb-vikingdb", "dify-vdb-weaviate", ] +overrides = [{ name = "pyarrow", specifier = ">=18.0.0" }] [[package]] name = "abnf" @@ -370,14 +371,14 @@ wheels = [ [[package]] name = "authlib" -version = "1.6.9" +version = "1.6.11" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "cryptography" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/af/98/00d3dd826d46959ad8e32af2dbb2398868fd9fd0683c26e56d0789bd0e68/authlib-1.6.9.tar.gz", hash = "sha256:d8f2421e7e5980cc1ddb4e32d3f5fa659cfaf60d8eaf3281ebed192e4ab74f04", size = 165134, upload-time = "2026-03-02T07:44:01.998Z" } +sdist = { url = "https://files.pythonhosted.org/packages/28/10/b325d58ffe86815b399334a101e63bc6fa4e1953921cb23703b48a0a0220/authlib-1.6.11.tar.gz", hash = "sha256:64db35b9b01aeccb4715a6c9a6613a06f2bd7be2ab9d2eb89edd1dfc7580a38f", size = 165359, upload-time = "2026-04-16T07:22:50.279Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/53/23/b65f568ed0c22f1efacb744d2db1a33c8068f384b8c9b482b52ebdbc3ef6/authlib-1.6.9-py2.py3-none-any.whl", hash = "sha256:f08b4c14e08f0861dc18a32357b33fbcfd2ea86cfe3fe149484b4d764c4a0ac3", size = 244197, upload-time = "2026-03-02T07:44:00.307Z" }, + { url = "https://files.pythonhosted.org/packages/57/2f/55fca558f925a51db046e5b929deb317ddb05afed74b22d89f4eca578980/authlib-1.6.11-py2.py3-none-any.whl", hash = "sha256:c8687a9a26451c51a34a06fa17bb97cb15bba46a6a626755e2d7f50da8bff3e3", size = 244469, upload-time = "2026-04-16T07:22:48.413Z" }, ] [[package]] @@ -536,6 +537,15 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/1a/39/47f9197bdd44df24d67ac8893641e16f386c984a0619ef2ee4c51fbbc019/beautifulsoup4-4.14.3-py3-none-any.whl", hash = "sha256:0918bfe44902e6ad8d57732ba310582e98da931428d231a5ecb9e7c703a735bb", size = 107721, upload-time = "2025-11-30T15:08:24.087Z" }, ] +[[package]] +name = "bidict" +version = "0.23.1" +source = { registry = "https://pypi.org/simple" } +sdist = { url = "https://files.pythonhosted.org/packages/9a/6e/026678aa5a830e07cd9498a05d3e7e650a4f56a42f267a53d22bcda1bdc9/bidict-0.23.1.tar.gz", hash = "sha256:03069d763bc387bbd20e7d49914e75fc4132a41937fa3405417e1a5a2d006d71", size = 29093, upload-time = "2024-02-18T19:09:05.748Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/99/37/e8730c3587a65eb5645d4aba2d27aae48e8003614d6aaf15dda67f702f1f/bidict-0.23.1-py3-none-any.whl", hash = "sha256:5dae8d4d79b552a71cbabc7deb25dfe8ce710b17ff41711e13010ead2abfc3e5", size = 32764, 
upload-time = "2024-02-18T19:09:04.156Z" }, +] + [[package]] name = "billiard" version = "4.2.3" @@ -642,24 +652,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/57/b7/f4a051cefaf76930c77558b31646bcce7e9b3fbdcbc89e4073783e961519/botocore_stubs-1.41.3-py3-none-any.whl", hash = "sha256:6ab911bd9f7256f1dcea2e24a4af7ae0f9f07e83d0a760bba37f028f4a2e5589", size = 66749, upload-time = "2025-11-24T20:29:26.142Z" }, ] -[[package]] -name = "bottleneck" -version = "1.6.0" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "numpy" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/14/d8/6d641573e210768816023a64966d66463f2ce9fc9945fa03290c8a18f87c/bottleneck-1.6.0.tar.gz", hash = "sha256:028d46ee4b025ad9ab4d79924113816f825f62b17b87c9e1d0d8ce144a4a0e31", size = 104311, upload-time = "2025-09-08T16:30:38.617Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/8d/72/7e3593a2a3dd69ec831a9981a7b1443647acb66a5aec34c1620a5f7f8498/bottleneck-1.6.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:3bb16a16a86a655fdbb34df672109a8a227bb5f9c9cf5bb8ae400a639bc52fa3", size = 100515, upload-time = "2025-09-08T16:29:55.141Z" }, - { url = "https://files.pythonhosted.org/packages/b5/d4/e7bbea08f4c0f0bab819d38c1a613da5f194fba7b19aae3e2b3a27e78886/bottleneck-1.6.0-cp312-cp312-manylinux1_x86_64.manylinux_2_28_x86_64.manylinux_2_5_x86_64.whl", hash = "sha256:0fbf5d0787af9aee6cef4db9cdd14975ce24bd02e0cc30155a51411ebe2ff35f", size = 377451, upload-time = "2025-09-08T16:29:56.718Z" }, - { url = "https://files.pythonhosted.org/packages/fe/80/a6da430e3b1a12fd85f9fe90d3ad8fe9a527ecb046644c37b4b3f4baacfc/bottleneck-1.6.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d08966f4a22384862258940346a72087a6f7cebb19038fbf3a3f6690ee7fd39f", size = 368303, upload-time = "2025-09-08T16:29:57.834Z" }, - { url = "https://files.pythonhosted.org/packages/30/11/abd30a49f3251f4538430e5f876df96f2b39dabf49e05c5836820d2c31fe/bottleneck-1.6.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:604f0b898b43b7bc631c564630e936a8759d2d952641c8b02f71e31dbcd9deaa", size = 361232, upload-time = "2025-09-08T16:29:59.104Z" }, - { url = "https://files.pythonhosted.org/packages/1d/ac/1c0e09d8d92b9951f675bd42463ce76c3c3657b31c5bf53ca1f6dd9eccff/bottleneck-1.6.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:d33720bad761e642abc18eda5f188ff2841191c9f63f9d0c052245decc0faeb9", size = 373234, upload-time = "2025-09-08T16:30:00.488Z" }, - { url = "https://files.pythonhosted.org/packages/fb/ea/382c572ae3057ba885d484726bb63629d1f63abedf91c6cd23974eb35a9b/bottleneck-1.6.0-cp312-cp312-win32.whl", hash = "sha256:a1e5907ec2714efbe7075d9207b58c22ab6984a59102e4ecd78dced80dab8374", size = 108020, upload-time = "2025-09-08T16:30:01.773Z" }, - { url = "https://files.pythonhosted.org/packages/48/ad/d71da675eef85ac153eef5111ca0caa924548c9591da00939bcabba8de8e/bottleneck-1.6.0-cp312-cp312-win_amd64.whl", hash = "sha256:81e3822499f057a917b7d3972ebc631ac63c6bbcc79ad3542a66c4c40634e3a6", size = 113493, upload-time = "2025-09-08T16:30:02.872Z" }, -] - [[package]] name = "brotli" version = "1.2.0" @@ -694,18 +686,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/07/6b/6e92009df3b8b7272f85a0992b306b61c34b7ea1c4776643746e61c380ac/brotlicffi-1.2.0.0-cp38-abi3-win_amd64.whl", hash = "sha256:f139a7cdfe4ae7859513067b736eb44d19fae1186f9e99370092f6915216451b", size = 378586, upload-time = "2025-11-21T18:17:50.531Z" }, ] -[[package]] -name = 
"bs4" -version = "0.0.2" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "beautifulsoup4" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/c9/aa/4acaf814ff901145da37332e05bb510452ebed97bc9602695059dd46ef39/bs4-0.0.2.tar.gz", hash = "sha256:a48685c58f50fe127722417bae83fe6badf500d54b55f7e39ffe43b798653925", size = 698, upload-time = "2024-01-17T18:15:47.371Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/51/bb/bf7aab772a159614954d84aa832c129624ba6c32faa559dfb200a534e50b/bs4-0.0.2-py2.py3-none-any.whl", hash = "sha256:abf8742c0805ef7f662dce4b51cca104cffe52b835238afc169142ab9b3fbccc", size = 1189, upload-time = "2024-01-17T18:15:48.613Z" }, -] - [[package]] name = "build" version = "1.3.0" @@ -1305,91 +1285,49 @@ version = "1.13.3" source = { virtual = "." } dependencies = [ { name = "aliyun-log-python-sdk" }, - { name = "apscheduler" }, { name = "arize-phoenix-otel" }, { name = "azure-identity" }, - { name = "beautifulsoup4" }, { name = "bleach" }, { name = "boto3" }, - { name = "bs4" }, - { name = "cachetools" }, { name = "celery" }, - { name = "charset-normalizer" }, { name = "croniter" }, { name = "fastopenapi", extra = ["flask"] }, - { name = "flask" }, { name = "flask-compress" }, { name = "flask-cors" }, { name = "flask-login" }, { name = "flask-migrate" }, { name = "flask-orjson" }, { name = "flask-restx" }, - { name = "flask-sqlalchemy" }, { name = "gevent" }, + { name = "gevent-websocket" }, { name = "gmpy2" }, - { name = "google-api-core" }, { name = "google-api-python-client" }, - { name = "google-auth" }, - { name = "google-auth-httplib2" }, { name = "google-cloud-aiplatform" }, - { name = "googleapis-common-protos" }, { name = "graphon" }, { name = "gunicorn" }, { name = "httpx", extra = ["socks"] }, { name = "httpx-sse" }, - { name = "jieba" }, { name = "json-repair" }, { name = "langfuse" }, { name = "langsmith" }, - { name = "litellm" }, - { name = "markdown" }, { name = "mlflow-skinny" }, - { name = "numpy" }, - { name = "openpyxl" }, - { name = "opentelemetry-api" }, { name = "opentelemetry-distro" }, - { name = "opentelemetry-exporter-otlp" }, - { name = "opentelemetry-exporter-otlp-proto-common" }, - { name = "opentelemetry-exporter-otlp-proto-grpc" }, - { name = "opentelemetry-exporter-otlp-proto-http" }, - { name = "opentelemetry-instrumentation" }, { name = "opentelemetry-instrumentation-celery" }, { name = "opentelemetry-instrumentation-flask" }, { name = "opentelemetry-instrumentation-httpx" }, { name = "opentelemetry-instrumentation-redis" }, { name = "opentelemetry-instrumentation-sqlalchemy" }, { name = "opentelemetry-propagator-b3" }, - { name = "opentelemetry-proto" }, - { name = "opentelemetry-sdk" }, - { name = "opentelemetry-semantic-conventions" }, - { name = "opentelemetry-util-http" }, { name = "opik" }, - { name = "packaging" }, - { name = "pandas", extra = ["excel", "output-formatting", "performance"] }, { name = "psycogreen" }, { name = "psycopg2-binary" }, - { name = "pycryptodome" }, - { name = "pydantic" }, - { name = "pydantic-settings" }, - { name = "pyjwt" }, - { name = "pypandoc" }, - { name = "pypdfium2" }, - { name = "python-docx" }, - { name = "python-dotenv" }, - { name = "pyyaml" }, + { name = "python-socketio" }, { name = "readabilipy" }, { name = "redis", extra = ["hiredis"] }, { name = "resend" }, { name = "sendgrid" }, - { name = "sentry-sdk", extra = ["flask"] }, - { name = "sqlalchemy" }, { name = "sseclient-py" }, - { name = "starlette" }, - { name = "tiktoken" }, - { 
name = "transformers" }, - { name = "unstructured", extra = ["docx", "epub", "md", "ppt", "pptx"] }, { name = "weave" }, - { name = "yarl" }, ] [package.dev-dependencies] @@ -1415,7 +1353,6 @@ dev = [ { name = "pytest-xdist" }, { name = "ruff" }, { name = "scipy-stubs" }, - { name = "sseclient-py" }, { name = "testcontainers" }, { name = "types-aiofiles" }, { name = "types-beautifulsoup4" }, @@ -1601,174 +1538,131 @@ vdb-xinference = [ [package.metadata] requires-dist = [ - { name = "aliyun-log-python-sdk", specifier = "~=0.9.44" }, - { name = "apscheduler", specifier = ">=3.11.2" }, + { name = "aliyun-log-python-sdk", specifier = ">=0.9.44,<1.0.0" }, { name = "arize-phoenix-otel", specifier = "~=0.15.0" }, - { name = "azure-identity", specifier = "==1.25.3" }, - { name = "beautifulsoup4", specifier = "==4.14.3" }, - { name = "bleach", specifier = "~=6.3.0" }, - { name = "boto3", specifier = "==1.42.88" }, - { name = "bs4", specifier = "~=0.0.1" }, - { name = "cachetools", specifier = "~=7.0.5" }, - { name = "celery", specifier = "~=5.6.3" }, - { name = "charset-normalizer", specifier = ">=3.4.7" }, + { name = "azure-identity", specifier = ">=1.25.3,<2.0.0" }, + { name = "bleach", specifier = ">=6.3.0" }, + { name = "boto3", specifier = ">=1.42.88" }, + { name = "celery", specifier = ">=5.6.3" }, { name = "croniter", specifier = ">=6.2.2" }, - { name = "fastopenapi", extras = ["flask"], specifier = ">=0.7.0" }, - { name = "flask", specifier = "~=3.1.3" }, - { name = "flask-compress", specifier = ">=1.24,<1.25" }, - { name = "flask-cors", specifier = "~=6.0.2" }, - { name = "flask-login", specifier = "~=0.6.3" }, - { name = "flask-migrate", specifier = "~=4.1.0" }, - { name = "flask-orjson", specifier = "~=2.0.0" }, - { name = "flask-restx", specifier = "~=1.3.2" }, - { name = "flask-sqlalchemy", specifier = "~=3.1.1" }, - { name = "gevent", specifier = "~=26.4.0" }, - { name = "gmpy2", specifier = "~=2.3.0" }, - { name = "google-api-core", specifier = ">=2.30.3" }, - { name = "google-api-python-client", specifier = "==2.194.0" }, - { name = "google-auth", specifier = ">=2.49.2" }, - { name = "google-auth-httplib2", specifier = "==0.3.1" }, - { name = "google-cloud-aiplatform", specifier = ">=1.147.0" }, - { name = "googleapis-common-protos", specifier = ">=1.74.0" }, - { name = "graphon", specifier = ">=0.1.2" }, - { name = "gunicorn", specifier = "~=25.3.0" }, - { name = "httpx", extras = ["socks"], specifier = "~=0.28.1" }, + { name = "fastopenapi", extras = ["flask"], specifier = "~=0.7.0" }, + { name = "flask-compress", specifier = ">=1.24,<2.0.0" }, + { name = "flask-cors", specifier = ">=6.0.2" }, + { name = "flask-login", specifier = ">=0.6.3,<1.0.0" }, + { name = "flask-migrate", specifier = ">=4.1.0,<5.0.0" }, + { name = "flask-orjson", specifier = ">=2.0.0,<3.0.0" }, + { name = "flask-restx", specifier = ">=1.3.2,<2.0.0" }, + { name = "gevent", specifier = ">=26.4.0" }, + { name = "gevent-websocket", specifier = ">=0.10.1" }, + { name = "gmpy2", specifier = ">=2.3.0" }, + { name = "google-api-python-client", specifier = ">=2.194.0" }, + { name = "google-cloud-aiplatform", specifier = ">=1.147.0,<2.0.0" }, + { name = "graphon", specifier = "~=0.1.2" }, + { name = "gunicorn", specifier = ">=25.3.0" }, + { name = "httpx", extras = ["socks"], specifier = ">=0.28.1,<1.0.0" }, { name = "httpx-sse", specifier = "~=0.4.0" }, - { name = "jieba", specifier = "==0.42.1" }, - { name = "json-repair", specifier = ">=0.59.2" }, + { name = "json-repair", specifier = "~=0.59.2" }, { name = 
"langfuse", specifier = ">=4.2.0,<5.0.0" }, - { name = "langsmith", specifier = "~=0.7.30" }, - { name = "litellm", specifier = "==1.83.0" }, - { name = "markdown", specifier = "~=3.10.2" }, - { name = "mlflow-skinny", specifier = ">=3.11.1" }, - { name = "numpy", specifier = "~=2.4.4" }, - { name = "openpyxl", specifier = "~=3.1.5" }, - { name = "opentelemetry-api", specifier = "==1.41.0" }, - { name = "opentelemetry-distro", specifier = "==0.62b0" }, - { name = "opentelemetry-exporter-otlp", specifier = "==1.41.0" }, - { name = "opentelemetry-exporter-otlp-proto-common", specifier = "==1.41.0" }, - { name = "opentelemetry-exporter-otlp-proto-grpc", specifier = "==1.41.0" }, - { name = "opentelemetry-exporter-otlp-proto-http", specifier = "==1.41.0" }, - { name = "opentelemetry-instrumentation", specifier = "==0.62b0" }, - { name = "opentelemetry-instrumentation-celery", specifier = "==0.62b0" }, - { name = "opentelemetry-instrumentation-flask", specifier = "==0.62b0" }, - { name = "opentelemetry-instrumentation-httpx", specifier = "==0.62b0" }, - { name = "opentelemetry-instrumentation-redis", specifier = "==0.62b0" }, - { name = "opentelemetry-instrumentation-sqlalchemy", specifier = "==0.62b0" }, - { name = "opentelemetry-propagator-b3", specifier = "==1.41.0" }, - { name = "opentelemetry-proto", specifier = "==1.41.0" }, - { name = "opentelemetry-sdk", specifier = "==1.41.0" }, - { name = "opentelemetry-semantic-conventions", specifier = "==0.62b0" }, - { name = "opentelemetry-util-http", specifier = "==0.62b0" }, + { name = "langsmith", specifier = ">=0.7.31,<1.0.0" }, + { name = "mlflow-skinny", specifier = ">=3.11.1,<4.0.0" }, + { name = "opentelemetry-distro", specifier = ">=0.62b0,<1.0.0" }, + { name = "opentelemetry-instrumentation-celery", specifier = ">=0.62b0,<1.0.0" }, + { name = "opentelemetry-instrumentation-flask", specifier = ">=0.62b0,<1.0.0" }, + { name = "opentelemetry-instrumentation-httpx", specifier = ">=0.62b0,<1.0.0" }, + { name = "opentelemetry-instrumentation-redis", specifier = ">=0.62b0,<1.0.0" }, + { name = "opentelemetry-instrumentation-sqlalchemy", specifier = ">=0.62b0,<1.0.0" }, + { name = "opentelemetry-propagator-b3", specifier = ">=1.41.0,<2.0.0" }, { name = "opik", specifier = "~=1.11.2" }, - { name = "packaging", specifier = "~=26.0" }, - { name = "pandas", extras = ["excel", "output-formatting", "performance"], specifier = "~=3.0.2" }, - { name = "psycogreen", specifier = "~=1.0.2" }, - { name = "psycopg2-binary", specifier = "~=2.9.11" }, - { name = "pycryptodome", specifier = "==3.23.0" }, - { name = "pydantic", specifier = "~=2.12.5" }, - { name = "pydantic-settings", specifier = "~=2.13.1" }, - { name = "pyjwt", specifier = "~=2.12.1" }, - { name = "pypandoc", specifier = "~=1.13" }, - { name = "pypdfium2", specifier = "==5.6.0" }, - { name = "python-docx", specifier = "~=1.2.0" }, - { name = "python-dotenv", specifier = "==1.2.2" }, - { name = "pyyaml", specifier = "~=6.0.1" }, - { name = "readabilipy", specifier = "~=0.3.0" }, - { name = "redis", extras = ["hiredis"], specifier = "~=7.4.0" }, - { name = "resend", specifier = "~=2.27.0" }, - { name = "sendgrid", specifier = "~=6.12.5" }, - { name = "sentry-sdk", extras = ["flask"], specifier = "~=2.57.0" }, - { name = "sqlalchemy", specifier = "~=2.0.49" }, - { name = "sseclient-py", specifier = "~=1.9.0" }, - { name = "starlette", specifier = "==1.0.0" }, - { name = "tiktoken", specifier = "~=0.12.0" }, - { name = "transformers", specifier = "~=5.3.0" }, - { name = "unstructured", extras = 
["docx", "epub", "md", "ppt", "pptx"], specifier = "~=0.21.5" }, - { name = "weave", specifier = ">=0.52.36" }, - { name = "yarl", specifier = "~=1.23.0" }, + { name = "psycogreen", specifier = ">=1.0.2" }, + { name = "psycopg2-binary", specifier = ">=2.9.11" }, + { name = "python-socketio", specifier = ">=5.13.0" }, + { name = "readabilipy", specifier = ">=0.3.0,<1.0.0" }, + { name = "redis", extras = ["hiredis"], specifier = ">=7.4.0" }, + { name = "resend", specifier = ">=2.27.0,<3.0.0" }, + { name = "sendgrid", specifier = ">=6.12.5" }, + { name = "sseclient-py", specifier = ">=1.8.0" }, + { name = "weave", specifier = ">=0.52.36,<1.0.0" }, ] [package.metadata.requires-dev] dev = [ - { name = "basedpyright", specifier = "~=1.39.0" }, + { name = "basedpyright", specifier = ">=1.39.0" }, { name = "boto3-stubs", specifier = ">=1.42.88" }, { name = "celery-types", specifier = ">=0.23.0" }, - { name = "coverage", specifier = "~=7.13.4" }, - { name = "dotenv-linter", specifier = "~=0.7.0" }, - { name = "faker", specifier = "~=40.13.0" }, + { name = "coverage", specifier = ">=7.13.4" }, + { name = "dotenv-linter", specifier = ">=0.7.0" }, + { name = "faker", specifier = ">=20.1.0" }, { name = "hypothesis", specifier = ">=6.151.12" }, { name = "import-linter", specifier = ">=2.3" }, - { name = "lxml-stubs", specifier = "~=0.5.1" }, - { name = "mypy", specifier = "~=1.20.1" }, - { name = "pandas-stubs", specifier = "~=3.0.0" }, + { name = "lxml-stubs", specifier = ">=0.5.1" }, + { name = "mypy", specifier = ">=1.20.1" }, + { name = "pandas-stubs", specifier = ">=3.0.0" }, { name = "pyrefly", specifier = ">=0.60.0" }, - { name = "pytest", specifier = "~=9.0.3" }, - { name = "pytest-benchmark", specifier = "~=5.2.3" }, - { name = "pytest-cov", specifier = "~=7.1.0" }, - { name = "pytest-env", specifier = "~=1.6.0" }, - { name = "pytest-mock", specifier = "~=3.15.1" }, + { name = "pytest", specifier = ">=9.0.3" }, + { name = "pytest-benchmark", specifier = ">=5.2.3" }, + { name = "pytest-cov", specifier = ">=7.1.0" }, + { name = "pytest-env", specifier = ">=1.6.0" }, + { name = "pytest-mock", specifier = ">=3.15.1" }, { name = "pytest-timeout", specifier = ">=2.4.0" }, { name = "pytest-xdist", specifier = ">=3.8.0" }, - { name = "ruff", specifier = "~=0.15.10" }, + { name = "ruff", specifier = ">=0.15.10" }, { name = "scipy-stubs", specifier = ">=1.15.3.0" }, - { name = "sseclient-py", specifier = ">=1.8.0" }, - { name = "testcontainers", specifier = "~=4.14.2" }, - { name = "types-aiofiles", specifier = "~=25.1.0" }, - { name = "types-beautifulsoup4", specifier = "~=4.12.0" }, - { name = "types-cachetools", specifier = "~=6.2.0" }, + { name = "testcontainers", specifier = ">=4.14.2" }, + { name = "types-aiofiles", specifier = ">=25.1.0" }, + { name = "types-beautifulsoup4", specifier = ">=4.12.0" }, + { name = "types-cachetools", specifier = ">=6.2.0" }, { name = "types-cffi", specifier = ">=2.0.0.20260408" }, - { name = "types-colorama", specifier = "~=0.4.15" }, - { name = "types-defusedxml", specifier = "~=0.7.0" }, - { name = "types-deprecated", specifier = "~=1.3.1" }, - { name = "types-docutils", specifier = "~=0.22.3" }, - { name = "types-flask-cors", specifier = "~=6.0.0" }, - { name = "types-flask-migrate", specifier = "~=4.1.0" }, - { name = "types-gevent", specifier = "~=26.4.0" }, - { name = "types-greenlet", specifier = "~=3.4.0" }, - { name = "types-html5lib", specifier = "~=1.1.11" }, + { name = "types-colorama", specifier = ">=0.4.15" }, + { name = "types-defusedxml", specifier = 
">=0.7.0" }, + { name = "types-deprecated", specifier = ">=1.3.1" }, + { name = "types-docutils", specifier = ">=0.22.3" }, + { name = "types-flask-cors", specifier = ">=6.0.0" }, + { name = "types-flask-migrate", specifier = ">=4.1.0" }, + { name = "types-gevent", specifier = ">=26.4.0" }, + { name = "types-greenlet", specifier = ">=3.4.0" }, + { name = "types-html5lib", specifier = ">=1.1.11" }, { name = "types-jmespath", specifier = ">=1.1.0.20260408" }, - { name = "types-markdown", specifier = "~=3.10.2" }, - { name = "types-oauthlib", specifier = "~=3.3.0" }, - { name = "types-objgraph", specifier = "~=3.6.0" }, - { name = "types-olefile", specifier = "~=0.47.0" }, - { name = "types-openpyxl", specifier = "~=3.1.5" }, - { name = "types-pexpect", specifier = "~=4.9.0" }, - { name = "types-protobuf", specifier = "~=7.34.1" }, - { name = "types-psutil", specifier = "~=7.2.2" }, - { name = "types-psycopg2", specifier = "~=2.9.21" }, - { name = "types-pygments", specifier = "~=2.20.0" }, - { name = "types-pymysql", specifier = "~=1.1.0" }, + { name = "types-markdown", specifier = ">=3.10.2" }, + { name = "types-oauthlib", specifier = ">=3.3.0" }, + { name = "types-objgraph", specifier = ">=3.6.0" }, + { name = "types-olefile", specifier = ">=0.47.0" }, + { name = "types-openpyxl", specifier = ">=3.1.5" }, + { name = "types-pexpect", specifier = ">=4.9.0" }, + { name = "types-protobuf", specifier = ">=7.34.1" }, + { name = "types-psutil", specifier = ">=7.2.2" }, + { name = "types-psycopg2", specifier = ">=2.9.21" }, + { name = "types-pygments", specifier = ">=2.20.0" }, + { name = "types-pymysql", specifier = ">=1.1.0" }, { name = "types-pyopenssl", specifier = ">=24.1.0" }, - { name = "types-python-dateutil", specifier = "~=2.9.0" }, + { name = "types-python-dateutil", specifier = ">=2.9.0" }, { name = "types-python-http-client", specifier = ">=3.3.7.20260408" }, - { name = "types-pywin32", specifier = "~=311.0.0" }, - { name = "types-pyyaml", specifier = "~=6.0.12" }, + { name = "types-pywin32", specifier = ">=311.0.0" }, + { name = "types-pyyaml", specifier = ">=6.0.12" }, { name = "types-redis", specifier = ">=4.6.0.20241004" }, - { name = "types-regex", specifier = "~=2026.4.4" }, + { name = "types-regex", specifier = ">=2026.4.4" }, { name = "types-setuptools", specifier = ">=82.0.0.20260408" }, - { name = "types-shapely", specifier = "~=2.1.0" }, + { name = "types-shapely", specifier = ">=2.1.0" }, { name = "types-simplejson", specifier = ">=3.20.0.20260408" }, { name = "types-six", specifier = ">=1.17.0.20260408" }, { name = "types-tensorflow", specifier = ">=2.18.0.20260408" }, { name = "types-tqdm", specifier = ">=4.67.3.20260408" }, { name = "types-ujson", specifier = ">=5.10.0" }, - { name = "xinference-client", specifier = "~=2.4.0" }, + { name = "xinference-client", specifier = ">=2.4.0" }, ] storage = [ - { name = "azure-storage-blob", specifier = "==12.28.0" }, - { name = "bce-python-sdk", specifier = "~=0.9.69" }, - { name = "cos-python-sdk-v5", specifier = "==1.9.41" }, - { name = "esdk-obs-python", specifier = "==3.26.2" }, + { name = "azure-storage-blob", specifier = ">=12.28.0" }, + { name = "bce-python-sdk", specifier = ">=0.9.69" }, + { name = "cos-python-sdk-v5", specifier = ">=1.9.41" }, + { name = "esdk-obs-python", specifier = ">=3.22.2" }, { name = "google-cloud-storage", specifier = ">=3.10.1" }, - { name = "opendal", specifier = "~=0.46.0" }, - { name = "oss2", specifier = "==2.19.1" }, - { name = "supabase", specifier = "~=2.18.1" }, - { name = "tos", 
specifier = "~=2.9.0" }, + { name = "opendal", specifier = ">=0.46.0" }, + { name = "oss2", specifier = ">=2.19.1" }, + { name = "supabase", specifier = ">=2.18.1" }, + { name = "tos", specifier = ">=2.9.0" }, ] tools = [ - { name = "cloudscraper", specifier = "~=1.2.71" }, - { name = "nltk", specifier = "~=3.9.1" }, + { name = "cloudscraper", specifier = ">=1.2.71" }, + { name = "nltk", specifier = ">=3.9.1" }, ] vdb-alibabacloud-mysql = [{ name = "dify-vdb-alibabacloud-mysql", editable = "providers/vdb/vdb-alibabacloud-mysql" }] vdb-all = [ @@ -1832,7 +1726,7 @@ vdb-upstash = [{ name = "dify-vdb-upstash", editable = "providers/vdb/vdb-upstas vdb-vastbase = [{ name = "dify-vdb-vastbase", editable = "providers/vdb/vdb-vastbase" }] vdb-vikingdb = [{ name = "dify-vdb-vikingdb", editable = "providers/vdb/vdb-vikingdb" }] vdb-weaviate = [{ name = "dify-vdb-weaviate", editable = "providers/vdb/vdb-weaviate" }] -vdb-xinference = [{ name = "xinference-client", specifier = "~=2.4.0" }] +vdb-xinference = [{ name = "xinference-client", specifier = ">=2.4.0" }] [[package]] name = "dify-vdb-alibabacloud-mysql" @@ -2583,6 +2477,18 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/f6/df/7875e08b06a95f4577b71708ec470d029fadf873a66eb813a2861d79dfb5/gevent-26.4.0-cp312-cp312-win_amd64.whl", hash = "sha256:1c737e6ac6ce1398df0e3f41c58d982e397c993cbe73ac05b7edbe39e128c9cb", size = 1680530, upload-time = "2026-04-08T23:15:38.714Z" }, ] +[[package]] +name = "gevent-websocket" +version = "0.10.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "gevent" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/98/d2/6fa19239ff1ab072af40ebf339acd91fb97f34617c2ee625b8e34bf42393/gevent-websocket-0.10.1.tar.gz", hash = "sha256:7eaef32968290c9121f7c35b973e2cc302ffb076d018c9068d2f5ca8b2d85fb0", size = 18366, upload-time = "2017-03-12T22:46:05.68Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/7b/84/2dc373eb6493e00c884cc11e6c059ec97abae2678d42f06bf780570b0193/gevent_websocket-0.10.1-py3-none-any.whl", hash = "sha256:17b67d91282f8f4c973eba0551183fc84f56f1c90c8f6b6b30256f31f66f5242", size = 22987, upload-time = "2017-03-12T22:46:03.611Z" }, +] + [[package]] name = "gitdb" version = "4.0.12" @@ -3544,7 +3450,7 @@ wheels = [ [[package]] name = "langsmith" -version = "0.7.30" +version = "0.7.31" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "httpx" }, @@ -3557,9 +3463,9 @@ dependencies = [ { name = "xxhash" }, { name = "zstandard" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/46/e7/d27d952ce9824d684a3bb500a06541a2d55734bc4d849cdfcca2dfd4d93a/langsmith-0.7.30.tar.gz", hash = "sha256:d9df7ba5e42f818b63bda78776c8f2fc853388be3ae77b117e5d183a149321a2", size = 1106040, upload-time = "2026-04-09T21:12:01.892Z" } +sdist = { url = "https://files.pythonhosted.org/packages/e6/11/696019490992db5c87774dc20515529ef42a01e1d770fb754ed6d9b12fb0/langsmith-0.7.31.tar.gz", hash = "sha256:331ee4f7c26bb5be4022b9859b7d7b122cbf8c9d01d9f530114c1914b0349ffb", size = 1178480, upload-time = "2026-04-14T17:55:41.242Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/37/19/96250cf58070c5563446651b03bb76c2eb5afbf08e754840ab639532d8c6/langsmith-0.7.30-py3-none-any.whl", hash = "sha256:43dd9f8d290e4d406606d6cc0bd62f5d1050963f05fe0ab6ffe50acf41f2f55a", size = 372682, upload-time = "2026-04-09T21:12:00.481Z" }, + { url = 
"https://files.pythonhosted.org/packages/1d/a1/a013cf458c301cda86a213dd153ce0a01c93f1ab5833f951e6a44c9763ce/langsmith-0.7.31-py3-none-any.whl", hash = "sha256:0291d49203f6e80dda011af1afda61eb0595a4d697adb684590a8805e1d61fb6", size = 373276, upload-time = "2026-04-14T17:55:39.677Z" }, ] [[package]] @@ -3680,14 +3586,14 @@ wheels = [ [[package]] name = "mako" -version = "1.3.10" +version = "1.3.11" source = { registry = "https://pypi.org/simple" } dependencies = [ { name = "markupsafe" }, ] -sdist = { url = "https://files.pythonhosted.org/packages/9e/38/bd5b78a920a64d708fe6bc8e0a2c075e1389d53bef8413725c63ba041535/mako-1.3.10.tar.gz", hash = "sha256:99579a6f39583fa7e5630a28c3c1f440e4e97a414b80372649c0ce338da2ea28", size = 392474, upload-time = "2025-04-10T12:44:31.16Z" } +sdist = { url = "https://files.pythonhosted.org/packages/59/8a/805404d0c0b9f3d7a326475ca008db57aea9c5c9f2e1e39ed0faa335571c/mako-1.3.11.tar.gz", hash = "sha256:071eb4ab4c5010443152255d77db7faa6ce5916f35226eb02dc34479b6858069", size = 399811, upload-time = "2026-04-14T20:19:51.493Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/87/fb/99f81ac72ae23375f22b7afdb7642aba97c00a713c217124420147681a2f/mako-1.3.10-py3-none-any.whl", hash = "sha256:baef24a52fc4fc514a0887ac600f9f1cff3d82c61d4d700a1fa84d597b88db59", size = 78509, upload-time = "2025-04-10T12:50:53.297Z" }, + { url = "https://files.pythonhosted.org/packages/68/a5/19d7aaa7e433713ffe881df33705925a196afb9532efc8475d26593921a6/mako-1.3.11-py3-none-any.whl", hash = "sha256:e372c6e333cf004aa736a15f425087ec977e1fcbd2966aae7f17c8dc1da27a77", size = 78503, upload-time = "2026-04-14T20:19:53.233Z" }, ] [[package]] @@ -3995,25 +3901,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/7d/86/db87a5393f1b1fabef53ac3ba4e6b938bb27e40a04ad7cc512098fcae032/numba-0.65.0-cp312-cp312-win_amd64.whl", hash = "sha256:59bb9f2bb9f1238dfd8e927ba50645c18ae769fef4f3d58ea0ea22a2683b91f5", size = 2749979, upload-time = "2026-04-01T03:51:37.88Z" }, ] -[[package]] -name = "numexpr" -version = "2.14.1" -source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "numpy" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/cb/2f/fdba158c9dbe5caca9c3eca3eaffffb251f2fb8674bf8e2d0aed5f38d319/numexpr-2.14.1.tar.gz", hash = "sha256:4be00b1086c7b7a5c32e31558122b7b80243fe098579b170967da83f3152b48b", size = 119400, upload-time = "2025-10-13T16:17:27.351Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/9d/20/c473fc04a371f5e2f8c5749e04505c13e7a8ede27c09e9f099b2ad6f43d6/numexpr-2.14.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:91ebae0ab18c799b0e6b8c5a8d11e1fa3848eb4011271d99848b297468a39430", size = 162790, upload-time = "2025-10-13T16:16:34.903Z" }, - { url = "https://files.pythonhosted.org/packages/45/93/b6760dd1904c2a498e5f43d1bb436f59383c3ddea3815f1461dfaa259373/numexpr-2.14.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:47041f2f7b9e69498fb311af672ba914a60e6e6d804011caacb17d66f639e659", size = 152196, upload-time = "2025-10-13T16:16:36.593Z" }, - { url = "https://files.pythonhosted.org/packages/72/94/cc921e35593b820521e464cbbeaf8212bbdb07f16dc79fe283168df38195/numexpr-2.14.1-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:d686dfb2c1382d9e6e0ee0b7647f943c1886dba3adbf606c625479f35f1956c1", size = 452468, upload-time = "2025-10-13T16:13:29.531Z" }, - { url = 
"https://files.pythonhosted.org/packages/d9/43/560e9ba23c02c904b5934496486d061bcb14cd3ebba2e3cf0e2dccb6c22b/numexpr-2.14.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:eee6d4fbbbc368e6cdd0772734d6249128d957b3b8ad47a100789009f4de7083", size = 443631, upload-time = "2025-10-13T16:15:02.473Z" }, - { url = "https://files.pythonhosted.org/packages/7b/6c/78f83b6219f61c2c22d71ab6e6c2d4e5d7381334c6c29b77204e59edb039/numexpr-2.14.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:3a2839efa25f3c8d4133252ea7342d8f81226c7c4dda81f97a57e090b9d87a48", size = 1417670, upload-time = "2025-10-13T16:13:33.464Z" }, - { url = "https://files.pythonhosted.org/packages/0e/bb/1ccc9dcaf46281568ce769888bf16294c40e98a5158e4b16c241de31d0d3/numexpr-2.14.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:9f9137f1351b310436662b5dc6f4082a245efa8950c3b0d9008028df92fefb9b", size = 1466212, upload-time = "2025-10-13T16:15:12.828Z" }, - { url = "https://files.pythonhosted.org/packages/31/9f/203d82b9e39dadd91d64bca55b3c8ca432e981b822468dcef41a4418626b/numexpr-2.14.1-cp312-cp312-win32.whl", hash = "sha256:36f8d5c1bd1355df93b43d766790f9046cccfc1e32b7c6163f75bcde682cda07", size = 166996, upload-time = "2025-10-13T16:17:10.369Z" }, - { url = "https://files.pythonhosted.org/packages/1f/67/ffe750b5452eb66de788c34e7d21ec6d886abb4d7c43ad1dc88ceb3d998f/numexpr-2.14.1-cp312-cp312-win_amd64.whl", hash = "sha256:fdd886f4b7dbaf167633ee396478f0d0aa58ea2f9e7ccc3c6431019623e8d68f", size = 160187, upload-time = "2025-10-13T16:17:11.974Z" }, -] - [[package]] name = "numpy" version = "2.4.4" @@ -4628,15 +4515,6 @@ excel = [ { name = "xlrd" }, { name = "xlsxwriter" }, ] -output-formatting = [ - { name = "jinja2" }, - { name = "tabulate" }, -] -performance = [ - { name = "bottleneck" }, - { name = "numba" }, - { name = "numexpr" }, -] [[package]] name = "pandas-stubs" @@ -4704,21 +4582,21 @@ wheels = [ [[package]] name = "pillow" -version = "12.1.1" +version = "12.2.0" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/1f/42/5c74462b4fd957fcd7b13b04fb3205ff8349236ea74c7c375766d6c82288/pillow-12.1.1.tar.gz", hash = "sha256:9ad8fa5937ab05218e2b6a4cff30295ad35afd2f83ac592e68c0d871bb0fdbc4", size = 46980264, upload-time = "2026-02-11T04:23:07.146Z" } +sdist = { url = "https://files.pythonhosted.org/packages/8c/21/c2bcdd5906101a30244eaffc1b6e6ce71a31bd0742a01eb89e660ebfac2d/pillow-12.2.0.tar.gz", hash = "sha256:a830b1a40919539d07806aa58e1b114df53ddd43213d9c8b75847eee6c0182b5", size = 46987819, upload-time = "2026-04-01T14:46:17.687Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/07/d3/8df65da0d4df36b094351dce696f2989bec731d4f10e743b1c5f4da4d3bf/pillow-12.1.1-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:ab323b787d6e18b3d91a72fc99b1a2c28651e4358749842b8f8dfacd28ef2052", size = 5262803, upload-time = "2026-02-11T04:20:47.653Z" }, - { url = "https://files.pythonhosted.org/packages/d6/71/5026395b290ff404b836e636f51d7297e6c83beceaa87c592718747e670f/pillow-12.1.1-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:adebb5bee0f0af4909c30db0d890c773d1a92ffe83da908e2e9e720f8edf3984", size = 4657601, upload-time = "2026-02-11T04:20:49.328Z" }, - { url = "https://files.pythonhosted.org/packages/b1/2e/1001613d941c67442f745aff0f7cc66dd8df9a9c084eb497e6a543ee6f7e/pillow-12.1.1-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:bb66b7cc26f50977108790e2456b7921e773f23db5630261102233eb355a3b79", size = 6234995, 
upload-time = "2026-02-11T04:20:51.032Z" }, - { url = "https://files.pythonhosted.org/packages/07/26/246ab11455b2549b9233dbd44d358d033a2f780fa9007b61a913c5b2d24e/pillow-12.1.1-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:aee2810642b2898bb187ced9b349e95d2a7272930796e022efaf12e99dccd293", size = 8045012, upload-time = "2026-02-11T04:20:52.882Z" }, - { url = "https://files.pythonhosted.org/packages/b2/8b/07587069c27be7535ac1fe33874e32de118fbd34e2a73b7f83436a88368c/pillow-12.1.1-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:a0b1cd6232e2b618adcc54d9882e4e662a089d5768cd188f7c245b4c8c44a397", size = 6349638, upload-time = "2026-02-11T04:20:54.444Z" }, - { url = "https://files.pythonhosted.org/packages/ff/79/6df7b2ee763d619cda2fb4fea498e5f79d984dae304d45a8999b80d6cf5c/pillow-12.1.1-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:7aac39bcf8d4770d089588a2e1dd111cbaa42df5a94be3114222057d68336bd0", size = 7041540, upload-time = "2026-02-11T04:20:55.97Z" }, - { url = "https://files.pythonhosted.org/packages/2c/5e/2ba19e7e7236d7529f4d873bdaf317a318896bac289abebd4bb00ef247f0/pillow-12.1.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:ab174cd7d29a62dd139c44bf74b698039328f45cb03b4596c43473a46656b2f3", size = 6462613, upload-time = "2026-02-11T04:20:57.542Z" }, - { url = "https://files.pythonhosted.org/packages/03/03/31216ec124bb5c3dacd74ce8efff4cc7f52643653bad4825f8f08c697743/pillow-12.1.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:339ffdcb7cbeaa08221cd401d517d4b1fe7a9ed5d400e4a8039719238620ca35", size = 7166745, upload-time = "2026-02-11T04:20:59.196Z" }, - { url = "https://files.pythonhosted.org/packages/1f/e7/7c4552d80052337eb28653b617eafdef39adfb137c49dd7e831b8dc13bc5/pillow-12.1.1-cp312-cp312-win32.whl", hash = "sha256:5d1f9575a12bed9e9eedd9a4972834b08c97a352bd17955ccdebfeca5913fa0a", size = 6328823, upload-time = "2026-02-11T04:21:01.385Z" }, - { url = "https://files.pythonhosted.org/packages/3d/17/688626d192d7261bbbf98846fc98995726bddc2c945344b65bec3a29d731/pillow-12.1.1-cp312-cp312-win_amd64.whl", hash = "sha256:21329ec8c96c6e979cd0dfd29406c40c1d52521a90544463057d2aaa937d66a6", size = 7033367, upload-time = "2026-02-11T04:21:03.536Z" }, - { url = "https://files.pythonhosted.org/packages/ed/fe/a0ef1f73f939b0eca03ee2c108d0043a87468664770612602c63266a43c4/pillow-12.1.1-cp312-cp312-win_arm64.whl", hash = "sha256:af9a332e572978f0218686636610555ae3defd1633597be015ed50289a03c523", size = 2453811, upload-time = "2026-02-11T04:21:05.116Z" }, + { url = "https://files.pythonhosted.org/packages/58/be/7482c8a5ebebbc6470b3eb791812fff7d5e0216c2be3827b30b8bb6603ed/pillow-12.2.0-cp312-cp312-macosx_10_13_x86_64.whl", hash = "sha256:2d192a155bbcec180f8564f693e6fd9bccff5a7af9b32e2e4bf8c9c69dbad6b5", size = 5308279, upload-time = "2026-04-01T14:43:13.246Z" }, + { url = "https://files.pythonhosted.org/packages/d8/95/0a351b9289c2b5cbde0bacd4a83ebc44023e835490a727b2a3bd60ddc0f4/pillow-12.2.0-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:f3f40b3c5a968281fd507d519e444c35f0ff171237f4fdde090dd60699458421", size = 4695490, upload-time = "2026-04-01T14:43:15.584Z" }, + { url = "https://files.pythonhosted.org/packages/de/af/4e8e6869cbed569d43c416fad3dc4ecb944cb5d9492defaed89ddd6fe871/pillow-12.2.0-cp312-cp312-manylinux2014_aarch64.manylinux_2_17_aarch64.whl", hash = "sha256:03e7e372d5240cc23e9f07deca4d775c0817bffc641b01e9c3af208dbd300987", size = 6284462, upload-time = "2026-04-01T14:43:18.268Z" }, + { url = 
"https://files.pythonhosted.org/packages/e9/9e/c05e19657fd57841e476be1ab46c4d501bffbadbafdc31a6d665f8b737b6/pillow-12.2.0-cp312-cp312-manylinux2014_x86_64.manylinux_2_17_x86_64.whl", hash = "sha256:b86024e52a1b269467a802258c25521e6d742349d760728092e1bc2d135b4d76", size = 8094744, upload-time = "2026-04-01T14:43:20.716Z" }, + { url = "https://files.pythonhosted.org/packages/2b/54/1789c455ed10176066b6e7e6da1b01e50e36f94ba584dc68d9eebfe9156d/pillow-12.2.0-cp312-cp312-manylinux_2_27_aarch64.manylinux_2_28_aarch64.whl", hash = "sha256:7371b48c4fa448d20d2714c9a1f775a81155050d383333e0a6c15b1123dda005", size = 6398371, upload-time = "2026-04-01T14:43:23.443Z" }, + { url = "https://files.pythonhosted.org/packages/43/e3/fdc657359e919462369869f1c9f0e973f353f9a9ee295a39b1fea8ee1a77/pillow-12.2.0-cp312-cp312-manylinux_2_27_x86_64.manylinux_2_28_x86_64.whl", hash = "sha256:62f5409336adb0663b7caa0da5c7d9e7bdbaae9ce761d34669420c2a801b2780", size = 7087215, upload-time = "2026-04-01T14:43:26.758Z" }, + { url = "https://files.pythonhosted.org/packages/8b/f8/2f6825e441d5b1959d2ca5adec984210f1ec086435b0ed5f52c19b3b8a6e/pillow-12.2.0-cp312-cp312-musllinux_1_2_aarch64.whl", hash = "sha256:01afa7cf67f74f09523699b4e88c73fb55c13346d212a59a2db1f86b0a63e8c5", size = 6509783, upload-time = "2026-04-01T14:43:29.56Z" }, + { url = "https://files.pythonhosted.org/packages/67/f9/029a27095ad20f854f9dba026b3ea6428548316e057e6fc3545409e86651/pillow-12.2.0-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:fc3d34d4a8fbec3e88a79b92e5465e0f9b842b628675850d860b8bd300b159f5", size = 7212112, upload-time = "2026-04-01T14:43:32.091Z" }, + { url = "https://files.pythonhosted.org/packages/be/42/025cfe05d1be22dbfdb4f264fe9de1ccda83f66e4fc3aac94748e784af04/pillow-12.2.0-cp312-cp312-win32.whl", hash = "sha256:58f62cc0f00fd29e64b29f4fd923ffdb3859c9f9e6105bfc37ba1d08994e8940", size = 6378489, upload-time = "2026-04-01T14:43:34.601Z" }, + { url = "https://files.pythonhosted.org/packages/5d/7b/25a221d2c761c6a8ae21bfa3874988ff2583e19cf8a27bf2fee358df7942/pillow-12.2.0-cp312-cp312-win_amd64.whl", hash = "sha256:7f84204dee22a783350679a0333981df803dac21a0190d706a50475e361c93f5", size = 7084129, upload-time = "2026-04-01T14:43:37.213Z" }, + { url = "https://files.pythonhosted.org/packages/10/e1/542a474affab20fd4a0f1836cb234e8493519da6b76899e30bcc5d990b8b/pillow-12.2.0-cp312-cp312-win_arm64.whl", hash = "sha256:af73337013e0b3b46f175e79492d96845b16126ddf79c438d7ea7ff27783a414", size = 2463612, upload-time = "2026-04-01T14:43:39.421Z" }, ] [[package]] @@ -4986,20 +4864,17 @@ wheels = [ [[package]] name = "pyarrow" -version = "14.0.2" +version = "23.0.1" source = { registry = "https://pypi.org/simple" } -dependencies = [ - { name = "numpy" }, -] -sdist = { url = "https://files.pythonhosted.org/packages/d7/8b/d18b7eb6fb22e5ed6ffcbc073c85dae635778dbd1270a6cf5d750b031e84/pyarrow-14.0.2.tar.gz", hash = "sha256:36cef6ba12b499d864d1def3e990f97949e0b79400d08b7cf74504ffbd3eb025", size = 1063645, upload-time = "2023-12-18T15:43:41.625Z" } +sdist = { url = "https://files.pythonhosted.org/packages/88/22/134986a4cc224d593c1afde5494d18ff629393d74cc2eddb176669f234a4/pyarrow-23.0.1.tar.gz", hash = "sha256:b8c5873e33440b2bc2f4a79d2b47017a89c5a24116c055625e6f2ee50523f019", size = 1167336, upload-time = "2026-02-16T10:14:12.39Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/69/5b/d8ab6c20c43b598228710e4e4a6cba03a01f6faa3d08afff9ce76fd0fd47/pyarrow-14.0.2-cp312-cp312-macosx_10_14_x86_64.whl", hash = 
"sha256:c87824a5ac52be210d32906c715f4ed7053d0180c1060ae3ff9b7e560f53f944", size = 26819585, upload-time = "2023-12-18T15:41:27.59Z" }, - { url = "https://files.pythonhosted.org/packages/2d/29/bed2643d0dd5e9570405244a61f6db66c7f4704a6e9ce313f84fa5a3675a/pyarrow-14.0.2-cp312-cp312-macosx_11_0_arm64.whl", hash = "sha256:a25eb2421a58e861f6ca91f43339d215476f4fe159eca603c55950c14f378cc5", size = 23965222, upload-time = "2023-12-18T15:41:32.449Z" }, - { url = "https://files.pythonhosted.org/packages/2a/34/da464632e59a8cdd083370d69e6c14eae30221acb284f671c6bc9273fadd/pyarrow-14.0.2-cp312-cp312-manylinux_2_17_aarch64.manylinux2014_aarch64.whl", hash = "sha256:5c1da70d668af5620b8ba0a23f229030a4cd6c5f24a616a146f30d2386fec422", size = 35942036, upload-time = "2023-12-18T15:41:38.767Z" }, - { url = "https://files.pythonhosted.org/packages/a8/ff/cbed4836d543b29f00d2355af67575c934999ff1d43e3f438ab0b1b394f1/pyarrow-14.0.2-cp312-cp312-manylinux_2_17_x86_64.manylinux2014_x86_64.whl", hash = "sha256:2cc61593c8e66194c7cdfae594503e91b926a228fba40b5cf25cc593563bcd07", size = 38089266, upload-time = "2023-12-18T15:41:47.617Z" }, - { url = "https://files.pythonhosted.org/packages/38/41/345011cb831d3dbb2dab762fc244c745a5df94b199223a99af52a5f7dff6/pyarrow-14.0.2-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:78ea56f62fb7c0ae8ecb9afdd7893e3a7dbeb0b04106f5c08dbb23f9c0157591", size = 35404468, upload-time = "2023-12-18T15:41:54.49Z" }, - { url = "https://files.pythonhosted.org/packages/fd/af/2fc23ca2068ff02068d8dabf0fb85b6185df40ec825973470e613dbd8790/pyarrow-14.0.2-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:37c233ddbce0c67a76c0985612fef27c0c92aef9413cf5aa56952f359fcb7379", size = 38003134, upload-time = "2023-12-18T15:42:01.593Z" }, - { url = "https://files.pythonhosted.org/packages/95/1f/9d912f66a87e3864f694e000977a6a70a644ea560289eac1d733983f215d/pyarrow-14.0.2-cp312-cp312-win_amd64.whl", hash = "sha256:e4b123ad0f6add92de898214d404e488167b87b5dd86e9a434126bc2b7a5578d", size = 25043754, upload-time = "2023-12-18T15:42:07.108Z" }, + { url = "https://files.pythonhosted.org/packages/9a/4b/4166bb5abbfe6f750fc60ad337c43ecf61340fa52ab386da6e8dbf9e63c4/pyarrow-23.0.1-cp312-cp312-macosx_12_0_arm64.whl", hash = "sha256:f4b0dbfa124c0bb161f8b5ebb40f1a680b70279aa0c9901d44a2b5a20806039f", size = 34214575, upload-time = "2026-02-16T10:09:56.225Z" }, + { url = "https://files.pythonhosted.org/packages/e1/da/3f941e3734ac8088ea588b53e860baeddac8323ea40ce22e3d0baa865cc9/pyarrow-23.0.1-cp312-cp312-macosx_12_0_x86_64.whl", hash = "sha256:7707d2b6673f7de054e2e83d59f9e805939038eebe1763fe811ee8fa5c0cd1a7", size = 35832540, upload-time = "2026-02-16T10:10:03.428Z" }, + { url = "https://files.pythonhosted.org/packages/88/7c/3d841c366620e906d54430817531b877ba646310296df42ef697308c2705/pyarrow-23.0.1-cp312-cp312-manylinux_2_28_aarch64.whl", hash = "sha256:86ff03fb9f1a320266e0de855dee4b17da6794c595d207f89bba40d16b5c78b9", size = 44470940, upload-time = "2026-02-16T10:10:10.704Z" }, + { url = "https://files.pythonhosted.org/packages/2c/a5/da83046273d990f256cb79796a190bbf7ec999269705ddc609403f8c6b06/pyarrow-23.0.1-cp312-cp312-manylinux_2_28_x86_64.whl", hash = "sha256:813d99f31275919c383aab17f0f455a04f5a429c261cc411b1e9a8f5e4aaaa05", size = 47586063, upload-time = "2026-02-16T10:10:17.95Z" }, + { url = "https://files.pythonhosted.org/packages/5b/3c/b7d2ebcff47a514f47f9da1e74b7949138c58cfeb108cdd4ee62f43f0cf3/pyarrow-23.0.1-cp312-cp312-musllinux_1_2_aarch64.whl", hash = 
"sha256:bf5842f960cddd2ef757d486041d57c96483efc295a8c4a0e20e704cbbf39c67", size = 48173045, upload-time = "2026-02-16T10:10:25.363Z" }, + { url = "https://files.pythonhosted.org/packages/43/b2/b40961262213beaba6acfc88698eb773dfce32ecdf34d19291db94c2bd73/pyarrow-23.0.1-cp312-cp312-musllinux_1_2_x86_64.whl", hash = "sha256:564baf97c858ecc03ec01a41062e8f4698abc3e6e2acd79c01c2e97880a19730", size = 50621741, upload-time = "2026-02-16T10:10:33.477Z" }, + { url = "https://files.pythonhosted.org/packages/f6/70/1fdda42d65b28b078e93d75d371b2185a61da89dda4def8ba6ba41ebdeb4/pyarrow-23.0.1-cp312-cp312-win_amd64.whl", hash = "sha256:07deae7783782ac7250989a7b2ecde9b3c343a643f82e8a4df03d93b633006f0", size = 27620678, upload-time = "2026-02-16T10:10:39.31Z" }, ] [[package]] @@ -5238,11 +5113,11 @@ wheels = [ [[package]] name = "pypdf" -version = "6.10.0" +version = "6.10.2" source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/b8/9f/ca96abf18683ca12602065e4ed2bec9050b672c87d317f1079abc7b6d993/pypdf-6.10.0.tar.gz", hash = "sha256:4c5a48ba258c37024ec2505f7e8fd858525f5502784a2e1c8d415604af29f6ef", size = 5314833, upload-time = "2026-04-10T09:34:57.102Z" } +sdist = { url = "https://files.pythonhosted.org/packages/7b/3f/9f2167401c2e94833ca3b69535bad89e533b5de75fefe4197a2c224baec2/pypdf-6.10.2.tar.gz", hash = "sha256:7d09ce108eff6bf67465d461b6ef352dcb8d84f7a91befc02f904455c6eea11d", size = 5315679, upload-time = "2026-04-15T16:37:36.978Z" } wheels = [ - { url = "https://files.pythonhosted.org/packages/55/f2/7ebe366f633f30a6ad105f650f44f24f98cb1335c4157d21ae47138b3482/pypdf-6.10.0-py3-none-any.whl", hash = "sha256:90005e959e1596c6e6c84c8b0ad383285b3e17011751cedd17f2ce8fcdfc86de", size = 334459, upload-time = "2026-04-10T09:34:54.966Z" }, + { url = "https://files.pythonhosted.org/packages/0c/d6/1d5c60cc17bbdf37c1552d9c03862fc6d32c5836732a0415b2d637edc2d0/pypdf-6.10.2-py3-none-any.whl", hash = "sha256:aa53be9826655b51c96741e5d7983ca224d898ac0a77896e64636810517624aa", size = 336308, upload-time = "2026-04-15T16:37:34.851Z" }, ] [[package]] @@ -5463,6 +5338,18 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/0b/d7/1959b9648791274998a9c3526f6d0ec8fd2233e4d4acce81bbae76b44b2a/python_dotenv-1.2.2-py3-none-any.whl", hash = "sha256:1d8214789a24de455a8b8bd8ae6fe3c6b69a5e3d64aa8a8e5d68e694bbcb285a", size = 22101, upload-time = "2026-03-01T16:00:25.09Z" }, ] +[[package]] +name = "python-engineio" +version = "4.13.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "simple-websocket" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/34/12/bdef9dbeedbe2cdeba2a2056ad27b1fb081557d34b69a97f574843462cae/python_engineio-4.13.1.tar.gz", hash = "sha256:0a853fcef52f5b345425d8c2b921ac85023a04dfcf75d7b74696c61e940fd066", size = 92348, upload-time = "2026-02-06T23:38:06.12Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/aa/54/0cce26da03a981f949bb8449c9778537f75f5917c172e1d2992ff25cb57d/python_engineio-4.13.1-py3-none-any.whl", hash = "sha256:f32ad10589859c11053ad7d9bb3c9695cdf862113bfb0d20bc4d890198287399", size = 59847, upload-time = "2026-02-06T23:38:04.861Z" }, +] + [[package]] name = "python-http-client" version = "3.3.7" @@ -5519,6 +5406,19 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/d9/4f/00be2196329ebbff56ce564aa94efb0fbc828d00de250b1980de1a34ab49/python_pptx-1.0.2-py3-none-any.whl", hash = "sha256:160838e0b8565a8b1f67947675886e9fea18aa5e795db7ae531606d68e785cba", size = 472788, 
upload-time = "2024-08-07T17:33:28.192Z" }, ] +[[package]] +name = "python-socketio" +version = "5.16.1" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "bidict" }, + { name = "python-engineio" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/59/81/cf8284f45e32efa18d3848ed82cdd4dcc1b657b082458fbe01ad3e1f2f8d/python_socketio-5.16.1.tar.gz", hash = "sha256:f863f98eacce81ceea2e742f6388e10ca3cdd0764be21d30d5196470edf5ea89", size = 128508, upload-time = "2026-02-06T23:42:07Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/07/c7/deb8c5e604404dbf10a3808a858946ca3547692ff6316b698945bb72177e/python_socketio-5.16.1-py3-none-any.whl", hash = "sha256:a3eb1702e92aa2f2b5d3ba00261b61f062cce51f1cfb6900bf3ab4d1934d2d35", size = 82054, upload-time = "2026-02-06T23:42:05.772Z" }, +] + [[package]] name = "pytz" version = "2025.2" @@ -5881,13 +5781,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/c9/64/982e07b93219cb52e1cca5d272cb579e2f3eb001956c9e7a9a6d106c9473/sentry_sdk-2.57.0-py2.py3-none-any.whl", hash = "sha256:812c8bf5ff3d2f0e89c82f5ce80ab3a6423e102729c4706af7413fd1eb480585", size = 456489, upload-time = "2026-03-31T09:39:27.524Z" }, ] -[package.optional-dependencies] -flask = [ - { name = "blinker" }, - { name = "flask" }, - { name = "markupsafe" }, -] - [[package]] name = "setuptools" version = "80.9.0" @@ -5906,6 +5799,18 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/e0/f9/0595336914c5619e5f28a1fb793285925a8cd4b432c9da0a987836c7f822/shellingham-1.5.4-py2.py3-none-any.whl", hash = "sha256:7ecfff8f2fd72616f7481040475a65b2bf8af90a56c89140852d1120324e8686", size = 9755, upload-time = "2023-10-24T04:13:38.866Z" }, ] +[[package]] +name = "simple-websocket" +version = "1.1.0" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "wsproto" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/b0/d4/bfa032f961103eba93de583b161f0e6a5b63cebb8f2c7d0c6e6efe1e3d2e/simple_websocket-1.1.0.tar.gz", hash = "sha256:7939234e7aa067c534abdab3a9ed933ec9ce4691b0713c78acb195560aa52ae4", size = 17300, upload-time = "2024-10-10T22:39:31.412Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/52/59/0782e51887ac6b07ffd1570e0364cf901ebc36345fea669969d2084baebb/simple_websocket-1.1.0-py3-none-any.whl", hash = "sha256:4af6069630a38ed6c561010f0e11a5bc0d4ca569b36306eb257cd9a192497c8c", size = 13842, upload-time = "2024-10-10T22:39:29.645Z" }, +] + [[package]] name = "six" version = "1.17.0" @@ -6202,15 +6107,6 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/45/3f/48af1e72e59d60481724b326317bd311615bdedc31f8f81f9508fb84cda6/tablestore-6.4.4-py3-none-any.whl", hash = "sha256:984f086fa7acabaa3558da93205ad6df562b266b85fd249bc5891f2dd1d65814", size = 5118758, upload-time = "2026-04-09T09:40:17.209Z" }, ] -[[package]] -name = "tabulate" -version = "0.9.0" -source = { registry = "https://pypi.org/simple" } -sdist = { url = "https://files.pythonhosted.org/packages/ec/fe/802052aecb21e3797b8f7902564ab6ea0d60ff8ca23952079064155d1ae1/tabulate-0.9.0.tar.gz", hash = "sha256:0095b12bf5966de529c0feb1fa08671671b3368eec77d7ef7ab114be2c068b3c", size = 81090, upload-time = "2022-10-06T17:21:48.54Z" } -wheels = [ - { url = "https://files.pythonhosted.org/packages/40/44/4a5f08c96eb108af5cb50b41f76142f0afa346dfa99d5296fe7202a11854/tabulate-0.9.0-py3-none-any.whl", hash = "sha256:024ca478df22e9340661486f85298cff5f6dcdba14f3813e8830015b9ed1948f", size = 35252, upload-time = 
"2022-10-06T17:21:44.262Z" }, -] - [[package]] name = "tcvdb-text" version = "1.1.2" @@ -7351,6 +7247,18 @@ wheels = [ { url = "https://files.pythonhosted.org/packages/ff/21/abdedb4cdf6ff41ebf01a74087740a709e2edb146490e4d9beea054b0b7a/wrapt-1.16.0-py3-none-any.whl", hash = "sha256:6906c4100a8fcbf2fa735f6059214bb13b97f75b1a61777fcf6432121ef12ef1", size = 23362, upload-time = "2023-11-09T06:33:28.271Z" }, ] +[[package]] +name = "wsproto" +version = "1.3.2" +source = { registry = "https://pypi.org/simple" } +dependencies = [ + { name = "h11" }, +] +sdist = { url = "https://files.pythonhosted.org/packages/c7/79/12135bdf8b9c9367b8701c2c19a14c913c120b882d50b014ca0d38083c2c/wsproto-1.3.2.tar.gz", hash = "sha256:b86885dcf294e15204919950f666e06ffc6c7c114ca900b060d6e16293528294", size = 50116, upload-time = "2025-11-20T18:18:01.871Z" } +wheels = [ + { url = "https://files.pythonhosted.org/packages/a4/f5/10b68b7b1544245097b2a1b8238f66f2fc6dcaeb24ba5d917f52bd2eed4f/wsproto-1.3.2-py3-none-any.whl", hash = "sha256:61eea322cdf56e8cc904bd3ad7573359a242ba65688716b0710a5eb12beab584", size = 24405, upload-time = "2025-11-20T18:18:00.454Z" }, +] + [[package]] name = "xinference-client" version = "2.4.0" diff --git a/docker/.env.example b/docker/.env.example index 4426a882f1..8176155698 100644 --- a/docker/.env.example +++ b/docker/.env.example @@ -132,6 +132,10 @@ MIGRATION_ENABLED=true # The default value is 300 seconds. FILES_ACCESS_TIMEOUT=300 +# Collaboration mode toggle +# To open collaboration features, you also need to set SERVER_WORKER_CLASS=geventwebsocket.gunicorn.workers.GeventWebSocketWorker +ENABLE_COLLABORATION_MODE=false + # Access token expiration time in minutes ACCESS_TOKEN_EXPIRE_MINUTES=60 @@ -167,6 +171,7 @@ SERVER_WORKER_AMOUNT=1 # Modifying it may also decrease throughput. # # It is strongly discouraged to change this parameter. +# If enable collaboration mode, it must be set to geventwebsocket.gunicorn.workers.GeventWebSocketWorker SERVER_WORKER_CLASS=gevent # Default number of worker connections, the default is 10. @@ -351,6 +356,9 @@ REDIS_SSL_CERTFILE= REDIS_SSL_KEYFILE= # Path to client private key file for SSL authentication REDIS_DB=0 +# Optional global prefix for Redis keys, topics, streams, and Celery Redis transport artifacts. +# Leave empty to preserve current unprefixed behavior. +REDIS_KEY_PREFIX= # Optional: limit total Redis connections used by API/Worker (unset for default) # Align with API's REDIS_MAX_CONNECTIONS in configs REDIS_MAX_CONNECTIONS= @@ -425,6 +433,8 @@ CONSOLE_CORS_ALLOW_ORIGINS=* COOKIE_DOMAIN= # When the frontend and backend run on different subdomains, set NEXT_PUBLIC_COOKIE_DOMAIN=1. NEXT_PUBLIC_COOKIE_DOMAIN= +# WebSocket server URL. +NEXT_PUBLIC_SOCKET_URL=ws://localhost NEXT_PUBLIC_BATCH_CONCURRENCY=5 # ------------------------------ diff --git a/docker/README.md b/docker/README.md index 4c40317f37..3130fa9886 100644 --- a/docker/README.md +++ b/docker/README.md @@ -88,6 +88,7 @@ The `.env.example` file provided in the Docker setup is extensive and covers a w 1. **Redis Configuration**: - `REDIS_HOST`, `REDIS_PORT`, `REDIS_PASSWORD`: Redis server connection settings. + - `REDIS_KEY_PREFIX`: Optional global namespace prefix for Redis keys, topics, streams, and Celery Redis transport artifacts. 1. 
**Celery Configuration**: diff --git a/docker/docker-compose-template.yaml b/docker/docker-compose-template.yaml index 4f4b3851f6..888f96332c 100644 --- a/docker/docker-compose-template.yaml +++ b/docker/docker-compose-template.yaml @@ -159,6 +159,7 @@ services: APP_API_URL: ${APP_API_URL:-} AMPLITUDE_API_KEY: ${AMPLITUDE_API_KEY:-} NEXT_PUBLIC_COOKIE_DOMAIN: ${NEXT_PUBLIC_COOKIE_DOMAIN:-} + NEXT_PUBLIC_SOCKET_URL: ${NEXT_PUBLIC_SOCKET_URL:-ws://localhost} SENTRY_DSN: ${WEB_SENTRY_DSN:-} NEXT_TELEMETRY_DISABLED: ${NEXT_TELEMETRY_DISABLED:-0} EXPERIMENTAL_ENABLE_VINEXT: ${EXPERIMENTAL_ENABLE_VINEXT:-false} diff --git a/docker/docker-compose.yaml b/docker/docker-compose.yaml index 1fc1cfdf9e..a10fdf77c6 100644 --- a/docker/docker-compose.yaml +++ b/docker/docker-compose.yaml @@ -34,6 +34,7 @@ x-shared-env: &shared-api-worker-env OPENAI_API_BASE: ${OPENAI_API_BASE:-https://api.openai.com/v1} MIGRATION_ENABLED: ${MIGRATION_ENABLED:-true} FILES_ACCESS_TIMEOUT: ${FILES_ACCESS_TIMEOUT:-300} + ENABLE_COLLABORATION_MODE: ${ENABLE_COLLABORATION_MODE:-false} ACCESS_TOKEN_EXPIRE_MINUTES: ${ACCESS_TOKEN_EXPIRE_MINUTES:-60} REFRESH_TOKEN_EXPIRE_DAYS: ${REFRESH_TOKEN_EXPIRE_DAYS:-30} APP_DEFAULT_ACTIVE_REQUESTS: ${APP_DEFAULT_ACTIVE_REQUESTS:-0} @@ -90,6 +91,7 @@ x-shared-env: &shared-api-worker-env REDIS_SSL_CERTFILE: ${REDIS_SSL_CERTFILE:-} REDIS_SSL_KEYFILE: ${REDIS_SSL_KEYFILE:-} REDIS_DB: ${REDIS_DB:-0} + REDIS_KEY_PREFIX: ${REDIS_KEY_PREFIX:-} REDIS_MAX_CONNECTIONS: ${REDIS_MAX_CONNECTIONS:-} REDIS_USE_SENTINEL: ${REDIS_USE_SENTINEL:-false} REDIS_SENTINELS: ${REDIS_SENTINELS:-} @@ -118,6 +120,7 @@ x-shared-env: &shared-api-worker-env CONSOLE_CORS_ALLOW_ORIGINS: ${CONSOLE_CORS_ALLOW_ORIGINS:-*} COOKIE_DOMAIN: ${COOKIE_DOMAIN:-} NEXT_PUBLIC_COOKIE_DOMAIN: ${NEXT_PUBLIC_COOKIE_DOMAIN:-} + NEXT_PUBLIC_SOCKET_URL: ${NEXT_PUBLIC_SOCKET_URL:-ws://localhost} NEXT_PUBLIC_BATCH_CONCURRENCY: ${NEXT_PUBLIC_BATCH_CONCURRENCY:-5} STORAGE_TYPE: ${STORAGE_TYPE:-opendal} OPENDAL_SCHEME: ${OPENDAL_SCHEME:-fs} @@ -877,6 +880,7 @@ services: APP_API_URL: ${APP_API_URL:-} AMPLITUDE_API_KEY: ${AMPLITUDE_API_KEY:-} NEXT_PUBLIC_COOKIE_DOMAIN: ${NEXT_PUBLIC_COOKIE_DOMAIN:-} + NEXT_PUBLIC_SOCKET_URL: ${NEXT_PUBLIC_SOCKET_URL:-ws://localhost} SENTRY_DSN: ${WEB_SENTRY_DSN:-} NEXT_TELEMETRY_DISABLED: ${NEXT_TELEMETRY_DISABLED:-0} EXPERIMENTAL_ENABLE_VINEXT: ${EXPERIMENTAL_ENABLE_VINEXT:-false} diff --git a/docker/nginx/conf.d/default.conf.template b/docker/nginx/conf.d/default.conf.template index 1d63c1b97d..94a748290f 100644 --- a/docker/nginx/conf.d/default.conf.template +++ b/docker/nginx/conf.d/default.conf.template @@ -14,6 +14,14 @@ server { include proxy.conf; } + location /socket.io/ { + proxy_pass http://api:5001; + include proxy.conf; + proxy_set_header Upgrade $http_upgrade; + proxy_set_header Connection "upgrade"; + proxy_cache_bypass $http_upgrade; + } + location /v1 { proxy_pass http://api:5001; include proxy.conf; diff --git a/e2e/AGENTS.md b/e2e/AGENTS.md index ae642768f5..e56aab20a7 100644 --- a/e2e/AGENTS.md +++ b/e2e/AGENTS.md @@ -165,3 +165,132 @@ Open the HTML report locally with: ```bash open cucumber-report/report.html ``` + +## Writing new scenarios + +### Workflow + +1. Create a `.feature` file under `features//` +1. Add step definitions under `features/step-definitions//` +1. Reuse existing steps from `common/` and other definition files before writing new ones +1. Run with `pnpm -C e2e e2e -- --tags @your-tag` to verify +1. 
Run `pnpm -C e2e check` before committing + +### Feature file conventions + +Tag every feature or scenario with a capability tag. Add auth tags only when they clarify intent or change the browser session behavior: + +```gherkin +@datasets @authenticated +Feature: Create dataset + Scenario: Create a new empty dataset + Given I am signed in as the default E2E admin + When I open the datasets page + ... +``` + +- Capability tags (`@apps`, `@auth`, `@datasets`, …) group related scenarios for selective runs +- Auth/session tags: + - default behavior — scenarios run with the shared authenticated storageState unless marked otherwise + - `@unauthenticated` — uses a clean BrowserContext with no cookies or storage + - `@authenticated` — optional intent tag for readability or selective runs; it does not currently change hook behavior on its own +- `@fresh` — only runs in `e2e:full` mode (requires uninitialized instance) +- `@skip` — excluded from all runs + +Keep scenarios short and declarative. Each step should describe **what** the user does, not **how** the UI works. + +### Step definition conventions + +```typescript +import type { DifyWorld } from '../../support/world' +import { Then, When } from '@cucumber/cucumber' +import { expect } from '@playwright/test' + +When('I open the datasets page', async function (this: DifyWorld) { + await this.getPage().goto('/datasets') +}) +``` + +Rules: + +- Always type `this` as `DifyWorld` for proper context access +- Use `async function` (not arrow functions — Cucumber binds `this`) +- One step = one user-visible action or one assertion +- Keep steps stateless across scenarios; use `DifyWorld` properties for in-scenario state + +### Locator priority + +Follow the Playwright recommended locator strategy, in order of preference: + +| Priority | Locator | Example | When to use | +| -------- | ------------------ | ----------------------------------------- | ----------------------------------------- | +| 1 | `getByRole` | `getByRole('button', { name: 'Create' })` | Default choice — accessible and resilient | +| 2 | `getByLabel` | `getByLabel('App name')` | Form inputs with visible labels | +| 3 | `getByPlaceholder` | `getByPlaceholder('Enter name')` | Inputs without visible labels | +| 4 | `getByText` | `getByText('Welcome')` | Static text content | +| 5 | `getByTestId` | `getByTestId('workflow-canvas')` | Only when no semantic locator works | + +Avoid raw CSS/XPath selectors. They break when the DOM structure changes. + +### Assertions + +Use `@playwright/test` `expect` — it auto-waits and retries until the condition is met or the timeout expires: + +```typescript +// URL assertion +await expect(page).toHaveURL(/\/datasets\/[a-f0-9-]+\/documents/) + +// Element visibility +await expect(page.getByRole('button', { name: 'Save' })).toBeVisible() + +// Element state +await expect(page.getByRole('button', { name: 'Submit' })).toBeEnabled() + +// Negation +await expect(page.getByText('Loading')).not.toBeVisible() +``` + +Do not use manual `waitForTimeout` or polling loops. If you need a longer wait for a specific assertion, pass `{ timeout: 30_000 }` to the assertion. 
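As a minimal sketch of that pattern, a step that needs a longer wait can pass the timeout to the assertion itself; the step wording and the "Indexing completed" text below are hypothetical examples, not existing steps in the suite:

```typescript
import type { DifyWorld } from '../../support/world'
import { Then } from '@cucumber/cucumber'
import { expect } from '@playwright/test'

// Hypothetical step: the wording and the "Indexing completed" text are illustrative only.
Then('I should eventually see the indexing complete message', async function (this: DifyWorld) {
  const page = this.getPage()
  // Web-first assertion with a per-assertion timeout instead of waitForTimeout or manual polling.
  await expect(page.getByText('Indexing completed')).toBeVisible({ timeout: 30_000 })
})
```

The assertion still retries and resolves as soon as the element appears, so the larger timeout only matters in genuinely slow cases.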
+ +### Cucumber expressions + +Use Cucumber expression parameter types to extract values from Gherkin steps: + +| Type | Pattern | Example step | +| ---------- | ------------- | ---------------------------------- | +| `{string}` | Quoted string | `I select the "Workflow" app type` | +| `{int}` | Integer | `I should see {int} items` | +| `{float}` | Decimal | `the progress is {float} percent` | +| `{word}` | Single word | `I click the {word} tab` | + +Prefer `{string}` for UI labels, names, and text content — it maps naturally to Gherkin's quoted values. + +### Scoping locators + +When the page has multiple similar elements, scope locators to a container: + +```typescript +When('I fill in the app name in the dialog', async function (this: DifyWorld) { + const dialog = this.getPage().getByRole('dialog') + await dialog.getByPlaceholder('Give your app a name').fill('My App') +}) +``` + +### Failure diagnostics + +The `After` hook automatically captures on failure: + +- Full-page screenshot (PNG) +- Page HTML dump +- Console errors and page errors + +Artifacts are saved to `cucumber-report/artifacts/` and attached to the HTML report. No extra code needed in step definitions. + +## Reusing existing steps + +Before writing a new step definition, inspect the existing step definition files first. Reuse a matching step when the wording and behavior already fit, and only add a new step when the scenario needs a genuinely new user action or assertion. Steps in `common/` are designed for broad reuse across all features. + +Or browse the step definition files directly: + +- `features/step-definitions/common/` — auth guards and navigation assertions shared by all features +- `features/step-definitions//` — domain-specific steps scoped to a single feature area diff --git a/e2e/README.md b/e2e/README.md index 9b4046eaff..feca0cb419 100644 --- a/e2e/README.md +++ b/e2e/README.md @@ -1,3 +1,5 @@ # E2E -Canonical documentation for this package lives in [AGENTS.md](./AGENTS.md). +Canonical documentation for this package lives in [AGENTS.md]. 
+ +[AGENTS.md]: ./AGENTS.md diff --git a/e2e/features/auth/sign-out.feature b/e2e/features/auth/sign-out.feature index 0f377ea133..9112f1220a 100644 --- a/e2e/features/auth/sign-out.feature +++ b/e2e/features/auth/sign-out.feature @@ -6,3 +6,13 @@ Feature: Sign out And I open the account menu And I sign out Then I should be on the sign-in page + + Scenario: Redirect back to sign-in when reopening the apps console after signing out + Given I am signed in as the default E2E admin + When I open the apps console + And I open the account menu + And I sign out + Then I should be on the sign-in page + When I open the apps console + Then I should be redirected to the signin page + And I should see the "Sign in" button diff --git a/e2e/features/step-definitions/apps/create-app.steps.ts b/e2e/features/step-definitions/apps/create-app.steps.ts index 6bc9ae30b6..e444b97dc8 100644 --- a/e2e/features/step-definitions/apps/create-app.steps.ts +++ b/e2e/features/step-definitions/apps/create-app.steps.ts @@ -1,6 +1,6 @@ +import type { DifyWorld } from '../../support/world' import { Then, When } from '@cucumber/cucumber' import { expect } from '@playwright/test' -import type { DifyWorld } from '../../support/world' When('I start creating a blank app', async function (this: DifyWorld) { const page = this.getPage() diff --git a/e2e/features/step-definitions/auth/sign-out.steps.ts b/e2e/features/step-definitions/auth/sign-out.steps.ts index 935b73c3af..0cc5f76ccc 100644 --- a/e2e/features/step-definitions/auth/sign-out.steps.ts +++ b/e2e/features/step-definitions/auth/sign-out.steps.ts @@ -1,6 +1,6 @@ +import type { DifyWorld } from '../../support/world' import { Then, When } from '@cucumber/cucumber' import { expect } from '@playwright/test' -import type { DifyWorld } from '../../support/world' When('I open the account menu', async function (this: DifyWorld) { const page = this.getPage() diff --git a/e2e/features/step-definitions/common/auth.steps.ts b/e2e/features/step-definitions/common/auth.steps.ts index bed35244c5..67c18dfe6c 100644 --- a/e2e/features/step-definitions/common/auth.steps.ts +++ b/e2e/features/step-definitions/common/auth.steps.ts @@ -1,5 +1,5 @@ -import { Given } from '@cucumber/cucumber' import type { DifyWorld } from '../../support/world' +import { Given } from '@cucumber/cucumber' Given('I am signed in as the default E2E admin', async function (this: DifyWorld) { const session = await this.getAuthSession() diff --git a/e2e/features/step-definitions/common/navigation.steps.ts b/e2e/features/step-definitions/common/navigation.steps.ts index 28e6953d65..9bec34c224 100644 --- a/e2e/features/step-definitions/common/navigation.steps.ts +++ b/e2e/features/step-definitions/common/navigation.steps.ts @@ -1,6 +1,6 @@ +import type { DifyWorld } from '../../support/world' import { Then, When } from '@cucumber/cucumber' import { expect } from '@playwright/test' -import type { DifyWorld } from '../../support/world' When('I open the apps console', async function (this: DifyWorld) { await this.getPage().goto('/apps') diff --git a/e2e/features/step-definitions/smoke/install.steps.ts b/e2e/features/step-definitions/smoke/install.steps.ts index 857e01a971..3f2f8b5199 100644 --- a/e2e/features/step-definitions/smoke/install.steps.ts +++ b/e2e/features/step-definitions/smoke/install.steps.ts @@ -1,6 +1,6 @@ +import type { DifyWorld } from '../../support/world' import { Given } from '@cucumber/cucumber' import { expect } from '@playwright/test' -import type { DifyWorld } from '../../support/world' Given( 
'the last authentication bootstrap came from a fresh install', diff --git a/e2e/features/support/hooks.ts b/e2e/features/support/hooks.ts index 9e8c025ef8..33b337fb93 100644 --- a/e2e/features/support/hooks.ts +++ b/e2e/features/support/hooks.ts @@ -1,11 +1,12 @@ -import { After, AfterAll, Before, BeforeAll, Status, setDefaultTimeout } from '@cucumber/cucumber' -import { chromium, type Browser } from '@playwright/test' +import type { Browser } from '@playwright/test' +import type { DifyWorld } from './world' import { mkdir, writeFile } from 'node:fs/promises' import path from 'node:path' import { fileURLToPath } from 'node:url' -import { ensureAuthenticatedState } from '../../fixtures/auth' +import { After, AfterAll, Before, BeforeAll, setDefaultTimeout, Status } from '@cucumber/cucumber' +import { chromium } from '@playwright/test' +import { AUTH_BOOTSTRAP_TIMEOUT_MS, ensureAuthenticatedState } from '../../fixtures/auth' import { baseURL, cucumberHeadless, cucumberSlowMo } from '../../test-env' -import type { DifyWorld } from './world' const e2eRoot = fileURLToPath(new URL('../..', import.meta.url)) const artifactsDir = path.join(e2eRoot, 'cucumber-report', 'artifacts') @@ -15,7 +16,7 @@ let browser: Browser | undefined setDefaultTimeout(60_000) const sanitizeForPath = (value: string) => - value.replaceAll(/[^a-zA-Z0-9_-]+/g, '-').replaceAll(/^-+|-+$/g, '') + value.replaceAll(/[^\w-]+/g, '-').replaceAll(/^-+|-+$/g, '') const writeArtifact = async ( scenarioName: string, @@ -31,7 +32,7 @@ const writeArtifact = async ( return artifactPath } -BeforeAll(async () => { +BeforeAll({ timeout: AUTH_BOOTSTRAP_TIMEOUT_MS }, async () => { await mkdir(artifactsDir, { recursive: true }) browser = await chromium.launch({ @@ -44,16 +45,18 @@ BeforeAll(async () => { }) Before(async function (this: DifyWorld, { pickle }) { - if (!browser) throw new Error('Shared Playwright browser is not available.') + if (!browser) + throw new Error('Shared Playwright browser is not available.') - const isUnauthenticatedScenario = pickle.tags.some((tag) => tag.name === '@unauthenticated') + const isUnauthenticatedScenario = pickle.tags.some(tag => tag.name === '@unauthenticated') - if (isUnauthenticatedScenario) await this.startUnauthenticatedSession(browser) + if (isUnauthenticatedScenario) + await this.startUnauthenticatedSession(browser) else await this.startAuthenticatedSession(browser) this.scenarioStartedAt = Date.now() - const tags = pickle.tags.map((tag) => tag.name).join(' ') + const tags = pickle.tags.map(tag => tag.name).join(' ') console.log(`[e2e] start ${pickle.name}${tags ? 
` ${tags}` : ''}`) }) diff --git a/e2e/features/support/world.ts b/e2e/features/support/world.ts index bf63199107..0e9c4b9c84 100644 --- a/e2e/features/support/world.ts +++ b/e2e/features/support/world.ts @@ -1,9 +1,11 @@ -import { type IWorldOptions, World, setWorldConstructor } from '@cucumber/cucumber' +import type { IWorldOptions } from '@cucumber/cucumber' import type { Browser, BrowserContext, ConsoleMessage, Page } from '@playwright/test' +import type { AuthSessionMetadata } from '../../fixtures/auth' +import { setWorldConstructor, World } from '@cucumber/cucumber' import { + authStatePath, readAuthSessionMetadata, - type AuthSessionMetadata, } from '../../fixtures/auth' import { baseURL, defaultLocale } from '../../test-env' @@ -37,7 +39,8 @@ export class DifyWorld extends World { this.page.setDefaultTimeout(30_000) this.page.on('console', (message: ConsoleMessage) => { - if (message.type() === 'error') this.consoleErrors.push(message.text()) + if (message.type() === 'error') + this.consoleErrors.push(message.text()) }) this.page.on('pageerror', (error) => { this.pageErrors.push(error.message) @@ -53,7 +56,8 @@ export class DifyWorld extends World { } getPage() { - if (!this.page) throw new Error('Playwright page has not been initialized for this scenario.') + if (!this.page) + throw new Error('Playwright page has not been initialized for this scenario.') return this.page } diff --git a/e2e/fixtures/auth.ts b/e2e/fixtures/auth.ts index 853bfff5ed..cc54a6d47b 100644 --- a/e2e/fixtures/auth.ts +++ b/e2e/fixtures/auth.ts @@ -1,8 +1,8 @@ import type { Browser, Page } from '@playwright/test' -import { expect } from '@playwright/test' import { mkdir, readFile, writeFile } from 'node:fs/promises' import path from 'node:path' import { fileURLToPath } from 'node:url' +import { expect } from '@playwright/test' import { defaultBaseURL, defaultLocale } from '../test-env' export type AuthSessionMetadata = { @@ -12,7 +12,7 @@ export type AuthSessionMetadata = { usedInitPassword: boolean } -const WAIT_TIMEOUT_MS = 120_000 +export const AUTH_BOOTSTRAP_TIMEOUT_MS = 120_000 const e2eRoot = fileURLToPath(new URL('..', import.meta.url)) export const authDir = path.join(e2eRoot, '.auth') @@ -39,40 +39,56 @@ const escapeRegex = (value: string) => value.replaceAll(/[.*+?^${}()|[\]\\]/g, ' const appURL = (baseURL: string, pathname: string) => new URL(pathname, baseURL).toString() -const waitForPageState = async (page: Page) => { +type AuthPageState = 'install' | 'login' | 'init' + +const getRemainingTimeout = (deadline: number) => Math.max(deadline - Date.now(), 1) + +const waitForPageState = async (page: Page, deadline: number): Promise => { const installHeading = page.getByRole('heading', { name: 'Setting up an admin account' }) const signInButton = page.getByRole('button', { name: 'Sign in' }) const initPasswordField = page.getByLabel('Admin initialization password') - const deadline = Date.now() + WAIT_TIMEOUT_MS - - while (Date.now() < deadline) { - if (await installHeading.isVisible().catch(() => false)) return 'install' as const - if (await signInButton.isVisible().catch(() => false)) return 'login' as const - if (await initPasswordField.isVisible().catch(() => false)) return 'init' as const - - await page.waitForTimeout(1_000) + try { + return await Promise.any([ + installHeading + .waitFor({ state: 'visible', timeout: getRemainingTimeout(deadline) }) + .then(() => 'install'), + signInButton + .waitFor({ state: 'visible', timeout: getRemainingTimeout(deadline) }) + .then(() => 'login'), + 
initPasswordField + .waitFor({ state: 'visible', timeout: getRemainingTimeout(deadline) }) + .then(() => 'init'), + ]) + } + catch { + throw new Error(`Unable to determine auth page state for ${page.url()}`) } - - throw new Error(`Unable to determine auth page state for ${page.url()}`) } -const completeInitPasswordIfNeeded = async (page: Page) => { +const completeInitPasswordIfNeeded = async (page: Page, deadline: number) => { const initPasswordField = page.getByLabel('Admin initialization password') - if (!(await initPasswordField.isVisible({ timeout: 3_000 }).catch(() => false))) return false + + const needsInitPassword = await initPasswordField + .waitFor({ state: 'visible', timeout: Math.min(getRemainingTimeout(deadline), 3_000) }) + .then(() => true) + .catch(() => false) + + if (!needsInitPassword) + return false await initPasswordField.fill(initPassword) await page.getByRole('button', { name: 'Validate' }).click() await expect(page.getByRole('heading', { name: 'Setting up an admin account' })).toBeVisible({ - timeout: WAIT_TIMEOUT_MS, + timeout: getRemainingTimeout(deadline), }) return true } -const completeInstall = async (page: Page, baseURL: string) => { +const completeInstall = async (page: Page, baseURL: string, deadline: number) => { await expect(page.getByRole('heading', { name: 'Setting up an admin account' })).toBeVisible({ - timeout: WAIT_TIMEOUT_MS, + timeout: getRemainingTimeout(deadline), }) await page.getByLabel('Email address').fill(adminCredentials.email) @@ -81,13 +97,13 @@ const completeInstall = async (page: Page, baseURL: string) => { await page.getByRole('button', { name: 'Set up' }).click() await expect(page).toHaveURL(new RegExp(`^${escapeRegex(baseURL)}/apps(?:\\?.*)?$`), { - timeout: WAIT_TIMEOUT_MS, + timeout: getRemainingTimeout(deadline), }) } -const completeLogin = async (page: Page, baseURL: string) => { +const completeLogin = async (page: Page, baseURL: string, deadline: number) => { await expect(page.getByRole('button', { name: 'Sign in' })).toBeVisible({ - timeout: WAIT_TIMEOUT_MS, + timeout: getRemainingTimeout(deadline), }) await page.getByLabel('Email address').fill(adminCredentials.email) @@ -95,12 +111,13 @@ const completeLogin = async (page: Page, baseURL: string) => { await page.getByRole('button', { name: 'Sign in' }).click() await expect(page).toHaveURL(new RegExp(`^${escapeRegex(baseURL)}/apps(?:\\?.*)?$`), { - timeout: WAIT_TIMEOUT_MS, + timeout: getRemainingTimeout(deadline), }) } export const ensureAuthenticatedState = async (browser: Browser, configuredBaseURL?: string) => { const baseURL = resolveBaseURL(configuredBaseURL) + const deadline = Date.now() + AUTH_BOOTSTRAP_TIMEOUT_MS await mkdir(authDir, { recursive: true }) @@ -111,25 +128,29 @@ export const ensureAuthenticatedState = async (browser: Browser, configuredBaseU const page = await context.newPage() try { - await page.goto(appURL(baseURL, '/install'), { waitUntil: 'networkidle' }) + await page.goto(appURL(baseURL, '/install'), { + timeout: getRemainingTimeout(deadline), + waitUntil: 'domcontentloaded', + }) - let usedInitPassword = await completeInitPasswordIfNeeded(page) - let pageState = await waitForPageState(page) + let usedInitPassword = await completeInitPasswordIfNeeded(page, deadline) + let pageState = await waitForPageState(page, deadline) while (pageState === 'init') { - const completedInitPassword = await completeInitPasswordIfNeeded(page) + const completedInitPassword = await completeInitPasswordIfNeeded(page, deadline) if (!completedInitPassword) throw new 
Error(`Unable to validate initialization password for ${page.url()}`) usedInitPassword = true - pageState = await waitForPageState(page) + pageState = await waitForPageState(page, deadline) } - if (pageState === 'install') await completeInstall(page, baseURL) - else await completeLogin(page, baseURL) + if (pageState === 'install') + await completeInstall(page, baseURL, deadline) + else await completeLogin(page, baseURL, deadline) await expect(page.getByRole('button', { name: 'Create from Blank' })).toBeVisible({ - timeout: WAIT_TIMEOUT_MS, + timeout: getRemainingTimeout(deadline), }) await context.storageState({ path: authStatePath }) @@ -142,7 +163,8 @@ export const ensureAuthenticatedState = async (browser: Browser, configuredBaseU } await writeFile(authMetadataPath, `${JSON.stringify(metadata, null, 2)}\n`, 'utf8') - } finally { + } + finally { await context.close() } } diff --git a/e2e/package.json b/e2e/package.json index 925418f223..94fc857c0b 100644 --- a/e2e/package.json +++ b/e2e/package.json @@ -1,7 +1,7 @@ { "name": "dify-e2e", - "private": true, "type": "module", + "private": true, "scripts": { "check": "vp check --fix", "e2e": "tsx ./scripts/run-cucumber.ts", @@ -11,10 +11,12 @@ "e2e:install": "playwright install --with-deps chromium", "e2e:middleware:down": "tsx ./scripts/setup.ts middleware-down", "e2e:middleware:up": "tsx ./scripts/setup.ts middleware-up", - "e2e:reset": "tsx ./scripts/setup.ts reset" + "e2e:reset": "tsx ./scripts/setup.ts reset", + "type-check": "tsc" }, "devDependencies": { "@cucumber/cucumber": "catalog:", + "@dify/tsconfig": "workspace:*", "@playwright/test": "catalog:", "@types/node": "catalog:", "tsx": "catalog:", diff --git a/e2e/scripts/common.ts b/e2e/scripts/common.ts index bb82121079..ea6c897b2d 100644 --- a/e2e/scripts/common.ts +++ b/e2e/scripts/common.ts @@ -1,4 +1,6 @@ -import { spawn, type ChildProcess } from 'node:child_process' +import type { ChildProcess } from 'node:child_process' +import { spawn } from 'node:child_process' +import { createHash } from 'node:crypto' import { access, copyFile, readFile, writeFile } from 'node:fs/promises' import net from 'node:net' import path from 'node:path' @@ -38,12 +40,17 @@ export const middlewareEnvExampleFile = path.join(dockerDir, 'middleware.env.exa export const webEnvLocalFile = path.join(webDir, '.env.local') export const webEnvExampleFile = path.join(webDir, '.env.example') export const apiEnvExampleFile = path.join(apiDir, 'tests', 'integration_tests', '.env.example') +export const e2eWebEnvOverrides = { + NEXT_PUBLIC_API_PREFIX: 'http://127.0.0.1:5001/console/api', + NEXT_PUBLIC_PUBLIC_API_PREFIX: 'http://127.0.0.1:5001/api', +} satisfies Record const formatCommand = (command: string, args: string[]) => [command, ...args].join(' ') export const isMainModule = (metaUrl: string) => { const entrypoint = process.argv[1] - if (!entrypoint) return false + if (!entrypoint) + return false return pathToFileURL(entrypoint).href === metaUrl } @@ -102,7 +109,8 @@ export const runCommandOrThrow = async (options: RunCommandOptions) => { const forwardSignalsToChild = (childProcess: ChildProcess) => { const handleSignal = (signal: NodeJS.Signals) => { - if (childProcess.exitCode === null) childProcess.kill(signal) + if (childProcess.exitCode === null) + childProcess.kill(signal) } const onSigint = () => handleSignal('SIGINT') @@ -147,7 +155,8 @@ export const runForegroundProcess = async ({ export const ensureFileExists = async (filePath: string, exampleFilePath: string) => { try { await access(filePath) - } 
catch { + } + catch { await copyFile(exampleFilePath, filePath) } } @@ -157,38 +166,42 @@ export const ensureLineInFile = async (filePath: string, line: string) => { const lines = fileContent.split(/\r?\n/) const assignmentPrefix = line.includes('=') ? `${line.slice(0, line.indexOf('='))}=` : null - if (lines.includes(line)) return + if (lines.includes(line)) + return - if (assignmentPrefix && lines.some((existingLine) => existingLine.startsWith(assignmentPrefix))) + if (assignmentPrefix && lines.some(existingLine => existingLine.startsWith(assignmentPrefix))) return const normalizedContent = fileContent.endsWith('\n') ? fileContent : `${fileContent}\n` await writeFile(filePath, `${normalizedContent}${line}\n`, 'utf8') } -export const ensureWebEnvLocal = async () => { - await ensureFileExists(webEnvLocalFile, webEnvExampleFile) - - const fileContent = await readFile(webEnvLocalFile, 'utf8') - const nextContent = fileContent.replaceAll('http://localhost:5001', 'http://127.0.0.1:5001') - - if (nextContent !== fileContent) await writeFile(webEnvLocalFile, nextContent, 'utf8') +export const getWebEnvLocalHash = async () => { + const fileContent = await readFile(webEnvLocalFile, 'utf8').catch(() => '') + return createHash('sha256') + .update( + JSON.stringify({ + envLocal: fileContent, + overrides: e2eWebEnvOverrides, + }), + ) + .digest('hex') } export const readSimpleDotenv = async (filePath: string) => { const fileContent = await readFile(filePath, 'utf8') const entries = fileContent .split(/\r?\n/) - .map((line) => line.trim()) - .filter((line) => line && !line.startsWith('#')) + .map(line => line.trim()) + .filter(line => line && !line.startsWith('#')) .map<[string, string]>((line) => { const separatorIndex = line.indexOf('=') const key = separatorIndex === -1 ? line : line.slice(0, separatorIndex).trim() const rawValue = separatorIndex === -1 ? 
'' : line.slice(separatorIndex + 1).trim() if ( - (rawValue.startsWith('"') && rawValue.endsWith('"')) || - (rawValue.startsWith("'") && rawValue.endsWith("'")) + (rawValue.startsWith('"') && rawValue.endsWith('"')) + || (rawValue.startsWith('\'') && rawValue.endsWith('\'')) ) { return [key, rawValue.slice(1, -1)] } @@ -213,7 +226,8 @@ export const waitForCondition = async ({ const deadline = Date.now() + timeoutMs while (Date.now() < deadline) { - if (await check()) return + if (await check()) + return await sleep(intervalMs) } diff --git a/e2e/scripts/run-cucumber.ts b/e2e/scripts/run-cucumber.ts index 39e9157916..d7778e65e2 100644 --- a/e2e/scripts/run-cucumber.ts +++ b/e2e/scripts/run-cucumber.ts @@ -1,7 +1,7 @@ import { mkdir, rm } from 'node:fs/promises' import path from 'node:path' +import { startLoggedProcess, stopManagedProcess, waitForUrl } from '../support/process' import { startWebServer, stopWebServer } from '../support/web-server' -import { waitForUrl, startLoggedProcess, stopManagedProcess } from '../support/process' import { apiURL, baseURL, reuseExistingWebServer } from '../test-env' import { e2eDir, isMainModule, runCommand } from './common' import { resetState, startMiddleware, stopMiddleware } from './setup' @@ -17,12 +17,10 @@ const parseArgs = (argv: string[]): RunOptions => { let headed = false const forwardArgs: string[] = [] - for (let index = 0; index < argv.length; index += 1) { - const arg = argv[index] - + for (const [index, arg] of argv.entries()) { if (arg === '--') { forwardArgs.push(...argv.slice(index + 1)) - break + return { forwardArgs, full, headed } } if (arg === '--full') { @@ -38,24 +36,22 @@ const parseArgs = (argv: string[]): RunOptions => { forwardArgs.push(arg) } - return { - forwardArgs, - full, - headed, - } + return { forwardArgs, full, headed } } const hasCustomTags = (forwardArgs: string[]) => - forwardArgs.some((arg) => arg === '--tags' || arg.startsWith('--tags=')) + forwardArgs.some(arg => arg === '--tags' || arg.startsWith('--tags=')) const main = async () => { const { forwardArgs, full, headed } = parseArgs(process.argv.slice(2)) const startMiddlewareForRun = full const resetStateForRun = full - if (resetStateForRun) await resetState() + if (resetStateForRun) + await resetState() - if (startMiddlewareForRun) await startMiddleware() + if (startMiddlewareForRun) + await startMiddleware() const cucumberReportDir = path.join(e2eDir, 'cucumber-report') const logDir = path.join(e2eDir, '.logs') @@ -81,7 +77,8 @@ const main = async () => { if (startMiddlewareForRun) { try { await stopMiddleware() - } catch { + } + catch { // Cleanup should continue even if middleware shutdown fails. 
} } @@ -103,7 +100,8 @@ const main = async () => { try { try { await waitForUrl(`${apiURL}/health`, 180_000, 1_000) - } catch { + } + catch { throw new Error(`API did not become ready at ${apiURL}/health.`) } @@ -139,7 +137,8 @@ const main = async () => { }) process.exitCode = result.exitCode - } finally { + } + finally { process.off('SIGINT', onTerminate) process.off('SIGTERM', onTerminate) await cleanup() diff --git a/e2e/scripts/setup.ts b/e2e/scripts/setup.ts index 6f38598df4..ba4c011b04 100644 --- a/e2e/scripts/setup.ts +++ b/e2e/scripts/setup.ts @@ -1,4 +1,4 @@ -import { access, mkdir, rm } from 'node:fs/promises' +import { access, mkdir, readFile, rm, writeFile } from 'node:fs/promises' import path from 'node:path' import { waitForUrl } from '../support/process' import { @@ -6,9 +6,10 @@ import { apiEnvExampleFile, dockerDir, e2eDir, + e2eWebEnvOverrides, ensureFileExists, ensureLineInFile, - ensureWebEnvLocal, + getWebEnvLocalHash, isMainModule, isTcpPortReachable, middlewareComposeFile, @@ -23,6 +24,7 @@ import { } from './common' const buildIdPath = path.join(webDir, '.next', 'BUILD_ID') +const webBuildEnvStampPath = path.join(webDir, '.next', 'e2e-web-env.sha256') const middlewareDataPaths = [ path.join(dockerDir, 'volumes', 'db', 'data'), @@ -77,7 +79,8 @@ const getContainerHealth = async (containerId: string) => { stdio: 'pipe', }) - if (result.exitCode !== 0) return '' + if (result.exitCode !== 0) + return '' return result.stdout.trim() } @@ -103,34 +106,56 @@ const waitForDependency = async ({ try { await wait() - } catch (error) { + } + catch (error) { await printComposeLogs(services) throw error } } export const ensureWebBuild = async () => { - await ensureWebEnvLocal() + const envHash = await getWebEnvLocalHash() + const buildEnv = { + ...e2eWebEnvOverrides, + } if (process.env.E2E_FORCE_WEB_BUILD === '1') { await runCommandOrThrow({ command: 'pnpm', args: ['run', 'build'], cwd: webDir, + env: buildEnv, }) + await writeFile(webBuildEnvStampPath, `${envHash}\n`, 'utf8') return } try { - await access(buildIdPath) - console.log('Reusing existing web build artifact.') - } catch { - await runCommandOrThrow({ - command: 'pnpm', - args: ['run', 'build'], - cwd: webDir, - }) + const [buildExists, previousEnvHash] = await Promise.all([ + access(buildIdPath) + .then(() => true) + .catch(() => false), + readFile(webBuildEnvStampPath, 'utf8') + .then(value => value.trim()) + .catch(() => ''), + ]) + + if (buildExists && previousEnvHash === envHash) { + console.log('Reusing existing web build artifact.') + return + } } + catch { + // Fall through to rebuild when the existing build cannot be verified. 
+ } + + await runCommandOrThrow({ + command: 'pnpm', + args: ['run', 'build'], + cwd: webDir, + env: buildEnv, + }) + await writeFile(webBuildEnvStampPath, `${envHash}\n`, 'utf8') } export const startWeb = async () => { @@ -141,6 +166,7 @@ export const startWeb = async () => { args: ['run', 'start'], cwd: webDir, env: { + ...e2eWebEnvOverrides, HOSTNAME: '127.0.0.1', PORT: '3000', }, @@ -152,14 +178,25 @@ export const startApi = async () => { await runCommandOrThrow({ command: 'uv', - args: ['run', '--project', '.', 'flask', 'upgrade-db'], + args: ['run', '--project', '.', '--no-sync', 'flask', 'upgrade-db'], cwd: apiDir, env, }) await runForegroundProcess({ command: 'uv', - args: ['run', '--project', '.', 'flask', 'run', '--host', '127.0.0.1', '--port', '5001'], + args: [ + 'run', + '--project', + '.', + '--no-sync', + 'flask', + 'run', + '--host', + '127.0.0.1', + '--port', + '5001', + ], cwd: apiDir, env, }) @@ -177,7 +214,8 @@ export const resetState = async () => { console.log('Stopping middleware services...') try { await stopMiddleware() - } catch { + } + catch { // Reset should continue even if middleware is already stopped. } @@ -191,7 +229,7 @@ export const resetState = async () => { console.log('Removing E2E local state...') await Promise.all( - e2eStatePaths.map((targetPath) => rm(targetPath, { force: true, recursive: true })), + e2eStatePaths.map(targetPath => rm(targetPath, { force: true, recursive: true })), ) console.log('E2E state reset complete.') diff --git a/e2e/support/process.ts b/e2e/support/process.ts index 96273ef931..4de1161b08 100644 --- a/e2e/support/process.ts +++ b/e2e/support/process.ts @@ -1,6 +1,7 @@ import type { ChildProcess } from 'node:child_process' +import type { WriteStream } from 'node:fs' import { spawn } from 'node:child_process' -import { createWriteStream, type WriteStream } from 'node:fs' +import { createWriteStream } from 'node:fs' import { mkdir } from 'node:fs/promises' import net from 'node:net' import { dirname } from 'node:path' @@ -63,11 +64,14 @@ export const waitForUrl = async ( const response = await fetch(url, { signal: controller.signal, }) - if (response.ok) return - } finally { + if (response.ok) + return + } + finally { clearTimeout(timeout) } - } catch { + } + catch { // Keep polling until timeout. } @@ -138,7 +142,8 @@ const waitForProcessExit = (childProcess: ChildProcess, timeoutMs: number) => const signalManagedProcess = (childProcess: ChildProcess, signal: NodeJS.Signals) => { const { pid } = childProcess - if (!pid) return + if (!pid) + return try { if (process.platform !== 'win32') { @@ -147,13 +152,15 @@ const signalManagedProcess = (childProcess: ChildProcess, signal: NodeJS.Signals } childProcess.kill(signal) - } catch { + } + catch { // Best-effort shutdown. Cleanup continues even when the process is already gone. 
} } export const stopManagedProcess = async (managedProcess?: ManagedProcess) => { - if (!managedProcess) return + if (!managedProcess) + return const { childProcess, logStream } = managedProcess diff --git a/e2e/support/web-server.ts b/e2e/support/web-server.ts index ad5d5d916a..819f7effe3 100644 --- a/e2e/support/web-server.ts +++ b/e2e/support/web-server.ts @@ -34,7 +34,8 @@ export const startWebServer = async ({ }: WebServerStartOptions) => { const { host, port } = getUrlHostAndPort(baseURL) - if (reuseExistingServer && (await isPortReachable(host, port))) return + if (reuseExistingServer && (await isPortReachable(host, port))) + return activeProcess = await startLoggedProcess({ command, @@ -49,7 +50,8 @@ export const startWebServer = async ({ startupError = error }) activeProcess.childProcess.once('exit', (code, signal) => { - if (startupError) return + if (startupError) + return startupError = new Error( `Web server exited before readiness (code: ${code ?? 'unknown'}, signal: ${signal ?? 'none'}).`, @@ -67,7 +69,8 @@ export const startWebServer = async ({ try { await waitForUrl(baseURL, 1_000, 250, 1_000) return - } catch { + } + catch { // Continue polling until timeout or child exit. } } diff --git a/e2e/tsconfig.json b/e2e/tsconfig.json index 3976c12b66..3e72e790cf 100644 --- a/e2e/tsconfig.json +++ b/e2e/tsconfig.json @@ -1,16 +1,9 @@ { + "extends": "@dify/tsconfig/node.json", "compilerOptions": { - "target": "ES2023", "lib": ["ES2023", "DOM"], - "module": "ESNext", - "moduleResolution": "Bundler", - "allowJs": false, - "resolveJsonModule": true, - "noEmit": true, - "strict": true, - "skipLibCheck": true, "types": ["node", "@playwright/test", "@cucumber/cucumber"], - "isolatedModules": true, + "allowJs": false, "verbatimModuleSyntax": true }, "include": ["./**/*.ts"], diff --git a/e2e/vite.config.ts b/e2e/vite.config.ts index 98400d5b9b..2329b534b4 100644 --- a/e2e/vite.config.ts +++ b/e2e/vite.config.ts @@ -1,15 +1,5 @@ import { defineConfig } from 'vite-plus' export default defineConfig({ - lint: { - options: { - typeAware: true, - typeCheck: true, - denyWarnings: true, - }, - }, - fmt: { - singleQuote: true, - semi: false, - }, + }) diff --git a/eslint-suppressions.json b/eslint-suppressions.json new file mode 100644 index 0000000000..e4831c4e98 --- /dev/null +++ b/eslint-suppressions.json @@ -0,0 +1,7000 @@ +{ + "e2e/features/support/hooks.ts": { + "no-console": { + "count": 3 + }, + "node/prefer-global/buffer": { + "count": 1 + } + }, + "e2e/scripts/common.ts": { + "node/prefer-global/buffer": { + "count": 2 + } + }, + "e2e/support/process.ts": { + "ts/no-use-before-define": { + "count": 2 + } + }, + "packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/migrate.ts": { + "no-console": { + "count": 11 + } + }, + "packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/normalize.ts": { + "no-console": { + "count": 1 + } + }, + "packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/run.ts": { + "no-console": { + "count": 9 + } + }, + "web/.storybook/main.ts": { + "storybook/no-uninstalled-addons": { + "count": 3 + } + }, + "web/__mocks__/zustand.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/__tests__/document-detail-navigation-fix.test.tsx": { + "no-console": { + "count": 10 + } + }, + "web/__tests__/document-list-sorting.test.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/__tests__/embedded-user-id-auth.test.tsx": { + "ts/no-explicit-any": { + "count": 8 + } + }, + 
"web/__tests__/embedded-user-id-store.test.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/__tests__/goto-anything/command-selector.test.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/__tests__/i18n-upload-features.test.ts": { + "no-console": { + "count": 3 + } + }, + "web/__tests__/navigation-utils.test.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/__tests__/plugin-tool-workflow-error.test.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/__tests__/real-browser-flicker.test.tsx": { + "no-console": { + "count": 16 + }, + "react/set-state-in-effect": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/__tests__/unified-tags-logic.test.ts": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/__tests__/workflow-onboarding-integration.test.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/layout-main.tsx": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/long-time-range-picker.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/time-range-picker/range-selector.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/__tests__/svg-attribute-error-reproduction.spec.tsx": { + "no-console": { + "count": 19 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/config-button.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/config-popup.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/provider-config-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/provider-panel.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/(commonLayout)/app/(appDetailLayout)/[appId]/overview/tracing/type.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/(commonLayout)/datasets/(datasetDetailLayout)/[datasetId]/layout-main.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/(humanInputLayout)/form/[token]/form.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/(shareLayout)/components/splash.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/(shareLayout)/webapp-reset-password/layout.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/(shareLayout)/webapp-signin/components/mail-and-password-auth.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/account/(commonLayout)/account-page/email-change-modal.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/account/(commonLayout)/account-page/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/account/(commonLayout)/delete-account/components/feed-back.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/account/(commonLayout)/delete-account/components/verify-email.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/account/(commonLayout)/delete-account/index.tsx": { + "no-restricted-imports": { + 
"count": 1 + } + }, + "web/app/account/oauth/authorize/layout.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/account/oauth/authorize/page.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app-sidebar/app-info/app-operations.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 4 + } + }, + "web/app/components/app-sidebar/app-sidebar-dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app-sidebar/basic.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app-sidebar/dataset-info/dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app-sidebar/dataset-sidebar-dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app-sidebar/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app-sidebar/toggle-button.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/annotation/add-annotation-modal/edit-item/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/app/annotation/batch-add-annotation-modal/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/annotation/edit-annotation-modal/edit-item/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/app/annotation/header-opts/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/annotation/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/app/annotation/type.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/app/annotation/view-annotation-modal/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 5 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/app-access-control/specific-groups-or-members.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/app-publisher/features-wrapper.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/app/app-publisher/index.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/app/app-publisher/publish-with-multiple-model.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/app-publisher/version-info-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/base/var-highlight/index.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/app/configuration/config-prompt/advanced-prompt-input.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/configuration/config-prompt/conversation-history/edit-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + 
"web/app/components/app/configuration/config-prompt/simple-prompt-input.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/app/configuration/config-var/__tests__/index.spec.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/configuration/config-var/config-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/app/configuration/config-var/config-modal/type-select.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config-var/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config-var/select-var-type.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/config-vision/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config-vision/param-config-content.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/agent/agent-setting/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/agent/agent-setting/item-panel.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/agent/agent-tools/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 9 + } + }, + "web/app/components/app/configuration/config/agent/agent-tools/setting-built-in-tool.tsx": { + "react-hooks/exhaustive-deps": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/app/configuration/config/assistant-type-picker/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/automatic/get-automatic-res.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/automatic/instruction-editor.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/configuration/config/automatic/prompt-res-in-workflow.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/automatic/prompt-res.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/automatic/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/automatic/version-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/code-generator/get-code-generator-res.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/configuration/config/config-audio.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/config/config-document.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/context-var/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + 
"web/app/components/app/configuration/dataset-config/context-var/var-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/params-config/__tests__/config-content.spec.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/params-config/config-content.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/params-config/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/select-dataset/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/dataset-config/settings-modal/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/app/configuration/dataset-config/settings-modal/retrieval-section.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/debug/__tests__/index.spec.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/debug/chat-user-input.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/debug/debug-with-multiple-model/chat-item.tsx": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/app/configuration/debug/debug-with-multiple-model/debug-item.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/app/configuration/debug/debug-with-multiple-model/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/configuration/debug/debug-with-multiple-model/model-parameter-trigger.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/configuration/debug/debug-with-multiple-model/text-generation-item.tsx": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/app/configuration/debug/debug-with-single-model/index.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/app/configuration/debug/hooks.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/app/configuration/debug/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 11 + } + }, + "web/app/components/app/configuration/debug/types.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/configuration/prompt-value-panel/index.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/app/configuration/prompt-value-panel/utils.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/create-app-dialog/app-list/sidebar.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/app/create-app-modal/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/create-from-dsl-modal/dsl-confirm-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/create-from-dsl-modal/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + }, + 
"react-refresh/only-export-components": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/app/duplicate-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/log/filter.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/app/log/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/app/log/list.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 6 + }, + "style/multiline-ternary": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/app/log/model-info.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/overview/app-card.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/overview/customize/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/overview/embedded/index.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/app/overview/settings/index.tsx": { + "no-restricted-imports": { + "count": 3 + }, + "react/set-state-in-effect": { + "count": 3 + }, + "regexp/no-unused-capturing-group": { + "count": 1 + } + }, + "web/app/components/app/overview/trigger-card.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/app/switch-app-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/app/text-generate/item/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/app/text-generate/item/result-tab.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/app/workflow-log/detail.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/app/workflow-log/filter.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/app/workflow-log/list.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/app/workflow-log/trigger-by-display.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/apps/app-card.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/apps/list.tsx": { + "react-hooks/exhaustive-deps": { + "count": 1 + }, + "react/unsupported-syntax": { + "count": 2 + } + }, + "web/app/components/apps/new-app-card.tsx": { + "react-hooks-extra/no-direct-set-state-in-use-effect": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/action-button/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/base/agent-log-modal/detail.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/agent-log-modal/index.stories.tsx": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/agent-log-modal/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/agent-log-modal/result.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/agent-log-modal/tool-call.tsx": { 
+ "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/amplitude/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/amplitude/utils.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/app-icon-picker/ImageInput.tsx": { + "react/no-create-ref": { + "count": 1 + } + }, + "web/app/components/base/audio-btn/audio.ts": { + "node/prefer-global/buffer": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/audio-btn/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/audio-gallery/AudioPlayer.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/auto-height-textarea/index.stories.tsx": { + "no-console": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/auto-height-textarea/index.tsx": { + "react-hooks/rules-of-hooks": { + "count": 1 + }, + "react/rules-of-hooks": { + "count": 1 + } + }, + "web/app/components/base/badge/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/base/block-input/index.stories.tsx": { + "no-console": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/block-input/index.tsx": { + "react-refresh/only-export-components": { + "count": 1 + }, + "react/component-hook-factories": { + "count": 1 + }, + "react/no-nested-component-definitions": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/carousel/index.tsx": { + "react-hooks-extra/no-direct-set-state-in-use-effect": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 3 + } + }, + "web/app/components/base/chat/chat-with-history/chat-wrapper.tsx": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/base/chat/chat-with-history/context.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/base/chat/chat-with-history/header-in-mobile.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/chat/chat-with-history/header/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/chat/chat-with-history/hooks.tsx": { + "react/set-state-in-effect": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 18 + } + }, + "web/app/components/base/chat/chat-with-history/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/chat/chat-with-history/inputs-form/content.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/chat/chat-with-history/sidebar/operation.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/chat/chat-with-history/sidebar/rename-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/answer/agent-content.tsx": { + "style/multiline-ternary": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/answer/human-input-content/utils.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/answer/index.tsx": { + "react/set-state-in-effect": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + 
"web/app/components/base/chat/chat/answer/operation.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/base/chat/chat/answer/workflow-process.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/chat-input-area/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/chat/chat/check-input-forms-hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/citation/index.tsx": { + "react-hooks/exhaustive-deps": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/hooks.ts": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 17 + } + }, + "web/app/components/base/chat/chat/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + }, + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/base/chat/chat/type.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/base/chat/chat/utils.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/chat/embedded-chatbot/chat-wrapper.tsx": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/base/chat/embedded-chatbot/context.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/base/chat/embedded-chatbot/header/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/chat/embedded-chatbot/hooks.tsx": { + "react-hooks-extra/no-direct-set-state-in-use-effect": { + "count": 3 + }, + "react/set-state-in-effect": { + "count": 6 + } + }, + "web/app/components/base/chat/embedded-chatbot/inputs-form/content.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/chat/types.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/chat/utils.ts": { + "ts/no-explicit-any": { + "count": 10 + } + }, + "web/app/components/base/checkbox/index.stories.tsx": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/chip/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/content-dialog/index.stories.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/copy-feedback/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/date-and-time-picker/date-picker/index.tsx": { + "react/set-state-in-effect": { + "count": 4 + } + }, + "web/app/components/base/date-and-time-picker/hooks.ts": { + "react/no-unnecessary-use-prefix": { + "count": 2 + } + }, + "web/app/components/base/date-and-time-picker/time-picker/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/base/date-and-time-picker/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/base/date-and-time-picker/utils/dayjs.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/dialog/index.stories.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/drawer-plus/index.stories.tsx": { + "react/component-hook-factories": { + "count": 1 + } + }, + "web/app/components/base/emoji-picker/Inner.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/emoji-picker/index.tsx": { + 
"no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/error-boundary/index.tsx": { + "react-refresh/only-export-components": { + "count": 3 + }, + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/features/context.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/base/features/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/annotation-reply/annotation-ctrl-button.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/annotation-reply/config-param-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/annotation-reply/config-param.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/annotation-reply/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/features/new-feature-panel/annotation-reply/type.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/annotation-reply/use-annotation-config.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/features/new-feature-panel/conversation-opener/modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/feature-bar.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/feature-card.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/base/features/new-feature-panel/file-upload/setting-modal.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/moderation/form-generation.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/moderation/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/features/new-feature-panel/moderation/moderation-setting-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/features/new-feature-panel/text-to-speech/param-config-content.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/base/features/new-feature-panel/text-to-speech/voice-settings.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/features/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/file-uploader/dynamic-pdf-preview.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/file-uploader/file-list-in-log.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/no-missing-key": { + "count": 1 + } + }, + "web/app/components/base/file-uploader/hooks.ts": { + "react/no-unnecessary-use-prefix": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/file-uploader/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 5 + } + }, + "web/app/components/base/file-uploader/pdf-highlighter-adapter.tsx": { + "no-barrel-files/no-barrel-files": { + 
"count": 2 + } + }, + "web/app/components/base/file-uploader/pdf-preview.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/file-uploader/store.tsx": { + "react-refresh/only-export-components": { + "count": 4 + } + }, + "web/app/components/base/file-uploader/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/base/file-uploader/utils.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/form/components/base/base-field.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/form/components/base/base-form.tsx": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/base/form/components/base/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/form/components/field/mixed-variable-text-input/placeholder.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/form/components/field/variable-or-constant-input.tsx": { + "no-console": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/form/components/field/variable-selector.tsx": { + "no-console": { + "count": 1 + } + }, + "web/app/components/base/form/form-scenarios/base/field.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/form/form-scenarios/base/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/form/form-scenarios/demo/index.tsx": { + "no-console": { + "count": 2 + } + }, + "web/app/components/base/form/form-scenarios/input-field/field.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/form/form-scenarios/input-field/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/form/form-scenarios/node-panel/field.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/form/form-scenarios/node-panel/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/form/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/form/hooks/use-check-validated.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/form/hooks/use-get-validators.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/form/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 15 + } + }, + "web/app/components/base/form/utils/secret-input/index.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/ga/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/base/icons/src/public/avatar/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/icons/src/public/billing/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/base/icons/src/public/common/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } 
+ }, + "web/app/components/base/icons/src/public/education/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/public/files/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 11 + } + }, + "web/app/components/base/icons/src/public/knowledge/dataset-card/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/base/icons/src/public/knowledge/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/base/icons/src/public/knowledge/online-drive/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/icons/src/public/llm/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 9 + } + }, + "web/app/components/base/icons/src/public/other/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/base/icons/src/public/thought/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/public/tracing/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 21 + } + }, + "web/app/components/base/icons/src/vender/features/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 10 + } + }, + "web/app/components/base/icons/src/vender/knowledge/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 15 + } + }, + "web/app/components/base/icons/src/vender/line/alertsAndFeedback/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/icons/src/vender/line/arrows/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/base/icons/src/vender/line/communication/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/icons/src/vender/line/development/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/icons/src/vender/line/editor/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/line/education/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/line/files/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/base/icons/src/vender/line/financeAndECommerce/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/base/icons/src/vender/line/general/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 12 + } + }, + "web/app/components/base/icons/src/vender/line/images/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/line/layout/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/line/mediaAndDevices/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/icons/src/vender/line/others/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 7 + } + }, + "web/app/components/base/icons/src/vender/line/time/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/icons/src/vender/other/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 8 + } + }, + "web/app/components/base/icons/src/vender/pipeline/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + 
"web/app/components/base/icons/src/vender/plugin/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/icons/src/vender/solid/FinanceAndECommerce/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/solid/alertsAndFeedback/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/solid/arrows/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/icons/src/vender/solid/communication/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/base/icons/src/vender/solid/development/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/base/icons/src/vender/solid/editor/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/solid/education/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/icons/src/vender/solid/files/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/icons/src/vender/solid/general/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 11 + } + }, + "web/app/components/base/icons/src/vender/solid/mediaAndDevices/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/base/icons/src/vender/solid/security/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/solid/shapes/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/solid/users/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/system/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/icons/src/vender/workflow/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 30 + } + }, + "web/app/components/base/icons/utils.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/image-uploader/__tests__/image-preview.spec.tsx": { + "erasable-syntax-only/parameter-properties": { + "count": 1 + } + }, + "web/app/components/base/image-uploader/__tests__/utils.spec.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/image-uploader/hooks.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/base/image-uploader/image-link-input.tsx": { + "regexp/no-unused-capturing-group": { + "count": 1 + } + }, + "web/app/components/base/image-uploader/image-list.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/image-uploader/image-preview.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/image-uploader/utils.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/inline-delete-confirm/index.stories.tsx": { + "no-console": { + "count": 2 + } + }, + "web/app/components/base/input-with-copy/index.tsx": { + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/base/input/index.stories.tsx": { + "no-console": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/input/index.tsx": { + "react-refresh/only-export-components": { + 
"count": 1 + } + }, + "web/app/components/base/logo/dify-logo.tsx": { + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/markdown-blocks/audio-block.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/base/markdown-blocks/button.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/markdown-blocks/code-block.tsx": { + "react/set-state-in-effect": { + "count": 7 + }, + "ts/no-explicit-any": { + "count": 9 + } + }, + "web/app/components/base/markdown-blocks/form.tsx": { + "erasable-syntax-only/enums": { + "count": 3 + } + }, + "web/app/components/base/markdown-blocks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 10 + } + }, + "web/app/components/base/markdown-blocks/link.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/markdown-blocks/paragraph.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/markdown-blocks/plugin-img.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/base/markdown-blocks/plugin-paragraph.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/base/markdown-blocks/think-block.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/base/markdown-blocks/video-block.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/base/markdown/error-boundary.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/markdown/markdown-utils.ts": { + "regexp/no-unused-capturing-group": { + "count": 1 + } + }, + "web/app/components/base/mermaid/index.tsx": { + "react/set-state-in-effect": { + "count": 7 + }, + "regexp/no-super-linear-backtracking": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/mermaid/utils.ts": { + "regexp/no-unused-capturing-group": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/base/message-log-modal/index.stories.tsx": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/message-log-modal/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/modal-like-wrap/index.stories.tsx": { + "no-console": { + "count": 3 + } + }, + "web/app/components/base/modal/index.stories.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/modal/modal.stories.tsx": { + "no-console": { + "count": 4 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/new-audio-button/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/node-status/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/base/notion-connector/index.stories.tsx": { + "no-console": { + "count": 1 + } + }, + "web/app/components/base/notion-page-selector/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/base/pagination/index.tsx": { + "unicorn/prefer-number-properties": { + "count": 1 + } + }, + "web/app/components/base/pagination/type.ts": { + "ts/no-empty-object-type": { + "count": 1 + } + }, + "web/app/components/base/param-item/index.tsx": { + 
"no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/portal-to-follow-elem/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/index.stories.tsx": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/index.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/base/prompt-editor/plugins/component-picker-block/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/component-picker-block/menu.tsx": { + "erasable-syntax-only/parameter-properties": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/context-block/component.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/context-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/current-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/draggable-plugin/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/error-message-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/history-block/component.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/history-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/hitl-input-block/component-ui.tsx": { + "react-hooks/exhaustive-deps": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/hitl-input-block/hitl-input-block-replacement-block.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/hitl-input-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/hitl-input-block/variable-block.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/prompt-editor/plugins/last-run-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/query-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/request-url-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/shortcuts-popup-plugin/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/update-block.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/plugins/variable-block/index.tsx": { + "react-refresh/only-export-components": { + "count": 2 + } + }, + 
"web/app/components/base/prompt-editor/plugins/workflow-variable-block/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "react-refresh/only-export-components": { + "count": 3 + } + }, + "web/app/components/base/prompt-editor/plugins/workflow-variable-block/node.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/prompt-editor/utils.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/base/prompt-log-modal/index.stories.tsx": { + "no-console": { + "count": 1 + } + }, + "web/app/components/base/prompt-log-modal/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/qrcode/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/radio-card/index.stories.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/radio/component/group/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/radio/context/index.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/radio/index.stories.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/search-input/index.stories.tsx": { + "no-console": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/select/index.stories.tsx": { + "no-console": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/select/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + }, + "style/multiline-ternary": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/sort/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/svg-gallery/index.tsx": { + "node/prefer-global/buffer": { + "count": 1 + } + }, + "web/app/components/base/switch/index.stories.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/tab-slider/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/base/tag-input/index.stories.tsx": { + "no-console": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/tag-management/__tests__/panel.spec.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/base/tag-management/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/tag-management/tag-item-editor.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/tag-management/tag-remove-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/base/text-generation/hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/text-generation/types.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/base/textarea/index.stories.tsx": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/textarea/index.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/base/video-gallery/VideoPlayer.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/base/voice-input/__tests__/index.spec.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/base/voice-input/index.stories.tsx": { + "no-console": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + 
"web/app/components/base/voice-input/utils.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/base/with-input-validation/index.stories.tsx": { + "no-console": { + "count": 1 + }, + "react/component-hook-factories": { + "count": 1 + } + }, + "web/app/components/base/with-input-validation/index.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/base/zendesk/utils.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/billing/annotation-full/modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/billing/billing-page/__tests__/index.spec.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/billing/plan-upgrade-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/billing/plan/assets/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/billing/plan/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/billing/pricing/assets/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 12 + } + }, + "web/app/components/billing/pricing/plan-switcher/plan-range-switcher.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/billing/pricing/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/billing/priority-label/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/billing/type.ts": { + "erasable-syntax-only/enums": { + "count": 4 + } + }, + "web/app/components/billing/upgrade-btn/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/billing/usage-info/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/common/document-picker/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/common/document-picker/preview-document-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/common/image-previewer/index.tsx": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/app/components/datasets/common/image-uploader/__tests__/store.spec.tsx": { + "react/error-boundaries": { + "count": 1 + } + }, + "web/app/components/datasets/common/image-uploader/hooks/use-upload.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/datasets/common/image-uploader/image-uploader-in-retrieval-testing/image-input.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/common/image-uploader/store.tsx": { + "react-refresh/only-export-components": { + "count": 3 + } + }, + "web/app/components/datasets/common/retrieval-method-info/index.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/datasets/common/retrieval-param-config/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create-from-pipeline/create-options/create-from-dsl-modal/dsl-confirm-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create-from-pipeline/create-options/create-from-dsl-modal/hooks/use-dsl-import.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + 
"web/app/components/datasets/create-from-pipeline/create-options/create-from-dsl-modal/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create-from-pipeline/list/template-card/details/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create-from-pipeline/list/template-card/details/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/datasets/create-from-pipeline/list/template-card/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/embedding-process/indexing-progress-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/empty-dataset-creation-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/file-preview/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/create/notion-page-preview/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/create/step-one/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/datasets/create/step-one/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/datasets/create/step-two/components/general-chunking-options.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/step-two/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 5 + } + }, + "web/app/components/datasets/create/step-two/components/indexing-mode-section.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/datasets/create/step-two/components/inputs.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/step-two/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/datasets/create/step-two/hooks/use-indexing-config.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 3 + } + }, + "web/app/components/datasets/create/step-two/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + }, + "react-hooks/exhaustive-deps": { + "count": 1 + } + }, + "web/app/components/datasets/create/step-two/preview-item/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/datasets/create/stop-embedding-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/website/base/checkbox-with-label.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/website/base/field.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/create/website/firecrawl/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-console": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/datasets/create/website/firecrawl/options.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/create/website/jina-reader/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-console": { + "count": 1 + }, + 
"react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/datasets/create/website/jina-reader/options.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/create/website/watercrawl/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-console": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/datasets/create/website/watercrawl/options.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/documents/components/document-list/components/document-table-row.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/components/document-list/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/datasets/documents/components/document-list/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/datasets/documents/components/documents-header.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/components/operations.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/components/rename-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/base/credential-selector/__tests__/index.spec.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/base/credential-selector/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/base/header.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/online-documents/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/online-drive/file-list/header/breadcrumbs/bucket.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/online-drive/file-list/header/breadcrumbs/dropdown/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/online-drive/file-list/list/item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/online-drive/index.tsx": { + "react/set-state-in-effect": { + "count": 5 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/online-drive/utils.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/store/provider.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/store/slices/online-drive.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/website-crawl/base/checkbox-with-label.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/website-crawl/base/options/index.tsx": { + "ts/no-explicit-any": { + "count": 
1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/data-source/website-crawl/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 5 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/process-documents/form.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/process-documents/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/processing/embedding-process/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/steps/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/datasets/documents/create-from-pipeline/types.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/batch-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/common/chunk-content.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/common/regeneration-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/common/summary-status.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/datasets/documents/detail/completed/components/menu-bar.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/datasets/documents/detail/completed/components/segment-list-content.tsx": { + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/display-toggle.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 5 + } + }, + "web/app/components/datasets/documents/detail/completed/hooks/use-search-filter.ts": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + }, + "react-refresh/only-export-components": { + "count": 1 + }, + "react/use-memo": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/segment-card/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/completed/status-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/context.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/embedding/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/datasets/documents/detail/embedding/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/metadata/components/doc-type-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + 
"web/app/components/datasets/documents/detail/metadata/components/field-info.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/metadata/components/metadata-field-list.tsx": { + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/metadata/hooks/use-metadata-state.ts": { + "react-hooks-extra/no-direct-set-state-in-use-effect": { + "count": 4 + }, + "react/set-state-in-effect": { + "count": 4 + } + }, + "web/app/components/datasets/documents/detail/metadata/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/segment-add/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/datasets/documents/detail/settings/pipeline-settings/index.tsx": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/datasets/documents/detail/settings/pipeline-settings/process-documents/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/datasets/documents/status-item/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/external-api/external-api-modal/index.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/external-knowledge-base/create/ExternalApiSelect.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/external-knowledge-base/create/ExternalApiSelection.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/extra-info/statistics.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/formatted-text/flavours/type.ts": { + "ts/no-empty-object-type": { + "count": 1 + } + }, + "web/app/components/datasets/hit-testing/components/chunk-detail-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/hit-testing/components/query-input/textarea.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/hit-testing/components/result-item-external.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/hit-testing/components/score.tsx": { + "unicorn/prefer-number-properties": { + "count": 1 + } + }, + "web/app/components/datasets/hit-testing/index.tsx": { + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/datasets/list/dataset-card/components/dataset-card-footer.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/list/dataset-card/hooks/use-dataset-card-state.ts": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/edit-metadata-batch/edited-beacon.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/edit-metadata-batch/input-combined.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/datasets/metadata/edit-metadata-batch/modal.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/datasets/metadata/hooks/use-edit-dataset-metadata.ts": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/hooks/use-metadata-document.ts": { + "ts/no-explicit-any": { + "count": 1 + }, + 
"ts/no-non-null-asserted-optional-chain": { + "count": 2 + } + }, + "web/app/components/datasets/metadata/metadata-dataset/create-content.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/metadata-dataset/create-metadata-modal.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/metadata-dataset/dataset-metadata-drawer.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/datasets/metadata/metadata-dataset/select-metadata-modal.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/metadata-document/info-group.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/metadata/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/datasets/rename-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/settings/chunk-structure/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/datasets/settings/index-method/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/settings/index-method/keyword-number.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/datasets/settings/permission-selector/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/no-missing-key": { + "count": 1 + } + }, + "web/app/components/datasets/settings/summary-index-setting.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/develop/code.tsx": { + "ts/no-empty-object-type": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 9 + } + }, + "web/app/components/develop/md.tsx": { + "ts/no-empty-object-type": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/develop/secret-key/input-copy.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/develop/secret-key/secret-key-generate.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/develop/secret-key/secret-key-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/explore/banner/banner-item.tsx": { + "react-hooks-extra/no-direct-set-state-in-use-effect": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 3 + } + }, + "web/app/components/explore/banner/indicator-button.tsx": { + "react-hooks-extra/no-direct-set-state-in-use-effect": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/explore/create-app-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + }, + "unicorn/prefer-number-properties": { + "count": 1 + } + }, + "web/app/components/explore/item-operation/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/explore/try-app/app/chat.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/explore/try-app/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/explore/try-app/tab.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/goto-anything/actions/commands/command-bus.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + 
"web/app/components/goto-anything/actions/commands/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/goto-anything/actions/commands/registry.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/goto-anything/actions/commands/slash.tsx": { + "react-refresh/only-export-components": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/goto-anything/actions/commands/types.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/goto-anything/actions/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/goto-anything/actions/types.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/goto-anything/command-selector.tsx": { + "react/unsupported-syntax": { + "count": 2 + } + }, + "web/app/components/goto-anything/components/footer.tsx": { + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/goto-anything/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/goto-anything/context.tsx": { + "react-refresh/only-export-components": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 4 + } + }, + "web/app/components/goto-anything/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 4 + } + }, + "web/app/components/goto-anything/hooks/use-goto-anything-results.ts": { + "@tanstack/query/exhaustive-deps": { + "count": 1 + } + }, + "web/app/components/header/account-about/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-dropdown/compliance.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/header/account-setting/api-based-extension-page/modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/api-based-extension-page/selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/data-source-page-new/card.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/data-source-page-new/configure.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/data-source-page-new/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/header/account-setting/data-source-page-new/hooks/use-marketplace-all-plugins.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/header/account-setting/data-source-page-new/install-from-marketplace.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/header/account-setting/data-source-page-new/item.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/header/account-setting/data-source-page-new/operator.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/header/account-setting/data-source-page-new/types.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/key-validator/declarations.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/header/account-setting/language-page/index.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/header/account-setting/members-page/invite-modal/index.tsx": { + "react/set-state-in-effect": { + 
"count": 3 + } + }, + "web/app/components/header/account-setting/members-page/operation/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/members-page/transfer-ownership-modal/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/header/account-setting/members-page/transfer-ownership-modal/member-selector.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/declarations.ts": { + "erasable-syntax-only/enums": { + "count": 11 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/header/account-setting/model-provider-page/hooks.ts": { + "react/no-unnecessary-use-prefix": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/add-credential-in-load-balancing.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/add-custom-model.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/authorized/index.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/config-provider.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/credential-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 6 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/hooks/use-auth.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/hooks/use-custom-models.ts": { + "react/no-unnecessary-use-prefix": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/hooks/use-model-form-schemas.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 7 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-auth/switch-credential-in-load-balancing.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-modal/Form.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-modal/Input.tsx": { + "unicorn/prefer-number-properties": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-modal/index.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-parameter-modal/configuration-button.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-parameter-modal/model-display.tsx": { + 
"ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-parameter-modal/status-indicators.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/model-selector/feature-icon.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/model-provider-page/provider-added-card/cooldown-timer.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/provider-added-card/model-auth-dropdown/__tests__/use-activate-credential.spec.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/header/account-setting/model-provider-page/provider-added-card/model-list-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/model-provider-page/provider-added-card/model-load-balancing-configs.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/header/account-setting/model-provider-page/provider-added-card/model-load-balancing-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/header/account-setting/model-provider-page/provider-added-card/priority-use-tip.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/header/account-setting/model-provider-page/utils.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/header/account-setting/plugin-page/utils.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/header/app-nav/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/header/header-wrapper.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/base/badges/icon-with-tooltip.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/base/key-value-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/card/index.tsx": { + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/hooks.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/plugins/install-plugin/hooks/use-fold-anim-into.ts": { + "react/no-unnecessary-use-prefix": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/install-bundle/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/install-bundle/item/github-item.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/install-bundle/steps/hooks/use-install-multi-state.ts": { + "react-hooks/exhaustive-deps": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/install-from-github/index.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/plugins/install-plugin/install-from-github/steps/selectPackage.tsx": { + "no-restricted-imports": { + 
"count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/install-from-local-package/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/install-plugin/install-from-local-package/steps/uploading.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/install-plugin/install-from-marketplace/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/marketplace/hooks.ts": { + "@tanstack/query/exhaustive-deps": { + "count": 1 + } + }, + "web/app/components/plugins/marketplace/search-box/tags-filter.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/marketplace/sort-dropdown/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/authorize/add-oauth-button.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-auth/authorize/api-key-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-auth/authorize/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/authorize/oauth-client-settings.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-auth/authorized-in-node.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/authorized/index.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-auth/authorized/item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/hooks/use-get-api.ts": { + "react/no-unnecessary-use-prefix": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 12 + }, + "react-refresh/only-export-components": { + "count": 3 + } + }, + "web/app/components/plugins/plugin-auth/plugin-auth-in-agent.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + "no-barrel-files/no-barrel-files": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-auth/utils.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/agent-strategy-list.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/app-selector/app-inputs-form.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/plugins/plugin-detail-panel/app-selector/app-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/app-selector/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/datasource-action-list.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/detail-header.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/detail-header/components/index.ts": { + 
"no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/detail-header/components/plugin-source-badge.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/detail-header/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/plugins/plugin-detail-panel/endpoint-card.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/endpoint-list.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/endpoint-modal.tsx": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/plugins/plugin-detail-panel/model-list.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/model-selector/index.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/plugins/plugin-detail-panel/model-selector/tts-params-panel.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/multiple-tool-selector/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/strategy-detail.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/__tests__/index.spec.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/create/common-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/create/hooks/use-common-modal-state.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/create/hooks/use-oauth-client-state.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/create/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + }, + "no-restricted-imports": { + "count": 3 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/create/oauth-client.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/create/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/edit/apikey-edit-modal.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/edit/manual-edit-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/edit/oauth-edit-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/list-view.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/log-viewer.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + 
"ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/selector-entry.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/selector-view.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/subscription-card.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/subscription-list/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/tool-selector/components/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 7 + } + }, + "web/app/components/plugins/plugin-detail-panel/tool-selector/components/reasoning-config-form.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/tool-selector/components/schema-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/tool-selector/components/tool-item.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/tool-selector/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-detail-panel/tool-selector/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-detail-panel/trigger/event-detail-drawer.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/plugins/plugin-item/action.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-item/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-mutation-model/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/context.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/debug-info.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/empty/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-page/filter-management/category-filter.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/filter-management/tag-filter.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/install-plugin-dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/plugins/plugin-page/plugin-info.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/plugin-tasks/components/task-status-indicator.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/plugin-page/plugin-tasks/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/readme-panel/index.tsx": { + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/plugins/readme-panel/store.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + 
"web/app/components/plugins/reference-setting-modal/auto-update-setting/strategy-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/reference-setting-modal/auto-update-setting/tool-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/reference-setting-modal/auto-update-setting/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/plugins/reference-setting-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/plugins/types.ts": { + "erasable-syntax-only/enums": { + "count": 7 + }, + "ts/no-explicit-any": { + "count": 25 + } + }, + "web/app/components/plugins/update-plugin/from-market-place.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/editor/form/hidden-fields.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/editor/form/hooks.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/editor/form/initial-fields.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/editor/form/show-all-settings.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/editor/form/types.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/hooks.ts": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/input-field/label-right-content/global-inputs.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/test-run/preparation/document-processing/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/test-run/preparation/document-processing/options.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/components/panel/test-run/preparation/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/components/panel/test-run/result/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/test-run/result/result-preview/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/panel/test-run/result/result-preview/utils.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/rag-pipeline/components/publish-as-knowledge-pipeline-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/rag-pipeline-children.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/rag-pipeline-header/publisher/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/rag-pipeline-header/run-mode.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + 
"web/app/components/rag-pipeline/components/rag-pipeline-main.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/components/update-dsl-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/components/version-mismatch-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 9 + } + }, + "web/app/components/rag-pipeline/hooks/use-DSL.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/hooks/use-input-fields.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/hooks/use-nodes-sync-draft.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/hooks/use-pipeline-config.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/rag-pipeline/hooks/use-pipeline-init.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/hooks/use-pipeline-run.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/store/index.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/rag-pipeline/utils/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/rag-pipeline/utils/nodes.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/share/text-generation/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/share/text-generation/info-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/share/text-generation/menu-dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/share/text-generation/no-data/index.tsx": { + "ts/no-empty-object-type": { + "count": 1 + } + }, + "web/app/components/share/text-generation/result/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/share/text-generation/run-batch/csv-reader/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/share/text-generation/run-once/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/share/text-generation/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/share/utils.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/tools/edit-custom-collection-modal/config-credentials.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/edit-custom-collection-modal/get-schema.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/tools/edit-custom-collection-modal/index.tsx": { + "react/set-state-in-effect": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/tools/edit-custom-collection-modal/test-api.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/tools/labels/selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/mcp/create-card.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/tools/mcp/detail/content.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + 
"web/app/components/tools/mcp/detail/operation-dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/mcp/detail/tool-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/mcp/mcp-server-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/tools/mcp/mcp-server-param-item.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/tools/mcp/mcp-service-card.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/mcp/modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/mcp/provider-card.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/tools/provider-list.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/tools/provider/empty.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/tools/setting/build-in/config-credentials.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/tools/types.ts": { + "erasable-syntax-only/enums": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/tools/workflow-tool/confirm-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/workflow-tool/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/tools/workflow-tool/method-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow-app/components/workflow-children.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow-app/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 13 + } + }, + "web/app/components/workflow-app/hooks/use-DSL.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow-app/hooks/use-nodes-sync-draft.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow-app/hooks/use-workflow-init.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow-app/hooks/use-workflow-refresh-draft.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow-app/hooks/use-workflow-run.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow-app/hooks/use-workflow-template.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow-app/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow-app/store/workflow/workflow-slice.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/block-selector/all-start-blocks.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/blocks.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/featured-tools.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/block-selector/featured-triggers.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/block-selector/hooks.ts": { + "react/set-state-in-effect": { + "count": 1 + } + }, + 
"web/app/components/workflow/block-selector/index-bar.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/main.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/market-place-plugin/action.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/market-place-plugin/item.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/rag-tool-recommendations/index.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/start-blocks.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/tabs.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/tool-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/tool/action-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/tool/tool-list-flat-view/list.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/tool/tool.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/workflow/block-selector/trigger-plugin/action-item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/trigger-plugin/item.tsx": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/types.ts": { + "erasable-syntax-only/enums": { + "count": 4 + } + }, + "web/app/components/workflow/block-selector/use-check-vertical-scrollbar.ts": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/use-sticky-scroll.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/block-selector/view-type-select.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/candidate-node-main.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/context.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/datasets-detail-store/provider.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/dsl-export-confirm-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/header/run-mode.tsx": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/header/test-run-menu.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/header/version-history-button.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/header/view-history.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/workflow/header/view-workflow-history.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } 
+ }, + "web/app/components/workflow/hooks-store/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/hooks-store/provider.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/hooks-store/store.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/hooks/__tests__/use-checklist.spec.ts": { + "react/error-boundaries": { + "count": 1 + } + }, + "web/app/components/workflow/hooks/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 27 + } + }, + "web/app/components/workflow/hooks/use-checklist.ts": { + "ts/no-empty-object-type": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/hooks/use-dynamic-test-run-options.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/hooks/use-helpline.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/hooks/use-inspect-vars-crud-common.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/hooks/use-nodes-interactions.ts": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/workflow/hooks/use-serial-async-callback.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/hooks/use-workflow-interactions.ts": { + "no-barrel-files/no-barrel-files": { + "count": 5 + } + }, + "web/app/components/workflow/hooks/use-workflow-run-event/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 19 + } + }, + "web/app/components/workflow/hooks/use-workflow-run-event/use-workflow-agent-log.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/hooks/use-workflow-run-event/use-workflow-finished.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/hooks/use-workflow-search.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/hooks/use-workflow-variables.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/add-variable-popup-with-position.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/_base/components/agent-strategy-selector.tsx": { + "no-restricted-imports": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/_base/components/agent-strategy.tsx": { + "ts/no-empty-object-type": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/_base/components/before-run-form/form-item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 11 + } + }, + "web/app/components/workflow/nodes/_base/components/before-run-form/form.tsx": { + "ts/no-explicit-any": { + "count": 3 + }, + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/before-run-form/index.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/_base/components/collapse/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/config-vision.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + 
"web/app/components/workflow/nodes/_base/components/editor/code-editor/editor-support-vars.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/_base/components/editor/code-editor/index.tsx": { + "react-refresh/only-export-components": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/_base/components/entry-node-container.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/error-handle/default-value.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/error-handle/error-handle-on-panel.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/error-handle/error-handle-type-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/error-handle/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/field.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/form-input-item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/_base/components/form-input-type-switch.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/help-link.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/input-support-select-var.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/input-var-type-icon.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/layout/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 7 + }, + "react-refresh/only-export-components": { + "count": 7 + } + }, + "web/app/components/workflow/nodes/_base/components/mcp-tool-availability.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/mcp-tool-not-support-tooltip.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/memory-config.tsx": { + "unicorn/prefer-number-properties": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/mixed-variable-text-input/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/mixed-variable-text-input/placeholder.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/next-step/operator.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/node-control.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/node-handle.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/option-card.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + 
"web/app/components/workflow/nodes/_base/components/panel-operator/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/prompt/editor.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/_base/components/readonly-input-with-select-var.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/selector.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/_base/components/setting-item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/switch-plugin-version.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/constant-field.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/match-schema-type.ts": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/object-child-tree-panel/picker/field.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/output-var-list.tsx": { + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/utils.ts": { + "ts/no-explicit-any": { + "count": 32 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/var-list.tsx": { + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/var-reference-picker.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/var-reference-vars.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/var-type-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/variable-label/hooks.ts": { + "react/no-unnecessary-use-prefix": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/_base/components/variable/variable-label/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/_base/components/workflow-panel/index.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/_base/components/workflow-panel/last-run/index.tsx": { + "react/set-state-in-effect": { + "count": 7 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/_base/components/workflow-panel/last-run/use-last-run.ts": { + "react/no-unnecessary-use-prefix": { + "count": 2 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/workflow/nodes/_base/components/workflow-panel/tab.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/hooks/use-one-step-run.ts": { + 
"react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 22 + } + }, + "web/app/components/workflow/nodes/_base/hooks/use-output-var-list.ts": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/workflow/nodes/_base/hooks/use-toggle-expend.ts": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/hooks/use-var-list.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/_base/node.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/_base/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/agent/components/model-bar.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-empty-object-type": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/agent/components/tool-icon.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/agent/default.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/agent/node.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/agent/panel.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/agent/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/agent/use-config.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/workflow/nodes/agent/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/answer/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/assigner/components/operation-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/assigner/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/assigner/hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/assigner/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/assigner/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/assigner/utils.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/code/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/code/dependency-picker.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/code/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/code/use-config.ts": { + "react/set-state-in-effect": { + "count": 2 + }, + "regexp/no-useless-assertions": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/code/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/components.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/data-source-empty/hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + 
"web/app/components/workflow/nodes/data-source/default.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/data-source/hooks/use-before-run-form.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/data-source/hooks/use-config.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/data-source/panel.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/data-source/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-barrel-files/no-barrel-files": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/data-source/utils.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/document-extractor/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/document-extractor/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/end/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/end/node.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/http/components/authorization/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/http/components/curl-panel.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/http/components/key-value/key-value-edit/index.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/http/components/key-value/key-value-edit/item.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/http/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/http/types.ts": { + "erasable-syntax-only/enums": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/http/use-config.ts": { + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/http/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/human-input/components/button-style-dropdown.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/email-configure-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/method-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/method-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/recipient/email-input.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/recipient/member-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/test-email-sender.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + }, + 
"ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/delivery-method/upgrade-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/form-content-preview.tsx": { + "react/unsupported-syntax": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/components/form-content.tsx": { + "react/component-hook-factories": { + "count": 1 + }, + "react/no-nested-component-definitions": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/human-input/components/variable-in-markdown.tsx": { + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/human-input/panel.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/human-input/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-add.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-list/condition-input.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-list/condition-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-list/condition-operator.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-list/condition-var-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-number-input.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/components/condition-wrap.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/if-else/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/if-else/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/iteration-start/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/iteration/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/iteration/node.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/iteration/panel.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/iteration/use-config.ts": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/iteration/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/knowledge-base/components/chunk-structure/selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-base/components/index-method.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-base/components/retrieval-setting/hooks.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + 
"web/app/components/workflow/nodes/knowledge-base/components/retrieval-setting/search-method-option.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-base/components/retrieval-setting/top-k-and-score-threshold.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-base/components/retrieval-setting/type.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/knowledge-base/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-barrel-files/no-barrel-files": { + "count": 8 + } + }, + "web/app/components/workflow/nodes/knowledge-base/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/knowledge-base/utils.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/add-condition.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/condition-list/condition-common-variable-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/condition-list/condition-item.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/condition-list/condition-operator.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/condition-list/condition-value-method.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/condition-list/condition-variable-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/metadata-filter/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/metadata-filter/metadata-filter-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/metadata/metadata-trigger.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/components/retrieval-config.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/node.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/types.ts": { + "erasable-syntax-only/enums": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/knowledge-retrieval/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/list-operator/components/filter-condition.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/list-operator/components/sub-variable-picker.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/list-operator/default.ts": { + 
"ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/list-operator/types.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/config-prompt-item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/llm/components/config-prompt.tsx": { + "react/unsupported-syntax": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/code-editor.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/json-importer.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/json-schema-config.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/json-schema-generator/assets/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/json-schema-generator/generated-result.tsx": { + "style/multiline-ternary": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/json-schema-generator/index.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/json-schema-generator/prompt-editor.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/visual-editor/context.tsx": { + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/visual-editor/edit-card/actions.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/visual-editor/edit-card/auto-width-input.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/visual-editor/edit-card/type-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/json-schema-config-modal/visual-editor/hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/components/structure-output.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/llm/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/llm/use-config.ts": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/llm/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 9 + } + }, + "web/app/components/workflow/nodes/llm/utils.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + 
"count": 7 + } + }, + "web/app/components/workflow/nodes/loop-start/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-add.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-list/condition-input.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-list/condition-item.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-list/condition-operator.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-list/condition-var-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-number-input.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/condition-wrap.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/loop-variables/form-item.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/loop/components/loop-variables/input-mode-selec.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/components/loop-variables/item.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/loop/components/loop-variables/variable-type-select.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/use-config.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/loop/use-single-run-form-params.helpers.ts": { + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/loop/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/components/extract-parameter/__tests__/list.spec.tsx": { + "unused-imports/no-unused-vars": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/components/extract-parameter/update.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/panel.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/types.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/use-config.ts": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/parameter-extractor/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 9 + } + }, + "web/app/components/workflow/nodes/question-classifier/components/advanced-setting.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + 
"web/app/components/workflow/nodes/question-classifier/components/class-item.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/question-classifier/components/class-list.tsx": { + "react/set-state-in-effect": { + "count": 1 + }, + "react/unsupported-syntax": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/question-classifier/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/question-classifier/node.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/question-classifier/use-config.ts": { + "react/set-state-in-effect": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/question-classifier/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/workflow/nodes/start/panel.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/start/use-config.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/start/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/template-transform/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/template-transform/use-config.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/template-transform/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/nodes/tool/components/copy-id.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/tool/components/input-var-list.tsx": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/workflow/nodes/tool/components/mixed-variable-text-input/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/tool/components/mixed-variable-text-input/placeholder.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/tool/components/tool-form/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/tool/components/tool-form/item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/tool/default.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/workflow/nodes/tool/hooks/use-config.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/tool/hooks/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/nodes/tool/output-schema-utils.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/tool/panel.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/tool/types.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/components/trigger-form/index.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/components/trigger-form/item.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/default.ts": { + 
"ts/no-explicit-any": { + "count": 11 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/node.tsx": { + "react/unsupported-syntax": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/panel.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/types.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/use-config.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-plugin/utils/form-helpers.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/workflow/nodes/trigger-schedule/components/frequency-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-schedule/components/monthly-days-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-schedule/default.ts": { + "regexp/no-unused-capturing-group": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 10 + } + }, + "web/app/components/workflow/nodes/trigger-webhook/components/generic-table.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-webhook/components/parameter-table.tsx": { + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-webhook/default.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/trigger-webhook/panel.tsx": { + "no-restricted-imports": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/utils.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/variable-assigner/components/var-group-item.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/nodes/variable-assigner/default.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/nodes/variable-assigner/use-single-run-form-params.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/note-node/note-editor/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 3 + } + }, + "web/app/components/workflow/note-node/note-editor/plugins/link-editor-plugin/component.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/note-node/note-editor/toolbar/color-picker.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/components/workflow/note-node/note-editor/toolbar/command.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/note-node/note-editor/toolbar/font-size-selector.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/note-node/note-editor/toolbar/operator.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/note-node/note-editor/utils.ts": { + "regexp/no-useless-quantifier": { + "count": 1 + } + }, + "web/app/components/workflow/operator/add-block.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/operator/hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/operator/more-actions.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/operator/tip-popup.tsx": { + 
"no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/operator/zoom-in-out.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/panel/chat-record/index.tsx": { + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/workflow/panel/chat-record/user-input.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/panel/chat-variable-panel/components/array-bool-list.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/panel/chat-variable-panel/components/array-value-list.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/panel/chat-variable-panel/components/object-value-item.tsx": { + "react-refresh/only-export-components": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 5 + }, + "unicorn/prefer-number-properties": { + "count": 2 + } + }, + "web/app/components/workflow/panel/chat-variable-panel/components/object-value-list.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/panel/chat-variable-panel/components/variable-type-select.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/panel/chat-variable-panel/type.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/app/components/workflow/panel/debug-and-preview/chat-wrapper.tsx": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/app/components/workflow/panel/debug-and-preview/conversation-variable-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/panel/debug-and-preview/hooks.ts": { + "ts/no-explicit-any": { + "count": 12 + } + }, + "web/app/components/workflow/panel/debug-and-preview/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/panel/env-panel/variable-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/panel/human-input-form-list.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/panel/inputs-panel.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/panel/version-history-panel/context-menu/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/panel/version-history-panel/delete-confirm-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/panel/version-history-panel/filter/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/panel/version-history-panel/restore-confirm-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/panel/workflow-preview.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/run/agent-log/agent-log-nav-more.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/run/agent-log/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/run/hooks.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/run/index.tsx": { + "react/set-state-in-effect": { + "count": 2 + } + }, + 
"web/app/components/workflow/run/iteration-log/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/run/iteration-log/iteration-log-trigger.tsx": { + "unicorn/prefer-number-properties": { + "count": 1 + } + }, + "web/app/components/workflow/run/loop-log/__tests__/loop-log-trigger.spec.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/run/loop-log/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/run/loop-log/loop-log-trigger.tsx": { + "unicorn/prefer-number-properties": { + "count": 1 + } + }, + "web/app/components/workflow/run/node.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/components/workflow/run/output-panel.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/run/result-panel.tsx": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/app/components/workflow/run/result-text.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/run/retry-log/index.tsx": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/run/utils/format-log/agent/index.ts": { + "ts/no-explicit-any": { + "count": 11 + } + }, + "web/app/components/workflow/run/utils/format-log/graph-to-log-struct.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/components/workflow/run/utils/format-log/index.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/run/utils/format-log/iteration/index.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/run/utils/format-log/loop/index.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/run/utils/format-log/parallel/index.ts": { + "no-console": { + "count": 4 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/store/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/app/components/workflow/store/workflow/debug/inspect-vars-slice.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/store/workflow/workflow-draft-slice.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/store/workflow/workflow-slice.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/types.ts": { + "erasable-syntax-only/enums": { + "count": 16 + }, + "ts/no-empty-object-type": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/app/components/workflow/update-dsl-modal.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/utils/__tests__/node-navigation.spec.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/utils/data-source.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/utils/debug.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/utils/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 9 + } + }, + "web/app/components/workflow/utils/node-navigation.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/utils/node.ts": { + "regexp/no-super-linear-backtracking": { + "count": 1 + } + }, + "web/app/components/workflow/utils/tool.ts": { + "ts/no-explicit-any": { + "count": 2 + } 
+ }, + "web/app/components/workflow/utils/workflow-init.ts": { + "ts/no-explicit-any": { + "count": 12 + } + }, + "web/app/components/workflow/utils/workflow.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/variable-inspect/display-content.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/variable-inspect/group.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/variable-inspect/left.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/variable-inspect/listening.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/variable-inspect/panel.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/variable-inspect/right.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/app/components/workflow/variable-inspect/trigger.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/variable-inspect/utils.tsx": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/app/components/workflow/variable-inspect/value-content.tsx": { + "react/set-state-in-effect": { + "count": 5 + }, + "regexp/no-super-linear-backtracking": { + "count": 1 + }, + "regexp/no-unused-capturing-group": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/components/workflow/workflow-history-store.tsx": { + "react-refresh/only-export-components": { + "count": 2 + } + }, + "web/app/components/workflow/workflow-preview/components/nodes/base.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/workflow-preview/components/nodes/constants.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/components/workflow/workflow-preview/components/nodes/iteration-start/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/workflow-preview/components/nodes/loop-start/index.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/components/workflow/workflow-preview/components/zoom-in-out.tsx": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/education-apply/expire-notice-modal.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/education-apply/hooks.ts": { + "react/set-state-in-effect": { + "count": 5 + } + }, + "web/app/education-apply/search-input.tsx": { + "no-restricted-imports": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/education-apply/verify-state-modal.tsx": { + "react/set-state-in-effect": { + "count": 1 + } + }, + "web/app/forgot-password/ForgotPasswordForm.spec.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/app/init/InitPasswordPopup.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/install/installForm.spec.tsx": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/app/layout.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/app/reset-password/layout.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/signin/_header.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/signin/components/mail-and-password-auth.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + 
}, + "web/app/signin/invite-settings/page.tsx": { + "no-restricted-imports": { + "count": 1 + } + }, + "web/app/signin/layout.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/signin/one-more-step.tsx": { + "no-restricted-imports": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/app/signup/layout.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/context/external-api-panel-context.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/context/external-knowledge-api-context.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/context/global-public-context.tsx": { + "react-refresh/only-export-components": { + "count": 3 + } + }, + "web/context/hooks/use-trigger-events-limit-modal.ts": { + "react/set-state-in-effect": { + "count": 3 + } + }, + "web/context/modal-context-provider.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/context/modal-context.test.tsx": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/context/modal-context.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/context/provider-context-provider.tsx": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/context/web-app-context.tsx": { + "react-refresh/only-export-components": { + "count": 1 + } + }, + "web/hooks/use-async-window-open.spec.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/hooks/use-format-time-from-now.spec.ts": { + "regexp/no-dupe-disjunctions": { + "count": 5 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/hooks/use-metadata.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/hooks/use-mitt.ts": { + "react/component-hook-factories": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/hooks/use-oauth.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/hooks/use-pay.tsx": { + "react/set-state-in-effect": { + "count": 4 + } + }, + "web/i18n-config/index.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/i18n-config/lib.client.ts": { + "no-barrel-files/no-barrel-files": { + "count": 1 + } + }, + "web/i18n/de-DE/billing.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/en-US/app-debug.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/fr-FR/app-debug.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/fr-FR/app.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/fr-FR/plugin-trigger.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/fr-FR/tools.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/fr-FR/workflow.json": { + "no-irregular-whitespace": { + "count": 2 + } + }, + "web/i18n/pt-BR/common.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/i18n/ru-RU/common.json": { + "no-irregular-whitespace": { + "count": 2 + } + }, + "web/i18n/uk-UA/app-debug.json": { + "no-irregular-whitespace": { + "count": 1 + } + }, + "web/models/access-control.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/models/app.ts": { + "erasable-syntax-only/enums": { + "count": 2 + } + }, + "web/models/common.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/models/datasets.ts": { + "erasable-syntax-only/enums": { + "count": 8 + }, + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/models/debug.ts": { + "erasable-syntax-only/enums": { + "count": 2 + }, + 
"ts/no-explicit-any": { + "count": 4 + } + }, + "web/models/log.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/models/pipeline.ts": { + "erasable-syntax-only/enums": { + "count": 3 + }, + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/models/share.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/plugins/dev-proxy/server.spec.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/scripts/component-analyzer.js": { + "regexp/no-unused-capturing-group": { + "count": 6 + } + }, + "web/service/access-control.ts": { + "@tanstack/query/exhaustive-deps": { + "count": 1 + } + }, + "web/service/annotation.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/service/apps.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/service/base.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/service/client.spec.ts": { + "next/no-assign-module-variable": { + "count": 1 + } + }, + "web/service/common.ts": { + "ts/no-explicit-any": { + "count": 29 + } + }, + "web/service/datasets.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/service/debug.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/service/fetch.ts": { + "regexp/no-unused-capturing-group": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/service/knowledge/use-dataset.ts": { + "@tanstack/query/exhaustive-deps": { + "count": 1 + } + }, + "web/service/share.ts": { + "erasable-syntax-only/enums": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/service/try-app.ts": { + "no-barrel-files/no-barrel-files": { + "count": 2 + } + }, + "web/service/use-apps.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/service/use-common.ts": { + "ts/no-empty-object-type": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/service/use-endpoints.ts": { + "ts/no-explicit-any": { + "count": 7 + } + }, + "web/service/use-flow.ts": { + "react/no-unnecessary-use-prefix": { + "count": 1 + } + }, + "web/service/use-pipeline.ts": { + "@tanstack/query/exhaustive-deps": { + "count": 1 + } + }, + "web/service/use-plugins-auth.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/service/use-plugins.ts": { + "react/set-state-in-effect": { + "count": 1 + }, + "regexp/no-unused-capturing-group": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + }, + "ts/no-non-null-asserted-optional-chain": { + "count": 1 + } + }, + "web/service/use-tools.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/service/use-workflow.ts": { + "@tanstack/query/exhaustive-deps": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/service/utils.spec.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/types/app.ts": { + "erasable-syntax-only/enums": { + "count": 9 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/types/assets.d.ts": { + "ts/no-explicit-any": { + "count": 5 + } + }, + "web/types/common.ts": { + "erasable-syntax-only/enums": { + "count": 1 + } + }, + "web/types/feature.ts": { + "erasable-syntax-only/enums": { + "count": 3 + } + }, + "web/types/lamejs.d.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/types/pipeline.tsx": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/types/react-18-input-autosize.d.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/types/workflow.ts": { + "ts/no-explicit-any": { + "count": 17 + } + }, + "web/utils/clipboard.ts": { + 
"ts/no-explicit-any": { + "count": 1 + } + }, + "web/utils/completion-params.spec.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/utils/completion-params.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/utils/context.ts": { + "react/component-hook-factories": { + "count": 1 + } + }, + "web/utils/error-parser.ts": { + "no-console": { + "count": 1 + }, + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/utils/get-icon.spec.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/utils/gtag.ts": { + "ts/no-explicit-any": { + "count": 2 + } + }, + "web/utils/index.spec.ts": { + "test/no-identical-title": { + "count": 2 + }, + "ts/no-explicit-any": { + "count": 8 + } + }, + "web/utils/index.ts": { + "ts/no-explicit-any": { + "count": 3 + } + }, + "web/utils/mcp.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/utils/model-config.spec.ts": { + "ts/no-explicit-any": { + "count": 13 + } + }, + "web/utils/model-config.ts": { + "ts/no-explicit-any": { + "count": 6 + } + }, + "web/utils/navigation.spec.ts": { + "ts/no-explicit-any": { + "count": 4 + } + }, + "web/utils/tool-call.spec.ts": { + "ts/no-explicit-any": { + "count": 1 + } + }, + "web/utils/validators.ts": { + "ts/no-explicit-any": { + "count": 2 + } + } +} \ No newline at end of file diff --git a/eslint.config.mjs b/eslint.config.mjs new file mode 100644 index 0000000000..5e81e95f2f --- /dev/null +++ b/eslint.config.mjs @@ -0,0 +1,65 @@ +// @ts-check + +import antfu, { GLOB_MARKDOWN } from '@antfu/eslint-config' +import md from 'eslint-markdown' +import markdownPreferences from 'eslint-plugin-markdown-preferences' + +export default antfu( + { + ignores: original => [ + '**', + '!packages/**', + '!web/**', + '!e2e/**', + '!eslint.config.mjs', + '!package.json', + '!vite.config.ts', + ...original, + ], + typescript: { + overrides: { + 'ts/consistent-type-definitions': ['error', 'type'], + 'ts/no-explicit-any': 'error', + 'ts/no-redeclare': 'off', + }, + erasableOnly: true, + }, + test: { + overrides: { + 'test/prefer-lowercase-title': 'off', + }, + }, + stylistic: { + overrides: { + 'antfu/top-level-function': 'off', + }, + }, + e18e: false, + pnpm: false, + }, + markdownPreferences.configs.standard, + { + files: [GLOB_MARKDOWN], + plugins: { md }, + rules: { + 'md/no-url-trailing-slash': 'error', + 'markdown-preferences/prefer-link-reference-definitions': [ + 'error', + { + minLinks: 1, + }, + ], + 'markdown-preferences/ordered-list-marker-sequence': [ + 'error', + { increment: 'never' }, + ], + 'markdown-preferences/definitions-last': 'error', + 'markdown-preferences/sort-definitions': 'error', + }, + }, + { + rules: { + 'node/prefer-global/process': 'off', + }, + }, +) diff --git a/package.json b/package.json index 736a354ef7..5a67b66a9c 100644 --- a/package.json +++ b/package.json @@ -1,15 +1,26 @@ { "name": "dify", + "type": "module", "private": true, - "scripts": { - "prepare": "vp config" - }, - "devDependencies": { - "vite": "catalog:", - "vite-plus": "catalog:" - }, + "packageManager": "pnpm@10.33.0", "engines": { "node": "^22.22.1" }, - "packageManager": "pnpm@10.33.0" + "scripts": { + "prepare": "vp config", + "type-check": "vp run -r type-check", + "lint": "eslint --cache --concurrency=auto", + "lint:ci": "eslint --cache --cache-strategy content --concurrency 2", + "lint:fix": "vp run lint --fix", + "lint:quiet": "vp run lint --quiet" + }, + "devDependencies": { + "@antfu/eslint-config": "catalog:", + "eslint": "catalog:", + "eslint-markdown": "catalog:", + 
"eslint-plugin-markdown-preferences": "catalog:", + "eslint-plugin-no-barrel-files": "catalog:", + "vite": "catalog:", + "vite-plus": "catalog:" + } } diff --git a/packages/dify-ui/AGENTS.md b/packages/dify-ui/AGENTS.md new file mode 100644 index 0000000000..ecc968e130 --- /dev/null +++ b/packages/dify-ui/AGENTS.md @@ -0,0 +1,27 @@ +# @langgenius/dify-ui + +This package provides shared design tokens (colors, shadows, typography), the `cn()` utility, and a Tailwind CSS preset consumed by `web/`. + +## Border Radius: Figma Token → Tailwind Class Mapping + +The Figma design system uses `--radius/*` tokens whose scale is **offset by one step** from Tailwind CSS v4 defaults. When translating Figma specs to code, always use this mapping — never use `radius-*` as a CSS class, and never extend `borderRadius` in the preset. + +| Figma Token | Value | Tailwind Class | +| --------------- | ----- | ---------------- | +| `--radius/2xs` | 2px | `rounded-xs` | +| `--radius/xs` | 4px | `rounded-sm` | +| `--radius/sm` | 6px | `rounded-md` | +| `--radius/md` | 8px | `rounded-lg` | +| `--radius/lg` | 10px | `rounded-[10px]` | +| `--radius/xl` | 12px | `rounded-xl` | +| `--radius/2xl` | 16px | `rounded-2xl` | +| `--radius/3xl` | 20px | `rounded-[20px]` | +| `--radius/6xl` | 28px | `rounded-[28px]` | +| `--radius/full` | 999px | `rounded-full` | + +### Rules + +- **Do not** add custom `borderRadius` values to `tailwind-preset.ts`. We use Tailwind v4 defaults and arbitrary values (`rounded-[Npx]`) for sizes without a standard equivalent. +- **Do not** use `radius-*` as CSS class names. The old `@utility radius-*` definitions have been removed. +- When the Figma MCP returns `rounded-[var(--radius/sm, 6px)]`, convert it to the standard Tailwind class from the table above (e.g. `rounded-md`). +- For values without a standard Tailwind equivalent (10px, 20px, 28px), use arbitrary values like `rounded-[10px]`. 
diff --git a/packages/dify-ui/package.json b/packages/dify-ui/package.json new file mode 100644 index 0000000000..b54fde9b89 --- /dev/null +++ b/packages/dify-ui/package.json @@ -0,0 +1,29 @@ +{ + "name": "@langgenius/dify-ui", + "type": "module", + "version": "0.0.1", + "private": true, + "exports": { + "./styles.css": "./src/styles/styles.css", + "./tailwind-preset": { + "types": "./src/tailwind-preset.ts", + "import": "./src/tailwind-preset.ts" + }, + "./cn": { + "types": "./src/cn.ts", + "import": "./src/cn.ts" + } + }, + "scripts": { + "type-check": "tsc" + }, + "dependencies": { + "clsx": "catalog:", + "tailwind-merge": "catalog:" + }, + "devDependencies": { + "@dify/tsconfig": "workspace:*", + "tailwindcss": "catalog:", + "typescript": "catalog:" + } +} diff --git a/web/utils/classnames.ts b/packages/dify-ui/src/cn.ts similarity index 100% rename from web/utils/classnames.ts rename to packages/dify-ui/src/cn.ts diff --git a/packages/dify-ui/src/styles/components.css b/packages/dify-ui/src/styles/components.css new file mode 100644 index 0000000000..ab02be97fb --- /dev/null +++ b/packages/dify-ui/src/styles/components.css @@ -0,0 +1,69 @@ +[data-dify-scrollbar]::before, +[data-dify-scrollbar]::after { + content: ''; + position: absolute; + z-index: 1; + border-radius: 9999px; + pointer-events: none; + opacity: 0; + transition: opacity 150ms ease; +} + +[data-dify-scrollbar][data-orientation='vertical']::before { + left: 50%; + top: 4px; + width: 4px; + height: 12px; + transform: translateX(-50%); + background: linear-gradient(to bottom, var(--color-components-panel-bg), transparent); +} + +[data-dify-scrollbar][data-orientation='vertical']::after { + left: 50%; + bottom: 4px; + width: 4px; + height: 12px; + transform: translateX(-50%); + background: linear-gradient(to top, var(--color-components-panel-bg), transparent); +} + +[data-dify-scrollbar][data-orientation='horizontal']::before { + top: 50%; + left: 4px; + width: 12px; + height: 4px; + transform: translateY(-50%); + background: linear-gradient(to right, var(--color-components-panel-bg), transparent); +} + +[data-dify-scrollbar][data-orientation='horizontal']::after { + top: 50%; + right: 4px; + width: 12px; + height: 4px; + transform: translateY(-50%); + background: linear-gradient(to left, var(--color-components-panel-bg), transparent); +} + +[data-dify-scrollbar][data-orientation='vertical']:not([data-overflow-y-start])::before { + opacity: 1; +} + +[data-dify-scrollbar][data-orientation='vertical']:not([data-overflow-y-end])::after { + opacity: 1; +} + +[data-dify-scrollbar][data-orientation='horizontal']:not([data-overflow-x-start])::before { + opacity: 1; +} + +[data-dify-scrollbar][data-orientation='horizontal']:not([data-overflow-x-end])::after { + opacity: 1; +} + +@media (prefers-reduced-motion: reduce) { + [data-dify-scrollbar]::before, + [data-dify-scrollbar]::after { + transition: none; + } +} diff --git a/packages/dify-ui/src/styles/styles.css b/packages/dify-ui/src/styles/styles.css new file mode 100644 index 0000000000..fb410b2d5f --- /dev/null +++ b/packages/dify-ui/src/styles/styles.css @@ -0,0 +1,4 @@ +@import '../themes/light.css' layer(base); +@import '../themes/dark.css' layer(base); +@import './utilities.css'; +@import './components.css'; diff --git a/packages/dify-ui/src/styles/utilities.css b/packages/dify-ui/src/styles/utilities.css new file mode 100644 index 0000000000..69b15d4c10 --- /dev/null +++ b/packages/dify-ui/src/styles/utilities.css @@ -0,0 +1,272 @@ +@utility system-kbd { + font-size: 12px; 
+ font-weight: 500; + line-height: 16px; +} + +@utility system-2xs-regular-uppercase { + font-size: 10px; + font-weight: 400; + text-transform: uppercase; + line-height: 12px; +} + +@utility system-2xs-regular { + font-size: 10px; + font-weight: 400; + line-height: 12px; +} + +@utility system-2xs-medium { + font-size: 10px; + font-weight: 500; + line-height: 12px; +} + +@utility system-2xs-medium-uppercase { + font-size: 10px; + font-weight: 500; + text-transform: uppercase; + line-height: 12px; +} + +@utility system-2xs-semibold-uppercase { + font-size: 10px; + font-weight: 600; + text-transform: uppercase; + line-height: 12px; +} + +@utility system-xs-regular { + font-size: 12px; + font-weight: 400; + line-height: 16px; +} + +@utility system-xs-regular-uppercase { + font-size: 12px; + font-weight: 400; + text-transform: uppercase; + line-height: 16px; +} + +@utility system-xs-medium { + font-size: 12px; + font-weight: 500; + line-height: 16px; +} + +@utility system-xs-medium-uppercase { + font-size: 12px; + font-weight: 500; + text-transform: uppercase; + line-height: 16px; +} + +@utility system-xs-semibold { + font-size: 12px; + font-weight: 600; + line-height: 16px; +} + +@utility system-xs-semibold-uppercase { + font-size: 12px; + font-weight: 600; + text-transform: uppercase; + line-height: 16px; +} + +@utility system-sm-regular { + font-size: 13px; + font-weight: 400; + line-height: 16px; +} + +@utility system-sm-medium { + font-size: 13px; + font-weight: 500; + line-height: 16px; +} + +@utility system-sm-medium-uppercase { + font-size: 13px; + font-weight: 500; + text-transform: uppercase; + line-height: 16px; +} + +@utility system-sm-semibold { + font-size: 13px; + font-weight: 600; + line-height: 16px; +} + +@utility system-sm-semibold-uppercase { + font-size: 13px; + font-weight: 600; + text-transform: uppercase; + line-height: 16px; +} + +@utility system-md-regular { + font-size: 14px; + font-weight: 400; + line-height: 20px; +} + +@utility system-md-medium { + font-size: 14px; + font-weight: 500; + line-height: 20px; +} + +@utility system-md-semibold { + font-size: 14px; + font-weight: 600; + line-height: 20px; +} + +@utility system-md-semibold-uppercase { + font-size: 14px; + font-weight: 600; + text-transform: uppercase; + line-height: 20px; +} + +@utility system-xl-medium { + font-size: 16px; + font-weight: 500; + line-height: 24px; +} + +@utility system-xl-semibold { + font-size: 16px; + font-weight: 600; + line-height: 24px; +} + +@utility code-xs-regular { + font-size: 12px; + font-weight: 400; + line-height: 1.5; +} + +@utility code-sm-regular { + font-size: 13px; + font-weight: 400; + line-height: 1.5; +} + +@utility code-sm-semibold { + font-size: 13px; + font-weight: 600; + line-height: 1.5; +} + +@utility body-xs-regular { + font-size: 12px; + font-weight: 400; + line-height: 16px; +} + +@utility body-xs-medium { + font-size: 12px; + font-weight: 500; + line-height: 16px; +} + +@utility body-sm-regular { + font-size: 13px; + font-weight: 400; + line-height: 16px; +} + +@utility body-sm-medium { + font-size: 13px; + font-weight: 500; + line-height: 16px; +} + +@utility body-md-regular { + font-size: 14px; + font-weight: 400; + line-height: 20px; +} + +@utility body-md-medium { + font-size: 14px; + font-weight: 500; + line-height: 20px; +} + +@utility body-lg-regular { + font-size: 15px; + font-weight: 400; + line-height: 20px; +} + +@utility body-2xl-regular { + font-size: 18px; + font-weight: 400; + line-height: 1.5; +} + +@utility title-xs-semi-bold { + font-size: 
12px; + font-weight: 600; + line-height: 16px; +} + +@utility title-sm-semi-bold { + font-size: 13px; + font-weight: 600; + line-height: 16px; +} + +@utility title-md-semi-bold { + font-size: 14px; + font-weight: 600; + line-height: 20px; +} + +@utility title-lg-bold { + font-size: 15px; + font-weight: 700; + line-height: 1.2; +} + +@utility title-xl-semi-bold { + font-size: 16px; + font-weight: 600; + line-height: 1.2; +} + +@utility title-2xl-semi-bold { + font-size: 18px; + font-weight: 600; + line-height: 1.2; +} + +@utility title-3xl-semi-bold { + font-size: 20px; + font-weight: 600; + line-height: 1.2; +} + +@utility title-3xl-bold { + font-size: 20px; + font-weight: 700; + line-height: 1.2; +} + +@utility title-4xl-semi-bold { + font-size: 24px; + font-weight: 600; + line-height: 1.2; +} + +@utility title-5xl-bold { + font-size: 30px; + font-weight: 700; + line-height: 1.2; +} diff --git a/packages/dify-ui/src/tailwind-preset.ts b/packages/dify-ui/src/tailwind-preset.ts new file mode 100644 index 0000000000..2dbf4781b0 --- /dev/null +++ b/packages/dify-ui/src/tailwind-preset.ts @@ -0,0 +1,87 @@ +import tailwindThemeVarDefine from './themes/tailwind-theme-var-define' + +const difyUIPreset = { + theme: { + extend: { + colors: { + gray: { + 25: '#fcfcfd', + 50: '#f9fafb', + 100: '#f2f4f7', + 200: '#eaecf0', + 300: '#d0d5dd', + 400: '#98a2b3', + 500: '#667085', + 600: '#344054', + 700: '#475467', + 800: '#1d2939', + 900: '#101828', + }, + primary: { + 25: '#f5f8ff', + 50: '#eff4ff', + 100: '#d1e0ff', + 200: '#b2ccff', + 300: '#84adff', + 400: '#528bff', + 500: '#2970ff', + 600: '#155eef', + 700: '#004eeb', + 800: '#0040c1', + 900: '#00359e', + }, + blue: { + 500: '#E1EFFE', + }, + green: { + 50: '#F3FAF7', + 100: '#DEF7EC', + 800: '#03543F', + }, + yellow: { + 100: '#FDF6B2', + 800: '#723B13', + }, + purple: { + 50: '#F6F5FF', + 200: '#DCD7FE', + }, + indigo: { + 25: '#F5F8FF', + 50: '#EEF4FF', + 100: '#E0EAFF', + 300: '#A4BCFD', + 400: '#8098F9', + 600: '#444CE7', + 800: '#2D31A6', + }, + ...tailwindThemeVarDefine, + }, + boxShadow: { + 'xs': '0px 1px 2px 0px rgba(16, 24, 40, 0.05)', + 'sm': '0px 1px 2px 0px rgba(16, 24, 40, 0.06), 0px 1px 3px 0px rgba(16, 24, 40, 0.10)', + 'sm-no-bottom': '0px -1px 2px 0px rgba(16, 24, 40, 0.06), 0px -1px 3px 0px rgba(16, 24, 40, 0.10)', + 'md': '0px 2px 4px -2px rgba(16, 24, 40, 0.06), 0px 4px 8px -2px rgba(16, 24, 40, 0.10)', + 'lg': '0px 4px 6px -2px rgba(16, 24, 40, 0.03), 0px 12px 16px -4px rgba(16, 24, 40, 0.08)', + 'xl': '0px 8px 8px -4px rgba(16, 24, 40, 0.03), 0px 20px 24px -4px rgba(16, 24, 40, 0.08)', + '2xl': '0px 24px 48px -12px rgba(16, 24, 40, 0.18)', + '3xl': '0px 32px 64px -12px rgba(16, 24, 40, 0.14)', + 'status-indicator-green-shadow': '0px 2px 6px 0px var(--color-components-badge-status-light-success-halo), 0px 0px 0px 1px var(--color-components-badge-status-light-border-outer)', + 'status-indicator-warning-shadow': '0px 2px 6px 0px var(--color-components-badge-status-light-warning-halo), 0px 0px 0px 1px var(--color-components-badge-status-light-border-outer)', + 'status-indicator-red-shadow': '0px 2px 6px 0px var(--color-components-badge-status-light-error-halo), 0px 0px 0px 1px var(--color-components-badge-status-light-border-outer)', + 'status-indicator-blue-shadow': '0px 2px 6px 0px var(--color-components-badge-status-light-normal-halo), 0px 0px 0px 1px var(--color-components-badge-status-light-border-outer)', + 'status-indicator-gray-shadow': '0px 1px 2px 0px var(--color-components-badge-status-light-disabled-halo), 0px 0px 
0px 1px var(--color-components-badge-status-light-border-outer)', + }, + opacity: { + 2: '0.02', + 8: '0.08', + }, + fontSize: { + '2xs': '0.625rem', + }, + }, + }, + plugins: [], +} + +export default difyUIPreset diff --git a/web/themes/dark.css b/packages/dify-ui/src/themes/dark.css similarity index 100% rename from web/themes/dark.css rename to packages/dify-ui/src/themes/dark.css diff --git a/web/themes/light.css b/packages/dify-ui/src/themes/light.css similarity index 100% rename from web/themes/light.css rename to packages/dify-ui/src/themes/light.css diff --git a/web/themes/tailwind-theme-var-define.ts b/packages/dify-ui/src/themes/tailwind-theme-var-define.ts similarity index 100% rename from web/themes/tailwind-theme-var-define.ts rename to packages/dify-ui/src/themes/tailwind-theme-var-define.ts diff --git a/packages/dify-ui/tsconfig.json b/packages/dify-ui/tsconfig.json new file mode 100644 index 0000000000..b31c48ead6 --- /dev/null +++ b/packages/dify-ui/tsconfig.json @@ -0,0 +1,14 @@ +{ + "extends": "@dify/tsconfig/base.json", + "compilerOptions": { + "rootDir": "src", + "declaration": true, + "declarationMap": true, + "noEmit": false, + "outDir": "dist", + "sourceMap": true, + "isolatedModules": true, + "verbatimModuleSyntax": true + }, + "include": ["src"] +} diff --git a/packages/iconify-collections/assets/public/common/enter-key.svg b/packages/iconify-collections/assets/public/common/enter-key.svg new file mode 100644 index 0000000000..edfddfc188 --- /dev/null +++ b/packages/iconify-collections/assets/public/common/enter-key.svg @@ -0,0 +1,4 @@ + + + + diff --git a/packages/iconify-collections/assets/public/other/comment.svg b/packages/iconify-collections/assets/public/other/comment.svg new file mode 100644 index 0000000000..0f0609f0b6 --- /dev/null +++ b/packages/iconify-collections/assets/public/other/comment.svg @@ -0,0 +1,3 @@ + + + diff --git a/packages/iconify-collections/custom-public/index.d.ts b/packages/iconify-collections/custom-public/index.d.ts index ecca5633d4..be2442726c 100644 --- a/packages/iconify-collections/custom-public/index.d.ts +++ b/packages/iconify-collections/custom-public/index.d.ts @@ -1,4 +1,4 @@ -export interface IconifyJSON { +export type IconifyJSON = { prefix: string icons: Record aliases?: Record @@ -7,7 +7,7 @@ export interface IconifyJSON { lastModified?: number } -export interface IconifyIcon { +export type IconifyIcon = { body: string left?: number top?: number @@ -18,11 +18,11 @@ export interface IconifyIcon { vFlip?: boolean } -export interface IconifyAlias extends Omit { +export type IconifyAlias = { parent: string -} +} & Omit -export interface IconifyInfo { +export type IconifyInfo = { prefix: string name: string total: number @@ -40,11 +40,11 @@ export interface IconifyInfo { palette?: boolean } -export interface IconifyMetaData { +export type IconifyMetaData = { [key: string]: unknown } -export interface IconifyChars { +export type IconifyChars = { [key: string]: string } @@ -52,4 +52,3 @@ export declare const icons: IconifyJSON export declare const info: IconifyInfo export declare const metadata: IconifyMetaData export declare const chars: IconifyChars - diff --git a/packages/iconify-collections/custom-public/index.js b/packages/iconify-collections/custom-public/index.js index 81c1d0f5c4..aa7f8d5058 100644 --- a/packages/iconify-collections/custom-public/index.js +++ b/packages/iconify-collections/custom-public/index.js @@ -1,9 +1,8 @@ 'use strict' +const chars = require('./chars.json') const icons = require('./icons.json') 
const info = require('./info.json') const metadata = require('./metadata.json') -const chars = require('./chars.json') module.exports = { icons, info, metadata, chars } - diff --git a/packages/iconify-collections/custom-public/index.mjs b/packages/iconify-collections/custom-public/index.mjs index 6c1108a92d..8e1c022130 100644 --- a/packages/iconify-collections/custom-public/index.mjs +++ b/packages/iconify-collections/custom-public/index.mjs @@ -1,7 +1,6 @@ +import chars from './chars.json' with { type: 'json' } import icons from './icons.json' with { type: 'json' } import info from './info.json' with { type: 'json' } import metadata from './metadata.json' with { type: 'json' } -import chars from './chars.json' with { type: 'json' } - -export { icons, info, metadata, chars } +export { chars, icons, info, metadata } diff --git a/packages/iconify-collections/custom-vender/index.d.ts b/packages/iconify-collections/custom-vender/index.d.ts index ecca5633d4..be2442726c 100644 --- a/packages/iconify-collections/custom-vender/index.d.ts +++ b/packages/iconify-collections/custom-vender/index.d.ts @@ -1,4 +1,4 @@ -export interface IconifyJSON { +export type IconifyJSON = { prefix: string icons: Record aliases?: Record @@ -7,7 +7,7 @@ export interface IconifyJSON { lastModified?: number } -export interface IconifyIcon { +export type IconifyIcon = { body: string left?: number top?: number @@ -18,11 +18,11 @@ export interface IconifyIcon { vFlip?: boolean } -export interface IconifyAlias extends Omit { +export type IconifyAlias = { parent: string -} +} & Omit -export interface IconifyInfo { +export type IconifyInfo = { prefix: string name: string total: number @@ -40,11 +40,11 @@ export interface IconifyInfo { palette?: boolean } -export interface IconifyMetaData { +export type IconifyMetaData = { [key: string]: unknown } -export interface IconifyChars { +export type IconifyChars = { [key: string]: string } @@ -52,4 +52,3 @@ export declare const icons: IconifyJSON export declare const info: IconifyInfo export declare const metadata: IconifyMetaData export declare const chars: IconifyChars - diff --git a/packages/iconify-collections/custom-vender/index.js b/packages/iconify-collections/custom-vender/index.js index 81c1d0f5c4..aa7f8d5058 100644 --- a/packages/iconify-collections/custom-vender/index.js +++ b/packages/iconify-collections/custom-vender/index.js @@ -1,9 +1,8 @@ 'use strict' +const chars = require('./chars.json') const icons = require('./icons.json') const info = require('./info.json') const metadata = require('./metadata.json') -const chars = require('./chars.json') module.exports = { icons, info, metadata, chars } - diff --git a/packages/iconify-collections/custom-vender/index.mjs b/packages/iconify-collections/custom-vender/index.mjs index 6c1108a92d..8e1c022130 100644 --- a/packages/iconify-collections/custom-vender/index.mjs +++ b/packages/iconify-collections/custom-vender/index.mjs @@ -1,7 +1,6 @@ +import chars from './chars.json' with { type: 'json' } import icons from './icons.json' with { type: 'json' } import info from './info.json' with { type: 'json' } import metadata from './metadata.json' with { type: 'json' } -import chars from './chars.json' with { type: 'json' } - -export { icons, info, metadata, chars } +export { chars, icons, info, metadata } diff --git a/packages/iconify-collections/package.json b/packages/iconify-collections/package.json index 3bd7285f1a..07c29f0a07 100644 --- a/packages/iconify-collections/package.json +++ b/packages/iconify-collections/package.json @@ 
-1,12 +1,12 @@ { "name": "@dify/iconify-collections", - "private": true, "version": "0.0.0-private", + "private": true, "exports": { "./custom-public": { "types": "./custom-public/index.d.ts", - "require": "./custom-public/index.js", - "import": "./custom-public/index.mjs" + "import": "./custom-public/index.mjs", + "require": "./custom-public/index.js" }, "./custom-public/icons.json": "./custom-public/icons.json", "./custom-public/info.json": "./custom-public/info.json", @@ -14,8 +14,8 @@ "./custom-public/chars.json": "./custom-public/chars.json", "./custom-vender": { "types": "./custom-vender/index.d.ts", - "require": "./custom-vender/index.js", - "import": "./custom-vender/index.mjs" + "import": "./custom-vender/index.mjs", + "require": "./custom-vender/index.js" }, "./custom-vender/icons.json": "./custom-vender/icons.json", "./custom-vender/info.json": "./custom-vender/info.json", diff --git a/packages/migrate-no-unchecked-indexed-access/bin/migrate-no-unchecked-indexed-access.js b/packages/migrate-no-unchecked-indexed-access/bin/migrate-no-unchecked-indexed-access.js new file mode 100755 index 0000000000..2f2b0d72a9 --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/bin/migrate-no-unchecked-indexed-access.js @@ -0,0 +1,28 @@ +#!/usr/bin/env node + +import { spawnSync } from 'node:child_process' +import fs from 'node:fs' +import path from 'node:path' +import process from 'node:process' +import { fileURLToPath } from 'node:url' + +const packageRoot = path.resolve(path.dirname(fileURLToPath(import.meta.url)), '..') +const entryFile = path.join(packageRoot, 'dist', 'cli.mjs') + +if (!fs.existsSync(entryFile)) + throw new Error(`Built CLI entry not found at ${entryFile}. Run "pnpm --filter migrate-no-unchecked-indexed-access build" first.`) + +const result = spawnSync( + process.execPath, + [entryFile, ...process.argv.slice(2)], + { + cwd: process.cwd(), + env: process.env, + stdio: 'inherit', + }, +) + +if (result.error) + throw result.error + +process.exit(result.status ?? 
1) diff --git a/packages/migrate-no-unchecked-indexed-access/package.json b/packages/migrate-no-unchecked-indexed-access/package.json new file mode 100644 index 0000000000..5da8d4cb50 --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/package.json @@ -0,0 +1,22 @@ +{ + "name": "migrate-no-unchecked-indexed-access", + "type": "module", + "version": "0.0.0-private", + "private": true, + "bin": { + "migrate-no-unchecked-indexed-access": "./bin/migrate-no-unchecked-indexed-access.js" + }, + "scripts": { + "build": "vp pack", + "type-check": "tsc" + }, + "dependencies": { + "typescript": "catalog:" + }, + "devDependencies": { + "@dify/tsconfig": "workspace:*", + "@types/node": "catalog:", + "vite": "catalog:", + "vite-plus": "catalog:" + } +} diff --git a/packages/migrate-no-unchecked-indexed-access/src/cli.ts b/packages/migrate-no-unchecked-indexed-access/src/cli.ts new file mode 100644 index 0000000000..99142c388f --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/src/cli.ts @@ -0,0 +1,46 @@ +import process from 'node:process' +import { runBatchMigrationCommand } from './no-unchecked-indexed-access/run' + +function printUsage() { + console.log(`Usage: + migrate-no-unchecked-indexed-access [options] + +Options: + --project + --batch-size + --batch-iterations + --max-rounds + --verbose`) +} + +async function flushStandardStreams() { + await Promise.all([ + new Promise(resolve => process.stdout.write('', () => resolve())), + new Promise(resolve => process.stderr.write('', () => resolve())), + ]) +} + +async function main() { + const argv = process.argv.slice(2) + if (argv.includes('help') || argv.includes('--help') || argv.includes('-h')) { + printUsage() + return + } + + await runBatchMigrationCommand(argv) +} + +let exitCode = 0 + +try { + await main() + const currentExitCode = process.exitCode + exitCode = typeof currentExitCode === 'number' ? currentExitCode : 0 +} +catch (error) { + console.error(error instanceof Error ? 
error.message : error) + exitCode = 1 +} + +await flushStandardStreams() +process.exit(exitCode) diff --git a/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/migrate.ts b/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/migrate.ts new file mode 100644 index 0000000000..6b79e214bb --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/migrate.ts @@ -0,0 +1,1835 @@ +import fs from 'node:fs/promises' +import path from 'node:path' +import process from 'node:process' +import ts from 'typescript' + +const SUPPORTED_EXTENSIONS = new Set(['.ts', '.tsx', '.mts', '.cts']) +export const SUPPORTED_DIAGNOSTIC_CODES = new Set([2322, 2339, 2345, 2488, 2532, 2538, 2604, 2722, 2769, 2786, 7006, 18047, 18048]) +const DEFAULT_MAX_ITERATIONS = 10 +const ACCESS_DIAGNOSTIC_CODES = new Set([2339, 2532, 18047, 18048]) +const ASSIGNABILITY_DIAGNOSTIC_CODES = new Set([2322, 2345, 2769]) +const parsedConfigCache = new Map() + +type CliOptions = { + files: string[] + maxIterations: number + project: string + useFullProjectRoots?: boolean + verbose: boolean + write: boolean +} + +type TextEdit = { + end: number + expectedText?: string + replacement: string + start: number +} + +type EditTarget + = { expression: ts.Expression, kind: 'expression', sourceFile: ts.SourceFile } + | { end: number, kind: 'direct-edit', replacement: string, sourceFile: ts.SourceFile, start: number } + | { kind: 'shorthand-property', property: ts.ShorthandPropertyAssignment, sourceFile: ts.SourceFile } + +export function parseArgs(argv: string[]): CliOptions { + const options: CliOptions = { + files: [], + maxIterations: DEFAULT_MAX_ITERATIONS, + project: 'tsconfig.json', + verbose: false, + write: false, + } + + for (let i = 0; i < argv.length; i += 1) { + const arg = argv[i] + if (!arg) + continue + + if (arg === '--') + continue + + if (arg === '--write') { + options.write = true + continue + } + + if (arg === '--verbose') { + options.verbose = true + continue + } + + if (arg === '--project') { + const value = argv[i + 1] + if (!value) + throw new Error('Missing value for --project') + + options.project = value + i += 1 + continue + } + + if (arg === '--max-iterations') { + const value = argv[i + 1] + if (!value) + throw new Error('Missing value for --max-iterations') + + const parsed = Number(value) + if (!Number.isInteger(parsed) || parsed <= 0) + throw new Error(`Invalid --max-iterations value: ${value}`) + + options.maxIterations = parsed + i += 1 + continue + } + + if (arg === '--files') { + const value = argv[i + 1] + if (!value) + throw new Error('Missing value for --files') + + options.files.push(...splitFilesArgument(value)) + i += 1 + continue + } + + if (arg.startsWith('--')) + throw new Error(`Unknown option: ${arg}`) + + options.files.push(...splitFilesArgument(arg)) + } + + return options +} + +function splitFilesArgument(value: string): string[] { + return value + .split(',') + .map(item => item.trim()) + .filter(Boolean) +} + +function parseTsConfig(projectPath: string): ts.ParsedCommandLine { + const cached = parsedConfigCache.get(projectPath) + if (cached) + return cached + + const configFile = ts.readConfigFile(projectPath, ts.sys.readFile) + if (configFile.error) + throw new Error(formatDiagnostic(configFile.error)) + + const configDirectory = path.dirname(projectPath) + const parsedConfig = ts.parseJsonConfigFileContent( + configFile.config, + ts.sys, + configDirectory, + undefined, + projectPath, + ) + 
parsedConfigCache.set(projectPath, parsedConfig) + return parsedConfig +} + +function createMigrationProgram( + rootNames: string[], + parsedConfig: ts.ParsedCommandLine, + fileTexts: Map, + oldProgram?: ts.Program, +): ts.Program { + const compilerHost = ts.createCompilerHost(parsedConfig.options, true) + const originalGetSourceFile = compilerHost.getSourceFile.bind(compilerHost) + + compilerHost.readFile = (fileName) => { + return fileTexts.get(fileName) ?? ts.sys.readFile(fileName) + } + + compilerHost.getSourceFile = (fileName, languageVersion, onError, shouldCreateNewSourceFile) => { + const text = fileTexts.get(fileName) + if (text !== undefined) + return ts.createSourceFile(fileName, text, languageVersion, true) + + return originalGetSourceFile(fileName, languageVersion, onError, shouldCreateNewSourceFile) + } + + return ts.createProgram({ + oldProgram, + host: compilerHost, + options: parsedConfig.options, + projectReferences: parsedConfig.projectReferences, + rootNames, + }) +} + +function isTargetFile(fileName: string): boolean { + const extension = path.extname(fileName) + if (!SUPPORTED_EXTENSIONS.has(extension)) + return false + + if (fileName.endsWith('.d.ts')) + return false + + return !fileName.includes(`${path.sep}.next${path.sep}`) +} + +function normalizeFileName(fileName: string): string { + return path.resolve(fileName) +} + +function isDeclarationSupportFile(fileName: string): boolean { + return fileName.endsWith('.d.ts') +} + +function isSetupSupportFile(fileName: string): boolean { + const baseName = path.basename(fileName) + return baseName === 'vitest.setup.ts' + || baseName === 'vitest.setup.tsx' + || baseName === 'jest.setup.ts' + || baseName === 'jest.setup.tsx' + || baseName === 'setupTests.ts' + || baseName === 'setupTests.tsx' + || baseName === 'test.setup.ts' + || baseName === 'test.setup.tsx' +} + +function getMigrationRootNames( + parsedConfig: ts.ParsedCommandLine, + targetFiles: string[], +): string[] { + const rootNames = new Set(targetFiles) + + for (const fileName of parsedConfig.fileNames.map(normalizeFileName)) { + if (isDeclarationSupportFile(fileName) || isSetupSupportFile(fileName)) + rootNames.add(fileName) + } + + return Array.from(rootNames) +} + +function createFileMatcher(filePatterns: string[]): (fileName: string) => boolean { + if (filePatterns.length === 0) + return () => true + + const patterns = filePatterns.map(pattern => ({ + absolute: normalizeFileName(pattern), + raw: pattern.split(path.sep).join('/'), + })) + return (fileName: string) => { + const normalized = normalizeFileName(fileName) + const unixStyle = normalized.split(path.sep).join('/') + return patterns.some(pattern => normalized === pattern.absolute || unixStyle.endsWith(pattern.raw)) + } +} + +function formatDiagnostic(diagnostic: ts.Diagnostic): string { + const message = ts.flattenDiagnosticMessageText(diagnostic.messageText, '\n') + if (!diagnostic.file || diagnostic.start === undefined) + return message + + const position = diagnostic.file.getLineAndCharacterOfPosition(diagnostic.start) + return `${diagnostic.file.fileName}:${position.line + 1}:${position.character + 1} TS${diagnostic.code}: ${message}` +} + +function ensureTrailingNonNullAssertion(expression: string): string { + const trimmedExpression = expression.trimEnd() + return trimmedExpression.endsWith('!') + ? 
trimmedExpression + : `${trimmedExpression}!` +} + +function hasOptionalChainDescendant(node: ts.Node): boolean { + let found = false + + const visit = (current: ts.Node) => { + if (found) + return + + if (ts.isOptionalChain(current)) { + found = true + return + } + + current.forEachChild(visit) + } + + visit(node) + return found +} + +function shouldPrintInlineNonNullAssertion(expression: ts.Expression): boolean { + return ts.isOptionalChain(expression) + || (ts.isParenthesizedExpression(expression) && hasOptionalChainDescendant(expression.expression)) +} + +function normalizeOptionalChainNonNullContinuations(text: string): string { + const sourceFile = ts.createSourceFile('normalize.tsx', text, ts.ScriptTarget.Latest, true, ts.ScriptKind.TSX) + const edits: TextEdit[] = [] + + const visit = (node: ts.Node) => { + if ( + ts.isNonNullExpression(node) + && ts.isParenthesizedExpression(node.expression) + && hasOptionalChainDescendant(node.expression.expression) + ) { + edits.push({ + end: node.getEnd(), + replacement: `${node.expression.expression.getText(sourceFile)}!`, + start: node.getStart(sourceFile), + }) + return + } + + node.forEachChild(visit) + } + + visit(sourceFile) + + if (edits.length === 0) + return text + + return applyEdits(text, edits).text +} + +function collapseRepeatedInlineComments(text: string): string { + return text + .split('\n') + .map((line) => { + const commentIndex = line.indexOf('//') + if (commentIndex < 0) + return line + + const prefix = line.slice(0, commentIndex).trimEnd() + const comment = line.slice(commentIndex + 2).trim() + const segments = comment + .split(/\s+\/\/\s+/) + .map(item => item.trim()) + .filter(Boolean) + + if (segments.length < 2) + return line + + const lastSegment = segments[segments.length - 1]! + const stableSegments = segments.slice(0, -1) + const repeatedSameComment = stableSegments.length > 0 + && stableSegments.every(segment => segment === segments[0]) + && (lastSegment === segments[0] || segments[0]!.startsWith(lastSegment) || lastSegment.startsWith(segments[0]!)) + + if (!repeatedSameComment) + return line.replace(/!{2,}$/g, '!') + + const normalizedComment = segments[0]!.replace(/!{2,}$/g, '!') + return prefix ? `${prefix} // ${normalizedComment}` : `// ${normalizedComment}` + }) + .join('\n') +} + +export function normalizeMalformedAssertions(text: string): string { + const normalizedText = text + .replace(/\n(\s*)! (\s*\/\/[^\n]*)\n/g, '! 
$2\n') + .replace(/\.not!+(?=[.(])/g, '.not') + .replace(/(\(|,\s*)([A-Za-z_$][\w$]*)\s*:\s*any\s*=>/g, '$1($2: any) =>') + .replace(/([,{]\s*)([A-Z_$][\w$]*)!=\{/g, '$1$2={') + .replace(/\b([A-Z_$][\w$]*)!!,/gi, '$1: $1!,') + .replace(/\b([A-Z_$][\w$]*)!!:/gi, '$1:') + .replace(/([,{]\s*)([A-Z_$][\w$]*)!:/gi, '$1$2:') + .replace(/\b(const|let|var)\s+\{([^=\n]+)\}\s*=\s*([^\n;]+)/g, (fullMatch, keyword: string, bindings: string, expression: string) => { + if (!bindings.includes('!')) + return fullMatch + + const normalizedBindings = bindings.replace(/!([,\s}:])/g, '$1') + return `${keyword} {${normalizedBindings}} = ${ensureTrailingNonNullAssertion(expression)}` + }) + + return collapseRepeatedInlineComments(normalizeOptionalChainNonNullContinuations(normalizedText)) +} + +function isExpressionTarget(target: EditTarget): target is Extract { + return target.kind === 'expression' +} + +function createExpressionTarget(expression: ts.Expression): EditTarget { + return { + expression, + kind: 'expression', + sourceFile: expression.getSourceFile(), + } +} + +function createShorthandPropertyTarget(property: ts.ShorthandPropertyAssignment): EditTarget { + return { + kind: 'shorthand-property', + property, + sourceFile: property.getSourceFile(), + } +} + +function createDirectEditTarget( + sourceFile: ts.SourceFile, + start: number, + end: number, + replacement: string, +): EditTarget { + return { + end, + kind: 'direct-edit', + replacement, + sourceFile, + start, + } +} + +function createIterableFallbackReplacement( + expression: ts.Expression, + sourceFile: ts.SourceFile, +): string { + return `(${expression.getText(sourceFile)} ?? [])` +} + +function createIterableFallbackTarget(expression: ts.Expression): EditTarget { + return createDirectEditTarget( + expression.getSourceFile(), + expression.getStart(expression.getSourceFile()), + expression.getEnd(), + createIterableFallbackReplacement(expression, expression.getSourceFile()), + ) +} + +function createArrayLiteralIterableFallbackTarget( + arrayLiteral: ts.ArrayLiteralExpression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const sourceFile = arrayLiteral.getSourceFile() + const start = arrayLiteral.getStart(sourceFile) + const end = arrayLiteral.getEnd() + const originalText = sourceFile.text.slice(start, end) + const edits: TextEdit[] = [] + + for (const element of arrayLiteral.elements) { + if (!ts.isSpreadElement(element)) + continue + + if (isAlreadyNonNull(element.expression)) + continue + + if (!typeIncludesUndefined(checker.getTypeAtLocation(element.expression))) + continue + + edits.push({ + end: element.expression.getEnd() - start, + replacement: createIterableFallbackReplacement(element.expression, sourceFile), + start: element.expression.getStart(sourceFile) - start, + }) + } + + if (edits.length === 0) + return undefined + + return createDirectEditTarget( + sourceFile, + start, + end, + applyEdits(originalText, edits).text, + ) +} + +function getTokenAtPosition(sourceFile: ts.SourceFile, position: number): ts.Node { + let current: ts.Node = sourceFile + + while (true) { + let next: ts.Node | undefined + current.forEachChild((child) => { + if (!next && position >= child.getFullStart() && position < child.getEnd()) + next = child + }) + + if (!next) + return current + + current = next + } +} + +function findAncestor( + node: ts.Node | undefined, + predicate: (candidate: ts.Node) => candidate is NodeType, +): NodeType | undefined { + let current = node + + while (current) { + if (predicate(current)) + return current + + 
current = current.parent + } + + return undefined +} + +function findTightestExpression(sourceFile: ts.SourceFile, start: number, end: number): ts.Expression | undefined { + let node: ts.Node | undefined = getTokenAtPosition(sourceFile, start) + + while (node) { + if (ts.isExpression(node)) { + const nodeStart = node.getStart(sourceFile) + const nodeEnd = node.getEnd() + if (nodeStart <= start && end <= nodeEnd) + return node + } + + node = node.parent + } + + return undefined +} + +function isAssignmentOperator(token: ts.SyntaxKind): boolean { + return token >= ts.SyntaxKind.FirstAssignment && token <= ts.SyntaxKind.LastAssignment +} + +function typeIncludesUndefined(type: ts.Type): boolean { + if ((type.flags & ts.TypeFlags.Undefined) !== 0) + return true + + if (!type.isUnion()) + return false + + return type.types.some(typeIncludesUndefined) +} + +function skipOuterExpressions(expression: ts.Expression): ts.Expression { + let current = expression + + while (ts.isParenthesizedExpression(current) || ts.isNonNullExpression(current)) + current = current.expression + + return current +} + +function isAlreadyNonNull(expression: ts.Expression): boolean { + let current = expression + + while (ts.isParenthesizedExpression(current)) + current = current.expression + + return ts.isNonNullExpression(current) +} + +function findAssignmentLikeCandidate( + token: ts.Node, + sourceFile: ts.SourceFile, + start: number, + end: number, +): ts.Expression | undefined { + let current: ts.Node | undefined = token + + while (current) { + if (ts.isVariableDeclaration(current) && current.initializer) + return current.initializer + + if (ts.isPropertyDeclaration(current) && current.initializer) + return current.initializer + + if (ts.isPropertyAssignment(current)) + return current.initializer + + if (ts.isShorthandPropertyAssignment(current)) + return current.name + + if (ts.isParameter(current) && current.initializer) + return current.initializer + + if (ts.isReturnStatement(current) && current.expression) + return current.expression + + if (ts.isBinaryExpression(current) && isAssignmentOperator(current.operatorToken.kind)) + return current.right + + if (ts.isJsxAttribute(current) && current.initializer && ts.isJsxExpression(current.initializer) && current.initializer.expression) + return current.initializer.expression + + if (ts.isJsxSpreadAttribute(current)) + return current.expression + + current = current.parent + } + + return findTightestExpression(sourceFile, start, end) +} + +function findArgumentCandidate( + token: ts.Node, + sourceFile: ts.SourceFile, + start: number, + end: number, +): ts.Expression | undefined { + let current: ts.Node | undefined = token + + while (current) { + if ((ts.isCallExpression(current) || ts.isNewExpression(current)) && current.arguments) { + const argument = current.arguments.find((item) => { + const itemStart = item.getStart(sourceFile) + const itemEnd = item.getEnd() + return itemStart <= start && end <= itemEnd + }) + if (argument) + return argument + } + + current = current.parent + } + + return findTightestExpression(sourceFile, start, end) +} + +function getExpressionFromJsxAttribute(attribute: ts.JsxAttribute): ts.Expression | undefined { + return attribute.initializer && ts.isJsxExpression(attribute.initializer) + ? 
attribute.initializer.expression + : undefined +} + +function findTargetFromExpression( + expression: ts.Expression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const referencedDeclarationTarget = findReferencedDeclarationInitializerTarget(expression, checker) + if (referencedDeclarationTarget) + return referencedDeclarationTarget + + const nestedTarget = findNestedContainerTarget(expression, checker) + if (nestedTarget) + return nestedTarget + + const innerExpression = skipOuterExpressions(expression) + if (ts.isConditionalExpression(innerExpression)) { + return findTargetFromExpression(innerExpression.whenTrue, checker) + ?? findTargetFromExpression(innerExpression.whenFalse, checker) + } + + if ( + ts.isBinaryExpression(innerExpression) + && ( + innerExpression.operatorToken.kind === ts.SyntaxKind.BarBarToken + || innerExpression.operatorToken.kind === ts.SyntaxKind.QuestionQuestionToken + || innerExpression.operatorToken.kind === ts.SyntaxKind.AmpersandAmpersandToken + ) + ) { + return findTargetFromExpression(innerExpression.left, checker) + ?? findTargetFromExpression(innerExpression.right, checker) + } + + if (ts.isArrowFunction(innerExpression) || ts.isFunctionExpression(innerExpression)) { + const functionTarget = findFunctionLikeReturnTarget(innerExpression, checker) + if (functionTarget) + return functionTarget + } + + if (ts.isPropertyAccessExpression(innerExpression)) { + const namedPropertyTarget = findNamedPropertyTarget(innerExpression.expression, innerExpression.name.text, checker) + if (namedPropertyTarget) + return namedPropertyTarget + } + + if (ts.isCallExpression(innerExpression)) { + const collectionCallbackTarget = findCollectionCallbackTarget(innerExpression, checker) + if (collectionCallbackTarget) + return collectionCallbackTarget + + const callbackArgumentTarget = findCallbackArgumentTarget(innerExpression, checker) + if (callbackArgumentTarget) + return callbackArgumentTarget + + const callExpressionTarget = findCallExpressionDeclarationTarget(innerExpression, checker) + if (callExpressionTarget) + return callExpressionTarget + } + + if (!typeIncludesUndefined(checker.getTypeAtLocation(expression))) + return undefined + + return createExpressionTarget(expression) +} + +function findJsxSpreadAttributeTarget( + token: ts.Node, + checker: ts.TypeChecker, +): EditTarget | undefined { + const spreadAttribute = findAncestor(token, ts.isJsxSpreadAttribute) + if (spreadAttribute) + return findTargetFromExpression(spreadAttribute.expression, checker) + + const openingLikeElement = findAncestor(token, node => + ts.isJsxOpeningElement(node) || ts.isJsxSelfClosingElement(node)) + + if (!openingLikeElement) + return undefined + + for (const attribute of openingLikeElement.attributes.properties) { + if (!ts.isJsxSpreadAttribute(attribute)) + continue + + const target = findTargetFromExpression(attribute.expression, checker) + if (target) + return target + } + + return undefined +} + +function findShorthandPropertyTarget( + token: ts.Node, + checker: ts.TypeChecker, +): EditTarget | undefined { + const property = findAncestor(token, ts.isShorthandPropertyAssignment) + if (!property) + return undefined + + return typeIncludesUndefined(checker.getTypeAtLocation(property.name)) + ? 
createShorthandPropertyTarget(property) + : undefined +} + +function findPropertyAssignmentInitializerTarget( + token: ts.Node, + start: number, + checker: ts.TypeChecker, +): EditTarget | undefined { + const propertyAssignment = findAncestor(token, ts.isPropertyAssignment) + if (!propertyAssignment) + return undefined + + const propertyNameStart = propertyAssignment.name.getStart() + const propertyNameEnd = propertyAssignment.name.getEnd() + if (start < propertyNameStart || start >= propertyNameEnd) + return undefined + + const directTarget = findTargetFromExpression(propertyAssignment.initializer, checker) + if (directTarget) + return directTarget + + const nestedTarget = findNestedContainerTarget(propertyAssignment.initializer, checker) + if (nestedTarget) + return nestedTarget + + if (!typeIncludesUndefined(checker.getTypeAtLocation(propertyAssignment.initializer))) + return undefined + + return createExpressionTarget(propertyAssignment.initializer) +} + +function findPropertyAccessExpressionTarget( + token: ts.Node, + start: number, +): EditTarget | undefined { + const propertyAccess = findAncestor(token, ts.isPropertyAccessExpression) + if (!propertyAccess) + return undefined + + if (start >= propertyAccess.name.getStart() && start < propertyAccess.name.getEnd()) + return createExpressionTarget(propertyAccess.expression) + + return undefined +} + +function findUndefinedAccessTarget( + token: ts.Node, + checker: ts.TypeChecker, +): EditTarget | undefined { + let current: ts.Node | undefined = token + let bestTarget: EditTarget | undefined + + while (current) { + if (ts.isPropertyAccessExpression(current)) { + const expression = current.expression + if (typeIncludesUndefined(checker.getTypeAtLocation(expression)) && !isAlreadyNonNull(expression)) + bestTarget = createExpressionTarget(expression) + } + + if (ts.isElementAccessExpression(current)) { + const expression = current.expression + if (typeIncludesUndefined(checker.getTypeAtLocation(expression)) && !isAlreadyNonNull(expression)) + bestTarget = createExpressionTarget(expression) + } + + current = current.parent + } + + return bestTarget +} + +function findElementAccessArgumentTarget(token: ts.Node): EditTarget | undefined { + let current = token + let matchingElementAccess: ts.ElementAccessExpression | undefined + + while (current) { + if (ts.isElementAccessExpression(current) && current.argumentExpression) + matchingElementAccess = current + + current = current.parent + } + + if (!matchingElementAccess?.argumentExpression) + return undefined + + return createExpressionTarget(matchingElementAccess.argumentExpression) +} + +function findIterableTarget( + sourceFile: ts.SourceFile, + token: ts.Node, + start: number, + end: number, + checker: ts.TypeChecker, +): EditTarget | undefined { + const arrayLiteral = findAncestor(token, ts.isArrayLiteralExpression) + if (arrayLiteral) { + const arrayLiteralTarget = createArrayLiteralIterableFallbackTarget(arrayLiteral, checker) + if (arrayLiteralTarget) + return arrayLiteralTarget + } + + const spreadElement = findAncestor(token, ts.isSpreadElement) + if (spreadElement && !isAlreadyNonNull(spreadElement.expression)) + return createIterableFallbackTarget(spreadElement.expression) + + const variableDeclaration = findAncestor(token, ts.isVariableDeclaration) + if ( + variableDeclaration?.initializer + && typeIncludesUndefined(checker.getTypeAtLocation(variableDeclaration.initializer)) + && !isAlreadyNonNull(variableDeclaration.initializer) + ) { + return 
createExpressionTarget(variableDeclaration.initializer) + } + + const binaryExpression = findAncestor(token, ts.isBinaryExpression) + if ( + binaryExpression + && isAssignmentOperator(binaryExpression.operatorToken.kind) + && typeIncludesUndefined(checker.getTypeAtLocation(binaryExpression.right)) + && !isAlreadyNonNull(binaryExpression.right) + ) { + return createExpressionTarget(binaryExpression.right) + } + + return undefined +} + +function findImplicitAnyParameterTarget(token: ts.Node): EditTarget | undefined { + const parameter = findAncestor(token, ts.isParameter) + if (!parameter || parameter.type || !ts.isIdentifier(parameter.name)) + return undefined + + const sourceFile = parameter.getSourceFile() + const replacement = ts.isArrowFunction(parameter.parent) && parameter.parent.parameters.length === 1 + ? `(${parameter.name.getText(sourceFile)}: any)` + : `${parameter.name.getText(sourceFile)}: any` + + return createDirectEditTarget( + sourceFile, + parameter.getStart(sourceFile), + parameter.getEnd(), + replacement, + ) +} + +function getArrayPatternElementTypeText( + element: ts.ArrayBindingElement | ts.Expression, + checker: ts.TypeChecker, +): string { + if (ts.isOmittedExpression(element)) + return 'unknown' + + const targetNode = ts.isBindingElement(element) + ? element.name + : element + + const targetType = checker.getNonNullableType(checker.getTypeAtLocation(targetNode)) + const typeText = checker.typeToString(targetType) + return typeText === 'never' ? 'unknown' : typeText +} + +function createArrayDestructuringReplacement( + sourceFile: ts.SourceFile, + expression: ts.Expression, + elements: readonly (ts.ArrayBindingElement | ts.Expression)[], + checker: ts.TypeChecker, + options?: { + fallbackToEmptyArray?: boolean + }, +): string | undefined { + if (elements.length === 0) + return undefined + + const tupleTypes = elements.map(element => getArrayPatternElementTypeText(element, checker)) + const expressionText = options?.fallbackToEmptyArray + ? `(${expression.getText(sourceFile)} ?? 
[])` + : `(${expression.getText(sourceFile)})` + return `${expressionText} as [${tupleTypes.join(', ')}]` +} + +function findArrayDestructuringTarget( + token: ts.Node, + checker: ts.TypeChecker, +): EditTarget | undefined { + const binaryExpression = findAncestor(token, ts.isBinaryExpression) + if (binaryExpression && isAssignmentOperator(binaryExpression.operatorToken.kind) && ts.isArrayLiteralExpression(binaryExpression.left)) { + const replacement = createArrayDestructuringReplacement( + binaryExpression.getSourceFile(), + binaryExpression.right, + binaryExpression.left.elements, + checker, + { + fallbackToEmptyArray: typeIncludesUndefined(checker.getTypeAtLocation(binaryExpression.right)), + }, + ) + if (replacement) { + return createDirectEditTarget( + binaryExpression.getSourceFile(), + binaryExpression.right.getStart(binaryExpression.getSourceFile()), + binaryExpression.right.getEnd(), + replacement, + ) + } + } + + const variableDeclaration = findAncestor(token, ts.isVariableDeclaration) + if (variableDeclaration?.initializer && ts.isArrayBindingPattern(variableDeclaration.name)) { + const replacement = createArrayDestructuringReplacement( + variableDeclaration.getSourceFile(), + variableDeclaration.initializer, + variableDeclaration.name.elements, + checker, + { + fallbackToEmptyArray: typeIncludesUndefined(checker.getTypeAtLocation(variableDeclaration.initializer)), + }, + ) + if (replacement) { + return createDirectEditTarget( + variableDeclaration.getSourceFile(), + variableDeclaration.initializer.getStart(variableDeclaration.getSourceFile()), + variableDeclaration.initializer.getEnd(), + replacement, + ) + } + } + + return undefined +} + +function findVariableDeclarationInitializerTarget( + sourceFile: ts.SourceFile, + token: ts.Node, + checker: ts.TypeChecker, +): EditTarget | undefined { + const variableDeclaration = findAncestor(token, ts.isVariableDeclaration) + if (!variableDeclaration?.initializer) + return undefined + + const nestedTarget = findNestedContainerTarget(variableDeclaration.initializer, checker) + if (nestedTarget) + return nestedTarget + + if (!typeIncludesUndefined(checker.getTypeAtLocation(variableDeclaration.initializer))) + return undefined + + return createExpressionTarget(variableDeclaration.initializer) +} + +function getResolvedValueDeclaration( + symbol: ts.Symbol | undefined, + checker: ts.TypeChecker, +): ts.Declaration | undefined { + if (!symbol) + return undefined + + const resolvedSymbol = symbol.flags & ts.SymbolFlags.Alias + ? checker.getAliasedSymbol(symbol) + : symbol + + return resolvedSymbol.valueDeclaration ?? 
resolvedSymbol.declarations?.[0] +} + +function getFunctionLikeDeclaration( + declaration: ts.Declaration, +): ts.FunctionLikeDeclarationBase | undefined { + if ( + ts.isFunctionDeclaration(declaration) + || ts.isMethodDeclaration(declaration) + || ts.isFunctionExpression(declaration) + || ts.isArrowFunction(declaration) + ) { + return declaration + } + + if ( + ts.isVariableDeclaration(declaration) + && declaration.initializer + && (ts.isArrowFunction(declaration.initializer) || ts.isFunctionExpression(declaration.initializer)) + ) { + return declaration.initializer + } + + return undefined +} + +function getPropertyNameText(name: ts.PropertyName | ts.BindingName): string | undefined { + if (ts.isIdentifier(name) || ts.isStringLiteral(name) || ts.isNumericLiteral(name)) + return name.text + + return undefined +} + +function getCallExpressionPropertyAccess(callExpression: ts.CallExpression): ts.PropertyAccessExpression | undefined { + const callee = skipOuterExpressions(callExpression.expression) + return ts.isPropertyAccessExpression(callee) ? callee : undefined +} + +function getFunctionExpressionArgument(callExpression: ts.CallExpression, index = 0): ts.ArrowFunction | ts.FunctionExpression | undefined { + const callback = callExpression.arguments[index] + return callback && (ts.isArrowFunction(callback) || ts.isFunctionExpression(callback)) + ? callback + : undefined +} + +function findTargetInFunctionBody( + body: ts.ConciseBody, + resolveExpression: (expression: ts.Expression) => EditTarget | undefined, +): EditTarget | undefined { + if (ts.isBlock(body)) { + for (const expression of findReturnStatementExpressions(body)) { + const target = resolveExpression(expression) + if (target) + return target + } + + return undefined + } + + return resolveExpression(body) +} + +function getParameterCollectionExpression( + declaration: ts.ParameterDeclaration, +): ts.Expression | undefined { + const functionLikeDeclaration = declaration.parent + if ( + !(ts.isArrowFunction(functionLikeDeclaration) || ts.isFunctionExpression(functionLikeDeclaration)) + || !ts.isCallExpression(functionLikeDeclaration.parent) + || functionLikeDeclaration.parent.arguments[0] !== functionLikeDeclaration + ) { + return undefined + } + + const callee = getCallExpressionPropertyAccess(functionLikeDeclaration.parent) + return callee?.expression +} + +function findObjectLiteralNamedPropertyTarget( + objectLiteral: ts.ObjectLiteralExpression, + propertyName: string, + checker: ts.TypeChecker, +): EditTarget | undefined { + for (const property of objectLiteral.properties) { + if (ts.isSpreadAssignment(property)) + continue + + if (ts.isShorthandPropertyAssignment(property) && property.name.text === propertyName) + return createShorthandPropertyTarget(property) + + if (ts.isPropertyAssignment(property)) { + const currentPropertyName = getPropertyNameText(property.name) + if (currentPropertyName !== propertyName) + continue + + return findTargetFromExpression(property.initializer, checker) + ?? 
createExpressionTarget(property.initializer) + } + } + + return undefined +} + +function findFunctionLikeNamedReturnTarget( + declaration: ts.FunctionLikeDeclarationBase, + propertyName: string, + checker: ts.TypeChecker, +): EditTarget | undefined { + if (!declaration.body) + return undefined + + return findTargetInFunctionBody( + declaration.body, + expression => findNamedPropertyTarget(expression, propertyName, checker), + ) +} + +function findCollectionPropertyTarget( + expression: ts.Expression, + propertyName: string, + checker: ts.TypeChecker, +): EditTarget | undefined { + const innerExpression = skipOuterExpressions(expression) + + if (ts.isIdentifier(innerExpression)) + return findNamedPropertyTarget(innerExpression, propertyName, checker) + + if (!ts.isCallExpression(innerExpression)) + return undefined + + const callee = getCallExpressionPropertyAccess(innerExpression) + if (!callee) + return undefined + + if (callee.name.text === 'map' || callee.name.text === 'flatMap') { + const callback = getFunctionExpressionArgument(innerExpression) + if (!callback) + return undefined + + return findTargetInFunctionBody( + callback.body, + returnedExpression => findNamedPropertyTarget(returnedExpression, propertyName, checker), + ) + } + + if (callee.name.text === 'filter') + return findCollectionPropertyTarget(callee.expression, propertyName, checker) + + return undefined +} + +function findNamedPropertyTarget( + expression: ts.Expression, + propertyName: string, + checker: ts.TypeChecker, +): EditTarget | undefined { + const innerExpression = skipOuterExpressions(expression) + + if (ts.isObjectLiteralExpression(innerExpression)) + return findObjectLiteralNamedPropertyTarget(innerExpression, propertyName, checker) + + if (ts.isIdentifier(innerExpression)) { + const declaration = getResolvedValueDeclaration(checker.getSymbolAtLocation(innerExpression), checker) + if (!declaration) + return undefined + + if (ts.isParameter(declaration)) { + const collectionExpression = getParameterCollectionExpression(declaration) + if (collectionExpression) + return findCollectionPropertyTarget(collectionExpression, propertyName, checker) + } + + const functionLikeDeclaration = getFunctionLikeDeclaration(declaration) + if (functionLikeDeclaration) + return findFunctionLikeNamedReturnTarget(functionLikeDeclaration, propertyName, checker) + + if (ts.isVariableDeclaration(declaration) && declaration.initializer) + return findNamedPropertyTarget(declaration.initializer, propertyName, checker) + + return undefined + } + + if (ts.isCallExpression(innerExpression)) { + const collectionPropertyTarget = findCollectionPropertyTarget(innerExpression, propertyName, checker) + if (collectionPropertyTarget) + return collectionPropertyTarget + + const declaration = getResolvedValueDeclaration(checker.getSymbolAtLocation(skipOuterExpressions(innerExpression.expression)), checker) + if (!declaration) + return undefined + + const functionLikeDeclaration = getFunctionLikeDeclaration(declaration) + if (!functionLikeDeclaration) + return undefined + + return findFunctionLikeNamedReturnTarget(functionLikeDeclaration, propertyName, checker) + } + + return undefined +} + +function findReturnStatementExpressions(node: ts.Node): ts.Expression[] { + const expressions: ts.Expression[] = [] + + const visit = (current: ts.Node) => { + if ( + current !== node + && ( + ts.isArrowFunction(current) + || ts.isFunctionExpression(current) + || ts.isFunctionDeclaration(current) + || ts.isMethodDeclaration(current) + ) + ) { + return + } + + if 
(ts.isReturnStatement(current) && current.expression) + expressions.push(current.expression) + + current.forEachChild(visit) + } + + visit(node) + return expressions +} + +function findFunctionLikeReturnTarget( + declaration: ts.FunctionLikeDeclarationBase, + checker: ts.TypeChecker, +): EditTarget | undefined { + if (!declaration.body) + return undefined + + return findTargetInFunctionBody( + declaration.body, + expression => findTargetFromExpression(expression, checker), + ) +} + +function findCallExpressionDeclarationTarget( + callExpression: ts.CallExpression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const declaration = getResolvedValueDeclaration(checker.getSymbolAtLocation(skipOuterExpressions(callExpression.expression)), checker) + if (!declaration) + return undefined + + const functionLikeDeclaration = getFunctionLikeDeclaration(declaration) + if (!functionLikeDeclaration) + return undefined + + return findFunctionLikeReturnTarget(functionLikeDeclaration, checker) +} + +function findCallbackArgumentTarget( + callExpression: ts.CallExpression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const callee = skipOuterExpressions(callExpression.expression) + const calleeName = ts.isIdentifier(callee) ? callee.text : getCallExpressionPropertyAccess(callExpression)?.name.text + + if (calleeName !== 'useCallback' && calleeName !== 'useMemo') + return undefined + + const callback = getFunctionExpressionArgument(callExpression) + if (!callback) + return undefined + + return findFunctionLikeReturnTarget(callback, checker) +} + +function findReferencedDeclarationInitializerTarget( + expression: ts.Expression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const innerExpression = skipOuterExpressions(expression) + if (!ts.isIdentifier(innerExpression)) + return undefined + + const declaration = getResolvedValueDeclaration(checker.getSymbolAtLocation(innerExpression), checker) + if (!declaration) + return undefined + + if (ts.isBindingElement(declaration)) { + const propertyName = declaration.propertyName + ? 
getPropertyNameText(declaration.propertyName) + : getPropertyNameText(declaration.name) + + const variableDeclaration = declaration.parent.parent + if (propertyName && ts.isVariableDeclaration(variableDeclaration) && variableDeclaration.initializer) { + const namedPropertyTarget = findNamedPropertyTarget(variableDeclaration.initializer, propertyName, checker) + if (namedPropertyTarget) + return namedPropertyTarget + } + } + + if (ts.isParameter(declaration)) { + const collectionExpression = getParameterCollectionExpression(declaration) + if (collectionExpression) { + const collectionTarget = findTargetFromExpression(collectionExpression, checker) + if (collectionTarget) + return collectionTarget + } + } + + const functionLikeDeclaration = getFunctionLikeDeclaration(declaration) + if (functionLikeDeclaration) { + const functionTarget = findFunctionLikeReturnTarget(functionLikeDeclaration, checker) + if (functionTarget) + return functionTarget + } + + if (!ts.isVariableDeclaration(declaration) || !declaration.initializer) + return undefined + + const collectionCallbackTarget = findCollectionCallbackTarget(declaration.initializer, checker) + if (collectionCallbackTarget) + return collectionCallbackTarget + + const initializerTarget = findTargetFromExpression(declaration.initializer, checker) + if (initializerTarget) + return initializerTarget + + const nestedTarget = findNestedContainerTarget(declaration.initializer, checker) + if (nestedTarget) + return nestedTarget + + if (!typeIncludesUndefined(checker.getTypeAtLocation(declaration.initializer))) + return undefined + + return createExpressionTarget(declaration.initializer) +} + +function findCollectionCallbackTarget( + expression: ts.Expression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const innerExpression = skipOuterExpressions(expression) + if (!ts.isCallExpression(innerExpression)) + return undefined + + const callee = getCallExpressionPropertyAccess(innerExpression) + if (!callee) + return undefined + + if (callee.name.text !== 'map' && callee.name.text !== 'flatMap') + return undefined + + const callback = getFunctionExpressionArgument(innerExpression) + if (!callback) + return undefined + + return findFunctionLikeReturnTarget(callback, checker) +} + +function findJsxComponentDeclarationTarget( + token: ts.Node, + checker: ts.TypeChecker, +): EditTarget | undefined { + const openingLikeElement = findAncestor(token, node => + ts.isJsxOpeningElement(node) || ts.isJsxSelfClosingElement(node)) + if (!openingLikeElement) + return undefined + + const tagName = openingLikeElement.tagName + if (!ts.isIdentifier(tagName)) + return undefined + + const symbol = checker.getSymbolAtLocation(tagName) + const declaration = symbol?.valueDeclaration + if (!declaration || !ts.isVariableDeclaration(declaration) || !declaration.initializer) + return undefined + + if (!typeIncludesUndefined(checker.getTypeAtLocation(declaration.initializer))) + return undefined + + return createExpressionTarget(declaration.initializer) +} + +function findObjectLiteralPropertyTarget( + objectLiteral: ts.ObjectLiteralExpression, + checker: ts.TypeChecker, +): EditTarget | undefined { + for (const property of objectLiteral.properties) { + if (ts.isSpreadAssignment(property)) { + const directTarget = findTargetFromExpression(property.expression, checker) + if (directTarget) + return directTarget + + if (typeIncludesUndefined(checker.getTypeAtLocation(property.expression))) + return createExpressionTarget(property.expression) + continue + } + + if 
(ts.isShorthandPropertyAssignment(property)) { + if (typeIncludesUndefined(checker.getTypeAtLocation(property.name))) + return createShorthandPropertyTarget(property) + continue + } + + if (ts.isPropertyAssignment(property)) { + const directTarget = findTargetFromExpression(property.initializer, checker) + if (directTarget) + return directTarget + + const nestedTarget = findNestedContainerTarget(property.initializer, checker) + if (nestedTarget) + return nestedTarget + + if (typeIncludesUndefined(checker.getTypeAtLocation(property.initializer))) + return createExpressionTarget(property.initializer) + } + } + + return undefined +} + +function findArrayLiteralElementTarget( + arrayLiteral: ts.ArrayLiteralExpression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const iterableFallbackTarget = createArrayLiteralIterableFallbackTarget(arrayLiteral, checker) + if (iterableFallbackTarget) + return iterableFallbackTarget + + for (const element of arrayLiteral.elements) { + if (ts.isSpreadElement(element)) { + const directTarget = findTargetFromExpression(element.expression, checker) + if (directTarget) + return directTarget + + if (typeIncludesUndefined(checker.getTypeAtLocation(element.expression))) + return createExpressionTarget(element.expression) + continue + } + + const directTarget = findTargetFromExpression(element, checker) + if (directTarget) + return directTarget + + const nestedTarget = findNestedContainerTarget(element, checker) + if (nestedTarget) + return nestedTarget + + if (typeIncludesUndefined(checker.getTypeAtLocation(element))) + return createExpressionTarget(element) + } + + for (let index = arrayLiteral.elements.length - 1; index >= 0; index -= 1) { + const element = arrayLiteral.elements[index] + if (!element) + continue + + if (ts.isSpreadElement(element)) + continue + + if (!isAlreadyNonNull(element)) + return createExpressionTarget(element) + } + + return undefined +} + +function findNestedContainerTarget( + expression: ts.Expression, + checker: ts.TypeChecker, +): EditTarget | undefined { + const innerExpression = skipOuterExpressions(expression) + if (ts.isObjectLiteralExpression(innerExpression)) + return findObjectLiteralPropertyTarget(innerExpression, checker) + + if (ts.isArrayLiteralExpression(innerExpression)) + return findArrayLiteralElementTarget(innerExpression, checker) + + return undefined +} + +function findAccessDiagnosticTarget( + sourceFile: ts.SourceFile, + token: ts.Node, + start: number, + end: number, + checker: ts.TypeChecker, +): EditTarget | undefined { + const directExpression = findTightestExpression(sourceFile, start, end) + if (directExpression) { + if (typeIncludesUndefined(checker.getTypeAtLocation(directExpression)) && !isAlreadyNonNull(directExpression)) + return createExpressionTarget(directExpression) + + const referencedDeclarationTarget = findReferencedDeclarationInitializerTarget(directExpression, checker) + if (referencedDeclarationTarget && isExpressionTarget(referencedDeclarationTarget) && !isAlreadyNonNull(referencedDeclarationTarget.expression)) + return referencedDeclarationTarget + } + + const bindingPatternTarget = findVariableDeclarationInitializerTarget(sourceFile, token, checker) + if (bindingPatternTarget && isExpressionTarget(bindingPatternTarget) && !isAlreadyNonNull(bindingPatternTarget.expression)) + return bindingPatternTarget + + const accessTarget = findUndefinedAccessTarget(token, checker) + if (accessTarget && isExpressionTarget(accessTarget) && !isAlreadyNonNull(accessTarget.expression)) + return 
accessTarget + + const propertyAccessTarget = findPropertyAccessExpressionTarget(token, start) + if (propertyAccessTarget && isExpressionTarget(propertyAccessTarget) && !isAlreadyNonNull(propertyAccessTarget.expression)) + return propertyAccessTarget + + return undefined +} + +function findDiagnosticCandidate( + sourceFile: ts.SourceFile, + token: ts.Node, + start: number, + end: number, + diagnosticCode: number, + checker: ts.TypeChecker, +): ts.Expression | undefined { + if (diagnosticCode === 2322) { + const directExpression = findTightestExpression(sourceFile, start, end) + if (directExpression && typeIncludesUndefined(checker.getTypeAtLocation(directExpression))) + return directExpression + + return findAssignmentLikeCandidate(token, sourceFile, start, end) + } + + if (diagnosticCode === 2345) + return findArgumentCandidate(token, sourceFile, start, end) + + if (diagnosticCode === 2722) { + const current = findTightestExpression(sourceFile, start, end) + if (current && ts.isCallExpression(current)) + return current.expression + + return findTightestExpression(sourceFile, start, end) + } + + return findTightestExpression(sourceFile, start, end) +} + +function resolveEditTarget( + sourceFile: ts.SourceFile, + diagnostic: ts.DiagnosticWithLocation, + checker: ts.TypeChecker, +): EditTarget | undefined { + const start = diagnostic.start + const end = diagnostic.start + diagnostic.length + const token = getTokenAtPosition(sourceFile, start) + + const shorthandTarget = findShorthandPropertyTarget(token, checker) + if (shorthandTarget) + return shorthandTarget + + const propertyAssignmentTarget = findPropertyAssignmentInitializerTarget(token, start, checker) + if (propertyAssignmentTarget) + return propertyAssignmentTarget + + const jsxSpreadTarget = findJsxSpreadAttributeTarget(token, checker) + if (jsxSpreadTarget && isExpressionTarget(jsxSpreadTarget) && !isAlreadyNonNull(jsxSpreadTarget.expression)) + return jsxSpreadTarget + + const jsxAttribute = findAncestor(token, ts.isJsxAttribute) + const jsxExpression = jsxAttribute ? getExpressionFromJsxAttribute(jsxAttribute) : undefined + + if ( + ASSIGNABILITY_DIAGNOSTIC_CODES.has(diagnostic.code) + && jsxExpression + && typeIncludesUndefined(checker.getTypeAtLocation(jsxExpression)) + && !isAlreadyNonNull(jsxExpression) + ) { + return findTargetFromExpression(jsxExpression, checker) + ?? 
createExpressionTarget(jsxExpression) + } + + if (ACCESS_DIAGNOSTIC_CODES.has(diagnostic.code)) + return findAccessDiagnosticTarget(sourceFile, token, start, end, checker) + + if (diagnostic.code === 2322 || diagnostic.code === 2488) { + const arrayDestructuringTarget = findArrayDestructuringTarget(token, checker) + if (arrayDestructuringTarget) + return arrayDestructuringTarget + } + + if (diagnostic.code === 2538) { + const elementAccessTarget = findElementAccessArgumentTarget(token) + if (elementAccessTarget && isExpressionTarget(elementAccessTarget) && !isAlreadyNonNull(elementAccessTarget.expression)) + return elementAccessTarget + } + + if (diagnostic.code === 7006) + return findImplicitAnyParameterTarget(token) + + if (diagnostic.code === 2488) { + const iterableTarget = findIterableTarget(sourceFile, token, start, end, checker) + if (iterableTarget && (!isExpressionTarget(iterableTarget) || !isAlreadyNonNull(iterableTarget.expression))) + return iterableTarget + } + + if (diagnostic.code === 2604 || diagnostic.code === 2786) { + const jsxComponentTarget = findJsxComponentDeclarationTarget(token, checker) + if (jsxComponentTarget && isExpressionTarget(jsxComponentTarget) && !isAlreadyNonNull(jsxComponentTarget.expression)) + return jsxComponentTarget + } + + const candidate = findDiagnosticCandidate(sourceFile, token, start, end, diagnostic.code, checker) + + if (!candidate) { + return jsxExpression && !isAlreadyNonNull(jsxExpression) + ? createExpressionTarget(jsxExpression) + : undefined + } + + if (ASSIGNABILITY_DIAGNOSTIC_CODES.has(diagnostic.code)) { + if ( + diagnostic.code === 2345 + && typeIncludesUndefined(checker.getTypeAtLocation(candidate)) + && !isAlreadyNonNull(candidate) + && ( + ts.isIdentifier(candidate) + || ts.isElementAccessExpression(candidate) + || ts.isPropertyAccessExpression(candidate) + ) + ) { + return createExpressionTarget(candidate) + } + + const referencedDeclarationTarget = findReferencedDeclarationInitializerTarget(candidate, checker) + if (referencedDeclarationTarget && isExpressionTarget(referencedDeclarationTarget) && !isAlreadyNonNull(referencedDeclarationTarget.expression)) + return referencedDeclarationTarget + + const collectionCallbackTarget = findCollectionCallbackTarget(candidate, checker) + if (collectionCallbackTarget && isExpressionTarget(collectionCallbackTarget) && !isAlreadyNonNull(collectionCallbackTarget.expression)) + return collectionCallbackTarget + } + + const targetFromCandidate = findTargetFromExpression(candidate, checker) + if (targetFromCandidate && (!isExpressionTarget(targetFromCandidate) || !isAlreadyNonNull(targetFromCandidate.expression))) + return targetFromCandidate + + if (ASSIGNABILITY_DIAGNOSTIC_CODES.has(diagnostic.code) && (ts.isArrowFunction(candidate) || ts.isFunctionExpression(candidate))) { + const functionTarget = findFunctionLikeReturnTarget(candidate, checker) + if (functionTarget && isExpressionTarget(functionTarget) && !isAlreadyNonNull(functionTarget.expression)) + return functionTarget + } + + const nestedContainerTarget = findNestedContainerTarget(candidate, checker) + if (nestedContainerTarget) + return nestedContainerTarget + + if (isAlreadyNonNull(candidate)) + return undefined + + if (ASSIGNABILITY_DIAGNOSTIC_CODES.has(diagnostic.code) && ts.isObjectLiteralExpression(candidate)) { + const objectLiteralTarget = findObjectLiteralPropertyTarget(candidate, checker) + if (objectLiteralTarget) + return objectLiteralTarget + } + + if (diagnostic.code === 2322) { + const declarationInitializerTarget = 
findVariableDeclarationInitializerTarget(sourceFile, token, checker) + if (declarationInitializerTarget && isExpressionTarget(declarationInitializerTarget) && !isAlreadyNonNull(declarationInitializerTarget.expression)) + return declarationInitializerTarget + } + + if (ASSIGNABILITY_DIAGNOSTIC_CODES.has(diagnostic.code) && !typeIncludesUndefined(checker.getTypeAtLocation(candidate))) + return undefined + + return createExpressionTarget(candidate) +} + +function createEditForTarget( + target: EditTarget, + printer: ts.Printer, +): TextEdit { + const sourceFile = target.sourceFile + + if (target.kind === 'direct-edit') { + return { + end: target.end, + expectedText: sourceFile.text.slice(target.start, target.end), + replacement: target.replacement, + start: target.start, + } + } + + if (target.kind === 'shorthand-property') { + const name = target.property.name + const nonNullName = printer.printNode( + ts.EmitHint.Expression, + ts.factory.createNonNullExpression(name), + sourceFile, + ) + return { + end: target.property.getEnd(), + expectedText: sourceFile.text.slice(target.property.getStart(sourceFile), target.property.getEnd()), + replacement: `${name.getText(sourceFile)}: ${nonNullName}`, + start: target.property.getStart(sourceFile), + } + } + + const replacement = shouldPrintInlineNonNullAssertion(target.expression) + ? `${target.expression.getText(sourceFile)}!` + : printer.printNode( + ts.EmitHint.Expression, + ts.factory.createNonNullExpression(target.expression), + sourceFile, + ) + + return { + end: target.expression.getEnd(), + expectedText: sourceFile.text.slice(target.expression.getStart(sourceFile), target.expression.getEnd()), + replacement, + start: target.expression.getStart(sourceFile), + } +} + +function hasOverlap(existingEdits: TextEdit[], nextEdit: TextEdit): boolean { + return existingEdits.some(edit => nextEdit.start < edit.end && edit.start < nextEdit.end) +} + +function applyEdits(text: string, edits: TextEdit[]): { appliedEditCount: number, text: string } { + let currentText = text + let appliedEditCount = 0 + + for (const edit of edits.sort((left, right) => right.start - left.start)) { + if (edit.replacement.length > currentText.length * 4) + continue + + try { + currentText = `${currentText.slice(0, edit.start)}${edit.replacement}${currentText.slice(edit.end)}` + appliedEditCount += 1 + } + catch { + continue + } + } + + return { + appliedEditCount, + text: currentText, + } +} + +function isValidEditRange(text: string, edit: TextEdit): boolean { + return Number.isInteger(edit.start) + && Number.isInteger(edit.end) + && edit.start >= 0 + && edit.end >= edit.start + && edit.end <= text.length +} + +function filterApplicableEdits(text: string, edits: TextEdit[]): TextEdit[] { + return edits.filter(edit => isValidEditRange(text, edit) && (!edit.expectedText || text.slice(edit.start, edit.end) === edit.expectedText)) +} + +export async function runMigration(options: CliOptions) { + const projectPath = path.resolve(process.cwd(), options.project) + const parsedConfig = parseTsConfig(projectPath) + const matchesRequestedFile = createFileMatcher(options.files) + const targetFiles = parsedConfig.fileNames + .map(normalizeFileName) + .filter(isTargetFile) + .filter(matchesRequestedFile) + + if (targetFiles.length === 0) { + console.error('No matching TypeScript source files found.') + process.exitCode = 1 + return { converged: false, totalEdits: 0 } + } + + const fileTexts = new Map() + const printer = ts.createPrinter() + const migrationRootNames = 
options.useFullProjectRoots + ? parsedConfig.fileNames.map(normalizeFileName) + : getMigrationRootNames(parsedConfig, targetFiles) + + let totalEdits = 0 + let converged = false + let previousProgram: ts.Program | undefined + + for (let iteration = 1; iteration <= options.maxIterations; iteration += 1) { + const program = createMigrationProgram(migrationRootNames, parsedConfig, fileTexts, previousProgram) + const checker = program.getTypeChecker() + const editsByFile = new Map() + + for (const fileName of targetFiles) { + const sourceFile = program.getSourceFile(fileName) + if (!sourceFile) + continue + + const diagnostics = program + .getSemanticDiagnostics(sourceFile) + .filter((diagnostic): diagnostic is ts.DiagnosticWithLocation => { + return diagnostic.file !== undefined + && diagnostic.start !== undefined + && diagnostic.length !== undefined + && SUPPORTED_DIAGNOSTIC_CODES.has(diagnostic.code) + }) + + if (options.verbose && diagnostics.length > 0) + console.log(`file ${path.relative(process.cwd(), fileName)}: ${diagnostics.length} supported diagnostic(s)`) + + for (const diagnostic of diagnostics) { + const target = resolveEditTarget(sourceFile, diagnostic, checker) + if (!target) { + if (options.verbose) + console.log(`unresolved ${formatDiagnostic(diagnostic)}`) + continue + } + + const editFileName = target.sourceFile.fileName + const edit = createEditForTarget(target, printer) + const existing = editsByFile.get(editFileName) ?? [] + if (hasOverlap(existing, edit)) + continue + + existing.push(edit) + editsByFile.set(editFileName, existing) + + if (options.verbose) { + const position = target.sourceFile.getLineAndCharacterOfPosition(edit.start) + console.log(`iter ${iteration}: ${path.relative(process.cwd(), editFileName)}:${position.line + 1}:${position.character + 1} -> add !`) + } + } + } + + if (editsByFile.size === 0) { + console.log(`No more supported diagnostics after ${iteration - 1} iteration(s).`) + converged = true + break + } + + let iterationEditCount = 0 + + for (const [fileName, edits] of editsByFile) { + const currentText = fileTexts.get(fileName) ?? await fs.readFile(fileName, 'utf8') + const applicableEdits = filterApplicableEdits(currentText, edits) + if (applicableEdits.length === 0) + continue + + const { appliedEditCount, text: editedText } = applyEdits(currentText, applicableEdits) + if (appliedEditCount === 0) + continue + + const nextText = normalizeMalformedAssertions(editedText) + if (nextText === currentText) { + if (options.verbose) { + const firstEdit = applicableEdits[0] + console.log(`iter ${iteration}: no-op after normalization for ${path.relative(process.cwd(), fileName)}:${firstEdit?.start ?? 0} ${JSON.stringify(firstEdit ? currentText.slice(firstEdit.start, firstEdit.end) : '')} -> ${JSON.stringify(firstEdit?.replacement ?? '')}`) + } + continue + } + + fileTexts.set(fileName, nextText) + iterationEditCount += appliedEditCount + } + + totalEdits += iterationEditCount + console.log(`Iteration ${iteration}: ${iterationEditCount} edit(s) across ${editsByFile.size} file(s).`) + previousProgram = program + } + + if (totalEdits === 0) { + console.log('No supported noUncheckedIndexedAccess-style diagnostics were migrated.') + return { converged, totalEdits } + } + + if (!options.write) { + if (!converged) + console.log(`Stopped after reaching --max-iterations=${options.maxIterations}.`) + + console.log(`Dry run complete. ${totalEdits} edit(s) are ready. 
Re-run with --write to apply them.`) + return { converged, totalEdits } + } + + const changedFiles = Array.from(fileTexts.entries()) + await Promise.all(changedFiles.map(async ([fileName, text]) => { + await fs.writeFile(fileName, text) + })) + + if (!converged) + console.log(`Stopped after reaching --max-iterations=${options.maxIterations}.`) + + console.log(`Wrote ${totalEdits} edit(s) to ${changedFiles.length} file(s).`) + return { converged, totalEdits } +} + +export async function runMigrationCommand(argv: string[]) { + await runMigration(parseArgs(argv)) +} diff --git a/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/normalize.ts b/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/normalize.ts new file mode 100644 index 0000000000..d3b88736fc --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/normalize.ts @@ -0,0 +1,51 @@ +import fs from 'node:fs/promises' +import path from 'node:path' +import process from 'node:process' +import { normalizeMalformedAssertions } from './migrate' + +const ROOT = process.cwd() +const EXTENSIONS = new Set(['.ts', '.tsx']) + +async function collectFiles(directory: string): Promise<string[]> { + const entries = await fs.readdir(directory, { withFileTypes: true }) + const files: string[] = [] + + for (const entry of entries) { + if (entry.name === 'node_modules' || entry.name === '.next') + continue + + const absolutePath = path.join(directory, entry.name) + if (entry.isDirectory()) { + files.push(...await collectFiles(absolutePath)) + continue + } + + if (!EXTENSIONS.has(path.extname(entry.name))) + continue + + files.push(absolutePath) + } + + return files +} + +async function main() { + const files = await collectFiles(ROOT) + let changedFileCount = 0 + + await Promise.all(files.map(async (fileName) => { + const currentText = await fs.readFile(fileName, 'utf8') + const nextText = normalizeMalformedAssertions(currentText) + if (nextText === currentText) + return + + await fs.writeFile(fileName, nextText) + changedFileCount += 1 + })) + + console.log(`Normalized malformed assertion syntax in ${changedFileCount} file(s).`) +} + +export async function runNormalizeCommand(_argv: string[]) { + await main() +} diff --git a/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/run.ts b/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/run.ts new file mode 100644 index 0000000000..ad655e4f11 --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/src/no-unchecked-indexed-access/run.ts @@ -0,0 +1,325 @@ +import { execFile } from 'node:child_process' + import { createHash } from 'node:crypto' +import fs from 'node:fs/promises' +import os from 'node:os' +import path from 'node:path' +import process from 'node:process' +import { promisify } from 'node:util' +import { runMigration, SUPPORTED_DIAGNOSTIC_CODES } from './migrate' + +const execFileAsync = promisify(execFile) +const DIAGNOSTIC_PATTERN = /^(.+?\.(?:ts|tsx))\((\d+),(\d+)\): error TS(\d+): (.+)$/ +const DEFAULT_BATCH_SIZE = 100 +const DEFAULT_BATCH_ITERATIONS = 5 +const DEFAULT_MAX_ROUNDS = 20 +const TYPECHECK_CACHE_DIR = path.join(os.tmpdir(), 'migrate-no-unchecked-indexed-access') + +type CliOptions = { + batchIterations: number + batchSize: number + maxRounds: number + project: string + verbose: boolean +} + +type DiagnosticEntry = { + code: number + fileName: string + line: number + message: string +} + +function parseArgs(argv: string[]): CliOptions { 
const options: CliOptions = { + batchIterations: DEFAULT_BATCH_ITERATIONS, + batchSize: DEFAULT_BATCH_SIZE, + maxRounds: DEFAULT_MAX_ROUNDS, + project: 'tsconfig.json', + verbose: false, + } + + for (let i = 0; i < argv.length; i += 1) { + const arg = argv[i] + + if (arg === '--') + continue + + if (arg === '--verbose') { + options.verbose = true + continue + } + + if (arg === '--project') { + const value = argv[i + 1] + if (!value) + throw new Error('Missing value for --project') + + options.project = value + i += 1 + continue + } + + if (arg === '--batch-size') { + const value = Number(argv[i + 1]) + if (!Number.isInteger(value) || value <= 0) + throw new Error('Invalid value for --batch-size') + + options.batchSize = value + i += 1 + continue + } + + if (arg === '--batch-iterations') { + const value = Number(argv[i + 1]) + if (!Number.isInteger(value) || value <= 0) + throw new Error('Invalid value for --batch-iterations') + + options.batchIterations = value + i += 1 + continue + } + + if (arg === '--max-rounds') { + const value = Number(argv[i + 1]) + if (!Number.isInteger(value) || value <= 0) + throw new Error('Invalid value for --max-rounds') + + options.maxRounds = value + i += 1 + continue + } + + throw new Error(`Unknown option: ${arg}`) + } + + return options +} + +function getTypeCheckBuildInfoPath(projectPath: string): string { + const hash = createHash('sha1') + .update(projectPath) + .digest('hex') + .slice(0, 16) + + return path.join(TYPECHECK_CACHE_DIR, `${hash}.tsbuildinfo`) +} + +async function runTypeCheck( + project: string, + options?: { + incremental?: boolean + }, +): Promise<{ diagnostics: DiagnosticEntry[], exitCode: number, rawOutput: string }> { + const projectPath = path.resolve(process.cwd(), project) + const projectDirectory = path.dirname(projectPath) + const buildInfoPath = getTypeCheckBuildInfoPath(projectPath) + const incremental = options?.incremental ?? true + + await fs.mkdir(TYPECHECK_CACHE_DIR, { recursive: true }) + + const tscArgs = ['exec', 'tsc', '--noEmit', '--pretty', 'false'] + if (incremental) { + tscArgs.push('--incremental', '--tsBuildInfoFile', buildInfoPath) + } + else { + tscArgs.push('--incremental', 'false') + } + tscArgs.push('--project', projectPath) + + try { + const { stdout, stderr } = await execFileAsync('pnpm', tscArgs, { + cwd: projectDirectory, + env: { + ...process.env, + NODE_OPTIONS: process.env.NODE_OPTIONS ?? '--max-old-space-size=8192', + }, + maxBuffer: 1024 * 1024 * 32, + }) + + const rawOutput = `${stdout}${stderr}`.trim() + return { + diagnostics: parseDiagnostics(rawOutput, projectDirectory), + exitCode: 0, + rawOutput, + } + } + catch (error) { + const exitCode = typeof error === 'object' && error && 'code' in error && typeof error.code === 'number' + ? error.code + : 1 + const stdout = typeof error === 'object' && error && 'stdout' in error && typeof error.stdout === 'string' + ? error.stdout + : '' + const stderr = typeof error === 'object' && error && 'stderr' in error && typeof error.stderr === 'string' + ? 
error.stderr + : '' + const rawOutput = `${stdout}${stderr}`.trim() + + return { + diagnostics: parseDiagnostics(rawOutput, projectDirectory), + exitCode, + rawOutput, + } + } +} + +function parseDiagnostics(rawOutput: string, projectDirectory: string): DiagnosticEntry[] { + return rawOutput + .split('\n') + .map(line => line.trim()) + .flatMap((line) => { + const match = line.match(DIAGNOSTIC_PATTERN) + if (!match) + return [] + + return [{ + code: Number(match[4]), + fileName: path.resolve(projectDirectory, match[1]!), + line: Number(match[2]), + message: match[5] ?? '', + }] + }) +} + +function summarizeCodes(diagnostics: DiagnosticEntry[]): string { + const counts = new Map() + for (const diagnostic of diagnostics) + counts.set(diagnostic.code, (counts.get(diagnostic.code) ?? 0) + 1) + + return Array.from(counts.entries()) + .sort((left, right) => right[1] - left[1]) + .slice(0, 8) + .map(([code, count]) => `TS${code}:${count}`) + .join(', ') +} + +function chunk<T>(items: T[], size: number): T[][] { + const batches: T[][] = [] + for (let i = 0; i < items.length; i += size) + batches.push(items.slice(i, i + size)) + + return batches +} + +async function runBatchMigration(options: CliOptions) { + for (let round = 1; round <= options.maxRounds; round += 1) { + const { diagnostics, exitCode, rawOutput } = await runTypeCheck(options.project) + if (exitCode === 0) { + const finalCheck = await runTypeCheck(options.project, { incremental: false }) + if (finalCheck.exitCode !== 0) { + const finalDiagnostics = finalCheck.diagnostics + console.log(`Final cold type check found ${finalDiagnostics.length} diagnostic(s). ${summarizeCodes(finalDiagnostics)}`) + + if (options.verbose) { + for (const diagnostic of finalDiagnostics.slice(0, 40)) + console.log(`${path.relative(process.cwd(), diagnostic.fileName)}:${diagnostic.line} TS${diagnostic.code} ${diagnostic.message}`) + } + + const finalSupportedFiles = Array.from(new Set( + finalDiagnostics + .filter(diagnostic => SUPPORTED_DIAGNOSTIC_CODES.has(diagnostic.code)) + .map(diagnostic => diagnostic.fileName), + )) + + if (finalSupportedFiles.length > 0) { + console.log(` Final pass batch: ${finalSupportedFiles.length} file(s)`) + let finalResult = await runMigration({ + files: finalSupportedFiles, + maxIterations: options.batchIterations, + project: options.project, + verbose: options.verbose, + write: true, + }) + + if (finalResult.totalEdits === 0) { + console.log(' No edits produced; retrying final pass with full project roots.') + finalResult = await runMigration({ + files: finalSupportedFiles, + maxIterations: options.batchIterations, + project: options.project, + useFullProjectRoots: true, + verbose: options.verbose, + write: true, + }) + } + + if (finalResult.totalEdits > 0) + continue + } + + if (finalCheck.rawOutput) + process.stderr.write(`${finalCheck.rawOutput}\n`) + process.exitCode = 1 + return + } + + console.log(`Type check passed after ${round - 1} migration round(s).`) + return + } + + const supportedDiagnostics = diagnostics.filter(diagnostic => SUPPORTED_DIAGNOSTIC_CODES.has(diagnostic.code)) + const unsupportedDiagnostics = diagnostics.filter(diagnostic => !SUPPORTED_DIAGNOSTIC_CODES.has(diagnostic.code)) + const supportedFiles = Array.from(new Set(supportedDiagnostics.map(diagnostic => diagnostic.fileName))) + + console.log(`Round ${round}: ${diagnostics.length} diagnostic(s). 
${summarizeCodes(diagnostics)}`) + + if (options.verbose) { + for (const diagnostic of diagnostics.slice(0, 40)) + console.log(`${path.relative(process.cwd(), diagnostic.fileName)}:${diagnostic.line} TS${diagnostic.code} ${diagnostic.message}`) + } + + if (supportedFiles.length === 0) { + console.error('No supported diagnostics remain to migrate.') + if (unsupportedDiagnostics.length > 0) { + console.error('Remaining unsupported diagnostics:') + for (const diagnostic of unsupportedDiagnostics.slice(0, 40)) + console.error(`${path.relative(process.cwd(), diagnostic.fileName)}:${diagnostic.line} TS${diagnostic.code} ${diagnostic.message}`) + } + if (rawOutput) + process.stderr.write(`${rawOutput}\n`) + process.exitCode = 1 + return + } + + let roundEdits = 0 + const batches = chunk(supportedFiles, options.batchSize) + + for (const [index, batch] of batches.entries()) { + console.log(` Batch ${index + 1}/${batches.length}: ${batch.length} file(s)`) + let result = await runMigration({ + files: batch, + maxIterations: options.batchIterations, + project: options.project, + verbose: options.verbose, + write: true, + }) + + if (result.totalEdits === 0) { + console.log(' No edits produced; retrying batch with full project roots.') + result = await runMigration({ + files: batch, + maxIterations: options.batchIterations, + project: options.project, + useFullProjectRoots: true, + verbose: options.verbose, + write: true, + }) + } + + roundEdits += result.totalEdits + } + + if (roundEdits === 0) { + console.error('Migration script made no edits in this round; stopping to avoid an infinite loop.') + process.exitCode = 1 + return + } + } + + console.error(`Reached --max-rounds=${options.maxRounds} before type check passed.`) + process.exitCode = 1 +} + +export async function runBatchMigrationCommand(argv: string[]) { + await runBatchMigration(parseArgs(argv)) +} diff --git a/packages/migrate-no-unchecked-indexed-access/tsconfig.json b/packages/migrate-no-unchecked-indexed-access/tsconfig.json new file mode 100644 index 0000000000..aeb24e1df5 --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/tsconfig.json @@ -0,0 +1,3 @@ +{ + "extends": "@dify/tsconfig/node.json" +} diff --git a/packages/migrate-no-unchecked-indexed-access/vite.config.ts b/packages/migrate-no-unchecked-indexed-access/vite.config.ts new file mode 100644 index 0000000000..ac4aed1a06 --- /dev/null +++ b/packages/migrate-no-unchecked-indexed-access/vite.config.ts @@ -0,0 +1,17 @@ +import { defineConfig } from 'vite-plus' + +export default defineConfig({ + pack: { + clean: true, + deps: { + neverBundle: ['typescript'], + }, + entry: ['src/cli.ts'], + format: ['esm'], + outDir: 'dist', + platform: 'node', + sourcemap: true, + target: 'node22', + treeshake: true, + }, +}) diff --git a/packages/tsconfig/base.json b/packages/tsconfig/base.json new file mode 100644 index 0000000000..707f1aff56 --- /dev/null +++ b/packages/tsconfig/base.json @@ -0,0 +1,19 @@ +{ + "compilerOptions": { + "esModuleInterop": true, + "skipLibCheck": true, + "target": "es2022", + "allowJs": true, + "resolveJsonModule": true, + "moduleDetection": "force", + "isolatedModules": true, + "verbatimModuleSyntax": true, + + "strict": true, + "noUncheckedIndexedAccess": true, + "noImplicitOverride": true, + + "module": "preserve", + "noEmit": true + } +} diff --git a/packages/tsconfig/nextjs.json b/packages/tsconfig/nextjs.json new file mode 100644 index 0000000000..81c6436a97 --- /dev/null +++ b/packages/tsconfig/nextjs.json @@ -0,0 +1,10 @@ +{ + "extends": 
"./web.json", + "compilerOptions": { + "plugins": [ + { + "name": "next" + } + ] + } +} diff --git a/packages/tsconfig/node.json b/packages/tsconfig/node.json new file mode 100644 index 0000000000..832dab2b09 --- /dev/null +++ b/packages/tsconfig/node.json @@ -0,0 +1,7 @@ +{ + "extends": "./base.json", + "compilerOptions": { + "lib": ["es2022"], + "types": ["node"] + } +} diff --git a/packages/tsconfig/package.json b/packages/tsconfig/package.json new file mode 100644 index 0000000000..52cafc5bb3 --- /dev/null +++ b/packages/tsconfig/package.json @@ -0,0 +1,11 @@ +{ + "name": "@dify/tsconfig", + "version": "0.0.0-private", + "private": true, + "exports": { + "./base.json": "./base.json", + "./nextjs.json": "./nextjs.json", + "./node.json": "./node.json", + "./web.json": "./web.json" + } +} diff --git a/packages/tsconfig/web.json b/packages/tsconfig/web.json new file mode 100644 index 0000000000..9f3ba7c121 --- /dev/null +++ b/packages/tsconfig/web.json @@ -0,0 +1,7 @@ +{ + "extends": "./base.json", + "compilerOptions": { + "jsx": "react-jsx", + "lib": ["es2022", "dom", "dom.iterable"] + } +} diff --git a/pnpm-lock.yaml b/pnpm-lock.yaml index f7985dac7c..914bc342e2 100644 --- a/pnpm-lock.yaml +++ b/pnpm-lock.yaml @@ -7,23 +7,23 @@ settings: catalogs: default: '@amplitude/analytics-browser': - specifier: 2.38.1 - version: 2.38.1 + specifier: 2.39.0 + version: 2.39.0 '@amplitude/plugin-session-replay-browser': - specifier: 1.27.6 - version: 1.27.6 + specifier: 1.27.7 + version: 1.27.7 '@antfu/eslint-config': - specifier: 8.1.1 - version: 8.1.1 + specifier: 8.2.0 + version: 8.2.0 '@base-ui/react': - specifier: 1.3.0 - version: 1.3.0 + specifier: 1.4.0 + version: 1.4.0 '@chromatic-com/storybook': - specifier: 5.1.1 - version: 5.1.1 + specifier: 5.1.2 + version: 5.1.2 '@cucumber/cucumber': - specifier: 12.7.0 - version: 12.7.0 + specifier: 12.8.0 + version: 12.8.0 '@egoist/tailwindcss-icons': specifier: 1.9.2 version: 1.9.2 @@ -40,8 +40,8 @@ catalogs: specifier: 0.27.19 version: 0.27.19 '@formatjs/intl-localematcher': - specifier: 0.8.2 - version: 0.8.2 + specifier: 0.8.3 + version: 0.8.3 '@headlessui/react': specifier: 2.2.10 version: 2.2.10 @@ -49,8 +49,8 @@ catalogs: specifier: 2.2.0 version: 2.2.0 '@hono/node-server': - specifier: 1.19.13 - version: 1.19.13 + specifier: 1.19.14 + version: 1.19.14 '@iconify-json/heroicons': specifier: 1.2.3 version: 1.2.3 @@ -58,23 +58,23 @@ catalogs: specifier: 1.2.10 version: 1.2.10 '@lexical/link': - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 '@lexical/list': - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 '@lexical/react': - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 '@lexical/selection': - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 '@lexical/text': - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 '@lexical/utils': - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 '@mdx-js/loader': specifier: 3.1.1 version: 3.1.1 @@ -94,17 +94,17 @@ catalogs: specifier: 16.2.3 version: 16.2.3 '@orpc/client': - specifier: 1.13.13 - version: 1.13.13 + specifier: 1.13.14 + version: 1.13.14 '@orpc/contract': - specifier: 1.13.13 - version: 1.13.13 + specifier: 1.13.14 + version: 1.13.14 '@orpc/openapi-client': - specifier: 1.13.13 - version: 1.13.13 + specifier: 1.13.14 + version: 1.13.14 '@orpc/tanstack-query': - specifier: 1.13.13 - version: 1.13.13 + specifier: 1.13.14 + version: 1.13.14 
'@playwright/test': specifier: 1.59.1 version: 1.59.1 @@ -115,8 +115,8 @@ catalogs: specifier: 4.2.0 version: 4.2.0 '@sentry/react': - specifier: 10.47.0 - version: 10.47.0 + specifier: 10.48.0 + version: 10.48.0 '@storybook/addon-docs': specifier: 10.3.5 version: 10.3.5 @@ -154,23 +154,23 @@ catalogs: specifier: 4.2.2 version: 4.2.2 '@tanstack/eslint-plugin-query': - specifier: 5.96.2 - version: 5.96.2 + specifier: 5.99.0 + version: 5.99.0 '@tanstack/react-devtools': specifier: 0.10.2 version: 0.10.2 '@tanstack/react-form': - specifier: 1.28.6 - version: 1.28.6 + specifier: 1.29.0 + version: 1.29.0 '@tanstack/react-form-devtools': - specifier: 0.2.20 - version: 0.2.20 + specifier: 0.2.21 + version: 0.2.21 '@tanstack/react-query': - specifier: 5.96.2 - version: 5.96.2 + specifier: 5.99.0 + version: 5.99.0 '@tanstack/react-query-devtools': - specifier: 5.96.2 - version: 5.96.2 + specifier: 5.99.0 + version: 5.99.0 '@tanstack/react-virtual': specifier: 3.13.23 version: 3.13.23 @@ -187,14 +187,14 @@ catalogs: specifier: 14.6.1 version: 14.6.1 '@tsslint/cli': - specifier: 3.0.2 - version: 3.0.2 + specifier: 3.0.3 + version: 3.0.3 '@tsslint/compat-eslint': - specifier: 3.0.2 - version: 3.0.2 + specifier: 3.0.3 + version: 3.0.3 '@tsslint/config': - specifier: 3.0.2 - version: 3.0.2 + specifier: 3.0.3 + version: 3.0.3 '@types/js-cookie': specifier: 3.0.6 version: 3.0.6 @@ -205,8 +205,8 @@ catalogs: specifier: 0.6.4 version: 0.6.4 '@types/node': - specifier: 25.5.2 - version: 25.5.2 + specifier: 25.6.0 + version: 25.6.0 '@types/qs': specifier: 6.15.0 version: 6.15.0 @@ -220,23 +220,23 @@ catalogs: specifier: 1.15.9 version: 1.15.9 '@typescript-eslint/eslint-plugin': - specifier: 8.58.1 - version: 8.58.1 + specifier: 8.58.2 + version: 8.58.2 '@typescript-eslint/parser': - specifier: 8.58.1 - version: 8.58.1 + specifier: 8.58.2 + version: 8.58.2 '@typescript/native-preview': - specifier: 7.0.0-dev.20260408.1 - version: 7.0.0-dev.20260408.1 + specifier: 7.0.0-dev.20260413.1 + version: 7.0.0-dev.20260413.1 '@vitejs/plugin-react': specifier: 6.0.1 version: 6.0.1 '@vitejs/plugin-rsc': - specifier: 0.5.23 - version: 0.5.23 + specifier: 0.5.24 + version: 0.5.24 '@vitest/coverage-v8': - specifier: 4.1.3 - version: 4.1.3 + specifier: 4.1.4 + version: 4.1.4 abcjs: specifier: 6.6.2 version: 6.6.2 @@ -274,8 +274,8 @@ catalogs: specifier: 10.6.0 version: 10.6.0 dompurify: - specifier: 3.3.3 - version: 3.3.3 + specifier: 3.4.0 + version: 3.4.0 echarts: specifier: 6.0.0 version: 6.0.0 @@ -304,17 +304,17 @@ catalogs: specifier: 0.6.1 version: 0.6.1 eslint-plugin-better-tailwindcss: - specifier: 4.3.2 - version: 4.3.2 + specifier: 4.4.1 + version: 4.4.1 eslint-plugin-hyoban: specifier: 0.14.1 version: 0.14.1 eslint-plugin-markdown-preferences: - specifier: 0.41.0 - version: 0.41.0 + specifier: 0.41.1 + version: 0.41.1 eslint-plugin-no-barrel-files: - specifier: 1.2.2 - version: 1.2.2 + specifier: 1.3.1 + version: 1.3.1 eslint-plugin-react-refresh: specifier: 0.5.2 version: 0.5.2 @@ -328,14 +328,14 @@ catalogs: specifier: 3.1.3 version: 3.1.3 happy-dom: - specifier: 20.8.9 - version: 20.8.9 + specifier: 20.9.0 + version: 20.9.0 hast-util-to-jsx-runtime: specifier: 2.3.6 version: 2.3.6 hono: - specifier: 4.12.12 - version: 4.12.12 + specifier: 4.12.14 + version: 4.12.14 html-entities: specifier: 2.6.0 version: 2.6.0 @@ -373,8 +373,8 @@ catalogs: specifier: 0.16.45 version: 0.16.45 knip: - specifier: 6.3.1 - version: 6.3.1 + specifier: 6.4.1 + version: 6.4.1 ky: specifier: 2.0.0 version: 2.0.0 @@ -382,8 +382,11 @@ 
catalogs: specifier: 1.2.1 version: 1.2.1 lexical: - specifier: 0.42.0 - version: 0.42.0 + specifier: 0.43.0 + version: 0.43.0 + loro-crdt: + specifier: 1.10.8 + version: 1.10.8 mermaid: specifier: 11.14.0 version: 11.14.0 @@ -406,8 +409,8 @@ catalogs: specifier: 2.8.9 version: 2.8.9 pinyin-pro: - specifier: 3.28.0 - version: 3.28.0 + specifier: 3.28.1 + version: 3.28.1 postcss: specifier: 8.5.9 version: 8.5.9 @@ -471,6 +474,9 @@ catalogs: shiki: specifier: 4.0.2 version: 4.0.2 + socket.io-client: + specifier: 4.8.3 + version: 4.8.3 sortablejs: specifier: 1.15.7 version: 1.15.7 @@ -520,8 +526,8 @@ catalogs: specifier: 12.0.0-beta.1 version: 12.0.0-beta.1 vite-plus: - specifier: 0.1.16 - version: 0.1.16 + specifier: 0.1.18 + version: 0.1.18 vitest-canvas-mock: specifier: 1.1.4 version: 1.1.4 @@ -545,8 +551,8 @@ overrides: flatted@<=3.4.1: 3.4.2 glob@>=10.2.0 <10.5.0: 11.1.0 is-core-module: npm:@nolyfill/is-core-module@^1.0.39 - lodash@>=4.0.0 <= 4.17.23: 4.18.0 lodash-es@>=4.0.0 <= 4.17.23: 4.18.0 + lodash@>=4.0.0 <= 4.17.23: 4.18.0 picomatch@<2.3.2: 2.3.2 picomatch@>=4.0.0 <4.0.4: 4.0.4 rollup@>=4.0.0 <4.59.0: 4.59.0 @@ -559,8 +565,8 @@ overrides: svgo@>=3.0.0 <3.3.3: 3.3.3 tar@<=7.5.10: 7.5.11 undici@>=7.0.0 <7.24.0: 7.24.0 - vite: npm:@voidzero-dev/vite-plus-core@0.1.16 - vitest: npm:@voidzero-dev/vite-plus-test@0.1.16 + vite: npm:@voidzero-dev/vite-plus-core@0.1.18 + vitest: npm:@voidzero-dev/vite-plus-test@0.1.18 yaml@>=2.0.0 <2.8.3: 2.8.3 yauzl@<3.2.1: 3.2.1 @@ -568,24 +574,42 @@ importers: .: devDependencies: + '@antfu/eslint-config': + specifier: 'catalog:' + version: 8.2.0(@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@next/eslint-plugin-next@16.2.3)(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@vue/compiler-sfc@3.5.31)(eslint-plugin-react-refresh@0.5.2(eslint@10.2.0(jiti@2.6.1)))(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))(typescript@6.0.2)(vitest@4.1.4) + eslint: + specifier: 'catalog:' + version: 10.2.0(jiti@2.6.1) + eslint-markdown: + specifier: 'catalog:' + version: 0.6.1(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-markdown-preferences: + specifier: 'catalog:' + version: 0.41.1(@eslint/markdown@8.0.1)(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-no-barrel-files: + specifier: 'catalog:' + version: 1.3.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) vite: - specifier: npm:@voidzero-dev/vite-plus-core@0.1.16 - version: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + specifier: npm:@voidzero-dev/vite-plus-core@0.1.18 + version: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' vite-plus: specifier: 'catalog:' - version: 0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + version: 
0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) e2e: devDependencies: '@cucumber/cucumber': specifier: 'catalog:' - version: 12.7.0 + version: 12.8.0 + '@dify/tsconfig': + specifier: workspace:* + version: link:../packages/tsconfig '@playwright/test': specifier: 'catalog:' version: 1.59.1 '@types/node': specifier: 'catalog:' - version: 25.5.2 + version: 25.6.0 tsx: specifier: 'catalog:' version: 4.21.0 @@ -593,11 +617,30 @@ importers: specifier: 'catalog:' version: 6.0.2 vite: - specifier: npm:@voidzero-dev/vite-plus-core@0.1.16 - version: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + specifier: npm:@voidzero-dev/vite-plus-core@0.1.18 + version: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' vite-plus: specifier: 'catalog:' - version: 0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + version: 0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + + packages/dify-ui: + dependencies: + clsx: + specifier: 'catalog:' + version: 2.1.1 + tailwind-merge: + specifier: 'catalog:' + version: 3.5.0 + devDependencies: + '@dify/tsconfig': + specifier: workspace:* + version: link:../tsconfig + tailwindcss: + specifier: 'catalog:' + version: 4.2.2 + typescript: + specifier: 'catalog:' + version: 6.0.2 packages/iconify-collections: devDependencies: @@ -605,23 +648,47 @@ importers: specifier: 'catalog:' version: 0.1.2 + packages/migrate-no-unchecked-indexed-access: + dependencies: + typescript: + specifier: 'catalog:' + version: 6.0.2 + devDependencies: + '@dify/tsconfig': + specifier: workspace:* + version: link:../tsconfig + '@types/node': + specifier: 'catalog:' + version: 25.6.0 + vite: + specifier: npm:@voidzero-dev/vite-plus-core@0.1.18 + version: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite-plus: + specifier: 'catalog:' + version: 0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + + packages/tsconfig: {} + sdks/nodejs-client: devDependencies: + '@dify/tsconfig': + specifier: workspace:* + version: link:../../packages/tsconfig '@eslint/js': specifier: 'catalog:' version: 10.0.1(eslint@10.2.0(jiti@2.6.1)) '@types/node': specifier: 'catalog:' - version: 25.5.2 + version: 25.6.0 '@typescript-eslint/eslint-plugin': specifier: 'catalog:' - version: 
8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + version: 8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@typescript-eslint/parser': specifier: 'catalog:' - version: 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + version: 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@vitest/coverage-v8': specifier: 'catalog:' - version: 4.1.3(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + version: 4.1.4(@voidzero-dev/vite-plus-test@0.1.18) eslint: specifier: 'catalog:' version: 10.2.0(jiti@2.6.1) @@ -629,26 +696,26 @@ importers: specifier: 'catalog:' version: 6.0.2 vite: - specifier: npm:@voidzero-dev/vite-plus-core@0.1.16 - version: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + specifier: npm:@voidzero-dev/vite-plus-core@0.1.18 + version: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' vite-plus: specifier: 'catalog:' - version: 0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + version: 0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) vitest: - specifier: npm:@voidzero-dev/vite-plus-test@0.1.16 - version: '@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + specifier: npm:@voidzero-dev/vite-plus-test@0.1.18 + version: '@voidzero-dev/vite-plus-test@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' web: dependencies: '@amplitude/analytics-browser': specifier: 'catalog:' - version: 2.38.1 + version: 2.39.0 '@amplitude/plugin-session-replay-browser': specifier: 'catalog:' - version: 1.27.6(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0) + version: 1.27.7(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0) '@base-ui/react': specifier: 'catalog:' - version: 1.3.0(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + version: 1.4.0(@date-fns/tz@1.4.1)(@types/react@19.2.14)(date-fns@4.1.0)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@emoji-mart/data': specifier: 'catalog:' version: 1.2.1 @@ -657,7 +724,7 @@ importers: version: 
0.27.19(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@formatjs/intl-localematcher': specifier: 'catalog:' - version: 0.8.2 + version: 0.8.3 '@headlessui/react': specifier: 'catalog:' version: 2.2.10(react-dom@19.2.5(react@19.2.5))(react@19.2.5) @@ -666,46 +733,46 @@ importers: version: 2.2.0(react@19.2.5) '@lexical/code': specifier: npm:lexical-code-no-prism@0.41.0 - version: lexical-code-no-prism@0.41.0(@lexical/utils@0.42.0)(lexical@0.42.0) + version: lexical-code-no-prism@0.41.0(@lexical/utils@0.43.0)(lexical@0.43.0) '@lexical/link': specifier: 'catalog:' - version: 0.42.0 + version: 0.43.0 '@lexical/list': specifier: 'catalog:' - version: 0.42.0 + version: 0.43.0 '@lexical/react': specifier: 'catalog:' - version: 0.42.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(yjs@13.6.30) + version: 0.43.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(yjs@13.6.30) '@lexical/selection': specifier: 'catalog:' - version: 0.42.0 + version: 0.43.0 '@lexical/text': specifier: 'catalog:' - version: 0.42.0 + version: 0.43.0 '@lexical/utils': specifier: 'catalog:' - version: 0.42.0 + version: 0.43.0 '@monaco-editor/react': specifier: 'catalog:' version: 4.7.0(monaco-editor@0.55.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@orpc/client': specifier: 'catalog:' - version: 1.13.13 + version: 1.13.14 '@orpc/contract': specifier: 'catalog:' - version: 1.13.13 + version: 1.13.14 '@orpc/openapi-client': specifier: 'catalog:' - version: 1.13.13 + version: 1.13.14 '@orpc/tanstack-query': specifier: 'catalog:' - version: 1.13.13(@orpc/client@1.13.13)(@tanstack/query-core@5.96.2) + version: 1.13.14(@orpc/client@1.13.14)(@tanstack/query-core@5.99.0) '@remixicon/react': specifier: 'catalog:' version: 4.9.0(react@19.2.5) '@sentry/react': specifier: 'catalog:' - version: 10.47.0(react@19.2.5) + version: 10.48.0(react@19.2.5) '@streamdown/math': specifier: 'catalog:' version: 1.0.2(react@19.2.5) @@ -720,10 +787,10 @@ importers: version: 0.5.19(tailwindcss@4.2.2) '@tanstack/react-form': specifier: 'catalog:' - version: 1.28.6(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + version: 1.29.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@tanstack/react-query': specifier: 'catalog:' - version: 5.96.2(react@19.2.5) + version: 5.99.0(react@19.2.5) '@tanstack/react-virtual': specifier: 'catalog:' version: 3.13.23(react-dom@19.2.5(react@19.2.5))(react@19.2.5) @@ -739,9 +806,6 @@ importers: client-only: specifier: 'catalog:' version: 0.0.1 - clsx: - specifier: 'catalog:' - version: 2.1.1 cmdk: specifier: 'catalog:' version: 1.1.1(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) @@ -759,7 +823,7 @@ importers: version: 10.6.0 dompurify: specifier: 'catalog:' - version: 3.3.3 + version: 3.4.0 echarts: specifier: 'catalog:' version: 6.0.0 @@ -828,7 +892,10 @@ importers: version: 1.2.1 lexical: specifier: 'catalog:' - version: 0.42.0 + version: 0.43.0 + loro-crdt: + specifier: 'catalog:' + version: 1.10.8 mermaid: specifier: 'catalog:' version: 11.14.0 @@ -852,7 +919,7 @@ importers: version: 2.8.9(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react@19.2.5) pinyin-pro: specifier: 'catalog:' - version: 3.28.0 + version: 3.28.1 qrcode.react: specifier: 'catalog:' version: 4.2.0(react@19.2.5) @@ -910,6 +977,9 @@ importers: shiki: specifier: 'catalog:' version: 4.0.2 + socket.io-client: + specifier: 'catalog:' + version: 4.8.3 sortablejs: specifier: 'catalog:' version: 1.15.7 @@ -922,9 +992,6 @@ 
importers: string-ts: specifier: 'catalog:' version: 2.3.1 - tailwind-merge: - specifier: 'catalog:' - version: 3.5.0 tldts: specifier: 'catalog:' version: 7.0.28 @@ -949,13 +1016,16 @@ importers: devDependencies: '@antfu/eslint-config': specifier: 'catalog:' - version: 8.1.1(@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@next/eslint-plugin-next@16.2.3)(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.1(typescript@6.0.2))(@typescript-eslint/utils@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(@vue/compiler-sfc@3.5.31)(eslint-plugin-react-refresh@0.5.2(eslint@10.2.0(jiti@2.6.1)))(eslint@10.2.0(jiti@2.6.1))(oxlint@1.58.0(oxlint-tsgolint@0.20.0))(typescript@6.0.2) + version: 8.2.0(@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@next/eslint-plugin-next@16.2.3)(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.18)(@vue/compiler-sfc@3.5.31)(eslint-plugin-react-refresh@0.5.2(eslint@10.2.0(jiti@2.6.1)))(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))(typescript@6.0.2) '@chromatic-com/storybook': specifier: 'catalog:' - version: 5.1.1(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)) + version: 5.1.2(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)) '@dify/iconify-collections': specifier: workspace:* version: link:../packages/iconify-collections + '@dify/tsconfig': + specifier: workspace:* + version: link:../packages/tsconfig '@egoist/tailwindcss-icons': specifier: 'catalog:' version: 1.9.2(tailwindcss@4.2.2) @@ -964,16 +1034,19 @@ importers: version: 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@hono/node-server': specifier: 'catalog:' - version: 1.19.13(hono@4.12.12) + version: 1.19.14(hono@4.12.14) '@iconify-json/heroicons': specifier: 'catalog:' version: 1.2.3 '@iconify-json/ri': specifier: 'catalog:' version: 1.2.10 + '@langgenius/dify-ui': + specifier: workspace:* + version: link:../packages/dify-ui '@mdx-js/loader': specifier: 'catalog:' - version: 3.1.1(webpack@5.105.4(uglify-js@3.19.3)) + version: 3.1.1(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) '@mdx-js/react': specifier: 'catalog:' version: 3.1.1(@types/react@19.2.14)(react@19.2.5) @@ -985,13 +1058,13 @@ importers: version: 16.2.3 '@next/mdx': specifier: 'catalog:' - version: 16.2.3(@mdx-js/loader@3.1.1(webpack@5.105.4(uglify-js@3.19.3)))(@mdx-js/react@3.1.1(@types/react@19.2.14)(react@19.2.5)) + version: 16.2.3(@mdx-js/loader@3.1.1(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)))(@mdx-js/react@3.1.1(@types/react@19.2.14)(react@19.2.5)) '@rgrove/parse-xml': specifier: 'catalog:' version: 4.2.0 '@storybook/addon-docs': specifier: 'catalog:' - version: 
10.3.5(@types/react@19.2.14)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(uglify-js@3.19.3)) + version: 10.3.5(@types/react@19.2.14)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) '@storybook/addon-links': specifier: 'catalog:' version: 10.3.5(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)) @@ -1003,7 +1076,7 @@ importers: version: 10.3.5(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)) '@storybook/nextjs-vite': specifier: 'catalog:' - version: 10.3.5(@babel/core@7.29.0)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2)(webpack@5.105.4(uglify-js@3.19.3)) + version: 10.3.5(@babel/core@7.29.0)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) '@storybook/react': specifier: 'catalog:' version: 10.3.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2) @@ -1012,19 +1085,19 @@ importers: version: 4.2.2 '@tailwindcss/vite': specifier: 'catalog:' - version: 4.2.2(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + version: 4.2.2(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) '@tanstack/eslint-plugin-query': specifier: 'catalog:' - version: 5.96.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + version: 5.99.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@tanstack/react-devtools': specifier: 'catalog:' version: 0.10.2(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(csstype@3.2.3)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(solid-js@1.9.11) '@tanstack/react-form-devtools': specifier: 'catalog:' - version: 0.2.20(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.5)(solid-js@1.9.11) + version: 0.2.21(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.5)(solid-js@1.9.11) '@tanstack/react-query-devtools': specifier: 'catalog:' - version: 5.96.2(@tanstack/react-query@5.96.2(react@19.2.5))(react@19.2.5) + version: 
5.99.0(@tanstack/react-query@5.99.0(react@19.2.5))(react@19.2.5) '@testing-library/dom': specifier: 'catalog:' version: 10.4.1 @@ -1039,13 +1112,13 @@ importers: version: 14.6.1(@testing-library/dom@10.4.1) '@tsslint/cli': specifier: 'catalog:' - version: 3.0.2(@tsslint/compat-eslint@3.0.2(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2) + version: 3.0.3(@tsslint/compat-eslint@3.0.3(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2) '@tsslint/compat-eslint': specifier: 'catalog:' - version: 3.0.2(jiti@2.6.1)(typescript@6.0.2) + version: 3.0.3(jiti@2.6.1)(typescript@6.0.2) '@tsslint/config': specifier: 'catalog:' - version: 3.0.2(@tsslint/compat-eslint@3.0.2(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2) + version: 3.0.3(@tsslint/compat-eslint@3.0.3(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2) '@types/js-cookie': specifier: 'catalog:' version: 3.0.6 @@ -1057,7 +1130,7 @@ importers: version: 0.6.4 '@types/node': specifier: 'catalog:' - version: 25.5.2 + version: 25.6.0 '@types/qs': specifier: 'catalog:' version: 6.15.0 @@ -1072,19 +1145,19 @@ importers: version: 1.15.9 '@typescript-eslint/parser': specifier: 'catalog:' - version: 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + version: 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@typescript/native-preview': specifier: 'catalog:' - version: 7.0.0-dev.20260408.1 + version: 7.0.0-dev.20260413.1 '@vitejs/plugin-react': specifier: 'catalog:' - version: 6.0.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + version: 6.0.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) '@vitejs/plugin-rsc': specifier: 'catalog:' - version: 0.5.23(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.5(react@19.2.5))(react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.5) + version: 0.5.24(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.5(react@19.2.5))(react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)))(react@19.2.5) '@vitest/coverage-v8': specifier: 'catalog:' - version: 4.1.3(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + version: 4.1.4(@voidzero-dev/vite-plus-test@0.1.18) agentation: specifier: 'catalog:' version: 3.0.2(react-dom@19.2.5(react@19.2.5))(react@19.2.5) @@ -1099,16 +1172,16 @@ importers: version: 0.6.1(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-better-tailwindcss: specifier: 'catalog:' - version: 4.3.2(eslint@10.2.0(jiti@2.6.1))(oxlint@1.58.0(oxlint-tsgolint@0.20.0))(tailwindcss@4.2.2)(typescript@6.0.2) + version: 4.4.1(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))(tailwindcss@4.2.2)(typescript@6.0.2) eslint-plugin-hyoban: specifier: 'catalog:' version: 0.14.1(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-markdown-preferences: specifier: 'catalog:' - version: 
0.41.0(@eslint/markdown@8.0.1)(eslint@10.2.0(jiti@2.6.1)) + version: 0.41.1(@eslint/markdown@8.0.1)(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-no-barrel-files: specifier: 'catalog:' - version: 1.2.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + version: 1.3.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint-plugin-react-refresh: specifier: 'catalog:' version: 0.5.2(eslint@10.2.0(jiti@2.6.1)) @@ -1120,19 +1193,19 @@ importers: version: 10.3.5(eslint@10.2.0(jiti@2.6.1))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2) happy-dom: specifier: 'catalog:' - version: 20.8.9 + version: 20.9.0 hono: specifier: 'catalog:' - version: 4.12.12 + version: 4.12.14 knip: specifier: 'catalog:' - version: 6.3.1(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) + version: 6.4.1(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) postcss: specifier: 'catalog:' version: 8.5.9 react-server-dom-webpack: specifier: 'catalog:' - version: 19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(uglify-js@3.19.3)) + version: 19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) storybook: specifier: 'catalog:' version: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) @@ -1150,22 +1223,22 @@ importers: version: 3.19.3 vinext: specifier: 'catalog:' - version: 0.0.41(@mdx-js/rollup@3.1.1(rollup@4.59.0))(@vitejs/plugin-react@6.0.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)))(@vitejs/plugin-rsc@0.5.23(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.5(react@19.2.5))(react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.5))(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react-dom@19.2.5(react@19.2.5))(react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.5)(typescript@6.0.2) + version: 0.0.41(453b4e184a832f83060410b31544dc36) vite: - specifier: npm:@voidzero-dev/vite-plus-core@0.1.16 - version: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + specifier: npm:@voidzero-dev/vite-plus-core@0.1.18 + version: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' vite-plugin-inspect: specifier: 'catalog:' - version: 12.0.0-beta.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0) + version: 12.0.0-beta.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0) vite-plus: specifier: 'catalog:' - version: 
0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + version: 0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) vitest: - specifier: npm:@voidzero-dev/vite-plus-test@0.1.16 - version: '@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + specifier: npm:@voidzero-dev/vite-plus-test@0.1.18 + version: '@voidzero-dev/vite-plus-test@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' vitest-canvas-mock: specifier: 'catalog:' - version: 1.1.4(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + version: 1.1.4(@voidzero-dev/vite-plus-test@0.1.18) packages: @@ -1176,17 +1249,17 @@ packages: resolution: {integrity: sha512-UrcABB+4bUrFABwbluTIBErXwvbsU/V7TZWfmbgJfbkwiBuziS9gxdODUyuiecfdGQ85jglMW6juS3+z5TsKLw==} engines: {node: '>=10'} - '@amplitude/analytics-browser@2.38.1': - resolution: {integrity: sha512-8E3WDuCz5pmVysw7iwT9MjltzaO7Sqy9jWNaXovO30Z8sXs5Ncl32qv6o14kwlpl3wRSaaAKDe0Z3Grjx3dYYQ==} + '@amplitude/analytics-browser@2.39.0': + resolution: {integrity: sha512-sTNGGjiubsDs1NqKsTXp0ykCaSIzjaGclMRHlnO7JBatqK0f/Knl0cfn1a7XBFuTVix/M5nrWATsKv6+0dSpMg==} - '@amplitude/analytics-client-common@2.4.42': - resolution: {integrity: sha512-pEpE6s8GsXTlD9Jj4b/wplCQD8fT2ml/VZSnQ1E5sU0goaeZaYQKMTXGpbA2aE40ABZMwQSopxJn+puBrJc8eg==} + '@amplitude/analytics-client-common@2.4.43': + resolution: {integrity: sha512-R5n3cfnVNLk32BE2DbCp4xpn39mfmjMUjvOO9kt5dLFdF0cozb9MCawVyZJQVfnJJT6k5NMoswdUBu7Ul0nbRw==} '@amplitude/analytics-connector@1.6.4': resolution: {integrity: sha512-SpIv0IQMNIq6SH3UqFGiaZyGSc7PBZwRdq7lvP0pBxW8i4Ny+8zwI0pV+VMfMHQwWY3wdIbWw5WQphNjpdq1/Q==} - '@amplitude/analytics-core@2.44.1': - resolution: {integrity: sha512-bx8RAYneoEyT/gsCpcktEgBMUs5vIb2piA/Kof88BaNKAWEpIa9B4Ogg4vNPqmEgNIx/wztSduFMHHw2pLcncg==} + '@amplitude/analytics-core@2.45.0': + resolution: {integrity: sha512-vWRYbXu2Grs1GM+WHo03RPtbaPs5sJm21YQcAow9JASvtoY4xNqItIeRydCJQWtFHhbbxY41n+CVW6mzDP6aBA==} '@amplitude/analytics-types@2.11.1': resolution: {integrity: sha512-wFEgb0t99ly2uJKm5oZ28Lti0Kh5RecR5XBkwfUpDzn84IoCIZ8GJTsMw/nThu8FZFc7xFDA4UAt76zhZKrs9A==} @@ -1194,26 +1267,26 @@ packages: '@amplitude/experiment-core@0.7.2': resolution: {integrity: sha512-Wc2NWvgQ+bLJLeF0A9wBSPIaw0XuqqgkPKsoNFQrmS7r5Djd56um75In05tqmVntPJZRvGKU46pAp8o5tdf4mA==} - '@amplitude/plugin-autocapture-browser@1.25.1': - resolution: 
{integrity: sha512-eIaPO7eUH2W0OWe0JoqUVvMPUGDeOn4JQa7zdClEbvHnPxfGS1RHIFNsBk5ofgEWxhUo2Ka/Z0Wl86k9FMaa7w==} + '@amplitude/plugin-autocapture-browser@1.25.2': + resolution: {integrity: sha512-AWzIX0uit60Q742rH/96/n88e+3BaVZa4+7Xs+BeuuIOyrljOZlQKzH23Lxzkl0DgbNb5+MMqWds0pov3DV5TA==} - '@amplitude/plugin-custom-enrichment-browser@0.1.3': - resolution: {integrity: sha512-iKZkqkI5CpLb62cGNgvqTVEUj8i5UBFWJc0aQMZZBqc+vmzHBaqvjeAU0dwO8KA623YfT5I+/Vp1MnqvEXGJFg==} + '@amplitude/plugin-custom-enrichment-browser@0.1.4': + resolution: {integrity: sha512-vxuQocn8YGE2wMLZUmotRG8c6RijoaQAsHKDQEO56CNk3WhSecgSGMnlHcUcOYIzwfXKFj4MxRJS386kdDHV+Q==} - '@amplitude/plugin-network-capture-browser@1.9.12': - resolution: {integrity: sha512-/8x+GDqE25pTvsU9Po7Ur+V8pUuX4IG5p2xHPM9N/APfyc3D1zLTkC8FKo8wfPpg4Wu97mSzy1JnvPDqbJcJyw==} + '@amplitude/plugin-network-capture-browser@1.9.13': + resolution: {integrity: sha512-8uzTQFbP+dvqJX+S39KqKw+EheJW8JCWT/xlXT55vtTU/ZTFeF074QnHFEKUPewpYXpwKXgJky8PDoMk0b46Qw==} - '@amplitude/plugin-page-url-enrichment-browser@0.7.4': - resolution: {integrity: sha512-gF7V1ypkYB7FTwKlqjbO+7Z+Wvf72RfA64aREj9aplZdRJ0EY3qSEYMA3L2v0U5ztYchiy5MJraSaaxKfzXdJg==} + '@amplitude/plugin-page-url-enrichment-browser@0.7.5': + resolution: {integrity: sha512-0Q7P5vsue/s92i3zevVDVJf9AiHkbxGdwkB8iV2oWgkXtglzWugwr//qN+muHmXdi1ZWxRjm93CW+jQJVripgw==} - '@amplitude/plugin-page-view-tracking-browser@2.9.5': - resolution: {integrity: sha512-fWewMrgo0T7AyKnrZn6ox0ER5Ibw/IFTkX0GrQ8DxcsXrmUuSWUTsxZaA7YPDzuWPbd4AX9/AWZF2i6A9Ybtfg==} + '@amplitude/plugin-page-view-tracking-browser@2.9.6': + resolution: {integrity: sha512-/4lG2lXIB6qbQNf1VYQ5fDOnvInPEtYuOgvmyLfuZ6PvHVFUu4NZtoOVdAcy0R9x76rNyCpRXxdL78p9Ra1ANA==} - '@amplitude/plugin-session-replay-browser@1.27.6': - resolution: {integrity: sha512-wHv9b/Qzu9qg0thE+qo23/KpYGiADnAj42I1C1goQAJG7XNOk62F0sdejVvnQIV9NsLe0ItoS+tg3eqlBE7Exg==} + '@amplitude/plugin-session-replay-browser@1.27.7': + resolution: {integrity: sha512-KcGMFaBGqZAOm1Gdzio9d95IL3Nmp5J1xOu1PD0NAPYLfW1MyoyA5PFIIlMqqVf1DoCjmgqP7AY4swetU2tpWg==} - '@amplitude/plugin-web-vitals-browser@1.1.27': - resolution: {integrity: sha512-jh/dWMsthx5E+ensNTwj7nkqi8iG8wyJc1HryOdY49w9zTgcbZmJwE2uumLBXBasn7l62a5EdqRkwctGL53fHw==} + '@amplitude/plugin-web-vitals-browser@1.1.28': + resolution: {integrity: sha512-gs4Y1eOuVUEDwYEJF82f/GmgQ7iM4Y/eZTkftJKjFsBNbrPro2CuLymfdAcC+QuVfyrp3qAiWcSGnjDXA6ZbQg==} '@amplitude/rrdom@2.0.0-alpha.37': resolution: {integrity: sha512-u4dSnBtlbJ8oU5P/Ywl2RLqvjqWbkl4ScMUbvQA7in4pWcx+0NRN+VVjLZXQcd8Fn7E/rcxjeUh7e7HfwvdasQ==} @@ -1247,14 +1320,14 @@ packages: '@amplitude/rrweb@2.0.0-alpha.37': resolution: {integrity: sha512-jJkSpPYiVgOZB422pb2jOJJn3pvb5E5f9vKK8CEmUlk2mVAl6kPQzW98mb05M65OJFj5nn9tSe9h5r5+Cl93ag==} - '@amplitude/session-replay-browser@1.35.1': - resolution: {integrity: sha512-7X6T+niZaG+zpvcFOwdkbTNUWzD6T9/rQ7POYkTK+C/6FtvJ0fpHXNHdHT8fozKox2UXL/wwZvoQWFriHSe1dA==} + '@amplitude/session-replay-browser@1.36.0': + resolution: {integrity: sha512-HZpNRMRAiLbzGF84DzF+ZH5WztJH4tVe2e/FzYJ2r27Sgf2gftCmzCB9pN8BXXcHKYtQK8/Qol+PTmSIzvyvEw==} '@amplitude/targeting@0.2.0': resolution: {integrity: sha512-/50ywTrC4hfcfJVBbh5DFbqMPPfaIOivZeb5Gb+OGM03QrA+lsUqdvtnKLNuWtceD4H6QQ2KFzPJ5aAJLyzVDA==} - '@antfu/eslint-config@8.1.1': - resolution: {integrity: sha512-y5/eAKlJUbQpeES2Pnb0i/VgbmqQ+srHJJNqbTKEBsxdLy3h1BqdS00zDpE+YeP71EWmlYJSTUhcJg4n4yMeAQ==} + '@antfu/eslint-config@8.2.0': + resolution: {integrity: sha512-spfwYXMNrlkl69riTSBnbC0C2K8EVfVMOK3ceP2EpAAioyfprIW1gTwyLRtd9jZSFeNdX4mFNAIG+o0sOneOfA==} 
hasBin: true peerDependencies: '@angular-eslint/eslint-plugin': ^21.1.0 @@ -1391,19 +1464,21 @@ packages: resolution: {integrity: sha512-LwdZHpScM4Qz8Xw2iKSzS+cfglZzJGvofQICy7W7v4caru4EaAmyUuO6BGrbyQ2mYV11W0U8j5mBhd14dd3B0A==} engines: {node: '>=6.9.0'} - '@base-ui/react@1.3.0': - resolution: {integrity: sha512-FwpKqZbPz14AITp1CVgf4AjhKPe1OeeVKSBMdgD10zbFlj3QSWelmtCMLi2+/PFZZcIm3l87G7rwtCZJwHyXWA==} + '@base-ui/react@1.4.0': + resolution: {integrity: sha512-QcqdVbr/+ba2/RAKJIV1PV6S02Q5+r6a4Eym8ndBw+ZbBILkkmQAyRxXCg/pArrHnkrGeU8goe26aw0h6eE8pg==} engines: {node: '>=14.0.0'} peerDependencies: + '@date-fns/tz': ^1.2.0 '@types/react': ^17 || ^18 || ^19 + date-fns: ^4.0.0 react: ^17 || ^18 || ^19 react-dom: ^17 || ^18 || ^19 peerDependenciesMeta: '@types/react': optional: true - '@base-ui/utils@0.2.6': - resolution: {integrity: sha512-yQ+qeuqohwhsNpoYDqqXaLllYAkPCP4vYdDrVo8FQXaAPfHWm1pG/Vm+jmGTA5JFS0BAIjookyapuJFY8F9PIw==} + '@base-ui/utils@0.2.7': + resolution: {integrity: sha512-nXYKhiL/0JafyJE8PfcflipGftOftlIwKd72rU15iZ1M5yqgg5J9P8NHU71GReDuXco5MJA/eVQqUT5WRqX9sA==} peerDependencies: '@types/react': ^17 || ^18 || ^19 react: ^17 || ^18 || ^19 @@ -1434,8 +1509,8 @@ packages: '@chevrotain/utils@11.1.2': resolution: {integrity: sha512-4mudFAQ6H+MqBTfqLmU7G1ZwRzCLfJEooL/fsF6rCX5eePMbGhoy5n4g+G4vlh2muDcsCTJtL+uKbOzWxs5LHA==} - '@chromatic-com/storybook@5.1.1': - resolution: {integrity: sha512-BPoAXHM71XgeCK2u0jKr9i8apeQMm/Z9IWGyndA2FMijfQG9m8ox45DdWh/pxFkK5ClhGgirv5QwMhFIeHmThg==} + '@chromatic-com/storybook@5.1.2': + resolution: {integrity: sha512-H/hgvwC3E+OtseP2OT2QYUJH2VfnzT6wM3pWOkaNV6g7QI+VUdWJbeJ3o2jFqvEPQNqzhQKWDOlvM4lu+7is6g==} engines: {node: '>=20.0.0', yarn: '>=1.22.18'} peerDependencies: storybook: ^0.0.0-0 || ^10.1.0 || ^10.1.0-0 || ^10.2.0-0 || ^10.3.0-0 || ^10.4.0-0 @@ -1480,8 +1555,8 @@ packages: '@cucumber/cucumber-expressions@19.0.0': resolution: {integrity: sha512-4FKoOQh2Uf6F6/Ln+1OxuK8LkTg6PyAqekhf2Ix8zqV2M54sH+m7XNJNLhOFOAW/t9nxzRbw2CcvXbCLjcvHZg==} - '@cucumber/cucumber@12.7.0': - resolution: {integrity: sha512-7A/9CJpJDxv1SQ7hAZU0zPn2yRxx6XMR+LO4T94Enm3cYNWsEEj+RGX38NLX4INT+H6w5raX3Csb/qs4vUBsOA==} + '@cucumber/cucumber@12.8.0': + resolution: {integrity: sha512-sRG2QMAgCic4Uq1q+5LRzApEHiNGX5rhQY/GuOJZ9BIySrGPA9pevB0imJsZvdzt9scaWyIM3c7dIf4Dp1YQRA==} engines: {node: 20 || 22 || >=24} hasBin: true @@ -1505,18 +1580,18 @@ packages: peerDependencies: '@cucumber/messages': '>=18' - '@cucumber/junit-xml-formatter@0.9.0': - resolution: {integrity: sha512-WF+A7pBaXpKMD1i7K59Nk5519zj4extxY4+4nSgv5XLsGXHDf1gJnb84BkLUzevNtp2o2QzMG0vWLwSm8V5blw==} + '@cucumber/junit-xml-formatter@0.13.2': + resolution: {integrity: sha512-worYkxjeOWJV+b7WkgJekWgFHlIhbuocnFK3hP+pMYXqZMmkXsxAorYPjeF8KyLnZXajw5fKHS2bM9rQIUI7Zw==} peerDependencies: '@cucumber/messages': '*' - '@cucumber/message-streams@4.0.1': - resolution: {integrity: sha512-Kxap9uP5jD8tHUZVjTWgzxemi/0uOsbGjd4LBOSxcJoOCRbESFwemUzilJuzNTB8pcTQUh8D5oudUyxfkJOKmA==} + '@cucumber/message-streams@4.1.1': + resolution: {integrity: sha512-QCAntLajesWMyX+mZKrj63YghVAts7yKFlZe46XprLbdJZN0ddB+f/Mr9OnyWKC2DHhJ18jzCfKIFCaqpAmUxg==} peerDependencies: '@cucumber/messages': '>=17.1.1' - '@cucumber/messages@32.0.1': - resolution: {integrity: sha512-1OSoW+GQvFUNAl6tdP2CTBexTXMNJF0094goVUcvugtQeXtJ0K8sCP0xbq7GGoiezs/eJAAOD03+zAPT64orHQ==} + '@cucumber/messages@32.2.0': + resolution: {integrity: sha512-oYp1dgL2TByYWL51Z+rNm+/mFtJhiPU9WS03goes9EALb8d9GFcXRbG1JluFLFaChF1YDqIzLac0kkC3tv1DjQ==} '@cucumber/pretty-formatter@1.0.1': resolution: {integrity: 
sha512-A1lU4VVP0aUWdOTmpdzvXOyEYuPtBDI0xYwYJnmoMDplzxMdhcHk86lyyvYDoMoPzzq6OkOE3isuosvUU4X7IQ==} @@ -1532,6 +1607,9 @@ packages: '@cucumber/tag-expressions@9.1.0': resolution: {integrity: sha512-bvHjcRFZ+J1TqIa9eFNO1wGHqwx4V9ZKV3hYgkuK/VahHx73uiP4rKV3JVrvWSMrwrFvJG6C8aEwnCWSvbyFdQ==} + '@date-fns/tz@1.4.1': + resolution: {integrity: sha512-P5LUNhtbj6YfI3iJjw5EL9eUAG6OitD0W3fWQcpQjDRc/QIsL0tRNuO1PcDvPccWL1fSTXXdE1ds+l95DV/OFA==} + '@e18e/eslint-plugin@0.3.0': resolution: {integrity: sha512-hHgfpxsrZ2UYHcicA+tGZnmk19uJTaye9VH79O+XS8R4ona2Hx3xjhXghclNW58uXMk3xXlbYEOMr8thsoBmWg==} peerDependencies: @@ -1548,14 +1626,17 @@ packages: peerDependencies: tailwindcss: '*' - '@emnapi/core@1.9.1': - resolution: {integrity: sha512-mukuNALVsoix/w1BJwFzwXBN/dHeejQtuVzcDsfOEsdpCumXb/E9j8w11h5S54tT1xhifGfbbSm/ICrObRb3KA==} + '@emnapi/core@1.9.2': + resolution: {integrity: sha512-UC+ZhH3XtczQYfOlu3lNEkdW/p4dsJ1r/bP7H8+rhao3TTTMO1ATq/4DdIi23XuGoFY+Cz0JmCbdVl0hz9jZcA==} '@emnapi/runtime@1.9.1': resolution: {integrity: sha512-VYi5+ZVLhpgK4hQ0TAjiQiZ6ol0oe4mBx7mVv7IflsiEp0OWoVsp/+f9Vc1hOhE0TtkORVrI1GvzyreqpgWtkA==} - '@emnapi/wasi-threads@1.2.0': - resolution: {integrity: sha512-N10dEJNSsUx41Z6pZsXU8FjPjpBEplgH24sfkmITrBED1/U2Esum9F3lfLrMjKHHjmi557zQn7kR9R+XWXu5Rg==} + '@emnapi/runtime@1.9.2': + resolution: {integrity: sha512-3U4+MIWHImeyu1wnmVygh5WlgfYDtyf0k8AbLhMFxOipihf6nrWC4syIm/SwEeec0mNSafiiNnMJwbza/Is6Lw==} + + '@emnapi/wasi-threads@1.2.1': + resolution: {integrity: sha512-uTII7OYF+/Mes/MrcIOYp5yOtSMLBWSIoLPpcgwipoiKbli6k322tcoFsxoIIxPDqW01SQGAgko4EzZi2BNv2w==} '@emoji-mart/data@1.2.1': resolution: {integrity: sha512-no2pQMWiBy6gpBEiqGeU77/bFejDqUTRY7KX+0+iur13op3bqUsXdnwoZs6Xb1zbv0gAj5VvS1PWoUUckSr5Dw==} @@ -1820,9 +1901,9 @@ packages: resolution: {integrity: sha512-8FTGbNzTvmSlc4cZBaShkC6YvFMG0riksYWRFKXztqVdXaQbcZLXlFbSpC05s70sGEsXAw0qwhx69JiW7hQS7A==} engines: {node: ^20.19.0 || ^22.13.0 || >=24} - '@eslint/css-tree@3.6.9': - resolution: {integrity: sha512-3D5/OHibNEGk+wKwNwMbz63NMf367EoR4mVNNpxddCHKEb2Nez7z62J2U6YjtErSsZDoY0CsccmoUpdEbkogNA==} - engines: {node: ^10 || ^12.20.0 || ^14.13.0 || >=15.0.0} + '@eslint/css-tree@4.0.1': + resolution: {integrity: sha512-2fCSKRwoUHntYq9J1Lm28s2zeoCSNh1Cbk6Tg7k7ViwOnveIfZwPRFGwBglz+dzw2MHe5w5Fo9+VJfqL9nco2w==} + engines: {node: ^20.19.0 || ^22.13.0 || >=24} '@eslint/eslintrc@3.3.5': resolution: {integrity: sha512-4IlJx0X0qftVsN5E+/vGujTRIFtwuLbNsVUe7TO6zYPDR1O6nFwvwhIKEKSrl6dZchmYBITazxKoUYOjdtjlRg==} @@ -1900,11 +1981,11 @@ packages: '@floating-ui/utils@0.2.11': resolution: {integrity: sha512-RiB/yIh78pcIxl6lLMG0CgBXAZ2Y0eVHqMPYugu+9U0AeT6YBeiJpf7lbdJNIugFP5SIjwNRgo4DhR1Qxi26Gg==} - '@formatjs/fast-memoize@3.1.1': - resolution: {integrity: sha512-CbNbf+tlJn1baRnPkNePnBqTLxGliG6DDgNa/UtV66abwIjwsliPMOt0172tzxABYzSuxZBZfcp//qI8AvBWPg==} + '@formatjs/fast-memoize@3.1.2': + resolution: {integrity: sha512-vPnriihkfK0lzoQGaXq+qXH23VsYyansRTkTgo2aTG0k1NjLFyZimFVdfj4C9JkSE5dm7CEngcQ5TTc1yAyBfQ==} - '@formatjs/intl-localematcher@0.8.2': - resolution: {integrity: sha512-q05KMYGJLyqFNFtIb8NhWLF5X3aK/k0wYt7dnRFuy6aLQL+vUwQ1cg5cO4qawEiINybeCPXAWlprY2mSBjSXAQ==} + '@formatjs/intl-localematcher@0.8.3': + resolution: {integrity: sha512-pHUjWb9NuhnMs8+PxQdzBtZRFJHlGhrURGAbm6Ltwl82BFajeuiIR3jblSa7ia3r62rXe/0YtVpUG3xWr5bFCA==} '@headlessui/react@2.2.10': resolution: {integrity: sha512-5pVLNK9wlpxTUTy9GpgbX/SdcRh+HBnPktjM2wbiLTH4p+2EPHBO1aoSryUCuKUIItdDWO9ITlhUL8UnUN/oIA==} @@ -1918,8 +1999,8 @@ packages: peerDependencies: react: '>= 16 || ^19.0.0-rc' - 
'@hono/node-server@1.19.13': - resolution: {integrity: sha512-TsQLe4i2gvoTtrHje625ngThGBySOgSK3Xo2XRYOdqGN1teR8+I7vchQC46uLJi8OF62YTYA3AhSpumtkhsaKQ==} + '@hono/node-server@1.19.14': + resolution: {integrity: sha512-GwtvgtXxnWsucXvbQXkRgqksiH2Qed37H9xHZocE5sA3N8O8O8/8FA3uclQXxXVzc9XBZuEOMK7+r02FmSpHtw==} engines: {node: '>=18.14.1'} peerDependencies: hono: ^4 @@ -2143,77 +2224,77 @@ packages: '@jridgewell/trace-mapping@0.3.31': resolution: {integrity: sha512-zzNR+SdQSDJzc8joaeP8QQoCQr8NuYx2dIIytl1QeBEZHJ9uW6hebsrYgbz8hJwUQao3TWCMtmfV8Nu1twOLAw==} - '@lexical/clipboard@0.42.0': - resolution: {integrity: sha512-D3K2ID0zew/+CKpwxnUTTh/N46yU4IK8bFWV9Htz+g1vFhgUF9UnDOQCmqpJbdP7z+9U1F8rk3fzf9OmP2Fm2w==} + '@lexical/clipboard@0.43.0': + resolution: {integrity: sha512-3dWDusVyM9EosBt4/n/ERyPIGOyuWuECj9zbvJdzGUdvu/VsqCdlyDsU5M7NxTUNQn2Fhkdj2o00UeB6bagX5Q==} - '@lexical/code-core@0.42.0': - resolution: {integrity: sha512-vrZTUPWDJkHjAAvuV2+Qte4vYE80s7hIO7wxipiJmWojGx6lcmQjO+UqJ8AIrqI4Wjy8kXrK74kisApWmwxuCw==} + '@lexical/code-core@0.43.0': + resolution: {integrity: sha512-8NtEOI4+hM688Pmd0Qh/aTCS5uovps902V53LGB15DUUwwL+Z5U+Hz7ZYozhyM6W755FQ3x15qtEGIIbDHE5bQ==} - '@lexical/devtools-core@0.42.0': - resolution: {integrity: sha512-8nP8eE9i8JImgSrvInkWFfMCmXVKp3w3VaOvbJysdlK/Zal6xd8EWJEi6elj0mUW5T/oycfipPs2Sfl7Z+n14A==} + '@lexical/devtools-core@0.43.0': + resolution: {integrity: sha512-Hyz8vxvmo0aThXjq3+t0mabozmQeb6U+pxKceAgBSxE9oLWbQmP7RW8jYPZW20bYqEcX1Kgmu+CdW8e3eSF7Kw==} peerDependencies: react: '>=17.x' react-dom: '>=17.x' - '@lexical/dragon@0.42.0': - resolution: {integrity: sha512-/TQzP+7PLJMqq9+MlgQWiJsxS9GOOa8Gp0svCD8vNIOciYmXfd28TR1Go+ZnBWwr7k/2W++3XUYVQU2KUcQsDQ==} + '@lexical/dragon@0.43.0': + resolution: {integrity: sha512-wB2s8uO9DFwS5err1wM+7Yoz3cixtEXy1ZiU8RoJJ7tmjSEmQsLIflAQq8Lic291tCNPs+lSHKjdw+52vi0Z7Q==} - '@lexical/extension@0.42.0': - resolution: {integrity: sha512-rkZq/h8d1BenKRqU4t/zQUVfY/RinMX1Tz7t+Ee3ss0sk+kzP4W+URXNAxpn7r39Vn6wrFBqmCziah3dLAIqPw==} + '@lexical/extension@0.43.0': + resolution: {integrity: sha512-hCFj//3RhsPrCmx8VRTTLIsWtC2n5GG03ZDdyrgmeLzXNuknwDqhzaGAfQi9LSYn+NU+j3yCUROu8pZqaedtvw==} - '@lexical/hashtag@0.42.0': - resolution: {integrity: sha512-WOg5nFOfhabNBXzEIutdWDj+TUHtJEezj6w8jyYDGqZ31gu0cgrXSeV8UIynz/1oj+rpzEeEB7P6ODnwgjt7qA==} + '@lexical/hashtag@0.43.0': + resolution: {integrity: sha512-oCKjY8/jkxJuu8iBnNX0WSLA6ZIYTn+v3NLpJxDqnAFZJCnJ2i/nM8GKzPMzHCDzJVNxbQB08fOptdXf8eN0Fg==} - '@lexical/history@0.42.0': - resolution: {integrity: sha512-YfCZ1ICUt6BCg2ncJWFMuS4yftnB7FEHFRf3qqTSTf6oGZ4IZfzabMNEy47xybUuf7FXBbdaCKJrc/zOM+wGxw==} + '@lexical/history@0.43.0': + resolution: {integrity: sha512-SdrH3xgtUcolVRLihbQwiANQIiwSLdkKBon9oSsZNNnzVgEb7DUQUtJQGf33oW8HHWObIuWkh72W0fN1dZixOw==} - '@lexical/html@0.42.0': - resolution: {integrity: sha512-KgBUDLXehufCsXW3w0XsuoI2xecIhouOishnaNOH4zIA7dAtnNAfdPN/kWrWs0s83gz44OrnqccP+Bprw3UDEQ==} + '@lexical/html@0.43.0': + resolution: {integrity: sha512-C6LpUQlRl9J8Hqpm/C8LCX1ZxFHyD/gvOdV+NuNGnXN06uo0jDDm9SNh/HI3VWvFu9ec4OuzUkQRCafW8WC8fQ==} - '@lexical/link@0.42.0': - resolution: {integrity: sha512-cdeM/+f+kn7aGwW/3FIi6USjl1gBNdEEwg0/ZS+KlYcsy8gxx2e4cyVjsomBu/WU17Qxa0NC0paSr7qEJ/1Fig==} + '@lexical/link@0.43.0': + resolution: {integrity: sha512-jjU9PVWWBA2yEssbVkLQpu1ZIpXi3JwYb+JO20R47hzUm7T8SAPDd/VwU+2tcjqz065YntSGIaQ79dCft7WOJw==} - '@lexical/list@0.42.0': - resolution: {integrity: sha512-TIezILnmIVuvfqEEbcMnsT4xQRlswI6ysHISqsvKL6l5EBhs1gqmNYjHa/Yrfzaq5y52TM1PAtxbFts+G7N6kg==} + '@lexical/list@0.43.0': + resolution: {integrity: 
sha512-WyYVeQa2x1LrI8Emr9AiWTjSMiZw77Zy7MRnohPTdX/4fu3Njfw61lpoonCNHlv/r5Mb/RHkIAwWjtjcSzwA+g==} - '@lexical/mark@0.42.0': - resolution: {integrity: sha512-H1aGjbMEcL4B8GT7bm/ePHm7j3Wema+wIRNPmxMtXGMz5gpVN3gZlvg2UcUHHJb00SrBA95OUVT5I2nu/KP06w==} + '@lexical/mark@0.43.0': + resolution: {integrity: sha512-pgwR5ia2ECDS0pyQxIrFvMOKjffI6fo2cGwqYg+Jz+ANMqE5zD4PoOUs7FEuZYAKPOAQR9GrETB7YAVSzKjk3Q==} - '@lexical/markdown@0.42.0': - resolution: {integrity: sha512-+mOxgBiumlgVX8Acna+9HjJfSOw1jywufGcAQq3/8S11wZ4gE0u13AaR8LMmU8ydVeOQg09y8PNzGNQ/avZJbg==} + '@lexical/markdown@0.43.0': + resolution: {integrity: sha512-bJYhISQkdRo6XxcajgP9T+c8XAGfkJ/DHnSvM5nyJnHD0vZSH/2RZd2Lgt0eAnMVEt9ECG8cUkR557QSaPeJBA==} - '@lexical/offset@0.42.0': - resolution: {integrity: sha512-V+4af1KmTOnBZrR+kU3e6eD33W/g3QqMPPp3cpFwyXk/dKRc4K8HfyDsSDrjop1mPd9pl3lKSiEmX6uQG8K9XQ==} + '@lexical/offset@0.43.0': + resolution: {integrity: sha512-SYNF16Hk17ePaxFtPcBx3rzSM8yxDYSAzkSOdnUUePSzfTW3DUDzvUfe7q/7QCe/UlZd+4ULI0VjNgYRlR8Uiw==} - '@lexical/overflow@0.42.0': - resolution: {integrity: sha512-wlrHaM27rODJP5m+CTgfZGLg3qWlQ0ptGodcqoGdq6HSbV8nGFY6TvcLMaMtYQ1lm4v9G7Xe9LwjooR6xS3Gug==} + '@lexical/overflow@0.43.0': + resolution: {integrity: sha512-Usm7UfIwydhsg+qMbkBav79AOKqYa32zXY+TXveTqbaA+IAoIl3vFYP9x9ie4cHz/kgrmt/QuQs66cwPefRakg==} - '@lexical/plain-text@0.42.0': - resolution: {integrity: sha512-YWvBwIxLltrIaZDcv0rK4s44P6Yt17yhOb0E+g3+tjF8GGPrrocox+Pglu0m2RHR+G7zULN3isolmWIm/HhWiw==} + '@lexical/plain-text@0.43.0': + resolution: {integrity: sha512-wza2z2+OSsq3UPsFseqsVvnAWvW9s3W/rjQuf6Bk2/Xde2F3R7fvu3kArsaaVPzUKTVeOPCD8hUKIUpxP5OT2g==} - '@lexical/react@0.42.0': - resolution: {integrity: sha512-ujWJXhvlFVVTpwDcnSgEYWRuqUbreZaMB+4bjIDT5r7hkAplUHQndlkeuFHKFiJBasSAreleV7zhXrLL5xa9eA==} + '@lexical/react@0.43.0': + resolution: {integrity: sha512-Ov9PCS7Ghm83fmjSDr6CafDLsuMhf7A7FFfEr4DmDM/6Lw2w0a0QQJP+KqxPqaVaRgeQMJAVg38Zgrvuk3v7tw==} peerDependencies: react: '>=17.x' react-dom: '>=17.x' - '@lexical/rich-text@0.42.0': - resolution: {integrity: sha512-v4YgiM3oK3FZcRrfB+LetvLbQ5aee9MRO9tHf0EFweXg19XnSjHV0cfPAW7TyPxRELzB69+K0Q3AybRlTMjG4Q==} + '@lexical/rich-text@0.43.0': + resolution: {integrity: sha512-y6uhY5X+PBLg8LSCDazSMAkUfA1RwBW6DFOuUKW5SI1DaB/oc/vpQhkR1DYGqXnytMx7hfiK+7lL51ZC0ydeWg==} - '@lexical/selection@0.42.0': - resolution: {integrity: sha512-iWTjLA5BSEuUnvWe9Xwu9FSdZFl3Yi0NqalabXKI+7KgCIlIVXE74y4NvWPUSLkSCB/Z1RPKiHmZqZ1vyu/yGQ==} + '@lexical/selection@0.43.0': + resolution: {integrity: sha512-sdKdXIFggtHxTctvXjTyx2RgWuKOOP3PhrzRJF+COGfckrr/YzDtQCOfyvktElyKEeYXa3t9sx/R6Ep3n074fA==} - '@lexical/table@0.42.0': - resolution: {integrity: sha512-GKiZyjQsHDXRckq5VBrOowyvds51WoVRECfDgcl8pqLMnKyEdCa58E7fkSJrr5LS80Scod+Cjn6SBRzOcdsrKg==} + '@lexical/table@0.43.0': + resolution: {integrity: sha512-oLrOBzRwpmdHDpGVRgwBVgO1ro0w50rMdtOVQ6KsL53ijZ6OiI1YE2ZNOy4qfJvjub+2dgp83gKpB7YcmXAP3w==} - '@lexical/text@0.42.0': - resolution: {integrity: sha512-hT3EYVtBmONXyXe4TFVgtFcG1tf6JhLEuAf95+cOjgFGFSgvkZ/64BPbKLNTj2/9n6cU7EGPUNNwVigCSECJ2g==} + '@lexical/text@0.43.0': + resolution: {integrity: sha512-dtUZ79WaAv3nEYBIWPBZIrjwCUPONN8HcgtReY3qku7WQkzqy3FaMwT/lBa92cUhqsn4ChLIBO3lPFhWRALyvg==} - '@lexical/utils@0.42.0': - resolution: {integrity: sha512-wGNdCW3QWEyVdFiSTLZfFPtiASPyYLcekIiYYZmoRVxVimT/jY+QPfnkO4JYgkO7Z70g/dsg9OhqyQSChQfvkQ==} + '@lexical/utils@0.43.0': + resolution: {integrity: sha512-Y9wzFwoeI9KLDJsztTz45Aobp6sACHSRqUtyjxpCsU0jwL60Tt9rD71QVz7SvpmzxjtnBb040s6LHa6vP0gY+A==} - '@lexical/yjs@0.42.0': - resolution: {integrity: 
sha512-DplzWnYhfFceGPR+UyDFpZdB287wF/vNOHFuDsBF/nGDdTezvr0Gf60opzyBEF3oXym6p3xTmGygxvO97LZ+vw==} + '@lexical/yjs@0.43.0': + resolution: {integrity: sha512-3ghY9BYZVo3Hg2TmY2+H3Q6+AhhGwNIhnr6mvCbdLBEsnSTXr4VZSPMXN2ae5phCPrI19eHrx4MvFNYodQcqrA==} peerDependencies: yjs: '>=13.5.22' @@ -2357,36 +2438,36 @@ packages: resolution: {integrity: sha512-y3SvzjuY1ygnzWA4Krwx/WaJAsTMP11DN+e21A8Fa8PW1oDtVB5NSRW7LWurAiS2oKRkuCgcjTYMkBuBkcPCRg==} engines: {node: '>=12.4.0'} - '@orpc/client@1.13.13': - resolution: {integrity: sha512-jagx/Sa+9K4HEC5lBrUlMSrmR/06hvZctWh93/sKZc8GBk4zM0+71oT1kXQVw1oRYFV2XAq3xy3m6NdM6gfKYA==} + '@orpc/client@1.13.14': + resolution: {integrity: sha512-JQf3lO//UGHmmkd8+9fuWuh1gga1lhWuKnsT19cui7F6WizBy0NdFSVQerOsSy2c1kxOthlD7GnicGgSY2rhQA==} - '@orpc/contract@1.13.13': - resolution: {integrity: sha512-md6iyrYkePBSJNs1VnVEEnAUORMDPHIf3JGRSHxyssIcNakev/iOjP0HvpH0Sx0MlTBhihAJo6uFL8Vpth58Nw==} + '@orpc/contract@1.13.14': + resolution: {integrity: sha512-MfsjaQQDVcs4wHmdl5N/7vkwMnQ41nlojWXyRfRXNJHQczqBzM6sYaTJuUPXlw4YbIu64KHZ5nbbtwNCO5YXsg==} - '@orpc/openapi-client@1.13.13': - resolution: {integrity: sha512-k8od+bD7MqysKPPybAkxgfaNIaNseFPXtbidWkZAdCZ5w34SnDc7QPZJ0PQbyt9n9B+jOXSADNwQSTWSuGpjyA==} + '@orpc/openapi-client@1.13.14': + resolution: {integrity: sha512-mHuj/UL5qLqB1JqrRdlAoUYMidbsry8Cr9QOlOZk1mp7+OZhasFv75UNzxyjNNaSjyd3l2k4UkgpcHK4VSD7tQ==} - '@orpc/shared@1.13.13': - resolution: {integrity: sha512-kNpYOBjHvmgKHla6munWOaEeA0utEfAvoiZpXjiRjjt1RxTibdwQvVHgxRIBNMXfQsb+ON3Q/wDkoaUhvvSnIw==} + '@orpc/shared@1.13.14': + resolution: {integrity: sha512-/ri8ttSX+ppoo01d3LdqQ4Xh6VZS5PYRYmHxTvO8tuyiqBJhN18d8P1VtEW4T9hetoK7JZKeU7EAeqVUnCF9WA==} peerDependencies: '@opentelemetry/api': '>=1.9.0' peerDependenciesMeta: '@opentelemetry/api': optional: true - '@orpc/standard-server-fetch@1.13.13': - resolution: {integrity: sha512-Lffy26+WtCQkwOUacsrdyeJF1GNzrhm75O3LXKVFXqmSdyVVdyI6zuqLn/YKGODU2L9IqGxZ2CwsV2tE298SSA==} + '@orpc/standard-server-fetch@1.13.14': + resolution: {integrity: sha512-k2zkCi98qd3NkvWhUX/Yece/qjB+o07g/gHC509YB5HbOGtBV/da3eseYjFyzBx5LDxMz28BOALI8/q/YDhKZw==} - '@orpc/standard-server-peer@1.13.13': - resolution: {integrity: sha512-FeWAbXfnZDPYQRajM0hD6GJvHeC3DZILngAjdcLHy5zt3riu6nL2lLPSWDv5yNWWscmYU+CfKmXWd0Z01BOeWA==} + '@orpc/standard-server-peer@1.13.14': + resolution: {integrity: sha512-jinseQ8bn7XQOHjsCXhR1HiF3wAwn1xEQPpnE/av0PoOi4h0ATvhZjDLaRHvRavs8YwrIqwSuAuYT/hDxON58A==} - '@orpc/standard-server@1.13.13': - resolution: {integrity: sha512-9pgS8XvauuRQElkyuD8F3om+nN0KBEnTkhblDHCBzkZERjWkmfirJmshQrWHoFaDTk+nnXHIaY6d7TBTxXdPRw==} + '@orpc/standard-server@1.13.14': + resolution: {integrity: sha512-o8PaDERiwREFQpIZO0mQ1PhguchyNzrf1w7m3eK1JB4rPjHu1VJUgqCpy/sV3Id5ji4bX/gKHEC3NZjDX6mEWQ==} - '@orpc/tanstack-query@1.13.13': - resolution: {integrity: sha512-6+Cheaiu+RDPdszdeRKoBINrF8MQp64zSeZB+L3gqgF43zlYDhLOgELZMzYa6U3U6bLk4rmIeubpk+i1kACfRg==} + '@orpc/tanstack-query@1.13.14': + resolution: {integrity: sha512-5rq1Z1anVTVBseYeNBi5RJSgWPxpD0MqK7MYej3xnt56jjc6mFmWpUGNz9xy0BXPh3KmA/xDTNuB23kKgJ5JmQ==} peerDependencies: - '@orpc/client': 1.13.13 + '@orpc/client': 1.13.14 '@tanstack/query-core': '>=5.80.2' '@ota-meshi/ast-token-store@0.3.0': @@ -2520,15 +2601,15 @@ packages: cpu: [x64] os: [win32] - '@oxc-project/runtime@0.123.0': - resolution: {integrity: sha512-wRf0z8saz9tHLcK3YeTeBmwISrpy4bBimvKxUmryiIhbt+ZJb0nwwJNL3D8xpeWbNfZlGSlzRBZbfcbApIGZJw==} + '@oxc-project/runtime@0.124.0': + resolution: {integrity: 
sha512-sSg6n37J3w3mM4odFvRqzQENf6+qxKnvStr/gU0FgRRg1VE/4MqryLd9PJmE0a7K5xlDfbrctBtSagaFH6ij9Q==} engines: {node: ^20.19.0 || >=22.12.0} '@oxc-project/types@0.121.0': resolution: {integrity: sha512-CGtOARQb9tyv7ECgdAlFxi0Fv7lmzvmlm2rpD/RdijOO9rfk/JvB1CjT8EnoD+tjna/IYgKKw3IV7objRb+aYw==} - '@oxc-project/types@0.123.0': - resolution: {integrity: sha512-YtECP/y8Mj1lSHiUWGSRzy/C6teUKlS87dEfuVKT09LgQbUsBW1rNg+MiJ4buGu3yuADV60gbIvo9/HplA56Ew==} + '@oxc-project/types@0.124.0': + resolution: {integrity: sha512-VBFWMTBvHxS11Z5Lvlr3IWgrwhMTXV+Md+EQF0Xf60+wAdsGFTBx7X7K/hP4pi8N7dcm1RvcHwDxZ16Qx8keUg==} '@oxc-resolver/binding-android-arm-eabi@11.19.1': resolution: {integrity: sha512-aUs47y+xyXHUKlbhqHUjBABjvycq6YSD7bpxSW7vplUmdzAlJ93yXY6ZR0c1o1x5A/QKbENCvs3+NlY8IpIVzg==} @@ -2638,124 +2719,124 @@ packages: cpu: [x64] os: [win32] - '@oxfmt/binding-android-arm-eabi@0.43.0': - resolution: {integrity: sha512-CgU2s+/9hHZgo0IxVxrbMPrMj+tJ6VM3mD7Mr/4oiz4FNTISLoCvRmB5nk4wAAle045RtRjd86m673jwPyb1OQ==} + '@oxfmt/binding-android-arm-eabi@0.45.0': + resolution: {integrity: sha512-A/UMxFob1fefCuMeGxQBulGfFE38g2Gm23ynr3u6b+b7fY7/ajGbNsa3ikMIkGMLJW/TRoQaMoP1kME7S+815w==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm] os: [android] - '@oxfmt/binding-android-arm64@0.43.0': - resolution: {integrity: sha512-T9OfRwjA/EdYxAqbvR7TtqLv5nIrwPXuCtTwOHtS7aR9uXyn74ZYgzgTo6/ZwvTq9DY4W+DsV09hB2EXgn9EbA==} + '@oxfmt/binding-android-arm64@0.45.0': + resolution: {integrity: sha512-L63z4uZmHjgvvqvMJD7mwff8aSBkM0+X4uFr6l6U5t6+Qc9DCLVZWIunJ7Gm4fn4zHPdSq6FFQnhu9yqqobxIg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [android] - '@oxfmt/binding-darwin-arm64@0.43.0': - resolution: {integrity: sha512-o3i49ZUSJWANzXMAAVY1wnqb65hn4JVzwlRQ5qfcwhRzIA8lGVaud31Q3by5ALHPrksp5QEaKCQF9aAS3TXpZA==} + '@oxfmt/binding-darwin-arm64@0.45.0': + resolution: {integrity: sha512-UV34dd623FzqT+outIGndsCA/RBB+qgB3XVQhgmmJ9PJwa37NzPC9qzgKeOhPKxVk2HW+JKldQrVL54zs4Noww==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [darwin] - '@oxfmt/binding-darwin-x64@0.43.0': - resolution: {integrity: sha512-vWECzzCFkb0kK6jaHjbtC5sC3adiNWtqawFCxhpvsWlzVeKmv5bNvkB4nux+o4JKWTpHCM57NDK/MeXt44txmA==} + '@oxfmt/binding-darwin-x64@0.45.0': + resolution: {integrity: sha512-pMNJv0CMa1pDefVPeNbuQxibh8ITpWDFEhMC/IBB9Zlu76EbgzYwrzI4Cb11mqX2+rIYN70UTrh3z06TM59ptQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [darwin] - '@oxfmt/binding-freebsd-x64@0.43.0': - resolution: {integrity: sha512-rgz8JpkKiI/umOf7fl9gwKyQasC8bs5SYHy6g7e4SunfLBY3+8ATcD5caIg8KLGEtKFm5ujKaH8EfjcmnhzTLg==} + '@oxfmt/binding-freebsd-x64@0.45.0': + resolution: {integrity: sha512-xTcRoxbbo61sW2+ZRPeH+vp/o9G8gkdhiVumFU+TpneiPm14c79l6GFlxPXlCE9bNWikigbsrvJw46zCVAQFfg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [freebsd] - '@oxfmt/binding-linux-arm-gnueabihf@0.43.0': - resolution: {integrity: sha512-nWYnF3vIFzT4OM1qL/HSf1Yuj96aBuKWSaObXHSWliwAk2rcj7AWd6Lf7jowEBQMo4wCZVnueIGw/7C4u0KTBQ==} + '@oxfmt/binding-linux-arm-gnueabihf@0.45.0': + resolution: {integrity: sha512-hWL8Hdni+3U1mPFx1UtWeGp3tNb6EhBAUHRMbKUxVkOp3WwoJbpVO2bfUVbS4PfpledviXXNHSTl1veTa6FhkQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm] os: [linux] - '@oxfmt/binding-linux-arm-musleabihf@0.43.0': - resolution: {integrity: sha512-sFg+NWJbLfupYTF4WELHAPSnLPOn1jiDZ33Z1jfDnTaA+cC3iB35x0FMMZTFdFOz3icRIArncwCcemJFGXu6TQ==} + '@oxfmt/binding-linux-arm-musleabihf@0.45.0': + resolution: {integrity: sha512-6Blt/0OBT7vvfQpqYuYbpbFLPqSiaYpEJzUUWhinPEuADypDbtV1+LdjM0vYBNGPvnj85ex7lTerEX6JGcPt9w==} engines: {node: ^20.19.0 || 
>=22.12.0} cpu: [arm] os: [linux] - '@oxfmt/binding-linux-arm64-gnu@0.43.0': - resolution: {integrity: sha512-MelWqv68tX6wZEILDrTc9yewiGXe7im62+5x0bNXlCYFOZdA+VnYiJfAihbROsZ5fm90p9C3haFrqjj43XnlAA==} + '@oxfmt/binding-linux-arm64-gnu@0.45.0': + resolution: {integrity: sha512-jLjoLfe+hGfjhA8hNBSdw85yCA8ePKq7ME4T+g6P9caQXvmt6IhE2X7iVjnVdkmYUWEzZrxlh4p6RkDmAMJY/A==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [linux] libc: [glibc] - '@oxfmt/binding-linux-arm64-musl@0.43.0': - resolution: {integrity: sha512-ROaWfYh+6BSJ1Arwy5ujijTlwnZetxDxzBpDc1oBR4d7rfrPBqzeyjd5WOudowzQUgyavl2wEpzn1hw3jWcqLA==} + '@oxfmt/binding-linux-arm64-musl@0.45.0': + resolution: {integrity: sha512-XQKXZIKYJC3GQJ8FnD3iMntpw69Wd9kDDK/Xt79p6xnFYlGGxSNv2vIBvRTDg5CKByWFWWZLCRDOXoP/m6YN4g==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [linux] libc: [musl] - '@oxfmt/binding-linux-ppc64-gnu@0.43.0': - resolution: {integrity: sha512-PJRs/uNxmFipJJ8+SyKHh7Y7VZIKQicqrrBzvfyM5CtKi8D7yZKTwUOZV3ffxmiC2e7l1SDJpkBEOyue5NAFsg==} + '@oxfmt/binding-linux-ppc64-gnu@0.45.0': + resolution: {integrity: sha512-+g5RiG+xOkdrCWkKodv407nTvMq4vYM18Uox2MhZBm/YoqFxxJpWKsloskFFG5NU13HGPw1wzYjjOVcyd9moCA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [ppc64] os: [linux] libc: [glibc] - '@oxfmt/binding-linux-riscv64-gnu@0.43.0': - resolution: {integrity: sha512-j6biGAgzIhj+EtHXlbNumvwG7XqOIdiU4KgIWRXAEj/iUbHKukKW8eXa4MIwpQwW1YkxovduKtzEAPnjlnAhVQ==} + '@oxfmt/binding-linux-riscv64-gnu@0.45.0': + resolution: {integrity: sha512-V7dXKoSyEbWAkkSF4JJNtF+NJZDmJoSarSoP30WCsB3X636Rehd3CvxBj49FIJxEBFWhvcUjGSHVeU8Erck1bQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [riscv64] os: [linux] libc: [glibc] - '@oxfmt/binding-linux-riscv64-musl@0.43.0': - resolution: {integrity: sha512-RYWxAcslKxvy7yri24Xm9cmD0RiANaiEPs007EFG6l9h1ChM69Q5SOzACaCoz4Z9dEplnhhneeBaTWMEdpgIbA==} + '@oxfmt/binding-linux-riscv64-musl@0.45.0': + resolution: {integrity: sha512-Vdelft1sAEYojVGgcODEFXSWYQYlIvoyIGWebKCuUibd1tvS1TjTx413xG2ZLuHpYj45CkN/ztMLMX6jrgqpgg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [riscv64] os: [linux] libc: [musl] - '@oxfmt/binding-linux-s390x-gnu@0.43.0': - resolution: {integrity: sha512-DT6Q8zfQQy3jxpezAsBACEHNUUixKSYTwdXeXojNHe4DQOoxjPdjr3Szu6BRNjxLykZM/xMNmp9ElOIyDppwtw==} + '@oxfmt/binding-linux-s390x-gnu@0.45.0': + resolution: {integrity: sha512-RR7xKgNpqwENnK0aYCGYg0JycY2n93J0reNjHyes+I9Gq52dH95x+CBlnlAQHCPfz6FGnKA9HirgUl14WO6o7w==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [s390x] os: [linux] libc: [glibc] - '@oxfmt/binding-linux-x64-gnu@0.43.0': - resolution: {integrity: sha512-R8Yk7iYcuZORXmCfFZClqbDxRZgZ9/HEidUuBNdoX8Ptx07cMePnMVJ/woB84lFIDjh2ROHVaOP40Ds3rBXFqg==} + '@oxfmt/binding-linux-x64-gnu@0.45.0': + resolution: {integrity: sha512-U/QQ0+BQNSHxjuXR/utvXnQ50Vu5kUuqEomZvQ1/3mhgbBiMc2WU9q5kZ5WwLp3gnFIx9ibkveoRSe2EZubkqg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [linux] libc: [glibc] - '@oxfmt/binding-linux-x64-musl@0.43.0': - resolution: {integrity: sha512-F2YYqyvnQNvi320RWZNAvsaWEHwmW3k4OwNJ1hZxRKXupY63expbBaNp6jAgvYs7y/g546vuQnGHQuCBhslhLQ==} + '@oxfmt/binding-linux-x64-musl@0.45.0': + resolution: {integrity: sha512-o5TLOUCF0RWQjsIS06yVC+kFgp092/yLe6qBGSUvtnmTVw9gxjpdQSXc3VN5Cnive4K11HNstEZF8ROKHfDFSw==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [linux] libc: [musl] - '@oxfmt/binding-openharmony-arm64@0.43.0': - resolution: {integrity: sha512-OE6TdietLXV3F6c7pNIhx/9YC1/2YFwjU9DPc/fbjxIX19hNIaP1rS0cFjCGJlGX+cVJwIKWe8Mos+LdQ1yAJw==} + '@oxfmt/binding-openharmony-arm64@0.45.0': + resolution: 
{integrity: sha512-RnGcV3HgPuOjsGx/k9oyRNKmOp+NBLGzZTdPDYbc19r7NGeYPplnUU/BfU35bX2Y/O4ejvHxcfkvW2WoYL/gsg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [openharmony] - '@oxfmt/binding-win32-arm64-msvc@0.43.0': - resolution: {integrity: sha512-0nWK6a7pGkbdoypfVicmV9k/N1FwjPZENoqhlTU+5HhZnAhpIO3za30nEE33u6l6tuy9OVfpdXUqxUgZ+4lbZw==} + '@oxfmt/binding-win32-arm64-msvc@0.45.0': + resolution: {integrity: sha512-v3Vj7iKKsUFwt9w5hsqIIoErKVoENC6LoqfDlteOQ5QMDCXihlqLoxpmviUhXnNncg4zV6U9BPwlBbwa+qm4wg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [win32] - '@oxfmt/binding-win32-ia32-msvc@0.43.0': - resolution: {integrity: sha512-9aokTR4Ft+tRdvgN/pKzSkVy2ksc4/dCpDm9L/xFrbIw0yhLtASLbvoG/5WOTUh/BRPPnfGTsWznEqv0dlOmhA==} + '@oxfmt/binding-win32-ia32-msvc@0.45.0': + resolution: {integrity: sha512-N8yotPBX6ph0H3toF4AEpdCeVPrdcSetj+8eGiZGsrLsng3bs/Q5HPu4bbSxip5GBPx5hGbGHrZwH4+rcrjhHA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [ia32] os: [win32] - '@oxfmt/binding-win32-x64-msvc@0.43.0': - resolution: {integrity: sha512-4bPgdQux2ZLWn3bf2TTXXMHcJB4lenmuxrLqygPmvCJ104Yqzj1UctxSRzR31TiJ4MLaG22RK8dUsVpJtrCz5g==} + '@oxfmt/binding-win32-x64-msvc@0.45.0': + resolution: {integrity: sha512-w5MMTRCK1dpQeRA+HHqXQXyN33DlG/N2LOYxJmaT4fJjcmZrbNnqw7SmIk7I2/a2493PPLZ+2E/Ar6t2iKVMug==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [win32] @@ -2790,124 +2871,124 @@ packages: cpu: [x64] os: [win32] - '@oxlint/binding-android-arm-eabi@1.58.0': - resolution: {integrity: sha512-1T7UN3SsWWxpWyWGn1cT3ASNJOo+pI3eUkmEl7HgtowapcV8kslYpFQcYn431VuxghXakPNlbjRwhqmR37PFOg==} + '@oxlint/binding-android-arm-eabi@1.60.0': + resolution: {integrity: sha512-YdeJKaZckDQL1qa62a1aKq/goyq48aX3yOxaaWqWb4sau4Ee4IiLbamftNLU3zbePky6QsDj6thnSSzHRBjDfA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm] os: [android] - '@oxlint/binding-android-arm64@1.58.0': - resolution: {integrity: sha512-GryzujxuiRv2YFF7bRy8mKcxlbuAN+euVUtGJt9KKbLT8JBUIosamVhcthLh+VEr6KE6cjeVMAQxKAzJcoN7dg==} + '@oxlint/binding-android-arm64@1.60.0': + resolution: {integrity: sha512-7ANS7PpXCfq84xZQ8E5WPs14gwcuPcl+/8TFNXfpSu0CQBXz3cUo2fDpHT8v8HJN+Ut02eacvMAzTnc9s6X4tw==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [android] - '@oxlint/binding-darwin-arm64@1.58.0': - resolution: {integrity: sha512-7/bRSJIwl4GxeZL9rPZ11anNTyUO9epZrfEJH/ZMla3+/gbQ6xZixh9nOhsZ0QwsTW7/5J2A/fHbD1udC5DQQA==} + '@oxlint/binding-darwin-arm64@1.60.0': + resolution: {integrity: sha512-pJsgd9AfplLGBm1fIr25V6V14vMrayhx4uIQvlfH7jWs2SZwSrvi3TfgfJySB8T+hvyEH8K2zXljQiUnkgUnfQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [darwin] - '@oxlint/binding-darwin-x64@1.58.0': - resolution: {integrity: sha512-EqdtJSiHweS2vfILNrpyJ6HUwpEq2g7+4Zx1FPi4hu3Hu7tC3znF6ufbXO8Ub2LD4mGgznjI7kSdku9NDD1Mkg==} + '@oxlint/binding-darwin-x64@1.60.0': + resolution: {integrity: sha512-Ue1aXHX49ivwflKqGJc7zcd/LeLgbhaTcDCQStgx5x06AXgjEAZmvrlMuIkWd4AL4FHQe6QJ9f33z04Cg448VQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [darwin] - '@oxlint/binding-freebsd-x64@1.58.0': - resolution: {integrity: sha512-VQt5TH4M42mY20F545G637RKxV/yjwVtKk2vfXuazfReSIiuvWBnv+FVSvIV5fKVTJNjt3GSJibh6JecbhGdBw==} + '@oxlint/binding-freebsd-x64@1.60.0': + resolution: {integrity: sha512-YCyQzsQtusQw+gNRW9rRTifSO+Dt/+dtCl2NHoDMZqJlRTEZ/Oht9YnuporI9yiTx7+cB+eqzX3MtHHVHGIWhg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [freebsd] - '@oxlint/binding-linux-arm-gnueabihf@1.58.0': - resolution: {integrity: 
sha512-fBYcj4ucwpAtjJT3oeBdFBYKvNyjRSK+cyuvBOTQjh0jvKp4yeA4S/D0IsCHus/VPaNG5L48qQkh+Vjy3HL2/Q==} + '@oxlint/binding-linux-arm-gnueabihf@1.60.0': + resolution: {integrity: sha512-c7dxM2Zksa45Qw16i2iGY3Fti2NirJ38FrsBsKw+qcJ0OtqTsBgKJLF0xV+yLG56UH01Z8WRPgsw31e0MoRoGQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm] os: [linux] - '@oxlint/binding-linux-arm-musleabihf@1.58.0': - resolution: {integrity: sha512-0BeuFfwlUHlJ1xpEdSD1YO3vByEFGPg36uLjK1JgFaxFb4W6w17F8ET8sz5cheZ4+x5f2xzdnRrrWv83E3Yd8g==} + '@oxlint/binding-linux-arm-musleabihf@1.60.0': + resolution: {integrity: sha512-ZWALoA42UYqBEP1Tbw9OWURgFGS1nWj2AAvLdY6ZcGx/Gj93qVCBKjcvwXMupZibYwFbi9s/rzqkZseb/6gVtQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm] os: [linux] - '@oxlint/binding-linux-arm64-gnu@1.58.0': - resolution: {integrity: sha512-TXlZgnPTlxrQzxG9ZXU7BNwx1Ilrr17P3GwZY0If2EzrinqRH3zXPc3HrRcBJgcsoZNMuNL5YivtkJYgp467UQ==} + '@oxlint/binding-linux-arm64-gnu@1.60.0': + resolution: {integrity: sha512-tpy+1w4p9hN5CicMCxqNy6ymfRtV5ayE573vFNjp1k1TN/qhLFgflveZoE/0++RlkHikBz2vY545NWm/hp7big==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [linux] libc: [glibc] - '@oxlint/binding-linux-arm64-musl@1.58.0': - resolution: {integrity: sha512-zSoYRo5dxHLcUx93Stl2hW3hSNjPt99O70eRVWt5A1zwJ+FPjeCCANCD2a9R4JbHsdcl11TIQOjyigcRVOH2mw==} + '@oxlint/binding-linux-arm64-musl@1.60.0': + resolution: {integrity: sha512-eDYDXZGhQAXyn6GwtwiX/qcLS0HlOLPJ/+iiIY8RYr+3P8oKBmgKxADLlniL6FtWfE7pPk7IGN9/xvDEvDvFeg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [linux] libc: [musl] - '@oxlint/binding-linux-ppc64-gnu@1.58.0': - resolution: {integrity: sha512-NQ0U/lqxH2/VxBYeAIvMNUK1y0a1bJ3ZicqkF2c6wfakbEciP9jvIE4yNzCFpZaqeIeRYaV7AVGqEO1yrfVPjA==} + '@oxlint/binding-linux-ppc64-gnu@1.60.0': + resolution: {integrity: sha512-nxehly5XYBHUWI9VJX1bqCf9j/B43DaK/aS/T1fcxCpX3PA4Rm9BB54nPD1CKayT8xg6REN1ao+01hSRNgy8OA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [ppc64] os: [linux] libc: [glibc] - '@oxlint/binding-linux-riscv64-gnu@1.58.0': - resolution: {integrity: sha512-X9J+kr3gIC9FT8GuZt0ekzpNUtkBVzMVU4KiKDSlocyQuEgi3gBbXYN8UkQiV77FTusLDPsovjo95YedHr+3yg==} + '@oxlint/binding-linux-riscv64-gnu@1.60.0': + resolution: {integrity: sha512-j1qf/NaUfOWQutjeoooNG1Q0zsK0XGmSu1uDLq3cctquRF3j7t9Hxqf/76ehCc5GEUAanth2W4Fa+XT1RFg/nw==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [riscv64] os: [linux] libc: [glibc] - '@oxlint/binding-linux-riscv64-musl@1.58.0': - resolution: {integrity: sha512-CDze3pi1OO3Wvb/QsXjmLEY4XPKGM6kIo82ssNOgmcl1IdndF9VSGAE38YLhADWmOac7fjqhBw82LozuUVxD0Q==} + '@oxlint/binding-linux-riscv64-musl@1.60.0': + resolution: {integrity: sha512-YELKPRefQ/q/h3RUmeRfPCUhh2wBvgV1RyZ/F9M9u8cDyXsQW2ojv1DeWQTt466yczDITjZnIOg/s05pk7Ve2A==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [riscv64] os: [linux] libc: [musl] - '@oxlint/binding-linux-s390x-gnu@1.58.0': - resolution: {integrity: sha512-b/89glbxFaEAcA6Uf1FvCNecBJEgcUTsV1quzrqXM/o4R1M4u+2KCVuyGCayN2UpsRWtGGLb+Ver0tBBpxaPog==} + '@oxlint/binding-linux-s390x-gnu@1.60.0': + resolution: {integrity: sha512-JkO3C6Gki7Y6h/MiIkFKvHFOz98/YWvQ4WYbK9DLXACMP2rjULzkeGyAzorJE5S1dzLQGFgeqvN779kSFwoV1g==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [s390x] os: [linux] libc: [glibc] - '@oxlint/binding-linux-x64-gnu@1.58.0': - resolution: {integrity: sha512-0/yYpkq9VJFCEcuRlrViGj8pJUFFvNS4EkEREaN7CB1EcLXJIaVSSa5eCihwBGXtOZxhnblWgxks9juRdNQI7w==} + '@oxlint/binding-linux-x64-gnu@1.60.0': + resolution: {integrity: sha512-XjKHdFVCpZZZSWBCKyyqCq65s2AKXykMXkjLoKYODrD+f5toLhlwsMESscu8FbgnJQ4Y/dpR/zdazsahmgBJIA==} 
engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [linux] libc: [glibc] - '@oxlint/binding-linux-x64-musl@1.58.0': - resolution: {integrity: sha512-hr6FNvmcAXiH+JxSvaJ4SJ1HofkdqEElXICW9sm3/Rd5eC3t7kzvmLyRAB3NngKO2wzXRCAm4Z/mGWfrsS4X8w==} + '@oxlint/binding-linux-x64-musl@1.60.0': + resolution: {integrity: sha512-js29ZWIuPhNWzY8NC7KoffEMEeWG105vbmm+8EOJsC+T/jHBiKIJEUF78+F/IrgEWMMP9N0kRND4Pp75+xAhKg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [linux] libc: [musl] - '@oxlint/binding-openharmony-arm64@1.58.0': - resolution: {integrity: sha512-R+O368VXgRql1K6Xar+FEo7NEwfo13EibPMoTv3sesYQedRXd6m30Dh/7lZMxnrQVFfeo4EOfYIP4FpcgWQNHg==} + '@oxlint/binding-openharmony-arm64@1.60.0': + resolution: {integrity: sha512-H+PUITKHk04stFpWj3x3Kg08Afp/bcXSBi0EhasR5a0Vw7StXHTzdl655PUI0fB4qdh2Wsu6Dsi+3ACxPoyQnA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [openharmony] - '@oxlint/binding-win32-arm64-msvc@1.58.0': - resolution: {integrity: sha512-Q0FZiAY/3c4YRj4z3h9K1PgaByrifrfbBoODSeX7gy97UtB7pySPUQfC2B/GbxWU6k7CzQrRy5gME10PltLAFQ==} + '@oxlint/binding-win32-arm64-msvc@1.60.0': + resolution: {integrity: sha512-WA/yc7f7ZfCefBXVzNHn1Ztulb1EFwNBb4jMZ6pjML0zz6pHujlF3Q3jySluz3XHl/GNeMTntG1seUBWVMlMag==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [win32] - '@oxlint/binding-win32-ia32-msvc@1.58.0': - resolution: {integrity: sha512-Y8FKBABrSPp9H0QkRLHDHOSUgM/309a3IvOVgPcVxYcX70wxJrk608CuTg7w+C6vEd724X5wJoNkBcGYfH7nNQ==} + '@oxlint/binding-win32-ia32-msvc@1.60.0': + resolution: {integrity: sha512-33YxL1sqwYNZXtn3MD/4dno6s0xeedXOJlT1WohkVD565WvohClZUr7vwKdAk954n4xiEWJkewiCr+zLeq7AeA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [ia32] os: [win32] - '@oxlint/binding-win32-x64-msvc@1.58.0': - resolution: {integrity: sha512-bCn5rbiz5My+Bj7M09sDcnqW0QJyINRVxdZ65x1/Y2tGrMwherwK/lpk+HRQCKvXa8pcaQdF5KY5j54VGZLwNg==} + '@oxlint/binding-win32-x64-msvc@1.60.0': + resolution: {integrity: sha512-JOro4ZcfBLamJCyfURQmOQByoorgOdx3ZjAkSqnb/CyG/i+lN3KoV5LAgk5ZAW6DPq7/Cx7n23f8DuTWXTWgyQ==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [win32] @@ -3012,8 +3093,8 @@ packages: '@polka/url@1.0.0-next.29': resolution: {integrity: sha512-wwQAWhWSuHaag8c4q/KN/vCoeOJYshAIvMQwD4GpSb3OiZklFfvAgmj0VCBBImRpuF/aFgIRzllXlVX93Jevww==} - '@preact/signals-core@1.14.0': - resolution: {integrity: sha512-AowtCcCU/33lFlh1zRFf/u+12rfrhtNakj7UpaGEsmMwUKpKWMVvcktOGcwBBNiB4lWrZWc01LhiyyzVklJyaQ==} + '@preact/signals-core@1.14.1': + resolution: {integrity: sha512-vxPpfXqrwUe9lpjqfYNjAF/0RF/eFGeLgdJzdmIIZjpOnTmGmAB4BjWone562mJGMRP4frU6iZ6ei3PDsu52Ng==} '@radix-ui/primitive@1.1.3': resolution: {integrity: sha512-JTF99U/6XIjCBo0wqkU5sK10glYe27MRRsfwoiq5zzOEZLHU3A3KCMa5X/azekYRCJ0HlwI0crAXS/5dEHTzDg==} @@ -3294,8 +3375,8 @@ packages: resolution: {integrity: sha512-UuBOt7BOsKVOkFXRe4Ypd/lADuNIfqJXv8GvHqtXaTYXPPKkj2nS2zPllVsrtRjcomDhIJVBnZwfmlI222WH8g==} engines: {node: '>=14.0.0'} - '@rolldown/pluginutils@1.0.0-rc.13': - resolution: {integrity: sha512-3ngTAv6F/Py35BsYbeeLeecvhMKdsKm4AoOETVhAA+Qc8nrA2I0kF7oa93mE9qnIurngOSpMnQ0x2nQY2FPviA==} + '@rolldown/pluginutils@1.0.0-rc.15': + resolution: {integrity: sha512-UromN0peaE53IaBRe9W7CjrZgXl90fqGpK+mIZbA3qSTeYqg3pqpROBdIPvOG3F5ereDHNwoHBI2e50n1BDr1g==} '@rolldown/pluginutils@1.0.0-rc.7': resolution: {integrity: sha512-qujRfC8sFVInYSPPMLQByRh7zhwkGFS4+tyMQ83srV1qrxL4g8E2tyxVVyxd0+8QeBM1mIk9KbWxkegRr76XzA==} @@ -3456,32 +3537,32 @@ packages: cpu: [x64] os: [win32] - '@sentry-internal/browser-utils@10.47.0': - resolution: {integrity: 
sha512-bVFRAeJWMBcBCvJKIFCMJ1/yQToL4vPGqfmlnDZeypcxkqUDKQ/Y3ziLHXoDL2sx0lagcgU2vH1QhCQ67Aujjw==} + '@sentry-internal/browser-utils@10.48.0': + resolution: {integrity: sha512-SCiTLBXzugFKxev6NoKYBIhQoDk0gUh0AVVVepCBqfCJiWBG01Zvv0R5tCVohr4cWRllkQ8mlBdNQd/I7s9tdA==} engines: {node: '>=18'} - '@sentry-internal/feedback@10.47.0': - resolution: {integrity: sha512-pdvMmi4dQpX5S/vAAzrhHPIw3T3HjUgDNgUiCBrlp7N9/6zGO2gNPhUnNekP+CjgI/z0rvf49RLqlDenpNrMOg==} + '@sentry-internal/feedback@10.48.0': + resolution: {integrity: sha512-tGkEyOM1HDS9qebDphUMEnyk3qq/50AnuTBiFmMJyjNzowylVGmRRk0sr3xkmbVHCDXQCiYnDmSVlJ2x4SDMrQ==} engines: {node: '>=18'} - '@sentry-internal/replay-canvas@10.47.0': - resolution: {integrity: sha512-A5OY8friSe6g8WAK4L8IeOPiEd9D3Ps40DzRH5j2f6SUja0t90mKMvHRcRf8zq0d4BkdB+JM7tjOkwxpuv8heA==} + '@sentry-internal/replay-canvas@10.48.0': + resolution: {integrity: sha512-9nWuN2z4O+iwbTfuYV5ZmngBgJU/ZxfOo47A5RJP3Nu/kl59aJ1lUhILYOKyeNOIC/JyeERmpIcTxnlPXQzZ3Q==} engines: {node: '>=18'} - '@sentry-internal/replay@10.47.0': - resolution: {integrity: sha512-ScdovxP7hJxgMt70+7hFvwT02GIaIUAxdEM/YPsayZBeCoAukPW8WiwztJfoKtsfPyKJ5A6f0H3PIxTPcA9Row==} + '@sentry-internal/replay@10.48.0': + resolution: {integrity: sha512-sevRTePfuk4PNuz9KAKpmTZEomAU0aLXyIhOwA0OnUDdxPhkY8kq5lwDbuxTHv6DQUjUX3YgFbY45VH1JEqHKA==} engines: {node: '>=18'} - '@sentry/browser@10.47.0': - resolution: {integrity: sha512-rC0agZdxKA5XWfL4VwPOr/rJMogXDqZgnVzr93YWpFn9DMZT/7LzxSJVPIJwRUjx3bFEby3PcTa3YaX7pxm1AA==} + '@sentry/browser@10.48.0': + resolution: {integrity: sha512-4jt2zX2ExgFcNe2x+W+/k81fmDUsOrquGtt028CiGuDuma6kEsWBI4JbooT1jhj2T+eeUxe3YGbM23Zhh7Ghhw==} engines: {node: '>=18'} - '@sentry/core@10.47.0': - resolution: {integrity: sha512-nsYRAx3EWezDut+Zl+UwwP07thh9uY7CfSAi2whTdcJl5hu1nSp2z8bba7Vq/MGbNLnazkd3A+GITBEML924JA==} + '@sentry/core@10.48.0': + resolution: {integrity: sha512-h8F+fXVwYC9ro5ZaO8V+v3vqc0awlXHGblEAuVxSGgh4IV/oFX+QVzXeDTTrFOFS6v/Vn5vAyu240eJrJAS6/g==} engines: {node: '>=18'} - '@sentry/react@10.47.0': - resolution: {integrity: sha512-ZtJV6xxF8jUVE9e3YQUG3Do0XapG1GjniyLyqMPgN6cNvs/HaRJODf7m60By+VGqcl5XArEjEPTvx8CdPUXDfA==} + '@sentry/react@10.48.0': + resolution: {integrity: sha512-uc93vKjmu6gNns+JAX4qquuxWpAMit0uGPA1TYlMjct9NG1uX3TkDPJAr9Pgd1lOXx8mKqCmj5fK33QeExMpPw==} engines: {node: '>=18'} peerDependencies: react: ^16.14.0 || 17.x || 18.x || 19.x @@ -3526,6 +3607,9 @@ packages: resolution: {integrity: sha512-TeheYy0ILzBEI/CO55CP6zJCSdSWeRtGnHy8U8dWSUH4I68iqTsy7HkMktR4xakThc9jotkPQUXT4ITdbV7cHA==} engines: {node: '>=18'} + '@socket.io/component-emitter@3.1.2': + resolution: {integrity: sha512-9BCxFwvbGg/RsZK9tjXd8s4UcwR0MWeFQ1XEKIQVVvAGJyINdrqKMcTRyLoK8Rse1GjzLV9cwjWV1olXRWEXVA==} + '@solid-primitives/event-listener@2.4.5': resolution: {integrity: sha512-nwRV558mIabl4yVAhZKY8cb6G+O1F0M6Z75ttTu5hk+SxdOnKSGj+eetDIu7Oax1P138ZdUU01qnBPR8rnxaEA==} peerDependencies: @@ -3862,20 +3946,20 @@ packages: peerDependencies: solid-js: 1.9.11 - '@tanstack/eslint-plugin-query@5.96.2': - resolution: {integrity: sha512-OsXCATZ+YmG8TyHrunfYy2IDB+dqY87en2im2A60JPgDAg66cCoHTzJWbe9uH8Cw9/K3NiKYlyyo1erVFu3qFw==} + '@tanstack/eslint-plugin-query@5.99.0': + resolution: {integrity: sha512-jVp1AEL7S7BeuQvH5SN1F5UdrNW/AbryKDeWUUMeAKNzh9C+Ik/bRSa/HeuJLlmaN+WOUkdDFbtCK0go7BxnUQ==} peerDependencies: - eslint: ^8.57.0 || ^9.0.0 - typescript: ^5.4.0 + eslint: ^8.57.0 || ^9.0.0 || ^10.0.0 + typescript: ^5.4.0 || ^6.0.0 peerDependenciesMeta: typescript: optional: true - '@tanstack/form-core@1.28.6': - resolution: {integrity: 
sha512-4zroxL6VDj5O+w7l3dYZnUeL/h30KtNSV7UWzKAL7cl+8clMFdISPDlDlluS37As7oqvPVKo8B83VlIBvgmRog==} + '@tanstack/form-core@1.29.0': + resolution: {integrity: sha512-uyeKEdJBfbj0bkBSwvSYVRtWLOaXvfNX3CeVw1HqGOXVLxpBBGAqWdYLc+UoX/9xcoFwFXrjR9QqMPzvwm2yyQ==} - '@tanstack/form-devtools@0.2.20': - resolution: {integrity: sha512-4cW/eU5DBTrWP53mxwHKp4NQWTIQ3XCA91pMWK7dFNNClIwFnxoSJoKwyUa6b8kRIO6uq1Sjk2mhkAtj5kB22A==} + '@tanstack/form-devtools@0.2.21': + resolution: {integrity: sha512-8mxR1/QDw37mNVSFsr4ZN8+bdamH9LU1/iQ3I7/sfTzFmMsNzUOysX3OZf053eaS4Gaw44PT0pH7U0FWD98QKw==} peerDependencies: solid-js: 1.9.11 @@ -3883,11 +3967,11 @@ packages: resolution: {integrity: sha512-y/xtNPNt/YeyoVxE/JCx+T7yjEzpezmbb+toK8DDD1P4m7Kzs5YR956+7OKexG3f8aXgC3rLZl7b1V+yNUSy5w==} engines: {node: '>=18'} - '@tanstack/query-core@5.96.2': - resolution: {integrity: sha512-hzI6cTVh4KNRk8UtoIBS7Lv9g6BnJPXvBKsvYH1aGWvv0347jT3BnSvztOE+kD76XGvZnRC/t6qdW1CaIfwCeA==} + '@tanstack/query-core@5.99.0': + resolution: {integrity: sha512-3Jv3WQG0BCcH7G+7lf/bP8QyBfJOXeY+T08Rin3GZ1bshvwlbPt7NrDHMEzGdKIOmOzvIQmxjk28YEQX60k7pQ==} - '@tanstack/query-devtools@5.96.2': - resolution: {integrity: sha512-vBTB1Qhbm3nHSbEUtQwks/EdcAtFfEapr1WyBW4w2ExYKuXVi3jIxUIHf5MlSltiHuL7zNyUuanqT/7sI2sb6g==} + '@tanstack/query-devtools@5.99.0': + resolution: {integrity: sha512-m4ufXaJ8FjWXw7xDtyzE/6fkZAyQFg9WrbMrUpt8ZecRJx58jiFOZ2lxZMphZdIpAnIeto/S8stbwLKLusyckQ==} '@tanstack/react-devtools@0.10.2': resolution: {integrity: sha512-1BmZyxOrI5SqmRJ5MgkYZNNdnlLsJxQRI2YgorrAvcF2MxK6x5RcuStvD8+YlXoMw3JtNukPxoITirKAnKYDQA==} @@ -3898,13 +3982,13 @@ packages: react: '>=16.8' react-dom: '>=16.8' - '@tanstack/react-form-devtools@0.2.20': - resolution: {integrity: sha512-aXtorJ7p3TbzOapjaxbjGX/c0uQh/wbYSwgzFt3qatNMb1xL4HM/j00Bx7hDENZNBCf8MF8YEEtvpBmnGb4rnQ==} + '@tanstack/react-form-devtools@0.2.21': + resolution: {integrity: sha512-WBQ7NOcb3FM9UA4juZVyWUyJkyl62vHFbEBybZuvBFw3wq/v9pDGS01Ye8kepGXDg1+LQsOOxyDR65AKsdqSYQ==} peerDependencies: react: ^17.0.0 || ^18.0.0 || ^19.0.0 - '@tanstack/react-form@1.28.6': - resolution: {integrity: sha512-dRxwKeNW3uuJvf0sXsIQ2compFMnIJNk9B436Lx0fqkqK+CBvA1tNmEdX+faoCpuQ5Wua3c8ahVibJ65cpkijA==} + '@tanstack/react-form@1.29.0': + resolution: {integrity: sha512-jj425NNX0QKqbUzqSNiYI3HCPHSk2df47acXCJyXczWOTmG81ECZGkgofgqamFsSU9kMiH6Di5RLUnftrlhWSw==} peerDependencies: '@tanstack/react-start': '*' react: ^17.0.0 || ^18.0.0 || ^19.0.0 @@ -3912,14 +3996,14 @@ packages: '@tanstack/react-start': optional: true - '@tanstack/react-query-devtools@5.96.2': - resolution: {integrity: sha512-nTFKLGuTOFvmFRvcyZ3ArWC/DnMNPoBh6h/2yD6rsf7TCTJCQt+oUWOp2uKPTIuEPtF/vN9Kw5tl5mD1Kbposw==} + '@tanstack/react-query-devtools@5.99.0': + resolution: {integrity: sha512-CqqX7LCU9yOfCY/vBURSx2YSD83ryfX+QkfkaKionTfg1s2Hdm572Ro99gW3QPoJjzvsj1HM4pnN4nbDy3MXKA==} peerDependencies: - '@tanstack/react-query': ^5.96.2 + '@tanstack/react-query': ^5.99.0 react: ^18 || ^19 - '@tanstack/react-query@5.96.2': - resolution: {integrity: sha512-sYyzzJT4G0g02azzJ8o55VFFV31XvFpdUpG+unxS0vSaYsJnSPKGoI6WdPwUucJL1wpgGfwfmntNX/Ub1uOViA==} + '@tanstack/react-query@5.99.0': + resolution: {integrity: sha512-OY2bCqPemT1LlqJ8Y2CUau4KELnIhhG9Ol3ZndPbdnB095pRbPo1cHuXTndg8iIwtoHTgwZjyaDnQ0xD0mYwAw==} peerDependencies: react: ^18 || ^19 @@ -3974,18 +4058,18 @@ packages: peerDependencies: '@testing-library/dom': '>=7.21.4' - '@tsslint/cli@3.0.2': - resolution: {integrity: sha512-8lyZcDEs86zitz0wZ5QRdswY6xGz8j+WL11baN4rlpwahtPgYatujpYV5gpoKeyMAyerlNTdQh6u2LUJLoLNyQ==} + '@tsslint/cli@3.0.3': + 
resolution: {integrity: sha512-Pt1AuEZoh+dK4QYt95oCjBdBp2h2iYY9pSerf9BTLgfsjeyEsNk7Juhn51sFlAuEnWDNvI8mLULzsIkayd0nUQ==} engines: {node: '>=22.6.0'} hasBin: true peerDependencies: typescript: '*' - '@tsslint/compat-eslint@3.0.2': - resolution: {integrity: sha512-2TzSJPybCEfU/kHNi9UybwI//A7Fe14CwqmNuJ4fR4WYGpfIclXqfDJwsn5U1NzrWbHjWzRSntJITQPNw1SCNA==} + '@tsslint/compat-eslint@3.0.3': + resolution: {integrity: sha512-UGWrE4fu8fUCLkc+zMQNsEfuEkGHjndpa5oSQmzhmo9BQJYAqqH1s2kGIiDsAYwaQTUts4SjclXaITq3pZhkrA==} - '@tsslint/config@3.0.2': - resolution: {integrity: sha512-oHzteAwL6NHVrLzJnrpqMwewEFOydhDH228weO4wkHW8SwvE4oVV5qrKmjwL69ClYt5Le3y2aGDzGou+GuTbKg==} + '@tsslint/config@3.0.3': + resolution: {integrity: sha512-3yFyM4Sj+0LxwmcokwNPuS9pWUBMIhO8vwHiG4vGuquTvF4cgZqDPyQ3GN4hDb5qAZ56iqYtMoBEiSZXlJDYPQ==} engines: {node: '>=22.6.0'} hasBin: true peerDependencies: @@ -3997,12 +4081,12 @@ packages: tsl: optional: true - '@tsslint/core@3.0.2': - resolution: {integrity: sha512-Cu50e9vBojEMQjbqMoshkgLSoBj1BKbbmhSvzgbo07TiQ1wrOblZjvhU8ygB1fAIIHgU4laExX3pLU5OOeeR9g==} + '@tsslint/core@3.0.3': + resolution: {integrity: sha512-EpCKw34f2XyypH5xlxKCwnTgPGpZxbPXfvpwddT3DCxsIzUDJY4SpVJULAZFPAjJd49vopG0kNhXn0C/b+kHcg==} engines: {node: '>=22.6.0'} - '@tsslint/types@3.0.2': - resolution: {integrity: sha512-RbF3TIxu/YQwRpYrH5j2EL3ff4+Lr2SSmwCJmPJfi832F0hpgJj6xB9xKEorrUj0ZaTHE1QOr5SOMe5B6Qv+2Q==} + '@tsslint/types@3.0.3': + resolution: {integrity: sha512-3Jlb5UTPrzqu1D1qOrzjwy0QW2n41A1+ILKvzgViFrtiTwurM5Tav6V7Y4AFxO0xatCA0VHAzzifK0r5znaKbw==} '@tybys/wasm-util@0.10.1': resolution: {integrity: sha512-9tTaPJLSiejZKx+Bmog4uSubteqTvFrVrURwkmHixBo0G4seD0zUxp98E1DzUBJxLQ3NPwXrGKDiVjwx/DpPsg==} @@ -4175,8 +4259,8 @@ packages: '@types/negotiator@0.6.4': resolution: {integrity: sha512-elf6BsTq+AkyNsb2h5cGNst2Mc7dPliVoAPm1fXglC/BM3f2pFA40BaSSv3E5lyHteEawVKLP+8TwiY1DMNb3A==} - '@types/node@25.5.2': - resolution: {integrity: sha512-tO4ZIRKNC+MDWV4qKVZe3Ql/woTnmHDr5JD8UI5hn2pwBrHEwOEMZK7WlNb5RKB6EoJ02gwmQS9OrjuFnZYdpg==} + '@types/node@25.6.0': + resolution: {integrity: sha512-+qIYRKdNYJwY3vRCZMdJbPLJAtGjQBudzZzdzwQYkEPQd+PJGixUL5QfvCLDaULoLv+RhT3LDkwEfKaAkgSmNQ==} '@types/normalize-package-data@2.4.4': resolution: {integrity: sha512-37i+OaWTh9qeK4LSHPsyRC7NahnGotNuZvjLSgcPzblpHB3rrCJxAOgI5gCdKm7coonsaX1Of0ILiTcnZjbfxA==} @@ -4222,11 +4306,11 @@ packages: '@types/zen-observable@0.8.3': resolution: {integrity: sha512-fbF6oTd4sGGy0xjHPKAt+eS2CrxJ3+6gQ3FGcBoIJR2TLAyCkCyI8JqZNy+FeON0AhVgNJoUumVoZQjBFUqHkw==} - '@typescript-eslint/eslint-plugin@8.58.1': - resolution: {integrity: sha512-eSkwoemjo76bdXl2MYqtxg51HNwUSkWfODUOQ3PaTLZGh9uIWWFZIjyjaJnex7wXDu+TRx+ATsnSxdN9YWfRTQ==} + '@typescript-eslint/eslint-plugin@8.58.2': + resolution: {integrity: sha512-aC2qc5thQahutKjP+cl8cgN9DWe3ZUqVko30CMSZHnFEHyhOYoZSzkGtAI2mcwZ38xeImDucI4dnqsHiOYuuCw==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: - '@typescript-eslint/parser': ^8.58.1 + '@typescript-eslint/parser': ^8.58.2 eslint: ^8.57.0 || ^9.0.0 || ^10.0.0 typescript: '>=4.8.4 <6.1.0' @@ -4237,8 +4321,8 @@ packages: eslint: ^8.57.0 || ^9.0.0 || ^10.0.0 typescript: '>=4.8.4 <6.0.0' - '@typescript-eslint/parser@8.58.1': - resolution: {integrity: sha512-gGkiNMPqerb2cJSVcruigx9eHBlLG14fSdPdqMoOcBfh+vvn4iCq2C8MzUB89PrxOXk0y3GZ1yIWb9aOzL93bw==} + '@typescript-eslint/parser@8.58.2': + resolution: {integrity: sha512-/Zb/xaIDfxeJnvishjGdcR4jmr7S+bda8PKNhRGdljDM+elXhlvN0FyPSsMnLmJUrVG9aPO6dof80wjMawsASg==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: eslint: 
^8.57.0 || ^9.0.0 || ^10.0.0 @@ -4250,8 +4334,8 @@ packages: peerDependencies: typescript: '>=4.8.4 <6.0.0' - '@typescript-eslint/project-service@8.58.1': - resolution: {integrity: sha512-gfQ8fk6cxhtptek+/8ZIqw8YrRW5048Gug8Ts5IYcMLCw18iUgrZAEY/D7s4hkI0FxEfGakKuPK/XUMPzPxi5g==} + '@typescript-eslint/project-service@8.58.2': + resolution: {integrity: sha512-Cq6UfpZZk15+r87BkIh5rDpi38W4b+Sjnb8wQCPPDDweS/LRCFjCyViEbzHk5Ck3f2QDfgmlxqSa7S7clDtlfg==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: typescript: '>=4.8.4 <6.1.0' @@ -4266,8 +4350,8 @@ packages: resolution: {integrity: sha512-snZKH+W4WbWkrBqj4gUNRIGb/jipDW3qMqVJ4C9rzdFc+wLwruxk+2a5D+uoFcKPAqyqEnSb4l2ULuZf95eSkw==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} - '@typescript-eslint/scope-manager@8.58.1': - resolution: {integrity: sha512-TPYUEqJK6avLcEjumWsIuTpuYODTTDAtoMdt8ZZa93uWMTX13Nb8L5leSje1NluammvU+oI3QRr5lLXPgihX3w==} + '@typescript-eslint/scope-manager@8.58.2': + resolution: {integrity: sha512-SgmyvDPexWETQek+qzZnrG6844IaO02UVyOLhI4wpo82dpZJY9+6YZCKAMFzXb7qhx37mFK1QcPQ18tud+vo6Q==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} '@typescript-eslint/tsconfig-utils@8.57.2': @@ -4276,14 +4360,14 @@ packages: peerDependencies: typescript: '>=4.8.4 <6.0.0' - '@typescript-eslint/tsconfig-utils@8.58.1': - resolution: {integrity: sha512-JAr2hOIct2Q+qk3G+8YFfqkqi7sC86uNryT+2i5HzMa2MPjw4qNFvtjnw1IiA1rP7QhNKVe21mSSLaSjwA1Olw==} + '@typescript-eslint/tsconfig-utils@8.58.2': + resolution: {integrity: sha512-3SR+RukipDvkkKp/d0jP0dyzuls3DbGmwDpVEc5wqk5f38KFThakqAAO0XMirWAE+kT00oTauTbzMFGPoAzB0A==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: typescript: '>=4.8.4 <6.1.0' - '@typescript-eslint/type-utils@8.58.1': - resolution: {integrity: sha512-HUFxvTJVroT+0rXVJC7eD5zol6ID+Sn5npVPWoFuHGg9Ncq5Q4EYstqR+UOqaNRFXi5TYkpXXkLhoCHe3G0+7w==} + '@typescript-eslint/type-utils@8.58.2': + resolution: {integrity: sha512-Z7EloNR/B389FvabdGeTo2XMs4W9TjtPiO9DAsmT0yom0bwlPyRjkJ1uCdW1DvrrrYP50AJZ9Xc3sByZA9+dcg==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: eslint: ^8.57.0 || ^9.0.0 || ^10.0.0 @@ -4293,8 +4377,8 @@ packages: resolution: {integrity: sha512-/iZM6FnM4tnx9csuTxspMW4BOSegshwX5oBDznJ7S4WggL7Vczz5d2W11ecc4vRrQMQHXRSxzrCsyG5EsPPTbA==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} - '@typescript-eslint/types@8.58.1': - resolution: {integrity: sha512-io/dV5Aw5ezwzfPBBWLoT+5QfVtP8O7q4Kftjn5azJ88bYyp/ZMCsyW1lpKK46EXJcaYMZ1JtYj+s/7TdzmQMw==} + '@typescript-eslint/types@8.58.2': + resolution: {integrity: sha512-9TukXyATBQf/Jq9AMQXfvurk+G5R2MwfqQGDR2GzGz28HvY/lXNKGhkY+6IOubwcquikWk5cjlgPvD2uAA7htQ==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} '@typescript-eslint/typescript-estree@8.57.2': @@ -4303,8 +4387,8 @@ packages: peerDependencies: typescript: '>=4.8.4 <6.0.0' - '@typescript-eslint/typescript-estree@8.58.1': - resolution: {integrity: sha512-w4w7WR7GHOjqqPnvAYbazq+Y5oS68b9CzasGtnd6jIeOIeKUzYzupGTB2T4LTPSv4d+WPeccbxuneTFHYgAAWg==} + '@typescript-eslint/typescript-estree@8.58.2': + resolution: {integrity: sha512-ELGuoofuhhoCvNbQjFFiobFcGgcDCEm0ThWdmO4Z0UzLqPXS3KFvnEZ+SHewwOYHjM09tkzOWXNTv9u6Gqtyuw==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: typescript: '>=4.8.4 <6.1.0' @@ -4316,8 +4400,8 @@ packages: eslint: ^8.57.0 || ^9.0.0 || ^10.0.0 typescript: '>=4.8.4 <6.0.0' - '@typescript-eslint/utils@8.58.1': - resolution: {integrity: sha512-Ln8R0tmWC7pTtLOzgJzYTXSCjJ9rDNHAqTaVONF4FEi2qwce8mD9iSOxOpLFFvWp/wBFlew0mjM1L1ihYWfBdQ==} + 
'@typescript-eslint/utils@8.58.2': + resolution: {integrity: sha512-QZfjHNEzPY8+l0+fIXMvuQ2sJlplB4zgDZvA+NmvZsZv3EQwOcc1DuIU1VJUTWZ/RKouBMhDyNaBMx4sWvrzRA==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} peerDependencies: eslint: ^8.57.0 || ^9.0.0 || ^10.0.0 @@ -4327,47 +4411,47 @@ packages: resolution: {integrity: sha512-zhahknjobV2FiD6Ee9iLbS7OV9zi10rG26odsQdfBO/hjSzUQbkIYgda+iNKK1zNiW2ey+Lf8MU5btN17V3dUw==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} - '@typescript-eslint/visitor-keys@8.58.1': - resolution: {integrity: sha512-y+vH7QE8ycjoa0bWciFg7OpFcipUuem1ujhrdLtq1gByKwfbC7bPeKsiny9e0urg93DqwGcHey+bGRKCnF1nZQ==} + '@typescript-eslint/visitor-keys@8.58.2': + resolution: {integrity: sha512-f1WO2Lx8a9t8DARmcWAUPJbu0G20bJlj8L4z72K00TMeJAoyLr/tHhI/pzYBLrR4dXWkcxO1cWYZEOX8DKHTqA==} engines: {node: ^18.18.0 || ^20.9.0 || >=21.1.0} - '@typescript/native-preview-darwin-arm64@7.0.0-dev.20260408.1': - resolution: {integrity: sha512-YcPczNLfPDB13eUBYHkTOkL7HyWqqqEhho4eSxhAvigZuxvtHQ1uyILIvLVAwipEVzhJ8QciKmLdLucpfi4XyA==} + '@typescript/native-preview-darwin-arm64@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-CDgxIPvAWRCfOiQKvSk4wUkAoRW4Cy6vfAUBPNHSeLalIt43ToF0LOAsa5uLyRGsftjfMYY0A4qFOmgDvBhgzQ==} cpu: [arm64] os: [darwin] - '@typescript/native-preview-darwin-x64@7.0.0-dev.20260408.1': - resolution: {integrity: sha512-cHqkDg53xxxz21MThLBf4vx1kyIpRPEYNdEiQlvu9O35Tth49+aub6F+/YEMd9MG4TYZmxh1bEjkjErTUIElpA==} + '@typescript/native-preview-darwin-x64@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-oiMmUtNMaqBh+eUogX53ichcEf7d+7upC0qa7xS9zWl85XEPKlrZCZpZ79yixw1PkdpjqJJigI11bmCi/JVv+g==} cpu: [x64] os: [darwin] - '@typescript/native-preview-linux-arm64@7.0.0-dev.20260408.1': - resolution: {integrity: sha512-iHG0FEXq/QFsn+qlTPllxdcbvfQ9aRYggy4lc1z0+f11Nyk4YDNCSiR8WW7pbnOTx/VreGbbXhlpuJXTidqL8g==} + '@typescript/native-preview-linux-arm64@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-hPKanfs9c+7953gIYw13CNxN0HqFAOfJjnWk4SHqSBe3Pj9pxoeJvvRWlofp5C833eOZK6gZB7ll0/uNb0djtA==} cpu: [arm64] os: [linux] - '@typescript/native-preview-linux-arm@7.0.0-dev.20260408.1': - resolution: {integrity: sha512-w26Gv9yq9LIYIhxjkQC+i0wBPDdQdX+H06ZhyVRL5grKWTIsk9Xwjp9mDRB/dGlXBKcvnM25JH16OyAA0rFH3A==} + '@typescript/native-preview-linux-arm@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-0lSXBzBVsxIGrFv/PxoswzMptsnU6BgSk7GMAUt/o1dVw36R2XrSs538vwKnujaJwt4iIdMS0uGdpUC5s9jkzQ==} cpu: [arm] os: [linux] - '@typescript/native-preview-linux-x64@7.0.0-dev.20260408.1': - resolution: {integrity: sha512-hMcUlUIzYbvbdq6j/B4RPL+kZR917NGnE9AgPZ7dJ92yamH/7LGT1Mnlc6McUx31yqTFBFHdTc7Cfx+ynua7Iw==} + '@typescript/native-preview-linux-x64@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-8Cr477HRmHZ5YyLfikNvw7qp3/WmnRjzIzJhUDrAx5173OBe8BdyV9jPemFHKDPqwI1AUMTijvptOFoQE7429w==} cpu: [x64] os: [linux] - '@typescript/native-preview-win32-arm64@7.0.0-dev.20260408.1': - resolution: {integrity: sha512-avJWIEKSx4rdBLZD1FOOTuxTU51dQfYb3jZvZMaXD4thJjq+6eSwfzu2elwL36AZDlnaxggGjB5nBxp0t54iOA==} + '@typescript/native-preview-win32-arm64@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-ulJD9ZbIQyTBIDx8zzAzQLtbvQDGHSWrNRgkgBU5Os2NTYADQRco4pU747R9wZPMLopy3IeNck6m8vwPoYMk1g==} cpu: [arm64] os: [win32] - '@typescript/native-preview-win32-x64@7.0.0-dev.20260408.1': - resolution: {integrity: sha512-gpvEHkF/WoxkA3711c4uWNCZO9WAuwrq49COdNwxgOTzYHnMc1yCj8CpkCUJwU0f/Ydwp2s6/efn6gTMvtckPg==} + '@typescript/native-preview-win32-x64@7.0.0-dev.20260413.1': + resolution: {integrity: 
sha512-x7DsSXnLQBf5XBBR8luHf1Nc/T1eByUmrOSEThW6825UB7lHoPlqKdhIoUNnTnS4nXQMxLwcusD4P1EP23GPJw==} cpu: [x64] os: [win32] - '@typescript/native-preview@7.0.0-dev.20260408.1': - resolution: {integrity: sha512-N0MZLEUnAoP/aRVk7MY119LDsESkbtEwIw+YeXi/jjx2XCqf7ni3GxIVsUYtf/troyuSedq3V/OUrkoCh5A9gA==} + '@typescript/native-preview@7.0.0-dev.20260413.1': + resolution: {integrity: sha512-twzr3V4QLEbXaESuI2DqdzutOVFGpkY3VZDR9sF8YlLsAXkwyQvZo58cKM77mZcsHoCR4lCYcdTatWTTa/+8tw==} hasBin: true '@ungap/structured-clone@1.3.0': @@ -4424,8 +4508,8 @@ packages: babel-plugin-react-compiler: optional: true - '@vitejs/plugin-rsc@0.5.23': - resolution: {integrity: sha512-CV6kWPE4E241qDStwK3ErYjuZqW1i1xun3/P1wsm94RJoActLTrQsGzGsf75ioeVxEK0roPqLGhcV2WlSlPePQ==} + '@vitejs/plugin-rsc@0.5.24': + resolution: {integrity: sha512-FQ7o1Zf1GUB8L5qlIuV2mvIv/KahG2qUYW2gMpxyIN3zF7voDsfvA/t8w/TLjYC0T6p3JwMnK3N+YzMGf/m75A==} peerDependencies: react: '*' react-dom: '*' @@ -4435,17 +4519,17 @@ packages: react-server-dom-webpack: optional: true - '@vitest/coverage-v8@4.1.3': - resolution: {integrity: sha512-/MBdrkA8t6hbdCWFKs09dPik774xvs4Z6L4bycdCxYNLHM8oZuRyosumQMG19LUlBsB6GeVpL1q4kFFazvyKGA==} + '@vitest/coverage-v8@4.1.4': + resolution: {integrity: sha512-x7FptB5oDruxNPDNY2+S8tCh0pcq7ymCe1gTHcsp733jYjrJl8V1gMUlVysuCD9Kz46Xz9t1akkv08dPcYDs1w==} peerDependencies: - '@vitest/browser': 4.1.3 - vitest: 4.1.3 + '@vitest/browser': 4.1.4 + vitest: 4.1.4 peerDependenciesMeta: '@vitest/browser': optional: true - '@vitest/eslint-plugin@1.6.14': - resolution: {integrity: sha512-PXZ5ysw4eHU9h8nDtBvVcGC7Z2C/T9CFdheqSw1NNXFYqViojub0V9bgdYI67iBTOcra2mwD0EYldlY9bGPf2Q==} + '@vitest/eslint-plugin@1.6.15': + resolution: {integrity: sha512-dTMjrdngmcB+DxomlKQ+SUubCTvd0m2hQQFpv5sx+GRodmeoxr2PVbphk57SVp250vpxphk9Ccwyv6fQ6+2gkA==} engines: {node: '>=18'} peerDependencies: '@typescript-eslint/eslint-plugin': '*' @@ -4463,31 +4547,54 @@ packages: '@vitest/expect@3.2.4': resolution: {integrity: sha512-Io0yyORnB6sikFlt8QW5K7slY4OjqNX9jmJQ02QDda8lyM6B5oNgVWoSoKPac8/kgnCUzuHQKrSLtu/uOqqrig==} + '@vitest/expect@4.1.4': + resolution: {integrity: sha512-iPBpra+VDuXmBFI3FMKHSFXp3Gx5HfmSCE8X67Dn+bwephCnQCaB7qWK2ldHa+8ncN8hJU8VTMcxjPpyMkUjww==} + + '@vitest/mocker@4.1.4': + resolution: {integrity: sha512-R9HTZBhW6yCSGbGQnDnH3QHfJxokKN4KB+Yvk9Q1le7eQNYwiCyKxmLmurSpFy6BzJanSLuEUDrD+j97Q+ZLPg==} + peerDependencies: + msw: ^2.4.9 + vite: ^6.0.0 || ^7.0.0 || ^8.0.0 + peerDependenciesMeta: + msw: + optional: true + vite: + optional: true + '@vitest/pretty-format@3.2.4': resolution: {integrity: sha512-IVNZik8IVRJRTr9fxlitMKeJeXFFFN0JaB9PHPGQ8NKQbGpfjlTx9zO4RefN8gp7eqjNy8nyK3NZmBzOPeIxtA==} - '@vitest/pretty-format@4.1.3': - resolution: {integrity: sha512-hYqqwuMbpkkBodpRh4k4cQSOELxXky1NfMmQvOfKvV8zQHz8x8Dla+2wzElkMkBvSAJX5TRGHJAQvK0TcOafwg==} + '@vitest/pretty-format@4.1.4': + resolution: {integrity: sha512-ddmDHU0gjEUyEVLxtZa7xamrpIefdEETu3nZjWtHeZX4QxqJ7tRxSteHVXJOcr8jhiLoGAhkK4WJ3WqBpjx42A==} + + '@vitest/runner@4.1.4': + resolution: {integrity: sha512-xTp7VZ5aXP5ZJrn15UtJUWlx6qXLnGtF6jNxHepdPHpMfz/aVPx+htHtgcAL2mDXJgKhpoo2e9/hVJsIeFbytQ==} + + '@vitest/snapshot@4.1.4': + resolution: {integrity: sha512-MCjCFgaS8aZz+m5nTcEcgk/xhWv0rEH4Yl53PPlMXOZ1/Ka2VcZU6CJ+MgYCZbcJvzGhQRjVrGQNZqkGPttIKw==} '@vitest/spy@3.2.4': resolution: {integrity: sha512-vAfasCOe6AIK70iP5UD11Ac4siNUNJ9i/9PZ3NKx07sG6sUxeag1LWdNrMWeKKYBLlzuK+Gn65Yd5nyL6ds+nw==} + '@vitest/spy@4.1.4': + resolution: {integrity: 
sha512-XxNdAsKW7C+FLydqFJLb5KhJtl3PGCMmYwFRfhvIgxJvLSXhhVI1zM8f1qD3Zg7RCjTSzDVyct6sghs9UEgBEQ==} + '@vitest/utils@3.2.4': resolution: {integrity: sha512-fB2V0JFrQSMsCo9HiSq3Ezpdv4iYaXRG1Sx8edX3MwxfyNn83mKiGzOcH+Fkxt4MHxr3y42fQi1oeAInqgX2QA==} - '@vitest/utils@4.1.3': - resolution: {integrity: sha512-Pc/Oexse/khOWsGB+w3q4yzA4te7W4gpZZAvk+fr8qXfTURZUMj5i7kuxsNK5mP/dEB6ao3jfr0rs17fHhbHdw==} + '@vitest/utils@4.1.4': + resolution: {integrity: sha512-13QMT+eysM5uVGa1rG4kegGYNp6cnQcsTc67ELFbhNLQO+vgsygtYJx2khvdt4gVQqSSpC/KT5FZZxUpP3Oatw==} - '@voidzero-dev/vite-plus-core@0.1.16': - resolution: {integrity: sha512-fOyf14CXjcXqANFs2fCXEX+0Tn9ZjmqfFV+qTnARwIF1Kzl8WquO4XtvlDgs/fTQ91H4AyoNUgkvWdKS+C4xYA==} + '@voidzero-dev/vite-plus-core@0.1.18': + resolution: {integrity: sha512-3PmXOL26yHzlw8ET9SwXCmglGzUYq2fOTYf2t0mxvVIs7ua3bnf6tOnmR+6YX5k1Ez26B0ooYzx+znc8k+CAMw==} engines: {node: ^20.19.0 || >=22.12.0} peerDependencies: '@arethetypeswrong/core': ^0.18.1 - '@tsdown/css': 0.21.7 - '@tsdown/exe': 0.21.7 + '@tsdown/css': 0.21.8 + '@tsdown/exe': 0.21.8 '@types/node': ^20.19.0 || >=22.12.0 '@vitejs/devtools': ^0.1.0 - esbuild: ^0.28.0 + esbuild: 0.27.2 jiti: '>=1.21.0' less: ^4.0.0 publint: ^0.3.0 @@ -4538,54 +4645,56 @@ packages: yaml: optional: true - '@voidzero-dev/vite-plus-darwin-arm64@0.1.16': - resolution: {integrity: sha512-InG0ZmuGh7DTrn7zWQ0UvKapElphKI6G1oYfys+jraedG70EhIIee9gtO+mTE1T0bF67SgAcLXwNyaiNda0XwA==} + '@voidzero-dev/vite-plus-darwin-arm64@0.1.18': + resolution: {integrity: sha512-bw2pWWE8RZRELWjXcdxdmRaOaYjmGmsxEm23TxvGxQXFb7k9l51W8tpjxariPGLxrEl+Cw5u601IL5LASaPJ5w==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [darwin] - '@voidzero-dev/vite-plus-darwin-x64@0.1.16': - resolution: {integrity: sha512-LGNrECstuhkCRKRj/dE98Xcprw8HU3VMIMJnZsnDR2C5RB2HADNIu21at/a/G3giA9eWm7uhtPp9FvUtTCK9TA==} + '@voidzero-dev/vite-plus-darwin-x64@0.1.18': + resolution: {integrity: sha512-8TFj6yJNsumoH+yFc+6zf3g2UuzvrPHq2FAAVORffaVZ29PWnDSsXjegaIBmoAtGO5Xb4lcilQx7NoF9hONrZg==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [darwin] - '@voidzero-dev/vite-plus-linux-arm64-gnu@0.1.16': - resolution: {integrity: sha512-AoFKu6dIOtlkp/mwmtU8ES2uzoaxCHhIym1Tk7qMxyvke4IXnye6VDc4kPMRQwD8mwR3T3bO0HuaEEHxrIWDxw==} + '@voidzero-dev/vite-plus-linux-arm64-gnu@0.1.18': + resolution: {integrity: sha512-xHRqncKanOZ0zNnZSufL4Yx/gWrIFkCjU6jFzCukBOOCrcemq3SrALPHrNf+Nw1RLwNptGUZn2Vx/IjRLzUQDw==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [linux] libc: [glibc] - '@voidzero-dev/vite-plus-linux-arm64-musl@0.1.16': - resolution: {integrity: sha512-PloCsGTRIhcXIpUOJ6PqVG8gYNpq+ooJNyqy5sQ82BRnJuo8oV7uBLFvg0X9B3Bzh+vO1F8/+92+o5TiL35JMg==} + '@voidzero-dev/vite-plus-linux-arm64-musl@0.1.18': + resolution: {integrity: sha512-CA6XxZbkT8lYwWzS2yAj6exr7nHl3R8Sz+ZdOhYCU4yR2qvzGatdVgFr7oPnrkHLF426cHJ172rmNNj8NKie/w==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [linux] libc: [musl] - '@voidzero-dev/vite-plus-linux-x64-gnu@0.1.16': - resolution: {integrity: sha512-nY9/2g+qjhwsW5U3MrFLlx+bOBsdOJiO2HzbxQy7jo/S3jPTnXhFlrRegQuAmqrHAXrSdNwgblgRpICKhx1xZg==} + '@voidzero-dev/vite-plus-linux-x64-gnu@0.1.18': + resolution: {integrity: sha512-xBO3MtLGVASPjH/GDRxexfLCT0othVpiFMdEQ83Y+woVNbrrzcdQTGFUuFG4cAiMhtmjytyFwPBtZ76BWsDO3w==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [linux] libc: [glibc] - '@voidzero-dev/vite-plus-linux-x64-musl@0.1.16': - resolution: {integrity: sha512-JGKEAMoXqzdr9lHT/13uRNV9uzrSYXAFhjAfIC8WEQMG2VUFksvq5/TOc26hzmzbqu+bxRmfN8h1aVTDL8KwFg==} + 
'@voidzero-dev/vite-plus-linux-x64-musl@0.1.18': + resolution: {integrity: sha512-ADNis6SMarY7i8+b2ynUJ1PiqCHqnVwY7EQ+fSGug5zZ+W/cZq14+VWPxOvGR9LJk+iol8XuqsHy4BaV2+gjzw==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [linux] libc: [musl] - '@voidzero-dev/vite-plus-test@0.1.16': - resolution: {integrity: sha512-d/rJPX/heMzoAFdnpZsp04MAa6nw1yH1tA4mVCV4m8goVcE9nAvt69mjLMzE8N/rYIQOSgenf3hDXuQRuD6OKQ==} + '@voidzero-dev/vite-plus-test@0.1.18': + resolution: {integrity: sha512-dovC2kJgiwMI8ay0i+3NvQGCDWPj8HQB2ONP/HbdJ5/XQVPq13+BihnCq8/ztz6uGhiDD8Nu4OZ3RgB14uvTfA==} engines: {node: ^20.0.0 || ^22.0.0 || >=24.0.0} peerDependencies: '@edge-runtime/vm': '*' '@opentelemetry/api': ^1.9.0 '@types/node': ^20.0.0 || ^22.0.0 || >=24.0.0 - '@vitest/ui': 4.1.2 + '@vitest/coverage-istanbul': 4.1.4 + '@vitest/coverage-v8': 4.1.4 + '@vitest/ui': 4.1.4 happy-dom: '*' jsdom: '*' vite: ^6.0.0 || ^7.0.0 || ^8.0.0 @@ -4596,6 +4705,10 @@ packages: optional: true '@types/node': optional: true + '@vitest/coverage-istanbul': + optional: true + '@vitest/coverage-v8': + optional: true '@vitest/ui': optional: true happy-dom: @@ -4603,14 +4716,14 @@ packages: jsdom: optional: true - '@voidzero-dev/vite-plus-win32-arm64-msvc@0.1.16': - resolution: {integrity: sha512-IugPUCLY7HmiPcCeuHKUqO1+G2vxHnYzAGhS02AixD0sJLTAIKCUANDOiVUFf/HMw+jh/UkugW7MWek8lf/JrQ==} + '@voidzero-dev/vite-plus-win32-arm64-msvc@0.1.18': + resolution: {integrity: sha512-EcDETMHG8xgjIlMizIu/wf0UtRZLGz+lHFvYFZVCkz4vLLz93a06vZ+3Oi9xY2Kc8aOHsCf8Gj5/dox/03cscw==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [arm64] os: [win32] - '@voidzero-dev/vite-plus-win32-x64-msvc@0.1.16': - resolution: {integrity: sha512-tq93CIeMs92HF7rdylJknRiyzMOWMKCmpw+g8nl5Q5nmUDNLUsrL3CGfbyqjgbruuPnIr761r9MfydPqZU/cYg==} + '@voidzero-dev/vite-plus-win32-x64-msvc@0.1.18': + resolution: {integrity: sha512-jBgL4ZjSJJu3FDcrqj4muzbr0WKlU6Ym1ilHQnq8R+2TRvE0AtvAMMuphICDslZGi6EK3fwJ+r2Lv7GU1AipQA==} engines: {node: ^20.19.0 || >=22.12.0} cpu: [x64] os: [win32] @@ -4931,6 +5044,10 @@ packages: resolution: {integrity: sha512-4zNhdJD/iOjSH0A05ea+Ke6MU5mmpQcbQsSOkgdaUMJ9zTlDTD/GYlwohmIE2u0gaxHYiVHEn1Fw9mZ/ktJWgw==} engines: {node: '>=18'} + chai@6.2.2: + resolution: {integrity: sha512-NUPRluOfOiTKBKvWPtSD4PhFvWCqOi0BGStNWs57X9js7XGTprSmFoz5F0tWhR4WPjNeR9jXqdC7/UpSJTnlRg==} + engines: {node: '>=18'} + chalk@4.1.1: resolution: {integrity: sha512-diHzdDKxcU+bAsUboHLPEDQiw0qEe0qd7SYUn3HgcFlWgbDcfLGswOHYeGrHKzG9z6UYf01d9VFMfZxPM1xZSg==} engines: {node: '>=10'} @@ -5325,6 +5442,9 @@ packages: dagre-d3-es@7.0.14: resolution: {integrity: sha512-P4rFMVq9ESWqmOgK+dlXvOtLwYg0i7u0HBGJER0LZDJT2VHIPAMZ/riPxqJceWMStH5+E61QxFra9kIS3AqdMg==} + date-fns@4.1.0: + resolution: {integrity: sha512-Ukq0owbQXxa/U3EGtsdVBkR1w7KOQ5gIBqdH2hkvknzZPYvBxb/aa6E8L7tmjFtkwZBu3UXBbjIgPo/Ez4xaNg==} + dayjs@1.11.20: resolution: {integrity: sha512-YbwwqR/uYpeoP4pu043q+LTDLFBLApUP6VxRihdfNTqu4ubqMlGDLd6ErXhEgsyvY0K6nCs7nggYumAN+9uEuQ==} @@ -5419,8 +5539,8 @@ packages: resolution: {integrity: sha512-6obghkliLdmKa56xdbLOpUZ43pAR6xFy1uOrxBaIDjT+yaRuuybLjGS9eVBoSR/UPU5fq3OXClEHLJNGvbxKpQ==} engines: {node: '>=20'} - dompurify@3.3.3: - resolution: {integrity: sha512-Oj6pzI2+RqBfFG+qOaOLbFXLQ90ARpcGG6UePL82bJLtdsa6CYJD7nmiU8MW9nQNOtCHV3lZ/Bzq1X0QYbBZCA==} + dompurify@3.4.0: + resolution: {integrity: sha512-nolgK9JcaUXMSmW+j1yaSvaEaoXYHwWyGJlkoCTghc97KgGDDSnpoU/PlEnw63Ah+TGKFOyY+X5LnxaWbCSfXg==} domutils@3.2.2: resolution: {integrity: sha512-6kZKyUajlDuqlHKVX1w7gyslj9MPIXzIFiz/rGu35uC1wMi+kMhQwGhl4lt9unC9Vb9INnY9Z3/ZA3+FhASLaw==} @@ 
-5479,6 +5599,13 @@ packages: end-of-stream@1.4.5: resolution: {integrity: sha512-ooEGc6HP26xXq/N+GCGOT0JKCLDGrq2bQUZrQ7gyrJiZANJ/8YDTxTpQBXGMn+WbIQXNVpyWymm7KYVICQnyOg==} + engine.io-client@6.6.4: + resolution: {integrity: sha512-+kjUJnZGwzewFDw951CDWcwj35vMNf2fcj7xQWOctq1F2i1jkDdVvdFG9kM/BEChymCH36KgjnW0NsL58JYRxw==} + + engine.io-parser@5.2.3: + resolution: {integrity: sha512-HqD3yTBfnBxIrbnM1DoD6Pcq8NECnh8d4As1Qgh0z5Gg3jRRIqijury0CL3ghu/edArpUYiYqQiDUQBIs4np3Q==} + engines: {node: '>=10.0.0'} + enhanced-resolve@5.20.1: resolution: {integrity: sha512-Qohcme7V1inbAfvjItgw0EaxVX5q2rdVEZHRBrEQdRZTssLDGsL8Lwrznl8oQ/6kuTJONLaDcGjkNP247XEhcA==} engines: {node: '>=10.13.0'} @@ -5584,8 +5711,8 @@ packages: peerDependencies: eslint: '*' - eslint-plugin-better-tailwindcss@4.3.2: - resolution: {integrity: sha512-1DLX2QmHmOj3u667f8vEI0zKoRc0Y1qJt33tfIeIkpTyzWaz9b2GzWBLD4bR+WJ/kxzC0Skcbx7cMerRWQ6OYg==} + eslint-plugin-better-tailwindcss@4.4.1: + resolution: {integrity: sha512-ueFciTgj2M+4YklYdtvpbMA3Nn22z60sQoSA4bnctOP4h0daUhJKAsDaGi888N00qWtIUqeK5Ikt6xnNnHPg2g==} engines: {node: ^20.19.0 || ^22.12.0 || >=23.0.0} peerDependencies: eslint: ^7.0.0 || ^8.0.0 || ^9.0.0 || ^10.0.0 @@ -5639,8 +5766,8 @@ packages: peerDependencies: eslint: '>=9.38.0' - eslint-plugin-markdown-preferences@0.41.0: - resolution: {integrity: sha512-Pu150jKH1Cf5sW/Igck0VbuT0A9qFpIPG1dDvyAt2lG8tA3VzPDkwxBusO8JqQ9NRIrm3pat0X6cfanSki3WZQ==} + eslint-plugin-markdown-preferences@0.41.1: + resolution: {integrity: sha512-Xi4rlT7oBZ8PMGDl7J9khgO2vF9X0F/6ag05/25Vyq7r3llaK95x9D6DpzXidxC2Gagl/e8bp2Hw47r4I3wWSA==} engines: {node: ^20.19.0 || ^22.12.0 || >=24.0.0} peerDependencies: '@eslint/markdown': ^7.4.0 || ^8.0.0 @@ -5652,8 +5779,10 @@ packages: peerDependencies: eslint: '>=8.23.0' - eslint-plugin-no-barrel-files@1.2.2: - resolution: {integrity: sha512-DF2bnHuEHClmL1+maBO5TD2HnnRsLj8J69FFtVkjObkELyjCXaWBsk+URJkqBpdOWURlL+raGX9AEpWCAiOV0g==} + eslint-plugin-no-barrel-files@1.3.1: + resolution: {integrity: sha512-y7OX5kyH7PMNRFhLF6SmM4JapxvaxExrgWPndPNTzilpO5uBqybuN480g3E8TTxT3OLOOhQDynmcJ0dnipIyNA==} + peerDependencies: + eslint: ^8.0.0 || ^9.0.0 || ^10.0.0 eslint-plugin-no-only-tests@3.3.0: resolution: {integrity: sha512-brcKcxGnISN2CcVhXJ/kEQlNa0MEfGRtwKtWA16SkqXHKitaKIMrfemJKLKX1YqDU5C/5JY3PvZXd5jEW04e0Q==} @@ -5883,6 +6012,10 @@ packages: resolution: {integrity: sha512-XYfuKMvj4O35f/pOXLObndIRvyQ+/+6AhODh+OKWj9S9498pHHn/IMszH+gt0fBCRWMNfk1ZSp5x3AifmnI2vg==} engines: {node: '>=6'} + expect-type@1.3.0: + resolution: {integrity: sha512-knvyeauYhqjOYvQ66MznSMs83wmHrCycNEN6Ao+2AeYEfxUIkuiVxdEa1qlGEPK+We3n0THiDciYSsCcgW/DoA==} + engines: {node: '>=12.0.0'} + exsolve@1.0.8: resolution: {integrity: sha512-LmDxfWXwcTArk8fUEnOfSZpHOJ6zOMUJKOtFLFqJLoKJetuQG874Uc7/Kki7zFLzYybmZhp1M7+98pfMqeX8yA==} @@ -6055,8 +6188,8 @@ packages: resolution: {integrity: sha512-7ACyT3wmyp3I61S4fG682L0VA2RGD9otkqGJIwNUMF1SWUombIIk+af1unuDYgMm082aHYwD+mzJvv9Iu8dsgg==} engines: {node: '>=18'} - globals@17.4.0: - resolution: {integrity: sha512-hjrNztw/VajQwOLsMNT1cbJiH2muO3OROCHnbehc8eY5JyD2gqz4AcMHPqgaOR59DjgUjYAYLeH699g/eWi2jw==} + globals@17.5.0: + resolution: {integrity: sha512-qoV+HK2yFl/366t2/Cb3+xxPUo5BuMynomoDmiaZBIdbs+0pYbjfZU+twLhGKp4uCZ/+NbtpVepH5bGCxRyy2g==} engines: {node: '>=18'} globrex@0.1.2: @@ -6073,8 +6206,8 @@ packages: hachure-fill@0.5.2: resolution: {integrity: sha512-3GKBOn+m2LX9iq+JC1064cSFprJY4jL1jCXTcpnfER5HYE2l/4EfWSGzkPa/ZDBmYI0ZOEj5VHV/eKnPGkHuOg==} - happy-dom@20.8.9: - resolution: {integrity: 
sha512-Tz23LR9T9jOGVZm2x1EPdXqwA37G/owYMxRwU0E4miurAtFsPMQ1d2Jc2okUaSjZqAFz2oEn3FLXC5a0a+siyA==} + happy-dom@20.9.0: + resolution: {integrity: sha512-GZZ9mKe8r646NUAf/zemnGbjYh4Bt8/MqASJY+pSm5ZDtc3YQox+4gsLI7yi1hba6o+eCsGxpHn5+iEVn31/FQ==} engines: {node: '>=20.0.0'} has-ansi@4.0.1: @@ -6134,8 +6267,8 @@ packages: resolution: {integrity: sha512-Ox1pJVrDCyGHMG9CFg1tmrRUMRPRsAWYc/PinY0XzJU4K7y7vjNoLKIQ7BR5UJMCxNN8EM1MNDmHWA/B3aZUuw==} engines: {node: '>=6'} - hono@4.12.12: - resolution: {integrity: sha512-p1JfQMKaceuCbpJKAPKVqyqviZdS0eUxH9v82oWo1kb9xjQ5wA6iP3FNVAPDFlz5/p7d45lO+BpSk1tuSZMF4Q==} + hono@4.12.14: + resolution: {integrity: sha512-am5zfg3yu6sqn5yjKBNqhnTX7Cv+m00ox+7jbaKkrLMRJ4rAdldd1xPd/JzbBWspqaQv6RSTrgFN95EsfhC+7w==} engines: {node: '>=16.9.0'} hosted-git-info@9.0.2: @@ -6439,8 +6572,8 @@ packages: khroma@2.1.0: resolution: {integrity: sha512-Ls993zuzfayK269Svk9hzpeGUKob/sIgZzyHYdjQoAdQetRKpOLj+k/QQQ/6Qi0Yz65mlROrfd+Ev+1+7dz9Kw==} - knip@6.3.1: - resolution: {integrity: sha512-22kLJloVcOVOAudCxlFOC0ICAMme7dKsS7pVTEnrmyKGpswb8ieznvAiSKUeFVDJhb01ect6dkDc1Ha1g1sPpg==} + knip@6.4.1: + resolution: {integrity: sha512-Ry+ywmDFSZvKp/jx7LxMgsZWRTs931alV84e60lh0Stf6kSRYqSIUTkviyyDFRcSO3yY1Kpbi83OirN+4lA2Xw==} engines: {node: ^20.19.0 || >=22.12.0} hasBin: true @@ -6480,8 +6613,8 @@ packages: '@lexical/utils': '>=0.28.0' lexical: '>=0.28.0' - lexical@0.42.0: - resolution: {integrity: sha512-GY9Lg3YEIU7nSFaiUlLspZ1fm4NfIcfABaxy9nT+fRVDkX7iV005T5Swil83gXUmxFUNKGal3j+hUxHOUDr+Aw==} + lexical@0.43.0: + resolution: {integrity: sha512-waSeXyt1HxTFpU8KNRA3IQcvjvpw0lZNaSbGopfOi4bLV0FF9zYpqiScTnEUMP/b1W7qWmD4Z2Detw43XICxqQ==} lib0@0.2.117: resolution: {integrity: sha512-DeXj9X5xDCjgKLU/7RR+/HQEVzuuEUiwldwOGsHK/sfAfELGWEyTcf0x+uOvCvK3O2zPmZePXWL85vtia6GyZw==} @@ -6601,6 +6734,9 @@ packages: resolution: {integrity: sha512-lyuxPGr/Wfhrlem2CL/UcnUc1zcqKAImBDzukY7Y5F/yQiNdko6+fRLevlw1HgMySw7f611UIY408EtxRSoK3Q==} hasBin: true + loro-crdt@1.10.8: + resolution: {integrity: sha512-GvH8fSJST1VDHRGzlQml80pBYoFbIP4ULeV1S8fD4ffmA8m+icoPORyVUW2AkJBY3dxKIcMMn0WqaJmpCmnbkQ==} + loupe@3.2.1: resolution: {integrity: sha512-CdzqowRJCeLU72bHvWqwRBBlLcMEtIvGrlvef74kMnV2AolS9Y8xUv1I0U/MNAWMhBlKIoyuEgoJ0t/bbwHbLQ==} @@ -6720,8 +6856,8 @@ packages: mdn-data@2.0.30: resolution: {integrity: sha512-GaqWWShW4kv/G9IEucWScBx9G1/vsFZZJUO+tD26M8J8z3Kw5RDQjaoZe03YAClgeS/SWPOcb4nkFBTEi5DUEA==} - mdn-data@2.23.0: - resolution: {integrity: sha512-786vq1+4079JSeu2XdcDjrhi/Ry7BWtjDl9WtGPWLiIHb2T66GvIVflZTBoSNZ5JqTtJGYEVMuFA/lbQlMOyDQ==} + mdn-data@2.27.1: + resolution: {integrity: sha512-9Yubnt3e8A0OKwxYSXyhLymGW4sCufcLG6VdiDdUGVkPhpqLxlvP5vl1983gQjJl3tqbrM731mjaZaP68AgosQ==} merge-stream@2.0.0: resolution: {integrity: sha512-abv/qOcuPfk3URPfDzmZU1LKmuw8kT+0nIHvKrKgFrwifol/doWcdA4ZqsWQ8ENrFKkd67Mfpo/LovbIUsbt3w==} @@ -7068,8 +7204,8 @@ packages: oxc-resolver@11.19.1: resolution: {integrity: sha512-qE/CIg/spwrTBFt5aKmwe3ifeDdLfA2NESN30E42X/lII5ClF8V7Wt6WIJhcGZjp0/Q+nQ+9vgxGk//xZNX2hg==} - oxfmt@0.43.0: - resolution: {integrity: sha512-KTYNG5ISfHSdmeZ25Xzb3qgz9EmQvkaGAxgBY/p38+ZiAet3uZeu7FnMwcSQJg152Qwl0wnYAxDc+Z/H6cvrwA==} + oxfmt@0.45.0: + resolution: {integrity: sha512-0o/COoN9fY50bjVeM7PQsNgbhndKurBIeTIcspW033OumksjJJmIVDKjAk5HMwU/GHTxSOdGDdhJ6BRzGPmsHg==} engines: {node: ^20.19.0 || >=22.12.0} hasBin: true @@ -7077,8 +7213,8 @@ packages: resolution: {integrity: sha512-/Uc9TQyN1l8w9QNvXtVHYtz+SzDJHKpb5X0UnHodl0BVzijUPk0LPlDOHAvogd1UI+iy9ZSF6gQxEqfzUxCULQ==} hasBin: true - oxlint@1.58.0: - resolution: {integrity: 
sha512-t4s9leczDMqlvOSjnbCQe7gtoLkWgBGZ7sBdCJ9EOj5IXFSG/X7OAzK4yuH4iW+4cAYe8kLFbC8tuYMwWZm+Cg==} + oxlint@1.60.0: + resolution: {integrity: sha512-tnRzTWiWJ9pg3ftRWnD0+Oqh78L6ZSwcEudvCZaER0PIqiAnNyXj5N1dPwjmNpDalkKS9m/WMLN1CTPUBPmsgw==} engines: {node: ^20.19.0 || >=22.12.0} hasBin: true peerDependencies: @@ -7201,8 +7337,8 @@ packages: resolution: {integrity: sha512-QP88BAKvMam/3NxH6vj2o21R6MjxZUAd6nlwAS/pnGvN9IVLocLHxGYIzFhg6fUQ+5th6P4dv4eW9jX3DSIj7A==} engines: {node: '>=12'} - pinyin-pro@3.28.0: - resolution: {integrity: sha512-mMRty6RisoyYNphJrTo3pnvp3w8OMZBrXm9YSWkxhAfxKj1KZk2y8T2PDIZlDDRsvZ0No+Hz6FI4sZpA6Ey25g==} + pinyin-pro@3.28.1: + resolution: {integrity: sha512-oqz8ulwRgtUXRi0vbqEfGNly19zpyCxYrjhkk5TibGcgSW6eNwS5woajCXRwqURi8Ehc2yOFTiB4uNoZ+NJOnA==} pixelmatch@7.1.0: resolution: {integrity: sha512-1wrVzJ2STrpmONHKBy228LM1b84msXDUoAzVEl0R8Mz4Ce6EPr+IVtxm8+yvrqLYMHswREkjYFaMxnyGnaY3Ng==} @@ -7720,6 +7856,9 @@ packages: resolution: {integrity: sha512-eAVKTMedR5ckPo4xne/PjYQYrU3qx78gtJZ+sHlXEg5IHhhoQhMfZVzetTYuaJS0L2Ef3AcCRzCHV8T0WI6nIQ==} engines: {node: '>=20'} + siginfo@2.0.0: + resolution: {integrity: sha512-ybx0WO1/8bSBLEWXZvEd7gMW3Sn3JFlW3TvX1nREbDLRNQNaeNN8WK0meBwPdAaOI7TtRRRJn/Es1zhrrCHu7g==} + simple-concat@1.0.1: resolution: {integrity: sha512-cSFtAPtRhljv69IK0hTVZQ+OfE9nePi/rtJmw5UjHeVyVroEqJXP1sFztKUy1qU+xvz3u/sfYJLa947b7nAN2Q==} @@ -7740,6 +7879,14 @@ packages: resolution: {integrity: sha512-dWUG8F5sIIARXih1DTaQAX4SsiTXhInKf1buxdY9DIg4ZYPZK5nGM1VRIYmEbDbsHt7USo99xSLFu5Q1IqTmsg==} engines: {node: '>= 18'} + socket.io-client@4.8.3: + resolution: {integrity: sha512-uP0bpjWrjQmUt5DTHq9RuoCBdFJF10cdX9X+a368j/Ft0wmaVgxlrjvK3kjvgCODOMMOz9lcaRzxmso0bTWZ/g==} + engines: {node: '>=10.0.0'} + + socket.io-parser@4.2.6: + resolution: {integrity: sha512-asJqbVBDsBCJx0pTqw3WfesSY0iRX+2xzWEWzrpcH7L6fLzrhyF8WPI8UaeM4YCuDfpwA/cgsdugMsmtz8EJeg==} + engines: {node: '>=10.0.0'} + solid-js@1.9.11: resolution: {integrity: sha512-WEJtcc5mkh/BnHA6Yrg4whlF8g6QwpmXXRg4P2ztPmcKeHHlH4+djYecBLhSpecZY2RRECXYUwIc/C2r3yzQ4Q==} @@ -7784,6 +7931,9 @@ packages: engines: {node: '>=20.16.0'} hasBin: true + stackback@0.0.2: + resolution: {integrity: sha512-1XMJE5fQo1jGH6Y/7ebnwPOBEkIEnT4QF32d5R1+VXdXveM0IBMJt8zfaxX1P3QhVwrYe+576+jkANtSS2mBbw==} + stackframe@1.3.4: resolution: {integrity: sha512-oeVtt7eWQS+Na6F//S4kJ2K2VbRlS9D43mAlMyVpVWovy9o+jfgH8O9agzANzaiLjclA0oYzUXEM4PurhSUChw==} @@ -7916,8 +8066,8 @@ packages: resolution: {integrity: sha512-yEFYrVhod+hdNyx7g5Bnkkb0G6si8HJurOoOEgC8B/O0uXLHlaey/65KRv6cuWBNhBgHKAROVpc7QyYqE5gFng==} engines: {node: '>=20'} - tailwind-csstree@0.1.5: - resolution: {integrity: sha512-ZHCKXz+TcBj7CJYStiuAtNenPpdHMrhgotOSNJ3UQTSTgwTfAyoyTA2SNW4oD8+2T6xt6awM7CZSU2+PXx9V3w==} + tailwind-csstree@0.3.1: + resolution: {integrity: sha512-v147gLOR+E+9H4dNaP9rBeS/S/CTQJMRItlX9jLOXjdBGfSRauLwiz7LBCViaQmn6URXIlOdN6iMzSzOaeoUUw==} engines: {node: '>=18.18'} peerDependencies: '@eslint/css': '>=1.0.0' @@ -7997,6 +8147,10 @@ packages: resolution: {integrity: sha512-j2Zq4NyQYG5XMST4cbs02Ak8iJUdxRM0XI5QyxXuZOzKOINmWurp3smXu3y5wDcJrptwpSjgXHzIQxR0omXljQ==} engines: {node: '>=12.0.0'} + tinyglobby@0.2.16: + resolution: {integrity: sha512-pn99VhoACYR8nFHhxqix+uvsbXineAasWm5ojXoN8xEwK5Kd3/TrhNn1wByuD52UxWRLy8pu+kRMniEi6Eq9Zg==} + engines: {node: '>=12.0.0'} + tinypool@2.1.0: resolution: {integrity: sha512-Pugqs6M0m7Lv1I7FtxN4aoyToKg1C4tu+/381vH35y8oENM/Ai7f7C4StcoK4/+BSw9ebcS8jRiVrORFKCALLw==} engines: {node: ^20.0.0 || >=22.0.0} @@ -8140,8 +8294,8 @@ packages: resolution: {integrity: 
sha512-X2wH19RAPZE3+ldGicOkoj/SIA83OIxcJ6Cuaw23hf8Xc6fQpvZXY0SftE2JgS0QhYLUG4uwodSI3R53keyh7w==} engines: {node: '>=14'} - undici-types@7.18.2: - resolution: {integrity: sha512-AsuCzffGHJybSaRrmr5eHr81mwJU3kjw6M+uprWvCXiNeN9SOGwQ3Jn8jb8m3Z6izVgknn1R0FTCEAP2QrLY/w==} + undici-types@7.19.2: + resolution: {integrity: sha512-qYVnV5OEm2AW8cJMCpdV20CDyaN3g0AjDlOGf1OW4iaDEx8MwdtChUp4zu4H0VP3nDRF/8RKWH+IPp9uW0YGZg==} undici@7.24.0: resolution: {integrity: sha512-jxytwMHhsbdpBXxLAcuu0fzlQeXCNnWdDyRHpvWsUl8vd98UwYdl9YTyn8/HcpcJPC3pwUveefsa3zTxyD/ERg==} @@ -8346,8 +8500,8 @@ packages: storybook: ^0.0.0-0 || ^9.0.0 || ^10.0.0 || ^10.0.0-0 || ^10.1.0-0 || ^10.2.0-0 || ^10.3.0-0 || ^10.4.0-0 vite: ^5.0.0 || ^6.0.0 || ^7.0.0 || ^8.0.0 - vite-plus@0.1.16: - resolution: {integrity: sha512-sgYHc5zWLSDInaHb/abvEA7UOwh7sUWuyNt+Slphj55jPvzodT8Dqw115xyKwDARTuRFSpm1eo/t58qZ8/NylQ==} + vite-plus@0.1.18: + resolution: {integrity: sha512-RiWUoOmQiJMtd4Dfm6WD0v0Selqh/nQzmaGVIrkfnr+2s5UxGVZy7n2TCO5ZnR7w9noMIgtUAQN8GtKhwHEiOQ==} engines: {node: ^20.19.0 || >=22.12.0} hasBin: true @@ -8377,6 +8531,47 @@ packages: peerDependencies: vitest: ^3.0.0 || ^4.0.0 + vitest@4.1.4: + resolution: {integrity: sha512-tFuJqTxKb8AvfyqMfnavXdzfy3h3sWZRWwfluGbkeR7n0HUev+FmNgZ8SDrRBTVrVCjgH5cA21qGbCffMNtWvg==} + engines: {node: ^20.0.0 || ^22.0.0 || >=24.0.0} + hasBin: true + peerDependencies: + '@edge-runtime/vm': '*' + '@opentelemetry/api': ^1.9.0 + '@types/node': ^20.0.0 || ^22.0.0 || >=24.0.0 + '@vitest/browser-playwright': 4.1.4 + '@vitest/browser-preview': 4.1.4 + '@vitest/browser-webdriverio': 4.1.4 + '@vitest/coverage-istanbul': 4.1.4 + '@vitest/coverage-v8': 4.1.4 + '@vitest/ui': 4.1.4 + happy-dom: '*' + jsdom: '*' + vite: ^6.0.0 || ^7.0.0 || ^8.0.0 + peerDependenciesMeta: + '@edge-runtime/vm': + optional: true + '@opentelemetry/api': + optional: true + '@types/node': + optional: true + '@vitest/browser-playwright': + optional: true + '@vitest/browser-preview': + optional: true + '@vitest/browser-webdriverio': + optional: true + '@vitest/coverage-istanbul': + optional: true + '@vitest/coverage-v8': + optional: true + '@vitest/ui': + optional: true + happy-dom: + optional: true + jsdom: + optional: true + void-elements@3.1.0: resolution: {integrity: sha512-Dhxzh5HZuiHQhbvTW9AMetFfBHDMYpo23Uo9btPXgdYP+3T5S+p+jgNy7spra+veYhBP2dCSgxR/i2Y02h5/6w==} engines: {node: '>=0.10.0'} @@ -8456,6 +8651,11 @@ packages: engines: {node: '>= 8'} hasBin: true + why-is-node-running@2.3.0: + resolution: {integrity: sha512-hUrmaWBdVDcxvYqnyh09zunKzROWjbZTiNy8dBEjkS7ehEDQibXJ7XvlmtbwuTclUiIyN+CyXQD4Vmko8fNm8w==} + engines: {node: '>=8'} + hasBin: true + word-wrap@1.2.5: resolution: {integrity: sha512-BN22B5eaMMI9UMtjrGd5g5eCYPpCPDUy0FJXbYsaT5zYxjFOckS53SQDE3pWkVoWpHXVb3BrYcEN4Twa55B5cA==} engines: {node: '>=0.10.0'} @@ -8463,6 +8663,18 @@ packages: wrappy@1.0.2: resolution: {integrity: sha512-l4Sp/DRseor9wL6EvV2+TuQn63dMkPjZ/sp9XkghTEbV9KlPS1xUsZ3u7/IQO4wxtcFB4bgpQPRcR3QCvezPcQ==} + ws@8.18.3: + resolution: {integrity: sha512-PEIGCY5tSlUt50cqyMXfCzX+oOPqN0vuGqWzbcJ2xvnkzkq46oOpz7dQaTDBdfICb4N14+GARUDw2XV2N4tvzg==} + engines: {node: '>=10.0.0'} + peerDependencies: + bufferutil: ^4.0.1 + utf-8-validate: '>=5.0.2' + peerDependenciesMeta: + bufferutil: + optional: true + utf-8-validate: + optional: true + ws@8.20.0: resolution: {integrity: sha512-sAt8BhgNbzCtgGbt2OxmpuryO63ZoDk/sqaB/znQm94T4fCEsy/yV+7CdC1kJhOU9lboAEU7R3kquuycDoibVA==} engines: {node: '>=10.0.0'} @@ -8491,6 +8703,10 @@ packages: resolution: {integrity: 
sha512-yMqGBqtXyeN1e3TGYvgNgDVZ3j84W4cwkOXQswghol6APgZWaff9lnbvN7MHYJOiXsvGPXtjTYJEiC9J2wv9Eg==} engines: {node: '>=8.0'} + xmlhttprequest-ssl@2.1.2: + resolution: {integrity: sha512-TEU+nJVUUnA4CYJFLvK5X9AOeH4KvDvhIfm0vV1GaQRtchnG0hgK5p8hw/xjv8cunWYCsiPCSDzObPyhEwq3KQ==} + engines: {node: '>=0.4.0'} + yallist@3.1.1: resolution: {integrity: sha512-a4UGQaWPH59mOXUYnAG2ewncQS4i4F43Tv3JoAM+s2VDAmS9NsK8GpDMLrCHPksFT7h3K6TOoUNn2pb7RoXx4g==} @@ -8585,27 +8801,27 @@ snapshots: '@alloc/quick-lru@5.2.0': {} - '@amplitude/analytics-browser@2.38.1': + '@amplitude/analytics-browser@2.39.0': dependencies: - '@amplitude/analytics-core': 2.44.1 - '@amplitude/plugin-autocapture-browser': 1.25.1 - '@amplitude/plugin-custom-enrichment-browser': 0.1.3 - '@amplitude/plugin-network-capture-browser': 1.9.12 - '@amplitude/plugin-page-url-enrichment-browser': 0.7.4 - '@amplitude/plugin-page-view-tracking-browser': 2.9.5 - '@amplitude/plugin-web-vitals-browser': 1.1.27 + '@amplitude/analytics-core': 2.45.0 + '@amplitude/plugin-autocapture-browser': 1.25.2 + '@amplitude/plugin-custom-enrichment-browser': 0.1.4 + '@amplitude/plugin-network-capture-browser': 1.9.13 + '@amplitude/plugin-page-url-enrichment-browser': 0.7.5 + '@amplitude/plugin-page-view-tracking-browser': 2.9.6 + '@amplitude/plugin-web-vitals-browser': 1.1.28 tslib: 2.8.1 - '@amplitude/analytics-client-common@2.4.42': + '@amplitude/analytics-client-common@2.4.43': dependencies: '@amplitude/analytics-connector': 1.6.4 - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 '@amplitude/analytics-types': 2.11.1 tslib: 2.8.1 '@amplitude/analytics-connector@1.6.4': {} - '@amplitude/analytics-core@2.44.1': + '@amplitude/analytics-core@2.45.0': dependencies: '@amplitude/analytics-connector': 1.6.4 '@types/zen-observable': 0.8.3 @@ -8619,48 +8835,48 @@ snapshots: dependencies: js-base64: 3.7.8 - '@amplitude/plugin-autocapture-browser@1.25.1': + '@amplitude/plugin-autocapture-browser@1.25.2': dependencies: - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 tslib: 2.8.1 - '@amplitude/plugin-custom-enrichment-browser@0.1.3': + '@amplitude/plugin-custom-enrichment-browser@0.1.4': dependencies: - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 tslib: 2.8.1 - '@amplitude/plugin-network-capture-browser@1.9.12': + '@amplitude/plugin-network-capture-browser@1.9.13': dependencies: - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 tslib: 2.8.1 - '@amplitude/plugin-page-url-enrichment-browser@0.7.4': + '@amplitude/plugin-page-url-enrichment-browser@0.7.5': dependencies: - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 tslib: 2.8.1 - '@amplitude/plugin-page-view-tracking-browser@2.9.5': + '@amplitude/plugin-page-view-tracking-browser@2.9.6': dependencies: - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 tslib: 2.8.1 - '@amplitude/plugin-session-replay-browser@1.27.6(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0)': + '@amplitude/plugin-session-replay-browser@1.27.7(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0)': dependencies: - '@amplitude/analytics-client-common': 2.4.42 - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-client-common': 2.4.43 + '@amplitude/analytics-core': 2.45.0 '@amplitude/analytics-types': 2.11.1 '@amplitude/rrweb-plugin-console-record': 2.0.0-alpha.36(@amplitude/rrweb@2.0.0-alpha.37) '@amplitude/rrweb-record': 2.0.0-alpha.36 - '@amplitude/session-replay-browser': 
1.35.1(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0) + '@amplitude/session-replay-browser': 1.36.0(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0) idb-keyval: 6.2.2 tslib: 2.8.1 transitivePeerDependencies: - '@amplitude/rrweb' - rollup - '@amplitude/plugin-web-vitals-browser@1.1.27': + '@amplitude/plugin-web-vitals-browser@1.1.28': dependencies: - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-core': 2.45.0 tslib: 2.8.1 web-vitals: 5.1.0 @@ -8705,10 +8921,10 @@ snapshots: base64-arraybuffer: 1.0.2 mitt: 3.0.1 - '@amplitude/session-replay-browser@1.35.1(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0)': + '@amplitude/session-replay-browser@1.36.0(@amplitude/rrweb@2.0.0-alpha.37)(rollup@4.59.0)': dependencies: - '@amplitude/analytics-client-common': 2.4.42 - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-client-common': 2.4.43 + '@amplitude/analytics-core': 2.45.0 '@amplitude/analytics-types': 2.11.1 '@amplitude/experiment-core': 0.7.2 '@amplitude/rrweb-packer': 2.0.0-alpha.36 @@ -8726,24 +8942,24 @@ snapshots: '@amplitude/targeting@0.2.0': dependencies: - '@amplitude/analytics-client-common': 2.4.42 - '@amplitude/analytics-core': 2.44.1 + '@amplitude/analytics-client-common': 2.4.43 + '@amplitude/analytics-core': 2.45.0 '@amplitude/analytics-types': 2.11.1 '@amplitude/experiment-core': 0.7.2 idb: 8.0.0 tslib: 2.8.1 - '@antfu/eslint-config@8.1.1(@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@next/eslint-plugin-next@16.2.3)(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.1(typescript@6.0.2))(@typescript-eslint/utils@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(@vue/compiler-sfc@3.5.31)(eslint-plugin-react-refresh@0.5.2(eslint@10.2.0(jiti@2.6.1)))(eslint@10.2.0(jiti@2.6.1))(oxlint@1.58.0(oxlint-tsgolint@0.20.0))(typescript@6.0.2)': + '@antfu/eslint-config@8.2.0(@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@next/eslint-plugin-next@16.2.3)(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.18)(@vue/compiler-sfc@3.5.31)(eslint-plugin-react-refresh@0.5.2(eslint@10.2.0(jiti@2.6.1)))(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))(typescript@6.0.2)': dependencies: '@antfu/install-pkg': 1.1.0 '@clack/prompts': 1.2.0 - '@e18e/eslint-plugin': 0.3.0(eslint@10.2.0(jiti@2.6.1))(oxlint@1.58.0(oxlint-tsgolint@0.20.0)) + '@e18e/eslint-plugin': 0.3.0(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0)) '@eslint-community/eslint-plugin-eslint-comments': 4.7.1(eslint@10.2.0(jiti@2.6.1)) '@eslint/markdown': 8.0.1 '@stylistic/eslint-plugin': 5.10.0(eslint@10.2.0(jiti@2.6.1)) - '@typescript-eslint/eslint-plugin': 8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/parser': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@vitest/eslint-plugin': 
1.6.14(@typescript-eslint/eslint-plugin@8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/eslint-plugin': 8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/parser': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@vitest/eslint-plugin': 1.6.15(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.18)(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) ansis: 4.2.0 cac: 7.0.0 eslint: 10.2.0(jiti@2.6.1) @@ -8751,7 +8967,7 @@ snapshots: eslint-flat-config-utils: 3.1.0 eslint-merge-processors: 2.0.0(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-antfu: 3.2.2(eslint@10.2.0(jiti@2.6.1)) - eslint-plugin-command: 3.5.2(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.1(typescript@6.0.2))(@typescript-eslint/utils@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-command: 3.5.2(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-import-lite: 0.6.0(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-jsdoc: 62.9.0(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-jsonc: 3.1.2(eslint@10.2.0(jiti@2.6.1)) @@ -8762,11 +8978,65 @@ snapshots: eslint-plugin-regexp: 3.1.0(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-toml: 1.3.1(eslint@10.2.0(jiti@2.6.1)) eslint-plugin-unicorn: 64.0.0(eslint@10.2.0(jiti@2.6.1)) - eslint-plugin-unused-imports: 4.4.1(@typescript-eslint/eslint-plugin@8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)) - eslint-plugin-vue: 10.8.0(@stylistic/eslint-plugin@5.10.0(eslint@10.2.0(jiti@2.6.1)))(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(vue-eslint-parser@10.4.0(eslint@10.2.0(jiti@2.6.1))) + eslint-plugin-unused-imports: 4.4.1(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-vue: 10.8.0(@stylistic/eslint-plugin@5.10.0(eslint@10.2.0(jiti@2.6.1)))(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(vue-eslint-parser@10.4.0(eslint@10.2.0(jiti@2.6.1))) eslint-plugin-yml: 3.3.1(eslint@10.2.0(jiti@2.6.1)) eslint-processor-vue-blocks: 2.0.0(@vue/compiler-sfc@3.5.31)(eslint@10.2.0(jiti@2.6.1)) - globals: 17.4.0 + globals: 17.5.0 + local-pkg: 1.1.2 + parse-gitignore: 2.0.0 + toml-eslint-parser: 1.0.3 + vue-eslint-parser: 10.4.0(eslint@10.2.0(jiti@2.6.1)) + yaml-eslint-parser: 2.0.0 + optionalDependencies: + '@eslint-react/eslint-plugin': 
3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@next/eslint-plugin-next': 16.2.3 + eslint-plugin-react-refresh: 0.5.2(eslint@10.2.0(jiti@2.6.1)) + transitivePeerDependencies: + - '@eslint/json' + - '@typescript-eslint/rule-tester' + - '@typescript-eslint/typescript-estree' + - '@typescript-eslint/utils' + - '@vue/compiler-sfc' + - oxlint + - supports-color + - typescript + - vitest + + '@antfu/eslint-config@8.2.0(@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@next/eslint-plugin-next@16.2.3)(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@vue/compiler-sfc@3.5.31)(eslint-plugin-react-refresh@0.5.2(eslint@10.2.0(jiti@2.6.1)))(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))(typescript@6.0.2)(vitest@4.1.4)': + dependencies: + '@antfu/install-pkg': 1.1.0 + '@clack/prompts': 1.2.0 + '@e18e/eslint-plugin': 0.3.0(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0)) + '@eslint-community/eslint-plugin-eslint-comments': 4.7.1(eslint@10.2.0(jiti@2.6.1)) + '@eslint/markdown': 8.0.1 + '@stylistic/eslint-plugin': 5.10.0(eslint@10.2.0(jiti@2.6.1)) + '@typescript-eslint/eslint-plugin': 8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/parser': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@vitest/eslint-plugin': 1.6.15(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)(vitest@4.1.4) + ansis: 4.2.0 + cac: 7.0.0 + eslint: 10.2.0(jiti@2.6.1) + eslint-config-flat-gitignore: 2.3.0(eslint@10.2.0(jiti@2.6.1)) + eslint-flat-config-utils: 3.1.0 + eslint-merge-processors: 2.0.0(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-antfu: 3.2.2(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-command: 3.5.2(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-import-lite: 0.6.0(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-jsdoc: 62.9.0(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-jsonc: 3.1.2(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-n: 17.24.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + eslint-plugin-no-only-tests: 3.3.0 + eslint-plugin-perfectionist: 5.8.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + eslint-plugin-pnpm: 1.6.0(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-regexp: 3.1.0(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-toml: 1.3.1(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-unicorn: 64.0.0(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-unused-imports: 4.4.1(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)) + eslint-plugin-vue: 10.8.0(@stylistic/eslint-plugin@5.10.0(eslint@10.2.0(jiti@2.6.1)))(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(vue-eslint-parser@10.4.0(eslint@10.2.0(jiti@2.6.1))) + eslint-plugin-yml: 3.3.1(eslint@10.2.0(jiti@2.6.1)) + eslint-processor-vue-blocks: 
2.0.0(@vue/compiler-sfc@3.5.31)(eslint@10.2.0(jiti@2.6.1)) + globals: 17.5.0 local-pkg: 1.1.2 parse-gitignore: 2.0.0 toml-eslint-parser: 1.0.3 @@ -8896,20 +9166,21 @@ snapshots: '@babel/helper-string-parser': 7.27.1 '@babel/helper-validator-identifier': 7.28.5 - '@base-ui/react@1.3.0(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': + '@base-ui/react@1.4.0(@date-fns/tz@1.4.1)(@types/react@19.2.14)(date-fns@4.1.0)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: '@babel/runtime': 7.29.2 - '@base-ui/utils': 0.2.6(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@base-ui/utils': 0.2.7(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@date-fns/tz': 1.4.1 '@floating-ui/react-dom': 2.1.8(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@floating-ui/utils': 0.2.11 + date-fns: 4.1.0 react: 19.2.5 react-dom: 19.2.5(react@19.2.5) - tabbable: 6.4.0 use-sync-external-store: 1.6.0(react@19.2.5) optionalDependencies: '@types/react': 19.2.14 - '@base-ui/utils@0.2.6(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': + '@base-ui/utils@0.2.7(@types/react@19.2.14)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: '@babel/runtime': 7.29.2 '@floating-ui/utils': 0.2.11 @@ -8941,7 +9212,7 @@ snapshots: '@chevrotain/utils@11.1.2': {} - '@chromatic-com/storybook@5.1.1(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))': + '@chromatic-com/storybook@5.1.2(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))': dependencies: '@neoconfetti/react': 1.0.0 chromatic: 13.3.5 @@ -9027,18 +9298,18 @@ snapshots: dependencies: regexp-match-indices: 1.0.2 - '@cucumber/cucumber@12.7.0': + '@cucumber/cucumber@12.8.0': dependencies: '@cucumber/ci-environment': 13.0.0 '@cucumber/cucumber-expressions': 19.0.0 '@cucumber/gherkin': 38.0.0 - '@cucumber/gherkin-streams': 6.0.0(@cucumber/gherkin@38.0.0)(@cucumber/message-streams@4.0.1(@cucumber/messages@32.0.1))(@cucumber/messages@32.0.1) + '@cucumber/gherkin-streams': 6.0.0(@cucumber/gherkin@38.0.0)(@cucumber/message-streams@4.1.1(@cucumber/messages@32.2.0))(@cucumber/messages@32.2.0) '@cucumber/gherkin-utils': 11.0.0 - '@cucumber/html-formatter': 23.0.0(@cucumber/messages@32.0.1) - '@cucumber/junit-xml-formatter': 0.9.0(@cucumber/messages@32.0.1) - '@cucumber/message-streams': 4.0.1(@cucumber/messages@32.0.1) - '@cucumber/messages': 32.0.1 - '@cucumber/pretty-formatter': 1.0.1(@cucumber/cucumber@12.7.0)(@cucumber/messages@32.0.1) + '@cucumber/html-formatter': 23.0.0(@cucumber/messages@32.2.0) + '@cucumber/junit-xml-formatter': 0.13.2(@cucumber/messages@32.2.0) + '@cucumber/message-streams': 4.1.1(@cucumber/messages@32.2.0) + '@cucumber/messages': 32.2.0 + '@cucumber/pretty-formatter': 1.0.1(@cucumber/cucumber@12.8.0)(@cucumber/messages@32.2.0) '@cucumber/tag-expressions': 9.1.0 assertion-error-formatter: 3.0.0 capital-case: 1.0.4 @@ -9057,7 +9328,6 @@ snapshots: lodash.merge: 4.6.2 lodash.mergewith: 4.6.2 luxon: 3.7.2 - mime: 3.0.0 mkdirp: 3.0.1 mz: 2.7.0 progress: 2.0.3 @@ -9070,79 +9340,82 @@ snapshots: yaml: 2.8.3 yup: 1.7.1 - '@cucumber/gherkin-streams@6.0.0(@cucumber/gherkin@38.0.0)(@cucumber/message-streams@4.0.1(@cucumber/messages@32.0.1))(@cucumber/messages@32.0.1)': + '@cucumber/gherkin-streams@6.0.0(@cucumber/gherkin@38.0.0)(@cucumber/message-streams@4.1.1(@cucumber/messages@32.2.0))(@cucumber/messages@32.2.0)': dependencies: '@cucumber/gherkin': 38.0.0 - '@cucumber/message-streams': 
4.0.1(@cucumber/messages@32.0.1) - '@cucumber/messages': 32.0.1 + '@cucumber/message-streams': 4.1.1(@cucumber/messages@32.2.0) + '@cucumber/messages': 32.2.0 commander: 14.0.0 source-map-support: 0.5.21 '@cucumber/gherkin-utils@11.0.0': dependencies: '@cucumber/gherkin': 38.0.0 - '@cucumber/messages': 32.0.1 + '@cucumber/messages': 32.2.0 '@teppeis/multimaps': 3.0.0 commander: 14.0.2 source-map-support: 0.5.21 '@cucumber/gherkin@38.0.0': dependencies: - '@cucumber/messages': 32.0.1 + '@cucumber/messages': 32.2.0 - '@cucumber/html-formatter@23.0.0(@cucumber/messages@32.0.1)': + '@cucumber/html-formatter@23.0.0(@cucumber/messages@32.2.0)': dependencies: - '@cucumber/messages': 32.0.1 + '@cucumber/messages': 32.2.0 - '@cucumber/junit-xml-formatter@0.9.0(@cucumber/messages@32.0.1)': + '@cucumber/junit-xml-formatter@0.13.2(@cucumber/messages@32.2.0)': dependencies: - '@cucumber/messages': 32.0.1 - '@cucumber/query': 14.7.0(@cucumber/messages@32.0.1) + '@cucumber/messages': 32.2.0 + '@cucumber/query': 14.7.0(@cucumber/messages@32.2.0) '@teppeis/multimaps': 3.0.0 luxon: 3.7.2 xmlbuilder: 15.1.1 - '@cucumber/message-streams@4.0.1(@cucumber/messages@32.0.1)': + '@cucumber/message-streams@4.1.1(@cucumber/messages@32.2.0)': dependencies: - '@cucumber/messages': 32.0.1 + '@cucumber/messages': 32.2.0 + mime: 3.0.0 - '@cucumber/messages@32.0.1': + '@cucumber/messages@32.2.0': dependencies: class-transformer: 0.5.1 reflect-metadata: 0.2.2 - '@cucumber/pretty-formatter@1.0.1(@cucumber/cucumber@12.7.0)(@cucumber/messages@32.0.1)': + '@cucumber/pretty-formatter@1.0.1(@cucumber/cucumber@12.8.0)(@cucumber/messages@32.2.0)': dependencies: - '@cucumber/cucumber': 12.7.0 - '@cucumber/messages': 32.0.1 + '@cucumber/cucumber': 12.8.0 + '@cucumber/messages': 32.2.0 ansi-styles: 5.2.0 cli-table3: 0.6.5 figures: 3.2.0 ts-dedent: 2.2.0 - '@cucumber/query@14.7.0(@cucumber/messages@32.0.1)': + '@cucumber/query@14.7.0(@cucumber/messages@32.2.0)': dependencies: - '@cucumber/messages': 32.0.1 + '@cucumber/messages': 32.2.0 '@teppeis/multimaps': 3.0.0 lodash.sortby: 4.7.0 '@cucumber/tag-expressions@9.1.0': {} - '@e18e/eslint-plugin@0.3.0(eslint@10.2.0(jiti@2.6.1))(oxlint@1.58.0(oxlint-tsgolint@0.20.0))': + '@date-fns/tz@1.4.1': {} + + '@e18e/eslint-plugin@0.3.0(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))': dependencies: eslint-plugin-depend: 1.5.0(eslint@10.2.0(jiti@2.6.1)) optionalDependencies: eslint: 10.2.0(jiti@2.6.1) - oxlint: 1.58.0(oxlint-tsgolint@0.20.0) + oxlint: 1.60.0(oxlint-tsgolint@0.20.0) '@egoist/tailwindcss-icons@1.9.2(tailwindcss@4.2.2)': dependencies: '@iconify/utils': 3.1.0 tailwindcss: 4.2.2 - '@emnapi/core@1.9.1': + '@emnapi/core@1.9.2': dependencies: - '@emnapi/wasi-threads': 1.2.0 + '@emnapi/wasi-threads': 1.2.1 tslib: 2.8.1 optional: true @@ -9151,7 +9424,12 @@ snapshots: tslib: 2.8.1 optional: true - '@emnapi/wasi-threads@1.2.0': + '@emnapi/runtime@1.9.2': + dependencies: + tslib: 2.8.1 + optional: true + + '@emnapi/wasi-threads@1.2.1': dependencies: tslib: 2.8.1 optional: true @@ -9161,7 +9439,7 @@ snapshots: '@es-joy/jsdoccomment@0.84.0': dependencies: '@types/estree': 1.0.8 - '@typescript-eslint/types': 8.58.1 + '@typescript-eslint/types': 8.58.2 comment-parser: 1.4.5 esquery: 1.7.0 jsdoc-type-pratt-parser: 7.1.1 @@ -9169,7 +9447,7 @@ snapshots: '@es-joy/jsdoccomment@0.86.0': dependencies: '@types/estree': 1.0.8 - '@typescript-eslint/types': 8.58.1 + '@typescript-eslint/types': 8.58.2 comment-parser: 1.4.6 esquery: 1.7.0 jsdoc-type-pratt-parser: 7.2.0 @@ -9274,9 +9552,9 @@ 
snapshots: '@eslint-react/ast@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/typescript-estree': 8.58.1(typescript@6.0.2) - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/typescript-estree': 8.58.2(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) string-ts: 2.3.1 typescript: 6.0.2 @@ -9288,9 +9566,9 @@ snapshots: '@eslint-react/ast': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/var': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) ts-pattern: 5.9.0 typescript: 6.0.2 @@ -9300,10 +9578,10 @@ snapshots: '@eslint-react/eslint-plugin@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/type-utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/type-utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) eslint-plugin-react-dom: 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint-plugin-react-naming-convention: 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) @@ -9317,7 +9595,7 @@ snapshots: '@eslint-react/shared@3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) ts-pattern: 5.9.0 typescript: 6.0.2 @@ -9329,9 +9607,9 @@ snapshots: dependencies: '@eslint-react/ast': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) ts-pattern: 5.9.0 typescript: 6.0.2 @@ -9382,9 +9660,9 @@ snapshots: dependencies: '@types/json-schema': 7.0.15 - '@eslint/css-tree@3.6.9': + '@eslint/css-tree@4.0.1': dependencies: - mdn-data: 2.23.0 + mdn-data: 2.27.1 source-map-js: 1.2.1 '@eslint/eslintrc@3.3.5': @@ -9494,11 +9772,11 @@ snapshots: '@floating-ui/utils@0.2.11': {} - '@formatjs/fast-memoize@3.1.1': {} + '@formatjs/fast-memoize@3.1.2': {} - '@formatjs/intl-localematcher@0.8.2': + '@formatjs/intl-localematcher@0.8.3': dependencies: - '@formatjs/fast-memoize': 3.1.1 + '@formatjs/fast-memoize': 3.1.2 
'@headlessui/react@2.2.10(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: @@ -9514,9 +9792,9 @@ snapshots: dependencies: react: 19.2.5 - '@hono/node-server@1.19.13(hono@4.12.12)': + '@hono/node-server@1.19.14(hono@4.12.14)': dependencies: - hono: 4.12.12 + hono: 4.12.14 '@humanfs/core@0.19.1': {} @@ -9672,11 +9950,11 @@ snapshots: dependencies: minipass: 7.1.3 - '@joshwooding/vite-plugin-react-docgen-typescript@0.7.0(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)': + '@joshwooding/vite-plugin-react-docgen-typescript@0.7.0(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)': dependencies: glob: 13.0.6 react-docgen-typescript: 2.4.0(typescript@6.0.2) - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' optionalDependencies: typescript: 6.0.2 @@ -9704,169 +9982,169 @@ snapshots: '@jridgewell/resolve-uri': 3.1.2 '@jridgewell/sourcemap-codec': 1.5.5 - '@lexical/clipboard@0.42.0': + '@lexical/clipboard@0.43.0': dependencies: - '@lexical/html': 0.42.0 - '@lexical/list': 0.42.0 - '@lexical/selection': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/html': 0.43.0 + '@lexical/list': 0.43.0 + '@lexical/selection': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/code-core@0.42.0': + '@lexical/code-core@0.43.0': dependencies: - lexical: 0.42.0 + lexical: 0.43.0 - '@lexical/devtools-core@0.42.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': + '@lexical/devtools-core@0.43.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@lexical/html': 0.42.0 - '@lexical/link': 0.42.0 - '@lexical/mark': 0.42.0 - '@lexical/table': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/html': 0.43.0 + '@lexical/link': 0.43.0 + '@lexical/mark': 0.43.0 + '@lexical/table': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 react: 19.2.5 react-dom: 19.2.5(react@19.2.5) - '@lexical/dragon@0.42.0': + '@lexical/dragon@0.43.0': dependencies: - '@lexical/extension': 0.42.0 - lexical: 0.42.0 + '@lexical/extension': 0.43.0 + lexical: 0.43.0 - '@lexical/extension@0.42.0': + '@lexical/extension@0.43.0': dependencies: - '@lexical/utils': 0.42.0 - '@preact/signals-core': 1.14.0 - lexical: 0.42.0 + '@lexical/utils': 0.43.0 + '@preact/signals-core': 1.14.1 + lexical: 0.43.0 - '@lexical/hashtag@0.42.0': + '@lexical/hashtag@0.43.0': dependencies: - '@lexical/text': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/text': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/history@0.42.0': + '@lexical/history@0.43.0': dependencies: - '@lexical/extension': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/extension': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/html@0.42.0': + '@lexical/html@0.43.0': dependencies: - '@lexical/selection': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/selection': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/link@0.42.0': + '@lexical/link@0.43.0': dependencies: - '@lexical/extension': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/extension': 
0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/list@0.42.0': + '@lexical/list@0.43.0': dependencies: - '@lexical/extension': 0.42.0 - '@lexical/selection': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/extension': 0.43.0 + '@lexical/selection': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/mark@0.42.0': + '@lexical/mark@0.43.0': dependencies: - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/markdown@0.42.0': + '@lexical/markdown@0.43.0': dependencies: - '@lexical/code-core': 0.42.0 - '@lexical/link': 0.42.0 - '@lexical/list': 0.42.0 - '@lexical/rich-text': 0.42.0 - '@lexical/text': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/code-core': 0.43.0 + '@lexical/link': 0.43.0 + '@lexical/list': 0.43.0 + '@lexical/rich-text': 0.43.0 + '@lexical/text': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/offset@0.42.0': + '@lexical/offset@0.43.0': dependencies: - lexical: 0.42.0 + lexical: 0.43.0 - '@lexical/overflow@0.42.0': + '@lexical/overflow@0.43.0': dependencies: - lexical: 0.42.0 + lexical: 0.43.0 - '@lexical/plain-text@0.42.0': + '@lexical/plain-text@0.43.0': dependencies: - '@lexical/clipboard': 0.42.0 - '@lexical/dragon': 0.42.0 - '@lexical/selection': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/clipboard': 0.43.0 + '@lexical/dragon': 0.43.0 + '@lexical/selection': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/react@0.42.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(yjs@13.6.30)': + '@lexical/react@0.43.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(yjs@13.6.30)': dependencies: '@floating-ui/react': 0.27.19(react-dom@19.2.5(react@19.2.5))(react@19.2.5) - '@lexical/devtools-core': 0.42.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5) - '@lexical/dragon': 0.42.0 - '@lexical/extension': 0.42.0 - '@lexical/hashtag': 0.42.0 - '@lexical/history': 0.42.0 - '@lexical/link': 0.42.0 - '@lexical/list': 0.42.0 - '@lexical/mark': 0.42.0 - '@lexical/markdown': 0.42.0 - '@lexical/overflow': 0.42.0 - '@lexical/plain-text': 0.42.0 - '@lexical/rich-text': 0.42.0 - '@lexical/table': 0.42.0 - '@lexical/text': 0.42.0 - '@lexical/utils': 0.42.0 - '@lexical/yjs': 0.42.0(yjs@13.6.30) - lexical: 0.42.0 + '@lexical/devtools-core': 0.43.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5) + '@lexical/dragon': 0.43.0 + '@lexical/extension': 0.43.0 + '@lexical/hashtag': 0.43.0 + '@lexical/history': 0.43.0 + '@lexical/link': 0.43.0 + '@lexical/list': 0.43.0 + '@lexical/mark': 0.43.0 + '@lexical/markdown': 0.43.0 + '@lexical/overflow': 0.43.0 + '@lexical/plain-text': 0.43.0 + '@lexical/rich-text': 0.43.0 + '@lexical/table': 0.43.0 + '@lexical/text': 0.43.0 + '@lexical/utils': 0.43.0 + '@lexical/yjs': 0.43.0(yjs@13.6.30) + lexical: 0.43.0 react: 19.2.5 react-dom: 19.2.5(react@19.2.5) react-error-boundary: 6.1.1(react@19.2.5) transitivePeerDependencies: - yjs - '@lexical/rich-text@0.42.0': + '@lexical/rich-text@0.43.0': dependencies: - '@lexical/clipboard': 0.42.0 - '@lexical/dragon': 0.42.0 - '@lexical/selection': 0.42.0 - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/clipboard': 0.43.0 + '@lexical/dragon': 0.43.0 + '@lexical/selection': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/selection@0.42.0': + '@lexical/selection@0.43.0': dependencies: - lexical: 0.42.0 + lexical: 0.43.0 - '@lexical/table@0.42.0': + '@lexical/table@0.43.0': dependencies: - '@lexical/clipboard': 0.42.0 - '@lexical/extension': 0.42.0 - 
'@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/clipboard': 0.43.0 + '@lexical/extension': 0.43.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - '@lexical/text@0.42.0': + '@lexical/text@0.43.0': dependencies: - lexical: 0.42.0 + lexical: 0.43.0 - '@lexical/utils@0.42.0': + '@lexical/utils@0.43.0': dependencies: - '@lexical/selection': 0.42.0 - lexical: 0.42.0 + '@lexical/selection': 0.43.0 + lexical: 0.43.0 - '@lexical/yjs@0.42.0(yjs@13.6.30)': + '@lexical/yjs@0.43.0(yjs@13.6.30)': dependencies: - '@lexical/offset': 0.42.0 - '@lexical/selection': 0.42.0 - lexical: 0.42.0 + '@lexical/offset': 0.43.0 + '@lexical/selection': 0.43.0 + lexical: 0.43.0 yjs: 13.6.30 - '@mdx-js/loader@3.1.1(webpack@5.105.4(uglify-js@3.19.3))': + '@mdx-js/loader@3.1.1(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))': dependencies: '@mdx-js/mdx': 3.1.1 source-map: 0.7.6 optionalDependencies: - webpack: 5.105.4(uglify-js@3.19.3) + webpack: 5.105.4(esbuild@0.27.2)(uglify-js@3.19.3) transitivePeerDependencies: - supports-color @@ -9931,10 +10209,10 @@ snapshots: react: 19.2.5 react-dom: 19.2.5(react@19.2.5) - '@napi-rs/wasm-runtime@1.1.2(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)': + '@napi-rs/wasm-runtime@1.1.2(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)': dependencies: - '@emnapi/core': 1.9.1 - '@emnapi/runtime': 1.9.1 + '@emnapi/core': 1.9.2 + '@emnapi/runtime': 1.9.2 '@tybys/wasm-util': 0.10.1 optional: true @@ -9948,11 +10226,11 @@ snapshots: dependencies: fast-glob: 3.3.1 - '@next/mdx@16.2.3(@mdx-js/loader@3.1.1(webpack@5.105.4(uglify-js@3.19.3)))(@mdx-js/react@3.1.1(@types/react@19.2.14)(react@19.2.5))': + '@next/mdx@16.2.3(@mdx-js/loader@3.1.1(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)))(@mdx-js/react@3.1.1(@types/react@19.2.14)(react@19.2.5))': dependencies: source-map: 0.7.6 optionalDependencies: - '@mdx-js/loader': 3.1.1(webpack@5.105.4(uglify-js@3.19.3)) + '@mdx-js/loader': 3.1.1(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) '@mdx-js/react': 3.1.1(@types/react@19.2.14)(react@19.2.5) '@next/swc-darwin-arm64@16.2.3': @@ -9997,63 +10275,63 @@ snapshots: '@nolyfill/side-channel@1.0.44': {} - '@orpc/client@1.13.13': + '@orpc/client@1.13.14': dependencies: - '@orpc/shared': 1.13.13 - '@orpc/standard-server': 1.13.13 - '@orpc/standard-server-fetch': 1.13.13 - '@orpc/standard-server-peer': 1.13.13 + '@orpc/shared': 1.13.14 + '@orpc/standard-server': 1.13.14 + '@orpc/standard-server-fetch': 1.13.14 + '@orpc/standard-server-peer': 1.13.14 transitivePeerDependencies: - '@opentelemetry/api' - '@orpc/contract@1.13.13': + '@orpc/contract@1.13.14': dependencies: - '@orpc/client': 1.13.13 - '@orpc/shared': 1.13.13 + '@orpc/client': 1.13.14 + '@orpc/shared': 1.13.14 '@standard-schema/spec': 1.1.0 openapi-types: 12.1.3 transitivePeerDependencies: - '@opentelemetry/api' - '@orpc/openapi-client@1.13.13': + '@orpc/openapi-client@1.13.14': dependencies: - '@orpc/client': 1.13.13 - '@orpc/contract': 1.13.13 - '@orpc/shared': 1.13.13 - '@orpc/standard-server': 1.13.13 + '@orpc/client': 1.13.14 + '@orpc/contract': 1.13.14 + '@orpc/shared': 1.13.14 + '@orpc/standard-server': 1.13.14 transitivePeerDependencies: - '@opentelemetry/api' - '@orpc/shared@1.13.13': + '@orpc/shared@1.13.14': dependencies: radash: 12.1.1 type-fest: 5.5.0 - '@orpc/standard-server-fetch@1.13.13': + '@orpc/standard-server-fetch@1.13.14': dependencies: - '@orpc/shared': 1.13.13 - '@orpc/standard-server': 1.13.13 + '@orpc/shared': 1.13.14 + '@orpc/standard-server': 1.13.14 transitivePeerDependencies: - '@opentelemetry/api' - 
'@orpc/standard-server-peer@1.13.13': + '@orpc/standard-server-peer@1.13.14': dependencies: - '@orpc/shared': 1.13.13 - '@orpc/standard-server': 1.13.13 + '@orpc/shared': 1.13.14 + '@orpc/standard-server': 1.13.14 transitivePeerDependencies: - '@opentelemetry/api' - '@orpc/standard-server@1.13.13': + '@orpc/standard-server@1.13.14': dependencies: - '@orpc/shared': 1.13.13 + '@orpc/shared': 1.13.14 transitivePeerDependencies: - '@opentelemetry/api' - '@orpc/tanstack-query@1.13.13(@orpc/client@1.13.13)(@tanstack/query-core@5.96.2)': + '@orpc/tanstack-query@1.13.14(@orpc/client@1.13.14)(@tanstack/query-core@5.99.0)': dependencies: - '@orpc/client': 1.13.13 - '@orpc/shared': 1.13.13 - '@tanstack/query-core': 5.96.2 + '@orpc/client': 1.13.14 + '@orpc/shared': 1.13.14 + '@tanstack/query-core': 5.99.0 transitivePeerDependencies: - '@opentelemetry/api' @@ -10107,9 +10385,9 @@ snapshots: '@oxc-parser/binding-openharmony-arm64@0.121.0': optional: true - '@oxc-parser/binding-wasm32-wasi@0.121.0(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)': + '@oxc-parser/binding-wasm32-wasi@0.121.0(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)': dependencies: - '@napi-rs/wasm-runtime': 1.1.2(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) + '@napi-rs/wasm-runtime': 1.1.2(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) transitivePeerDependencies: - '@emnapi/core' - '@emnapi/runtime' @@ -10124,11 +10402,11 @@ snapshots: '@oxc-parser/binding-win32-x64-msvc@0.121.0': optional: true - '@oxc-project/runtime@0.123.0': {} + '@oxc-project/runtime@0.124.0': {} '@oxc-project/types@0.121.0': {} - '@oxc-project/types@0.123.0': {} + '@oxc-project/types@0.124.0': {} '@oxc-resolver/binding-android-arm-eabi@11.19.1': optional: true @@ -10178,9 +10456,9 @@ snapshots: '@oxc-resolver/binding-openharmony-arm64@11.19.1': optional: true - '@oxc-resolver/binding-wasm32-wasi@11.19.1(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1)': + '@oxc-resolver/binding-wasm32-wasi@11.19.1(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2)': dependencies: - '@napi-rs/wasm-runtime': 1.1.2(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) + '@napi-rs/wasm-runtime': 1.1.2(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) transitivePeerDependencies: - '@emnapi/core' - '@emnapi/runtime' @@ -10195,61 +10473,61 @@ snapshots: '@oxc-resolver/binding-win32-x64-msvc@11.19.1': optional: true - '@oxfmt/binding-android-arm-eabi@0.43.0': + '@oxfmt/binding-android-arm-eabi@0.45.0': optional: true - '@oxfmt/binding-android-arm64@0.43.0': + '@oxfmt/binding-android-arm64@0.45.0': optional: true - '@oxfmt/binding-darwin-arm64@0.43.0': + '@oxfmt/binding-darwin-arm64@0.45.0': optional: true - '@oxfmt/binding-darwin-x64@0.43.0': + '@oxfmt/binding-darwin-x64@0.45.0': optional: true - '@oxfmt/binding-freebsd-x64@0.43.0': + '@oxfmt/binding-freebsd-x64@0.45.0': optional: true - '@oxfmt/binding-linux-arm-gnueabihf@0.43.0': + '@oxfmt/binding-linux-arm-gnueabihf@0.45.0': optional: true - '@oxfmt/binding-linux-arm-musleabihf@0.43.0': + '@oxfmt/binding-linux-arm-musleabihf@0.45.0': optional: true - '@oxfmt/binding-linux-arm64-gnu@0.43.0': + '@oxfmt/binding-linux-arm64-gnu@0.45.0': optional: true - '@oxfmt/binding-linux-arm64-musl@0.43.0': + '@oxfmt/binding-linux-arm64-musl@0.45.0': optional: true - '@oxfmt/binding-linux-ppc64-gnu@0.43.0': + '@oxfmt/binding-linux-ppc64-gnu@0.45.0': optional: true - '@oxfmt/binding-linux-riscv64-gnu@0.43.0': + '@oxfmt/binding-linux-riscv64-gnu@0.45.0': optional: true - '@oxfmt/binding-linux-riscv64-musl@0.43.0': + '@oxfmt/binding-linux-riscv64-musl@0.45.0': optional: true - 
'@oxfmt/binding-linux-s390x-gnu@0.43.0': + '@oxfmt/binding-linux-s390x-gnu@0.45.0': optional: true - '@oxfmt/binding-linux-x64-gnu@0.43.0': + '@oxfmt/binding-linux-x64-gnu@0.45.0': optional: true - '@oxfmt/binding-linux-x64-musl@0.43.0': + '@oxfmt/binding-linux-x64-musl@0.45.0': optional: true - '@oxfmt/binding-openharmony-arm64@0.43.0': + '@oxfmt/binding-openharmony-arm64@0.45.0': optional: true - '@oxfmt/binding-win32-arm64-msvc@0.43.0': + '@oxfmt/binding-win32-arm64-msvc@0.45.0': optional: true - '@oxfmt/binding-win32-ia32-msvc@0.43.0': + '@oxfmt/binding-win32-ia32-msvc@0.45.0': optional: true - '@oxfmt/binding-win32-x64-msvc@0.43.0': + '@oxfmt/binding-win32-x64-msvc@0.45.0': optional: true '@oxlint-tsgolint/darwin-arm64@0.20.0': @@ -10270,61 +10548,61 @@ snapshots: '@oxlint-tsgolint/win32-x64@0.20.0': optional: true - '@oxlint/binding-android-arm-eabi@1.58.0': + '@oxlint/binding-android-arm-eabi@1.60.0': optional: true - '@oxlint/binding-android-arm64@1.58.0': + '@oxlint/binding-android-arm64@1.60.0': optional: true - '@oxlint/binding-darwin-arm64@1.58.0': + '@oxlint/binding-darwin-arm64@1.60.0': optional: true - '@oxlint/binding-darwin-x64@1.58.0': + '@oxlint/binding-darwin-x64@1.60.0': optional: true - '@oxlint/binding-freebsd-x64@1.58.0': + '@oxlint/binding-freebsd-x64@1.60.0': optional: true - '@oxlint/binding-linux-arm-gnueabihf@1.58.0': + '@oxlint/binding-linux-arm-gnueabihf@1.60.0': optional: true - '@oxlint/binding-linux-arm-musleabihf@1.58.0': + '@oxlint/binding-linux-arm-musleabihf@1.60.0': optional: true - '@oxlint/binding-linux-arm64-gnu@1.58.0': + '@oxlint/binding-linux-arm64-gnu@1.60.0': optional: true - '@oxlint/binding-linux-arm64-musl@1.58.0': + '@oxlint/binding-linux-arm64-musl@1.60.0': optional: true - '@oxlint/binding-linux-ppc64-gnu@1.58.0': + '@oxlint/binding-linux-ppc64-gnu@1.60.0': optional: true - '@oxlint/binding-linux-riscv64-gnu@1.58.0': + '@oxlint/binding-linux-riscv64-gnu@1.60.0': optional: true - '@oxlint/binding-linux-riscv64-musl@1.58.0': + '@oxlint/binding-linux-riscv64-musl@1.60.0': optional: true - '@oxlint/binding-linux-s390x-gnu@1.58.0': + '@oxlint/binding-linux-s390x-gnu@1.60.0': optional: true - '@oxlint/binding-linux-x64-gnu@1.58.0': + '@oxlint/binding-linux-x64-gnu@1.60.0': optional: true - '@oxlint/binding-linux-x64-musl@1.58.0': + '@oxlint/binding-linux-x64-musl@1.60.0': optional: true - '@oxlint/binding-openharmony-arm64@1.58.0': + '@oxlint/binding-openharmony-arm64@1.60.0': optional: true - '@oxlint/binding-win32-arm64-msvc@1.58.0': + '@oxlint/binding-win32-arm64-msvc@1.60.0': optional: true - '@oxlint/binding-win32-ia32-msvc@1.58.0': + '@oxlint/binding-win32-ia32-msvc@1.60.0': optional: true - '@oxlint/binding-win32-x64-msvc@1.58.0': + '@oxlint/binding-win32-x64-msvc@1.60.0': optional: true '@parcel/watcher-android-arm64@2.5.6': @@ -10396,7 +10674,7 @@ snapshots: '@polka/url@1.0.0-next.29': {} - '@preact/signals-core@1.14.0': {} + '@preact/signals-core@1.14.1': {} '@radix-ui/primitive@1.1.3': {} @@ -10692,7 +10970,7 @@ snapshots: '@rgrove/parse-xml@4.2.0': {} - '@rolldown/pluginutils@1.0.0-rc.13': {} + '@rolldown/pluginutils@1.0.0-rc.15': {} '@rolldown/pluginutils@1.0.0-rc.7': {} @@ -10786,38 +11064,38 @@ snapshots: '@rollup/rollup-win32-x64-msvc@4.59.0': optional: true - '@sentry-internal/browser-utils@10.47.0': + '@sentry-internal/browser-utils@10.48.0': dependencies: - '@sentry/core': 10.47.0 + '@sentry/core': 10.48.0 - '@sentry-internal/feedback@10.47.0': + '@sentry-internal/feedback@10.48.0': dependencies: - '@sentry/core': 10.47.0 + 
'@sentry/core': 10.48.0 - '@sentry-internal/replay-canvas@10.47.0': + '@sentry-internal/replay-canvas@10.48.0': dependencies: - '@sentry-internal/replay': 10.47.0 - '@sentry/core': 10.47.0 + '@sentry-internal/replay': 10.48.0 + '@sentry/core': 10.48.0 - '@sentry-internal/replay@10.47.0': + '@sentry-internal/replay@10.48.0': dependencies: - '@sentry-internal/browser-utils': 10.47.0 - '@sentry/core': 10.47.0 + '@sentry-internal/browser-utils': 10.48.0 + '@sentry/core': 10.48.0 - '@sentry/browser@10.47.0': + '@sentry/browser@10.48.0': dependencies: - '@sentry-internal/browser-utils': 10.47.0 - '@sentry-internal/feedback': 10.47.0 - '@sentry-internal/replay': 10.47.0 - '@sentry-internal/replay-canvas': 10.47.0 - '@sentry/core': 10.47.0 + '@sentry-internal/browser-utils': 10.48.0 + '@sentry-internal/feedback': 10.48.0 + '@sentry-internal/replay': 10.48.0 + '@sentry-internal/replay-canvas': 10.48.0 + '@sentry/core': 10.48.0 - '@sentry/core@10.47.0': {} + '@sentry/core@10.48.0': {} - '@sentry/react@10.47.0(react@19.2.5)': + '@sentry/react@10.48.0(react@19.2.5)': dependencies: - '@sentry/browser': 10.47.0 - '@sentry/core': 10.47.0 + '@sentry/browser': 10.48.0 + '@sentry/core': 10.48.0 react: 19.2.5 '@shikijs/core@4.0.2': @@ -10867,6 +11145,8 @@ snapshots: '@sindresorhus/base62@1.0.0': {} + '@socket.io/component-emitter@3.1.2': {} + '@solid-primitives/event-listener@2.4.5(solid-js@1.9.11)': dependencies: '@solid-primitives/utils': 6.4.0(solid-js@1.9.11) @@ -10905,10 +11185,10 @@ snapshots: '@standard-schema/spec@1.1.0': {} - '@storybook/addon-docs@10.3.5(@types/react@19.2.14)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(uglify-js@3.19.3))': + '@storybook/addon-docs@10.3.5(@types/react@19.2.14)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))': dependencies: '@mdx-js/react': 3.1.1(@types/react@19.2.14)(react@19.2.5) - '@storybook/csf-plugin': 10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(uglify-js@3.19.3)) + '@storybook/csf-plugin': 10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) '@storybook/icons': 2.0.1(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@storybook/react-dom-shim': 10.3.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)) react: 19.2.5 @@ -10938,25 +11218,26 @@ snapshots: storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) ts-dedent: 2.2.0 - 
'@storybook/builder-vite@10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(uglify-js@3.19.3))': + '@storybook/builder-vite@10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))': dependencies: - '@storybook/csf-plugin': 10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(uglify-js@3.19.3)) + '@storybook/csf-plugin': 10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) ts-dedent: 2.2.0 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' transitivePeerDependencies: - esbuild - rollup - webpack - '@storybook/csf-plugin@10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(uglify-js@3.19.3))': + '@storybook/csf-plugin@10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))': dependencies: storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) unplugin: 2.3.11 optionalDependencies: + esbuild: 0.27.2 rollup: 4.59.0 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - webpack: 5.105.4(uglify-js@3.19.3) + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + webpack: 5.105.4(esbuild@0.27.2)(uglify-js@3.19.3) '@storybook/global@5.0.0': {} @@ -10965,18 +11246,18 @@ snapshots: react: 19.2.5 react-dom: 19.2.5(react@19.2.5) - 
'@storybook/nextjs-vite@10.3.5(@babel/core@7.29.0)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2)(webpack@5.105.4(uglify-js@3.19.3))': + '@storybook/nextjs-vite@10.3.5(@babel/core@7.29.0)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))': dependencies: - '@storybook/builder-vite': 10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(uglify-js@3.19.3)) + '@storybook/builder-vite': 10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) '@storybook/react': 10.3.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2) - '@storybook/react-vite': 10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2)(webpack@5.105.4(uglify-js@3.19.3)) + '@storybook/react-vite': 10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) next: 16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0) react: 19.2.5 react-dom: 19.2.5(react@19.2.5) storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) styled-jsx: 5.1.6(@babel/core@7.29.0)(react@19.2.5) - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - vite-plugin-storybook-nextjs: 
3.2.4(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2) + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite-plugin-storybook-nextjs: 3.2.4(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2) optionalDependencies: typescript: 6.0.2 transitivePeerDependencies: @@ -10993,11 +11274,11 @@ snapshots: react-dom: 19.2.5(react@19.2.5) storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) - '@storybook/react-vite@10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2)(webpack@5.105.4(uglify-js@3.19.3))': + '@storybook/react-vite@10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3))': dependencies: - '@joshwooding/vite-plugin-react-docgen-typescript': 0.7.0(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2) + '@joshwooding/vite-plugin-react-docgen-typescript': 0.7.0(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2) '@rollup/pluginutils': 5.3.0(rollup@4.59.0) - '@storybook/builder-vite': 10.3.5(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(uglify-js@3.19.3)) + '@storybook/builder-vite': 10.3.5(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(rollup@4.59.0)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) '@storybook/react': 10.3.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2) empathic: 2.0.0 magic-string: 0.30.21 @@ -11007,7 +11288,7 @@ snapshots: resolve: 1.22.11 storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) 
tsconfig-paths: 4.2.0 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' transitivePeerDependencies: - esbuild - rollup @@ -11041,7 +11322,7 @@ snapshots: '@stylistic/eslint-plugin@5.10.0(eslint@10.2.0(jiti@2.6.1))': dependencies: '@eslint-community/eslint-utils': 4.9.1(eslint@10.2.0(jiti@2.6.1)) - '@typescript-eslint/types': 8.58.1 + '@typescript-eslint/types': 8.58.2 eslint: 10.2.0(jiti@2.6.1) eslint-visitor-keys: 4.2.1 espree: 10.4.0 @@ -11146,12 +11427,12 @@ snapshots: postcss-selector-parser: 6.0.10 tailwindcss: 4.2.2 - '@tailwindcss/vite@4.2.2(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))': + '@tailwindcss/vite@4.2.2(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))': dependencies: '@tailwindcss/node': 4.2.2 '@tailwindcss/oxide': 4.2.2 tailwindcss: 4.2.2 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' '@tanstack/devtools-client@0.0.6': dependencies: @@ -11197,26 +11478,26 @@ snapshots: - csstype - utf-8-validate - '@tanstack/eslint-plugin-query@5.96.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': + '@tanstack/eslint-plugin-query@5.99.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) optionalDependencies: typescript: 6.0.2 transitivePeerDependencies: - supports-color - '@tanstack/form-core@1.28.6': + '@tanstack/form-core@1.29.0': dependencies: '@tanstack/devtools-event-client': 0.4.3 '@tanstack/pacer-lite': 0.1.1 '@tanstack/store': 0.9.3 - '@tanstack/form-devtools@0.2.20(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.5)(solid-js@1.9.11)': + '@tanstack/form-devtools@0.2.21(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.5)(solid-js@1.9.11)': dependencies: '@tanstack/devtools-ui': 0.5.1(csstype@3.2.3)(solid-js@1.9.11) '@tanstack/devtools-utils': 0.4.0(@types/react@19.2.14)(react@19.2.5)(solid-js@1.9.11) - '@tanstack/form-core': 1.28.6 + '@tanstack/form-core': 1.29.0 clsx: 2.1.1 dayjs: 1.11.20 goober: 2.1.18(csstype@3.2.3) @@ -11230,9 +11511,9 @@ snapshots: '@tanstack/pacer-lite@0.1.1': {} - '@tanstack/query-core@5.96.2': {} + '@tanstack/query-core@5.99.0': {} - '@tanstack/query-devtools@5.96.2': {} + '@tanstack/query-devtools@5.99.0': {} '@tanstack/react-devtools@0.10.2(@types/react-dom@19.2.3(@types/react@19.2.14))(@types/react@19.2.14)(csstype@3.2.3)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(solid-js@1.9.11)': dependencies: @@ -11247,10 +11528,10 @@ snapshots: - solid-js - utf-8-validate - '@tanstack/react-form-devtools@0.2.20(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.5)(solid-js@1.9.11)': + '@tanstack/react-form-devtools@0.2.21(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.5)(solid-js@1.9.11)': dependencies: '@tanstack/devtools-utils': 
0.4.0(@types/react@19.2.14)(react@19.2.5)(solid-js@1.9.11) - '@tanstack/form-devtools': 0.2.20(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.5)(solid-js@1.9.11) + '@tanstack/form-devtools': 0.2.21(@types/react@19.2.14)(csstype@3.2.3)(react@19.2.5)(solid-js@1.9.11) react: 19.2.5 transitivePeerDependencies: - '@types/react' @@ -11259,23 +11540,23 @@ snapshots: - solid-js - vue - '@tanstack/react-form@1.28.6(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': + '@tanstack/react-form@1.29.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': dependencies: - '@tanstack/form-core': 1.28.6 + '@tanstack/form-core': 1.29.0 '@tanstack/react-store': 0.9.3(react-dom@19.2.5(react@19.2.5))(react@19.2.5) react: 19.2.5 transitivePeerDependencies: - react-dom - '@tanstack/react-query-devtools@5.96.2(@tanstack/react-query@5.96.2(react@19.2.5))(react@19.2.5)': + '@tanstack/react-query-devtools@5.99.0(@tanstack/react-query@5.99.0(react@19.2.5))(react@19.2.5)': dependencies: - '@tanstack/query-devtools': 5.96.2 - '@tanstack/react-query': 5.96.2(react@19.2.5) + '@tanstack/query-devtools': 5.99.0 + '@tanstack/react-query': 5.99.0(react@19.2.5) react: 19.2.5 - '@tanstack/react-query@5.96.2(react@19.2.5)': + '@tanstack/react-query@5.99.0(react@19.2.5)': dependencies: - '@tanstack/query-core': 5.96.2 + '@tanstack/query-core': 5.99.0 react: 19.2.5 '@tanstack/react-store@0.9.3(react-dom@19.2.5(react@19.2.5))(react@19.2.5)': @@ -11331,11 +11612,11 @@ snapshots: dependencies: '@testing-library/dom': 10.4.1 - '@tsslint/cli@3.0.2(@tsslint/compat-eslint@3.0.2(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2)': + '@tsslint/cli@3.0.3(@tsslint/compat-eslint@3.0.3(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2)': dependencies: '@clack/prompts': 0.8.2 - '@tsslint/config': 3.0.2(@tsslint/compat-eslint@3.0.2(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2) - '@tsslint/core': 3.0.2 + '@tsslint/config': 3.0.3(@tsslint/compat-eslint@3.0.3(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2) + '@tsslint/core': 3.0.3 '@volar/language-core': 2.4.28 '@volar/language-hub': 0.0.1 '@volar/typescript': 2.4.28 @@ -11345,32 +11626,32 @@ snapshots: - '@tsslint/compat-eslint' - tsl - '@tsslint/compat-eslint@3.0.2(jiti@2.6.1)(typescript@6.0.2)': + '@tsslint/compat-eslint@3.0.3(jiti@2.6.1)(typescript@6.0.2)': dependencies: - '@tsslint/types': 3.0.2 - '@typescript-eslint/parser': 8.58.1(eslint@9.27.0(jiti@2.6.1))(typescript@6.0.2) + '@tsslint/types': 3.0.3 + '@typescript-eslint/parser': 8.58.2(eslint@9.27.0(jiti@2.6.1))(typescript@6.0.2) eslint: 9.27.0(jiti@2.6.1) transitivePeerDependencies: - jiti - supports-color - typescript - '@tsslint/config@3.0.2(@tsslint/compat-eslint@3.0.2(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2)': + '@tsslint/config@3.0.3(@tsslint/compat-eslint@3.0.3(jiti@2.6.1)(typescript@6.0.2))(typescript@6.0.2)': dependencies: - '@tsslint/types': 3.0.2 + '@tsslint/types': 3.0.3 minimatch: 10.2.4 ts-api-utils: 2.5.0(typescript@6.0.2) optionalDependencies: - '@tsslint/compat-eslint': 3.0.2(jiti@2.6.1)(typescript@6.0.2) + '@tsslint/compat-eslint': 3.0.3(jiti@2.6.1)(typescript@6.0.2) transitivePeerDependencies: - typescript - '@tsslint/core@3.0.2': + '@tsslint/core@3.0.3': dependencies: - '@tsslint/types': 3.0.2 + '@tsslint/types': 3.0.3 minimatch: 10.2.4 - '@tsslint/types@3.0.2': {} + '@tsslint/types@3.0.3': {} '@tybys/wasm-util@0.10.1': dependencies: @@ -11574,15 +11855,15 @@ snapshots: '@types/negotiator@0.6.4': {} - '@types/node@25.5.2': + '@types/node@25.6.0': dependencies: - undici-types: 7.18.2 + undici-types: 7.19.2 
'@types/normalize-package-data@2.4.4': {} '@types/papaparse@5.5.2': dependencies: - '@types/node': 25.5.2 + '@types/node': 25.6.0 '@types/qs@6.15.0': {} @@ -11609,23 +11890,23 @@ snapshots: '@types/ws@8.18.1': dependencies: - '@types/node': 25.5.2 + '@types/node': 25.6.0 '@types/yauzl@2.10.3': dependencies: - '@types/node': 25.5.2 + '@types/node': 25.6.0 optional: true '@types/zen-observable@0.8.3': {} - '@typescript-eslint/eslint-plugin@8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': + '@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: '@eslint-community/regexpp': 4.12.2 - '@typescript-eslint/parser': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/type-utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/visitor-keys': 8.58.1 + '@typescript-eslint/parser': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/type-utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/visitor-keys': 8.58.2 eslint: 10.2.0(jiti@2.6.1) ignore: 7.0.5 natural-compare: 1.4.0 @@ -11646,24 +11927,24 @@ snapshots: transitivePeerDependencies: - supports-color - '@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': + '@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/typescript-estree': 8.58.1(typescript@6.0.2) - '@typescript-eslint/visitor-keys': 8.58.1 + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/typescript-estree': 8.58.2(typescript@6.0.2) + '@typescript-eslint/visitor-keys': 8.58.2 debug: 4.4.3(supports-color@8.1.1) eslint: 10.2.0(jiti@2.6.1) typescript: 6.0.2 transitivePeerDependencies: - supports-color - '@typescript-eslint/parser@8.58.1(eslint@9.27.0(jiti@2.6.1))(typescript@6.0.2)': + '@typescript-eslint/parser@8.58.2(eslint@9.27.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/typescript-estree': 8.58.1(typescript@6.0.2) - '@typescript-eslint/visitor-keys': 8.58.1 + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/typescript-estree': 8.58.2(typescript@6.0.2) + '@typescript-eslint/visitor-keys': 8.58.2 debug: 4.4.3(supports-color@8.1.1) eslint: 9.27.0(jiti@2.6.1) typescript: 6.0.2 @@ -11672,17 +11953,17 @@ snapshots: '@typescript-eslint/project-service@8.57.2(typescript@6.0.2)': dependencies: - '@typescript-eslint/tsconfig-utils': 8.58.1(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 + '@typescript-eslint/tsconfig-utils': 8.58.2(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 debug: 4.4.3(supports-color@8.1.1) typescript: 6.0.2 transitivePeerDependencies: - supports-color - '@typescript-eslint/project-service@8.58.1(typescript@6.0.2)': + '@typescript-eslint/project-service@8.58.2(typescript@6.0.2)': dependencies: - 
'@typescript-eslint/tsconfig-utils': 8.58.1(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 + '@typescript-eslint/tsconfig-utils': 8.58.2(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 debug: 4.4.3(supports-color@8.1.1) typescript: 6.0.2 transitivePeerDependencies: @@ -11707,24 +11988,24 @@ snapshots: '@typescript-eslint/types': 8.57.2 '@typescript-eslint/visitor-keys': 8.57.2 - '@typescript-eslint/scope-manager@8.58.1': + '@typescript-eslint/scope-manager@8.58.2': dependencies: - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/visitor-keys': 8.58.1 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/visitor-keys': 8.58.2 '@typescript-eslint/tsconfig-utils@8.57.2(typescript@6.0.2)': dependencies: typescript: 6.0.2 - '@typescript-eslint/tsconfig-utils@8.58.1(typescript@6.0.2)': + '@typescript-eslint/tsconfig-utils@8.58.2(typescript@6.0.2)': dependencies: typescript: 6.0.2 - '@typescript-eslint/type-utils@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': + '@typescript-eslint/type-utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/typescript-estree': 8.58.1(typescript@6.0.2) - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/typescript-estree': 8.58.2(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) debug: 4.4.3(supports-color@8.1.1) eslint: 10.2.0(jiti@2.6.1) ts-api-utils: 2.5.0(typescript@6.0.2) @@ -11734,7 +12015,7 @@ snapshots: '@typescript-eslint/types@8.57.2': {} - '@typescript-eslint/types@8.58.1': {} + '@typescript-eslint/types@8.58.2': {} '@typescript-eslint/typescript-estree@8.57.2(typescript@6.0.2)': dependencies: @@ -11745,18 +12026,18 @@ snapshots: debug: 4.4.3(supports-color@8.1.1) minimatch: 10.2.4 semver: 7.7.4 - tinyglobby: 0.2.15 + tinyglobby: 0.2.16 ts-api-utils: 2.5.0(typescript@6.0.2) typescript: 6.0.2 transitivePeerDependencies: - supports-color - '@typescript-eslint/typescript-estree@8.58.1(typescript@6.0.2)': + '@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2)': dependencies: - '@typescript-eslint/project-service': 8.58.1(typescript@6.0.2) - '@typescript-eslint/tsconfig-utils': 8.58.1(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/visitor-keys': 8.58.1 + '@typescript-eslint/project-service': 8.58.2(typescript@6.0.2) + '@typescript-eslint/tsconfig-utils': 8.58.2(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/visitor-keys': 8.58.2 debug: 4.4.3(supports-color@8.1.1) minimatch: 10.2.4 semver: 7.7.4 @@ -11777,12 +12058,12 @@ snapshots: transitivePeerDependencies: - supports-color - '@typescript-eslint/utils@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': + '@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': dependencies: '@eslint-community/eslint-utils': 4.9.1(eslint@10.2.0(jiti@2.6.1)) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/typescript-estree': 8.58.1(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/typescript-estree': 8.58.2(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) typescript: 6.0.2 transitivePeerDependencies: @@ -11793,41 +12074,41 @@ snapshots: '@typescript-eslint/types': 8.57.2 eslint-visitor-keys: 5.0.1 - '@typescript-eslint/visitor-keys@8.58.1': + 
'@typescript-eslint/visitor-keys@8.58.2': dependencies: - '@typescript-eslint/types': 8.58.1 + '@typescript-eslint/types': 8.58.2 eslint-visitor-keys: 5.0.1 - '@typescript/native-preview-darwin-arm64@7.0.0-dev.20260408.1': + '@typescript/native-preview-darwin-arm64@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview-darwin-x64@7.0.0-dev.20260408.1': + '@typescript/native-preview-darwin-x64@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview-linux-arm64@7.0.0-dev.20260408.1': + '@typescript/native-preview-linux-arm64@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview-linux-arm@7.0.0-dev.20260408.1': + '@typescript/native-preview-linux-arm@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview-linux-x64@7.0.0-dev.20260408.1': + '@typescript/native-preview-linux-x64@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview-win32-arm64@7.0.0-dev.20260408.1': + '@typescript/native-preview-win32-arm64@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview-win32-x64@7.0.0-dev.20260408.1': + '@typescript/native-preview-win32-x64@7.0.0-dev.20260413.1': optional: true - '@typescript/native-preview@7.0.0-dev.20260408.1': + '@typescript/native-preview@7.0.0-dev.20260413.1': optionalDependencies: - '@typescript/native-preview-darwin-arm64': 7.0.0-dev.20260408.1 - '@typescript/native-preview-darwin-x64': 7.0.0-dev.20260408.1 - '@typescript/native-preview-linux-arm': 7.0.0-dev.20260408.1 - '@typescript/native-preview-linux-arm64': 7.0.0-dev.20260408.1 - '@typescript/native-preview-linux-x64': 7.0.0-dev.20260408.1 - '@typescript/native-preview-win32-arm64': 7.0.0-dev.20260408.1 - '@typescript/native-preview-win32-x64': 7.0.0-dev.20260408.1 + '@typescript/native-preview-darwin-arm64': 7.0.0-dev.20260413.1 + '@typescript/native-preview-darwin-x64': 7.0.0-dev.20260413.1 + '@typescript/native-preview-linux-arm': 7.0.0-dev.20260413.1 + '@typescript/native-preview-linux-arm64': 7.0.0-dev.20260413.1 + '@typescript/native-preview-linux-x64': 7.0.0-dev.20260413.1 + '@typescript/native-preview-win32-arm64': 7.0.0-dev.20260413.1 + '@typescript/native-preview-win32-x64': 7.0.0-dev.20260413.1 '@ungap/structured-clone@1.3.0': {} @@ -11857,12 +12138,12 @@ snapshots: '@resvg/resvg-wasm': 2.4.0 satori: 0.16.0 - '@vitejs/devtools-kit@0.1.11(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0)': + '@vitejs/devtools-kit@0.1.11(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0)': dependencies: '@vitejs/devtools-rpc': 0.1.11(typescript@6.0.2)(ws@8.20.0) birpc: 4.0.0 ohash: 2.0.11 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' transitivePeerDependencies: - typescript - ws @@ -11879,14 +12160,14 @@ snapshots: transitivePeerDependencies: - typescript - '@vitejs/plugin-react@6.0.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))': + 
'@vitejs/plugin-react@6.0.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))': dependencies: '@rolldown/pluginutils': 1.0.0-rc.7 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - '@vitejs/plugin-rsc@0.5.23(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.5(react@19.2.5))(react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.5)': + '@vitejs/plugin-rsc@0.5.24(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.5(react@19.2.5))(react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)))(react@19.2.5)': dependencies: - '@rolldown/pluginutils': 1.0.0-rc.13 + '@rolldown/pluginutils': 1.0.0-rc.15 es-module-lexer: 2.0.0 estree-walker: 3.0.3 magic-string: 0.30.21 @@ -11895,15 +12176,15 @@ snapshots: srvx: 0.11.15 strip-literal: 3.1.0 turbo-stream: 3.2.0 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - vitefu: 1.1.3(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vitefu: 1.1.3(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) optionalDependencies: - react-server-dom-webpack: 19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(uglify-js@3.19.3)) + react-server-dom-webpack: 19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) - '@vitest/coverage-v8@4.1.3(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))': + '@vitest/coverage-v8@4.1.4(@voidzero-dev/vite-plus-test@0.1.18)': dependencies: '@bcoe/v8-coverage': 1.0.2 - '@vitest/utils': 4.1.3 + '@vitest/utils': 4.1.4 ast-v8-to-istanbul: 1.0.0 istanbul-lib-coverage: 3.2.2 istanbul-lib-report: 3.0.1 @@ -11912,17 +12193,44 @@ snapshots: obug: 2.1.1 std-env: 4.0.0 tinyrainbow: 3.1.0 - vitest: '@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vitest: 
'@voidzero-dev/vite-plus-test@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - '@vitest/eslint-plugin@1.6.14(@typescript-eslint/eslint-plugin@8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': + '@vitest/coverage-v8@4.1.4(vitest@4.1.4)': dependencies: - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@bcoe/v8-coverage': 1.0.2 + '@vitest/utils': 4.1.4 + ast-v8-to-istanbul: 1.0.0 + istanbul-lib-coverage: 3.2.2 + istanbul-lib-report: 3.0.1 + istanbul-reports: 3.2.0 + magicast: 0.5.2 + obug: 2.1.1 + std-env: 4.0.0 + tinyrainbow: 3.1.0 + vitest: 4.1.4(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.9.0) + optional: true + + '@vitest/eslint-plugin@1.6.15(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@voidzero-dev/vite-plus-test@0.1.18)(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)': + dependencies: + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) optionalDependencies: - '@typescript-eslint/eslint-plugin': 8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/eslint-plugin': 8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) typescript: 6.0.2 - vitest: '@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vitest: '@voidzero-dev/vite-plus-test@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + transitivePeerDependencies: + - supports-color + + '@vitest/eslint-plugin@1.6.15(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2)(vitest@4.1.4)': + dependencies: + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + eslint: 
10.2.0(jiti@2.6.1) + optionalDependencies: + '@typescript-eslint/eslint-plugin': 8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + typescript: 6.0.2 + vitest: 4.1.4(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.9.0) transitivePeerDependencies: - supports-color @@ -11934,38 +12242,75 @@ snapshots: chai: 5.3.3 tinyrainbow: 2.0.0 + '@vitest/expect@4.1.4': + dependencies: + '@standard-schema/spec': 1.1.0 + '@types/chai': 5.2.3 + '@vitest/spy': 4.1.4 + '@vitest/utils': 4.1.4 + chai: 6.2.2 + tinyrainbow: 3.1.0 + optional: true + + '@vitest/mocker@4.1.4(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))': + dependencies: + '@vitest/spy': 4.1.4 + estree-walker: 3.0.3 + magic-string: 0.30.21 + optionalDependencies: + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + optional: true + '@vitest/pretty-format@3.2.4': dependencies: tinyrainbow: 2.0.0 - '@vitest/pretty-format@4.1.3': + '@vitest/pretty-format@4.1.4': dependencies: tinyrainbow: 3.1.0 + '@vitest/runner@4.1.4': + dependencies: + '@vitest/utils': 4.1.4 + pathe: 2.0.3 + optional: true + + '@vitest/snapshot@4.1.4': + dependencies: + '@vitest/pretty-format': 4.1.4 + '@vitest/utils': 4.1.4 + magic-string: 0.30.21 + pathe: 2.0.3 + optional: true + '@vitest/spy@3.2.4': dependencies: tinyspy: 4.0.4 + '@vitest/spy@4.1.4': + optional: true + '@vitest/utils@3.2.4': dependencies: '@vitest/pretty-format': 3.2.4 loupe: 3.2.1 tinyrainbow: 2.0.0 - '@vitest/utils@4.1.3': + '@vitest/utils@4.1.4': dependencies: - '@vitest/pretty-format': 4.1.3 + '@vitest/pretty-format': 4.1.4 convert-source-map: 2.0.0 tinyrainbow: 3.1.0 - '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)': + '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)': dependencies: - '@oxc-project/runtime': 0.123.0 - '@oxc-project/types': 0.123.0 + '@oxc-project/runtime': 0.124.0 + '@oxc-project/types': 0.124.0 lightningcss: 1.32.0 postcss: 8.5.9 optionalDependencies: - '@types/node': 25.5.2 + '@types/node': 25.6.0 + esbuild: 0.27.2 fsevents: 2.3.3 jiti: 2.6.1 sass: 1.98.0 @@ -11974,29 +12319,29 @@ snapshots: typescript: 6.0.2 yaml: 2.8.3 - '@voidzero-dev/vite-plus-darwin-arm64@0.1.16': + '@voidzero-dev/vite-plus-darwin-arm64@0.1.18': optional: true - '@voidzero-dev/vite-plus-darwin-x64@0.1.16': + '@voidzero-dev/vite-plus-darwin-x64@0.1.18': optional: true - '@voidzero-dev/vite-plus-linux-arm64-gnu@0.1.16': + '@voidzero-dev/vite-plus-linux-arm64-gnu@0.1.18': optional: true - '@voidzero-dev/vite-plus-linux-arm64-musl@0.1.16': + '@voidzero-dev/vite-plus-linux-arm64-musl@0.1.18': optional: true - '@voidzero-dev/vite-plus-linux-x64-gnu@0.1.16': + '@voidzero-dev/vite-plus-linux-x64-gnu@0.1.18': optional: true - '@voidzero-dev/vite-plus-linux-x64-musl@0.1.16': + '@voidzero-dev/vite-plus-linux-x64-musl@0.1.18': optional: true - 
'@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)': + '@voidzero-dev/vite-plus-test@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)': dependencies: '@standard-schema/spec': 1.1.0 '@types/chai': 5.2.3 - '@voidzero-dev/vite-plus-core': 0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + '@voidzero-dev/vite-plus-core': 0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) es-module-lexer: 1.7.0 obug: 2.1.1 pixelmatch: 7.1.0 @@ -12005,12 +12350,13 @@ snapshots: std-env: 4.0.0 tinybench: 2.9.0 tinyexec: 1.0.4 - tinyglobby: 0.2.15 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + tinyglobby: 0.2.16 + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' ws: 8.20.0 optionalDependencies: - '@types/node': 25.5.2 - happy-dom: 20.8.9 + '@types/node': 25.6.0 + '@vitest/coverage-v8': 4.1.4(@voidzero-dev/vite-plus-test@0.1.18) + happy-dom: 20.9.0 transitivePeerDependencies: - '@arethetypeswrong/core' - '@tsdown/css' @@ -12032,10 +12378,10 @@ snapshots: - utf-8-validate - yaml - '@voidzero-dev/vite-plus-win32-arm64-msvc@0.1.16': + '@voidzero-dev/vite-plus-win32-arm64-msvc@0.1.18': optional: true - '@voidzero-dev/vite-plus-win32-x64-msvc@0.1.16': + '@voidzero-dev/vite-plus-win32-x64-msvc@0.1.18': optional: true '@volar/language-core@2.4.28': @@ -12378,6 +12724,9 @@ snapshots: loupe: 3.2.1 pathval: 2.0.1 + chai@6.2.2: + optional: true + chalk@4.1.1: dependencies: ansi-styles: 4.3.0 @@ -12793,6 +13142,8 @@ snapshots: d3: 7.9.0 lodash-es: 4.18.0 + date-fns@4.1.0: {} + dayjs@1.11.20: {} debug@4.4.3(supports-color@8.1.1): @@ -12870,7 +13221,7 @@ snapshots: optionalDependencies: '@types/trusted-types': 2.0.7 - dompurify@3.3.3: + dompurify@3.4.0: optionalDependencies: '@types/trusted-types': 2.0.7 @@ -12929,6 +13280,20 @@ snapshots: dependencies: once: 1.4.0 + engine.io-client@6.6.4: + dependencies: + '@socket.io/component-emitter': 3.1.2 + debug: 4.4.3(supports-color@8.1.1) + engine.io-parser: 5.2.3 + ws: 8.18.3 + xmlhttprequest-ssl: 2.1.2 + transitivePeerDependencies: + - bufferutil + - supports-color + - utf-8-validate + + engine.io-parser@5.2.3: {} + enhanced-resolve@5.20.1: dependencies: graceful-fs: 4.2.11 @@ -13044,30 +13409,30 @@ snapshots: dependencies: eslint: 10.2.0(jiti@2.6.1) - eslint-plugin-better-tailwindcss@4.3.2(eslint@10.2.0(jiti@2.6.1))(oxlint@1.58.0(oxlint-tsgolint@0.20.0))(tailwindcss@4.2.2)(typescript@6.0.2): + eslint-plugin-better-tailwindcss@4.4.1(eslint@10.2.0(jiti@2.6.1))(oxlint@1.60.0(oxlint-tsgolint@0.20.0))(tailwindcss@4.2.2)(typescript@6.0.2): dependencies: - '@eslint/css-tree': 3.6.9 + '@eslint/css-tree': 4.0.1 '@valibot/to-json-schema': 1.6.0(valibot@1.3.1(typescript@6.0.2)) enhanced-resolve: 5.20.1 jiti: 2.6.1 synckit: 0.11.12 - 
tailwind-csstree: 0.1.5 + tailwind-csstree: 0.3.1 tailwindcss: 4.2.2 tsconfig-paths-webpack-plugin: 4.2.0 valibot: 1.3.1(typescript@6.0.2) optionalDependencies: eslint: 10.2.0(jiti@2.6.1) - oxlint: 1.58.0(oxlint-tsgolint@0.20.0) + oxlint: 1.60.0(oxlint-tsgolint@0.20.0) transitivePeerDependencies: - '@eslint/css' - typescript - eslint-plugin-command@3.5.2(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.1(typescript@6.0.2))(@typescript-eslint/utils@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)): + eslint-plugin-command@3.5.2(@typescript-eslint/rule-tester@8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(@typescript-eslint/typescript-estree@8.58.2(typescript@6.0.2))(@typescript-eslint/utils@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)): dependencies: '@es-joy/jsdoccomment': 0.84.0 '@typescript-eslint/rule-tester': 8.57.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/typescript-estree': 8.58.1(typescript@6.0.2) - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/typescript-estree': 8.58.2(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) eslint-plugin-depend@1.5.0(eslint@10.2.0(jiti@2.6.1)): @@ -13127,7 +13492,7 @@ snapshots: transitivePeerDependencies: - '@eslint/json' - eslint-plugin-markdown-preferences@0.41.0(@eslint/markdown@8.0.1)(eslint@10.2.0(jiti@2.6.1)): + eslint-plugin-markdown-preferences@0.41.1(@eslint/markdown@8.0.1)(eslint@10.2.0(jiti@2.6.1)): dependencies: '@eslint/markdown': 8.0.1 diff-sequences: 29.6.3 @@ -13162,11 +13527,11 @@ snapshots: transitivePeerDependencies: - typescript - eslint-plugin-no-barrel-files@1.2.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2): + eslint-plugin-no-barrel-files@1.3.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2): dependencies: - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + eslint: 10.2.0(jiti@2.6.1) transitivePeerDependencies: - - eslint - supports-color - typescript @@ -13174,7 +13539,7 @@ snapshots: eslint-plugin-perfectionist@5.8.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2): dependencies: - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) natural-orderby: 5.0.0 transitivePeerDependencies: @@ -13188,7 +13553,7 @@ snapshots: jsonc-eslint-parser: 3.1.0 pathe: 2.0.3 pnpm-workspace-yaml: 1.6.0 - tinyglobby: 0.2.15 + tinyglobby: 0.2.16 yaml: 2.8.3 yaml-eslint-parser: 2.0.0 @@ -13198,9 +13563,9 @@ snapshots: '@eslint-react/core': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/var': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) compare-versions: 6.1.1 eslint: 10.2.0(jiti@2.6.1) ts-pattern: 5.9.0 @@ -13214,10 +13579,10 @@ snapshots: '@eslint-react/core': 
3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/var': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/type-utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/type-utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) compare-versions: 6.1.1 eslint: 10.2.0(jiti@2.6.1) string-ts: 2.3.1 @@ -13235,10 +13600,10 @@ snapshots: '@eslint-react/ast': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/var': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/type-utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/type-utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) ts-pattern: 5.9.0 typescript: 6.0.2 @@ -13251,9 +13616,9 @@ snapshots: '@eslint-react/core': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/var': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) birecord: 0.1.1 eslint: 10.2.0(jiti@2.6.1) ts-pattern: 5.9.0 @@ -13267,10 +13632,10 @@ snapshots: '@eslint-react/core': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/shared': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) '@eslint-react/var': 3.0.0(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/scope-manager': 8.58.1 - '@typescript-eslint/type-utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - '@typescript-eslint/types': 8.58.1 - '@typescript-eslint/utils': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/scope-manager': 8.58.2 + '@typescript-eslint/type-utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/types': 8.58.2 + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) compare-versions: 6.1.1 eslint: 10.2.0(jiti@2.6.1) string-ts: 2.3.1 @@ -13298,7 +13663,7 @@ snapshots: bytes: 3.1.2 eslint: 10.2.0(jiti@2.6.1) functional-red-black-tree: 1.0.1 - globals: 17.4.0 + globals: 17.5.0 jsx-ast-utils-x: 0.1.0 lodash.merge: 4.6.2 minimatch: 10.2.4 @@ -13309,7 +13674,7 @@ snapshots: eslint-plugin-storybook@10.3.5(eslint@10.2.0(jiti@2.6.1))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2): dependencies: - '@typescript-eslint/utils': 
8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/utils': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint: 10.2.0(jiti@2.6.1) storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) transitivePeerDependencies: @@ -13337,7 +13702,7 @@ snapshots: core-js-compat: 3.49.0 eslint: 10.2.0(jiti@2.6.1) find-up-simple: 1.0.1 - globals: 17.4.0 + globals: 17.5.0 indent-string: 5.0.0 is-builtin-module: 5.0.0 jsesc: 3.1.0 @@ -13347,13 +13712,13 @@ snapshots: semver: 7.7.4 strip-indent: 4.1.1 - eslint-plugin-unused-imports@4.4.1(@typescript-eslint/eslint-plugin@8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)): + eslint-plugin-unused-imports@4.4.1(@typescript-eslint/eslint-plugin@8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1)): dependencies: eslint: 10.2.0(jiti@2.6.1) optionalDependencies: - '@typescript-eslint/eslint-plugin': 8.58.1(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/eslint-plugin': 8.58.2(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) - eslint-plugin-vue@10.8.0(@stylistic/eslint-plugin@5.10.0(eslint@10.2.0(jiti@2.6.1)))(@typescript-eslint/parser@8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(vue-eslint-parser@10.4.0(eslint@10.2.0(jiti@2.6.1))): + eslint-plugin-vue@10.8.0(@stylistic/eslint-plugin@5.10.0(eslint@10.2.0(jiti@2.6.1)))(@typescript-eslint/parser@8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2))(eslint@10.2.0(jiti@2.6.1))(vue-eslint-parser@10.4.0(eslint@10.2.0(jiti@2.6.1))): dependencies: '@eslint-community/eslint-utils': 4.9.1(eslint@10.2.0(jiti@2.6.1)) eslint: 10.2.0(jiti@2.6.1) @@ -13365,7 +13730,7 @@ snapshots: xml-name-validator: 4.0.0 optionalDependencies: '@stylistic/eslint-plugin': 5.10.0(eslint@10.2.0(jiti@2.6.1)) - '@typescript-eslint/parser': 8.58.1(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) + '@typescript-eslint/parser': 8.58.2(eslint@10.2.0(jiti@2.6.1))(typescript@6.0.2) eslint-plugin-yml@3.3.1(eslint@10.2.0(jiti@2.6.1)): dependencies: @@ -13556,6 +13921,9 @@ snapshots: expand-template@2.0.3: optional: true + expect-type@1.3.0: + optional: true + exsolve@1.0.8: {} extend@3.0.2: {} @@ -13712,7 +14080,7 @@ snapshots: globals@15.15.0: {} - globals@17.4.0: {} + globals@17.5.0: {} globrex@0.1.2: {} @@ -13724,9 +14092,9 @@ snapshots: hachure-fill@0.5.2: {} - happy-dom@20.8.9: + happy-dom@20.9.0: dependencies: - '@types/node': 25.5.2 + '@types/node': 25.6.0 '@types/whatwg-mimetype': 3.0.2 '@types/ws': 8.18.1 entities: 7.0.1 @@ -13891,7 +14259,7 @@ snapshots: hex-rgb@4.3.0: {} - hono@4.12.12: {} + hono@4.12.14: {} hosted-git-info@9.0.2: dependencies: @@ -14052,7 +14420,7 @@ snapshots: jest-worker@27.5.1: dependencies: - '@types/node': 25.5.2 + '@types/node': 25.6.0 merge-stream: 2.0.0 supports-color: 8.1.1 @@ -14125,7 +14493,7 @@ snapshots: khroma@2.1.0: {} - knip@6.3.1(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1): + knip@6.4.1(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2): dependencies: '@nodelib/fs.walk': 1.2.8 fast-glob: 3.3.3 @@ -14133,8 +14501,8 @@ snapshots: get-tsconfig: 4.13.7 jiti: 2.6.1 minimist: 1.2.8 - oxc-parser: 0.121.0(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) - 
oxc-resolver: 11.19.1(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) + oxc-parser: 0.121.0(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) + oxc-resolver: 11.19.1(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) picocolors: 1.1.1 picomatch: 4.0.4 smol-toml: 1.6.1 @@ -14180,12 +14548,12 @@ snapshots: prelude-ls: 1.2.1 type-check: 0.4.0 - lexical-code-no-prism@0.41.0(@lexical/utils@0.42.0)(lexical@0.42.0): + lexical-code-no-prism@0.41.0(@lexical/utils@0.43.0)(lexical@0.43.0): dependencies: - '@lexical/utils': 0.42.0 - lexical: 0.42.0 + '@lexical/utils': 0.43.0 + lexical: 0.43.0 - lexical@0.42.0: {} + lexical@0.43.0: {} lib0@0.2.117: dependencies: @@ -14273,6 +14641,8 @@ snapshots: dependencies: js-tokens: 4.0.0 + loro-crdt@1.10.8: {} + loupe@3.2.1: {} lower-case@2.0.2: @@ -14522,7 +14892,7 @@ snapshots: mdn-data@2.0.30: {} - mdn-data@2.23.0: {} + mdn-data@2.27.1: {} merge-stream@2.0.0: {} @@ -15041,7 +15411,7 @@ snapshots: type-check: 0.4.0 word-wrap: 1.2.5 - oxc-parser@0.121.0(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1): + oxc-parser@0.121.0(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2): dependencies: '@oxc-project/types': 0.121.0 optionalDependencies: @@ -15061,7 +15431,7 @@ snapshots: '@oxc-parser/binding-linux-x64-gnu': 0.121.0 '@oxc-parser/binding-linux-x64-musl': 0.121.0 '@oxc-parser/binding-openharmony-arm64': 0.121.0 - '@oxc-parser/binding-wasm32-wasi': 0.121.0(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) + '@oxc-parser/binding-wasm32-wasi': 0.121.0(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) '@oxc-parser/binding-win32-arm64-msvc': 0.121.0 '@oxc-parser/binding-win32-ia32-msvc': 0.121.0 '@oxc-parser/binding-win32-x64-msvc': 0.121.0 @@ -15069,7 +15439,7 @@ snapshots: - '@emnapi/core' - '@emnapi/runtime' - oxc-resolver@11.19.1(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1): + oxc-resolver@11.19.1(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2): optionalDependencies: '@oxc-resolver/binding-android-arm-eabi': 11.19.1 '@oxc-resolver/binding-android-arm64': 11.19.1 @@ -15087,7 +15457,7 @@ snapshots: '@oxc-resolver/binding-linux-x64-gnu': 11.19.1 '@oxc-resolver/binding-linux-x64-musl': 11.19.1 '@oxc-resolver/binding-openharmony-arm64': 11.19.1 - '@oxc-resolver/binding-wasm32-wasi': 11.19.1(@emnapi/core@1.9.1)(@emnapi/runtime@1.9.1) + '@oxc-resolver/binding-wasm32-wasi': 11.19.1(@emnapi/core@1.9.2)(@emnapi/runtime@1.9.2) '@oxc-resolver/binding-win32-arm64-msvc': 11.19.1 '@oxc-resolver/binding-win32-ia32-msvc': 11.19.1 '@oxc-resolver/binding-win32-x64-msvc': 11.19.1 @@ -15095,29 +15465,29 @@ snapshots: - '@emnapi/core' - '@emnapi/runtime' - oxfmt@0.43.0: + oxfmt@0.45.0: dependencies: tinypool: 2.1.0 optionalDependencies: - '@oxfmt/binding-android-arm-eabi': 0.43.0 - '@oxfmt/binding-android-arm64': 0.43.0 - '@oxfmt/binding-darwin-arm64': 0.43.0 - '@oxfmt/binding-darwin-x64': 0.43.0 - '@oxfmt/binding-freebsd-x64': 0.43.0 - '@oxfmt/binding-linux-arm-gnueabihf': 0.43.0 - '@oxfmt/binding-linux-arm-musleabihf': 0.43.0 - '@oxfmt/binding-linux-arm64-gnu': 0.43.0 - '@oxfmt/binding-linux-arm64-musl': 0.43.0 - '@oxfmt/binding-linux-ppc64-gnu': 0.43.0 - '@oxfmt/binding-linux-riscv64-gnu': 0.43.0 - '@oxfmt/binding-linux-riscv64-musl': 0.43.0 - '@oxfmt/binding-linux-s390x-gnu': 0.43.0 - '@oxfmt/binding-linux-x64-gnu': 0.43.0 - '@oxfmt/binding-linux-x64-musl': 0.43.0 - '@oxfmt/binding-openharmony-arm64': 0.43.0 - '@oxfmt/binding-win32-arm64-msvc': 0.43.0 - '@oxfmt/binding-win32-ia32-msvc': 0.43.0 - '@oxfmt/binding-win32-x64-msvc': 0.43.0 + '@oxfmt/binding-android-arm-eabi': 0.45.0 + '@oxfmt/binding-android-arm64': 0.45.0 + 
'@oxfmt/binding-darwin-arm64': 0.45.0 + '@oxfmt/binding-darwin-x64': 0.45.0 + '@oxfmt/binding-freebsd-x64': 0.45.0 + '@oxfmt/binding-linux-arm-gnueabihf': 0.45.0 + '@oxfmt/binding-linux-arm-musleabihf': 0.45.0 + '@oxfmt/binding-linux-arm64-gnu': 0.45.0 + '@oxfmt/binding-linux-arm64-musl': 0.45.0 + '@oxfmt/binding-linux-ppc64-gnu': 0.45.0 + '@oxfmt/binding-linux-riscv64-gnu': 0.45.0 + '@oxfmt/binding-linux-riscv64-musl': 0.45.0 + '@oxfmt/binding-linux-s390x-gnu': 0.45.0 + '@oxfmt/binding-linux-x64-gnu': 0.45.0 + '@oxfmt/binding-linux-x64-musl': 0.45.0 + '@oxfmt/binding-openharmony-arm64': 0.45.0 + '@oxfmt/binding-win32-arm64-msvc': 0.45.0 + '@oxfmt/binding-win32-ia32-msvc': 0.45.0 + '@oxfmt/binding-win32-x64-msvc': 0.45.0 oxlint-tsgolint@0.20.0: optionalDependencies: @@ -15128,27 +15498,27 @@ snapshots: '@oxlint-tsgolint/win32-arm64': 0.20.0 '@oxlint-tsgolint/win32-x64': 0.20.0 - oxlint@1.58.0(oxlint-tsgolint@0.20.0): + oxlint@1.60.0(oxlint-tsgolint@0.20.0): optionalDependencies: - '@oxlint/binding-android-arm-eabi': 1.58.0 - '@oxlint/binding-android-arm64': 1.58.0 - '@oxlint/binding-darwin-arm64': 1.58.0 - '@oxlint/binding-darwin-x64': 1.58.0 - '@oxlint/binding-freebsd-x64': 1.58.0 - '@oxlint/binding-linux-arm-gnueabihf': 1.58.0 - '@oxlint/binding-linux-arm-musleabihf': 1.58.0 - '@oxlint/binding-linux-arm64-gnu': 1.58.0 - '@oxlint/binding-linux-arm64-musl': 1.58.0 - '@oxlint/binding-linux-ppc64-gnu': 1.58.0 - '@oxlint/binding-linux-riscv64-gnu': 1.58.0 - '@oxlint/binding-linux-riscv64-musl': 1.58.0 - '@oxlint/binding-linux-s390x-gnu': 1.58.0 - '@oxlint/binding-linux-x64-gnu': 1.58.0 - '@oxlint/binding-linux-x64-musl': 1.58.0 - '@oxlint/binding-openharmony-arm64': 1.58.0 - '@oxlint/binding-win32-arm64-msvc': 1.58.0 - '@oxlint/binding-win32-ia32-msvc': 1.58.0 - '@oxlint/binding-win32-x64-msvc': 1.58.0 + '@oxlint/binding-android-arm-eabi': 1.60.0 + '@oxlint/binding-android-arm64': 1.60.0 + '@oxlint/binding-darwin-arm64': 1.60.0 + '@oxlint/binding-darwin-x64': 1.60.0 + '@oxlint/binding-freebsd-x64': 1.60.0 + '@oxlint/binding-linux-arm-gnueabihf': 1.60.0 + '@oxlint/binding-linux-arm-musleabihf': 1.60.0 + '@oxlint/binding-linux-arm64-gnu': 1.60.0 + '@oxlint/binding-linux-arm64-musl': 1.60.0 + '@oxlint/binding-linux-ppc64-gnu': 1.60.0 + '@oxlint/binding-linux-riscv64-gnu': 1.60.0 + '@oxlint/binding-linux-riscv64-musl': 1.60.0 + '@oxlint/binding-linux-s390x-gnu': 1.60.0 + '@oxlint/binding-linux-x64-gnu': 1.60.0 + '@oxlint/binding-linux-x64-musl': 1.60.0 + '@oxlint/binding-openharmony-arm64': 1.60.0 + '@oxlint/binding-win32-arm64-msvc': 1.60.0 + '@oxlint/binding-win32-ia32-msvc': 1.60.0 + '@oxlint/binding-win32-x64-msvc': 1.60.0 oxlint-tsgolint: 0.20.0 p-limit@3.1.0: @@ -15260,7 +15630,7 @@ snapshots: picomatch@4.0.4: {} - pinyin-pro@3.28.0: {} + pinyin-pro@3.28.1: {} pixelmatch@7.1.0: dependencies: @@ -15518,13 +15888,13 @@ snapshots: react-draggable: 4.5.0(react-dom@19.2.5(react@19.2.5))(react@19.2.5) tslib: 2.6.2 - react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(uglify-js@3.19.3)): + react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)): dependencies: acorn-loose: 8.5.2 neo-async: 2.6.2 react: 19.2.5 react-dom: 19.2.5(react@19.2.5) - webpack: 5.105.4(uglify-js@3.19.3) + webpack: 5.105.4(esbuild@0.27.2)(uglify-js@3.19.3) webpack-sources: 3.3.4 react-sortablejs@6.1.4(@types/sortablejs@1.15.9)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sortablejs@1.15.7): @@ -15943,6 +16313,9 
@@ snapshots: '@shikijs/vscode-textmate': 10.0.2 '@types/hast': 3.0.4 + siginfo@2.0.0: + optional: true + simple-concat@1.0.1: optional: true @@ -15965,6 +16338,24 @@ snapshots: smol-toml@1.6.1: {} + socket.io-client@4.8.3: + dependencies: + '@socket.io/component-emitter': 3.1.2 + debug: 4.4.3(supports-color@8.1.1) + engine.io-client: 6.6.4 + socket.io-parser: 4.2.6 + transitivePeerDependencies: + - bufferutil + - supports-color + - utf-8-validate + + socket.io-parser@4.2.6: + dependencies: + '@socket.io/component-emitter': 3.1.2 + debug: 4.4.3(supports-color@8.1.1) + transitivePeerDependencies: + - supports-color + solid-js@1.9.11: dependencies: csstype: 3.2.3 @@ -16007,6 +16398,9 @@ snapshots: srvx@0.11.15: {} + stackback@0.0.2: + optional: true + stackframe@1.3.4: {} state-local@1.0.7: {} @@ -16151,7 +16545,7 @@ snapshots: tagged-tag@1.0.0: {} - tailwind-csstree@0.1.5: {} + tailwind-csstree@0.3.1: {} tailwind-merge@3.5.0: {} @@ -16184,14 +16578,15 @@ snapshots: minizlib: 3.1.0 yallist: 5.0.0 - terser-webpack-plugin@5.4.0(uglify-js@3.19.3)(webpack@5.105.4(uglify-js@3.19.3)): + terser-webpack-plugin@5.4.0(esbuild@0.27.2)(uglify-js@3.19.3)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)): dependencies: '@jridgewell/trace-mapping': 0.3.31 jest-worker: 27.5.1 schema-utils: 4.3.3 terser: 5.46.1 - webpack: 5.105.4(uglify-js@3.19.3) + webpack: 5.105.4(esbuild@0.27.2)(uglify-js@3.19.3) optionalDependencies: + esbuild: 0.27.2 uglify-js: 3.19.3 terser@5.46.1: @@ -16226,6 +16621,11 @@ snapshots: fdir: 6.5.0(picomatch@4.0.4) picomatch: 4.0.4 + tinyglobby@0.2.16: + dependencies: + fdir: 6.5.0(picomatch@4.0.4) + picomatch: 4.0.4 + tinypool@2.1.0: {} tinyrainbow@2.0.0: {} @@ -16335,7 +16735,7 @@ snapshots: unbash@2.2.0: {} - undici-types@7.18.2: {} + undici-types@7.19.2: {} undici@7.24.0: {} @@ -16500,21 +16900,21 @@ snapshots: '@types/unist': 3.0.3 vfile-message: 4.0.3 - vinext@0.0.41(@mdx-js/rollup@3.1.1(rollup@4.59.0))(@vitejs/plugin-react@6.0.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)))(@vitejs/plugin-rsc@0.5.23(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.5(react@19.2.5))(react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.5))(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react-dom@19.2.5(react@19.2.5))(react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.5)(typescript@6.0.2): + vinext@0.0.41(453b4e184a832f83060410b31544dc36): dependencies: '@unpic/react': 1.0.2(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(react-dom@19.2.5(react@19.2.5))(react@19.2.5) '@vercel/og': 0.8.6 - '@vitejs/plugin-react': 6.0.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + '@vitejs/plugin-react': 6.0.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) magic-string: 0.30.21 react: 19.2.5 react-dom: 
19.2.5(react@19.2.5) - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' vite-plugin-commonjs: 0.10.4 - vite-tsconfig-paths: 6.1.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2) + vite-tsconfig-paths: 6.1.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2) optionalDependencies: '@mdx-js/rollup': 3.1.1(rollup@4.59.0) - '@vitejs/plugin-rsc': 0.5.23(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.5(react@19.2.5))(react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(uglify-js@3.19.3)))(react@19.2.5) - react-server-dom-webpack: 19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(uglify-js@3.19.3)) + '@vitejs/plugin-rsc': 0.5.24(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(react-dom@19.2.5(react@19.2.5))(react-server-dom-webpack@19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)))(react@19.2.5) + react-server-dom-webpack: 19.2.5(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) transitivePeerDependencies: - next - supports-color @@ -16533,9 +16933,9 @@ snapshots: fast-glob: 3.3.3 magic-string: 0.30.21 - vite-plugin-inspect@12.0.0-beta.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0): + vite-plugin-inspect@12.0.0-beta.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0): dependencies: - '@vitejs/devtools-kit': 0.1.11(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0) + '@vitejs/devtools-kit': 0.1.11(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2)(ws@8.20.0) ansis: 4.2.0 error-stack-parser-es: 1.0.5 obug: 2.1.1 @@ -16544,12 +16944,12 @@ snapshots: perfect-debounce: 2.1.0 sirv: 3.0.2 unplugin-utils: 0.3.1 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' transitivePeerDependencies: - typescript - ws - 
vite-plugin-storybook-nextjs@3.2.4(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2): + vite-plugin-storybook-nextjs@3.2.4(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(next@16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0))(storybook@10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5))(typescript@6.0.2): dependencies: '@next/env': 16.0.0 image-size: 2.0.2 @@ -16558,29 +16958,29 @@ snapshots: next: 16.2.3(@babel/core@7.29.0)(@playwright/test@1.59.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5)(sass@1.98.0) storybook: 10.3.5(@testing-library/dom@10.4.1)(react-dom@19.2.5(react@19.2.5))(react@19.2.5) ts-dedent: 2.2.0 - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - vite-tsconfig-paths: 5.1.4(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2) + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite-tsconfig-paths: 5.1.4(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2) transitivePeerDependencies: - supports-color - typescript - vite-plus@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3): + vite-plus@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3): dependencies: - '@oxc-project/types': 0.123.0 - '@voidzero-dev/vite-plus-core': 0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) - '@voidzero-dev/vite-plus-test': 0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) - oxfmt: 0.43.0 - oxlint: 1.58.0(oxlint-tsgolint@0.20.0) + '@oxc-project/types': 0.124.0 + '@voidzero-dev/vite-plus-core': 0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + '@voidzero-dev/vite-plus-test': 
0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3) + oxfmt: 0.45.0 + oxlint: 1.60.0(oxlint-tsgolint@0.20.0) oxlint-tsgolint: 0.20.0 optionalDependencies: - '@voidzero-dev/vite-plus-darwin-arm64': 0.1.16 - '@voidzero-dev/vite-plus-darwin-x64': 0.1.16 - '@voidzero-dev/vite-plus-linux-arm64-gnu': 0.1.16 - '@voidzero-dev/vite-plus-linux-arm64-musl': 0.1.16 - '@voidzero-dev/vite-plus-linux-x64-gnu': 0.1.16 - '@voidzero-dev/vite-plus-linux-x64-musl': 0.1.16 - '@voidzero-dev/vite-plus-win32-arm64-msvc': 0.1.16 - '@voidzero-dev/vite-plus-win32-x64-msvc': 0.1.16 + '@voidzero-dev/vite-plus-darwin-arm64': 0.1.18 + '@voidzero-dev/vite-plus-darwin-x64': 0.1.18 + '@voidzero-dev/vite-plus-linux-arm64-gnu': 0.1.18 + '@voidzero-dev/vite-plus-linux-arm64-musl': 0.1.18 + '@voidzero-dev/vite-plus-linux-x64-gnu': 0.1.18 + '@voidzero-dev/vite-plus-linux-x64-musl': 0.1.18 + '@voidzero-dev/vite-plus-win32-arm64-msvc': 0.1.18 + '@voidzero-dev/vite-plus-win32-x64-msvc': 0.1.18 transitivePeerDependencies: - '@arethetypeswrong/core' - '@edge-runtime/vm' @@ -16589,6 +16989,8 @@ snapshots: - '@tsdown/exe' - '@types/node' - '@vitejs/devtools' + - '@vitest/coverage-istanbul' + - '@vitest/coverage-v8' - '@vitest/ui' - bufferutil - esbuild @@ -16609,36 +17011,66 @@ snapshots: - vite - yaml - vite-tsconfig-paths@5.1.4(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2): + vite-tsconfig-paths@5.1.4(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2): dependencies: debug: 4.4.3(supports-color@8.1.1) globrex: 0.1.2 tsconfck: 3.1.6(typescript@6.0.2) optionalDependencies: - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' transitivePeerDependencies: - supports-color - typescript - vite-tsconfig-paths@6.1.1(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2): + vite-tsconfig-paths@6.1.1(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(typescript@6.0.2): dependencies: debug: 4.4.3(supports-color@8.1.1) globrex: 0.1.2 tsconfck: 3.1.6(typescript@6.0.2) - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' transitivePeerDependencies: - supports-color - typescript - vitefu@1.1.3(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)): + 
vitefu@1.1.3(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)): optionalDependencies: - vite: '@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' - vitest-canvas-mock@1.1.4(@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)): + vitest-canvas-mock@1.1.4(@voidzero-dev/vite-plus-test@0.1.18): dependencies: cssfontparser: 1.2.1 moo-color: 1.0.3 - vitest: '@voidzero-dev/vite-plus-test@0.1.16(@types/node@25.5.2)(@voidzero-dev/vite-plus-core@0.1.16(@types/node@25.5.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.8.9)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + vitest: '@voidzero-dev/vite-plus-test@0.1.18(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(esbuild@0.27.2)(happy-dom@20.9.0)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + + vitest@4.1.4(@types/node@25.6.0)(@vitest/coverage-v8@4.1.4)(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3))(happy-dom@20.9.0): + dependencies: + '@vitest/expect': 4.1.4 + '@vitest/mocker': 4.1.4(@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)) + '@vitest/pretty-format': 4.1.4 + '@vitest/runner': 4.1.4 + '@vitest/snapshot': 4.1.4 + '@vitest/spy': 4.1.4 + '@vitest/utils': 4.1.4 + es-module-lexer: 2.0.0 + expect-type: 1.3.0 + magic-string: 0.30.21 + obug: 2.1.1 + pathe: 2.0.3 + picomatch: 4.0.4 + std-env: 4.0.0 + tinybench: 2.9.0 + tinyexec: 1.0.4 + tinyglobby: 0.2.16 + tinyrainbow: 3.1.0 + vite: '@voidzero-dev/vite-plus-core@0.1.18(@types/node@25.6.0)(esbuild@0.27.2)(jiti@2.6.1)(sass@1.98.0)(terser@5.46.1)(tsx@4.21.0)(typescript@6.0.2)(yaml@2.8.3)' + why-is-node-running: 2.3.0 + optionalDependencies: + '@types/node': 25.6.0 + '@vitest/coverage-v8': 4.1.4(vitest@4.1.4) + happy-dom: 20.9.0 + transitivePeerDependencies: + - msw + optional: true void-elements@3.1.0: {} @@ -16686,7 +17118,7 @@ snapshots: webpack-virtual-modules@0.6.2: {} - webpack@5.105.4(uglify-js@3.19.3): + webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3): dependencies: '@types/eslint-scope': 3.7.7 '@types/estree': 1.0.8 @@ -16710,7 +17142,7 @@ snapshots: neo-async: 2.6.2 schema-utils: 4.3.3 tapable: 2.3.2 - terser-webpack-plugin: 5.4.0(uglify-js@3.19.3)(webpack@5.105.4(uglify-js@3.19.3)) + terser-webpack-plugin: 5.4.0(esbuild@0.27.2)(uglify-js@3.19.3)(webpack@5.105.4(esbuild@0.27.2)(uglify-js@3.19.3)) watchpack: 2.5.1 webpack-sources: 3.3.4 transitivePeerDependencies: @@ -16730,10 +17162,18 @@ snapshots: dependencies: isexe: 2.0.0 + why-is-node-running@2.3.0: + dependencies: + siginfo: 2.0.0 + stackback: 0.0.2 + optional: true 
+ word-wrap@1.2.5: {} wrappy@1.0.2: {} + ws@8.18.3: {} + ws@8.20.0: {} wsl-utils@0.1.0: @@ -16749,6 +17189,8 @@ snapshots: xmlbuilder@15.1.1: {} + xmlhttprequest-ssl@2.1.2: {} + yallist@3.1.1: {} yallist@5.0.0: {} diff --git a/pnpm-workspace.yaml b/pnpm-workspace.yaml index d0f9c4e1be..0bd1303fb3 100644 --- a/pnpm-workspace.yaml +++ b/pnpm-workspace.yaml @@ -1,133 +1,100 @@ -catalogMode: prefer -trustPolicy: no-downgrade -trustPolicyExclude: - - chokidar@4.0.3 - - reselect@5.1.1 - - semver@6.3.1 -blockExoticSubdeps: true -strictDepBuilds: true -allowBuilds: - "@parcel/watcher": false - canvas: false - esbuild: false - sharp: false packages: - web - e2e - sdks/nodejs-client - packages/* -overrides: - "@lexical/code": npm:lexical-code-no-prism@0.41.0 - "@monaco-editor/loader": 1.7.0 - brace-expansion@>=2.0.0 <2.0.3: 2.0.3 - canvas: ^3.2.2 - dompurify@>=3.1.3 <=3.3.1: 3.3.2 - esbuild@<0.27.2: 0.27.2 - flatted@<=3.4.1: 3.4.2 - glob@>=10.2.0 <10.5.0: 11.1.0 - is-core-module: npm:@nolyfill/is-core-module@^1.0.39 - lodash@>=4.0.0 <= 4.17.23: 4.18.0 - lodash-es@>=4.0.0 <= 4.17.23: 4.18.0 - picomatch@<2.3.2: 2.3.2 - picomatch@>=4.0.0 <4.0.4: 4.0.4 - rollup@>=4.0.0 <4.59.0: 4.59.0 - safe-buffer: ^5.2.1 - safer-buffer: npm:@nolyfill/safer-buffer@^1.0.44 - side-channel: npm:@nolyfill/side-channel@^1.0.44 - smol-toml@<1.6.1: 1.6.1 - solid-js: 1.9.11 - string-width: ~8.2.0 - svgo@>=3.0.0 <3.3.3: 3.3.3 - tar@<=7.5.10: 7.5.11 - undici@>=7.0.0 <7.24.0: 7.24.0 - vite: npm:@voidzero-dev/vite-plus-core@0.1.16 - vitest: npm:@voidzero-dev/vite-plus-test@0.1.16 - yaml@>=2.0.0 <2.8.3: 2.8.3 - yauzl@<3.2.1: 3.2.1 +allowBuilds: + '@parcel/watcher': false + canvas: false + esbuild: false + sharp: false +blockExoticSubdeps: true catalog: - "@amplitude/analytics-browser": 2.38.1 - "@amplitude/plugin-session-replay-browser": 1.27.6 - "@antfu/eslint-config": 8.1.1 - "@base-ui/react": 1.3.0 - "@chromatic-com/storybook": 5.1.1 - "@cucumber/cucumber": 12.7.0 - "@egoist/tailwindcss-icons": 1.9.2 - "@emoji-mart/data": 1.2.1 - "@eslint-react/eslint-plugin": 3.0.0 - "@eslint/js": 10.0.1 - "@floating-ui/react": 0.27.19 - "@formatjs/intl-localematcher": 0.8.2 - "@headlessui/react": 2.2.10 - "@heroicons/react": 2.2.0 - "@hono/node-server": 1.19.13 - "@iconify-json/heroicons": 1.2.3 - "@iconify-json/ri": 1.2.10 - "@lexical/code": 0.42.0 - "@lexical/link": 0.42.0 - "@lexical/list": 0.42.0 - "@lexical/react": 0.42.0 - "@lexical/selection": 0.42.0 - "@lexical/text": 0.42.0 - "@lexical/utils": 0.42.0 - "@mdx-js/loader": 3.1.1 - "@mdx-js/react": 3.1.1 - "@mdx-js/rollup": 3.1.1 - "@monaco-editor/react": 4.7.0 - "@next/eslint-plugin-next": 16.2.3 - "@next/mdx": 16.2.3 - "@orpc/client": 1.13.13 - "@orpc/contract": 1.13.13 - "@orpc/openapi-client": 1.13.13 - "@orpc/tanstack-query": 1.13.13 - "@playwright/test": 1.59.1 - "@remixicon/react": 4.9.0 - "@rgrove/parse-xml": 4.2.0 - "@sentry/react": 10.47.0 - "@storybook/addon-docs": 10.3.5 - "@storybook/addon-links": 10.3.5 - "@storybook/addon-onboarding": 10.3.5 - "@storybook/addon-themes": 10.3.5 - "@storybook/nextjs-vite": 10.3.5 - "@storybook/react": 10.3.5 - "@streamdown/math": 1.0.2 - "@svgdotjs/svg.js": 3.2.5 - "@t3-oss/env-nextjs": 0.13.11 - "@tailwindcss/postcss": 4.2.2 - "@tailwindcss/typography": 0.5.19 - "@tailwindcss/vite": 4.2.2 - "@tanstack/eslint-plugin-query": 5.96.2 - "@tanstack/react-devtools": 0.10.2 - "@tanstack/react-form": 1.28.6 - "@tanstack/react-form-devtools": 0.2.20 - "@tanstack/react-query": 5.96.2 - "@tanstack/react-query-devtools": 5.96.2 - 
"@tanstack/react-virtual": 3.13.23 - "@testing-library/dom": 10.4.1 - "@testing-library/jest-dom": 6.9.1 - "@testing-library/react": 16.3.2 - "@testing-library/user-event": 14.6.1 - "@tsslint/cli": 3.0.2 - "@tsslint/compat-eslint": 3.0.2 - "@tsslint/config": 3.0.2 - "@types/js-cookie": 3.0.6 - "@types/js-yaml": 4.0.9 - "@types/negotiator": 0.6.4 - "@types/node": 25.5.2 - "@types/postcss-js": 4.1.0 - "@types/qs": 6.15.0 - "@types/react": 19.2.14 - "@types/react-dom": 19.2.3 - "@types/sortablejs": 1.15.9 - "@typescript-eslint/eslint-plugin": 8.58.1 - "@typescript-eslint/parser": 8.58.1 - "@typescript/native-preview": 7.0.0-dev.20260408.1 - "@vitejs/plugin-react": 6.0.1 - "@vitejs/plugin-rsc": 0.5.23 - "@vitest/coverage-v8": 4.1.3 + '@amplitude/analytics-browser': 2.39.0 + '@amplitude/plugin-session-replay-browser': 1.27.7 + '@antfu/eslint-config': 8.2.0 + '@base-ui/react': 1.4.0 + '@chromatic-com/storybook': 5.1.2 + '@cucumber/cucumber': 12.8.0 + '@date-fns/tz': 1.4.1 + '@egoist/tailwindcss-icons': 1.9.2 + '@emoji-mart/data': 1.2.1 + '@eslint-react/eslint-plugin': 3.0.0 + '@eslint/js': 10.0.1 + '@floating-ui/react': 0.27.19 + '@formatjs/intl-localematcher': 0.8.3 + '@headlessui/react': 2.2.10 + '@heroicons/react': 2.2.0 + '@hono/node-server': 1.19.14 + '@iconify-json/heroicons': 1.2.3 + '@iconify-json/ri': 1.2.10 + '@lexical/code': 0.43.0 + '@lexical/link': 0.43.0 + '@lexical/list': 0.43.0 + '@lexical/react': 0.43.0 + '@lexical/selection': 0.43.0 + '@lexical/text': 0.43.0 + '@lexical/utils': 0.43.0 + '@mdx-js/loader': 3.1.1 + '@mdx-js/react': 3.1.1 + '@mdx-js/rollup': 3.1.1 + '@monaco-editor/react': 4.7.0 + '@next/eslint-plugin-next': 16.2.3 + '@next/mdx': 16.2.3 + '@orpc/client': 1.13.14 + '@orpc/contract': 1.13.14 + '@orpc/openapi-client': 1.13.14 + '@orpc/tanstack-query': 1.13.14 + '@playwright/test': 1.59.1 + '@remixicon/react': 4.9.0 + '@rgrove/parse-xml': 4.2.0 + '@sentry/react': 10.48.0 + '@storybook/addon-docs': 10.3.5 + '@storybook/addon-links': 10.3.5 + '@storybook/addon-onboarding': 10.3.5 + '@storybook/addon-themes': 10.3.5 + '@storybook/nextjs-vite': 10.3.5 + '@storybook/react': 10.3.5 + '@streamdown/math': 1.0.2 + '@svgdotjs/svg.js': 3.2.5 + '@t3-oss/env-nextjs': 0.13.11 + '@tailwindcss/postcss': 4.2.2 + '@tailwindcss/typography': 0.5.19 + '@tailwindcss/vite': 4.2.2 + '@tanstack/eslint-plugin-query': 5.99.0 + '@tanstack/react-devtools': 0.10.2 + '@tanstack/react-form': 1.29.0 + '@tanstack/react-form-devtools': 0.2.21 + '@tanstack/react-query': 5.99.0 + '@tanstack/react-query-devtools': 5.99.0 + '@tanstack/react-virtual': 3.13.23 + '@testing-library/dom': 10.4.1 + '@testing-library/jest-dom': 6.9.1 + '@testing-library/react': 16.3.2 + '@testing-library/user-event': 14.6.1 + '@tsdown/css': 0.21.8 + '@tsslint/cli': 3.0.3 + '@tsslint/compat-eslint': 3.0.3 + '@tsslint/config': 3.0.3 + '@types/js-cookie': 3.0.6 + '@types/js-yaml': 4.0.9 + '@types/negotiator': 0.6.4 + '@types/node': 25.6.0 + '@types/postcss-js': 4.1.0 + '@types/qs': 6.15.0 + '@types/react': 19.2.14 + '@types/react-dom': 19.2.3 + '@types/sortablejs': 1.15.9 + '@typescript-eslint/eslint-plugin': 8.58.2 + '@typescript-eslint/parser': 8.58.2 + '@typescript/native-preview': 7.0.0-dev.20260413.1 + '@vitejs/plugin-react': 6.0.1 + '@vitejs/plugin-rsc': 0.5.24 + '@vitest/coverage-v8': 4.1.4 abcjs: 6.6.2 agentation: 3.0.2 ahooks: 3.9.7 - autoprefixer: 10.4.27 + autoprefixer: 10.5.0 class-variance-authority: 0.7.1 client-only: 0.0.1 clsx: 2.1.1 @@ -135,9 +102,10 @@ catalog: code-inspector-plugin: 1.5.1 copy-to-clipboard: 3.3.3 
cron-parser: 5.5.0 + date-fns: 4.1.0 dayjs: 1.11.20 decimal.js: 10.6.0 - dompurify: 3.3.3 + dompurify: 3.4.0 echarts: 6.0.0 echarts-for-react: 3.0.6 elkjs: 0.11.1 @@ -147,17 +115,17 @@ catalog: es-toolkit: 1.45.1 eslint: 10.2.0 eslint-markdown: 0.6.1 - eslint-plugin-better-tailwindcss: 4.3.2 + eslint-plugin-better-tailwindcss: 4.4.1 eslint-plugin-hyoban: 0.14.1 - eslint-plugin-markdown-preferences: 0.41.0 - eslint-plugin-no-barrel-files: 1.2.2 + eslint-plugin-markdown-preferences: 0.41.1 + eslint-plugin-no-barrel-files: 1.3.1 eslint-plugin-react-refresh: 0.5.2 eslint-plugin-sonarjs: 4.0.2 eslint-plugin-storybook: 10.3.5 fast-deep-equal: 3.1.3 - happy-dom: 20.8.9 + happy-dom: 20.9.0 hast-util-to-jsx-runtime: 2.3.6 - hono: 4.12.12 + hono: 4.12.14 html-entities: 2.6.0 html-to-image: 1.11.13 i18next: 26.0.4 @@ -170,10 +138,11 @@ catalog: js-yaml: 4.1.1 jsonschema: 1.5.0 katex: 0.16.45 - knip: 6.3.1 + knip: 6.4.1 ky: 2.0.0 lamejs: 1.2.1 - lexical: 0.42.0 + lexical: 0.43.0 + loro-crdt: 1.10.8 mermaid: 11.14.0 mime: 4.1.0 mitt: 3.0.1 @@ -181,7 +150,7 @@ catalog: next: 16.2.3 next-themes: 0.4.6 nuqs: 2.8.9 - pinyin-pro: 3.28.0 + pinyin-pro: 3.28.1 postcss: 8.5.9 postcss-js: 5.1.0 qrcode.react: 4.2.0 @@ -204,6 +173,7 @@ catalog: scheduler: 0.27.0 sharp: 0.34.5 shiki: 4.0.2 + socket.io-client: 4.8.3 sortablejs: 1.15.7 std-semver: 1.0.8 storybook: 10.3.5 @@ -212,7 +182,7 @@ catalog: tailwind-merge: 3.5.0 tailwindcss: 4.2.2 tldts: 7.0.28 - tsdown: 0.21.7 + tsdown: 0.21.8 tsx: 4.21.0 typescript: 6.0.2 uglify-js: 3.19.3 @@ -220,11 +190,46 @@ catalog: use-context-selector: 2.0.0 uuid: 13.0.0 vinext: 0.0.41 - vite: npm:@voidzero-dev/vite-plus-core@0.1.16 + vite: npm:@voidzero-dev/vite-plus-core@0.1.18 vite-plugin-inspect: 12.0.0-beta.1 - vite-plus: 0.1.16 - vitest: npm:@voidzero-dev/vite-plus-test@0.1.16 + vite-plus: 0.1.18 + vitest: npm:@voidzero-dev/vite-plus-test@0.1.18 vitest-canvas-mock: 1.1.4 zod: 4.3.6 zundo: 2.3.0 zustand: 5.0.12 +catalogMode: prefer +overrides: + '@lexical/code': npm:lexical-code-no-prism@0.41.0 + '@monaco-editor/loader': 1.7.0 + brace-expansion@>=2.0.0 <2.0.3: 2.0.3 + canvas: ^3.2.2 + dompurify@>=3.1.3 <=3.3.1: 3.3.2 + esbuild@<0.27.2: 0.27.2 + flatted@<=3.4.1: 3.4.2 + glob@>=10.2.0 <10.5.0: 11.1.0 + is-core-module: npm:@nolyfill/is-core-module@^1.0.39 + lodash-es@>=4.0.0 <= 4.17.23: 4.18.0 + lodash@>=4.0.0 <= 4.17.23: 4.18.0 + picomatch@<2.3.2: 2.3.2 + picomatch@>=4.0.0 <4.0.4: 4.0.4 + rollup@>=4.0.0 <4.59.0: 4.59.0 + safe-buffer: ^5.2.1 + safer-buffer: npm:@nolyfill/safer-buffer@^1.0.44 + side-channel: npm:@nolyfill/side-channel@^1.0.44 + smol-toml@<1.6.1: 1.6.1 + solid-js: 1.9.11 + string-width: ~8.2.0 + svgo@>=3.0.0 <3.3.3: 3.3.3 + tar@<=7.5.10: 7.5.11 + undici@>=7.0.0 <7.24.0: 7.24.0 + vite: npm:@voidzero-dev/vite-plus-core@0.1.18 + vitest: npm:@voidzero-dev/vite-plus-test@0.1.18 + yaml@>=2.0.0 <2.8.3: 2.8.3 + yauzl@<3.2.1: 3.2.1 +strictDepBuilds: true +trustPolicy: no-downgrade +trustPolicyExclude: + - chokidar@4.0.3 + - reselect@5.1.1 + - semver@6.3.1 diff --git a/sdks/nodejs-client/package.json b/sdks/nodejs-client/package.json index e058edb0ca..28ebcb89c2 100644 --- a/sdks/nodejs-client/package.json +++ b/sdks/nodejs-client/package.json @@ -48,13 +48,14 @@ "build": "vp pack", "lint": "eslint", "lint:fix": "eslint --fix", - "type-check": "tsc -p tsconfig.json --noEmit", + "type-check": "tsc", "test": "vp test", "test:coverage": "vp test --coverage", "publish:check": "./scripts/publish.sh --dry-run", "publish:npm": "./scripts/publish.sh" }, "devDependencies": { + 
"@dify/tsconfig": "workspace:*", "@eslint/js": "catalog:", "@types/node": "catalog:", "@typescript-eslint/eslint-plugin": "catalog:", diff --git a/sdks/nodejs-client/src/http/sse.test.ts b/sdks/nodejs-client/src/http/sse.test.ts index 70cd11007d..83cde28de3 100644 --- a/sdks/nodejs-client/src/http/sse.test.ts +++ b/sdks/nodejs-client/src/http/sse.test.ts @@ -14,8 +14,8 @@ describe("sse parsing", () => { events.push(event); } expect(events).toHaveLength(1); - expect(events[0].event).toBe("message"); - expect(events[0].data).toEqual({ answer: "hi" }); + expect(events[0]!.event).toBe("message"); + expect(events[0]!.data).toEqual({ answer: "hi" }); }); it("handles multi-line data payloads", async () => { @@ -24,8 +24,8 @@ describe("sse parsing", () => { for await (const event of parseSseStream(stream)) { events.push(event); } - expect(events[0].raw).toBe("line1\nline2"); - expect(events[0].data).toBe("line1\nline2"); + expect(events[0]!.raw).toBe("line1\nline2"); + expect(events[0]!.data).toBe("line1\nline2"); }); it("ignores comments and flushes the last event without a trailing separator", async () => { diff --git a/sdks/nodejs-client/src/index.test.ts b/sdks/nodejs-client/src/index.test.ts index d194680379..8d56b994c4 100644 --- a/sdks/nodejs-client/src/index.test.ts +++ b/sdks/nodejs-client/src/index.test.ts @@ -99,7 +99,7 @@ describe("File uploads", () => { super(); } - _read() {} + override _read() {} append() {} diff --git a/sdks/nodejs-client/tsconfig.json b/sdks/nodejs-client/tsconfig.json index 46055447be..1e55007ed0 100644 --- a/sdks/nodejs-client/tsconfig.json +++ b/sdks/nodejs-client/tsconfig.json @@ -1,18 +1,14 @@ { + "extends": "@dify/tsconfig/node.json", "compilerOptions": { - "target": "ES2022", - "module": "ESNext", - "moduleResolution": "Bundler", + "lib": ["ES2023", "DOM", "DOM.Iterable"], "rootDir": ".", "outDir": "dist", + "noEmit": false, "declaration": true, "declarationMap": true, "sourceMap": true, - "strict": true, - "esModuleInterop": true, - "forceConsistentCasingInFileNames": true, - "skipLibCheck": true, "types": ["node"] }, - "include": ["src/**/*.ts", "tests/**/*.ts"] + "include": ["src/**/*.ts", "tests/**/*.ts", "vite.config.ts"] } diff --git a/vite.config.ts b/vite.config.ts index a34932a4ef..aebcaf8f73 100644 --- a/vite.config.ts +++ b/vite.config.ts @@ -1,5 +1,7 @@ import { defineConfig } from 'vite-plus' export default defineConfig({ - staged: {}, + staged: { + '*': 'eslint --fix --pass-on-unpruned-suppressions', + }, }) diff --git a/web/.env.example b/web/.env.example index 93cbc22fc8..643aba482e 100644 --- a/web/.env.example +++ b/web/.env.example @@ -14,6 +14,8 @@ NEXT_PUBLIC_API_PREFIX=http://localhost:5001/console/api NEXT_PUBLIC_PUBLIC_API_PREFIX=http://localhost:5001/api # When the frontend and backend run on different subdomains, set NEXT_PUBLIC_COOKIE_DOMAIN=1. NEXT_PUBLIC_COOKIE_DOMAIN= +# WebSocket server URL. +NEXT_PUBLIC_SOCKET_URL=ws://localhost:5001 # Dev-only Hono proxy targets. 
# The frontend keeps requesting http://localhost:5001 directly, diff --git a/web/.gitignore b/web/.gitignore index a4ae324795..9de3dc83f9 100644 --- a/web/.gitignore +++ b/web/.gitignore @@ -64,5 +64,3 @@ public/fallback-*.js .vscode/settings.json .vscode/mcp.json - -.eslintcache diff --git a/web/.storybook/utils/form-story-wrapper.tsx b/web/.storybook/utils/form-story-wrapper.tsx index 90349a0325..7503e9905d 100644 --- a/web/.storybook/utils/form-story-wrapper.tsx +++ b/web/.storybook/utils/form-story-wrapper.tsx @@ -47,7 +47,7 @@ export const FormStoryWrapper = ({ {children(form)}